WorldWideScience

Sample records for carlo determined self-shielded

  1. Uranium self-shielding in fast reactor blankets

    Energy Technology Data Exchange (ETDEWEB)

    Kadiroglu, O.K.; Driscoll, M.J.

    1976-03-01

    The effects of heterogeneity on resonance self-shielding are examined with particular emphasis on the blanket region of the fast breeder reactor and on its dominant reaction--capture in ²³⁸U. The results, however, apply equally well to scattering resonances, to other isotopes (fertile, fissile and structural species) and to other environments, so long as the underlying assumptions of narrow resonance theory apply. The heterogeneous resonance integral is first cast into a modified homogeneous form involving the ratio of coolant-to-fuel fluxes. A generalized correlation (useful in its own right in many other applications) is developed for this ratio, using both integral transport and collision probability theory to infer the form of correlation, and then relying upon Monte Carlo calculations to establish absolute values of the correlation coefficients. It is shown that a simple linear prescription can be developed for the flux ratio as a function of only fuel optical thickness and the fraction of the slowing-down source generated by the coolant. This in turn permitted derivation of a new equivalence theorem relating the heterogeneous self-shielding factor to the homogeneous self-shielding factor at a modified value of the background scattering cross section per absorber nucleus. A simple version of this relation is developed and used to show that heterogeneity has a negligible effect on the calculated blanket breeding ratio in fast reactors.
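
    As a point of reference for the equivalence theorem described above, the block below sketches the conventional narrow-resonance equivalence relation in generic notation; it is an illustrative textbook form, not the modified relation derived in the report.

```latex
% Hedged sketch of the conventional narrow-resonance equivalence relation that a
% theorem of this type generalizes (not the report's exact expression):
%   f_het, f_hom : heterogeneous and homogeneous self-shielding factors
%   sigma_0      : background scattering cross section per absorber nucleus
%   sigma_e      : escape cross section of the fuel lump, with N the absorber
%                  number density and mean chord length \bar{\ell} = 4V/S
f_{\mathrm{het}}(\sigma_0) \;\approx\; f_{\mathrm{hom}}(\sigma_0 + \sigma_e),
\qquad
\sigma_e = \frac{1}{N\,\bar{\ell}} = \frac{S}{4\,V\,N}.
```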

  2. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique

    Energy Technology Data Exchange (ETDEWEB)

    Nasrabadi, M.N. [Department of Nuclear Engineering, Faculty of Modern Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of)], E-mail: mnnasrabadi@ast.ui.ac.ir; Mohammadi, A. [Department of Physics, Payame Noor University (PNU), Kohandej, Isfahan (Iran, Islamic Republic of); Jalali, M. [Isfahan Nuclear Science and Technology Research Institute (NSTRT), Reactor and Accelerators Research and Development School, Atomic Energy Organization of Iran (Iran, Islamic Republic of)

    2009-07-15

    In this paper, bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, the gamma self-shielding coefficient is required. The gamma self-shielding coefficient of unknown samples was estimated both by an experimental method and by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.

  3. REPOSITORY LAYOUT SUPPORTING DESIGN FEATURE #13- WASTE PACKAGE SELF SHIELDING

    Energy Technology Data Exchange (ETDEWEB)

    J. Owen

    1999-04-09

    The objective of this analysis is to develop a repository layout, for Feature No. 13, that will accommodate self-shielding waste packages (WP) with an areal mass loading of 25 metric tons of uranium per acre (MTU/acre). The scope of this analysis includes determination of the number of emplacement drifts, amount of emplacement drift excavation required, and a preliminary layout for illustrative purposes.

  4. MPACT Subgroup Self-Shielding Efficiency Improvements

    Energy Technology Data Exchange (ETDEWEB)

    Stimpson, Shane [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Collins, Benjamin S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Clarno, Kevin T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-31

    Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup in the MOC sweeping time during eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which is then used as the basis for further extensions. The next improvement covered is what is currently termed “Lumped Parameter MOC”. Because the subgroup calculation is a purely fixed-source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow for the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered converged, an additional sweep that tallies the scalar flux is completed. The last improvement investigated is a possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for both self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.

  5. Self-shielding clumps in starburst clusters

    CERN Document Server

    Palouš, Jan; Ehlerová, Soňa; Tenorio-Tagle, Guillermo

    2016-01-01

    Young and massive star clusters above a critical mass form thermally unstable clumps that locally reduce the temperature and pressure of the hot 10$^{7}$~K cluster wind. The matter reinserted by stars, together with mass loaded in interactions with pristine gas and from evaporating circumstellar disks, accumulates in clumps that are ionized by photons produced by massive stars. We discuss whether these clumps may become self-shielded when they reach the central part of the cluster, or even before, during their free fall to the cluster center. Here we explore the importance of the heating efficiency of stellar winds.

  6. Self-shielding Electron Beam Installation for Sterilization

    Institute of Scientific and Technical Information of China (English)

    Linac; Laboratory

    2002-01-01

    China Institute of Atomic Energy (CIAE) has developed a self-shielding electron beam installation for sterilization, such as the treatment of letters contaminated with anthrax germs or spores, which has the least volume and the least

  7. Self-Shielding Of Transmission Lines

    Energy Technology Data Exchange (ETDEWEB)

    Christodoulou, Christos [Univ. of New Mexico, Albuquerque, NM (United States)

    2017-03-01

    The use of shielding to contend with noise or harmful EMI/EMR energy is not a new concept. An inevitable trade that must be made for shielding is physical space and weight. Space was often not as painful a design trade in older, larger systems as it is in today’s smaller systems. Today we are packing an ever-growing amount of functionality into the same or smaller volumes. As systems become smaller and space within systems becomes more restricted, the implementation of shielding becomes more problematic. Often, space that would have been used to design a more mechanically robust component must be used for shielding. As the system gets smaller and space is at more of a premium, the trades start to result in defects, designs with inadequate margin in other performance areas, and designs that are sensitive to manufacturing variability. With these challenges in mind, it would be ideal to maximize attenuation of harmful fields as they inevitably couple onto transmission lines without the use of traditional shielding. Dr. Tom Van Doren proposed a design concept for transmission lines to a class of engineers while visiting New Mexico. This design concept works by maximizing electric (E) and magnetic (H) field containment between operating transmission lines to achieve what he called “Self-Shielding”. By making the geometric centroid of the outgoing current coincident with that of the return current, maximum field containment is achieved. The reciprocal should be true as well, resulting in greater attenuation of incident fields. Figures 1(a)-1(b) are examples of designs where the current centroids are coincident. Coax cables are good examples of transmission lines with co-located centroids, but they demonstrate excellent field attenuation for other reasons and can’t be used to test this design concept. Figure 1(b) is a flex circuit design that demonstrates the implementation of self-shielding versus a standard conductor layout.

  8. Self-shielded electron linear accelerators designed for radiation technologies

    Science.gov (United States)

    Belugin, V. M.; Rozanov, N. E.; Pirozhenko, V. M.

    2009-09-01

    This paper describes self-shielded high-intensity electron linear accelerators designed for radiation technologies. A specific property of these accelerators is that they do not use an external magnetic field; acceleration and focusing of the electron beams are performed by radio-frequency fields in the accelerating structures. The main characteristics of the accelerators are high current and beam power, as well as reliable operation and a long service life. To obtain these characteristics, a number of problems have been solved, including optimization of the accelerator components and the application of a variety of specific means. The paper describes features of the electron beam dynamics, the accelerating structure, and the radio-frequency power supply. Several compact self-shielded accelerators for radiation sterilization and x-ray cargo inspection have been created. The introduced methods made it possible to obtain a high intensity of the electron beam and good performance of the accelerators.

  9. Experimental investigation of resonance self-shielding and the Doppler effect in uranium and tantalum

    Science.gov (United States)

    Byoun, T. Y.; Block, R. C.; Semler, T. T.

    1972-01-01

    A series of average transmission and average self-indication ratio measurements were performed in order to investigate the temperature dependence of the resonance self-shielding effect in the unresolved resonance region of depleted uranium and tantalum. The measurements were carried out at 77 K, 295 K and approximately 1000 K with sample thicknesses varying from approximately 0.1 to 1.0 mean free path. The average resonance parameters as well as the temperature dependence were determined by using an analytical model which directly integrates over the resonance parameter distribution functions.

  10. Sensitivity of computed uranium-238 self-shielding factors to the choice of the unresolved average resonance parameters

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-Cobos, J.L.; de Saussure, G.; Perez, R.B.

    1982-05-01

    The influence of different representations of the unresolved resonances of ²³⁸U on the computed self-shielding factors is examined. It is shown that the evaluated infinitely diluted average capture cross section does not provide sufficient information to determine a unique set of unresolved resonance parameters; different sets of unresolved resonance parameters equally consistent with the evaluated average capture cross section yield significantly different computed self-shielding factors. In the conclusion it is recommended that the resolved resonance description of the evaluated ²³⁸U cross sections be extended to higher energies and that thick sample transmission data and self-indication data be used to improve the evaluation of the unresolved resonance region.

  11. PAPIN: A Fortran-IV program to calculate cross section probability tables, Bondarenko and transmission self-shielding factors for fertile isotopes in the unresolved resonance region

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-Cobos, J.G.

    1981-08-01

    The Fortran IV code PAPIN has been developed to calculate cross section probability tables, Bondarenko self-shielding factors and average self-indication ratios for non-fissile isotopes, below the inelastic threshold, on the basis of the ENDF/B prescriptions for the unresolved resonance region. Monte Carlo methods are utilized to generate ladders of resonance parameters in the unresolved resonance region, from average resonance parameters and their appropriate distribution functions. The neutron cross sections are calculated by the single-level Breit-Wigner (SLBW) formalism, with s-, p- and d-wave contributions. The cross section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross section probability tables. The program PAPIN has been validated through extensive comparisons with several deterministic codes.
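
    As a hedged illustration of the final step described above, the sketch below evaluates a Bondarenko-type self-shielding factor from a toy cross-section probability table using the standard narrow-resonance flux weight. The band structure, numerical values, and function name are illustrative assumptions, not PAPIN's actual data structures or algorithm.

```python
import numpy as np

def bondarenko_factor(p, sigma_x, sigma_t, sigma_0):
    """Bondarenko self-shielding factor for reaction x from a probability table.

    p        : band probabilities (sum to 1)
    sigma_x  : reaction cross section in each band (e.g. capture), barns
    sigma_t  : total cross section in each band, barns
    sigma_0  : background cross section per absorber nucleus, barns

    Uses the standard narrow-resonance flux weight 1/(sigma_t + sigma_0):
        f(sigma_0) = [<sigma_x w> / <w>] / <sigma_x>,  w = 1/(sigma_t + sigma_0)
    """
    p = np.asarray(p, dtype=float)
    sigma_x = np.asarray(sigma_x, dtype=float)
    sigma_t = np.asarray(sigma_t, dtype=float)
    w = 1.0 / (sigma_t + sigma_0)                        # NR flux weight per band
    shielded = np.sum(p * sigma_x * w) / np.sum(p * w)   # flux-weighted effective cross section
    infinite_dilution = np.sum(p * sigma_x)              # unshielded average
    return shielded / infinite_dilution

# Toy three-band table (illustrative numbers only).
p       = [0.70, 0.25, 0.05]
sig_cap = [0.5, 5.0, 200.0]
sig_tot = [12.0, 30.0, 500.0]
for s0 in (10.0, 100.0, 1.0e4):
    print(f"sigma_0 = {s0:8.1f} b  ->  f = {bondarenko_factor(p, sig_cap, sig_tot, s0):.3f}")
```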

  12. Formation Mechanism of Inclusion in Self-Shielded Flux Cored Arc Welds

    Institute of Scientific and Technical Information of China (English)

    YU Ping; LU Xiao-sheng; PAN Chuan; XUE Jin; LI Zheng-bang

    2005-01-01

    The formation mechanism of inclusions in self-shielded flux cored arc welds with different aluminum contents was determined based on thermodynamic equilibrium. Inclusions in the welds were systematically studied by optical microscopy, scanning electron microscopy and image analysis. The results show that the average size and the contamination rate of inclusions in the low-aluminum weld are lower than those in the high-aluminum weld. Large, highly faceted AlN inclusions are more numerous in the high-aluminum weld than in the low-aluminum weld. As a result, the low-temperature impact toughness of the low-aluminum weld is higher than that of the high-aluminum weld. Finally, the thermodynamic analysis agrees with the experimental data.

  13. Self-shielding effect of a single phase liquid xenon detector for direct dark matter search

    CERN Document Server

    Minamino, A; Ashie, Y; Hosaka, J; Ishihara, K; Kobayashi, K; Koshio, Y; Mitsuda, C; Moriyama, S; Nakahata, M; Nakajima, Y; Namba, T; Ogawa, H; Sekiya, H; Shiozawa, M; Suzuki, Y; Takeda, A; Takeuchi, Y; Taki, K; Ueshima, K; Ebizuka, Y; Ota, A; Suzuki, S; Hagiwara, H; Hashimoto, Y; Kamada, S; Kikuchi, M; Kobayashi, N; Nagase, T; Nakamura, S; Tomita, K; Uchida, Y; Fukuda, Y; Sato, T; Nishijima, K; Maruyama, T; Motoki, D; Itow, Y; Kim, Y D; Lee, J I; Moon, S H; Lim, K E; Cravens, J P; Smy, M B

    2009-01-01

    Liquid xenon is a suitable material for a dark matter search. For future large scale experiments, single phase detectors are attractive due to their simple configuration and scalability. However, in order to reduce backgrounds, they need to fully rely on liquid xenon's self-shielding property. A prototype detector was developed at Kamioka Observatory to establish vertex and energy reconstruction methods and to demonstrate the self-shielding power against gamma rays from outside of the detector. Sufficient self-shielding power for future experiments was obtained.

  14. Study on the Processing Method for Resonance Self-shielding Calculations

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    We investigate a new approach to resonance self-shielding calculations, based on a straightforward subgroup method used in association with the method of characteristics. The subgroup method is essentially a subdivision of the cross-section range within the resonance energy range.

  15. How an improved implementation of H2 self-shielding influences the formation of massive stars and black holes

    Science.gov (United States)

    Hartwig, Tilman; Glover, Simon C. O.; Klessen, Ralf S.; Latif, Muhammad A.; Volonteri, Marta

    2015-09-01

    High-redshift quasars at z > 6 have masses up to ~10⁹ M⊙. One of the pathways to their formation includes direct collapse of gas, forming a supermassive star, precursor of the black hole seed. The conditions for direct collapse are more easily achievable in metal-free haloes, where atomic hydrogen cooling operates and molecular hydrogen (H2) formation is inhibited by a strong external ultraviolet (UV) flux. Above a certain value of UV flux (Jcrit), the gas in a halo collapses isothermally at ~10⁴ K and provides the conditions for supermassive star formation. However, H2 can self-shield, reducing the effect of photodissociation. So far, most numerical studies used the local Jeans length to calculate the column densities for self-shielding. We implement an improved method for the determination of column densities in 3D simulations and analyse its effect on the value of Jcrit. This new method captures the gas geometry and velocity field and enables us to properly determine the direction-dependent self-shielding factor of H2 against photodissociating radiation. We find a value of Jcrit that is a factor of 2 smaller than with the Jeans approach (~2000 J21 versus ~4000 J21). The main reason for this difference is the strong directional dependence of the H2 column density. With this lower value of Jcrit, the number of haloes exposed to a flux > Jcrit is larger by more than an order of magnitude compared to previous studies. This may translate into a similar enhancement in the predicted number density of black hole seeds.
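
    The abstract describes averaging a direction-dependent shielding factor over sight-lines from each cell; the sketch below illustrates that idea in minimal form. The ray set, the fixed-step column-density integration, and the placeholder shield_factor() function are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def h2_column_density(n_h2, cell_size, start, direction, n_steps=256):
    """Integrate the H2 number density along one ray through a 3D grid.

    n_h2      : 3D array of H2 number density [cm^-3]
    cell_size : cell width [cm]
    start     : starting cell index (ix, iy, iz)
    direction : unit vector of the sight-line
    Simple fixed-step sampling; a production code would trace exact cell crossings.
    """
    pos = np.asarray(start, dtype=float) + 0.5
    column = 0.0
    for _ in range(n_steps):
        idx = tuple(np.floor(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, n_h2.shape)):
            break                                  # ray has left the box
        column += n_h2[idx] * cell_size
        pos += direction
    return column

def shield_factor(column):
    """Placeholder self-shielding function f(N_H2); hypothetical power-law form."""
    return min(1.0, (column / 1.0e14) ** -0.75) if column > 1.0e14 else 1.0

def mean_shielding(n_h2, cell_size, cell, directions):
    """Average the direction-dependent shielding factor over a set of sight-lines."""
    factors = [shield_factor(h2_column_density(n_h2, cell_size, cell, d))
               for d in directions]
    return np.mean(factors)

# Toy example: uniform density cube and six coordinate-axis sight-lines.
n_h2 = np.full((32, 32, 32), 1.0e3)               # cm^-3
dirs = [np.array(v, dtype=float) for v in
        [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
print(mean_shielding(n_h2, cell_size=3.0e17, cell=(16, 16, 16), directions=dirs))
```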

  16. Determining MTF of digital detector system with Monte Carlo simulation

    Science.gov (United States)

    Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee

    2005-04-01

    We have designed a detector based on a-Se (amorphous selenium) and simulated it with the Monte Carlo method. We will apply cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we simulated a 139 um pixel pitch and used a simulated X-ray tube spectrum.

  17. GCR Transport in the Brain: Assessment of Self-Shielding, Columnar Damage, and Nuclear Reactions on Cell Inactivation Rates

    Science.gov (United States)

    Shavers, M. R.; Atwell, W.; Cucinotta, F. A.; Badhwar, G. D. (Technical Monitor)

    1999-01-01

    Cell killing from GCR, including patterns of cell killing from single particle tracks, can provide useful information on expected differences between proton and HZE tracks and clinical experience with photon irradiation. To model effects on cells in the brain, it is important that transport models accurately describe changes in the GCR due to interactions in the cranium and proximate tissues. We describe calculations of the attenuated GCR particle fluxes at three dose-points in the brain and the associated patterns of cell killing using biophysical models. The effects of brain self-shielding and the bone-tissue interface of the skull in modulating the GCR environment are considered. For each brain dose-point, the mass distribution in the surrounding 4π solid angle is characterized using the CAM model to trace 512 rays. The CAM model describes the self-shielding by converting the tissue distribution to mass-equivalent aluminum, and nominal values of spacecraft shielding are considered. Particle transport is performed with the proton, neutron, and heavy-ion transport code HZETRN with the nuclear fragmentation model QMSFRG. The distribution of cells killed along the path of individual GCR ions is modeled using in vitro cell inactivation data for cells with varying sensitivity. Monte Carlo simulations of arrays of inactivated cells are considered for protons and heavy ions and used to describe the absolute number of cell killing events of various magnitudes in the brain from the GCR. Included are simulations of the positions of inactivated cells from stopping heavy ions and from nuclear stars produced by high-energy ions, most importantly protons and neutrons.

  18. Instantaneous GNSS attitude determination: A Monte Carlo sampling approach

    Science.gov (United States)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-04-01

    A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.

  19. How an improved implementation of H$_2$ self-shielding influences the formation of massive stars and black holes

    CERN Document Server

    Hartwig, Tilman; Klessen, Ralf S; Latif, Muhammad A; Volonteri, Marta

    2015-01-01

    The highest redshift quasars at z>6 have mass estimates of about a billion M$_\odot$. One of the pathways to their formation includes direct collapse of gas, forming a supermassive star ($\sim 10^5\,\mathrm{M}_\odot$) precursor of the black hole seed. The conditions for direct collapse are more easily achievable in metal-free haloes, where atomic hydrogen cooling operates and molecular hydrogen (H$_2$) formation is inhibited by a strong external UV flux. Above a certain value of UV flux ($J_{\rm crit}$), the gas in a halo collapses isothermally at $\sim10^4$ K and provides the conditions for supermassive star formation. However, H$_2$ can self-shield and the effect of photodissociation is reduced. So far, most numerical studies used the local Jeans length to calculate the column densities for self-shielding. We implement an improved method for the determination of column densities in 3D simulations and analyse its effect on the value of $J_{\rm crit}$. This new method captures the gas geometry and velocity fie...

  20. Calculation of resonance self-shielding for ²³⁵U from 0 to 2250 eV

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Larson, N.M.; Derrien, H. [Oak Ridge National Lab., TN (United States); Santos, G.R. [Cidade Univ., Rio de Janeiro (Brazil). Inst. de Engenharia Nuclear

    1998-08-01

    Over the years, the evaluated ²³⁵U cross sections in the resolved energy range have been extensively revised. A major accomplishment was the first evaluation released to the ENDF/B-VI library. In that evaluation, the low energy range bound was lowered to 10⁻⁵ eV, and the upper limit raised to 2,250 eV. Several high-resolution measurements in conjunction with the Bayesian computer code SAMMY were used to perform the analysis of the ²³⁵U resonance parameters. SAMMY uses the Reich-Moore formalism, which is adequate for representing neutron cross sections of fissile isotopes, and a generalized least-squares (Bayes) technique for determining the energy-dependence of the neutron cross sections. Recently a re-evaluation of the ²³⁵U cross section in the resolved resonance region was completed. This evaluation has undergone integral tests in various laboratories throughout the USA and abroad. The evaluation has been accepted for inclusion in ENDF/B-VI release 5. The intent of this work is to present results of calculations of self-shielded fission rates carried out with these resonance parameters and to compare those fission rates with experimental data. Results of this comparison study provide an assessment of the resonance parameters with respect to the calculation of self-shielded group cross sections.

  1. Automatic welding technologies for long-distance pipelines by use of all-position self-shielded flux cored wires

    Directory of Open Access Journals (Sweden)

    Zeng Huilin

    2014-10-01

    In order to realize the automatic welding of pipes in a complex operation environment, an automatic welding system has been developed that uses all-position self-shielded flux cored wires, chosen for their advantages: all-position weldability, good detachability, arc stability, little incomplete fusion, and no need for shielding gas or protection against wind when the wind speed is < 8 m/s. The system consists of a welding carrier, a guide rail, an auto-control system, a welding power source, a wire feeder, and so on. Welding experiments with this system were performed on X-80 pipeline steel to determine proper welding parameters. The welding technique comprises root welding, filling welding and cover welding, and their parameters were obtained from experimental analysis. On this basis, mechanical property tests were carried out on the welded joints. Results show that this system can improve the continuity and stability of the whole welding process, and that the welded joints' inherent quality, appearance and mechanical performance all meet the welding criteria for X-80 pipeline steel; with no need for windbreak fences, the overall welding cost is sharply reduced. Proposals are also presented for further research and development of these self-shielded flux cored wires.

  2. SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-06

    The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights from the resonance self-shielded cross section table as a function of background cross section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) narrow resonance approximation, (b) pointwise flux calculations for homogeneous media; and (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are to be used in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronic simulator MPACT, for which the primary resonance self-shielding method is the subgroup method.

  3. Calculation of thermal neutron self-shielding correction factors for aqueous bulk sample prompt gamma neutron activation analysis using the MCNP code

    Energy Technology Data Exchange (ETDEWEB)

    Nasrabadi, M.N. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)], E-mail: mnnasri@kashanu.ac.ir; Jalali, M. [Isfahan Nuclear Science and Technology Research Institute, Atomic Energy organization of Iran (Iran, Islamic Republic of); Mohammadi, A. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)

    2007-10-15

    In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF₃ detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required.

  4. Continuous Energy, Multi-Dimensional Transport Calculations for Problem Dependent Resonance Self-Shielding

    Energy Technology Data Exchange (ETDEWEB)

    T. Downar

    2009-03-31

    The overall objective of this work is to eliminate the approximations used in current resonance treatments by developing continuous-energy, multi-dimensional transport calculations for problem-dependent self-shielding. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system.

  5. Rotor position sensing in brushless ac motors with self-shielding magnets using linear Hall sensors

    Science.gov (United States)

    Zhu, Z. Q.; Shi, Y. F.; Howe, D.

    2006-04-01

    This paper investigates the use of low cost linear Hall sensors for rotor position sensing in brushless ac motors equipped with self-shielding magnets, addresses practical issues, such as the influence of magnetic and mechanical tolerances, temperature variations, and the armature reaction field, and describes the performance which is achieved.

  6. Characterization and dosimetry of a practical X-ray alternative to self-shielded gamma irradiators

    Science.gov (United States)

    Mehta, Kishor; Parker, Andrew

    2011-01-01

    The Insect Pest Control Laboratory of the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture recently purchased an X-ray irradiator as part of their programme to develop the sterile insect technique (SIT). It is a self-contained type with a maximum X-ray beam energy of 150 keV using a newly developed 4π X-ray tube to provide a very uniform dose to the product. This paper describes the results of our characterization study, which includes determination of dose rate in the centre of a canister as well as establishing absorbed dose distribution in the canister. The irradiation geometry consists of five canisters rotating around an X-ray tube, the volume of each canister being 3.5 l. The dose rate at the maximum allowed power of the tube (about 6.75 kW) in the centre of a canister filled with insects (or a simulated product) is about 14 Gy min⁻¹. The dose uniformity ratio is about 1.3. The dose rate was measured using a Farmer type 0.18 cm³ ionization chamber calibrated at the relevant low photon energies. Routine absorbed dose measurement and absorbed dose mapping can be performed using a Gafchromic® film dosimetry system. The radiation response of Gafchromic film is almost independent of X-ray energy in the range 100-150 keV, but is very sensitive to the surrounding material with which it is in immediate contact. It is important, therefore, to ensure that all absorbed dose measurements are performed under identical conditions to those used for the calibration of the dosimetry system. Our study indicates that this X-ray irradiator provides a practical alternative to self-shielded gamma irradiators for SIT programmes. Food and Agriculture Organization/International Atomic Energy Agency.

  7. Multi-Determinant Wave-functions in Quantum Monte Carlo

    CERN Document Server

    Morales, M A; Clark, B K; Kim, J; Scuseria, G; 10.1021/ct3003404

    2013-01-01

    Quantum Monte Carlo (QMC) methods have received considerable attention over the last decades due to their great promise for providing a direct solution to the many-body Schrödinger equation in electronic systems. Thanks to their low scaling with the number of particles, QMC methods present a compelling competitive alternative for the accurate study of large molecular systems and solid state calculations. In spite of such promise, the method has not permeated the quantum chemistry community broadly, mainly because of the fixed-node error, which can be large and whose control is difficult. In this Perspective, we present a systematic application of large scale multi-determinant expansions in QMC, and report on its impressive performance with first row dimers and the 55 molecules of the G1 test set. We demonstrate the potential of this strategy for systematically reducing the fixed-node error in the wave function and for achieving chemical accuracy in energy predictions. When compared to traditional quantum chemistr...

  8. MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, T.; Sternat, M.; Charlton, W.

    2011-05-08

    MONTEBURNS is a Monte Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is not passed to the depletion calculation in ORIGEN or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs was performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single assembly, infinite lattice configuration. This model was burned for a 25.5 day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram quantity and k-effective histograms would produce normally distributed results since they were produced from a Monte Carlo routine, but some of the results do not. The standard deviation at each burnup step was consistent between fission product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that round-off error in the reaction rate MCNP tally produced a set of repeated results with only slight variation. Statistical analyses were performed using the χ² test against a normal distribution for several isotopes and the k-effective results. While the isotopes failed to reject the null hypothesis of being normally distributed, the χ² statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high accuracy solution, MCNP cell material quantities of less than 100 grams and larger kcode parameters are needed to minimize uncertainty propagation and round-off effects.
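
    As a hedged sketch of the post-processing described above, the code below histograms repeated k-effective results with Scott's rule for the bin width and applies a chi-squared goodness-of-fit test against a fitted normal distribution. The synthetic data and binning details are assumptions, not the MONTEBURNS outputs.

```python
import numpy as np
from scipy import stats

def scott_bin_width(x):
    """Scott's rule for histogram bin width: h = 3.49 * sigma * n^(-1/3)."""
    x = np.asarray(x, dtype=float)
    return 3.49 * x.std(ddof=1) * x.size ** (-1.0 / 3.0)

def chi2_normality(x):
    """Chi-squared goodness-of-fit of sample x against a fitted normal distribution."""
    x = np.asarray(x, dtype=float)
    h = scott_bin_width(x)
    edges = np.arange(x.min(), x.max() + h, h)
    observed, edges = np.histogram(x, bins=edges)
    mu, sigma = x.mean(), x.std(ddof=1)
    cdf = stats.norm.cdf(edges, loc=mu, scale=sigma)
    expected = x.size * np.diff(cdf)          # expected counts under N(mu, sigma)
    dof = max(len(observed) - 3, 1)           # bins - 1 - 2 fitted parameters
    chi2 = np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-12))
    p_value = stats.chi2.sf(chi2, dof)
    return chi2, dof, p_value

# Synthetic stand-in for repeated k-effective results from independent depletion runs.
rng = np.random.default_rng(1)
keff = rng.normal(1.002, 0.0004, size=50)
print(chi2_normality(keff))
```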

  9. Radiologic assessment of a self-shield with boron-containing water for a compact medical cyclotron

    OpenAIRE

    Horitsugi, Genki; Fujibuchi, Toshioh; Yamaguchi, Ichiro; Eto, Akihisa; Iwamoto, Yasuo; Hashimoto, Hiromi; Hamada, Seiki; Obara, Satoshi; Watanabe, Hiroshi; Hatazawa, Jun

    2012-01-01

    The cyclotron at our hospital has a self-shield of boron-containing water. The amount of induced radioactivity in the boron-containing water shield of a compact medical cyclotron has not yet been reported. In this study, we measured the photon and neutron dose rates outside the self-shield during cyclotron operation. We estimated the induced radioactivities of the boron-containing water used for the self-shield and then measured them. We estimated the activation of concrete outside the self-s...

  10. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    During the past few years, there have been several studies on portfolio management. One of the primary concerns in any stock market is to detect the risk associated with various assets. One of the recognized methods for measuring, forecasting, and managing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk, which uses standard statistical techniques and has been used increasingly in other fields. The present study measured the value at risk of 26 companies from the chemical industry in the Tehran Stock Exchange over the period 2009-2011 using Monte Carlo simulation with a 95% confidence level. The variable used in the present study is the daily return resulting from daily stock price changes. Moreover, the optimal investment weight was determined for each selected stock using a hybrid Markowitz and Winker model. The results showed that the maximum one-day loss would not exceed 1,259,432 Rials at the 95% confidence level.
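
    A hedged sketch of the core calculation described above: simulate daily returns and read the 95% one-day value at risk from the loss distribution. The normal-return assumption, the parameter values, and the portfolio size are illustrative, not the study's data or its Markowitz-Winker weighting step.

```python
import numpy as np

def monte_carlo_var(mu, sigma, portfolio_value, confidence=0.95, n_sims=100_000, seed=0):
    """One-day Value at Risk by Monte Carlo simulation of daily returns.

    mu, sigma       : mean and standard deviation of the daily return
    portfolio_value : current portfolio value (e.g. in Rials)
    Returns the loss that is not exceeded with probability `confidence`.
    """
    rng = np.random.default_rng(seed)
    simulated_returns = rng.normal(mu, sigma, n_sims)   # assumed normal daily returns
    losses = -portfolio_value * simulated_returns       # positive numbers are losses
    return np.quantile(losses, confidence)

# Illustrative numbers only.
print(f"95% one-day VaR: {monte_carlo_var(0.0005, 0.018, 10_000_000):,.0f}")
```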

  11. Effects of CeF3 on properties of self-shielded flux cored wire

    Institute of Scientific and Technical Information of China (English)

    Yu Ping; Tian Zhiling; Pan Chuan; Xue Jin

    2006-01-01

    Effects of CeF₃ on the properties of self-shielded flux cored wire, including the welding process, inclusions in the weld metal and mechanical properties, are systematically studied. Welding smoke and spatter are reduced with the addition of CeF₃. The main non-metallic inclusions in the weld metal are AlN and Al₂O₃. CeF₃ can refine non-metallic inclusions and reduce the amount of large-size inclusions, which is attributed to the inclusion floating behavior during the solidification of the weld metal. The low temperature impact toughness is improved by adding a suitable amount of CeF₃ to the flux.

  12. Radiologic assessment of a self-shield with boron-containing water for a compact medical cyclotron.

    Science.gov (United States)

    Horitsugi, Genki; Fujibuchi, Toshioh; Yamaguchi, Ichiro; Eto, Akihisa; Iwamoto, Yasuo; Hashimoto, Hiromi; Hamada, Seiki; Obara, Satoshi; Watanabe, Hiroshi; Hatazawa, Jun

    2012-07-01

    The cyclotron at our hospital has a self-shield of boron-containing water. The amount of induced radioactivity in the boron-containing water shield of a compact medical cyclotron has not yet been reported. In this study, we measured the photon and neutron dose rates outside the self-shield during cyclotron operation. We estimated the induced radioactivities of the boron-containing water used for the self-shield and then measured them. We estimated the activation of concrete outside the self-shield in the cyclotron laboratory. The thermal neutron flux during cyclotron operation was estimated to be 4.72 × 10² cm⁻² s⁻¹, and the activation of concrete in a cyclotron laboratory was about three orders of magnitude lower than the clearance level of RS-G-1.7 (IAEA). The activity concentration of the boron-containing water did not exceed the concentration limit for radioactive isotopes in drainage in Japan and the exemption level for Basic Safety Standards. Consequently, the boron-containing water is treatable as non-radioactive waste. Neutrons were effectively shielded by the self-shield during cyclotron operation.

  13. Self-shielding effects in neutron spectra measurements for neutron capture therapy by means of activation foils.

    Science.gov (United States)

    Pytel, Krzysztof; Józefowicz, Krystyna; Pytel, Beatrycze; Koziel, Alina

    2004-01-01

    The design and optimisation of a neutron beam for neutron capture therapy (NCT) is accompanied by neutron spectrum measurements at the target position. The method of activation detectors was applied for the neutron spectrum measurements. In the epithermal energy region, the resonance structure of the activation cross sections results in strong self-shielding effects. The neutron self-shielding correction factor was calculated using a simple analytical model of a single absorption event. This procedure was applied to individual cross sections from the pointwise ENDF/B-VI library, and the new corrected activation cross sections were introduced into a spectrum unfolding algorithm. The method has been verified experimentally both for isotropic and for parallel neutron beams. Two sets of diluted and non-diluted activation foils covered with cadmium were irradiated in the neutron field. The comparison of the activation rates of diluted and non-diluted foils demonstrated the correctness of the applied self-shielding model.

  14. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    Science.gov (United States)

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results and the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities.
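
    The abstract does not reproduce the analytical expressions; for orientation, a generic textbook self-absorption factor for a uniformly active slab is shown below. It is not necessarily the form used by the authors.

```latex
% Generic self-absorption factor for gamma rays emitted uniformly throughout a slab
% of thickness t with linear attenuation coefficient mu, viewed along the normal
% (a textbook form given for orientation, not necessarily the authors' model):
F_{\mathrm{self}} = \frac{1}{t}\int_{0}^{t} e^{-\mu x}\,\mathrm{d}x
                  = \frac{1 - e^{-\mu t}}{\mu t}.
```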

  15. Abrasive resistance of arc sprayed carbonitride alloying self-shielded coatings

    Science.gov (United States)

    Deng, Yu; Yu, Shengfu; Xing, Shule; Huang, Linbing; Lu, Yan

    2011-10-01

    Wear-resistant coatings were prepared on the surface of Q235 low-carbon steel plate by HVAS with a carbonitride-alloying self-shielded flux-cored wire. The microstructure and properties of the coatings were examined using a scanning electron microscope, a microhardness tester and a wear tester, and the formation, wear resistance and wear mechanism of the coatings were studied. The results show that the coatings are well formed, with a homogeneous and compact microstructure. The coatings have good hardness: the average microhardness reaches 520 HV0.1, and the highest value is up to about 560 HV0.1. As a result, the coatings have good abrasive wear performance and adhesion strength.

  16. Line Overlap and Self-Shielding of Molecular Hydrogen in Galaxies

    Science.gov (United States)

    Gnedin, Nickolay Y.; Draine, Bruce T.

    2014-11-01

    The effect of line overlap in the Lyman and Werner bands, often ignored in galactic studies of the atomic-to-molecular transition, greatly enhances molecular hydrogen self-shielding in low metallicity environments and dominates over dust shielding for metallicities below about 10% solar. We implement that effect in cosmological hydrodynamics simulations with an empirical model, calibrated against the observational data, and provide fitting formulae for the molecular hydrogen fraction as a function of gas density on various spatial scales and in environments with varied dust abundance and interstellar radiation field. We find that line overlap, while important for detailed radiative transfer in the Lyman and Werner bands, has only a minor effect on star formation on galactic scales, which, to a much larger degree, is regulated by stellar feedback.

  17. Photodissociation of H2 in Protogalaxies: Modeling Self-Shielding in 3D Simulations

    CERN Document Server

    Wolcott-Green, Jemma; Bryan, Greg L

    2011-01-01

    The ability of primordial gas to cool in proto-galactic haloes exposed to Lyman-Werner (LW) radiation is critically dependent on the self-shielding of H_2. We perform radiative transfer calculations of LW line photons, post-processing outputs from three-dimensional adaptive mesh refinement (AMR) simulations of haloes with T_vir > 10^4 K at redshifts around z=10. We calculate the optically thick photodissociation rate numerically, including the effects of density, temperature, and velocity gradients in the gas, as well as line overlap and shielding of H_2 by HI, over a large number of sight-lines. In low-density regions (n10^4 K haloes by an order of magnitude; this increases the number of such haloes in which supermassive (approx. M=10^5 M_sun) black holes may have formed.

  18. Line Overlap and Self-Shielding of Molecular Hydrogen in Galaxies

    CERN Document Server

    Gnedin, Nickolay Y

    2014-01-01

    The effect of line overlap in the Lyman and Werner bands, often ignored in galactic studies of the atomic-to-molecular transition, greatly enhances molecular hydrogen self-shielding in low metallicity environments, and dominates over dust shielding for metallicities below about 10% solar. We implement that effect in cosmological hydrodynamics simulations with an empirical model, calibrated against the observational data, and provide fitting formulae for the molecular hydrogen fraction as a function of gas density on various spatial scales and in environments with varied dust abundance and interstellar radiation field. We find that line overlap, while important for detailed radiative transfer in the Lyman and Werner bands, has only a minor effect on star formation on galactic scales, which, to a much larger degree, is regulated by stellar feedback.

  19. RESONANCE SELF-SHIELDING EFFECT IN UNCERTAINTY QUANTIFICATION OF FISSION REACTOR NEUTRONICS PARAMETERS

    Directory of Open Access Journals (Sweden)

    GO CHIBA

    2014-06-01

    In order to properly quantify fission reactor neutronics parameter uncertainties, covariance data and sensitivity profiles must be used consistently. In the present paper, we establish two consistent methodologies for uncertainty quantification: a self-shielded cross section-based consistent methodology and an infinitely-diluted cross section-based consistent methodology. With these methodologies and the covariance data for uranium-238 given in JENDL-3.3, we quantify uncertainties of the infinite neutron multiplication factors of light water reactor and fast reactor fuel cells. While an inconsistent methodology gives results that depend on the energy group structure of the neutron flux and on the neutron-nuclide reaction cross section representation, both consistent methodologies give results free of such dependences.
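
    For reference, the first-order propagation rule on which uncertainty quantification of this kind is usually built is sketched below in generic notation; it is not quoted from the paper.

```latex
% First-order ("sandwich") propagation of cross-section covariances to a neutronics
% parameter R (e.g. the infinite multiplication factor); generic notation:
%   S : relative sensitivity profile, S_i = (sigma_i / R)(\partial R / \partial sigma_i)
%   M : relative covariance matrix of the cross-section data
\left(\frac{\Delta R}{R}\right)^{2} = \mathbf{S}^{\mathsf{T}}\,\mathbf{M}\,\mathbf{S}.
```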

  20. 78 FR 53817 - Culturally Significant Objects Imported for Exhibition Determinations: “Venetian Glass by Carlo...

    Science.gov (United States)

    2013-08-30

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF STATE Culturally Significant Objects Imported for Exhibition Determinations: ``Venetian Glass by Carlo Scarpa: The... April 15, 2003, I hereby determine that the objects to be included in the exhibition ``Venetian Glass...

  1. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    NARCIS (Netherlands)

    Filippi, C.; Assaraf, R.; Moroni, S.

    2016-01-01

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the oc

  2. Determination of scatter fractions of some materials by experimental studies and Monte Carlo calculations

    CERN Document Server

    Meric, N; Bor, D

    1999-01-01

    Scatter fractions have been determined experimentally for lucite, polyethylene, polypropylene, aluminium and copper of varying thicknesses using a polyenergetic broad X-ray beam of 67 kVp. Simulation of the experiment has been carried out by the Monte Carlo technique under the same input conditions. Comparison of the measured and predicted data with each other and with the previously reported values has been given. The Monte Carlo calculations have also been carried out for water, bakelite and bone to examine the dependence of scatter fraction on the density of the scatterer.
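
    For orientation, the usual definition of the scatter fraction is recalled below; the authors' exact estimator may differ.

```latex
% Standard definition of the scatter fraction, with S the scattered and P the
% primary (unscattered) contribution to the detected signal:
\mathrm{SF} = \frac{S}{S + P}.
```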

  3. CO Self-Shielding as a Mechanism to Make 16O-Enriched Solids in the Solar Nebula

    Directory of Open Access Journals (Sweden)

    Joseph A. Nuth, III

    2014-05-01

    Photochemical self-shielding of CO has been proposed as a mechanism to produce solids observed in the modern, ¹⁶O-depleted solar system. This is distinct from the relatively ¹⁶O-enriched composition of the solar nebula, as demonstrated by the oxygen isotopic composition of the contemporary sun. While supporting the idea that self-shielding can produce local enhancements in ¹⁶O-depleted solids, we argue that complementary enhancements of ¹⁶O-enriched solids can also be produced via C¹⁶O-based, Fischer-Tropsch type (FTT) catalytic processes that could produce much of the carbonaceous feedstock incorporated into accreting planetesimals. Local enhancements could explain observed ¹⁶O enrichment in calcium-aluminum-rich inclusions (CAIs), such as those from the meteorite Isheyevo (CH/CHb), as well as in chondrules from the meteorite Acfer 214 (CH3). CO self-shielding results in an overall increase in the ¹⁷O and ¹⁸O content of nebular solids only to the extent that there is a net loss of C¹⁶O from the solar nebula. In contrast, if C¹⁶O reacts in the nebula to produce organics and water, then the net effect of the self-shielding process will be negligible for the average oxygen isotopic content of nebular solids and other mechanisms must be sought to produce the observed dichotomy between oxygen in the Sun and that in meteorites and the terrestrial planets. This illustrates that the formation and metamorphism of rocks and organics need to be considered in tandem rather than as isolated reaction networks.

  4. The new solid target system at UNAM in a self-shielded 11 MeV cyclotron

    Energy Technology Data Exchange (ETDEWEB)

    Zarate-Morales, A.; Gaspar-Carcamo, R. E.; Lopez-Rodriguez, V.; Flores-Moreno, A.; Trejo-Ballado, F.; Avila-Rodriguez, Miguel A. [Unidad PET, Facultad de Medicina, Universidad Nacional Autonoma de Mexico, 04510 , D.F. Mexico (Mexico)

    2012-12-19

    A dual beam line (BL) self-shielded RDS 111 cyclotron for radionuclide production was installed at the School of Medicine of the National Autonomous University of Mexico in 2001. One of the BLs was upgraded to an Eclipse HP (Siemens) in 2008, and the second BL was recently upgraded (June 2011) to the same version with the option of irradiating solid targets for the production of metallic radioisotopes.

  5. Impact of photon cross section systematic uncertainties on Monte Carlo-determined depth-dose distributions

    CERN Document Server

    Aguirre, Eder; David, Mariano; deAlmeida, Carlos E

    2016-01-01

    This work studies the impact of systematic uncertainties associated with interaction cross sections on depth-dose curves determined by Monte Carlo simulations. The corresponding sensitivity factors are quantified by changing the cross sections by a given amount and determining the variation in the dose. The influence of the total cross sections for all particles, for photons only, and for Compton scattering only is addressed. The PENELOPE code was used in all simulations. It was found that photon cross section sensitivity factors depend on depth. In addition, they are positive and negative for depths below and above an equilibrium depth, respectively. At this depth, the sensitivity factors are null. The equilibrium depths found in this work agree very well with the mean free path of the corresponding incident photon energy. Using the sensitivity factors reported here, it is possible to estimate the impact of photon cross section uncertainties on the uncertainty of Monte Carlo-determined depth-dose curves.
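
    A hedged sketch of the kind of sensitivity factor described above, in generic notation (the paper's precise definition and normalization may differ):

```latex
% Generic relative sensitivity factor of the depth dose D(z) to a cross section sigma:
s(z) = \frac{\Delta D(z)/D(z)}{\Delta \sigma/\sigma}.
% Per the abstract, s(z) is positive below and negative above the equilibrium depth,
% where it vanishes.
```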

  6. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    Energy Technology Data Exchange (ETDEWEB)

    Goluoglu, Sedat [ORNL; Bekar, Kursat B [ORNL; Wiarda, Dorothea [ORNL

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  7. Application of Monte Carlo Technique for Determining Maneuvering Loads from Statistical Information on Airplane Motions

    Science.gov (United States)

    Hamer, Harold A.; Mayer, John P.; Huston, Wilber B.

    1961-01-01

    Results of a statistical analysis of horizontal-tail loads on a fighter airplane are presented. The data were obtained from a number of operational training missions with flight at altitudes up to about 50,000 feet and at Mach numbers up to 1.22. The analysis was performed to determine the feasibility of calculating horizontal-tail load from data on the flight conditions and airplane motions. In the analysis the calculated loads are compared with the measured loads for the different types of missions performed. The loads were calculated by two methods: a direct approach and a Monte Carlo technique. The procedures used and some of the problems associated with the data analysis are discussed. In the direct method, a time history of tail load is calculated from time-history measurements of the flight quantities. For the Monte Carlo method, the frequencies of occurrence of tail loads of given magnitudes are derived from statistical information on the flight quantities. The Monte Carlo method could be useful for extending loads information for design of prospective airplanes. The results indicate that the accuracy of loads, regardless of the method used for calculation, is largely dependent on the knowledge of the pertinent airplane aerodynamic characteristics and center-of-gravity location. In addition, reliable Monte Carlo results require an adequate sample of statistical data and a knowledge of the more important statistical dependencies between the various flight conditions and airplane motions.

  8. CO Self-Shielding as a Mechanism to Make O-16 Enriched Solids in the Solar Nebula

    Science.gov (United States)

    Nuth, Joseph A. III; Johnson, Natasha M.; Hill, Hugh G. M.

    2014-01-01

    Photochemical self-shielding of CO has been proposed as a mechanism to produce solids observed in the modern, O-16 depleted solar system. This is distinct from the relatively O-16 enriched composition of the solar nebula, as demonstrated by the oxygen isotopic composition of the contemporary sun. While supporting the idea that self-shielding can produce local enhancements in O-16 depleted solids, we argue that complementary enhancements of O-16 enriched solids can also be produced via CO-16 based, Fischer-Tropsch type (FTT) catalytic processes that could produce much of the carbonaceous feedstock incorporated into accreting planetesimals. Local enhancements could explain observed O-16 enrichment in calcium-aluminum-rich inclusions (CAIs), such as those from the meteorite, Isheyevo (CH/CHb), as well as in chondrules from the meteorite, Acfer 214 (CH3). CO self-shielding results in an overall increase in the O-17 and O-18 content of nebular solids only to the extent that there is a net loss of CO-16 from the solar nebula. In contrast, if CO-16 reacts in the nebula to produce organics and water then the net effect of the self-shielding process will be negligible for the average oxygen isotopic content of nebular solids and other mechanisms must be sought to produce the observed dichotomy between oxygen in the Sun and that in meteorites and the terrestrial planets. This illustrates that the formation and metamorphism of rocks and organics need to be considered in tandem rather than as isolated reaction networks.

  9. Determination of in-process limits during parenteral solution manufacturing using Monte Carlo Simulation.

    Science.gov (United States)

    Kuu, Wei Y; Chilamkurti, Rao

    2003-01-01

    The purpose of this study is to utilize Monte Carlo simulation methodology to determine in-process limits for the parenteral solution manufacturing process. The Monte Carlo simulation predicts the distribution of a dependent variable (such as drug concentration) in a naturally occurring process through random value generation, considering the variability associated with that variable. The propagation of variation in drug concentration from batch to batch is cascading in nature during the following four formulation steps: 1) determination of drug raw material potency (or purity), 2) weighing of drug raw material, 3) measurement of batch volume, and 4) determination of drug concentration in the mix tank. The coefficients of variation for these four steps are denoted as CV1, CV2, CV3, and CV4, respectively. The Monte Carlo simulation was performed for each of the above four cascading steps. The results of the simulation demonstrate that the in-process limits of the drug can be successfully determined using Monte Carlo simulation. Once the specification limits are determined, the Monte Carlo simulation can be used to study the effect of each source of variability on the percentage of batches out of specification limits (OOL) for the in-process testing. Demonstrations were performed using the acceptance criterion of less than 5% OOL batches, with typical values of CV2 and CV3 equal to 0.03% and 0.5%, respectively. The results show that for in-process limits of +/- 1%, the values of CV1 and CV4 should not be greater than 0.1%. These assay requirements appear to be difficult to achieve for a given chemical analytical method. By comparison, for in-process limits of +/- 4%, the requirements are much easier to achieve: the values of CV1 and CV4 should not be greater than 1.38%. In addition, the relationship between the percent OOL and CV1 or CV4 is nonlinear.
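
    A hedged sketch of the cascading variability model described above: each formulation step contributes an independent relative error, and the fraction of simulated batches falling outside the in-process limits is tallied. The independent normal-error model and the numerical values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def percent_out_of_limits(cv1, cv2, cv3, cv4, limit_pct, n_batches=100_000, seed=0):
    """Monte Carlo estimate of the percentage of batches outside +/- limit_pct.

    cv1..cv4 : coefficients of variation (in %) of the four cascading steps:
               raw-material potency assay, weighing, batch-volume measurement,
               and the final in-process concentration assay.
    Assumes independent, normally distributed relative errors (an assumption,
    not the paper's stated model).
    """
    rng = np.random.default_rng(seed)
    relative = np.ones(n_batches)
    for cv in (cv1, cv2, cv3, cv4):
        relative *= rng.normal(1.0, cv / 100.0, n_batches)   # multiplicative errors
    deviation_pct = 100.0 * (relative - 1.0)
    out = np.abs(deviation_pct) > limit_pct
    return 100.0 * out.mean()

# Example: CV2 = 0.03%, CV3 = 0.5%, and assay CVs of 1.38% with +/- 4% limits.
print(percent_out_of_limits(1.38, 0.03, 0.5, 1.38, limit_pct=4.0))
```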

  10. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)

    2013-07-01

    This work shows how the traceability of analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were based on efficiency calibrations performed by Monte Carlo simulation using the DETEFF program.

  11. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    CERN Document Server

    Aver, Erik; Skillman, Evan D

    2010-01-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces...
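
    As a schematic illustration of the approach (not the authors' code or model), a bare-bones Metropolis sampler with a Gaussian prior on a temperature-like nuisance parameter might look as follows; the forward model, data value, and prior width are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: predicted line ratio as a function of an
# abundance-like parameter y and a temperature-like parameter T.
def model(y, T):
    return y * (1.0 + 0.05 * (T - 1.5))

data, sigma = 0.085, 0.003          # synthetic observation and its uncertainty
T_prior_mu, T_prior_sd = 1.5, 0.1   # prior on T, e.g. from auxiliary lines

def log_post(y, T):
    if y <= 0 or T <= 0:
        return -np.inf              # exclude non-physical parameter space
    log_like = -0.5 * ((model(y, T) - data) / sigma) ** 2
    log_prior = -0.5 * ((T - T_prior_mu) / T_prior_sd) ** 2
    return log_like + log_prior

# Metropolis random walk
chain, state = [], np.array([0.08, 1.5])
lp = log_post(*state)
for _ in range(50_000):
    prop = state + rng.normal(0, [0.002, 0.02])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    chain.append(state.copy())

chain = np.array(chain)[10_000:]    # discard burn-in
print("y =", chain[:, 0].mean(), "+/-", chain[:, 0].std())
print("T =", chain[:, 1].mean(), "+/-", chain[:, 1].std())
```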

  12. A method based on Monte Carlo simulation for the determination of the G(E) function.

    Science.gov (United States)

    Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie

    2015-02-01

    The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes an approach based on the Monte Carlo method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding energy deposited in an air sphere in the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that the dose rates calculated using the G(E) function determined by the authors' method agree well with the values obtained by ionisation chamber, with a maximum deviation of 6.31%.
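
    The recipe in the abstract (dose per source photon divided by full-energy-peak counts per source photon, followed by a fit over energy) can be sketched as follows; all numbers are placeholders, not simulation output:

```python
import numpy as np

# Hypothetical per-photon results from a Monte Carlo detector simulation:
# for each incident energy E, the dose in air per source photon and the
# counts in the full-energy peak per source photon.
E = np.array([60, 100, 300, 662, 1173, 1332, 2614], dtype=float)   # keV
dose_per_photon = np.array([1.1e-12, 1.9e-12, 5.6e-12, 1.2e-11,
                            2.1e-11, 2.4e-11, 4.6e-11])            # Gy (illustrative)
peak_counts_per_photon = np.array([0.52, 0.43, 0.21, 0.11, 0.062, 0.055, 0.028])

# G(E): dose contributed per count in the full-energy peak at energy E.
G = dose_per_photon / peak_counts_per_photon

# Fit a smooth function, e.g. a polynomial in log(E), to interpolate G(E).
coeffs = np.polyfit(np.log(E), np.log(G), deg=3)
G_fit = lambda e: np.exp(np.polyval(coeffs, np.log(e)))

# Dose rate from a measured spectrum: sum of peak count rates times G(E).
measured_E = np.array([662.0, 1460.8])          # peaks found in a spectrum (keV)
measured_counts_per_s = np.array([12.0, 3.5])   # net peak count rates (1/s)
dose_rate = np.sum(measured_counts_per_s * G_fit(measured_E))
print(f"Estimated dose rate: {dose_rate:.3e} Gy/s")
```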

  13. Impact of photon cross section uncertainties on Monte Carlo-determined depth-dose distributions.

    Science.gov (United States)

    Aguirre, E; David, M; deAlmeida, C E; Bernal, M A

    2016-09-01

    This work studies the impact of systematic uncertainties associated with interaction cross sections on depth-dose curves determined by Monte Carlo simulations. The corresponding sensitivity factors are quantified by changing a cross section by a given amount and determining the resulting variation in the dose. The influence of total and partial photon cross sections is addressed; partial cross sections for Compton and Rayleigh scattering, the photoelectric effect, and pair production have been accounted for. The PENELOPE code was used in all simulations. It was found that photon cross section sensitivity factors depend on depth. In addition, they are positive and negative for depths below and above an equilibrium depth, respectively; at this depth, the sensitivity factors are null. The equilibrium depths found in this work agree very well with the mean free path of the corresponding incident photon energy. Using the sensitivity factors reported here, it is possible to estimate the impact of photon cross section uncertainties on the uncertainty of Monte Carlo-determined depth-dose curves.
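
    The finite-difference sensitivity recipe, and the sign change near the mean free path, can be illustrated with a toy analytic stand-in for the Monte Carlo depth-dose curve (this is not the PENELOPE setup used in the study):

```python
import numpy as np

# Toy stand-in for a Monte Carlo depth-dose curve: the primary collision
# density mu * exp(-mu * z). In the paper the dose comes from full PENELOPE
# simulations; a closed form keeps the sensitivity recipe transparent here.
def depth_dose(z, mu):
    return mu * np.exp(-mu * z)

mu = 0.07                        # total attenuation coefficient (1/cm), illustrative
delta = 0.01                     # 1% perturbation of the cross section
z = np.linspace(0.1, 40, 12)     # depths (cm)

D0 = depth_dose(z, mu)
D1 = depth_dose(z, mu * (1 + delta))

# Sensitivity factor: relative change in dose per relative change in cross section.
S = (D1 - D0) / D0 / delta

for zi, si in zip(z, S):
    print(f"z = {zi:5.1f} cm   S = {si:+.2f}")
# S changes sign near z = 1/mu (the photon mean free path), as reported above.
```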

  14. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    Science.gov (United States)

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
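
    For reference, the traditional rank-1 Sherman-Morrison update that the abstract compares against (not the low-rank algorithm of the paper) can be sketched in a few lines of NumPy for a single-electron move:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
A = rng.normal(size=(N, N))      # Slater matrix: orbital j evaluated at electron i
Ainv = np.linalg.inv(A)

# Single-electron move: row k changes, i.e. A' = A + e_k u^T with u the row difference.
k = 2
new_row = rng.normal(size=N)
u = new_row - A[k]

# Matrix determinant lemma: det(A')/det(A) = 1 + u^T A^{-1} e_k
R = 1.0 + u @ Ainv[:, k]

# Sherman-Morrison update of the inverse in O(N^2) operations.
Ainv_new = Ainv - np.outer(Ainv[:, k], u @ Ainv) / R

# Check against a brute-force O(N^3) recomputation.
A_new = A.copy()
A_new[k] = new_row
assert np.isclose(R, np.linalg.det(A_new) / np.linalg.det(A))
assert np.allclose(Ainv_new, np.linalg.inv(A_new))
print("determinant ratio R =", R)
```

    In a QMC code the determinant ratio R is what enters the Metropolis acceptance test, and the inverse is only updated when the move is accepted.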

  15. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo.

    Science.gov (United States)

    Filippi, Claudia; Assaraf, Roland; Moroni, Saverio

    2016-05-21

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  17. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    Science.gov (United States)

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
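
    The idea of scanning wide parameter ranges with repeated Monte Carlo draws and ranking them by how well they explain a short interval series can be sketched as follows (hypothetical intervals; a lognormal recurrence model is chosen only for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical short paleoseismic record: 6 inter-event times (years).
intervals = np.array([110.0, 240.0, 95.0, 310.0, 180.0, 140.0])

# Draw candidate recurrence parameters from wide ranges and score each
# candidate by the likelihood of the observed intervals.
n_trials = 100_000
mean_rec = rng.uniform(50, 600, n_trials)       # candidate mean recurrence (yr)
cov      = rng.uniform(0.2, 1.5, n_trials)      # candidate coefficient of variation

# Lognormal PDF parameterized by its mean and coefficient of variation.
sigma2 = np.log(1.0 + cov**2)
mu = np.log(mean_rec) - 0.5 * sigma2
loglike = stats.lognorm.logpdf(intervals,
                               s=np.sqrt(sigma2)[:, None],
                               scale=np.exp(mu)[:, None]).sum(axis=1)

w = np.exp(loglike - loglike.max())             # relative likelihood weights
w /= w.sum()
print("most likely mean recurrence:", mean_rec[np.argmax(loglike)])
print("weighted mean recurrence   :", np.sum(w * mean_rec))
print("weighted COV               :", np.sum(w * cov))
```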

  18. Excited states from quantum Monte Carlo in the basis of Slater determinants

    Energy Technology Data Exchange (ETDEWEB)

    Humeniuk, Alexander; Mitrić, Roland, E-mail: roland.mitric@uni-wuerzburg.de [Institut für Physikalische und Theoretische Chemie, Julius-Maximilians Universität Würzburg, Emil-Fischer-Straße 42, 97074 Würzburg (Germany)

    2014-11-21

    Building on the full configuration interaction quantum Monte Carlo (FCIQMC) algorithm introduced recently by Booth et al. [J. Chem. Phys. 131, 054106 (2009)] to compute the ground state of correlated many-electron systems, an extension to the computation of excited states (exFCIQMC) is presented. The Hilbert space is divided into a large part consisting of pure Slater determinants and a much smaller orthogonal part (the size of which is controlled by a cut-off threshold), from which the lowest eigenstates can be removed efficiently. In this way, the quantum Monte Carlo algorithm is restricted to the orthogonal complement of the lower excited states and projects out the next highest excited state. Starting from the ground state, higher excited states can be found one after the other. The Schrödinger equation in imaginary time is solved by the same population dynamics as in the ground state algorithm with modified probabilities and matrix elements, for which working formulae are provided. As a proof of principle, the method is applied to lithium hydride in the 3-21G basis set and to the helium dimer in the aug-cc-pVDZ basis set. It is shown to give the correct electronic structure for all bond lengths. Much more testing will be required before the applicability of this method to electron correlation problems of interesting size can be assessed.

  19. Radiation Dosimetry of a Self-Shielded Cyclotron

    Institute of Scientific and Technical Information of China (English)

    包宝亮; 何玉林; 李剑波

    2014-01-01

    Objective: To measure the radiation dose around a self-shielded cyclotron and ensure the safety of radiation workers. Methods: The radiation dose was measured with a portable radiation survey instrument in the cyclotron room, in the radiochemistry laboratory, and during disassembly and reinstallation of the target body. Results: The radiation doses in the cyclotron room, in the radiochemistry laboratory, and during disassembly and reinstallation of the target body were within the prescribed safety limits. Conclusion: The self-shielding system, the hot synthesis cell, and the dispensing protection shields of the self-shielded cyclotron provide effective radiation shielding.

  20. Radioactivity determination of sealed pure beta-sources by surface dose measurements and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chang Heon [Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Jung, Seongmoon [Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Choi, Kanghyuk; Son, Kwang-Jae; Lee, Jun Sig [Hanaro Applications Research, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ye, Sung-Joon, E-mail: sye@snu.ac.kr [Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Center for Convergence Research on Robotics, Advance Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of)

    2016-04-21

    This study aims to determine the activity of a sealed pure beta-source by measuring the surface dose rate using an extrapolation chamber. A conversion factor (cGy s⁻¹ Bq⁻¹), which was defined as the ratio of surface dose rate to activity, can be calculated by Monte Carlo simulations of the extrapolation chamber measurement. To validate this hypothesis the certified activities of two standard pure beta-sources of Sr/Y-90 and Si/P-32 were compared with those determined by this method. In addition, a sealed test source of Sr/Y-90 was manufactured by the HANARO reactor group of KAERI (Korea Atomic Energy Research Institute) and used to further validate this method. The measured surface dose rates of the Sr/Y-90 and Si/P-32 standard sources were 4.615×10⁻⁵ cGy s⁻¹ and 2.259×10⁻⁵ cGy s⁻¹, respectively. The calculated conversion factors of the two sources were 1.213×10⁻⁸ cGy s⁻¹ Bq⁻¹ and 1.071×10⁻⁸ cGy s⁻¹ Bq⁻¹, respectively. Therefore, the activity of the standard Sr/Y-90 source was determined to be 3.995 kBq, which was 2.0% less than the certified value (4.077 kBq). For Si/P-32 the determined activity was 2.102 kBq, which was 6.6% larger than the certified activity (1.971 kBq). The activity of the Sr/Y-90 test source was determined to be 4.166 kBq, while the apparent activity reported by KAERI was 5.803 kBq. This large difference might be due to evaporation and diffusion of the source liquid during preparation and uncertainty in the amount of weighed aliquot of source liquid. The overall uncertainty involved in this method was determined to be 7.3%. We demonstrated that the activity of a sealed pure beta-source could be conveniently determined by complementary combination of measuring the surface dose rate and Monte Carlo simulations.

  1. A Monte Carlo approach for determining cluster evaporation rates from concentration measurements

    Science.gov (United States)

    Kupiainen-Määttä, Oona

    2016-11-01

    Evaporation rates of small negatively charged sulfuric acid-ammonia clusters are determined by combining detailed cluster formation simulations with cluster distributions measured in the CLOUD experiment at CERN. The analysis is performed by varying the evaporation rates with Markov chain Monte Carlo (MCMC), running cluster formation simulations with each new set of evaporation rates and comparing the obtained cluster distributions to the measurements. In a second set of simulations, the fragmentation of clusters in the mass spectrometer due to energetic collisions is studied by treating also the fragmentation probabilities as unknown parameters and varying them with MCMC. This second set of simulations results in a better fit to the experimental data, suggesting that a large fraction of the observed HSO4− and HSO4−·H2SO4 signals may result from fragmentation of larger clusters, most importantly the HSO4−·(H2SO4)2 trimer.

  2. Determination of phase equilibria in confined systems by open pore cell Monte Carlo method.

    Science.gov (United States)

    Miyahara, Minoru T; Tanaka, Hideki

    2013-02-28

    We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system.

  3. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)

    2015-09-21

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
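
    The kind of extrapolation referred to above (fitting the order parameter and the rectilinear diameter of the coexistence curve) can be illustrated with made-up coexistence densities; the data below are placeholders, not the paper's results:

```python
import numpy as np

# Hypothetical GEMC coexistence densities for a Lennard-Jones fluid (reduced
# units), used only to illustrate the standard extrapolation to the critical point.
T     = np.array([1.20, 1.22, 1.24, 1.26, 1.28, 1.30])
rho_l = np.array([0.530, 0.515, 0.498, 0.478, 0.455, 0.424])
rho_g = np.array([0.125, 0.136, 0.149, 0.164, 0.183, 0.211])

beta = 0.325  # 3D Ising order-parameter exponent

# (rho_l - rho_g)^(1/beta) is linear in T and vanishes at Tc.
y = (rho_l - rho_g) ** (1.0 / beta)
slope, intercept = np.polyfit(T, y, 1)
Tc = -intercept / slope

# Law of rectilinear diameters: (rho_l + rho_g)/2 = rho_c + A*(Tc - T).
a, b = np.polyfit(T, 0.5 * (rho_l + rho_g), 1)
rho_c = a * Tc + b

print(f"Tc ~ {Tc:.4f}, rho_c ~ {rho_c:.4f}")
```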

  4. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy; Uso de tecnicas Monte Carlo para determinacao de curvas de isodose em braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies with the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis of radiotherapy techniques. Institutes that work with high dose rate applications use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is to use Monte Carlo techniques to develop a computer program - ISODOSE - which allows isodose curves to be determined around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE performed well, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)

  5. Initial experience with an 11 MeV self-shielded medical cyclotron on operation and radiation safety

    Directory of Open Access Journals (Sweden)

    Pant G

    2007-01-01

    A self-shielded medical cyclotron (11 MeV) was commissioned at our center to produce positron emitters, namely 18F, 15O, 13N and 11C, for positron emission tomography (PET) imaging. Presently the cyclotron has been used exclusively for the production of 18F− for 18F-FDG imaging. The operational parameters which influence the yield of 18F− production were monitored. The radiation levels in the cyclotron and radiochemistry laboratory were also monitored to assess the radiation safety status of the facility. The target material, 18O water, is bombarded with the proton beam from the cyclotron to produce the 18F− ion that is used for the synthesis of 18F-FDG. The operational parameters which influence the yield of 18F− were observed during 292 production runs out of a total of more than 400 runs. The radiation dose levels were also measured at various locations in the facility during cyclotron production runs and in the radiochemistry laboratory during 18F-FDG syntheses. It was observed that rinsing the target after delivery increased the number of production runs obtainable from a given target, and also resulted in a better correlation between the duration of bombardment and the end-of-bombardment 18F− activity, the correlation being best with an absolutely clean target after being rebuilt. The radiation levels in the cyclotron and radiochemistry laboratory were observed to be well within the prescribed limits under safe work practices.

  6. Boron film thickness determination to develop a low cost neutron detector using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Priscila; Raele, Marcus P.; Yoriyaz, Helio; Siqueira, Paulo de T.D.; Zahn, Guilherme S.; Genezini, Frederico A., E-mail: fredzini@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Neutron measurement is important for the safety and security of workers at nuclear facilities. Since the neutron is an uncharged particle, its detection requires a converter material that interacts with the neutron and produces a charged particle, which is easy to detect. One of the converter candidates is natural boron, composed of about 20% Boron-10, which captures a low energy neutron, ejecting an energetic alpha particle and a lithium ion. A neutron detector can be developed by applying a boron thin film over a silicon photodiode, which is sensitive to charged particles. For this reason it is important to determine the optimal film thickness. We used an empirical solution for the boron film thickness evaluation; furthermore, using the Monte Carlo method (MCNP6), we developed a model to simulate the propagation of alpha particles through the detector. Our goal was to ensure the best production and transfer of alpha particles to the silicon region. The film thickness ranged between 0 and 5.5 μm, and the neutron energy was also varied. The optimal thickness value will be used to develop a prototype of a low cost neutron detector. (author)

  7. Self-shielding phenomenon modelling in multigroup transport code Apollo-2; Modelisation du phenomene d'autoprotection dans le code de transport multigroupe Apollo 2

    Energy Technology Data Exchange (ETDEWEB)

    Coste-Delclaux, M

    2006-03-15

    This document describes the improvements carried out for modelling the self-shielding phenomenon in the multigroup transport code APOLLO2. They concern the space and energy treatment of the slowing-down equation, the setting up of quadrature formulas to calculate reaction rates, the setting up of a method that directly treats a resonant mixture, and the development of a subgroup method. We validate these improvements either in an elementary or in a global way. As a result, we obtain more accurate multigroup reaction rates and are able to carry out a reference self-shielding calculation on a very fine multigroup mesh. Finally, we draw conclusions and give some prospects for the remaining work. (author)

  8. Uncertainty Determination for Aeroheating in Uranus and Saturn Probe Entries by the Monte Carlo Method

    Science.gov (United States)

    Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.

    2013-01-01

    The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as large as 18%. The catalytic wall model can contribute over a 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamic models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver was dependent on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While
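
    The basic Monte Carlo uncertainty and correlation workflow described above can be sketched generically as follows; the parameter list, their standard deviations, and the surrogate heat-flux response are all hypothetical stand-ins for the full DPLR runs:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Hypothetical input uncertainties (relative, 1-sigma) for a handful of model
# parameters; the real study perturbs CFD inputs such as transport
# coefficients, recombination rates, and relaxation times.
names  = ["diffusion", "H2_recomb_rate", "thermal_cond", "relaxation_time"]
sigmas = np.array([0.10, 0.30, 0.05, 0.20])
x = rng.normal(0.0, sigmas, size=(n, len(sigmas)))   # fractional perturbations

# Placeholder surrogate for the heat-flux response to the perturbations
# (in practice each sample would be a separate CFD run).
def heat_flux(p):
    return 100.0 * (1.0 + 0.12 * p[:, 0] + 0.05 * p[:, 1]
                    + 0.02 * p[:, 2] - 0.01 * p[:, 3])

q = heat_flux(x)
print(f"2-sigma heating uncertainty: {2 * q.std() / q.mean() * 100:.1f}%")

# Correlation coefficients identify which inputs drive the output spread.
for name, col in zip(names, x.T):
    r = np.corrcoef(col, q)[0, 1]
    print(f"{name:16s} r = {r:+.2f}")
```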

  9. Direct determination of intermolecular structure of ethanol adsorbed in micropores using X-ray diffraction and reverse Monte Carlo analysis

    OpenAIRE

    Iiyama, Taku; Hagi, Kousuke; Urushibara, Takafumi; Ozeki, Sumio

    2009-01-01

    The intermolecular structure of C(2)H(5)OH molecules confined in the slit-shaped graphitic micropores of an activated carbon fiber was investigated by in situ X-ray diffraction (XRD) measurement and reverse Monte Carlo (RMC) analysis. The pseudo-3-dimensional intermolecular structure of C(2)H(5)OH adsorbed in the micropores was determined by applying the RMC analysis to the XRD data, assuming a simple slit-shaped space composed of double graphene sheets. The results were consistent with conventional Mont...

  10. Universality of the Ising and the S=1 model on Archimedean lattices: A Monte Carlo determination

    Science.gov (United States)

    Malakis, A.; Gulpinar, G.; Karaaslan, Y.; Papakonstantinou, T.; Aslan, G.

    2012-03-01

    The Ising models S=1/2 and S=1 are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provides, with high accuracy, all critical exponents which, as expected, are the same as the well-known exact values of the 2D Ising model. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model also obeys the 2D Ising model critical exponents. As a result, we find that recent Monte Carlo simulations and attempts to define an effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.

  11. TRIPOLI-3: a neutron/photon Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees

    2001-07-01

    The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma-ray problems in three-dimensional geometries by using the Monte Carlo method. This code is devoted to both shielding and criticality problems. Its most important features for solving the particle transport equation are the fine treatment of the physical phenomena and the sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)

  12. Determination of cascade summing correction for HPGe spectrometers by the Monte Carlo method

    CERN Document Server

    Takeda, M N

    2001-01-01

    The present work describes the methodology developed for calculating the cascade summing correction to be applied to experimental efficiencies obtained by means of HPGe spectrometers. The detection efficiencies have been numerically calculated by the Monte Carlo method for point sources. Another Monte Carlo algorithm has been developed to follow the path through the decay scheme, from the initial state populated by the decay of the precursor radionuclide down to the ground state of the daughter radionuclide. Each step in the decay scheme is selected by random numbers taking into account the transition probabilities and internal transition coefficients. The selected transitions are tagged according to the type of interaction that occurred, giving rise to total or partial energy absorption events inside the detector crystal. Once the final state has been reached, the selected transitions are examined to identify each pair of transitions that occurred simultaneously. With this procedure it was possible to calculate...

  13. A fast, primary-interaction Monte Carlo methodology for determination of total efficiency of cylindrical scintillation gamma-ray detectors

    Directory of Open Access Journals (Sweden)

    Rehman Shakeel U.

    2009-01-01

    A primary-interaction based Monte Carlo algorithm has been developed for determination of the total efficiency of cylindrical scintillation γ-ray detectors. This methodology has been implemented in a Matlab based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energies. For off-axis point sources, the comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy-dependent detector efficiency has been found to approach an asymptotic profile when either the thickness or the diameter of the scintillator is increased while keeping the other fixed. The variation of the energy-dependent total efficiency of a 3″×3″ NaI(Tl) scintillator with axial distance has been studied using the BPIMC code. About two orders of magnitude change in detector efficiency has been observed for a zero to 50 cm variation in the axial distance. For small values of axial separation, a similarly large variation in total efficiency has also been observed for 137Cs as well as 60Co sources when increasing the axial offset from zero to 50 cm.
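
    The primary-interaction idea (average, over isotropically sampled directions, the probability 1 − exp(−μL) of interacting along the chord L through the crystal) can be sketched for the simplest case of a bare cylinder and an on-axis point source; this illustrates the general approach, not the BPIMC algorithm itself, and the attenuation coefficient below is only an indicative value:

```python
import numpy as np

def total_efficiency(R, H, d, mu, n=2_000_000, seed=5):
    """Primary-interaction MC estimate of the total efficiency of a bare
    cylindrical detector (radius R, height H, cm) for an isotropic point
    source on the axis, a distance d from the front face. mu is the total
    linear attenuation coefficient (1/cm) at the photon energy."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic directions over 4*pi
    sin_t = np.sqrt(1.0 - cos_t**2)

    L = np.zeros(n)                            # chord length through the crystal
    fwd = cos_t > 0.0                          # only forward directions can hit
    t_front = d / cos_t[fwd]                   # distance to the front-face plane
    t_back = (d + H) / cos_t[fwd]              # distance to the back-face plane
    with np.errstate(divide="ignore"):
        t_side = np.where(sin_t[fwd] > 0, R / sin_t[fwd], np.inf)
    hits = t_front * sin_t[fwd] <= R           # ray enters through the front face
    L_fwd = np.where(hits, np.minimum(t_back, t_side) - t_front, 0.0)
    L[fwd] = np.clip(L_fwd, 0.0, None)

    # Probability of at least one interaction along the chord, averaged over 4*pi.
    return np.mean(1.0 - np.exp(-mu * L))

# Example: 3"x3" NaI(Tl) crystal (R = 3.81 cm, H = 7.62 cm), source 10 cm away,
# mu ~ 0.28 1/cm near 662 keV (illustrative value only).
print(total_efficiency(R=3.81, H=7.62, d=10.0, mu=0.28))
```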

  14. Will Organic Synthesis Within Icy Grains or on Dust Surfaces in the Primitive Solar Nebula Completely Erase the Effects of Photochemical Self Shielding?

    Science.gov (United States)

    Nuth, Joseph A., III; Johnson, Natasha M.

    2012-01-01

    There are at least 3 separate photochemical self-shielding models with different degrees of commonality. All of these models rely on the selective absorption of (12)C(16)O dissociative photons as the radiation source penetrates through the gas allowing the production of reactive O-17 and O-18 atoms within a specific volume. Each model also assumes that the undissociated C(16)O is stable and does not participate in the chemistry of nebular dust grains. In what follows we will argue that this last, very important assumption is simply not true despite the very high energy of the CO molecular bond.

  15. Determination of surface dose rate of indigenous (32)P patch brachytherapy source by experimental and Monte Carlo methods.

    Science.gov (United States)

    Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N

    2015-09-01

    The Isotope Production and Application Division of Bhabha Atomic Research Centre developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by Monte Carlo and the extrapolation chamber method is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and the Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by the three independent methods are in good agreement with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications.

  16. Monte-Carlo simulation for determining SNR and DQE of linear array plastic scintillating fiber

    Institute of Scientific and Technical Information of China (English)

    Mohammad Mehdi NASSERI; MA Qing-Li; YIN Ze-Jie; WU Xiao-Yi

    2004-01-01

    The fundamental characteristics of plastic scintillating fiber (PSF) over a wide energy range of electromagnetic radiation (X and γ rays) have been studied to evaluate the possibility of using PSF as an imaging detector for industrial purposes. The Monte Carlo simulation program (GEANT4.5.1, 2003) was used to generate the data. In order to evaluate the image quality of the detector, the fiber array was irradiated at various energies and fluxes. The signal-to-noise ratio (SNR) as well as the detector quantum efficiency (DQE) were obtained.

  17. Verification of effectiveness of borated water shield for a cyclotron type self-shielded; Verificacao da eficacia da blindagem de agua borada construida para um acelerador ciclotron do tipo autoblindado

    Energy Technology Data Exchange (ETDEWEB)

    Videira, Heber S.; Burkhardt, Guilherme M.; Santos, Ronielly S., E-mail: heber@cyclopet.com.br [Cyclopet Radiofarmacos Ltda., Curitiba, PR (Brazil); Passaro, Bruno M.; Gonzalez, Julia A.; Santos, Josefina; Guimaraes, Maria I.C.C. [Universidade de Sao Paulo (HCFMRP/USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Hospital das Clinicas; Lenzi, Marcelo K. [Universidade Federal do Parana (UFPR), Curitiba (Brazil). Programa de Pos-Graduacao em Engenharia Quimica

    2013-04-15

    The technological advances in positron emission tomography (PET) in conventional clinical imaging have led to a steady increase in the number of cyclotrons worldwide. Most of these cyclotrons are being used to produce 18F-FDG, both for in-house use and for distribution to other centers that have PET. For radiological safety purposes, cyclotrons intended for medical use are classified as category I or category II, i.e., self-shielded or non-shielded (bunker). Therefore, the aim of this work is to verify the effectiveness of the borated water shielding built for a self-shielded PETtrace 860 cyclotron. The borated water mixtures were prepared in accordance with the manufacturer's specifications, and the radiometric survey performed in the vicinity of the cyclotron self-shielding, under the conditions established by the manufacturer, showed that radiation levels were below the limits. (author)

  18. Protein fold determination from sparse distance restraints: The restrained generic protein direct Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Debe, D.A.; Carlson, M.J.; Chan, S.I; Goddard, W.A. III [California Inst. of Tech., Pasadena, CA (United States); Sadanobu, Jiro [Teijin Limited, Iwakuni, Yamaguchi (Japan). Polymer and Materials Research Labs.

    1999-04-15

    The authors present the generate-and-select hierarchy for tertiary protein structure prediction. The foundation of this hierarchy is the Restrained Generic Protein (RGP) Direct Monte Carlo method. The RGP method is a highly efficient off-lattice residue buildup procedure that can quickly generate the complete set of topologies that satisfy a very small number of interresidue distance restraints. For three restraints uniformly distributed in a 72-residue protein, the authors demonstrate that the size of this set is ~10^4. The RGP method can generate this set of structures in less than 1 h using a Silicon Graphics R10000 single processor workstation. Following structure generation, a simple criterion that measures the burial of hydrophobic and hydrophilic residues can reliably select a reduced set of ~10^2 structures that contains the native topology. A minimization of the structures in the reduced set typically ranks the native topology in the five lowest energy folds. Thus, using this hierarchical approach, the authors suggest that de novo prediction of moderate resolution globular protein structure can be achieved in just a few hours on a single processor workstation.

  19. Evaluation of Jacobian determinants by Monte Carlo methods - Application to the quasiclassical approximation in molecular scattering.

    Science.gov (United States)

    La Budde, R. A.

    1972-01-01

    Sampling techniques have been used previously to evaluate Jacobian determinants that occur in classical mechanical descriptions of molecular scattering. These determinants also occur in the quasiclassical approximation. A new technique is described which can be used to evaluate Jacobian determinants which occur in either description. This method is expected to be valuable in the study of reactive scattering using the quasiclassical approximation.

  20. EFFECT OF Fe2O3 ON WELDING TECHNOLOGY AND MECHANICAL PROPERTIES OF WELD METAL DEPOSITED BY SELF-SHIELDED FLUX CORED WIRE

    Institute of Scientific and Technical Information of China (English)

    Yu Ping; Pan Chuan; Xue Jin; Li Zhengbang

    2005-01-01

    Five experimental self-shielded flux cored wires were fabricated with different amounts of Fe2O3 in the flux. The effects of Fe2O3 on the welding technology and mechanical properties of the weld metals deposited by these wires are studied. The results show that with increasing Fe2O3 in the mix, the melting point of the pretreated mix increases. LiBaF3 and BaFe12O19, which are very low in inherent moisture, are formed after the pretreatment. The mechanical properties of the weld metals were evaluated. The low temperature notch toughness of the weld metals increases linearly with the Fe2O3 content in the flux, due to the balance between Fe2O3 and residual Al in the weld metal. The optimum Fe2O3 content in the flux is 2.5%-3.5%.

  1. Equilibrium Degree Determination Model Based on the Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    朱颖; 程纪品

    2012-01-01

    The Monte Carlo method, also known as the statistical simulation method, is a very important class of numerical methods guided by probability and statistics theory, in which random numbers (or, more commonly, pseudo-random numbers) are used to solve a wide range of computational problems. This paper attempts to build an equilibrium degree model for police service platforms and to solve it with the Monte Carlo method; the experimental results can satisfy general application requirements.

  2. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    Science.gov (United States)

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
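
    Although the article works in R, the simulation-based logic translates directly; a minimal sketch in Python (with a hypothetical effect size, residual spread, and significance level) estimates the power of the slope test for several candidate sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def empirical_power(n, beta1=0.3, sigma=1.0, alpha=0.05, n_sim=2000):
    """Monte Carlo power of the slope test in y = b0 + b1*x + e for sample size n."""
    crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        y = 1.0 + beta1 * x + rng.normal(scale=sigma, size=n)
        X = np.column_stack([np.ones(n), x])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        s2 = resid @ resid / (n - 2)
        se_b1 = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
        if abs(coef[1] / se_b1) > crit:
            hits += 1
    return hits / n_sim

for n in (30, 60, 90, 120):
    print(f"n = {n:4d}   empirical power = {empirical_power(n):.2f}")
```

    The smallest n whose empirical power clears the desired threshold (say 0.80) is then taken as the required sample size.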

  3. Accurate determination of the Gibbs energy of Cu-Zr melts using the thermodynamic integration method in Monte Carlo simulations

    Science.gov (United States)

    Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.

    2011-08-01

    The design of multicomponent alloys based on specific thermo-physical properties, determined experimentally or predicted from theoretical calculations, is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties, were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and the simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard sphere mixtures.
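
    The thermodynamic integration step itself reduces to a one-dimensional quadrature of ensemble averages collected at fixed coupling parameters; a schematic sketch with made-up ⟨dU/dλ⟩ values (not data from the study):

```python
import numpy as np

# Thermodynamic integration: the free-energy difference between a reference
# system (lambda = 0) and the target system (lambda = 1) is the integral of
# the ensemble average <dU/dlambda> over the coupling parameter lambda.
lam = np.linspace(0.0, 1.0, 11)

# Hypothetical <dU/dlambda> values (kJ/mol), one per independent MC run.
dU_dlam = np.array([-48.2, -45.1, -41.7, -38.0, -34.5,
                    -30.9, -27.6, -24.2, -21.1, -18.3, -15.8])

# Trapezoidal quadrature of the integrand.
dG = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam))
print(f"Delta G from thermodynamic integration: {dG:.2f} kJ/mol")
```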

  4. Determination of the in-plane anisotropy field in hexagonal systems via rotational magnetization: Theoretical model and Monte Carlo simulations

    Institute of Scientific and Technical Information of China (English)

    WANG AiMin; PANG Hua

    2009-01-01

    The magnetic anisotropy field in thin films with in-plane uniaxial anisotropy can be deduced from the VSM magnetization curves measured in magnetic fields of constant magnitudes. This offers a new possibility of applying rotational magnetization curves to determine the first- and second-order anisotropy constants in these films. In this paper we report a theoretical derivation of the rotational magnetization curve in a hexagonal crystal system with easy-plane anisotropy based on the principle of the minimum total energy. This model is applied to calculate and analyze the rotational magnetization process for magnetic spherical particles with hexagonal easy-plane anisotropy when rotating the external magnetic field in the basal plane. The theoretical calculations are consistent with Monte Carlo simulation results. It is found that to well reproduce experimental curves, the effect of coercive force on the magnetization reversal process should be fully considered when the intensity of the external field is much weaker than that of the anisotropy field. Our research proves that the rotational magnetization curve from VSM measurement provides an effective access to analyze the in-plane anisotropy constant K3 in hexagonal compounds, and the suitable experimental condition to measure K3 is met when the ratio of the magnitude of the external field to that of the anisotropy field is around 0.2.

  6. Estimation of trace gas fluxes with objectively determined basis functions using reversible-jump Markov chain Monte Carlo

    Science.gov (United States)

    Lunt, Mark F.; Rigby, Matt; Ganesan, Anita L.; Manning, Alistair J.

    2016-09-01

    Atmospheric trace gas inversions often attempt to attribute fluxes to a high-dimensional grid using observations. To make this problem computationally feasible, and to reduce the degree of under-determination, some form of dimension reduction is usually performed. Here, we present an objective method for reducing the spatial dimension of the parameter space in atmospheric trace gas inversions. In addition to solving for a set of unknowns that govern emissions of a trace gas, we set out a framework that considers the number of unknowns to itself be an unknown. We rely on the well-established reversible-jump Markov chain Monte Carlo algorithm to use the data to determine the dimension of the parameter space. This framework provides a single-step process that solves for both the resolution of the inversion grid and the magnitude of fluxes from this grid. Therefore, the uncertainty that surrounds the choice of aggregation is accounted for in the posterior parameter distribution. The posterior distribution of this transdimensional Markov chain provides a naturally smoothed solution, formed from an ensemble of coarser partitions of the spatial domain. We describe the form of the reversible-jump algorithm and how it may be applied to trace gas inversions. We build the system into a hierarchical Bayesian framework in which other unknown factors, such as the magnitude of the model uncertainty, can also be explored. A pseudo-data example is used to show the usefulness of this approach when compared to a subjectively chosen partitioning of a spatial domain. An inversion using real data is also shown to illustrate the scales at which the data allow for methane emissions over north-west Europe to be resolved.

  7. COMPARISONS OF THE FINITE-ELEMENT-WITH-DISCONTIGUOUS-SUPPORT METHOD TO CONTINUOUS-ENERGY MONTE CARLO FOR PIN-CELL PROBLEMS

    Energy Technology Data Exchange (ETDEWEB)

    A. T. Till; M. Hanuš; J. Lou; J. E. Morel; M. L. Adams

    2016-05-01

    The standard multigroup (MG) method for energy discretization of the transport equation can be sensitive to approximations in the weighting spectrum chosen for cross-section averaging. As a result, MG often inaccurately treats important phenomena such as self-shielding variations across a material. From a finite-element viewpoint, MG uses a single fixed basis function (the pre-selected spectrum) within each group, with no mechanism to adapt to local solution behavior. In this work, we introduce the Finite-Element-with-Discontiguous-Support (FEDS) method, whose only approximation with respect to energy is that the angular flux is a linear combination of unknowns multiplied by basis functions. A basis function is non-zero only in the discontiguous set of energy intervals associated with its energy element. Discontiguous energy elements are generalizations of bands and are determined by minimizing a norm of the difference between snapshot spectra and their averages over the energy elements. We begin by presenting the theory of the FEDS method. We then compare to continuous-energy Monte Carlo for one-dimensional slab and two-dimensional pin-cell problems. We find FEDS to be accurate and efficient at producing quantities of interest such as reaction rates and eigenvalues. Results show that FEDS converges at a rate that is approximately first-order in the number of energy elements and that FEDS is less sensitive to the weighting spectrum than standard MG.
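
    To make the element-construction idea concrete, here is a rough sketch in which fine energy groups are clustered by the shape of hypothetical snapshot spectra using plain k-means; this only illustrates the stated minimization criterion and is not the actual FEDS implementation:

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical snapshot spectra: rows = snapshots (e.g. different spatial
# regions or dilutions), columns = fine energy groups.
n_snap, n_fine, n_elements = 4, 500, 16
phi = np.abs(rng.normal(1.0, 0.3, size=(n_snap, n_fine)))

# Describe each fine group by its spectral "shape" across snapshots.
features = (phi / phi.mean(axis=0, keepdims=True)).T      # shape (n_fine, n_snap)

# Plain k-means: grouping fine groups around element-average spectra minimizes
# the squared-norm difference between snapshots and their element averages.
centers = features[rng.choice(n_fine, n_elements, replace=False)].copy()
for _ in range(50):
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = d2.argmin(axis=1)
    for k in range(n_elements):
        if np.any(labels == k):
            centers[k] = features[labels == k].mean(axis=0)

# Each energy element is a (generally discontiguous) set of fine groups.
print("fine groups assigned to element 0:", np.flatnonzero(labels == 0)[:15], "...")
```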

  8. Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, S.H.; Vergnaud, T.; Nimal, J.C. [Commissariat a l'Energie Atomique, Gif-sur-Yvette (France). Lab. d'Etudes de Protection et de Probabilite

    1998-03-01

    Neutron transport calculations need an accurate treatment of cross sections. Two methods (multigroup and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries well adapted to describing the neutron interaction in the unresolved resonance energy range. Its advantage is that it properly represents the neutron cross-section fluctuations within a given energy group, allowing correct calculation of the self-shielding effect. This PT cross-section representation is also suitable for simulation of neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at the Commissariat a l'Energie Atomique, and several validation calculations are presented. The PT method is shown to be valid not only in the unresolved resonance range but also in all the other energy ranges.
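
    A toy sketch of the probability-table idea, building equal-probability bands from hypothetical pointwise cross-section data within one group and sampling from them as a Monte Carlo code would (the band structure and numbers are illustrative, not TRIPOLI-3's):

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical pointwise total cross section within one energy group of the
# unresolved resonance range (arbitrary fluctuating values, barns).
sigma_points = (10.0 + 8.0 * np.abs(np.sin(np.linspace(0, 40, 5000))) ** 4
                + rng.normal(0, 0.3, 5000))

# Build a probability table: n_bands bands of equal probability, each carrying
# the band-averaged cross section.
n_bands = 8
edges = np.quantile(sigma_points, np.linspace(0, 1, n_bands + 1))
band_prob = np.full(n_bands, 1.0 / n_bands)
band_sigma = np.array([
    sigma_points[(sigma_points >= lo) & (sigma_points <= hi)].mean()
    for lo, hi in zip(edges[:-1], edges[1:])
])

# During transport, a band is sampled each time a neutron enters the group,
# preserving the cross-section fluctuations (and hence the self-shielding).
band = rng.choice(n_bands, p=band_prob)
print("band cross sections:", np.round(band_sigma, 2))
print("sampled sigma_t    :", round(band_sigma[band], 2))
```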

  9. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions.

    Science.gov (United States)

    Yan, Yangqian; Blume, D

    2016-06-10

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b_4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b_4, our b_4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  10. Determination of gossypol content in cottonseeds by near infrared spectroscopy based on Monte Carlo uninformative variable elimination and nonlinear calibration methods.

    Science.gov (United States)

    Li, Cheng; Zhao, Tianlun; Li, Cong; Mei, Lei; Yu, En; Dong, Yating; Chen, Jinhong; Zhu, Shuijin

    2017-04-15

    Near infrared (NIR) spectroscopy combined with Monte Carlo uninformative variable elimination (MC-UVE) and nonlinear calibration methods was investigated for determining the gossypol content in cottonseeds. The reference method was high performance liquid chromatography coupled to an ultraviolet detector (HPLC-UV). MC-UVE was employed to extract the effective information from the full NIR spectra. Nonlinear calibration methods were applied to establish the models and were compared with the linear method. The optimal model for gossypol content was obtained by MC-UVE-WLS-SVM, with a root mean square error of prediction (RMSEP) of 0.0422, coefficient of determination (R(2)) of 0.9331, and residual predictive deviation (RPD) of 3.8374, which was accurate and robust enough to substitute for traditional gossypol measurements. The nonlinear methods performed more reliably than the linear method during the development of calibration models. Furthermore, MC-UVE could provide better and simpler calibration models than the full spectra.
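
    A schematic MC-UVE sketch on synthetic data, using PLS regression from scikit-learn as the underlying calibration model (the data, subset sizes, and retention threshold are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(19)

# Synthetic stand-in for NIR data: 80 samples x 200 wavelengths, where only
# the first 30 wavelengths actually carry information about the response.
n_samples, n_wavelengths, n_informative = 80, 200, 30
X = rng.normal(size=(n_samples, n_wavelengths))
true_b = np.zeros(n_wavelengths)
true_b[:n_informative] = rng.normal(size=n_informative)
y = X @ true_b + rng.normal(scale=0.5, size=n_samples)

# Monte Carlo UVE: repeatedly fit PLS on random subsets of the samples and
# record the regression coefficient of every wavelength.
n_runs, subset = 500, 60
coefs = np.empty((n_runs, n_wavelengths))
for i in range(n_runs):
    idx = rng.choice(n_samples, subset, replace=False)
    pls = PLSRegression(n_components=5)
    pls.fit(X[idx], y[idx])
    coefs[i] = np.ravel(pls.coef_)

# Stability (reliability) of each wavelength: mean/std of its coefficient.
stability = np.abs(coefs.mean(axis=0) / coefs.std(axis=0))
keep = stability > np.quantile(stability, 0.5)   # e.g. keep the top half
print("wavelengths retained:", keep.sum())
print("informative ones retained:", keep[:n_informative].sum(), "of", n_informative)
```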

  11. The ground state tunneling splitting and the zero point energy of malonaldehyde: a quantum Monte Carlo determination.

    Science.gov (United States)

    Viel, Alexandra; Coutinho-Neto, Maurício D; Manthe, Uwe

    2007-01-14

    Quantum dynamics calculations of the ground state tunneling splitting and of the zero point energy of malonaldehyde on the full-dimensional potential energy surface proposed by Yagi et al. [J. Chem. Phys. 115, 10647 (2001)] are reported. The exact diffusion Monte Carlo and the projection operator imaginary time spectral evolution methods are used to compute accurate benchmark results for this 21-dimensional ab initio potential energy surface. A tunneling splitting of 25.7 ± 0.3 cm⁻¹ is obtained, and the vibrational ground state energy is found to be 15122 ± 4 cm⁻¹. Isotopic substitution of the tunneling hydrogen reduces the tunneling splitting to 3.21 ± 0.09 cm⁻¹ and the vibrational ground state energy to 14385 ± 2 cm⁻¹. The computed tunneling splittings are slightly higher than the experimental values, as expected from the potential energy surface, which slightly underestimates the barrier height, and they are slightly lower than the results from instanton theory obtained using the same potential energy surface.

  12. EURADOS action for the determination of americium in the skull by in vivo measurements and Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.

    2013-07-01

    From the WG7 Internal Dosimetry working group of EURADOS (European Radiation Dosimetry Group e.V.), coordinated by CIEMAT, an international action for the in vivo measurement of americium has been conducted on three skull phantoms, using germanium detectors for gamma spectrometry and simulation by Monte Carlo methods. The action was organized as two separate exercises, with the participation of institutions from Europe, America and Asia. Other similar actions preceded this intercomparison of in vivo measurement and Monte Carlo modelling. The preliminary results and associated findings are presented in this work. The whole-body counting laboratory (CRC) of the internal personal dosimetry service (DPI) of CIEMAT was one of the participants in the in vivo measurement exercise, and the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise. (Author)

  13. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    Science.gov (United States)

    Dzuba, Sergei A.

    2016-08-01

    The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, the experimental data allow the determination of their distance distribution function, P(r). P(r) is derived as the solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of fulfilling the non-negativity constraint for P(r). For the case of overlapping broad and narrow distributions, the regularization by increasing the distance discretization length may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
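
    A minimal illustration of the Monte Carlo trial idea, under stated assumptions: the kernel below is a smooth placeholder (not the true DEER kernel), the distance and time grids are invented, and random non-negative updates of a discretized P(r) are kept only when they reduce the misfit with the data, so the non-negativity constraint is satisfied by construction. This is not the author's code.

```python
import math
import random

# Invented discretization: distance grid (its step plays the role of the regularization
# length) and time grid.
R = [2.0 + 0.1 * j for j in range(30)]
T = [0.05 * i for i in range(40)]

def kernel(t, r):
    """Smooth placeholder kernel of a first-kind Fredholm equation (not the DEER kernel)."""
    return 1.0 / (1.0 + (t * r) ** 2)

def signal(p):
    """Numerical integration of the Fredholm equation for a trial distribution p(r)."""
    return [sum(kernel(t, r) * pj for r, pj in zip(R, p)) for t in T]

def misfit(p, data):
    return sum((s - d) ** 2 for s, d in zip(signal(p), data))

def monte_carlo_fit(data, n_trials=5000, step=0.02, rng=random):
    """Random non-negative updates of P(r); a trial is kept only if the misfit decreases."""
    p = [1.0 / len(R)] * len(R)
    best = misfit(p, data)
    for _ in range(n_trials):
        j = rng.randrange(len(R))
        trial = p[:]
        trial[j] = max(0.0, trial[j] + rng.uniform(-step, step))  # non-negativity by construction
        m = misfit(trial, data)
        if m < best:
            p, best = trial, m
    return p

if __name__ == "__main__":
    true_p = [math.exp(-((r - 3.0) / 0.3) ** 2) for r in R]
    fitted = monte_carlo_fit(signal(true_p))
    print("location of the largest fitted P(r) value: r =", R[fitted.index(max(fitted))])
```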

  14. An empirical determination of upper operational frequency limits of transferred electron mechanism in bulk GaAs and GaN through ensemble Monte Carlo particle simulations

    Science.gov (United States)

    Francis, S.; van Zyl, R. R.; Perold, W. J.

    2015-08-01

    The ensemble Monte Carlo particle simulation technique is used to determine the upper operational frequency limit of the transferred electron mechanism in bulk GaAs and GaN empirically. This mechanism manifests as a decrease in the average velocity of the electrons in the bulk material with an increase in the electric field bias, which yields the characteristic negative slope in the velocity-field curves of these materials. A novel approach is proposed whereby the hysteresis in the simulated dynamic, high-frequency velocity-field curves is exploited. The upper operational frequency limit supported by the material is defined as that frequency, where the average gradient of the dynamic characteristic curve over a radio frequency cycle approaches zero. Effects of temperature and doping level on the operational frequency limit are reported. The frequency limit thus obtained is also useful to predict the highest fundamental frequency of operation of transferred electron devices, such as Gunn diodes, which are based on materials that support the transferred electron mechanism. Based on the method presented here, the upper operational frequency limits of the transferred electron mechanism in bulk GaAs and GaN are 80 and 255 GHz, respectively, at typical doping levels and operating temperatures of Gunn diodes.

  15. Who Writes Carlos Bulosan?

    Directory of Open Access Journals (Sweden)

    Charlie Samuya Veric

    2001-12-01

    Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.

  16. Experimental analysis of bridging transfer without arc interruption of self-shielded flux cored wire

    Institute of Scientific and Technical Information of China (English)

    王志明; 刘海云; 王勇; 张英乔

    2011-01-01

    By using high-speed photography, a Hanover arc welding quality analyzer and a stereomicroscope, the transfer characteristics and transfer mechanism of bridging transfer without arc interruption for self-shielded flux cored wire have been studied experimentally. The results reveal that bridging transfer without arc interruption is a droplet transfer mode in which the liquid bridge persists while the arc is not extinguished, and it is one of the main droplet transfer modes of self-shielded flux cored wire. The arc voltage and welding current waveforms show no short-circuit transfer characteristics, exhibiting only small fluctuations within a certain range, which corresponds to the characteristics of bridging transfer without arc interruption; neither the voltage nor the current probability density distribution curve shows the features of short-circuit transfer. The liquid bridge of this transfer mode is formed by molten slag enveloping the liquid metal, and the transfer is accomplished mainly under the combined action of surface tension and electromagnetic force.

  17. Approaching Chemical Accuracy with Quantum Monte Carlo

    OpenAIRE

    Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.

    2012-01-01

    International audience; A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol.

  18. Extended Ensemble Monte Carlo

    OpenAIRE

    Iba, Yukito

    2000-01-01

    "Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
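
    As a reminder of the core idea behind one member of this family, the sketch below shows exchange (parallel tempering) Monte Carlo for a toy one-dimensional double-well energy: ordinary Metropolis updates within each replica, plus a swap move between neighbouring temperatures accepted with the usual exchange criterion. This is a textbook sketch with invented parameters, not code from the survey.

```python
import math
import random

def energy(x):
    """Double-well potential used only for illustration."""
    return (x * x - 1.0) ** 2

def local_move(x, beta, step=0.5, rng=random):
    """Ordinary Metropolis update of one replica at inverse temperature beta."""
    y = x + rng.uniform(-step, step)
    d_e = energy(y) - energy(x)
    if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
        return y
    return x

def accept_swap(e1, e2, beta1, beta2, rng=random):
    """Exchange Monte Carlo criterion for swapping the configurations of two replicas."""
    delta = (beta1 - beta2) * (e1 - e2)
    return delta >= 0.0 or rng.random() < math.exp(delta)

if __name__ == "__main__":
    betas = [0.2, 1.0, 5.0]                    # one replica per inverse temperature
    xs = [0.0] * len(betas)
    for _ in range(20000):
        xs = [local_move(x, b) for x, b in zip(xs, betas)]
        i = random.randrange(len(betas) - 1)   # attempt a swap between neighbouring replicas
        if accept_swap(energy(xs[i]), energy(xs[i + 1]), betas[i], betas[i + 1]):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    print("final replica positions:", [round(x, 2) for x in xs])
```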

  19. Monte Carlo determination of the conversion coefficients Hp(3)/Ka in a right cylinder phantom with 'PENELOPE' code. Comparison with 'MCNP' simulations.

    Science.gov (United States)

    Daures, J; Gouriou, J; Bordy, J M

    2011-03-01

    This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures and to develop methodologies for better assessing, and for reducing, exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study complements the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity Hp(3). In this study, a set of energy- and angular-dependent conversion coefficients Hp(3)/Ka, in the newly proposed square cylindrical phantom made of ICRU tissue, have been calculated with the Monte Carlo codes PENELOPE and MCNP5. The Hp(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially for small-angle incidences. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree quite well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 with two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as known, due to secondary electrons. PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of Hp(3) and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.

  20. Determination of output factor for 6 MV small photon beam: comparison between Monte Carlo simulation technique and microDiamond detector

    Science.gov (United States)

    Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.

    2016-03-01

    In order to improve quality of life for cancer patients, radiation techniques are constantly evolving. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They comprise many small beam sizes (beamlets) with various intensities to achieve the intended radiation dose to the tumor and a minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The calibration of the Monte Carlo simulation was done using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10×10 cm2 field size at 100 cm SSD. The values obtained from the calculations and measurements are consistent, with no more than 1% difference. The output factors obtained from the microDiamond detector have been compared with those of the SFD and the Monte Carlo simulation, and the results demonstrate a percentage difference of less than 2%.

  1. Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe

    Science.gov (United States)

    Martin, Nicolas

    This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational route between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data involved in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation, although this model preserves the quality of the physical laws present in the ENDF format. Owing to its low computational cost, the multigroup Monte Carlo approach is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes; this is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code; (2) the combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm; (3) the derivation of a model for taking into account anisotropic
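
    Point (2) above, the coupling of probability-table sampling in energy with delta-tracking in space, can be sketched schematically as follows. The cross sections, the majorant and the two-region slab geometry are invented for illustration; this is not the solver developed in the thesis.

```python
import math
import random

# Hypothetical probability tables per material: (band probability, total cross section in 1/cm).
TABLES = {
    "fuel":      [(0.6, 0.8), (0.3, 8.0), (0.1, 40.0)],
    "moderator": [(1.0, 1.2)],
}
SIGMA_MAJ = 40.0        # majorant over all bands and materials
SLAB = 5.0              # slab thickness in cm

def material_at(x):
    """Two-region slab: fuel for 0-1 cm, moderator beyond (illustrative geometry)."""
    return "fuel" if x < 1.0 else "moderator"

def sample_sigma(table, rng=random):
    """Draw a banded cross section from a probability table."""
    xi, cum = rng.random(), 0.0
    for prob, sigma in table:
        cum += prob
        if xi <= cum:
            return sigma
    return table[-1][1]

def next_real_collision(x0, rng=random):
    """Woodcock delta-tracking with the local cross section drawn from a probability table."""
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / SIGMA_MAJ   # flight against the majorant
        if x >= SLAB:
            return None                                  # particle leaked out of the slab
        sigma = sample_sigma(TABLES[material_at(x)], rng)
        if rng.random() < sigma / SIGMA_MAJ:             # real collision, otherwise virtual
            return x

print("first real collision at x =", next_real_collision(0.0))
```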

  2. Monte Carlo methods

    OpenAIRE

    Bardenet, R.

    2012-01-01

    ISBN:978-2-7598-1032-1; International audience; Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Monte Carlo Markov chain (MCMC) methods. We give intuition on the theoretic...
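
    Of the basic algorithms named in this review, rejection sampling is the shortest to state; a generic sketch (not taken from the text, with an arbitrary example target) is given below.

```python
import math
import random

def rejection_sample(target_pdf, propose, proposal_pdf, m, rng=random):
    """Draw one sample from target_pdf using an envelope m * proposal_pdf >= target_pdf."""
    while True:
        x = propose()
        if rng.random() * m * proposal_pdf(x) <= target_pdf(x):
            return x

# Example: a standard normal truncated to [0, 1), using uniform proposals (envelope m = 1).
target = lambda x: math.exp(-0.5 * x * x)
samples = [rejection_sample(target, random.random, lambda x: 1.0, m=1.0) for _ in range(10000)]
print("sample mean of the truncated normal:", sum(samples) / len(samples))
```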

  3. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
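
    The Buffon's needle problem mentioned above makes a compact first Monte Carlo exercise: needles of length L are dropped on a floor ruled with parallel lines a distance d >= L apart, the crossing probability is 2L/(pi d), and pi is estimated from the observed crossing fraction. A minimal sketch, not taken from the book:

```python
import math
import random

def buffon_pi(n_drops=1_000_000, needle=1.0, spacing=1.0, rng=random):
    """Estimate pi from the fraction of dropped needles that cross a line (needle <= spacing)."""
    crossings = 0
    for _ in range(n_drops):
        centre = rng.uniform(0.0, spacing / 2.0)   # distance from needle centre to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)    # acute angle between needle and the lines
        if centre <= (needle / 2.0) * math.sin(theta):
            crossings += 1
    # P(cross) = 2 * needle / (pi * spacing), hence the estimator below.
    return 2.0 * needle * n_drops / (spacing * crossings)

print("pi estimate from Buffon's needle:", buffon_pi())
```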

  4. Determination of boron over a large dynamic range by prompt-gamma activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, R.K. [University of Texas at Austin, Nuclear Engineering Teaching Lab., Pickle Research Campus, R-9000 Austin, TX 78712 (United States); Landsberger, S. [University of Texas at Austin, Nuclear Engineering Teaching Lab., Pickle Research Campus, R-9000 Austin, TX 78712 (United States)], E-mail: s.landsberger@mail.utexas.edu

    2009-02-15

    An evaluation of the PGAA method for the determination of boron across a wide dynamic range of concentrations was performed, for trace levels up to 5 wt.% boron. This range encompasses a transition from neutron transparency to significant self-shielding conditions. To account for self-shielding, several PGAA techniques were employed. First, a calibration curve was developed in which a set of boron standards was tested and the count rate to boron mass curve was determined. This set of boron measurements was compared with an internal-standard self-shielding correction method and with a method for determining composition using PGAA peak ratios. The advantages and disadvantages of each method are analyzed. The boron concentrations of several laboratory-grade chemicals and standard reference materials were measured with each method and compared. The evaluation of the boron content of nanocrystalline transition metals prepared with a boron-containing reducing agent was also performed with each of the methods tested. Finally, the k₀ method was used for non-destructive measurement of boron in catalyst materials for the characterization of new non-platinum fuel cell catalysts.

  5. Technology and equipment development of self-shielded flux cored narrow gap arc rail welding

    Institute of Scientific and Technical Information of China (English)

    宋宏图; 李力; 丁韦; 季关钰

    2011-01-01

    The large-scale construction of continuously welded (jointless) rail urgently requires an in-situ welding method with matching performance, quality and production efficiency; at present the most widely used methods are thermit welding and arc welding. This paper introduces the application of narrow gap arc welding to rail welding, focusing on real-time arc position detection technology and on the process and equipment for automatic narrow gap arc rail welding with self-shielded flux cored wire. Joint performance tests show that joints welded by automatic narrow gap arc welding with self-shielded flux cored wire perform well, exceeding the joint performance of thermit welding, the other in-situ welding method: they pass the drop hammer test that thermit welds cannot pass, their tensile properties are also stronger than those of thermit welds, and their impact performance is substantially better than that of the flash welding, gas pressure welding and thermit welding currently in use.

  6. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    Energy Technology Data Exchange (ETDEWEB)

    Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)

    2005-07-01

    This paper presents how Monte-Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab-core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas taking validation results into account, validation of new self-shielding models, parallelization) are suggested to improve even further the APOLLO2-CRONOS2 standard calculation route. (author)

  7. The impact of absorption coefficient on polarimetric determination of Berry phase based depth resolved characterization of biomedical scattering samples: a polarized Monte Carlo investigation

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Justin S [ORNL; Koju, Vijay [ORNL; John, Dwayne O [ORNL

    2016-01-01

    The modulation of the state of polarization of photons due to scatter generates associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on Berry phase tracking implemented Polarized Monte Carlo Code, indicate that sample absorption plays a significant role in the mean depth attained by the image forming backscattered detected photons.

  8. San Carlo Operaen

    DEFF Research Database (Denmark)

    Holm, Bent

    2005-01-01

    A placement of the San Carlo opera house in a cultural-historical context of representation, with particular reference to the concept of napolalità.

  9. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    ROESSEL, ROBERT A., JR.

    The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…

  10. Efficient kinetic Monte Carlo simulation

    Science.gov (United States)

    Schulze, Tim P.

    2008-02-01

    This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented: one that combines the use of inverted-list data structures with rejection Monte Carlo, and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
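
    The rejection variant mentioned above can be stated compactly: a site is picked uniformly (which inverted lists make cheap when only active sites are eligible), a tentative time step is drawn against the total majorant rate, and the event is accepted with probability rate/rate_max, so the cost per attempted event does not grow with the system size. The sketch below is a generic rejection-KMC step under these assumptions, not the paper's implementation.

```python
import math
import random

def rejection_kmc_step(rates, rate_max, t, rng=random):
    """One rejection-KMC event: returns (accepted site index, advanced time)."""
    n = len(rates)
    while True:
        t += -math.log(1.0 - rng.random()) / (n * rate_max)  # tentative advance, null events included
        i = rng.randrange(n)                                  # uniform site pick, O(1) per attempt
        if rng.random() < rates[i] / rate_max:                # accept with the local rate
            return i, t

# Example: environment-determined rates bounded by rate_max = 1.0.
rates = [random.uniform(0.1, 1.0) for _ in range(1000)]
site, t = rejection_kmc_step(rates, rate_max=1.0, t=0.0)
print("event at site", site, "at time", round(t, 4))
```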

  11. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  12. Determining the focal spot limit of 1 MeV X-ray targets: Monte Carlo simulation of the point spread function

    Science.gov (United States)

    Wang, Jiayue; Shi, Jiaru; Huang, Wenhui; Tang, Chuanxiang

    2017-02-01

    Among all microfocus X-ray tubes, 1 MeV has remained a "gray zone" despite its universal application in radiation therapy and non-destructive testing. One challenge in fabricating 1 MeV microfocus X-ray tubes is beam broadening inside metal anodes, which limits the minimum focal spot size a system can obtain. In particular, a complete understanding of the intrinsic broadening process, i.e., the point-spread function (PSF) of X-ray targets, is needed. In this paper, the relationships between the PSF and beam energy, target thickness and electron incidence angle were investigated via Monte Carlo simulation. Focal spot limits for both transmission- and reflection-type tungsten targets at 0.5, 1 and 1.5 MeV were calculated, with target thicknesses ranging from 1 μm to 2 cm. Transmission-type targets with thickness less than 5 μm could achieve micrometer-scale spots, while reflection-type targets exhibited superiority for spots larger than 100 μm. In addition, by demonstrating the spot variation at off-normal incidence, the role of a unidirectional beam was explored in microfocus X-ray systems. We expect that these results can enable alternative designs to improve the focal spot limit of X-ray tubes and benefit accurate photon source modeling.

  13. Quantum Monte Carlo simulation

    OpenAIRE

    Wang, Yazhen

    2011-01-01

    Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...

  14. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  15. KENO-NR: A Monte Carlo Code Simulating the Californium-252-Source-Driven Noise Analysis Experimental Method for Determining Subcriticality

    Science.gov (United States)

    Ficaro, Edward Patrick

    The ²⁵²Cf-source-driven noise analysis (CSDNA) requires the measurement of the cross power spectral density (CPSD) G₂₃(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G₁₂(ω) and G₁₃(ω) between the neutron detectors and an ionization chamber 1 containing ²⁵²Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G₁₂*(ω)G₁₃(ω) / [G₁₁(ω)G₂₃(ω)], using a point kinetic model formulation which is independent of the detector's properties and a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite ⁶Li-glass-plastic scintillators at large subcriticalities for the tank of

  16. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium. On 7 April CERN hosted a celebration marking Carlo Rubbia's 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia," joked CERN's Director-General, Rolf Heuer, in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  17. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  18. Approaching Chemical Accuracy with Quantum Monte Carlo

    CERN Document Server

    Petruzielo, F R; Umrigar, C J

    2012-01-01

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.

  19. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  20. Carlos Vesga Duarte

    OpenAIRE

    Pedro Medina Avendaño

    1981-01-01

    Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an uncontaminated Santanderean who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.

  1. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    Science.gov (United States)

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions or models are fitted and validated using data from a small number of selected states, they need to be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was found that as the value of the true calibration factor deviates further from 1, more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities that are used for the calibration process.
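
    A toy version of the simulation experiment described above, with invented numbers and a deliberately simplified severity adjustment (not the authors' protocol): severities are drawn site by site from a base SDF whose severe-crash share is inflated by a 'true' factor, the scalar factor is re-estimated as observed over predicted severe crashes, and the replications show how the spread of the estimate depends on the number of sites.

```python
import math
import random
import statistics

def poisson(lam, rng=random):
    """Knuth's method; adequate for the small mean used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def estimate_factor(c_true, n_sites, p_base=0.2, mean_crashes=5.0, rng=random):
    """One replication: simulate severities, then re-estimate the scalar calibration factor
    as observed severe crashes over severe crashes predicted by the uncalibrated SDF."""
    p_true = min(1.0, c_true * p_base)       # simple proportional inflation (an assumption)
    observed, predicted = 0.0, 0.0
    for _ in range(n_sites):
        n = poisson(mean_crashes, rng)
        observed += sum(1 for _ in range(n) if rng.random() < p_true)
        predicted += n * p_base
    return observed / predicted

estimates = [estimate_factor(c_true=1.4, n_sites=100) for _ in range(500)]
print("mean estimate:", round(statistics.mean(estimates), 3),
      " spread (sd):", round(statistics.stdev(estimates), 3))
```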

  2. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian

    2016-01-01

    The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  3. Carlos Vesga Duarte

    Directory of Open Access Journals (Sweden)

    Pedro Medina Avendaño

    1981-01-01

    Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an uncontaminated Santanderean who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.

  4. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
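
    As a pocket illustration of the "how to sample" part of the outline, inverse transform sampling maps a uniform variate through the inverse cumulative distribution function; the sketch below does this for an exponential distribution. This is a generic example, not material from the slides.

```python
import math
import random

def sample_exponential(rate, rng=random):
    """Inverse transform sampling: solve u = F(x) = 1 - exp(-rate * x) for x."""
    u = rng.random()
    return -math.log(1.0 - u) / rate

samples = [sample_exponential(2.0) for _ in range(100_000)]
print("sample mean:", sum(samples) / len(samples), "(expected 1/rate = 0.5)")
```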

  5. Determination and Fabrication of New Shield Super Alloys Materials for Nuclear Reactor Safety by Experiments and Cern-Fluka Monte Carlo Simulation Code, Geant4 and WinXCom

    Science.gov (United States)

    Aygun, Bünyamin; Korkut, Turgay; Karabulut, Abdulhalik

    2016-05-01

    Despite the possible depletion of fossil fuels and increasing energy needs, the use of radiation tends to increase. Recently, the safety-focused debate about planned nuclear power plants has continued. The objective of this thesis is to prevent radiation from spreading from nuclear reactors into the environment. To do this, we produced new, higher-performance shielding materials that strongly attenuate radiation under reactor operating conditions. Additives used in the new shielding materials include iron (Fe), rhenium (Re), nickel (Ni), chromium (Cr), boron (B), copper (Cu), tungsten (W), tantalum (Ta) and boron carbide (B4C). The results of these experiments indicated that the materials are good shields against gamma rays and neutrons. The powder metallurgy technique was used to produce the new shielding materials. The CERN FLUKA and Geant4 Monte Carlo simulation codes and WinXCom were used to determine the percentages of the components of these high-temperature-resistant, fast-neutron and gamma shielding materials. Super alloys were produced, and experimental fast-neutron dose equivalent measurements and gamma absorption measurements of the new shielding materials were then carried out. The produced materials have the qualities to be used safely not only in reactors but also in nuclear medicine treatment rooms, for the storage of nuclear waste, in nuclear research laboratories, and against cosmic radiation in space vehicles.

  6. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...

  7. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  8. Epistolary correspondence between Carlos Vicioso and Carlos Pau during a stay in Bicorp (Valencia)

    Directory of Open Access Journals (Sweden)

    Pedro Pablo Ferrer Gallego

    2012-07-01

    Full Text Available A set of letters sent from Carlos Vicioso to Carlos Pau during a stay in Bicorp (Valencia) between 1914 and 1915 is presented and discussed here. The letters are held in the Archivo Histórico del Instituto Botánico de Barcelona. This correspondence marks the beginning of the scientific relationship between Vicioso and Pau. At first, the correspondence was based on the consultations Vicioso addressed to Pau for the determination of the species that he sent from Bicorp as herbarium sheets. Nowadays, these witness sheets are preserved in national and international herbaria, thanks to the exchange of botanical material between Vicioso and other botanists of the time, mainly Pau, Sennen and Font Quer.

  9. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  10. Ing. Carlos M. Ochoa

    OpenAIRE

    Montesinos A, Fernando; Facultad de Farmacia y Bioquímica de la Universidad Nacional Mayor de San Marcos, Lima, Perú.

    2014-01-01

    This figure is an extraordinary researcher who devoted many years to the study of the potato, a tuber of the genus Solanum, and to the countless species and varieties that cover the territories of Peru, Bolivia and Chile, and possibly other countries. Originally wild, today, as a result of scientific progress, it constitutes a food of great value to the world from every point of view. Carlos M. Ochoa was born in Cusco; he moved to Bolivia, where he carried out his initial studies,...

  11. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-10

    These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particles histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite - Moonlight scales poorly. Interconnect specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  12. Modelling of HTR (High Temperature Reactor Pebble-Bed 10 MW to Determine Criticality as A Variations of Enrichment and Radius of the Fuel (Kernel With the Monte Carlo Code MCNP4C

    Directory of Open Access Journals (Sweden)

    Hammam Oktajianto

    2014-12-01

    Full Text Available The gas-cooled nuclear reactor is a Generation IV reactor concept which has been receiving significant attention due to many desired characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The high temperature reactor (HTR) pebble-bed, one type of gas-cooled reactor concept, is attracting attention. In the HTR pebble-bed design, the radius and enrichment of the fuel kernel are the key parameters that can be chosen freely to obtain the desired criticality. This paper models a 10 MW HTR pebble-bed reactor and determines the effective enrichment and radius of the fuel kernel needed to reach criticality. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles using a simple-cubic (SC) lattice, and the pebble-bed balls and moderator balls were distributed in the core zone using a body-centred cubic lattice, assuming fresh fuel with an enrichment of 7-17% in 1% steps and a kernel radius of 175-300 µm in 25 µm steps. The geometrical model of the full reactor is obtained by using the lattice and universe facilities provided by MCNP4C. The details of the model are discussed together with the necessary simplifications. Criticality calculations were conducted with the Monte Carlo transport code MCNP4C and the continuous-energy nuclear data library ENDF/B-VI. From the calculation results it can be concluded that the combinations of enrichment and kernel radius achieving a critical condition are 15-17% at a radius of 200 µm, 13-17% at 225 µm, 12-15% at 250 µm, 11-14% at 275 µm and 10-13% at 300 µm, so these enrichments and kernel radii can be considered for the HTR 10 MW. Keywords: MCNP4C, HTR, enrichment, radius, criticality

  13. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  14. [Relationship between climate change and determinant factors of mortality among the elderly in the municipality of São Carlos (São Paulo, Brazil) over a period of ten years].

    Science.gov (United States)

    Soares, Fabiana Vieira; Greve, Patrícia; Sendín, Francisco Alburquerque; Benze, Benedito Galvão; de Castro, Alessandra Paiva; Rebelatto, José Rubens

    2012-01-01

    The aim of this study was to identify the correlation between the number of deaths of elderly people and climate change in the district of São Carlos (SP) over a period of 10 years (1997-2006). Records of deaths were obtained from DATASUS for people aged over 60 who died between 1997 and 2006 in São Carlos. The average monthly maximum and minimum temperature data and relative air humidity in São Carlos were provided by the National Institute of Meteorology. The mortality coefficient of the district was calculated by gender and age and the resulting data were analyzed using t test, one-way ANOVA, the Bonferroni test and the Pearson correlation coefficient test. There were 8,304 deaths which predominantly occurred among males aged over 80, and diseases of the circulatory system were the main cause of death. There was a positive correlation between mortality by infectious disease and minimum humidity, and a negative correlation between mortality by infectious diseases and minimum temperatures, between mortality caused by respiratory disease and minimum humidity, between mortality caused by endocrine disease and minimum and maximum temperature. Thereby, it was possible to conclude that there was a correlation between climate change and mortality among elderly individuals in São Carlos.

  15. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
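
    To make the fixed random walk idea concrete: the potential at an interior node of a Laplace problem is estimated by averaging the boundary values reached by many random walks started at that node. The sketch below uses a square grid with invented Dirichlet data (100 V on one edge, 0 V elsewhere) and is a generic illustration, not code from the book.

```python
import random

N = 20   # grid nodes run from 0 to N; nodes with 0 < i, j < N are interior

def boundary_value(i, j):
    """Dirichlet data: 100 V on the top edge (j == N), 0 V on the other three edges."""
    return 100.0 if j == N else 0.0

def walk_potential(i0, j0, n_walks=5000, rng=random):
    """Fixed random walk estimate of the Laplace solution at interior node (i0, j0)."""
    total = 0.0
    for _ in range(n_walks):
        i, j = i0, j0
        while 0 < i < N and 0 < j < N:        # wander until the boundary is reached
            step = rng.randrange(4)           # equal probability to each of the 4 neighbours
            if step == 0:
                i += 1
            elif step == 1:
                i -= 1
            elif step == 2:
                j += 1
            else:
                j -= 1
        total += boundary_value(i, j)         # score the boundary value that was hit
    return total / n_walks

print("potential at the centre:", walk_potential(N // 2, N // 2))
```

    For the chosen boundary data the exact potential at the centre of the square is 25 V, which the walk estimate approaches as the number of walks grows.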

  16. Confidence and efficiency scaling in variational quantum Monte Carlo calculations

    Science.gov (United States)

    Delyon, F.; Bernu, B.; Holzmann, Markus

    2017-02-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.

  17. Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations

    CERN Document Server

    Delyon, François; Holzmann, Markus

    2016-01-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.

  18. Radiative Equilibrium and Temperature Correction in Monte Carlo Radiation Transfer

    OpenAIRE

    Bjorkman, J. E.; Wood, Kenneth

    2001-01-01

    We describe a general radiative equilibrium and temperature correction procedure for use in Monte Carlo radiation transfer codes with sources of temperature-independent opacity, such as astrophysical dust. The technique utilizes the fact that Monte Carlo simulations track individual photon packets, so we may easily determine where their energy is absorbed. When a packet is absorbed, it heats a particular cell within the envelope, raising its temperature. To enforce radiative equilibrium, the ...

  19. Metropolis Methods for Quantum Monte Carlo Simulations

    OpenAIRE

    Ceperley, D. M.

    2003-01-01

    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: Variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
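
    A minimal sketch of the Metropolis algorithm in the variational Monte Carlo setting mentioned above: sample |psi|^2 for a trial wave function with a symmetric proposal and average the local energy. The 1-D harmonic-oscillator trial function and the value of the variational parameter are assumptions chosen only for the illustration (Python).

        import math
        import random

        def metropolis(prob, x0, step, n_samples, seed=0):
            """Generic Metropolis sampler with a symmetric uniform proposal: a move
            x -> x' is accepted with probability min(1, prob(x') / prob(x))."""
            rng = random.Random(seed)
            x, samples = x0, []
            for _ in range(n_samples):
                x_new = x + rng.uniform(-step, step)
                if rng.random() < prob(x_new) / prob(x):
                    x = x_new
                samples.append(x)
            return samples

        # Variational-Monte-Carlo-style use: sample |psi|^2 for the trial function
        # psi(x) = exp(-a x^2 / 2) and average the local energy of the 1-D harmonic
        # oscillator, E_L(x) = a/2 + x^2 (1 - a^2)/2 (units hbar = m = omega = 1).
        a = 1.2                                   # illustrative variational parameter

        def psi2(x):
            return math.exp(-a * x * x)

        def local_energy(x):
            return 0.5 * a + 0.5 * x * x * (1.0 - a * a)

        xs = metropolis(psi2, x0=0.0, step=1.0, n_samples=200_000)[10_000:]  # drop burn-in
        print(sum(local_energy(x) for x in xs) / len(xs))   # exact minimum is 0.5 at a = 1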

  20. Adjustment and start-up of a self-shielded irradiator, model Isogamma LL.CO., at the Centre of Technological Applications and Nuclear Development; Ajuste y puesta en marcha en el centro de aplicaciones tecnologicas y desarrollo nuclear de un irradiador autoblindado modelo Isogamma LL.CO.

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Dania Soguero; Ardanza, Armando Chavez, E-mail: sdania@ceaden.edu.cu [Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear (CEADEN), La Habana (Cuba)

    2013-07-01

    This paper describes the installation process of a category I self-shielded irradiator, model ISOGAMMA LL.Co of {sup 60}Co, with a nominal activity of 25 kCi, an absorbed dose rate of 8 kGy/h and a 5 L working volume. The stages are described step by step: the import; the customs procedure, which included the interview with the master of the transport vessel; the monitoring of the entire process by the head of radiological protection of the importing Centre; the control of the surface contamination levels of the shipping container of the sources before removal from the ship; the supervision of the national regulatory authority; and the transportation to the final destination. Details of the assembly of the installation and the opening of the transport container are outlined. The action plan previously developed for the case of occurrence of radiological events is presented, detailing the loading phase of the radioactive sources by the specialists of the company selling the facility (IZOTOP). Finally, the adjustment and commissioning of the installation and the licensing procedure for operation are described.

  1. An enhanced Monte Carlo outlier detection method.

    Science.gov (United States)

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
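
    For readers unfamiliar with the baseline being enhanced here, the sketch below shows plain Monte Carlo cross-validation outlier screening: a model is fitted repeatedly on random subsets, each sample's out-of-sample prediction errors are accumulated, and samples with unusually large average error stand out. It is a generic baseline, not the enhanced cross-prediction scheme of the paper; the linear model, the toy data and the choice to report the three highest-scoring samples are assumptions (Python, using NumPy).

        import numpy as np

        rng = np.random.default_rng(3)

        def mc_outlier_scores(X, y, n_iter=500, train_frac=0.7):
            """Monte Carlo cross-validation outlier screening: average out-of-sample
            absolute prediction error per sample over many random train/test splits."""
            n = len(y)
            err_sum, err_cnt = np.zeros(n), np.zeros(n)
            Xb = np.column_stack([X, np.ones(n)])            # add an intercept column
            for _ in range(n_iter):
                train = rng.choice(n, int(train_frac * n), replace=False)
                test = np.setdiff1d(np.arange(n), train)
                beta, *_ = np.linalg.lstsq(Xb[train], y[train], rcond=None)
                err_sum[test] += np.abs(Xb[test] @ beta - y[test])
                err_cnt[test] += 1
            return err_sum / np.maximum(err_cnt, 1)

        # Toy data: a linear relation plus three deliberately corrupted samples.
        X = rng.normal(size=(60, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=60)
        y[[5, 17, 42]] += 5.0
        scores = mc_outlier_scores(X, y)
        print(np.argsort(scores)[-3:])      # indices of the most suspicious samples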

  2. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  3. Conversation with Juan Carlos Negrete.

    Science.gov (United States)

    Negrete, Juan Carlos

    2013-08-01

    Juan Carlos Negrete is Emeritus Professor of Psychiatry, McGill University; Founding Director, Addictions Unit, Montreal General Hospital; former President, Canadian Society of Addiction Medicine; and former WHO/PAHO Consultant on Alcoholism, Drug Addiction and Mental Health.

  4. TARC: Carlo Rubbia's Energy Amplifier

    CERN Multimedia

    Laurent Guiraud

    1997-01-01

    Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.

  5. Monte Carlo integration on GPU

    OpenAIRE

    Kanzaki, J.

    2010-01-01

    We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on GPU. By using $W^{+}$ plus multi-gluon production processes at LHC, we test integrated cross sections and execution time for programs in FORTRAN and C on CPU and those on GPU. Integrated results agree with each other within statistical errors. Execution times of programs on GPU are about 50 times faster than those in C...

  6. Monte Carlo simulation methods of determining red bone marrow dose from external radiation

    Institute of Scientific and Technical Information of China (English)

    高佚名; 刘海宽; 顾乃谷; 吴锦海; 黄卫琴; 王凤仙; 王力; 苏旭

    2011-01-01

    Objective To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods Using the Monte Carlo simulation software MCNPX, the absorbed doses to red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom were calculated through 4 different methods: direct energy deposition, dose response function (DRF), King-Spiers factor method and mass-energy absorption coefficient (MEAC). The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of the different methods were compared for the different simulated photon energies. Results When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods showed more reasonable results. When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation.

  7. Monte Carlo analysis of the long-lived fission product neutron capture rates at the Transmutation by Adiabatic Resonance Crossing (TARC) experiment

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)

    2013-01-15

    Highlights: ► TARC experiment benchmark capture rate results. ► Utilization of updated databases, including ADSLib. ► Self-shielding effect in reactor design for transmutation. ► Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron capture in {sup 99}Tc, {sup 127}I and {sup 129}I. The analysis of the results shows how self-shielding effects due to the resonances of these nuclides at epithermal energies strongly affect their transmutation rate. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.

  8. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.

  9. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
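
    The telescoping identity at the heart of MLMC can be illustrated on a much simpler problem than the Bayesian inverse problem treated in the paper: below, the expectation of the terminal value of a scalar SDE is written as a coarse-level estimate plus level-by-level corrections, with fine and coarse Euler paths sharing their Brownian increments. The SDE, its parameters and the per-level sample sizes are assumptions for the illustration, and the sequential Monte Carlo aspect is not shown (Python, using NumPy).

        import numpy as np

        rng = np.random.default_rng(1)

        def mlmc_level(l, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
            """One term of the MLMC telescoping sum for E[X_T] of the SDE
            dX = mu X dt + sigma X dW, discretized by Euler-Maruyama.
            Level l uses 2**l steps; fine and coarse paths share Brownian increments."""
            n_fine = 2 ** l
            dt_f = T / n_fine
            dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, n_fine))
            xf = np.full(n_samples, x0)
            for i in range(n_fine):
                xf = xf + mu * xf * dt_f + sigma * xf * dW[:, i]
            if l == 0:
                return xf.mean()
            n_coarse = n_fine // 2
            dt_c = T / n_coarse
            xc = np.full(n_samples, x0)
            for i in range(n_coarse):
                dW_c = dW[:, 2 * i] + dW[:, 2 * i + 1]   # sum of the two fine increments
                xc = xc + mu * xc * dt_c + sigma * xc * dW_c
            return (xf - xc).mean()

        # Telescoping estimator: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
        # with fewer samples on the finer (more expensive) levels.
        estimate = sum(mlmc_level(l, n_samples=1000 + 20_000 // 2 ** l) for l in range(6))
        print(estimate)     # should be close to x0 * exp(mu * T) ~ 1.0513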

  10. Equilibrium Statistics: Monte Carlo Methods

    Science.gov (United States)

    Kröger, Martin

    Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract distributions by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].

  11. Monte Carlo Hamiltonian: Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  12. Proton Upset Monte Carlo Simulation

    Science.gov (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  13. Determination of absorbed dose distribution in water for COC ophthalmic applicator of {sup 106}Ru/{sup 106}Rh using Monte Carlo code-MCNPX; Determinacao da distribuicao de dose absorvida na agua para o aplicador oftalmico COC de {sup 106}Ru/{sup 106}Rh utilizando o codigo de Monte Carlo - MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Barbosa, Nilseia A.; Rosa, Luiz A. Ribeiro da, E-mail: nilseia@ird.gov.br, E-mail: lrosa@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ),Rio de Janeiro, RJ (Brazil); Braz, Delson, E-mail: delson@nuclear.ufrj.br [Coordenacao dos programas de Pos-Graduacao em Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2014-07-01

    The COC ophthalmic applicators, which use a {sup 106}Ru/{sup 106}Rh beta radiation source, are used in the treatment of intraocular tumors near the optic nerve. In this type of treatment it is very important to know the dose distribution, in order to provide the best possible delivery of the prescribed dose to the tumor while preserving the optic nerve, an extremely critical region that, if damaged, can compromise the patient's visual acuity and cause brain sequelae. These dose distributions are complex, and the physicians responsible for the therapy only have the source calibration certificate provided by the manufacturer Eckert and Ziegler BEBIG GmbH. These certificates provide 10 absorbed dose values at depths in water along the central axis of the applicator, with uncertainties of the order of 20%, and isodoses in a plane located 1 mm from the applicator surface. Thus, it is important to know with more detail and precision the dose distributions in water generated by such applicators. To this end, Monte Carlo simulation was used, employing the MCNPX code. Initially, the simulation was validated by comparing the results obtained along the central axis of the applicator with those provided by the certificate. The percentage differences were lower than 5%, validating the method used. Lateral dose profiles were calculated for 6 different depths, at intervals of 1 mm, as well as the dose rates in mGy.min{sup -1} for the same depths.

  14. Experimental measurement and Monte Carlo correction of neutron inelastic scattering cross section of 115In

    Institute of Scientific and Technical Information of China (English)

    王攀; 肖军; 李映映; 李子越; 汪超

    2016-01-01

    Background: 115In is an important activation material, and accurate measurement of its neutron inelastic scattering cross section data is of great significance for neutron flux monitoring. Purpose: The purpose is to measure the 115In neutron inelastic scattering cross section and compare the results with the existing data. Methods: The cross sections at 2.95 MeV, 3.94 MeV and 5.24 MeV were measured using the activation technique, with 197Au as the standard, at the 2.5 MV electrostatic proton accelerator of Sichuan University; the D(d,n)3He reaction was used as the neutron source. Deviations caused by multiple scattering and self-shielding in the experiment were corrected with MCNPX. Results: 115In neutron inelastic scattering cross section data at the three energy values were obtained after the Monte Carlo correction, and the results fit well with the calculated values of Loevestam. Conclusion: The effects of multiple scattering and self-shielding can be reduced by reducing the thickness of the target tube, the bottom lining, the water layer and the cladding material of the sample.

  15. Carlos Mayo and Argentine historiography

    Directory of Open Access Journals (Sweden)

    Sara E. Mata

    2012-11-01

    Full Text Available The work of Carlos Mayo is distinguished by its originality and academic excellence. Our goal has been to briefly address his valuable contributions to Argentine historiography, particularly those relating to the agricultural history of the Río de la Plata

  16. Monte Carlo Particle Lists: MCPL

    CERN Document Server

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.

  17. Applications of Monte Carlo Methods in Calculus.

    Science.gov (United States)

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
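
    Two of the calculus applications listed above, Monte Carlo estimation of an integral (the probabilistic analogue of a Riemann sum) and locating the maximum of a function, fit in a few lines. The particular integrand, interval and sample sizes below are assumptions chosen only to make the example concrete (Python).

        import math
        import random

        random.seed(42)

        def mc_integral(f, a, b, n=100_000):
            """Monte Carlo analogue of a Riemann sum: average f at random points in [a, b]."""
            return (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n

        def mc_maximize(f, a, b, n=100_000):
            """Crude Monte Carlo search for the maximum of f on [a, b]."""
            best_x = max((random.uniform(a, b) for _ in range(n)), key=f)
            return best_x, f(best_x)

        print(mc_integral(math.sin, 0.0, math.pi))                 # exact value: 2
        print(mc_maximize(lambda x: x * math.exp(-x), 0.0, 5.0))   # maximum at x = 1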

  18. Structure of Self-shielding Electron Beam Installation for Sterilization

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to prevent terrorists from using letters containing anthrax germs or spores in the postal system to disturb society, and to defend people's life and safety, the China Institute of Atomic Energy (CIAE) has developed a self-shielding electron beam installation for sterilization (SEBIS).

  19. Self shielding of surfaces irradiated by intense energy fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Varghese, P.L.; Howell, J.R.; Propp, A.

    1991-08-01

    This dissertation will outline direct methods of temperature, density, composition, and velocity measurement which should be widely applicable to railgun systems. The measurements demonstrated here should provide a useful basis for further studies of plasma/target interaction.

  20. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    Science.gov (United States)

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

    The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…

  1. Monte Carlo Simulation Optimizing Design of Grid Ionization Chamber

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Yu-lai; WANG; Qiang; YANG; Lu

    2013-01-01

    The grid ionization chamber detector is often used for measuring charged particles. Based on the Monte Carlo simulation method, the energy loss distribution and the number of electron-ion pairs of alpha particles with different energies have been calculated to determine a suitable filling gas in the ionization chamber filled with

  2. Determining

    Directory of Open Access Journals (Sweden)

    Bahram Andarzian

    2015-06-01

    Full Text Available Wheat production in the south of Khuzestan, Iran is constrained by heat stress for late sowing dates. For optimization of yield, sowing at the appropriate time to fit the cultivar maturity length and growing season is critical. Crop models could be used to determine optimum sowing window for a locality. The objectives of this study were to evaluate the Cropping System Model (CSM-CERES-Wheat for its ability to simulate growth, development, grain yield of wheat in the tropical regions of Iran, and to study the impact of different sowing dates on wheat performance. The genetic coefficients of cultivar Chamran were calibrated for the CSM-CERES-Wheat model and crop model performance was evaluated with experimental data. Wheat cultivar Chamran was sown on different dates, ranging from 5 November to 9 January during 5 years of field experiments that were conducted in the Khuzestan province, Iran, under full and deficit irrigation conditions. The model was run for 8 sowing dates starting on 25 October and repeated every 10 days until 5 January using long-term historical weather data from the Ahvaz, Behbehan, Dezful and Izeh locations. The seasonal analysis program of DSSAT was used to determine the optimum sowing window for different locations as well. Evaluation with the experimental data showed that performance of the model was reasonable as indicated by fairly accurate simulation of crop phenology, biomass accumulation and grain yield against measured data. The normalized RMSE were 3%, 2%, 11.8%, and 3.4% for anthesis date, maturity date, grain yield and biomass, respectively. Optimum sowing window was different among locations. It was opened and closed on 5 November and 5 December for Ahvaz; 5 November and 15 December for Behbehan and Dezful;and 1 November and 15 December for Izeh, respectively. CERES-Wheat model could be used as a tool to evaluate the effect of sowing date on wheat performance in Khuzestan conditions. Further model evaluations

  3. Experimental Monte Carlo Quantum Process Certification

    CERN Document Server

    Steffen, L; Fedorov, A; Baur, M; Wallraff, A

    2012-01-01

    Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data post-processing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data is compared directly to an ideal process using Monte Carlo sampling. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of two qubit gates, such as the cphase and the cnot gate, and three qubit gates, such as the Toffoli gate and two sequential cphase gates.

  4. Accurate barrier heights using diffusion Monte Carlo

    CERN Document Server

    Krongchon, Kittithat; Wagner, Lucas K

    2016-01-01

    Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.

  5. An Unbiased Hessian Representation for Monte Carlo PDFs

    CERN Document Server

    Carrazza, Stefano; Kassabov, Zahari; Latorre, Jose Ignacio; Rojo, Juan

    2015-01-01

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (CMC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available togethe...

  6. An unbiased Hessian representation for Monte Carlo PDFs

    Energy Technology Data Exchange (ETDEWEB)

    Carrazza, Stefano; Forte, Stefano [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano (Italy); Kassabov, Zahari [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino (Italy); Latorre, Jose Ignacio [Universitat de Barcelona, Departament d' Estructura i Constituents de la Materia, Barcelona (Spain); Rojo, Juan [University of Oxford, Rudolf Peierls Centre for Theoretical Physics, Oxford (United Kingdom)

    2015-08-15

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) a Hessian representations of the NNPDF3.0 set, and the MC-H PDF set. (orig.)
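
    The basic step behind such a Hessian representation, diagonalizing the covariance of the Monte Carlo replicas and keeping the leading eigenvector directions as symmetric error sets, can be sketched as follows. This is only a schematic illustration of the idea, not the mc2hessian algorithm (which selects an optimal replica basis with a genetic algorithm); the toy replicas, their correlation length and the number of retained directions are assumptions (Python, using NumPy).

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy stand-in for a Monte Carlo PDF set: n_rep replicas of a quantity sampled at
        # n_x points (a real PDF set would come from LHAPDF, not from random numbers).
        n_rep, n_x = 100, 20
        idx = np.arange(n_x)
        cov_true = 0.01 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 5.0)
        central = np.sin(np.linspace(0.1, 1.0, n_x))
        replicas = central + rng.multivariate_normal(np.zeros(n_x), cov_true, size=n_rep)

        # Hessian-style representation: diagonalize the replica covariance and keep the
        # leading eigen-directions as symmetric error sets around the replica mean.
        mean = replicas.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(replicas, rowvar=False))
        order = np.argsort(eigval)[::-1][:10]        # keep the 10 leading directions
        error_sets = [mean + np.sqrt(eigval[i]) * eigvec[:, i] for i in order]

        # The quadrature sum over error sets approximately reproduces the Monte Carlo
        # standard deviation (up to the truncation to 10 directions).
        hess_err = np.sqrt(sum((s - mean) ** 2 for s in error_sets))
        print(np.c_[hess_err, replicas.std(axis=0, ddof=1)][:5])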

  7. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  8. Density matrix quantum Monte Carlo

    CERN Document Server

    Blunt, N S; Spencer, J S; Foulkes, W M C

    2013-01-01

    This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...

  9. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes the multilevel forward Euler Monte Carlo method introduced in (Michael B. Giles, Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al., Adaptive Monte Carlo algorithms for stopped diffusion, in Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88, Springer, Berlin, 2005; Kyoung-Sook Moon et al., Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al., An adaptive algorithm for ordinary, stochastic and partial differential equations, in Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al., Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) when using a single level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).

  10. Morse Monte Carlo Radiation Transport Code System

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, M.B.

    1975-02-01

    The report contains sections containing descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)

  11. Variational Monte Carlo study of pentaquark states

    Energy Technology Data Exchange (ETDEWEB)

    Mark W. Paris

    2005-07-01

    Accurate numerical solution of the five-body Schrodinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.

  12. Geometric Monte Carlo and Black Janus Geometries

    CERN Document Server

    Bak, Dongsu; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil

    2016-01-01

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  13. Cuartel San Carlos. Yacimiento veterano

    Directory of Open Access Journals (Sweden)

    Mariana Flores

    2007-01-01

    Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the end of the 18th century (1785-1790), characterized by suffering various adversities during its construction and by withstanding the earthquakes of 1812 and 1900. In 2006, the organization in charge of its custody, the Instituto de Patrimonio Cultural of the Ministerio de Cultura, carried out three stages of archaeological exploration, covering the Traspatio (back courtyard), the Patio Central (central courtyard) and the East and West wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site as a result of that project, called EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its involvement in the events that gave rise to power struggles during the emergence of the Republic, and in the political events of the 20th century. Likewise, a broad sample of archaeological materials was found at the site, documenting a style of everyday military life as well as the internal social dynamics that took place in the San Carlos as a strategic place for the defence of the different regimes the country went through, from the era of Spanish imperialism to the present day.

  14. Evaluation of a special pencil ionization chamber by the Monte Carlo method; Avaliacao de uma camara de ionizacao tipo lapis especial pelo metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P., E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), Uberlandia, MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A special pencil type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The highest influence was obtained for the body of PMMA: 7.0%. (author)

  15. Monte-Carlo Method for Coalbed Methane Resource Assessment in Key Coal Mining Areas of China

    Institute of Scientific and Technical Information of China (English)

    Yang Yongguo; Chen Yuhua; Qin Yong; Cheng Qiuming

    2008-01-01

    The Monte-Carlo method is used for estimating coalbed methane (CBM) resources in key coal mining areas of China. The Monte-Carlo method is shown to be superior to the traditional volumetric method with constant parameters in the calculation of CBM resources. The focus of the article is to introduce the main algorithm and its realization, including the selection of parameters, the determination of their distribution functions, the generation of pseudo-random numbers, and the evaluation of the parameter values corresponding to the pseudo-random numbers. Specialized software based on the Monte-Carlo method is developed using Visual C++ for the assessment of CBM resources. A case study shows that calculation results using the Monte-Carlo method have a smaller error range in comparison with those using the volumetric method.
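
    The workflow described above, assigning a probability distribution to each parameter of the volumetric formula, drawing pseudo-random values and evaluating the formula for every draw, can be sketched in a few lines. The distributions and parameter values below are purely illustrative assumptions, not data from the paper (Python, using NumPy).

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000                 # number of Monte Carlo trials

        # Illustrative parameter distributions (NOT values from the paper):
        area      = rng.triangular(80.0, 100.0, 120.0, n)   # coal-bearing area, km^2
        thickness = rng.normal(6.0, 1.0, n)                 # net coal thickness, m
        density   = rng.normal(1.4, 0.05, n)                # coal density, t/m^3
        gas       = rng.lognormal(np.log(8.0), 0.3, n)      # gas content, m^3/t

        # Volumetric formula: resources = area * thickness * density * gas content
        # (km^2 -> 1e6 m^2; result reported in units of 1e8 m^3 of gas).
        resources = area * 1e6 * thickness * density * gas / 1e8

        print("mean resources:  %.1f x 10^8 m^3" % resources.mean())
        print("P90 / P50 / P10: %.1f / %.1f / %.1f" %
              tuple(np.percentile(resources, [10, 50, 90])))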

  16. Monte Carlo approach to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik

    2009-11-15

    The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)

  17. Luis Carlos López

    Directory of Open Access Journals (Sweden)

    Rafael Maya

    1979-04-01

    Full Text Available Among the poets of the Centenario, Luis Carlos López enjoyed great popularity abroad from the publication of his first book onward. I believe his work drew the attention of philosophers such as Unamuno and, if I am not mistaken, Darío referred to it in laudatory terms. In Colombia it has been praised hyperbolically by some, while others grant it no great merit.

  18. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  19. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    Science.gov (United States)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
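
    The role of the second eigenvalue mentioned above can be made concrete with a small sketch: build the transition matrix of a Metropolis chain on a tiny discrete state space, obtain the dominant eigenvalue (equal to 1) by power iteration, then deflate the known stationary direction and iterate again to get the second eigenvalue, which sets the convergence rate. This is only a toy version of the idea; the thesis' Monte Carlo Power Method targets matrices too large to store, whereas here the matrix is explicit, and the 4-state target and uniform proposal are assumptions (Python, using NumPy).

        import numpy as np

        def metropolis_transition_matrix(pi):
            """Transition matrix of a Metropolis chain on states 0..n-1 with target pi,
            using a uniform proposal over all states."""
            n = len(pi)
            P = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    if i != j:
                        P[i, j] = (1.0 / n) * min(1.0, pi[j] / pi[i])
                P[i, i] = 1.0 - P[i].sum()
            return P

        def power_method(A, n_iter=2000, seed=0):
            """Plain power iteration for the dominant (largest magnitude) eigenvalue of A."""
            v = np.random.default_rng(seed).normal(size=A.shape[1])
            for _ in range(n_iter):
                v = A @ v
                v /= np.linalg.norm(v)
            return v @ (A @ v)

        pi = np.array([0.1, 0.2, 0.3, 0.4])
        P = metropolis_transition_matrix(pi)
        lam1 = power_method(P.T)                 # equals 1 for a stochastic matrix
        # Deflate the known dominant pair (eigenvalue 1, stationary vector pi) and find
        # the second eigenvalue, which controls how fast the chain equilibrates.
        lam2 = power_method(P.T - np.outer(pi, np.ones(len(pi))))
        print(lam1, lam2, np.sort(np.linalg.eigvals(P)))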

  20. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  1. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  2. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...

  3. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
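
    A common practical workaround for the error-estimation problem described above is randomized quasi-Monte Carlo: several independently scrambled low-discrepancy point sets give independent estimates whose spread serves as an error measure. The sketch below compares plain Monte Carlo with scrambled Sobol points on a simple separable integrand; the integrand, the sample sizes and the use of scipy.stats.qmc (SciPy 1.7+) are assumptions for the illustration, and this is not the specific construction advocated by the authors (Python, using NumPy and SciPy).

        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(0)

        def f(x):
            """Separable test integrand on [0, 1]^2 with exact integral equal to 1."""
            return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

        def rmse(estimates, exact=1.0):
            return np.sqrt(np.mean((np.asarray(estimates) - exact) ** 2))

        n = 2 ** 12        # points per estimate (a power of two suits Sobol sequences)
        n_rep = 20         # independent replications used to measure the spread

        plain = [f(rng.random((n, 2))).mean() for _ in range(n_rep)]
        sobol = [f(qmc.Sobol(d=2, scramble=True, seed=s).random(n)).mean()
                 for s in range(n_rep)]

        print("plain MC        rmse: %.2e" % rmse(plain))
        print("scrambled Sobol rmse: %.2e" % rmse(sobol))    # typically much smaller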

  4. Monte Carlo Simulations of the Photospheric Process

    CERN Document Server

    Santana, Rodolfo; Hernandez, Roberto A; Kumar, Pawan

    2015-01-01

    We present a Monte Carlo (MC) code we wrote to simulate the photospheric process and to study the photospheric spectrum above the peak energy. Our simulations were performed with a photon to electron ratio N_gamma/N_e = 10^5, as determined by observations of the GRB prompt emission. We searched an exhaustive parameter space to determine if the photospheric process can match the observed high-energy spectrum of the prompt emission. If we do not consider electron re-heating, we determined that the best conditions to produce the observed high-energy spectrum are low photon temperatures and high optical depths. However, for these simulations, the spectrum peaks at an energy below 300 keV by a factor ~10. For the cases we consider with higher photon temperatures and lower optical depths, we demonstrate that additional energy in the electrons is required to produce a power-law spectrum above the peak-energy. By considering electron re-heating near the photosphere, the spectrum for these simulations h...

  5. Evaluation of Leucaena spp. genotypes in the edaphic and climatic conditions of São Carlos, SP: II. bromatological determinations at the establishment period; Avaliação de genótipos de Leucaena spp. nas condições edafoclimáticas de São Carlos, SP: II. determinações bromatológicas no período de estabelecimento

    Directory of Open Access Journals (Sweden)

    A.C.P. de A. Primavesi

    1994-04-01

    Full Text Available In a trial conducted on a dystrophic Red-Yellow Latosol, in an area of EMBRAPA - CPPSE in São Carlos, located at 22°01'S and 47°53'W, at an altitude of 856 m and with a mean annual rainfall of 1502 mm, the bromatological composition of leaves, stems with a diameter smaller than 6 mm, and pods of leucaena genotypes was determined. The genotypes evaluated were: L.leucocephala cv. Texas 1074 (T1), L.leucocephala 29 A9 (T2), L.leucocephala 11 x L.diversifolia 25 (T3), L.leucocephala 11 x L.diversifolia 26 (T4), L.leucocephala 24-19/2-39 x L.diversifolia 26 (T5) and L.leucocephala cv. Cunningham (control). It was found that: the genotypes evaluated showed no differences in the bromatological determinations carried out on leaves and thin stems; genotype T3 registered the highest crude protein (28.06%) and phosphorus (0.29%) contents, the highest CP/NDF ratio and the lowest NDF content for pods; the genotypes showed the following mean contents, in percentage, for the bromatological composition of leaves, pods and thin stems, respectively: crude protein (18.57; 21.68; 6.41), neutral detergent fiber (29.09; 41.58; 71.01), phosphorus (0.12; 0.22; 0.06), calcium (1.39; 0.36; 0.49), magnesium (0.51; 0.28; 0.24), tannin (1.32; 1.15; 0.28) and in vitro digestibility (58.39; 61.22; 33.61); the protein and phosphorus contents showed the following decreasing order among plant parts: pods > leaves > thin stems; calcium contents: leaves > thin stems > pods; and magnesium: leaves > pods > thin stems.

  6. Minimising biases in full configuration interaction quantum Monte Carlo.

    Science.gov (United States)

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step.

  7. Minimising biases in full configuration interaction quantum Monte Carlo

    Science.gov (United States)

    Vigor, W. A.; Spencer, J. S.; Bearpark, M. J.; Thom, A. J. W.

    2015-03-01

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step.

  8. El lenguaje de Carlos Alonso

    Directory of Open Access Journals (Sweden)

    Bárbara Bustamante

    2005-10-01

    Full Text Available The talent of Carlos Alonso (Argentina, 1929) has succeeded in forging a language with a style of its own. His drawings, paintings, pastels and inks, collages and prints fixed the projection of his subjectivity in the visual field. Both image and word make explicit a critical vision of reality that places the viewer under tension, forcing a reflective stance committed to the message; this is the aspect most highlighted by art historians. However, the present research aims to focus on iconic and plastic aspects of his work.

  9. An introduction to Monte Carlo methods

    NARCIS (Netherlands)

    Walter, J. -C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim

  10. Carlos Restrepo. Un verdadero Maestro

    Directory of Open Access Journals (Sweden)

    Pelayo Correa

    2009-12-01

    Full Text Available Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a placid society that enjoyed the generosity of its surroundings, with no desire to break with the centuries-old traditions of a simple and contented way of life. When children had the desire and the ability to pursue university studies, especially in the area of medicine, the family sent them to cooler climates, which supposedly favored brain function and the accumulation of knowledge. The pioneers of medical education in the Valle del Cauca, largely recruited from national and foreign universities, knew very well that the local environment is no obstacle to a first-class university education. Carlos Restrepo was a prototype of the spirit of change and of the intellectual formation of the new generations. He showed it in many ways, in large part through his cheerful, extroverted, optimistic temperament and his easy, contagious laughter. But this amiable side of his personality did not conceal his formative task; he demanded dedication and hard work from his students, faithfully expressed in memorable caricatures that exaggerated his occasionally explosive temper. The group of pioneers devoted itself with a spirit of total commitment (full time and exclusive dedication) and organized the new Faculty into well-defined and structured departments: Anatomy, Biochemistry, Physiology, Pharmacology, Pathology, Internal Medicine, Surgery, Obstetrics and Gynecology, Psychiatry and Preventive Medicine. The departments integrated their primary functions of teaching, research and service to the community. The center

  11. Lattice gauge theories and Monte Carlo simulations

    CERN Document Server

    Rebbi, Claudio

    1983-01-01

    This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on the lattice gauge theories in general, Monte Carlo techniques and on the results to date. Part two consists of important original papers in this field. These selected reprints involve the following: Lattice Gauge Theories, General Formalism and Expansion Techniques, Monte Carlo Simulations. Phase Structures, Observables in Pure Gauge Theories, Systems with Bosonic Matter Fields, Simulation of Systems with Fermions.

  12. Fast quantum Monte Carlo on a GPU

    CERN Document Server

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent acceleration. Comparing with single core execution, GPU-accelerated code runs over x100 faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler architecture K20 GPU. Kepler-specific optimization is discussed.

  13. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.

  14. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  15. Monte Carlo determination of dosimetry parameters for the model 6711 125I seed

    Institute of Scientific and Technical Information of China (English)

    孙吉宁; 樊铁栓; 杨元第; 温松明; 胡家成; 苏以翔; 杨鸿栋; 温琛琳

    2008-01-01

    Objective To calculate the brachytherapy dosimetry parameters for the model 6711 125I seed. Methods Brachytherapy dosimetry parameters were determined in accordance with the AAPM Task Group No. 43 (TG43-U1) dosimetry protocol. The anisotropy function, radial dose function, dose rate constant and dose distributions in different media were calculated using the EGSnrc code with the updated XCOM cross-section libraries. Results The anisotropy function and the radial dose function are in good agreement with recently reported data. The calculated dose rate constant is 0.951 cGy·h-1·U-1, within 1.45% of the consensus value recommended in the AAPM TG43-U1 report. Conclusions The dose field around the seed source is characterized by a very low dose rate and a very high dose gradient. A small structure is found in the anisotropy function at small polar angles and small distances.

  16. Monte Carlo Simulation of River Meander Modelling

    Science.gov (United States)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with the physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms. The modeling results were then analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The approach couples the quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.

  17. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  18. Monte Carlo simulations for plasma physics

    Energy Technology Data Exchange (ETDEWEB)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2000-07-01

    Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  19. Improved Monte Carlo Renormalization Group Method

    Science.gov (United States)

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  20. "Shaakal" Carlos kaebas arreteerija kohtusse / Margo Pajuste

    Index Scriptorium Estoniae

    Pajuste, Margo

    2006-01-01

    Also published in: Postimees : na russkom jazõke, 3 July, p. 11. The imprisoned notorious terrorist Carlos "the Jackal" has started legal proceedings against his one-time arrester. He accuses the former head of the French intelligence service of kidnapping.

  1. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  2. Smart detectors for Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten

    2008-01-01

    Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...

  3. Monte Carlo methods for particle transport

    CERN Document Server

    Haghighat, Alireza

    2015-01-01

    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  4. Quantum Monte Carlo Calculations of Light Nuclei

    CERN Document Server

    Pieper, Steven C

    2007-01-01

    During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.

  5. Quantum Monte Carlo with reoptimized perturbatively selected configuration-interaction wave functions

    CERN Document Server

    Giner, Emmanuel; Toulouse, Julien

    2016-01-01

    We explore the use in quantum Monte Carlo (QMC) of trial wave functions consisting of a Jastrow factor multiplied by a truncated configuration-interaction (CI) expansion in Slater determinants obtained from a CI perturbatively selected iteratively (CIPSI) calculation. In the CIPSI algorithm, the CI expansion is iteratively enlarged by selecting the best determinants using perturbation theory, which provides an optimal and automatic way of constructing truncated CI expansions approaching the full CI limit. We perform a systematic study of variational Monte Carlo (VMC) and fixed-node diffusion Monte Carlo (DMC) total energies of first-row atoms from B to Ne with different levels of optimization of the parameters (Jastrow parameters, coefficients of the determinants, and orbital parameters) in these trial wave functions. The results show that the reoptimization of the coefficients of the determinants in VMC (together with the Jastrow factor) leads to an important lowering of both VMC and DMC total energies, and ...

  6. Fermion-dimer scattering using an impurity lattice Monte Carlo approach and the adiabatic projection method

    Science.gov (United States)

    Elhatisari, Serdar; Lee, Dean

    2014-12-01

    We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the s -wave, p -wave, and d -wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.

  7. Fermion-Dimer Scattering using Impurity Lattice Monte Carlo and the Adiabatic Projection Method

    CERN Document Server

    Elhatisari, Serdar

    2014-01-01

    We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the $s$-wave, $p$-wave, and $d$-wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.

  8. Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization

    KAUST Repository

    Shao, Jing

    2015-10-27

    Based on Monte Carlo simulation technology, we proposed a hybrid routine which combines the reaction mechanism with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparing with previous experimental and simulation studies, we showed the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinetic quantities, such as instantaneous monomer conversion, molecular weight, and polydispersity, are readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without invoking the "steady-state" hypothesis. We explored the mechanisms behind the variations of polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is versatile for studying polymerization kinetics in batch and semi-batch processes.

  9. Monte Carlo Hamiltonian:Inverse Potential

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; CHENG Xiao-Ni; Helmut KR(O)GER

    2004-01-01

    The Monte Carlo Hamiltonian method developed recently allows one to investigate the ground state and low-lying excited states of a quantum system, using a Monte Carlo (MC) algorithm with importance sampling. However, the conventional MC algorithm has some difficulties when applied to inverse potentials. We propose to use an effective potential and an extrapolation method to solve the problem. We present examples from the hydrogen system.

  10. The Feynman Path Goes Monte Carlo

    OpenAIRE

    Sauer, Tilman

    2001-01-01

    Path integral Monte Carlo (PIMC) simulations have become an important tool for the investigation of the statistical mechanics of quantum systems. I discuss some of the history of applying the Monte Carlo method to non-relativistic quantum systems in path-integral representation. The feasibility of the method was well established in principle by the early eighties; a number of algorithmic improvements have been introduced in the last two decades.

  11. Self-consistent kinetic lattice Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Horsfield, A.; Dunham, S.; Fujitani, Hideaki

    1999-07-01

    The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.
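    For orientation only, the sketch below shows the generic rejection-free (BKL-style) kinetic lattice Monte Carlo loop that such defect-diffusion models build on: a single vacancy hopping on a one-dimensional periodic lattice. The barrier values, temperature and attempt frequency are invented placeholders, not parameters from this work, and no charge states are treated.

```python
# Generic kinetic lattice Monte Carlo sketch (illustrative, not the authors' code):
# a single vacancy hops on a 1-D periodic lattice with site-dependent barriers.
import math
import random

rng = random.Random(42)
L = 50                                                     # lattice size (assumption)
barriers = [0.5 + 0.3 * rng.random() for _ in range(L)]    # eV, made-up values
kT = 0.05                                                  # eV
nu0 = 1e13                                                 # attempt frequency, 1/s (typical)

pos, t = 0, 0.0
for step in range(10_000):
    # rates of the two possible events: hop left or hop right
    rates = [nu0 * math.exp(-barriers[(pos - 1) % L] / kT),
             nu0 * math.exp(-barriers[(pos + 1) % L] / kT)]
    R = sum(rates)
    # choose an event with probability proportional to its rate
    r = rng.random() * R
    pos = (pos - 1) % L if r < rates[0] else (pos + 1) % L
    # advance the clock by an exponentially distributed waiting time
    t += -math.log(1.0 - rng.random()) / R

print(f"final site {pos}, elapsed time {t:.3e} s after 10000 hops")
```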

  12. Monte Carlo Algorithms for Linear Problems

    OpenAIRE

    DIMOV, Ivan

    2000-01-01

    MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics and engineering. It is known that these methods give statistical estimates for the functional of the solution by performing random sampling of a certain chance variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
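    One classical scheme of the kind this record refers to is the von Neumann-Ulam random-walk estimator for a linear system x = Ax + b. The sketch below is a hedged illustration; the 3x3 matrix, the absorption probability and the sample size are arbitrary assumptions chosen so that the Neumann series converges.

```python
# Sketch of the von Neumann-Ulam random-walk (collision) estimator for x = A x + b.
# The small system below is an arbitrary example, not taken from the abstract.
import random

A = [[0.1, 0.2, 0.1],
     [0.2, 0.1, 0.2],
     [0.1, 0.2, 0.1]]          # spectral radius well below 1, so x = sum_k A^k b
b = [1.0, 2.0, 3.0]
n = len(b)
p_stop = 0.3                   # absorption probability at each step

def walk_estimate(i, rng):
    """One random-walk estimate of x_i: accumulate weighted b along the path."""
    total, weight, state = b[i], 1.0, i
    while rng.random() > p_stop:
        nxt = rng.randrange(n)                                   # uniform transition
        weight *= A[state][nxt] / ((1.0 - p_stop) * (1.0 / n))   # unbiasing correction
        state = nxt
        total += weight * b[state]
    return total

rng = random.Random(1)
N = 200_000
est = [sum(walk_estimate(i, rng) for _ in range(N)) / N for i in range(n)]

# deterministic fixed-point iteration for comparison
x = b[:]
for _ in range(100):
    x = [b[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

print("Monte Carlo :", [round(v, 3) for v in est])
print("Fixed point :", [round(v, 3) for v in x])
```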

  13. Carlos II: el centenario olvidado

    Directory of Open Access Journals (Sweden)

    Luis Antonio RIBOT GARCÍA

    2009-12-01

    Full Text Available Parting from an initial reflection on the phenomenon of commemorations, the author ponders the reasons why the third centenary of Charles II's death will not be the subject of any celebration. Whatever evaluations may be made of such events, the truth is that, in this case, a commemoration might have brought the general public closer to one of the least known and most poorly regarded monarchs in the history of Spain. More serious still is the fact that the shadow of ignorance and pejorative judgement also extends over the entirety of his reign. Though scarce, research on this period shows a rather different reality, in which decadence and the loss of international hegemony coexisted with important political initiatives and achievements, both in the monarchy's internal domain and in the international arena.

  14. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    CERN Document Server

    Kleiss, R H

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
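    To make the error contrast concrete, the small sketch below compares plain Monte Carlo with a quasi-Monte Carlo (Halton) point set on a smooth two-dimensional integral whose exact value is known. The integrand and sample sizes are arbitrary illustrative choices; this is not the stochastic estimator proposed in the paper.

```python
# Illustration (not from the paper): plain Monte Carlo versus a Halton point set
# on a smooth 2-D integral with known exact value 4/pi^2.
import math
import random

def halton(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def integrand(x, y):
    return math.sin(math.pi * x) * math.sin(math.pi * y)

exact = 4.0 / math.pi**2
rng = random.Random(0)
for N in (100, 1_000, 10_000):
    mc = sum(integrand(rng.random(), rng.random()) for _ in range(N)) / N
    qmc = sum(integrand(halton(i, 2), halton(i, 3)) for i in range(1, N + 1)) / N
    print(f"N={N:6d}  MC error={abs(mc - exact):.2e}  QMC error={abs(qmc - exact):.2e}")
```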

  15. Determination of effective resonance energy for the 193Ir(n,γ)194Ir reaction by the cadmium ratio method

    Science.gov (United States)

    Budak, Mustafa Guray; Karadag, Mustafa; Yücel, Haluk

    2016-04-01

    In this work, the effective resonance energy (Ēr) value for the 193Ir(n,γ)194Ir reaction was measured using the cadmium ratio method. A dual monitor (197Au-98Mo), which has convenient resonance properties, was employed for characterization of the irradiation sites. Analytical-grade iridium oxide samples, diluted with CaCO3 to lower the neutron self-shielding effect and stacked in small cylindrical Teflon boxes, were irradiated in a thermalized neutron field of an 241Am-Be neutron source, once inside a 1 mm thick cylindrical Cd box and once without it. The activities produced in the samples by the 193Ir(n,γ)194Ir reaction were measured using a p-type HPGe detector γ-ray spectrometer with 44.8% relative efficiency. The correction factors for thermal and epithermal neutron self-shielding (Gth, Gepi), true coincidence summing (Fcoi) and gamma-ray self-absorption (Fs) effects were determined with appropriate approaches and programs. The experimental Ēr value was thus determined to be 2.65 ± 0.61 eV for the 193Ir target nuclide. The recent Q0 and FCd data used in the Ēr determination were taken from the k0-NAA online database. The present experimental Ēr value was compared with values calculated from more recent Q0 and FCd data for 193Ir. Additionally, Ēr values were calculated theoretically from up-to-date resonance data in the ENDF/B-VII library using two different approaches. Since there is no previously measured Ēr value for the 193Ir isotope, the results are compared with the calculated ones given in the literature.

  16. Monte Carlo methods for pricing financial options

    Indian Academy of Sciences (India)

    N Bolia; S Juneja

    2005-04-01

    Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed form solution and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the ‘curse of dimensionality’. However, even Monte-Carlo techniques can be quite slow as the problem-size increases, motivating research in variance reduction techniques to increase the efficiency of the simulations. In this paper, we review some of the popular variance reduction techniques and their application to pricing options. We particularly focus on the recent Monte-Carlo techniques proposed to tackle the difficult problem of pricing American options. These include: regression-based methods, random tree methods and stochastic mesh methods. Further, we show how importance sampling, a popular variance reduction technique, may be combined with these methods to enhance their effectiveness. We also briefly review the evolving options market in India.
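    As a minimal, hedged sketch of the basic machinery reviewed here, the code below prices a European call under Black-Scholes dynamics by Monte Carlo, using antithetic variates as a simple variance reduction. All parameter values are arbitrary assumptions, and the more elaborate regression, random-tree and stochastic-mesh methods for American options are not reproduced.

```python
# Sketch: European call priced by Monte Carlo under geometric Brownian motion,
# with antithetic variates as a simple variance reduction.  Parameters are
# arbitrary illustrative values, not taken from the review.
import math
import random

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0
rng = random.Random(7)
N = 100_000

disc = math.exp(-r * T)
drift = (r - 0.5 * sigma**2) * T
vol = sigma * math.sqrt(T)

payoffs = []
for _ in range(N):
    z = rng.gauss(0.0, 1.0)
    # antithetic pair: use +z and -z, then average the two payoffs
    pair = []
    for s in (+z, -z):
        ST = S0 * math.exp(drift + vol * s)
        pair.append(max(ST - K, 0.0))
    payoffs.append(0.5 * (pair[0] + pair[1]))

mean = disc * sum(payoffs) / N
var = sum((disc * p - mean) ** 2 for p in payoffs) / (N - 1)
print(f"call price ~ {mean:.3f} +/- {math.sqrt(var / N):.3f}")
```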

  17. Monte Carlo Simulation as a Research Management Tool

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, L. J.

    1986-06-01

    Monte Carlo simulation provides a research manager with a performance monitoring tool to supplement the standard schedule- and resource-based tools such as the Program Evaluation and Review Technique (PERT) and Critical Path Method (CPM). The value of the Monte Carlo simulation in a research environment is that it 1) provides a method for ranking competing processes, 2) couples technical improvements to the process economics, and 3) provides a mechanism to determine the value of research dollars. In this paper the Monte Carlo simulation approach is developed and applied to the evaluation of three competing processes for converting lignocellulosic biomass to ethanol. The technique is shown to be useful for ranking the processes and illustrating the importance of the timeframe of the analysis on the decision process. The results show that acid hydrolysis processes have higher potential for near-term application (2-5 years), while the enzymatic hydrolysis approach has an equal chance to be competitive in the long term (beyond 10 years).

  18. A New Approach to Monte Carlo Simulations in Statistical Physics

    Science.gov (United States)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
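    The random walk in energy space described above is the Wang-Landau algorithm of reference [2]; below is a compact, hedged sketch for a small 2D Ising lattice. The lattice size, flatness criterion and stopping rule are common textbook choices rather than values prescribed in this abstract.

```python
# Wang-Landau sketch for a small 2-D Ising model (illustrative parameters).
# The random walk in energy space accumulates ln g(E); a move is accepted with
# probability min(1, g(E_old)/g(E_new)) using the current estimate of g.
import math
import random

L = 4
rng = random.Random(3)
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def energy(s):
    e = 0
    for i in range(L):
        for j in range(L):
            e -= s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
    return e

lng, hist = {}, {}
E = energy(spins)
lnf = 1.0
while lnf > 1e-4:                      # final modification factor (common choice)
    for _ in range(20_000):
        i, j = rng.randrange(L), rng.randrange(L)
        dE = 2 * spins[i][j] * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                                + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        Enew = E + dE
        if rng.random() < math.exp(lng.get(E, 0.0) - lng.get(Enew, 0.0)):
            spins[i][j] *= -1
            E = Enew
        lng[E] = lng.get(E, 0.0) + lnf  # update ln g(E) at the current energy
        hist[E] = hist.get(E, 0) + 1
    # crude flatness check over the energies visited so far
    if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):
        lnf *= 0.5
        hist = {}

# With ln g(E) in hand, any canonical average follows by reweighting.
E0 = min(lng)
for e in sorted(lng):
    print(f"E={e:4d}  ln g (relative) = {lng[e] - lng[E0]:7.2f}")
```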

  19. Monte Carlo simulation of large electron fields

    Science.gov (United States)

    Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  20. kmos: A lattice kinetic Monte Carlo framework

    Science.gov (United States)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.

  1. A generalized hard-sphere model for Monte Carlo simulation

    Science.gov (United States)

    Hassan, H. A.; Hash, David B.

    1993-01-01

    A new molecular model, called the generalized hard-sphere, or GHS model, is introduced. This model contains, as a special case, the variable hard-sphere model of Bird (1981) and is capable of reproducing all of the analytic viscosity coefficients available in the literature that are derived for a variety of interaction potentials incorporating attraction and repulsion. In addition, a new procedure for determining interaction potentials in a gas mixture is outlined. Expressions needed for implementing the new model in the direct simulation Monte Carlo methods are derived. This development makes it possible to employ interaction models that have the same level of complexity as used in Navier-Stokes calculations.

  2. Novel Extrapolation Method in the Monte Carlo Shell Model

    CERN Document Server

    Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.

  3. Optical Monte Carlo modeling of a true portwine stain anatomy

    Science.gov (United States)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  4. Quantifying uncertainties in primordial nucleosynthesis without Monte Carlo simulations

    CERN Document Server

    Fiorentini, G; Sarkar, S; Villante, F L

    1998-01-01

    We present a simple method for determining the (correlated) uncertainties of the light element abundances expected from big bang nucleosynthesis, which avoids the need for lengthy Monte Carlo simulations. Our approach helps to clarify the role of the different nuclear reactions contributing to a particular elemental abundance and makes it easy to implement energy-independent changes in the measured reaction rates. As an application, we demonstrate how this method simplifies the statistical estimation of the nucleon-to-photon ratio through comparison of the standard BBN predictions with the observationally inferred abundances.

  5. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    Science.gov (United States)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
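    For context only, the sketch below shows a bare-bones hopping-transport Monte Carlo of the general kind used in such studies: a single carrier on a one-dimensional chain with Gaussian site-energy disorder and an applied field, moved with Miller-Abrahams-type rates. It is not the coarse-grained lattice model of this work, and every parameter value is an invented placeholder.

```python
# Illustrative sketch (not the authors' model): one charge carrier hopping on a
# 1-D chain with Gaussian site-energy disorder under an applied field, using
# Miller-Abrahams-type nearest-neighbour rates and rejection-free kinetic MC.
import math
import random

rng = random.Random(11)
N_sites = 200
a = 1.0e-9                   # lattice spacing, m (assumption)
sigma = 0.075                # disorder strength, eV (assumption)
kT = 0.025                   # eV
F = 1.0e7                    # field, V/m
nu0 = 1.0e12                 # attempt frequency, 1/s (distance factor folded in)

E_site = [rng.gauss(0.0, sigma) for _ in range(N_sites)]

def rate(i, j, dx):
    # energy change in eV for a unit positive charge; the field tilts the landscape
    dE = (E_site[j % N_sites] - E_site[i % N_sites]) - F * dx
    return nu0 * (math.exp(-dE / kT) if dE > 0 else 1.0)

pos, x, t = 0, 0.0, 0.0
for _ in range(200_000):
    r_right = rate(pos, pos + 1, +a)
    r_left = rate(pos, pos - 1, -a)
    R = r_right + r_left
    if rng.random() * R < r_right:
        pos, x = (pos + 1) % N_sites, x + a
    else:
        pos, x = (pos - 1) % N_sites, x - a
    t += -math.log(1.0 - rng.random()) / R

mobility = (x / t) / F        # drift velocity over field
print(f"drift velocity {x / t:.3e} m/s, mobility ~ {mobility:.3e} m^2/(V s)")
```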

  6. Quantum Monte Carlo with Variable Spins

    CERN Document Server

    Melton, Cody A; Mitas, Lubos

    2016-01-01

    We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn$_2$ molecules, as well as the electron affinities of the 6$p$ row elements in close agreement with experiments.

  7. Quantum speedup of Monte Carlo methods.

    Science.gov (United States)

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  8. Adiabatic optimization versus diffusion Monte Carlo methods

    Science.gov (United States)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
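    For readers unfamiliar with the baseline method analyzed here, the sketch below is a textbook-style diffusion Monte Carlo run for the 1-D harmonic oscillator ground state. It is the plain DMC scheme, not the Substochastic variant introduced in the paper, and the time step, population size and control constant are arbitrary assumptions.

```python
# Basic diffusion Monte Carlo sketch for the 1-D harmonic oscillator
# (hbar = m = omega = 1, exact ground-state energy 0.5).
import math
import random

rng = random.Random(5)
dt = 0.01
target = 500                                  # target walker population
walkers = [rng.uniform(-1.0, 1.0) for _ in range(target)]
E_ref = 0.5 * sum(x * x for x in walkers) / len(walkers)

def V(x):
    return 0.5 * x * x

estimates = []
for step in range(4000):
    new_walkers = []
    for x in walkers:
        x_new = x + math.sqrt(dt) * rng.gauss(0.0, 1.0)          # diffusion step
        w = math.exp(-dt * (0.5 * (V(x) + V(x_new)) - E_ref))    # branching weight
        n_copies = int(w + rng.random())                         # stochastic rounding
        new_walkers.extend([x_new] * n_copies)
    walkers = new_walkers if new_walkers else [rng.uniform(-1, 1)]
    # steer E_ref to keep the population near the target size (population control)
    E_ref += 0.05 * math.log(target / len(walkers))
    if step > 1000:
        estimates.append(E_ref)

print(f"ground-state energy ~ {sum(estimates) / len(estimates):.3f} (exact 0.5)")
```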

  9. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
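    As a small, hedged companion to the Metropolis discussion above, the sketch below applies Metropolis acceptance to the traveling salesman problem mentioned in the record, with a simple geometric cooling schedule added as an assumption of this illustration (random city coordinates, arbitrary temperatures and sweep counts).

```python
# Sketch of the Metropolis method applied to the traveling salesman problem
# (random cities; the cooling schedule and move counts are arbitrary choices).
import math
import random

rng = random.Random(2)
cities = [(rng.random(), rng.random()) for _ in range(30)]

def tour_length(order):
    return sum(math.dist(cities[order[k]], cities[order[(k + 1) % len(order)]])
               for k in range(len(order)))

order = list(range(len(cities)))
length = tour_length(order)
T = 1.0
while T > 1e-3:
    for _ in range(2000):
        i, j = sorted(rng.sample(range(len(cities)), 2))
        trial = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # 2-opt style reversal
        d = tour_length(trial) - length
        # Metropolis rule: always accept improvements, sometimes accept uphill moves
        if d < 0 or rng.random() < math.exp(-d / T):
            order, length = trial, length + d
    T *= 0.9                                                       # slow cooling
print(f"final tour length: {length:.3f}")
```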

  10. CosmoPMC: Cosmology Population Monte Carlo

    CERN Document Server

    Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren

    2011-01-01

    We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.

  11. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  12. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters' or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  13. Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  14. Monte carlo simulations of organic photovoltaics.

    Science.gov (United States)

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.

  15. Monte Carlo dose distributions for radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)

    2001-07-01

    The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetrical input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions calculated by a planning system and by Monte Carlo. Relative shifts have been measured, and Dose-Volume Histograms have been calculated for the target and adjacent organs at risk. (orig.)

  16. The Rational Hybrid Monte Carlo Algorithm

    CERN Document Server

    Clark, M A

    2006-01-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

  17. The Rational Hybrid Monte Carlo algorithm

    Science.gov (United States)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

  18. Monte Carlo Hamiltonian:Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUOXiang-Qian; HelmutKROEGER; 等

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x/2 for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  19. Parallel Markov chain Monte Carlo simulations.

    Science.gov (United States)

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
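    To illustrate the idea of sequential updating only, the toy sketch below sweeps the domains of a decomposed 1-D Ising chain one after another, so that each domain boundary always sees its neighbour's most recent state. It is a serial emulation for intuition, not the authors' parallel (MPI/cluster) implementation or their Markov-chain analysis; the chain length, domain count and temperature are arbitrary assumptions.

```python
# Toy serial sketch of domain-decomposed updating with a sequential domain order
# (illustration of the idea only; the paper's parallel scheme is not reproduced).
import math
import random

rng = random.Random(8)
N, n_domains, beta = 120, 4, 0.4
spins = [rng.choice((-1, 1)) for _ in range(N)]
size = N // n_domains

def metropolis_sweep(lo, hi):
    """One Metropolis sweep restricted to sites lo..hi-1 (neighbours may lie outside)."""
    for k in range(lo, hi):
        dE = 2 * spins[k] * (spins[(k - 1) % N] + spins[(k + 1) % N])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[k] *= -1

for sweep in range(2000):
    # domains are updated one after another, never simultaneously, so each
    # boundary site always uses the latest state of the neighbouring domain
    for d in range(n_domains):
        metropolis_sweep(d * size, (d + 1) * size)

E = -sum(spins[k] * spins[(k + 1) % N] for k in range(N)) / N
print(f"energy per site after sequential domain sweeps: {E:.3f}")
```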

  20. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  1. Acceleration of the Monte Carlo EM Algorithm

    Institute of Scientific and Technical Information of China (English)

    罗季

    2008-01-01

    The EM algorithm is a data augmentation algorithm commonly used in recent years to compute posterior mode estimates, but obtaining a closed-form expression for the integral in its E-step is sometimes difficult or even impossible, which limits the breadth of its application. The Monte Carlo EM algorithm solves this problem well: the integral in the E-step of the EM algorithm is evaluated effectively by Monte Carlo simulation, which greatly broadens its applicability. However, both the EM algorithm and the Monte Carlo EM algorithm converge only linearly, at a rate governed by the reciprocal of the fraction of missing information; when the proportion of missing data is high, convergence is very slow. The Newton-Raphson algorithm, in contrast, converges quadratically in the neighborhood of the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines the Monte Carlo EM algorithm with the Newton-Raphson algorithm, so that the E-step of the EM algorithm is still realized by Monte Carlo simulation while the resulting algorithm is shown to converge quadratically near the posterior mode. It thus retains the advantages of the Monte Carlo EM algorithm while improving its convergence rate. Numerical examples compare the results of the accelerated Monte Carlo EM algorithm with those of the EM algorithm and the Monte Carlo EM algorithm, further demonstrating its good performance.
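    As a minimal, hedged sketch of the Monte Carlo EM idea discussed in this record, the code below estimates the mean of a normal distribution with known variance from right-censored data, replacing the E-step integral by draws from a truncated normal. The data, threshold and sample sizes are invented, and the Newton-Raphson acceleration step proposed in the paper is deliberately omitted.

```python
# Minimal Monte Carlo EM sketch (toy example, not the paper's accelerated method):
# estimate the mean mu of N(mu, 1) when observations above a threshold c are
# right-censored.  The E-step expectation is approximated by Monte Carlo draws
# from the truncated normal; the Newton-Raphson acceleration is omitted.
import random
import statistics

rng = random.Random(0)
true_mu, c, n = 2.0, 2.5, 500
raw = [rng.gauss(true_mu, 1.0) for _ in range(n)]
observed = [x for x in raw if x <= c]
n_censored = len(raw) - len(observed)

def draw_truncated(mu, lower, rng, m=200):
    """Crude rejection sampling from N(mu, 1) restricted to (lower, inf)."""
    out = []
    while len(out) < m:
        x = rng.gauss(mu, 1.0)
        if x > lower:
            out.append(x)
    return out

mu = 0.0                                   # starting value
for it in range(30):
    # E-step (Monte Carlo): impute each censored value by the mean of draws
    imputed_mean = statistics.fmean(draw_truncated(mu, c, rng))
    # M-step: complete-data MLE of mu is just the average of all values
    mu = (sum(observed) + n_censored * imputed_mean) / n
print(f"MCEM estimate of mu: {mu:.3f} (true value {true_mu})")
```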

  2. Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators

    Science.gov (United States)

    Sloboda, Ron S.; Wang, Ruqing

    1998-12-01

    Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.

  3. Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules

    CERN Document Server

    Lester, William A; Reynolds, PJ

    1994-01-01

    This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n

  4. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.

  5. Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Granero Cabanero, D.

    2015-07-01

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, which makes small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, the calculation of shielding barriers, or obtaining dose distributions around applicators. (Author)

  6. Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy

    Directory of Open Access Journals (Sweden)

    Sarah J. Tennant

    2015-12-01

    Full Text Available Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve a probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets.
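    For readers unfamiliar with how such PTA simulations are structured, the sketch below runs a schematic probability-of-target-attainment calculation for an illustrative β-lactam regimen using a one-compartment zero-order-infusion model. Every number in it (PK parameters and their variability, the MIC values, the fT>MIC target, the subject count) is an invented placeholder, not the institutional data or published parameters used in this study.

```python
# Schematic PTA simulation (illustration only: PK parameters, variability, MICs
# and the 50% fT>MIC target are invented placeholders, not the study's values).
import math
import random

rng = random.Random(4)

def ft_above_mic(cl, v, dose_mg, tau_h, t_inf_h, mic, n_doses=6, dt=0.1):
    """Fraction of the final dosing interval with concentration above MIC for a
    one-compartment model with zero-order infusion (free fraction assumed 1)."""
    k = cl / v
    conc, t, above, total = 0.0, 0.0, 0.0, 0.0
    t_end = n_doses * tau_h
    while t < t_end:
        rate_in = dose_mg / t_inf_h if (t % tau_h) < t_inf_h else 0.0
        conc += dt * (rate_in / v - k * conc)          # simple Euler step
        if t >= t_end - tau_h:                          # score the last interval only
            total += dt
            if conc > mic:
                above += dt
        t += dt
    return above / total

def pta(dose_mg, tau_h, t_inf_h, mic, n_subjects=500, target=0.5):
    hits = 0
    for _ in range(n_subjects):
        cl = 10.0 * math.exp(rng.gauss(0.0, 0.3))       # L/h, log-normal (assumed)
        v = 25.0 * math.exp(rng.gauss(0.0, 0.2))        # L, log-normal (assumed)
        if ft_above_mic(cl, v, dose_mg, tau_h, t_inf_h, mic) >= target:
            hits += 1
    return hits / n_subjects

for mic in (1, 4, 16):
    p_short = pta(2000, 8, 0.5, mic)    # 30-minute infusion every 8 h
    p_long = pta(2000, 8, 4.0, mic)     # 4-hour (prolonged) infusion every 8 h
    print(f"MIC {mic:2d} mg/L: PTA intermittent {p_short:.2f}, prolonged {p_long:.2f}")
```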

  7. Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina

    Science.gov (United States)

    Chen, Xiaoyan; Lane, Stephen

    2010-02-01

    We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search, or by direct experimental measurement to test the possibility of detecting ZPP non-invasively in retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in a neural retina layer or if part of light is hitting the vessel, ZPP fluorescence will be 10-200 times higher than background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP. The fluorescence from layers below in this second situation does not contribute to the signal. Therefore, the possibility that a device could potentially be built and detect ZPP fluorescence in retina looks very promising.

  8. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
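
    As a minimal illustration of what a variance reduction technique buys, the sketch below compares plain Monte Carlo with antithetic variates for estimating E[exp(U)], U ~ Uniform(0,1). The example and its parameters are illustrative and are not drawn from the chapter itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Plain Monte Carlo estimate of E[exp(U)], U ~ Uniform(0,1); exact value is e - 1.
u = rng.random(n)
plain = np.exp(u)

# Antithetic variates: pair each U with 1-U and average the two evaluations.
# The negative correlation between exp(U) and exp(1-U) shrinks the variance.
u_half = rng.random(n // 2)
anti = 0.5 * (np.exp(u_half) + np.exp(1.0 - u_half))

print(f"exact      : {np.e - 1:.6f}")
print(f"plain MC   : {plain.mean():.6f}  (estimator variance ~ {plain.var() / n:.2e})")
print(f"antithetic : {anti.mean():.6f}  (estimator variance ~ {anti.var() / (n // 2):.2e})")
```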

  9. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying

  10. A comparison of Monte Carlo generators

    CERN Document Server

    Golan, Tomasz

    2014-01-01

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and $\pi^+$ two-dimensional energy vs cosine distribution.

  11. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  12. Juan Carlos D'Olivo: A portrait

    Science.gov (United States)

    Aguilar-Arévalo, Alexis A.

    2013-06-01

    This report attempts to give a brief biographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.

  13. Monte Carlo Simulation of Counting Experiments.

    Science.gov (United States)

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
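
    A minimal sketch of the construction described above: the counting time is divided into many subintervals with a small per-subinterval count probability, so the simulated counts follow a binomial law that approaches the expected Poisson behavior. The rate, counting time, and number of subintervals are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

rate, T = 5.0, 1.0          # mean count rate (counts/s) and counting time (s)
n_sub = 10_000              # subintervals; p is small, so >1 count per bin is negligible
p = rate * T / n_sub        # probability of registering a count in one subinterval
n_exp = 100_000             # number of simulated counting experiments

# Each experiment: the number of subintervals that registered a count (binomial),
# which approaches a Poisson distribution with mean rate*T as n_sub grows.
counts = rng.binomial(n_sub, p, size=n_exp)

print(f"simulated mean / variance: {counts.mean():.3f} / {counts.var():.3f}")
print(f"Poisson expectation      : {rate * T:.3f} / {rate * T:.3f}")
```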

  14. How is history written? / Carlo Ginzburg ; interviewed by Marek Tamm

    Index Scriptorium Estoniae

    Ginzburg, Carlo

    2007-01-01

    An overview of the works of Carlo Ginzburg, professor of European cultures at Pisa. Previously published as: Signs, traces and evidence: an interview with Carlo Ginzburg // Ginzburg, Carlo. Juust ja vaglad [The Cheese and the Worms]. - Tallinn, 2000. - pp. 262-271

  15. Monte Carlo radiation transport in external beam radiotherapy

    OpenAIRE

    Çeçen, Yiğit

    2013-01-01

    The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiot...

  16. Coherent Scattering Imaging Monte Carlo Simulation

    Science.gov (United States)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to a noise level. The optimal source-to-sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. Further study is needed to examine the effect of breast density and breast thickness

  17. The information-based complexity of approximation problem by adaptive Monte Carlo methods

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative $MW_p^{r,\alpha}(\mathbb{T}^d)$, $1 < p < \infty$, in the norm of $L_q(\mathbb{T}^d)$, $1 < q < \infty$, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.

  18. A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

    Science.gov (United States)

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.

    2010-01-01

    The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
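
    The sketch below illustrates the general design of such a study for a single interval method (a percentile bootstrap, not necessarily one of the eight methods examined): data are generated from a parallel-items population with a known alpha, and the empirical coverage of the nominal 95% interval is recorded. Sample size, item count, and variance components are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def cronbach_alpha(x):
    """Cronbach's alpha for an (n subjects x k items) score matrix."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Parallel-items population: item = true score + noise, so the population
# alpha is k*var_t / (k*var_t + var_e). All parameter values are hypothetical.
n, k = 50, 5
var_t, var_e = 1.0, 1.0
alpha_pop = k * var_t / (k * var_t + var_e)

n_sim, n_boot, cover = 500, 200, 0
for _ in range(n_sim):
    data = rng.normal(0, np.sqrt(var_t), (n, 1)) + rng.normal(0, np.sqrt(var_e), (n, k))
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample subjects with replacement
        boots[b] = cronbach_alpha(data[idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    cover += (lo <= alpha_pop <= hi)

print(f"population alpha = {alpha_pop:.3f}, empirical 95% CI coverage = {cover / n_sim:.3f}")
```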

  19. Computer program uses Monte Carlo techniques for statistical system performance analysis

    Science.gov (United States)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
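
    A minimal sketch of the same idea for a hypothetical system (a resistive voltage divider): each component is drawn from its full tolerance distribution, the system response is computed per draw, and the spread and yield against assumed acceptance limits are read off the simulated sample. Component values, tolerances, and limits are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical system: a voltage divider Vout = Vin * R2 / (R1 + R2),
# with each component sampled from its full tolerance/misalignment distribution.
vin = rng.normal(12.0, 0.05, n)            # supply voltage, Gaussian drift (V)
r1 = rng.uniform(0.99, 1.01, n) * 10e3     # 10 kOhm, +/-1% uniform tolerance
r2 = rng.uniform(0.99, 1.01, n) * 20e3     # 20 kOhm, +/-1% uniform tolerance

vout = vin * r2 / (r1 + r2)

spec_lo, spec_hi = 7.9, 8.1                # hypothetical acceptance limits (V)
yield_frac = np.mean((vout > spec_lo) & (vout < spec_hi))
print(f"Vout mean = {vout.mean():.4f} V, std = {vout.std():.4f} V, yield = {yield_frac:.1%}")
```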

  20. A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.

    Science.gov (United States)

    Newman, Isadore; And Others

    1979-01-01

    A Monte Carlo simulation was employed to determine the accuracy with which the shrinkage in R squared can be estimated by five different shrinkage formulas. The study dealt with the use of shrinkage formulas for various sample sizes, different R squared values, and different degrees of multicollinearity. (Author/JKS)
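
    A small sketch in the same spirit, using one classical shrinkage correction (the familiar adjusted R-squared formula) rather than the five formulas of the study: samples are drawn from a population with known R-squared, and the mean sample and mean corrected values are compared. Sample size, number of predictors, and effect size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical population: 5 standardized predictors, only the first related to y.
n, p = 30, 5
beta = np.array([0.5, 0.0, 0.0, 0.0, 0.0])
sigma2 = 1.0
r2_pop = beta @ beta / (beta @ beta + sigma2)      # population R^2

r2_sample, r2_adjusted = [], []
for _ in range(5000):
    X = rng.standard_normal((n, p))
    y = X @ beta + rng.normal(0.0, np.sqrt(sigma2), n)
    Xd = np.column_stack([np.ones(n), X])           # design matrix with intercept
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) ** 2).sum()
    r2_sample.append(r2)
    # One classical shrinkage correction (the Ezekiel / adjusted R^2 formula).
    r2_adjusted.append(1.0 - (1.0 - r2) * (n - 1) / (n - p - 1))

print(f"population R^2    : {r2_pop:.3f}")
print(f"mean sample R^2   : {np.mean(r2_sample):.3f}")
print(f"mean adjusted R^2 : {np.mean(r2_adjusted):.3f}")
```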

  1. Enhancements for Monte-Carlo Tree Search in Ms Pac-Man

    NARCIS (Netherlands)

    Pepels, Tom; Winands, Mark H M

    2012-01-01

    In this paper enhancements for the Monte-Carlo Tree Search (MCTS) framework are investigated to play Ms Pac-Man. MCTS is used to find an optimal path for an agent at each turn, determining the move to make based on randomised simulations. Ms Pac-Man is a real-time arcade game, in which the protagoni

  2. Monte Carlo study of the isotropic-nematic transition in a fluid of thin hard disk

    NARCIS (Netherlands)

    Frenkel, D.; Eppenga, R.

    1982-01-01

    The first numerical determination of the thermodynamic isotropic-nematic transition in a simple three-dimensional model fluid, viz., a system of infinitely thin hard platelets, is reported. Thermodynamic properties were studied with use of the constant-pressure Monte Carlo method; Widom's particle-i

  3. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    Institute of Scientific and Technical Information of China (English)

    XI Jia-mi; YANG Geng-she

    2008-01-01

    The advantages of an improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability are discussed. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method accounts for the randomness of the related parameters and therefore satisfies the correlations among them. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a sound approach for discriminating and checking surrounding rock stability.

  4. Monte Carlo simulations of the HP model (the "Ising model" of protein folding)

    Science.gov (United States)

    Li, Ying Wai; Wüst, Thomas; Landau, David P.

    2011-09-01

    Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.

  5. Reversible jump Markov chain Monte Carlo for deconvolution.

    Science.gov (United States)

    Kang, Dongwoo; Verotta, Davide

    2007-06-01

    To solve the problem of estimating an unknown input function to a linear time invariant system we propose an adaptive non-parametric method based on reversible jump Markov chain Monte Carlo (RJMCMC). We use piecewise polynomial functions (splines) to represent the input function. The RJMCMC algorithm allows the exploration of a large space of competing models, in our case the collection of splines corresponding to alternative positions of breakpoints, and it is based on the specification of transition probabilities between the models. RJMCMC determines: the number and the position of the breakpoints, and the coefficients determining the shape of the spline, as well as the corresponding posterior distribution of breakpoints, number of breakpoints, coefficients and arbitrary statistics of interest associated with the estimation problem. Simulation studies show that the RJMCMC method can obtain accurate reconstructions of complex input functions, and obtains better results compared with standard non-parametric deconvolution methods. Applications to real data are also reported.

  6. Top Quark Mass Calibration for Monte Carlo Event Generators

    CERN Document Server

    Butenschoen, Mathias; Hoang, Andre H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-01-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, $m_t^{\rm MC}$. Due to hadronization and parton shower dynamics, relating $m_t^{\rm MC}$ to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting $e^+e^-$ 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, $m_t^{\rm MC}$ differs from the pole mass by $900$/$600$ MeV, and agrees with the MSR mass within uncertainties, $m_t^{\rm MC}\simeq m_{t,1\,{\rm GeV}}^{\rm MSR}$.

  7. CORPORATE VALUATION USING TWO-DIMENSIONAL MONTE CARLO SIMULATION

    Directory of Open Access Journals (Sweden)

    Toth Reka

    2010-12-01

    Full Text Available In this paper, we present a corporate valuation model. The model combines several valuation methods in order to obtain more accurate results. To determine the corporate asset value we used a Gordon-like two-stage asset valuation model based on the calculation of the free cash flow to the firm. The free cash flow to the firm was then used to determine the corporate market value, which was calculated with the Black-Scholes option pricing model within a two-dimensional Monte Carlo simulation. The combined model and the use of the two-dimensional simulation provide a better opportunity for corporate value estimation.

  8. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); Mueller, Jonathon W. [United States Air Force, Keesler Air Force Base, Biloxi, Mississippi 39534 (United States); Cody, Dianna D. [University of Texas M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); DeMarco, John J. [Departments of Biomedical Physics and Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States)

    2015-02-15

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  9. Monte Carlo simulation experiments on box-type radon dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Khalid, E-mail: kjamil@comsats.edu.pk; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-11

    Epidemiological studies show that inhalation of radon gas ({sup 222}Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure the {sup 222}Rn concentrations (Bq/m{sup 3}) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the number of radon alphas emitted in the volume of the box type dosimeter resulting in latent track formation on CR-39 is the latent track registration efficiency. Latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box type radon dosimeter as a function of the dosimeter’s dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) Surface ratio (SURA) method and (b) Ray hitting (RAHI) method. Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (η{sub int}) and alpha hit efficiency (η{sub hit}). The η{sub int} depends only upon the dimensions of the dosimeter and η{sub hit} depends upon both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box type dosimeter. This paper

  10. Monte Carlo simulation experiments on box-type radon dosimeter

    Science.gov (United States)

    Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-01

    Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure the 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the number of radon alphas emitted in the volume of the box type dosimeter resulting in latent track formation on CR-39 is the latent track registration efficiency. Latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) Surface ratio (SURA) method and (b) Ray hitting (RAHI) method. Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (ηint) and alpha hit efficiency (ηhit). The ηint depends only upon the dimensions of the dosimeter and ηhit depends upon both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box type dosimeter. This paper explains how the radon concentration from the

  11. A pure-sampling quantum Monte Carlo algorithm.

    Science.gov (United States)

    Ospadov, Egor; Rothstein, Stuart M

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof of principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  12. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  13. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
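
    A compact sketch of the multilevel telescoping estimator for a generic example (an Euler-discretised geometric Brownian motion payoff, standing in for the homogenization functionals of the paper): each level couples a fine and a coarse discretisation driven by the same random increments, and many cheap coarse samples are combined with few expensive fine ones. The model, payoff, and fixed per-level sample counts are illustrative assumptions; a production MLMC code would choose the sample counts adaptively from estimated level variances.

```python
import numpy as np

rng = np.random.default_rng(5)

# Multilevel MC estimate of E[exp(-rT) * max(S_T - K, 0)] under geometric Brownian
# motion, discretised with Euler-Maruyama; level l uses 2**l time steps.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def level_estimator(l, n_samples):
    """Samples of P_l - P_{l-1} (or P_0 at the coarsest level), with the fine
    and coarse paths driven by the same Brownian increments."""
    nf = 2 ** l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), (n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):
        Sf = Sf * (1 + r * hf + sigma * dW[:, i])
    Pf = np.exp(-r * T) * np.maximum(Sf - K, 0.0)
    if l == 0:
        return Pf
    nc, hc = nf // 2, 2 * hf
    dWc = dW[:, 0::2] + dW[:, 1::2]        # coarse increments built from the fine ones
    Sc = np.full(n_samples, S0)
    for i in range(nc):
        Sc = Sc * (1 + r * hc + sigma * dWc[:, i])
    Pc = np.exp(-r * T) * np.maximum(Sc - K, 0.0)
    return Pf - Pc

# Fixed (non-adaptive) allocation: many cheap coarse samples, few fine ones.
samples_per_level = [200_000, 50_000, 12_000, 3_000, 800]
estimate = sum(level_estimator(l, n).mean() for l, n in enumerate(samples_per_level))
print(f"MLMC estimate of the discounted payoff: {estimate:.4f}")
```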

  14. Monte Carlo study of real time dynamics

    CERN Document Server

    Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C

    2016-01-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  15. Hybrid Monte Carlo with Chaotic Mixing

    CERN Document Server

    Kadakia, Nirag

    2016-01-01

    We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the methods on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
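
    For reference, the sketch below is a plain leapfrog HMC sampler with the usual Gaussian momentum on a correlated Gaussian target, i.e. the baseline algorithm that the chaotic-mixing variant modifies; it is not the authors' method. The target covariance, step size, and trajectory length are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Target: zero-mean bivariate normal with correlation 0.9.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov)

def U(q):      return 0.5 * q @ prec @ q        # potential energy = -log target density
def grad_U(q): return prec @ q

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.normal(size=q.shape)                # standard Gaussian momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)          # leapfrog integration
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject on the change in total energy.
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return (q_new, True) if np.log(rng.random()) < -dH else (q, False)

q, chain, accepts = np.zeros(2), [], 0
for _ in range(5000):
    q, acc = hmc_step(q)
    accepts += acc
    chain.append(q.copy())
chain = np.array(chain)
print(f"acceptance rate {accepts / 5000:.2f}, sample covariance:\n{np.cov(chain.T)}")
```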

  16. Composite biasing in Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...

  17. EU Commissioner Carlos Moedas visits SESAME

    CERN Multimedia

    CERN Bulletin

    2015-01-01

    The European Commissioner for research, science and innovation, Carlos Moedas, visited the SESAME laboratory in Jordan on Monday 13 April. When it begins operation in 2016, SESAME, a synchrotron light source, will be the Middle East’s first major international science centre, carrying out experiments ranging from the physical sciences to environmental science and archaeology.   CERN Director-General Rolf Heuer (left) and European Commissioner Carlos Moedas with the model SESAME magnet. © European Union, 2015.   Commissioner Moedas was accompanied by a European Commission delegation led by Robert-Jan Smits, Director-General of DG Research and Innovation, as well as Rolf Heuer, CERN Director-General, Jean-Pierre Koutchouk, coordinator of the CERN-EC Support for SESAME Magnets (CESSAMag) project and Princess Sumaya bint El Hassan of Jordan, a leading advocate of science in the region. They toured the SESAME facility together with SESAME Director, Khaled Tou...

  18. Handbook of Markov chain Monte Carlo

    CERN Document Server

    Brooks, Steve

    2011-01-01

    "Handbook of Markov Chain Monte Carlo" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  19. Accelerated Monte Carlo by Embedded Cluster Dynamics

    Science.gov (United States)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.

  20. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    Science.gov (United States)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

  1. An introduction to Monte Carlo methods

    Science.gov (United States)

    Walter, J.-C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulations. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called Worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
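
    A minimal single-spin-flip Metropolis simulation of the 2D Ising model along the lines described above; the lattice size, temperature, and sweep counts are small illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)

L, T, n_sweeps = 16, 2.5, 1500          # lattice size, temperature (J = kB = 1), MC sweeps
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, beta):
    """One Monte Carlo sweep: L*L attempted single-spin flips with Metropolis acceptance."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn       # energy change of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

mags = []
for s in range(n_sweeps):
    sweep(spins, 1.0 / T)
    if s > n_sweeps // 2:               # discard the first half as thermalization
        mags.append(abs(spins.mean()))
print(f"<|m|> at T={T}: {np.mean(mags):.3f}")
```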

  2. Carlos Castillo-Chavez: a century ahead.

    Science.gov (United States)

    Schatz, James

    2013-01-01

    When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration, my immediate reaction was "por supuesto que sí!" (of course!). Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA) Barbara Deuink to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996.

  3. Carlos Fuentes and his attempts at detective stories

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Galgani

    2009-12-01

    Full Text Available The only work by Carlos Fuentes belonging to the detective story genre is La cabeza de la hidra (1978). This story, which centers on problems relating to oil smuggling into the United States during the 1970s, is set in the context of the Latin American black novel, presenting certain features which bring it close to the neo-detective narrative. With a remarkable hybridization of genres, it presents the narrative parody of a detective who is an inverted, Latin American version of James Bond. The article examines the novel to determine the function that the detective genre performs in it.

  4. Monte Carlo simulation. The water regime in the gas diffusion layer of a PEM fuel cell; Monte-Carlo-Simulation. Wasserhaushalt in der GDL einer PEM-Brennstoffzelle

    Energy Technology Data Exchange (ETDEWEB)

    Seidenberger, Katrin; Wilhelm, Florian; Scholta, Joachim [Zentrum fuer Sonnenenergie- und Wasserstoff-Forschung Baden-Wuerttemberg (ZSW), Ulm (Germany)

    2011-04-15

    The life of a fuel cell is determined by the life of its components. A Monte Carlo model developed by Zentrum fuer Sonnenenergie- und Wasserstoff-Forschung Baden-Wuerttemberg (ZWS) focuses on the gas diffusion layer (GDL). The simulation program assumes a medium-scale water distribution, thus enabling the detection of water accumulation in the GDL. The results can be compared with experimental data, e.g. from synchrotron tomography measurements, and verified.

  5. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of fast critical assembly, core analyses of JMTR, simulation of pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  6. Calibration coefficient of reference brachytherapy ionization chamber using analytical and Monte Carlo methods.

    Science.gov (United States)

    Kumar, Sudhir; Srinivasan, P; Sharma, S D

    2010-06-01

    A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm(3) was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) (192)Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR (192)Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and physics related input data of the HDR (192)Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the value of charge rate in the ionization chamber and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77 % of that obtained using the analytical method. The experimentally determined RAKR of HDR (192)Ir sources, using this reference ionization chamber by applying the analytically estimated N(K), was found to be in agreement with the vendor quoted RAKR within 1.43%.

  7. Monte Carlo studies of model Langmuir monolayers.

    Science.gov (United States)

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard

  8. Quantum Monte Carlo calculations of two neutrons in finite volume

    CERN Document Server

    Klos, P; Tews, I; Gandolfi, S; Gezerlis, A; Hammer, H -W; Hoferichter, M; Schwenk, A

    2016-01-01

    Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.

  9. Monte Carlo study of Dirac semimetals phase diagram

    Science.gov (United States)

    Braguta, V. V.; Katsnelson, M. I.; Kotov, A. Yu.; Nikolaev, A. A.

    2016-11-01

    In this paper the phase diagram of Dirac semimetals is studied within a lattice Monte Carlo simulation. In particular, we concentrate on the dynamical chiral symmetry breaking which results in a semimetal-insulator transition. Using numerical simulation, we determine the values of the critical coupling constant of the semimetal-insulator transition for different values of the anisotropy of the Fermi velocity. This measurement allows us to draw a tentative phase diagram for Dirac semimetals. It turns out that within the Dirac model with Coulomb interaction both Na3Bi and Cd3As2 , known experimentally to be Dirac semimetals, would lie deep in the insulating region of the phase diagram. This result probably shows a decisive role of screening of the interelectron interaction in real materials, similar to the situation in graphene.

  10. Calibration of the Top-Quark Monte-Carlo Mass

    CERN Document Server

    Kieseler, Jan; Moch, Sven-Olaf

    2015-01-01

    We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.

  11. Criticality accident detector coverage analysis using the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Zino, J.F.; Okafor, K.C.

    1993-12-31

    As a result of the need for a more accurate computational methodology, the Los Alamos-developed Monte Carlo code MCNP is used to show the implementation of a more advanced and accurate methodology in criticality accident detector analysis. This paper will detail the application of MCNP for the analysis of the areas of coverage of a criticality accident alarm detector located inside a concrete storage vault at the Savannah River Site. The paper will discuss: (1) the generation of fixed-source representations of various criticality fission sources (for spherical geometries); (2) the normalization of these sources to the "minimum criticality of concern" as defined by ANS 8.3; (3) the optimization process used to determine which source produces the lowest total detector response for a given set of conditions; and (4) the use of this minimum source for the analysis of the areas of coverage of the criticality accident alarm detector.

  12. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating if they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of continuous potential taking into account thermal motion of the lattice atoms and using Moliere screening function. The energy of the particle transverse motion determines whether or n...

  13. Residual entropy of ice III from Monte Carlo simulation.

    Science.gov (United States)

    Kolafa, Jiří

    2016-03-28

    We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β assuming equal energies of all configurations. To do this, a discrete ice model with Bjerrum defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes followed by thermodynamic integration and extrapolation to N = ∞. Similarly as for other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation. However, the corrections caused by fluctuation of energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.

  14. Monte-Carlo study of Dirac semimetals phase diagram

    CERN Document Server

    Braguta, V V; Kotov, A Yu; Nikolaev, A A

    2016-01-01

    In this paper the phase diagram of Dirac semimetals is studied within lattice Monte-Carlo simulation. In particular, we concentrate on the dynamical chiral symmetry breaking which results in semimetal/insulator transition. Using numerical simulation we determined the values of the critical coupling constant of the semimetal/insulator transition for different values of the anisotropy of the Fermi velocity. This measurement allowed us to draw tentative phase diagram for Dirac semimetals. It turns out that within the Dirac model with Coulomb interaction both Na$_3$Bi and Cd$_3$As$_2$ known experimentally to be Dirac semimetals would lie deeply in the insulating region of the phase diagram. It probably shows a decisive role of screening of the interelectron interaction in real materials, similar to the situation in graphene.

  15. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  16. Virtual detector characterisation with Monte-Carlo simulations

    Science.gov (United States)

    Sukowski, F.; Yaneu Yaneu, J. F.; Salamon, M.; Ebert, S.; Uhlmann, N.

    2009-08-01

    In the field of X-ray imaging flat-panel detectors which convert X-rays into electrical signals, are widely used. For different applications, detectors differ in several specific parameters that can be used for characterizing the detector. At the Development Center X-ray Technology EZRT we studied the question how well these characteristics can be determined by only knowing the layer composition of a detector. In order to determine the required parameters, the Monte-Carlo (MC) simulation program ROSI [J. Giersch et al., Nucl. Instr. and Meth. A 509 (2003) 151] was used while taking into account all primary and secondary particle interactions as well as the focal spot size of the X-ray tube. For the study, the Hamamatsu C9311DK [Technical Datasheet Hamamatsu C9311DK flat panel sensor, Hamamatsu Photonics, ( www.hamamatsu.com)], a scintillator-based detector, and the Ajat DIC 100TL [Technical description of Ajat DIC 100TL, Ajat Oy Ltd., ( www.ajat.fi)], a direct converting semiconductor detector, were used. The layer compositions of the two detectors were implemented into the MC simulation program. The following characteristics were measured [N. Uhlmann et al., Nucl. Instr. and Meth. A 591 (2008) 46] and compared to simulation results: The basic spatial resolution (BSR), the modulation transfer function (MTF), the contrast sensitivity (CS) and the specific material thickness range (SMTR). To take scattering of optical photons into account DETECT2000 [C. Moisan et al., DETECT2000—A Program for Modeling Optical Properties of Scintillators, Department of Electrical and Computer Engineering, Laval University, Quebec City, 2000], another Monte-Carlo simulation was used.

  17. Status of Monte-Carlo Event Generators

    Energy Technology Data Exchange (ETDEWEB)

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  18. COMPARATIVE ANALYSIS OF CALL OPTION PRICE DETERMINATION USING THE BLACK-SCHOLES METHOD AND THE MONTE CARLO SIMULATION METHOD

    Directory of Open Access Journals (Sweden)

    Krishna Kusumahadi

    2016-03-01

    Full Text Available Abstract - This study was conducted to determine the accuracy of the Black-Scholes method compared with the Monte Carlo simulation method in predicting the price of a call option on the KOMPAS 100 Index at maturities of 1 month, 2 months, and 3 months. The method used in this research is descriptive analysis using historical data, comparing the absolute error of the predicted prices to determine whether the Black-Scholes method is more accurate than the Monte Carlo simulation method at each maturity. The results show that for the 1-month maturity the price absolute error is 3.76 for the Black-Scholes method and 0.03 for the Monte Carlo simulation method. For the 2-month maturity the price absolute error is 3.76 for the Black-Scholes method and 0.03 for the Monte Carlo simulation method. For the 3-month maturity the price absolute error is 3.48 for the Black-Scholes method and 2.99 for the Monte Carlo method. Judging from the data obtained, the Monte Carlo method is more accurate than the Black-Scholes method in predicting the price of the KOMPAS 100 Index call option over the periods of 1 month, 2 months, and 3 months. The implication for investors and capital market participants is that, when investing in stocks included in the KOMPAS 100 Index, the Monte Carlo simulation method can be used to predict the price of the call option. It is also advisable to compare with other methods such as GARCH and neural networks. Keywords: Black-Scholes, Monte Carlo, GARCH, and Artificial Neural Networks.
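
    The comparison in the abstract can be sketched generically as follows: a closed-form Black-Scholes call price next to a plain risk-neutral Monte Carlo estimate for the same contract. The spot, strike, rate, and volatility below are invented placeholders, not KOMPAS 100 data, so the sketch only illustrates the mechanics, not the study's accuracy conclusion.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

rng = np.random.default_rng(8)

# Hypothetical contract parameters (1-month maturity).
S0, K, r, sigma, T = 500.0, 520.0, 0.06, 0.25, 1.0 / 12.0
n_paths = 1_000_000

# Closed-form Black-Scholes call price.
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
N = NormalDist().cdf
bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)

# Monte Carlo: simulate terminal prices under risk-neutral GBM, discount the payoff.
z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
mc_price, mc_err = payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

print(f"Black-Scholes: {bs_price:.4f}")
print(f"Monte Carlo  : {mc_price:.4f} +/- {mc_err:.4f}")
```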

  19. Millares Carlo in exile

    OpenAIRE

    Blasco Gil, Yolanda

    2010-01-01

    The article describes the career of the well-known historian and palaeographer Agustín Millares Carlo, exiled after the Civil War: his career in Spain, his posts and scientific contributions in Mexico and, in particular, his links with the Universidad Nacional Autónoma de México, as traced through his academic file. It also considers the consequences that the exile of professors had for Spanish universities. The fraud committed by the Francoist state during the post-war period in publishing works of exi...

  20. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  1. Monte Carlo simulation for the transport beamline

    Energy Technology Data Exchange (ETDEWEB)

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.

  2. A note on simultaneous Monte Carlo tests

    DEFF Research Database (Denmark)

    Hahn, Ute

    In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and lower curve for some t in I. A construction of the acceptance region is proposed that complies to a given target level of rejection, and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis.
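
    A toy sketch of the construction described in the note, under an assumed (hypothetical) null model: simulate many realizations of the curve X(t) under the null, form an acceptance band from pointwise quantiles, and reject if the observed curve leaves the band for some t. The adjustment that makes the band comply exactly with a global target level is omitted; this shows only the basic mechanics.

      import random

      def simulate_curve(rng, n_t=50):
          # Hypothetical null model: cumulative sum of unit-variance Gaussian noise.
          x, s = [], 0.0
          for _ in range(n_t):
              s += rng.gauss(0.0, 1.0)
              x.append(s)
          return x

      def acceptance_band(curves, alpha=0.05):
          # Pointwise lower and upper quantiles over the simulated null curves.
          m = len(curves)
          k_lo = int((alpha / 2.0) * m)
          k_hi = int((1.0 - alpha / 2.0) * m) - 1
          lo, hi = [], []
          for t in range(len(curves[0])):
              vals = sorted(c[t] for c in curves)
              lo.append(vals[k_lo])
              hi.append(vals[k_hi])
          return lo, hi

      def mc_test(observed, n_sim=999, alpha=0.05, seed=0):
          rng = random.Random(seed)
          curves = [simulate_curve(rng, len(observed)) for _ in range(n_sim)]
          lo, hi = acceptance_band(curves, alpha)
          # Reject if the observed curve leaves the band for some t.
          return any(x < l or x > h for x, l, h in zip(observed, lo, hi))

      if __name__ == "__main__":
          obs = simulate_curve(random.Random(42))   # "observed" data drawn from the null here
          print("reject H0:", mc_test(obs))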

  3. IN MEMORIAM CARLOS RESTREPO. UN VERDADERO MAESTRO

    OpenAIRE

    Pelayo Correa

    2009-01-01

    Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions ...

  4. Jose carlos mariategui: sus articulos sobre arte

    OpenAIRE

    Henríquez, Cecilia

    2014-01-01

    In the prolific body of work that fills the short life of this Latin American thinker, there are profound reflections on art that are worth recalling in the centenary year of his birth. The judgements he offers on art are mediated by his Marxist thought. Although a great connoisseur of the art of his contemporaries, his ever-growing socialist commitment reduces the space given to stylistic analysis. A starting point for observing the aesthetic thought of José Carlos Mariátegui is s...

  5. A Monte Carlo algorithm for degenerate plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
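
    A schematic sketch (not the authors' algorithm) of the two ingredients named in the abstract, in arbitrary dimensionless units: initialising particle energies by rejection sampling from a Fermi-Dirac distribution, and Pauli-blocking a proposed collision so that it is accepted only in proportion to the unoccupied fraction of the final state.

      import math, random

      def fermi_dirac(E, mu, T):
          # Occupation f(E) = 1 / (exp((E - mu)/T) + 1), with k_B = 1 (dimensionless units).
          return 1.0 / (math.exp((E - mu) / T) + 1.0)

      def sample_energy(rng, mu, T, E_max):
          # Rejection-sample an energy from g(E) proportional to sqrt(E) * f(E) on [0, E_max].
          envelope = math.sqrt(E_max)                # sqrt(E) * f(E) <= sqrt(E_max)
          while True:
              E = rng.uniform(0.0, E_max)
              if rng.random() * envelope < math.sqrt(E) * fermi_dirac(E, mu, T):
                  return E

      def pauli_blocked(rng, E_final, mu, T):
          # The proposed collision is blocked with probability f(E_final),
          # i.e. accepted only with probability 1 - f(E_final).
          return rng.random() < fermi_dirac(E_final, mu, T)

      if __name__ == "__main__":
          rng = random.Random(3)
          mu, T = 1.0, 0.1                           # hypothetical chemical potential and temperature
          energies = [sample_energy(rng, mu, T, E_max=3.0) for _ in range(10_000)]
          print("mean initial energy:", sum(energies) / len(energies))
          print("example collision to E = 0.5 blocked:", pauli_blocked(rng, 0.5, mu, T))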

  6. Archimedes, the Free Monte Carlo simulator

    CERN Document Server

    Sellier, Jean Michel D

    2012-01-01

    Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features like quantum corrections, magnetic fields, new materials, GUI, etc. This document represents the first attempt to have a complete manual. Many of the Physics models implemented are described and a detailed description is presented to make the user able to write his/her own input deck. Please, feel free to contact the author if you want to contribute to the project.

  7. Cluster hybrid Monte Carlo simulation algorithms

    Science.gov (United States)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
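
    A compact sketch of the hybrid idea for the spin-1/2 Ising model (illustrative, not the authors' code): each sweep mixes Metropolis single-spin flips with one Wolff cluster flip. Lattice size, temperature and the mixing ratio are arbitrary choices.

      import math, random

      L = 16                     # lattice side (hypothetical size)
      T = 2.4                    # temperature in units of J/k_B
      rng = random.Random(0)
      spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

      def neighbors(i, j):
          return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

      def metropolis_sweep():
          for _ in range(L * L):
              i, j = rng.randrange(L), rng.randrange(L)
              dE = 2 * spin[i][j] * sum(spin[a][b] for a, b in neighbors(i, j))
              if dE <= 0 or rng.random() < math.exp(-dE / T):
                  spin[i][j] *= -1

      def wolff_flip():
          p_add = 1.0 - math.exp(-2.0 / T)           # bond-activation probability, J = 1
          i, j = rng.randrange(L), rng.randrange(L)
          s0 = spin[i][j]
          cluster, stack = {(i, j)}, [(i, j)]
          while stack:
              a, b = stack.pop()
              for c, d in neighbors(a, b):
                  if (c, d) not in cluster and spin[c][d] == s0 and rng.random() < p_add:
                      cluster.add((c, d))
                      stack.append((c, d))
          for a, b in cluster:
              spin[a][b] = -s0                       # flip the whole cluster

      if __name__ == "__main__":
          for _ in range(200):
              metropolis_sweep()     # local updates
              wolff_flip()           # one cluster update per sweep (arbitrary mix)
          m = abs(sum(sum(row) for row in spin)) / (L * L)
          print("|magnetization| per spin:", m)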

  8. Introduction to Cluster Monte Carlo Algorithms

    Science.gov (United States)

    Luijten, E.

    This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.

  9. Exascale Monte Carlo R&D

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-24

    This presentation gives an overview of (1) exascale computing - the different technologies and how to get there; (2) the high-performance proof-of-concept MCMini - its features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - its purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  10. State-of-the-art Monte Carlo 1988

    Energy Technology Data Exchange (ETDEWEB)

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  11. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffrey D [Los Alamos National Laboratory; Kelly, Thompson G [Los Alamos National Laboratory; Urbatish, Todd J [Los Alamos National Laboratory

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  12. Alternative Monte Carlo Approach for General Global Illumination

    Institute of Scientific and Technical Information of China (English)

    徐庆; 李朋; 徐源; 孙济洲

    2004-01-01

    An alternative Monte Carlo strategy for the computation of the global illumination problem was presented. The proposed approach provided a new and optimal way of solving Monte Carlo global illumination based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm was developed and implemented in the framework of the new computing scheme. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.

  13. Discrete range clustering using Monte Carlo methods

    Science.gov (United States)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
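
    An illustrative sketch, not the paper's algorithm: assign sparse range values to a fixed number of groups and use Metropolis-style random reassignments with a decreasing temperature (simulated annealing) to minimise the within-cluster spread. The data, the cluster count and the cooling schedule are all hypothetical.

      import math, random

      def cost(points, labels, k):
          # Sum of squared distances of each point to its cluster mean.
          total = 0.0
          for c in range(k):
              members = [p for p, l in zip(points, labels) if l == c]
              if members:
                  mean = sum(members) / len(members)
                  total += sum((p - mean) ** 2 for p in members)
          return total

      def anneal_cluster(points, k=2, steps=5000, t0=5.0, cooling=0.999, seed=0):
          rng = random.Random(seed)
          labels = [rng.randrange(k) for _ in points]
          current = cost(points, labels, k)
          temp = t0
          for _ in range(steps):
              i = rng.randrange(len(points))
              old = labels[i]
              labels[i] = rng.randrange(k)           # propose a random reassignment
              new = cost(points, labels, k)
              if new <= current or rng.random() < math.exp(-(new - current) / temp):
                  current = new                      # accept (always downhill, sometimes uphill)
              else:
                  labels[i] = old                    # reject and restore
              temp *= cooling                        # cool the temperature
          return labels

      if __name__ == "__main__":
          # Hypothetical sparse 1-D ranges: two obstacles near 10 m and 50 m.
          ranges = [9.8, 10.1, 10.4, 9.9, 49.7, 50.2, 50.5, 49.9]
          print(anneal_cluster(ranges, k=2))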

  14. Information Geometry and Sequential Monte Carlo

    CERN Document Server

    Sim, Aaron; Stumpf, Michael P H

    2012-01-01

    This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...

  15. Quantum Monte Carlo Calculations of Neutron Matter

    CERN Document Server

    Carlson, J; Ravenhall, D G

    2003-01-01

    Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $\\vep $ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of the low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is ~ half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...

  16. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    Energy Technology Data Exchange (ETDEWEB)

    WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  17. Chemical application of diffusion quantum Monte Carlo

    Science.gov (United States)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  18. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  19. The macro response Monte Carlo method for electron transport

    Energy Technology Data Exchange (ETDEWEB)

    Svatos, M M

    1998-09-01

    The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could

  20. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function...

  1. Quantum Monte Carlo Endstation for Petascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and for use by the computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration, and it has resulted in 13

  2. On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses.

    Science.gov (United States)

    Koehler, Elizabeth; Brown, Elizabeth; Haneuse, Sebastien J-P A

    2009-05-01

    Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
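
    A small sketch of the general idea (not the authors' examples): estimate the Monte Carlo error of a simulation-based quantity as the between-replication standard deviation divided by the square root of the number of replications, and invert that relation to choose the number of replications needed for a target uncertainty. The simulated estimator is a placeholder.

      import math, random

      def one_replication(rng, n=50):
          # Placeholder "statistical method": sample mean of n standard normals.
          return sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n

      def mc_error(values):
          r = len(values)
          mean = sum(values) / r
          var = sum((v - mean) ** 2 for v in values) / (r - 1)
          return mean, math.sqrt(var / r)            # Monte Carlo standard error

      def replications_needed(sd_hat, target_se):
          # Solve sd / sqrt(R) <= target_se for R.
          return math.ceil((sd_hat / target_se) ** 2)

      if __name__ == "__main__":
          rng = random.Random(7)
          pilot = [one_replication(rng) for _ in range(200)]
          mean, se = mc_error(pilot)
          print("estimate %.4f, Monte Carlo SE %.4f" % (mean, se))
          sd_hat = se * math.sqrt(len(pilot))
          print("replications for SE <= 0.005:", replications_needed(sd_hat, 0.005))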

  3. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    Science.gov (United States)

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural measure would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps to have any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the very branched subject of Monte Carlo simulations vis-à-vis nuclear reactors.
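
    A heavily simplified sketch of the multi-pass idea described above, with invented nuclides, cross-sections and numbers: rather than updating the interaction probabilities after every history, a pass is run with frozen probabilities, the composition is updated from the tallied transmutations, the probabilities are recomputed, and the cycle repeats.

      import random

      def run_pass(rng, composition, sigma, n_histories):
          # One Monte Carlo pass with reaction probabilities frozen at the current composition.
          total = sum(composition[i] * sigma[i] for i in composition)
          counts = {i: 0 for i in composition}
          for _ in range(n_histories):
              u = rng.random() * total
              acc = 0.0
              for i in composition:                  # sample which nuclide reacts
                  acc += composition[i] * sigma[i]
                  if u <= acc:
                      counts[i] += 1
                      break
          return counts

      def multi_pass(composition, sigma, chain, n_passes=5, n_histories=100_000,
                     atoms_per_history=1e18, seed=0):
          rng = random.Random(seed)
          for _ in range(n_passes):
              counts = run_pass(rng, composition, sigma, n_histories)
              for parent, n in counts.items():       # update the composition between passes
                  moved = n * atoms_per_history
                  composition[parent] -= moved
                  composition[chain[parent]] = composition.get(chain[parent], 0.0) + moved
          return composition

      if __name__ == "__main__":
          # Hypothetical two-step chain: A transmutes to B, B to C (C is inert).
          composition = {"A": 1e25, "B": 0.0, "C": 0.0}
          sigma = {"A": 1.0, "B": 0.5, "C": 0.0}     # invented relative cross-sections
          chain = {"A": "B", "B": "C", "C": "C"}
          print(multi_pass(composition, sigma, chain))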

  4. Monte Carlo optimization for site selection of new chemical plants.

    Science.gov (United States)

    Cai, Tianxing; Wang, Sujing; Xu, Qiang

    2015-11-01

    The geographic distribution of chemical manufacturing sites has a significant impact on the business sustainability of industrial development as well as on regional environmental sustainability. Common site selection rules include evaluating the air-quality impact of a newly constructed chemical manufacturing site on surrounding communities. To achieve this, the regional background air-quality information, the emissions of the new manufacturing site, and the statistical pattern of local meteorological conditions must be considered simultaneously. Based on this information, a risk assessment can be conducted for the potential air-quality impacts of candidate locations of a new chemical manufacturing site, and the optimization of the final site selection can then be achieved by minimizing its air-quality impacts. This paper provides a systematic methodology for this purpose. There are two stages of modeling and optimization work: i) Monte Carlo simulation to identify the background pollutant concentration based on currently existing emission sources and regional statistical meteorological conditions; and ii) multi-objective Monte Carlo optimization (simultaneous minimization of both the peak pollutant concentration and the standard deviation of the pollutant concentration spatial distribution at air-quality concern regions) for optimal location selection of new chemical manufacturing sites according to their design data on potential emissions. This study can be helpful both for determining the potential air-quality impact of the geographic distribution of multiple chemical plants with respect to regional statistical meteorological conditions, and for identifying an optimal site for each new chemical manufacturing site with minimal environmental impact on surrounding communities. The efficacy of the developed methodology is demonstrated through case studies.
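
    A toy sketch of the two-stage scheme, with fabricated emission, background and weather inputs: sample meteorological states to build the concentration distribution at a receptor for each candidate site, score each site by the peak and the standard deviation of the resulting concentrations, and select the site with the lowest weighted score. The dilution function is a crude stand-in, not a validated dispersion model.

      import math, random

      def dilution(distance_km, wind_toward_receptor):
          # Crude stand-in for a dispersion model: stronger dilution with distance,
          # and a contribution only when the wind blows from the source to the receptor.
          if not wind_toward_receptor:
              return 0.0
          return math.exp(-distance_km) / max(distance_km, 0.1)

      def simulate_concentrations(source_rate, distance_km, n=10_000, p_toward=0.3, seed=0):
          rng = random.Random(seed)
          return [source_rate * dilution(distance_km, rng.random() < p_toward)
                  for _ in range(n)]

      def score_site(samples, background=20.0, w_peak=1.0, w_std=1.0):
          conc = [background + s for s in samples]
          mean = sum(conc) / len(conc)
          std = math.sqrt(sum((c - mean) ** 2 for c in conc) / len(conc))
          return w_peak * max(conc) + w_std * std    # multi-objective score (weighted sum)

      if __name__ == "__main__":
          # Candidate sites: (name, distance to the air-quality concern region in km).
          candidates = [("site_A", 2.0), ("site_B", 5.0), ("site_C", 8.0)]
          emission_rate = 100.0                      # hypothetical units
          scores = {name: score_site(simulate_concentrations(emission_rate, d))
                    for name, d in candidates}
          print(min(scores, key=scores.get), scores)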

  5. Learning About Ares I from Monte Carlo Simulation

    Science.gov (United States)

    Hanson, John M.; Hall, Charlie E.

    2008-01-01

    This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.

  6. Phase equilibria of molecular fluids via hybrid Monte Carlo Wang-Landau simulations: applications to benzene and n-alkanes.

    Science.gov (United States)

    Desgranges, Caroline; Delhommelle, Jerome

    2009-06-28

    In recent years, powerful and accurate methods, based on a Wang-Landau sampling, have been developed to determine phase equilibria. However, while these methods have been extensively applied to study the phase behavior of model fluids, they have yet to be applied to molecular systems. In this work, we show how, by combining hybrid Monte Carlo simulations in the isothermal-isobaric ensemble with the Wang-Landau sampling method, we determine the vapor-liquid equilibria of various molecular fluids. More specifically, we present results obtained on rigid molecules, such as benzene, as well as on flexible chains of n-alkanes. The reliability of the method introduced in this work is assessed by demonstrating that our results are in excellent agreement with the results obtained in previous work on simple fluids, using either transition matrix or conventional Monte Carlo simulations with a Wang-Landau sampling, and on molecular fluids, using histogram reweighting or Gibbs ensemble Monte Carlo simulations.
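
    Reproducing the hybrid isothermal-isobaric Wang-Landau scheme for molecular fluids is beyond a short snippet, so the sketch below illustrates only the Wang-Landau ingredient, on a small 2D Ising lattice: the running density-of-states estimate biases the acceptance, and the modification factor is halved whenever the energy histogram is sufficiently flat. Lattice size, flatness criterion and stopping tolerance are arbitrary.

      import math, random

      L = 4                                      # small lattice so the demo runs quickly
      rng = random.Random(1)

      def local_field(spin, i, j):
          return (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])

      def total_energy(spin):
          e = 0
          for i in range(L):
              for j in range(L):
                  e -= spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L])
          return e

      def wang_landau(ln_f_final=0.01, flatness=0.8):
          spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
          E = total_energy(spin)
          log_g, hist, ln_f = {}, {}, 1.0
          while ln_f > ln_f_final:
              for _ in range(2000 * L * L):
                  i, j = rng.randrange(L), rng.randrange(L)
                  E_new = E + 2 * spin[i][j] * local_field(spin, i, j)
                  # Accept with probability min(1, g(E)/g(E_new)), using log values.
                  delta = log_g.get(E, 0.0) - log_g.get(E_new, 0.0)
                  if delta >= 0 or rng.random() < math.exp(delta):
                      spin[i][j] *= -1
                      E = E_new
                  log_g[E] = log_g.get(E, 0.0) + ln_f
                  hist[E] = hist.get(E, 0) + 1
              if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
                  ln_f *= 0.5                    # flat histogram: refine the modification factor
                  hist = {}
          return log_g

      if __name__ == "__main__":
          log_g = wang_landau()
          g0 = min(log_g.values())
          for E in sorted(log_g):
              print("E = %4d   ln g(E) = %7.2f" % (E, log_g[E] - g0))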

  7. Experimental and Monte Carlo evaluation of an ionization chamber in a {sup 60}Co beam

    Energy Technology Data Exchange (ETDEWEB)

    Perini, Ana P.; Neves, Lucio Pereira, E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to {sup 60}Co dosimetry at calibration laboratories. (author)

  8. Experimental and Monte Carlo evaluation of an ionization chamber in a 60Co beam

    Science.gov (United States)

    Perini, A. P.; Neves, L. P.; Santos, W. S.; Caldas, L. V. E.

    2016-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to 60Co dosimetry at calibration laboratories.

  9. Monte Carlo Simulation of the Coaxial Electrons Backscattering from Thin Films

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By using the Monte Carlo method, we simulated the trajectories of coaxial backscattering electrons corresponding to a new type of scanning electron microscope. From the calculated results, we obtain a universal expression, which describes with good accuracy the backscattering coefficient versus film thickness under all conditions used. By measuring the coaxial backscattering coefficient and using this universal formula, the thickness of thin films can be determined if the composition is known.

  10. Kinetic Monte Carlo simulation of physical vapor deposition of thin Cu film

    Institute of Scientific and Technical Information of China (English)

    WANG Jun; CHEN Chang-qi; ZHU Wu

    2004-01-01

    A two-dimensional Kinetic Monte Carlo method has been developed for simulating the physical vapor deposition of thin Cu films on a Cu substrate. An improved embedded atom method was used to calculate the interatomic potential and determine the diffusion barrier energy and residence time. Parameters, including incident angle, deposition rate and substrate temperature, were investigated and discussed in order to find their influences on the thin film morphology.
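
    A schematic one-dimensional sketch of the kinetic Monte Carlo loop implied above (deposition events plus thermally activated hops), with invented barrier energies, rates and temperatures rather than the embedded-atom values used in the paper: an event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed residence time.

      import math, random

      KB = 8.617e-5                              # Boltzmann constant in eV/K

      def hop_rate(barrier_eV, temperature_K, attempt_freq=1e13):
          # Arrhenius rate for a thermally activated hop (invented parameters).
          return attempt_freq * math.exp(-barrier_eV / (KB * temperature_K))

      def kmc_deposition(n_sites=50, deposition_rate=100.0, barrier_eV=0.7,
                         temperature_K=300.0, n_events=2000, seed=0):
          rng = random.Random(seed)
          height = [0] * n_sites                 # film thickness at each lattice site
          t = 0.0
          nu = hop_rate(barrier_eV, temperature_K)
          for _ in range(n_events):
              total = deposition_rate + nu * n_sites       # one hop channel per site (schematic)
              t += -math.log(1.0 - rng.random()) / total   # exponential residence time
              if rng.random() * total < deposition_rate:
                  height[rng.randrange(n_sites)] += 1      # deposit an adatom on a random site
              else:
                  i = rng.randrange(n_sites)               # attempt a hop from a random site
                  j = (i + rng.choice((-1, 1))) % n_sites
                  if height[i] > 0 and height[j] < height[i]:
                      height[i] -= 1                       # downhill hop smooths the film;
                      height[j] += 1                       # otherwise this is a null event
          return t, height

      if __name__ == "__main__":
          t, h = kmc_deposition()
          print("simulated time: %.3e s" % t)
          print("mean film height: %.2f monolayers" % (sum(h) / len(h)))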

  11. MONTE CARLO ANALYSIS FOR PREDICTION OF NOISE FROM A CONSTRUCTION SITE

    Directory of Open Access Journals (Sweden)

    Zaiton Haron

    2009-06-01

    Full Text Available The large number of operations involving noisy machinery associated with construction site activities results in considerable variation in the noise levels experienced at receiver locations. This paper suggests a Monte Carlo approach to predicting the noise levels generated from a site. This approach enables the determination of the statistical uncertainties associated with noise level predictions or temporal distributions. The technique could provide the basis for a generalised prediction method and a simple noise management tool.
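
    A simple sketch of the kind of calculation implied here, with made-up plant data: sample the on/off state and sound power level of each machine, propagate each active source to the receiver with a hemispherical spreading law, sum the contributions energetically, and accumulate the distribution of received levels.

      import math, random

      def received_level(sound_power_dB, distance_m):
          # Hemispherical free-field spreading: Lp = Lw - 20*log10(r) - 8.
          return sound_power_dB - 20.0 * math.log10(distance_m) - 8.0

      def combine_dB(levels):
          # Energetic (incoherent) sum; 0 dB floor when no machine is running (schematic).
          return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels)) if levels else 0.0

      def simulate_site_noise(machines, n=20_000, seed=0):
          # machines: list of (mean_Lw_dB, sd_Lw_dB, utilisation, distance_m) - invented data.
          rng = random.Random(seed)
          results = []
          for _ in range(n):
              active = [received_level(rng.gauss(lw, sd), d)
                        for lw, sd, util, d in machines if rng.random() < util]
              results.append(combine_dB(active))
          return results

      if __name__ == "__main__":
          machines = [(110.0, 2.0, 0.6, 30.0),   # excavator
                      (105.0, 3.0, 0.4, 50.0),   # dump truck
                      (115.0, 2.0, 0.2, 40.0)]   # breaker
          levels = sorted(simulate_site_noise(machines))
          print("median received level:  %.1f dB" % levels[len(levels) // 2])
          print("90th percentile level:  %.1f dB" % levels[int(0.9 * len(levels))])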

  12. The information-based complexity of approximation problem by adaptive Monte Carlo methods

    Institute of Scientific and Technical Information of China (English)

    FANG GenSun; DUAN LiQin

    2008-01-01

    In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MWTp,α(Td), 1 < p < ∞, in the norm of Lq(Td), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of pseudo-s-scale, we determine the exact asymptotic orders of this problem.

  13. Anomalous scaling in the random-force-driven Burgers equation. A Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Mesterhazy, David [TU Darmstadt (Germany). Inst. fuer Kernphysik; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann Inst. fuer Computing

    2011-12-15

    We present a new approach to determine the small-scale statistical behavior of hydrodynamic turbulence by means of lattice simulations. Using the functional integral representation of the random-force-driven Burgers equation we show that high-order moments of velocity differences satisfy anomalous scaling. The general applicability of Monte Carlo methods provides the opportunity to study also other systems of interest within this framework. (orig.)

  14. Attenuation Correction in SPECT during Image Reconstruction using an Inverse Monte Carlo Method: A Simulation Study

    OpenAIRE

    Shahla Ahmadi; Hossein Rajabi; Farshid Babapoor; Faraz Kalantari

    2011-01-01

    Introduction: The main goal of SPECT imaging is to determine activity distribution inside the organs of the body. However, due to photon attenuation, it is almost impossible to do a quantitative study. In this paper, we suggest a mathematical relationship between activity distribution and its corresponding projections using a transfer matrix. Monte Carlo simulation was used to find a precise transfer matrix including the effects of photon attenuation.  Material and Methods: List mode output o...

  15. Evaluation of the material assignment method used by a Monte Carlo treatment planning system.

    Science.gov (United States)

    Isambert, A; Brualla, L; Lefkopoulos, D

    2009-12-01

    An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in the HU for the material conversion between "air" and "lung" materials was determined based on a study using 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan.

  16. Monte-Carlo simulation of backscattered electrons in Auger electron spectroscopy. Part 1: Backscattering factor calculation

    Energy Technology Data Exchange (ETDEWEB)

    Tholomier, M.; Vicario, E.; Doghmane, N.

    1987-10-01

    The contribution of backscattered electrons to the Auger electron yield was studied with a multiple-scattering Monte Carlo simulation. The Auger backscattering factor has been calculated in the 5 keV-60 keV energy range. The dependence of the Auger backscattering factor on the primary energy and the beam incidence angle was determined. Spatial distributions of backscattered electrons and Auger electrons are presented for a point incident beam. Correlations between these distributions are briefly investigated.

  17. Commensurabilities between ETNOs: a Monte Carlo survey

    CERN Document Server

    Marcos, C de la Fuente

    2016-01-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nin...

  18. Monte Carlo exploration of warped Higgsless models

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu

    2004-10-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2){sub L} x SU(2){sub R} x U(1){sub B-L} gauge group in an AdS{sub 5} bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, {approx_equal} 10 TeV, in W{sub L}{sup +}W{sub L}{sup -} elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)

  19. Monte Carlo Exploration of Warped Higgsless Models

    CERN Document Server

    Hewett, J L; Rizzo, T G

    2004-01-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\\times SU(2)_R\\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.

  20. Variable length trajectory compressible hybrid Monte Carlo

    CERN Document Server

    Nishimura, Akihiko

    2016-01-01

    Hybrid Monte Carlo (HMC) generates samples from a prescribed probability distribution in a configuration space by simulating Hamiltonian dynamics, followed by the Metropolis (-Hastings) acceptance/rejection step. Compressible HMC (CHMC) generalizes HMC to a situation in which the dynamics is reversible but not necessarily Hamiltonian. This article presents a framework to further extend the algorithm. Within the existing framework, each trajectory of the dynamics must be integrated for the same amount of (random) time to generate a valid Metropolis proposal. Our generalized acceptance/rejection mechanism allows a more deliberate choice of the integration time for each trajectory. The proposed algorithm in particular enables an effective application of variable step size integrators to HMC-type sampling algorithms based on reversible dynamics. The potential of our framework is further demonstrated by another extension of HMC which reduces the wasted computations due to unstable numerical approximations and corr...
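
    For orientation only, a minimal standard HMC sampler with a randomised number of leapfrog steps per trajectory; it implements plain HMC with the usual Metropolis correction, not the compressible or variable-length machinery introduced in the paper. The target distribution and step size are arbitrary.

      import math, random

      def neg_log_p(x):
          return 0.5 * x * x                   # standard normal target: U(x) = x^2 / 2

      def grad_neg_log_p(x):
          return x                             # dU/dx = x

      def hmc_sample(n_samples=5000, eps=0.2, max_steps=30, seed=0):
          rng = random.Random(seed)
          x, samples = 0.0, []
          for _ in range(n_samples):
              p = rng.gauss(0.0, 1.0)          # resample momentum
              x_new, p_new = x, p
              L = rng.randint(1, max_steps)    # randomised trajectory length
              for _ in range(L):               # leapfrog integration
                  p_new -= 0.5 * eps * grad_neg_log_p(x_new)
                  x_new += eps * p_new
                  p_new -= 0.5 * eps * grad_neg_log_p(x_new)
              h_old = neg_log_p(x) + 0.5 * p * p
              h_new = neg_log_p(x_new) + 0.5 * p_new * p_new
              if rng.random() < math.exp(min(0.0, h_old - h_new)):
                  x = x_new                    # Metropolis acceptance
              samples.append(x)
          return samples

      if __name__ == "__main__":
          s = hmc_sample()
          mean = sum(s) / len(s)
          var = sum((v - mean) ** 2 for v in s) / len(s)
          print("mean %.3f, variance %.3f (target: 0, 1)" % (mean, var))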

  1. On nonlinear Markov chain Monte Carlo

    CERN Document Server

    Andrieu, Christophe; Doucet, Arnaud; Del Moral, Pierre; 10.3150/10-BEJ307

    2011-01-01

    Let $\\mathscr{P}(E)$ be the space of probability measures on a measurable space $(E,\\mathcal{E})$. In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure $\\pi\\in\\mathscr{P}(E)$. Nonlinear Markov kernels (see [Feynman--Kac Formulae: Genealogical and Interacting Particle Systems with Applications (2004) Springer]) $K:\\mathscr{P}(E)\\times E\\rightarrow\\mathscr{P}(E)$ can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or potentially self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster--Lyapunov conditions. We investigate the performance of our approximations with some simulations.

  2. Monte Carlo Implementation of Polarized Hadronization

    CERN Document Server

    Matevosyan, Hrayr H; Thomas, Anthony W

    2016-01-01

    We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse momentum dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model-based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence o...

  3. Lunar Regolith Albedos Using Monte Carlos

    Science.gov (United States)

    Wilson, T. L.; Andersen, V.; Pinsky, L. S.

    2003-01-01

    The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.

  4. Gas discharges modeling by Monte Carlo technique

    Directory of Open Access Journals (Sweden)

    Savić Marija

    2010-01-01

    Full Text Available The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas phase ionizations by fast neutrals. In this paper we tried to build a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sources Sci. Technol. 8 (1999) R1].

  5. [Chagas Carlos Justiniano Ribeiro (1879-1934)].

    Science.gov (United States)

    Pays, J F

    2009-12-01

    The story of the life of Carlos Chagas is closely associated with the discovery of American Human Trypanosomiasis, caused by Trypanosoma cruzi. Indeed, he worked on this for almost all of his life. Nowadays he is considered as a national hero, but, when he was alive, he was criticised more severely in his own country than elsewhere, often unjustly and motivated by jealousy, but sometimes with good reason. Cases of Chagas disease in non-endemic countries became such a concern that public health measures have had to be taken. In this article we give a short account of the scientific journey of this man, who can be said to occupy his very own place in the history of Tropical Medicine.

  6. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  7. CARLOS MARTÍ ARÍS: CABOS SUELTOS

    Directory of Open Access Journals (Sweden)

    Ángel Martínez García-Posada

    2012-11-01

    Full Text Available True to its title, this autumnal book waves its diverse character and multiple directionality in the wind: with the appearance of a classic compilation of presentations, lectures and articles, prompted in recent years by outside causes and elective affinities, this edition gathers comments, prefaces and notes from scattered pages by Professor Carlos Martí, and composes a silent order, a secret self-portrait, veiled behind the weave of a dense cartography of gentle but secure ties.

  8. Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-12-01

    A code package consisting of the Monte Carlo Library MCLIB, the executing code MC{_}RUN, the web application MC{_}Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC{_}RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
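
    A toy sketch, not the MCLIB/MC_RUN data structures, of the bookkeeping the abstract describes: a particle record carrying position, velocity, time of flight, mass, charge and a polarization vector, pushed through a sequence of regions whose actions may be deterministic or probabilistic.

      import random
      from dataclasses import dataclass, field

      @dataclass
      class Particle:
          # Hypothetical neutron-like state vector, loosely following the abstract.
          position: list          # [x, y, z] in m
          velocity: list          # [vx, vy, vz] in m/s
          tof: float = 0.0        # time of flight in s
          mass: float = 1.675e-27 # kg
          charge: float = 0.0
          polarization: list = field(default_factory=lambda: [0.0, 0.0, 1.0])
          alive: bool = True

      def drift(p, length_m):
          # Deterministic action: free flight along x through a region of given length.
          dt = length_m / p.velocity[0]
          p.position[0] += length_m
          p.tof += dt

      def absorber(p, transmission, rng):
          # Probabilistic action: the particle survives with the given probability.
          if rng.random() > transmission:
              p.alive = False

      if __name__ == "__main__":
          rng = random.Random(0)
          survived = 0
          for _ in range(10_000):
              p = Particle(position=[0.0, 0.0, 0.0], velocity=[2200.0, 0.0, 0.0])
              drift(p, 2.0)                 # 2 m flight path
              absorber(p, 0.8, rng)         # 80% transmitting element
              drift(p, 1.0)
              if p.alive:
                  survived += 1
          print("transmission estimate:", survived / 10_000)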

  9. DANGEROUS GAMES IN CARLOS DENIS MOLINA'S PLAYS

    OpenAIRE

    Braillon-Chantraine, Cécile

    2014-01-01

    International audience; Through its intense self-reflexivity, the dramatic work of the Uruguayan writer Carlos Denis Molina (1916-1983) bears witness to the theatrical practices of his time and to the socio-economic changes that his country, Uruguay, went through over the course of the twentieth century. The metatheatrical device of the game, staging characters in the act of entertaining themselves and playing a role, is present in several of his plays, at every stage of his dramatic output, and ...

  10. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction ..., as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential ... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol...

  11. Modeling neutron guides using Monte Carlo simulations

    CERN Document Server

    Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R

    2002-01-01

    Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guides' geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increases while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.

  12. Monte Carlo analysis of radiative transport in oceanographic lidar measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale

    2001-07-01

    The analysis of oceanographic lidar systems measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: the Rayleigh elastic scattering, produced by atoms and molecules with small dimensions with respect to the laser emission wavelength (i.e. water molecules); the Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); the Raman inelastic scattering, typical of water; the absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability of the receiver to have a contribution from photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is

  13. Reporting Monte Carlo Studies in Structural Equation Modeling

    NARCIS (Netherlands)

    Boomsma, Anne

    2013-01-01

    In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.

  14. Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications

    NARCIS (Netherlands)

    Raedt, H. De

    1992-01-01

    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown

  15. Quantum Monte Carlo using a Stochastic Poisson Solver

    Energy Technology Data Exchange (ETDEWEB)

    Das, D; Martin, R M; Kalos, M H

    2005-05-06

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually, quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment, such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo method within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations, typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
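
    The record above describes a modified "Walk On Spheres" solver. The sketch below shows only the textbook walk-on-spheres idea for the Laplace equation on the unit disk, not the authors' Green's-function variant; all function names, tolerances and the test point are illustrative assumptions.

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, eps=1e-4, rng=random):
    """One walk-on-spheres path: estimate the harmonic function u at (x, y)
    inside the unit disk with Dirichlet data boundary_value(theta)."""
    while True:
        r = math.hypot(x, y)
        d = 1.0 - r                 # distance from (x, y) to the unit circle
        if d < eps:                 # close enough: score the nearest boundary point
            return boundary_value(math.atan2(y, x))
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(phi)      # jump to a uniformly random point on the
        y += d * math.sin(phi)      # largest circle centred at (x, y) in the domain

def estimate(x, y, boundary_value, n_walks=20000, seed=1):
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, boundary_value, rng=rng)
               for _ in range(n_walks)) / n_walks

if __name__ == "__main__":
    # g(theta) = cos(theta) has the harmonic extension u(x, y) = x,
    # so the estimate at (0.3, 0.4) should be close to 0.3.
    print(estimate(0.3, 0.4, math.cos))
```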

  16. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo Analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the balance between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed
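
    As a hedged illustration of that efficiency problem (not taken from the report itself), the toy script below estimates a rare-event probability by crude Monte Carlo; the relative standard error only falls to about 10% once the expected number of hits approaches 100, i.e. the sample size must be of order 100/p. The probability value and function name are invented.

```python
import random

def crude_mc_failure_probability(p_true, n_samples, seed=0):
    """Estimate a small failure probability p_true by plain (crude) Monte Carlo
    and return the estimate together with its relative standard error."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(n_samples))
    p_hat = hits / n_samples
    # Relative standard error of a Bernoulli estimator: sqrt((1 - p) / (p * n)).
    rel_err = float("inf") if hits == 0 else ((1 - p_hat) / (p_hat * n_samples)) ** 0.5
    return p_hat, rel_err

for n in (10**4, 10**5, 10**6):
    print(n, crude_mc_failure_probability(1e-4, n))
```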

  17. The Monte Carlo Method. Popular Lectures in Mathematics.

    Science.gov (United States)

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  18. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a random statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was modelled using the Monte Carlo method.

  19. QWalk: A Quantum Monte Carlo Program for Electronic Structure

    CERN Document Server

    Wagner, Lucas K; Mitas, Lubos

    2007-01-01

    We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org

  20. QUANTUM MONTE-CARLO SIMULATIONS - ALGORITHMS, LIMITATIONS AND APPLICATIONS

    NARCIS (Netherlands)

    DERAEDT, H

    1992-01-01

    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown

  1. Recent Developments in Quantum Monte Carlo: Methods and Applications

    Science.gov (United States)

    Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.

    2007-12-01

    The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.

  2. Sensitivity of Monte Carlo simulations to input distributions

    Energy Technology Data Exchange (ETDEWEB)

    RamoRao, B. S.; Srikanta Mishra, S.; McNeish, J.; Andrews, R. W.

    2001-07-01

    The sensitivity of the results of a Monte Carlo simulation to the shapes and moments of the probability distributions of the input variables is studied. An economical computational scheme is presented as an alternative to the replicate Monte Carlo simulations and is explained with an illustrative example. (Author) 4 refs.

  3. 33 CFR 117.267 - Big Carlos Pass.

    Science.gov (United States)

    2010-07-01

    Title 33, Navigation and Navigable Waters; Coast Guard, Department of Homeland Security; Bridges; Drawbridge Operation Regulations; Specific Requirements, Florida; § 117.267 Big Carlos Pass. The draw of...

  4. CERN Summer Student Report 2016 Monte Carlo Data Base Improvement

    CERN Document Server

    Caciulescu, Alexandru Razvan

    2016-01-01

    During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.

  5. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  6. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to support a broad range of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  7. Accelerated GPU based SPECT Monte Carlo simulations.

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  8. Accelerated GPU based SPECT Monte Carlo simulations

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  9. IN MEMORIAM CARLOS RESTREPO. UN VERDADERO MAESTRO

    Directory of Open Access Journals (Sweden)

    Pelayo Correa

    2009-06-01

    Full Text Available Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing, creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a placid society that enjoyed the generosity of its surroundings and had no desire to break with centuries-old traditions of a simple, contented way of life. When children had the wish and the ability to pursue university studies, especially in medicine, the family would send them to cooler climates, which supposedly favoured brain function and the accumulation of knowledge. The pioneers of medical education in the Valle del Cauca, largely recruited from national and foreign universities, knew very well that the local environment is no obstacle to a first-class university education. Carlos Restrepo was a prototype of this spirit of change and of the intellectual formation of the new generations. He showed it in many ways, in large part through his cheerful, extroverted, optimistic character and his easy, contagious laugh. But this amiable side of his personality did not hide his formative mission; he demanded dedication and hard work from his students, faithfully recorded in memorable caricatures that exaggerated his occasionally explosive temper. The group of pioneers devoted themselves completely (full time and exclusive dedication) and organized the new Faculty into well-defined, well-structured departments: Anatomy, Biochemistry, Physiology, Pharmacology, Pathology, Internal Medicine, Surgery, Obstetrics and Gynecology, Psychiatry and Preventive Medicine. The departments integrated their primary functions of teaching, research and service to the community. The center

  10. Fission Matrix Capability for MCNP Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
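
    The fission-matrix eigenvalue step described above can be sketched as follows. This is not the MCNP implementation: the 3x3 matrix below is made up, whereas in practice the entries would be tallied from the Monte Carlo histories on a spatial mesh; power iteration then yields k_eff and the fundamental fission-source shape.

```python
import numpy as np

# Hypothetical 3-region fission matrix: F[i, j] = expected fission neutrons
# produced in region i per fission neutron starting in region j.  The values
# are arbitrary demonstration numbers.
F = np.array([[0.60, 0.20, 0.02],
              [0.20, 0.70, 0.20],
              [0.02, 0.20, 0.60]])

def fission_matrix_power_iteration(F, tol=1e-10, max_iter=1000):
    """Return (k_eff, source) as the dominant eigenpair of the fission matrix."""
    source = np.ones(F.shape[0]) / F.shape[0]
    k = 1.0
    for _ in range(max_iter):
        new = F @ source
        k_new = new.sum() / source.sum()   # eigenvalue estimate for this iteration
        new /= new.sum()                   # renormalize the fission source
        if abs(k_new - k) < tol and np.allclose(new, source, atol=tol):
            return k_new, new
        k, source = k_new, new
    return k, source

k_eff, shape = fission_matrix_power_iteration(F)
print("k_eff =", k_eff, "fission source shape:", shape)
```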

  11. Vectorized Monte Carlo methods for reactor lattice analysis

    Science.gov (United States)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  12. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    CERN Document Server

    Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro

    2001-01-01

    This book focusses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.

  13. Monte Carlo simulations incorporating Mie calculations of light transport in tissue phantoms: Examination of photon sampling volumes for endoscopically compatible fiber optic probes

    Energy Technology Data Exchange (ETDEWEB)

    Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.

    1996-04-01

    Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two, nearly-adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
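
    For orientation, the following is a highly simplified version of the photon random walk underlying such simulations: isotropic scattering in a semi-infinite medium, with no fiber geometry, numerical aperture or refractive-index mismatch, and with invented optical parameters. It only illustrates how escape fractions and pathlengths are accumulated.

```python
import math
import random

def photon_pathlength(mu_s, mu_a, rng):
    """Track one photon launched into a semi-infinite medium (z > 0) with
    isotropic scattering; return (absorbed?, pathlength inside the medium)."""
    mu_t = mu_s + mu_a
    z, uz, path = 0.0, 1.0, 0.0                 # launched straight into the medium
    while True:
        step = -math.log(rng.random()) / mu_t   # free path ~ exponential(mu_t)
        z += uz * step
        path += step
        if z <= 0.0:                            # escaped back through the surface
            path -= z / uz                      # remove the overshoot past z = 0
            return False, path
        if rng.random() < mu_a / mu_t:          # collision: absorption vs. scattering
            return True, path
        uz = rng.uniform(-1.0, 1.0)             # isotropic: cos(theta) uniform in [-1, 1]

rng = random.Random(42)
results = [photon_pathlength(mu_s=10.0, mu_a=0.1, rng=rng) for _ in range(20000)]
paths = [p for absorbed, p in results if not absorbed]
print("fraction re-emitted:", len(paths) / len(results),
      "mean pathlength of re-emitted photons:", sum(paths) / len(paths))
```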

  14. Monte Carlo calculations of the properties of solid nitromethane

    Science.gov (United States)

    Rice, Betsy M.; Trevino, Samuel F.

    1991-09-01

    Pairwise additive potential energy functions for H-O, H-H, and O-O intermolecular interactions are presented; methods by which these functions were developed are discussed, and preliminary Monte Carlo calculations of the crystal lattice parameters using these functions are presented. The results indicate that these potential energy functions correctly reproduce the lattice parameters measured by neutron diffraction at 4.2 K, ambient pressure, and at pressures below 1.0 GPa, room temperature. It is our intention in this and future work to obtain sufficient information concerning the intermolecular interactions between molecules of nitromethane (CH3NO2) in order to produce, via computer simulation, a reliable equation of state and other related properties in the condensed phase. For this purpose, substantial experimental investigations have been performed in the past on several properties of the crystal. For the present study, the most important of these are the determination of the crystal structure at ambient pressure, from 4.2 K to 228 K (Trevino, Prince, and Hubbard 1980) and neutron spectroscopic determination of the rotational properties of the methyl group (Trevino and Rymes 1980; Alefeld et al. 1982; Cavagnat et al. 1985).

  15. A Monte Carlo-based model of gold nanoparticle radiosensitization

    Science.gov (United States)

    Lechtman, Eli Solomon

    The goal of radiotherapy is to operate within the therapeutic window - delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNP) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization. A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10^3-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer ranged photoelectric products travelled orders of magnitude farther. A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm Au

  16. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
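
    The bootstrap step described above can be sketched generically as follows. This is not the authors' code: it uses synthetic per-history scores, an invented efficiency definition of 1/(T·sigma^2), and a plain percentile interval rather than the shortest 95% interval.

```python
import numpy as np

def efficiency(scores, time_per_history):
    """Monte Carlo efficiency 1 / (T * sigma^2), with sigma^2 the variance of the mean."""
    var_of_mean = scores.var(ddof=1) / scores.size
    return 1.0 / (time_per_history * scores.size * var_of_mean)

def bootstrap_gain_ci(conv, corr, t_conv, t_corr, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the efficiency gain of the
    correlated-sampling run relative to the conventional run."""
    rng = np.random.default_rng(seed)
    gains = np.empty(n_boot)
    for b in range(n_boot):
        c1 = rng.choice(conv, size=conv.size, replace=True)
        c2 = rng.choice(corr, size=corr.size, replace=True)
        gains[b] = efficiency(c2, t_corr) / efficiency(c1, t_conv)
    point = efficiency(corr, t_corr) / efficiency(conv, t_conv)
    lo, hi = np.quantile(gains, [alpha / 2, 1 - alpha / 2])
    return point, (lo, hi)

# Synthetic per-history scores: the correlated estimator has a smaller variance
# but a heavy tail, loosely mimicking the rare large-weight photons discussed above.
rng = np.random.default_rng(1)
conventional = rng.normal(1.0, 0.50, size=20000)
correlated = rng.normal(1.0, 0.10, size=20000) + rng.pareto(3.0, size=20000) * 0.01
print(bootstrap_gain_ci(conventional, correlated, t_conv=1.0, t_corr=1.2))
```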

  17. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Energy Technology Data Exchange (ETDEWEB)

    Mukhopadhyay, Nitai D. [Department of Biostatistics, Virginia Commonwealth University, Richmond, VA 23298 (United States); Sampson, Andrew J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Deniz, Daniel; Alm Carlsson, Gudrun [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Williamson, Jeffrey [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Malusek, Alexandr, E-mail: malusek@ujf.cas.cz [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Department of Radiation Dosimetry, Nuclear Physics Institute AS CR v.v.i., Na Truhlarce 39/64, 180 86 Prague (Czech Republic)

    2012-01-15

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.

  18. Monte Carlo Simulation of Opacities of Hot and Dense Au Plasma in the Unresolved Transition Array Approximation

    Institute of Scientific and Technical Information of China (English)

    程新路; 杨莉; 张红; 杨向东

    2002-01-01

    The opacity, and its Planck and Rosseland mean values, of the hot and dense Au plasma in local thermodynamic equilibrium are studied by the Monte Carlo method based on the unresolved transition array (UTA) approximation. The average ion model and the Saha equation are used to determine the atomic level populations. The result gives a more detailed structure for the frequency-dependent opacity than the popularly used super transition array or UTA in the photon energy range of 500 eV to 2000 eV. The Monte Carlo method can give a result better than that of the UTA, with almost the same computation effort.

  19. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    Science.gov (United States)

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as

  20. Carlos Gardel, el patrimonio que sonrie

    Directory of Open Access Journals (Sweden)

    María Julia Carozzi

    2003-10-01

    Full Text Available Analysing the ways in which the people of Buenos Aires remembered Carlos Gardel in the month of the 68th anniversary of his death, the article seeks to account for one of the ways in which the city's inhabitants conceive of what is memorable, identify what they recognize themselves in as porteños, and single out that towards which they experience feelings of collective belonging. The work points to the central role that the miracle, mimesis and direct contact with his body play in the preservation of the memory of Gardel, who embodies both the tango and its success in the world. The case of Gardel is presented as an example of the organization of the memory and identity of the porteños in particular, and of Argentines in general, around real persons to whom an extraordinary value is assigned. Because these figures are so deeply rooted in concrete human bodies, they make the local adoption of the globally accepted concepts of historical and cultural heritage problematic. The article analyses one of the ways in which the inhabitants of Buenos Aires conceive that which is memorable, source of positive identification and origin of feelings of communitas by examining their commemoration of the 68th anniversary of the death of Carlos Gardel. It underscores the central role that miracles, mimesis and direct bodily contact play in the preservation of the memory of the star, who incarnates both the tango and its world-wide success. The case of Gardel is presented as an example of the centrality that real persons of extraordinary value have in the organization of local memory and collective identity. Since they are embedded in concrete human bodies, they reveal problems in the local adoption of globally accepted concepts of historical and cultural heritage.

  1. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  2. A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software.

    Science.gov (United States)

    Chew, Gina; Walczyk, Thomas

    2012-03-01

    Despite the importance of stating the measurement uncertainty in chemical analysis, concepts are still not widely applied by the broader scientific community. The Guide to the expression of uncertainty in measurement approves the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves the computation of first-order derivatives of each component of the output quantity. This requires some mathematical skills and can be tedious if the mathematical model is complex. Secondly, it is not able to predict the probability distribution of the output quantity accurately if the input quantities are not normally distributed. Knowledge of the probability distribution is essential to determine the coverage interval. The Monte Carlo approach performs random sampling from probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it gives the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can be easily implemented to estimate measurement uncertainty using a standard spreadsheet software program such as Microsoft Excel. It is our aim to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and that is so simple to apply that it can even be used by students with basic computer skills and minimal mathematical knowledge.
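
    The same Monte Carlo propagation can be written in a few lines of Python instead of a spreadsheet; the measurement model, the distributions and the numbers below are invented placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Invented measurement model: concentration c = m / (V * R), with the mass m,
# volume V and recovery R treated as input quantities with assumed distributions.
m = rng.normal(10.00, 0.05, N)        # mass in mg, normal
V = rng.normal(100.0, 0.40, N)        # volume in mL, normal
R = rng.uniform(0.95, 1.00, N)        # recovery, rectangular (no better knowledge)

c = m / (V * R)                       # propagate by evaluating the model N times

mean = c.mean()
u = c.std(ddof=1)                     # standard uncertainty = std of the output
low, high = np.quantile(c, [0.025, 0.975])   # 95 % coverage interval
print(f"c = {mean:.4f} mg/mL, u(c) = {u:.4f}, 95 % interval [{low:.4f}, {high:.4f}]")
```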

  3. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques for modelling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. In the Monte Carlo method, calculations are based on random number generation and the determination of reaction probabilities, so the number of algorithm repetitions (the selected reactor volume, which determines the number of initial molecules) is very important. In this paper, the initiation reaction was considered alone and the effect of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not large enough, because in that case the selected volume would not be representative of the whole system.

  4. Monte Carlo wave packet approach to dissociative multiple ionization in diatomic molecules

    DEFF Research Database (Denmark)

    Leth, Henriette Astrup; Madsen, Lars Bojer; Mølmer, Klaus

    2010-01-01

    A detailed description of the Monte Carlo wave packet technique applied to dissociative multiple ionization of diatomic molecules in short intense laser pulses is presented. The Monte Carlo wave packet technique relies on the Born-Oppenheimer separation of electronic and nuclear dynamics...... and provides a consistent theoretical framework for treating simultaneously both ionization and dissociation. By simulating the detection of continuum electrons and collapsing the system onto either the neutral, singly ionized or doubly ionized states in every time step the nuclear dynamics can be solved....... The computational effort is restricted and the model is applicable to any molecular system where electronic Born-Oppenheimer curves, dipole moment functions, and ionization rates as a function of nuclear coordinates can be determined....

  5. Using Monte Carlo Simulations to Develop an Understanding of the Hyperpolarizability Near the Fundamental Limit

    Science.gov (United States)

    Shafei, Shoresh; Kuzyk, Mark C.; Kuzyk, Mark G.

    2010-03-01

    The hyperpolarizability governs all light-matter interactions. In recent years, quantum mechanical calculations have shown that there is a fundamental limit to the hyperpolarizability of all materials. The fundamental limits are calculated only under the assumption that the Thomas-Kuhn sum rules and the three-level ansatz hold. (The three-level ansatz states that for an optimized hyperpolarizability, only two excited states contribute to the hyperpolarizability.) All molecules ever characterized have hyperpolarizabilities that fall well below the limits. However, Monte Carlo simulations of the nonlinear polarizability have shown that attaining values close to the fundamental limit is theoretically possible; the calculations do not, however, provide guidance with regard to which potentials are optimal. The focus of our work is to use Monte Carlo techniques to determine sets of energies and transition moments that are consistent with the sum rules, and to study the constraints on their signs. This analysis will be used to implement a numerical proof of the three-level ansatz.

  6. Monte Carlo simulation applied in total reflection x-ray fluorescence: Preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Meira, Luiza L. C.; Inocente, Guilherme F.; Vieira, Leticia D.; Mesa, Joel [Departamento de Fisica e Biofisica - Instituto de Biociencias de Botucatu, Universidade Estadual Paulista Julio de Mesquita Filho (Brazil)

    2013-05-06

    X-ray Fluorescence (XRF) analysis is a technique for the qualitative and quantitative determination of chemical constituents in a sample. This method is based on detection of the characteristic radiation intensities emitted by the elements of the sample when properly excited. A variant of this technique is Total Reflection X-ray Fluorescence (TXRF), which utilizes electromagnetic radiation as the excitation source. In total reflection of X-rays, the angle of refraction of the incident beam tends to zero and the refracted beam is tangent to the sample-support interface. Thus, there is a minimum angle of incidence at which no refracted beam exists and all incident radiation undergoes total reflection. In this study, we evaluated the influence of the energy variation of the incident X-ray beam using the MCNPX (Monte Carlo N-Particle) code, which is based on the Monte Carlo method.

  7. Monte Carlo calculations of the magnetoresistance in magnetic multilayer structures with giant magnetoresistance effects

    Science.gov (United States)

    Prudnikov, V. V.; Prudnikov, P. V.; Romanovskiy, D. E.

    2016-06-01

    A Monte Carlo study of trilayer and spin-valve magnetic structures with giant magnetoresistance effects is carried out. The anisotropic Heisenberg model is used for description of magnetic properties of ultrathin ferromagnetic films forming these structures. The temperature and magnetic field dependences of magnetic characteristics are considered for ferromagnetic and antiferromagnetic configurations of these multilayer structures. The methodology for determination of the magnetoresistance by the Monte Carlo method is introduced; this permits us to calculate the magnetoresistance of multilayer structures for different thicknesses of the ferromagnetic films. The calculated temperature dependence of the magnetoresistance agrees very well with the experimental results measured for the Fe(0 0 1)-Cr(0 0 1) multilayer structure and CFAS-Ag-CFAS-IrMn spin-valve structure based on the half-metallic Heusler alloy Co2FeAl0.5Si0.5.

  8. Monte Carlo simulation of multilayer magnetic structures and calculation of the magnetoresistance coefficient

    Science.gov (United States)

    Prudnikov, V. V.; Prudnikov, P. V.; Romanovskii, D. E.

    2015-11-01

    The Monte Carlo study of three-layer and spin-valve magnetic structures with giant magnetoresistance effects has been performed with the application of the Heisenberg anisotropic model to the description of the magnetic properties of thin ferromagnetic films. The dependences of the magnetic characteristics on the temperature and external magnetic field have been obtained for the ferromagnetic and antiferromagnetic configurations of these structures. A Monte Carlo method for determining the magnetoresistance coefficient has been developed. The magnetoresistance coefficient has been calculated for three-layer and spin-valve magnetic structures at various thicknesses of ferromagnetic films. It has been shown that the calculated temperature dependence of the magnetoresistance coefficient is in good agreement with experimental data obtained for the Fe(001)/Cr(001) multilayer structure and the CFAS/Ag/CFAS/IrMn spin valve based on the Co2FeAl0.5Si0.5 (CFAS) Heusler alloy.
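
    A bare-bones Metropolis sketch for a single anisotropic Heisenberg film is given below. It omits the multilayer geometry, the spin-valve configurations and the magnetoresistance calculation discussed in these records; the lattice size, coupling, anisotropy, field and temperature are arbitrary demonstration values, not parameters from the papers.

```python
import numpy as np

rng = np.random.default_rng(2)
L, J, delta, H, T = 16, 1.0, 0.1, 0.05, 1.0   # made-up monolayer parameters

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

spins = random_unit_vectors(L * L).reshape(L, L, 3)

def local_energy(s, i, j):
    """Site energy: -J sum_nb (sx sx' + sy sy' + (1 - delta) sz sz') - H sz."""
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
          spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    aniso = np.array([1.0, 1.0, 1.0 - delta])
    return -J * np.dot(s * aniso, nb) - H * s[2]

def sweep():
    for i in range(L):
        for j in range(L):
            old = spins[i, j]
            new = random_unit_vectors(1)[0]
            dE = local_energy(new, i, j) - local_energy(old, i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
                spins[i, j] = new

for step in range(300):          # equilibration kept deliberately short here
    sweep()
print("average out-of-plane magnetization per spin:", spins[..., 2].mean())
```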

  9. Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models

    Science.gov (United States)

    Si, Lisha; Liao, Xiaoyun; Zhou, Nengji

    2016-12-01

    With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5) , ν = 1.32(3) , z = 1.12(1) , and ζ = 0.90(1) , quite different from that of the quenched Edwards-Wilkinson equation.

  10. Study of nuclear pairing with Configuration-Space Monte-Carlo approach

    CERN Document Server

    Lingle, Mark

    2015-01-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip-lines, binding energies, and many collective properties. In this work a new Configuration-Space Monte-Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte-Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control, are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with non-constant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and pr...

  11. Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)

    2014-05-15

    Identification and demining of landmines are very important issues for the safety of the people and for economic development. To solve this issue, several methods have been proposed in the past. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as a part of a complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. Monte Carlo calculations were carried out for the design of the landmine detector using PGNAA. To consider the soil effect, an average soil composition was analyzed and applied to the calculation. These results have been used to determine the specification of the landmine detector.

  12. Simulation model based on Monte Carlo method for traffic assignment in local area road network

    Institute of Scientific and Technical Information of China (English)

    Yuchuan DU; Yuanjing GENG; Lijun SUN

    2009-01-01

    For a local area road network, the available traffic data are the flow volumes at key intersections, not the complete OD matrix. Considering the characteristics and the data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability over time, the turning ratio is adopted as the key parameter of this model. The formulation of the local area road network assignment problem is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. Comparison of the surveyed traffic flow data with the flows determined by the model verifies the applicability and validity of the proposed methodology.
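
    The turning-ratio idea can be illustrated with a toy network: each simulated vehicle draws its turn at every intersection from the observed turning ratios until it leaves the network, and link flows are accumulated. The network, node names and ratios below are invented, not taken from the paper.

```python
import random

# Hypothetical turning ratios: at each node, the probability of leaving on each
# outgoing link (in practice these would come from intersection counts).
turning = {
    "A": {"A->B": 0.7, "A->C": 0.3},
    "B": {"B->C": 0.4, "B->exit": 0.6},
    "C": {"C->exit": 1.0},
}

def assign(entry_node, n_vehicles, rng):
    """Monte Carlo assignment: each vehicle chooses turns according to the
    turning ratios until it leaves the network; returns link flow volumes."""
    flows = {}
    for _ in range(n_vehicles):
        node = entry_node
        while node in turning:
            links, probs = zip(*turning[node].items())
            link = rng.choices(links, weights=probs)[0]
            flows[link] = flows.get(link, 0) + 1
            node = link.split("->")[1]   # downstream node becomes the new position
        # "exit" is not a key in `turning`, so the vehicle has left the network
    return flows

print(assign("A", 10000, random.Random(3)))
```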

  13. Monte Carlo Simulations of Synchrotron Radiation and Vacuum Performance of the MAX IV Light Sources

    CERN Document Server

    Ady, M; Grabski, M

    2014-01-01

    In the 3 GeV ring of the MAX IV light source in Lund, Sweden, the intense synchrotron radiation (SR) distributed along the ring generates significant thermal and vacuum effects. By means of a Monte Carlo simulation package currently being developed at CERN, both thermal and vacuum effects are quantitatively analysed, in particular near the crotch absorbers and the surrounding NEG-coated vacuum chambers. Using SynRad+, the beam trajectory through the upstream bending magnet is calculated; SR photons are generated and traced through the geometry until their absorption. This allows the incident power density on the absorber to be analysed and the photon-induced outgassing to be calculated. The results are imported into Molflow+, a Monte Carlo vacuum simulator that works in the molecular flow regime, and the pressure in the vacuum system and the saturation length of the NEG coating are determined using iterations.

  14. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    Science.gov (United States)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  15. Monte Carlo Analysis as a Trajectory Design Driver for the Transiting Exoplanet Survey Satellite (TESS) Mission

    Science.gov (United States)

    Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  16. Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions

    Directory of Open Access Journals (Sweden)

    Samuel Livingstone

    2014-06-01

    Full Text Available Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.

  17. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio

  18. Monte Carlo simulations for heavy ion dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Geithner, O.

    2006-07-26

    Water-to-air stopping power ratio (s{sub w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, s{sub w,air}, the influence of fragments and I-values on s{sub w,air} for carbon ion beams was investigated. The value of s{sub w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398, for an energy of 50 MeV/u. (orig.)

  19. Monte Carlo models of dust coagulation

    CERN Document Server

    Zsom, Andras

    2010-01-01

    The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier, which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth have two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...

  20. Monte Carlo simulations of Protein Adsorption

    Science.gov (United States)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

    Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  1. Commensurabilities between ETNOs: a Monte Carlo survey

    Science.gov (United States)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.

  2. Diffusion Monte Carlo in internal coordinates.

    Science.gov (United States)

    Petit, Andrew S; McCoy, Anne B

    2013-08-15

    An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H(3)(+) and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H(2)D(+) and D(2)H(+) despite both molecules being highly fluxional.

  3. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan

    2014-09-05

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both the required accuracy and the confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.

  4. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, can lead to severe financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, that is, the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented, along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...

  5. Monte Carlo simulations for focusing elliptical guides

    Energy Technology Data Exchange (ETDEWEB)

    Valicu, Roxana [FRM2 Garching, Muenchen (Germany); Boeni, Peter [E20, TU Muenchen (Germany)

    2009-07-01

    The aim of the Monte Carlo simulations using the McStas programme was to improve the focusing of the existing neutron beam at PGAA (FRM II) by prolonging the existing elliptic guide (currently coated with supermirrors with m=3) with a new section. First we tried an initial length of the additional guide of 7.5 cm and neutron-guide coatings of supermirrors with m=4,5 and 6. The gain (calculated by dividing the intensity at the focal point after adding the guide by the intensity at the focal point with the initial guide) obtained for these coatings indicated that a coating with m=5 would be appropriate for a first trial. The next step was to vary the length of the additional guide for this m value and thereby choose the length giving the maximal gain. With the m value and the length of the guide fixed, we introduced an aperture 1 cm before the focal point and varied the radius of this aperture in order to obtain a focused beam. We observed a dramatic decrease in the size of the beam at the focal point after introducing this aperture. The simulation results, the gains obtained and the evolution of the beam size will be presented.

  6. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni

    2015-01-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, to assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put into production in 2012. McM is based on recent server infrastructure technology (CherryPy + java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabilities...

  7. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
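
    As a point of reference for the operator-splitting structure described above, a minimal single-process sketch (no MPI, constant coagulation kernel, deterministic nucleation and growth, invented rate constants) might look as follows; it illustrates only the alternation of deterministic and stochastic sub-steps, not the authors' parallel algorithm.

      import numpy as np

      rng = np.random.default_rng(0)

      K = 2e-7        # constant coagulation kernel [m^3/s]            (assumed value)
      G = 1e-24       # volumetric surface-growth rate [m^3/s]         (assumed value)
      J = 1e4         # nucleation rate [particles/s] in the volume    (assumed value)
      V = 1e-6        # simulated gas volume [m^3]
      v0 = 1e-24      # volume of a freshly nucleated particle [m^3]
      dt, steps = 1e-3, 1000

      volumes = []    # Monte Carlo particle volumes

      for _ in range(steps):
          # deterministic sub-step: nucleation and surface growth over dt
          volumes.extend([v0] * int(round(J * dt)))
          volumes = [v + G * dt for v in volumes]

          # stochastic sub-step: constant-kernel coagulation over dt
          # (Gillespie-type simulation of the Marcus-Lushnikov process)
          t = 0.0
          while len(volumes) >= 2:
              n = len(volumes)
              rate = K * n * (n - 1) / (2.0 * V)     # total coagulation rate
              t += rng.exponential(1.0 / rate)
              if t > dt:
                  break
              i, j = rng.choice(n, size=2, replace=False)
              volumes[i] += volumes[j]               # merge the chosen pair
              volumes.pop(j)

      print(len(volumes), "MC particles, mean volume", np.mean(volumes))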

  8. Finding Planet Nine: a Monte Carlo approach

    CERN Document Server

    Marcos, C de la Fuente

    2016-01-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30 degrees, and an argument of perihelion of 150 degrees. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal antialignment scenario. In addition and after studying the current statistic...

  9. Parallel Monte Carlo Simulation of Aerosol Dynamics

    Directory of Open Access Journals (Sweden)

    Kun Zhou

    2014-02-01

    Full Text Available A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles.

  10. Measuring Berry curvature with quantum Monte Carlo

    CERN Document Server

    Kolodrubetz, Michael

    2014-01-01

    The Berry curvature and its descendant, the Berry phase, play an important role in quantum mechanics. They can be used to understand the Aharonov-Bohm effect, define topological Chern numbers, and generally to investigate the geometric properties of a quantum ground state manifold. While Berry curvature has been well-studied in the regimes of few-body physics and non-interacting particles, its use in the regime of strong interactions is hindered by the lack of numerical methods to solve it. In this paper we fill this gap by implementing a quantum Monte Carlo method to solve for the Berry curvature, based on interpreting Berry curvature as a leading correction to imaginary time ramps. We demonstrate our algorithm using the transverse-field Ising model in one and two dimensions, the latter of which is non-integrable. Despite the fact that the Berry curvature gives information about the phase of the wave function, we show that our algorithm has no sign or phase problem for standard sign-problem-free Hamiltonians...

  11. Atomistic Monte Carlo Simulation of Lipid Membranes

    Directory of Open Access Journals (Sweden)

    Daniel Wüstner

    2014-01-01

    Full Text Available Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.

  12. Il mestiere di tradurre 3: Carlos Gumpert

    Directory of Open Access Journals (Sweden)

    Carlos Gumpert

    2014-01-01

    Full Text Available To the translator Carlos Gumpert we owe some of the translations of the most successful Italian writers of recent decades, such as, to cite only a few, Alessandro Baricco (Océano mar, Anagrama, 2002), Erri de Luca (El contrario de uno, Siruela, 2005) or the late Antonio Tabucchi, with whom he maintained a close relationship reflected in the fascinating book Conversaciones con Antonio Tabucchi (Anagrama, 1995). Also the translator of fundamental authors who are perhaps less well known to the Spanish public, such as Giorgio Manganelli (La ciénaga definitiva, Siruela, 2002), Goffredo Parise (Silabario, Alfaguara, 2002) or Ugo Riccarelli (El dolor perfecto, Maeva, 2007), his close involvement with Italian letters led us to put to him some questions about the presence of this interesting literature in Spain. Interview by Juan José Tejero and Juan Pérez Andrés.

  13. Monte-Carlo simulations of the new LNHB manganese bath facility.

    Science.gov (United States)

    Ogheard, F; Chartier, J L; Cassette, P

    2012-04-01

    The new manganese bath facility of the Laboratoire National Henri Becquerel has been modeled by using three Monte-Carlo codes: MCNPX, GEANT4, and FLUKA, in order to determine the correction factors needed in the neutron source calibration process. The most realistic source geometry has been determined, and the most reliable cross sections library has been chosen. The models were compared, and discrepancies between the codes have been pointed out. Potential causes of deviations between results were assessed and discussed using additional models. Finally, an experimental process is proposed to validate the accuracy of the different codes and their abilities in simulating the neutron capture by the manganese bath.

  14. Integrated logistic support studies using behavioral Monte Carlo simulation, supported by Generalized Stochastic Petri Nets

    Energy Technology Data Exchange (ETDEWEB)

    Garnier, Robert; Chevalier, Marcel [Schneider Electric (France)

    2000-07-01

    Studying large and complex industrial sites requires more and more accuracy in modeling. In particular, when considering Spares, Maintenance and Repair/Replacement processes, determining optimal Integrated Logistic Support policies requires a high-level modeling formalism, in order to make the model as close as possible to the real processes considered. Generally, numerical methods are used to carry out this kind of study. In this paper, we propose an alternative way to determine optimal Integrated Logistic Support policies when dealing with large, complex and distributed multi-policy industrial sites. This method is based on the use of behavioral Monte Carlo simulation, supported by Generalized Stochastic Petri Nets. (author)

  15. Monte Carlo Simulation for LINAC Standoff Interrogation of Nuclear Material

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, Shaun D [ORNL; Flaska, Marek [ORNL; Miller, Thomas Martin [ORNL; Protopopescu, Vladimir A [ORNL; Pozzi, Sara A [ORNL

    2007-06-01

    The development of new techniques for the interrogation of shielded nuclear materials relies on the use of Monte Carlo codes to accurately simulate the entire system, including the interrogation source, the fissile target and the detection environment. The objective of this modeling effort is to develop analysis tools and methods, based on a relevant scenario, which may be applied to the design of future systems for active interrogation at a standoff. For the specific scenario considered here, the analysis will focus on providing the information needed to determine the type and optimum position of the detectors. This report describes the results of simulations for a detection system employing gamma rays to interrogate fissile and nonfissile targets. The simulations were performed using specialized versions of the codes MCNPX and MCNP-PoliMi. Both prompt neutron and gamma-ray fluxes and delayed neutron fluxes have been mapped in three dimensions. The time dependence of the prompt neutrons in the system has also been characterized. For this particular scenario, the flux maps generated with the Monte Carlo model indicate that the detectors should be placed approximately 50 cm behind the exit of the accelerator, 40 cm away from the vehicle, and 150 cm above the ground. This position minimizes the number of neutrons coming from the accelerator structure and also receives the maximum flux of prompt neutrons coming from the source. The lead shielding around the accelerator minimizes the gamma-ray background from the accelerator in this area. The number of delayed neutrons emitted from the target is approximately seven orders of magnitude less than the prompt neutrons emitted from the system. Therefore, in order to possibly detect the delayed neutrons, the detectors should be active only after all prompt neutrons have scattered out of the system. Preliminary results have shown this time to be greater than 5 μs after the accelerator pulse. This type of system is illustrative of a

  16. GPU-Monte Carlo based fast IMRT plan optimization

    Directory of Open Access Journals (Sweden)

    Yongbao Li

    2014-03-01

    Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations. Yet, the long computational time of repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet its source particle comes from. Deposited doses are stored separately for beamlets based on the index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10{sup 5} particles per beamlet in the first round and 10{sup 8} particles per beam in the second round are enough to obtain a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.

  17. Quantum Monte Carlo for electronic structure: Recent developments and applications

    Energy Technology Data Exchange (ETDEWEB)

    Rodriquez, Maria Milagos Soto [Lawrence Berkeley Lab. and Univ. of California, Berkeley, CA (United States). Dept. of Chemistry

    1995-04-01

    Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done by systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C2H and C2H2. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is

  18. The new IBA self-shielded dynamitron accelerator for industrial applications

    Science.gov (United States)

    Galloway, R. A.; DeNeuter, S.; Lisanti, T. F.; Cleland, M. R.

    2004-09-01

    Radiation Dynamics Inc. (RDI), currently a member of the IBA Group (Ion Beam Applications, based in Louvain-la-Neuve, Belgium), has been supplying accelerators since its founding in 1958. These systems, supplied for both industrial processing and research applications with electrons and ions, have proven to be reliable and robust. Today's demands in the industrial sector have driven the design and development of a new version of our Dynamitron®. This new system, envisioned to operate at electron energies up to 1.5 MeV, can in many cases be supplied with integral shielding, giving it a small footprint for placement in a facility. In the majority of these lower-energy applications this allows the appropriate material handling system to be installed inside the steel radiation enclosure. Designed to deliver beam power outputs as high as 100 kW, this new system is capable of servicing the high-throughput demands of today's manufacturing lines. Still retaining the positive aspects of the industrially proven Dynamitron system, this compact system can be tailored to meet a variety of in-line or off-line processing applications.

  19. Self shielding of surfaces irradiated by intense energy fluxes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Varghese, P.L.; Howell, J.R.; Propp, A.

    1991-08-01

    This dissertation outlines direct methods of temperature, density, composition, and velocity measurement which should be widely applicable to railgun systems. The measurements demonstrated here should provide a useful basis for further studies of plasma/target interaction.

  20. An Introduction to Multilevel Monte Carlo for Option Valuation

    CERN Document Server

    Higham, Desmond J

    2015-01-01

    Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers an order of speed up given by the inverse of epsilon, where epsilon is the required accuracy. So computations can run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible, introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option evaluation.
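
    A minimal sketch of the multilevel idea for a European call under geometric Brownian motion with Euler time stepping is given below; it uses a fixed number of samples per level and illustrative market parameters rather than Giles' adaptive sample allocation.

      import numpy as np

      rng = np.random.default_rng(1)
      S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative market data

      def euler_payoff(n_steps, n_paths):
          """Discounted call payoffs from Euler paths with n_steps time steps."""
          dt = T / n_steps
          S = np.full(n_paths, S0)
          for _ in range(n_steps):
              dW = rng.normal(0.0, np.sqrt(dt), n_paths)
              S = S + r * S * dt + sigma * S * dW
          return np.exp(-r * T) * np.maximum(S - K, 0.0)

      def mlmc_level(l, n_paths, M=2):
          """Samples of P_l - P_{l-1} with coupled fine/coarse Brownian increments."""
          nf = M ** l                      # fine steps on level l
          if l == 0:
              return euler_payoff(nf, n_paths)
          nc = M ** (l - 1)                # coarse steps
          dt_f = T / nf
          Sf = np.full(n_paths, S0)
          Sc = np.full(n_paths, S0)
          for _ in range(nc):
              dWc = np.zeros(n_paths)
              for _ in range(M):           # M fine steps per coarse step
                  dW = rng.normal(0.0, np.sqrt(dt_f), n_paths)
                  Sf = Sf + r * Sf * dt_f + sigma * Sf * dW
                  dWc += dW
              Sc = Sc + r * Sc * (M * dt_f) + sigma * Sc * dWc
          disc = np.exp(-r * T)
          return disc * (np.maximum(Sf - K, 0.0) - np.maximum(Sc - K, 0.0))

      L, N = 5, 20000
      price = sum(mlmc_level(l, N).mean() for l in range(L + 1))
      print("MLMC price estimate:", price)   # Black-Scholes value is about 10.45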

  1. Using Supervised Learning to Improve Monte Carlo Integral Estimation

    CERN Document Server

    Tracey, Brendan; Alonso, Juan J

    2011-01-01

    Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
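
    The fit-and-cross-validate idea can be illustrated with a simplified one-dimensional control-variate sketch in the spirit of StackMC (not the authors' exact estimator); the test integrand, polynomial surrogate and fold count are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(2)
      f = lambda x: np.exp(np.sin(3.0 * x))      # test integrand on [0, 1]

      N, K, degree = 200, 5, 4
      x = rng.random(N)
      y = f(x)
      folds = np.array_split(rng.permutation(N), K)

      estimates = []
      for k in range(K):
          test = folds[k]
          train = np.concatenate([folds[j] for j in range(K) if j != k])
          coeffs = np.polyfit(x[train], y[train], degree)   # surrogate g fitted on the training fold
          antider = np.polyint(coeffs)
          g_integral = np.polyval(antider, 1.0) - np.polyval(antider, 0.0)   # exact integral of g
          g_test = np.polyval(coeffs, x[test])
          # control-variate estimate: exact integral of g plus MC estimate of (f - g) on held-out samples
          estimates.append(g_integral + np.mean(y[test] - g_test))

      print("plain MC estimate      :", y.mean())
      print("surrogate-corrected MC :", np.mean(estimates))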

  2. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    Science.gov (United States)

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  3. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    1995-01-01

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.

  4. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  5. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    Science.gov (United States)

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time.

  6. Monte Carlo techniques for analyzing deep penetration problems

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
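
    As a minimal illustration of two of the variance-reduction devices mentioned above, a weight-window style Russian roulette and splitting test applied to a particle's statistical weight might look like the following sketch; the window bounds and survival weight are arbitrary example values.

      import random

      W_LOW, W_HIGH, W_SURVIVE = 0.25, 4.0, 1.0   # example weight-window bounds

      def roulette_or_split(weight):
          """Return a list of (possibly zero) particle weights to continue tracking."""
          if weight < W_LOW:
              # Russian roulette: kill with probability 1 - weight/W_SURVIVE,
              # otherwise continue with the survival weight (expected weight unchanged).
              if random.random() < weight / W_SURVIVE:
                  return [W_SURVIVE]
              return []
          if weight > W_HIGH:
              # Splitting: replace one heavy particle by n lighter copies of equal total weight.
              n = int(weight / W_SURVIVE)
              return [weight / n] * n
          return [weight]

      random.seed(3)
      print(roulette_or_split(0.1))   # usually [], sometimes [1.0]
      print(roulette_or_split(8.0))   # eight copies of weight 1.0
      print(roulette_or_split(1.5))   # unchanged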

  7. EXTENDED MONTE CARLO LOCALIZATION ALGORITHM FOR MOBILE SENSOR NETWORKS

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A real-world localization system for wireless sensor networks that adapts to mobility and an irregular radio propagation model is considered. The traditional range-based techniques and recent range-free localization schemes are not well suited for localization in mobile sensor networks, while the probabilistic approach of Bayesian filtering with particle-based density representations provides a comprehensive solution to such localization problems. Monte Carlo localization is a Bayesian filtering method that approximates the mobile node's location by a set of weighted particles. In this paper, an enhanced Monte Carlo localization algorithm, Extended Monte Carlo Localization (Ext-MCL), is proposed that is suitable for practical wireless network environments where the radio propagation model is irregular. Simulation results show that the proposal achieves better localization accuracy and a higher number of localizable nodes than previously proposed Monte Carlo localization schemes, not only for an ideal radio model but also for irregular ones.
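
    The particle-filter machinery underlying Monte Carlo localization can be sketched in one dimension as a predict-weight-resample loop; the motion and measurement noise levels below are invented, and the irregular radio model addressed by Ext-MCL is not represented.

      import numpy as np

      rng = np.random.default_rng(4)
      N = 500
      particles = rng.uniform(0.0, 100.0, N)     # initial belief over a 1-D corridor

      def mcl_step(particles, control, measurement, anchor=0.0,
                   motion_sigma=1.0, meas_sigma=2.0):
          # 1. prediction: propagate each particle through the noisy motion model
          particles = particles + control + rng.normal(0.0, motion_sigma, particles.size)
          # 2. correction: weight by the likelihood of the observed range to an anchor
          predicted_range = np.abs(particles - anchor)
          w = np.exp(-0.5 * ((measurement - predicted_range) / meas_sigma) ** 2)
          w /= w.sum()
          # 3. resampling: draw a new particle set proportionally to the weights
          idx = rng.choice(particles.size, size=particles.size, p=w)
          return particles[idx]

      true_pos = 20.0
      for step in range(10):
          true_pos += 1.0                                  # node moves one unit per step
          z = abs(true_pos) + rng.normal(0.0, 2.0)         # noisy range to the anchor at 0
          particles = mcl_step(particles, control=1.0, measurement=z)

      print("estimated position:", particles.mean(), "true position:", true_pos)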

  8. Monte Carlo simulations: Hidden errors from ``good'' random number generators

    Science.gov (United States)

    Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna

    1992-12-01

    The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating ``critical slowing down.'' We show how this method can yield incorrect answers due to subtle correlations in ``high quality'' random number generators.

  9. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    Rajeeva L Karandikar

    2006-04-01

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
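
    For readers who want to see the core of the method in code, a minimal random-walk Metropolis sampler targeting a standard normal density is sketched below; the target, proposal width and chain length are arbitrary illustrative choices.

      import math
      import random

      random.seed(5)

      def log_target(x):
          """Log of an (unnormalized) standard normal density."""
          return -0.5 * x * x

      def metropolis(n_samples, step=1.0, x0=0.0):
          samples, x = [], x0
          for _ in range(n_samples):
              proposal = x + random.gauss(0.0, step)        # symmetric random-walk proposal
              log_alpha = log_target(proposal) - log_target(x)
              if math.log(random.random()) < log_alpha:     # accept with probability min(1, alpha)
                  x = proposal
              samples.append(x)
          return samples

      chain = metropolis(50000)
      burned = chain[5000:]                                 # discard burn-in
      print("mean ~ 0        :", sum(burned) / len(burned))
      print("second moment ~1:", sum(v * v for v in burned) / len(burned))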

  10. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John

    2017-01-01

    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  11. Accelerating Monte Carlo Renderers by Ray Histogram Fusion

    Directory of Open Access Journals (Sweden)

    Mauricio Delbracio

    2015-03-01

    Full Text Available This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions. Based on this distance, the filter decides whether two pixels can share their rays or not. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR, or equivalently accelerates the rendering process by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo samples' color values, it can be naturally combined with all rendering effects.

  12. Carlo Ginzburg: anomaalia viitab normile / intervjueerinud Marek Tamm

    Index Scriptorium Estoniae

    Ginzburg, Carlo, 1939-

    2014-01-01

    An interview with the Italian historian Carlo Ginzburg on the occasion of the publication of his book "Ükski saar pole saar : neli pilguheitu inglise kirjandusele globaalsest vaatenurgast" in Estonian. The work was published by Tallinn University Press.

  13. Monte Carlo methods for light propagation in biological tissues

    OpenAIRE

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2016-01-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis–Hastings algori...

  14. de Finetti Priors using Markov chain Monte Carlo computations.

    Science.gov (United States)

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-07-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three way models for discrete exponential families using polynomial priors and Gröbner bases.

  15. Study of the Transition Flow Regime using Monte Carlo Methods

    Science.gov (United States)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  16. Monte Carlo Simulation of Optical Properties of Wake Bubbles

    Institute of Scientific and Technical Information of China (English)

    CAO Jing; WANG Jiang-An; JIANG Xing-Zhou; SHI Sheng-Wei

    2007-01-01

    Based on Mie scattering theory and the theory of multiple light scattering, the light scattering properties of air bubbles in a wake are analysed by Monte Carlo simulation. The results show that backscattering is enhanced obviously due to the existence of bubbles, especially with the increase of bubble density, and that it is feasible to use the Monte Carlo method to study the properties of light scattering by air bubbles.

  17. Successful combination of the stochastic linearization and Monte Carlo methods

    Science.gov (United States)

    Elishakoff, I.; Colombi, P.

    1993-01-01

    A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and nonlinear restoring force is considered. The proposed combination of the energy-wise linearization with the Monte Carlo method yields an error under 5 percent, which corresponds to a reduction of the error of the conventional stochastic linearization by a factor of 4.6.

  18. Geometrical and Monte Carlo projectors in 3D PET reconstruction

    OpenAIRE

    Aguiar, Pablo; Rafecas López, Magdalena; Ortuno, Juan Enrique; Kontaxakis, George; Santos, Andrés; Pavía, Javier; Ros, Domènec

    2010-01-01

    Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional geometrical Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of image voxel to the line-of-response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under c...

  19. Chemical accuracy from quantum Monte Carlo for the Benzene Dimer

    OpenAIRE

    Azadi, Sam; Cohen, R. E

    2015-01-01

    We report an accurate study of interactions between Benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory (DFT) using different van der Waals (vdW) functionals. In our QMC calculations, we use accurate correlated trial wave functions including three-body Jastrow factors, and backflow transformations. We consider two benzene molecules in the parallel displaced (PD) geometry, and fin...

  20. Event-chain Monte Carlo for classical continuous spin models

    Science.gov (United States)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.

  1. Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN

    CERN Document Server

    Waskita, A A; Akbar, Z; Handoko, L T; 10.1063/1.3462759

    2010-01-01

    The first cluster-based public computing facility for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.

  2. Monte Carlo methods and applications in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  3. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method and obtain a large sparse system of linear algebraic equations; we then use the Monte Carlo method to solve these linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.

  4. Monte Carlo Volcano Seismic Moment Tensors

    Science.gov (United States)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  5. Monte Carlo implementation of polarized hadronization

    Science.gov (United States)

    Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.

    2017-01-01

    We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.

  6. Quantum Monte Carlo with directed loops.

    Science.gov (United States)

    Syljuåsen, Olav F; Sandvik, Anders W

    2002-10-01

    We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The back-tracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that back tracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For LxL lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: chi( perpendicular )=0.0659+/-0.0002.

  7. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
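
    A generic sketch of how the Monte Carlo standard deviation enters such a comparison against an upper subcritical limit is given below; it simply evaluates the normal-tail probability that the true k-effective exceeds the USL for a given calculated value, bias and bias uncertainty (all numbers invented), and is not the paper's derivation of the optimal standard deviation.

      import math

      def p_exceeds_usl(k_calc, sigma_calc, bias, sigma_bias, usl):
          """Probability that the true k-effective exceeds the USL, assuming
          k_true ~ Normal(k_calc - bias, sqrt(sigma_calc**2 + sigma_bias**2))."""
          mu = k_calc - bias
          sd = math.sqrt(sigma_calc ** 2 + sigma_bias ** 2)
          z = (usl - mu) / sd
          return 0.5 * math.erfc(z / math.sqrt(2.0))   # upper normal tail

      # Example numbers (invented): USL of 0.95, a +0.005 benchmark bias known to 0.003.
      for sigma in (0.0005, 0.001, 0.002, 0.004):
          p = p_exceeds_usl(k_calc=0.940, sigma_calc=sigma,
                            bias=0.005, sigma_bias=0.003, usl=0.95)
          print(f"sigma = {sigma:.4f}  ->  P(k_true > USL) = {p:.5f}")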

  8. A Survey on Multilevel Monte Carlo for European Options

    Directory of Open Access Journals (Sweden)

    Masoud Moharamnejad

    2016-03-01

    Full Text Available One of the most applicable and common methods for pricing options is Monte Carlo simulation. Among the advantages of this method are its ease of use and its suitability for different types of options, including vanilla options and exotic options. On the other hand, the Monte Carlo estimator converges slowly, so that achieving an accuracy of ε for a d-dimensional problem can require a prohibitive amount of computation. Thus, various methods, known as variance reduction methods, have been proposed within the Monte Carlo framework to accelerate convergence. One of the most recent, proposed by Giles in 2006, is the multilevel Monte Carlo method. This method, besides reducing the computational complexity to O(ε^-2 (log ε)^2) when used with Euler discretization and to O(ε^-2) when used with Milstein discretization, can be combined with other variance reduction methods. In this article, multilevel Monte Carlo with Euler and Milstein discretization is adopted and its computational complexity is compared with that of the standard Monte Carlo method for pricing European call options.

  9. Perturbation Monte Carlo methods for tissue structure alterations.

    Science.gov (United States)

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients, however, the phase function can not be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
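
    For context, the conventional perturbation reweighting that the paper generalizes (perturbing the scattering and absorption coefficients while keeping the phase function fixed) can be sketched as a post-processing of stored photon histories; the weight ratio below is the standard perturbation-MC expression for a homogeneous perturbed region, and the history data and optical properties are invented for illustration.

      import math

      def pmc_weight_ratio(n_collisions, path_length, mus0, mua0, mus1, mua1):
          """Perturbation-MC reweighting for one photon history: the baseline
          medium (mus0, mua0) is replaced by (mus1, mua1) in the perturbed region,
          where the photon scattered n_collisions times over path_length."""
          mut0, mut1 = mus0 + mua0, mus1 + mua1
          return (mus1 / mus0) ** n_collisions * math.exp(-(mut1 - mut0) * path_length)

      # Invented baseline histories: (collisions in region, path length in cm, detected weight)
      histories = [(12, 1.4, 0.031), (8, 0.9, 0.054), (15, 1.8, 0.022)]

      mus0, mua0 = 10.0, 0.1          # baseline optical properties [1/cm] (example values)
      mus1, mua1 = 11.5, 0.1          # perturbed scattering coefficient (+15%)

      baseline = sum(w for _, _, w in histories)
      perturbed = sum(w * pmc_weight_ratio(n, s, mus0, mua0, mus1, mua1)
                      for n, s, w in histories)
      print("baseline detected weight :", baseline)
      print("perturbed detected weight:", perturbed)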

  10. Sequence specific resonance assignment via Multicanonical Monte Carlo search using an ABACUS approach.

    Science.gov (United States)

    Lemak, Alexander; Steren, Carlos A; Arrowsmith, Cheryl H; Llinás, Miguel

    2008-05-01

    ABACUS [Grishaev et al. (2005) Proteins 61:36-43] is a novel protocol for automated protein structure determination via NMR. ABACUS starts from molecular fragments defined by unassigned J-coupled spin-systems and involves a Monte Carlo stochastic search in assignment space, probabilistic sequence selection, and assembly of fragments into structures that are used to guide the stochastic search. Here, we report further development of the two main algorithms that increase the flexibility and robustness of the method. Performance of the BACUS [Grishaev and Llinás (2004) J Biomol NMR 28:1-101] algorithm was significantly improved through use of sequential connectivities available from through-bond correlated 3D-NMR experiments, and a new set of likelihood probabilities derived from a database of 56 ultra high resolution X-ray structures. A Multicanonical Monte Carlo procedure, Fragment Monte Carlo (FMC), was developed for sequence-specific assignment of spin-systems. It relies on an enhanced assignment sampling and provides the uncertainty of assignments in a quantitative manner. The efficiency of the protocol was validated on data from four proteins of between 68-116 residues, yielding 100% accuracy in sequence specific assignment of backbone and side chain resonances.

  11. Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles

    KAUST Repository

    Du, Shouhong

    2012-05-01

    This thesis presents Monte Carlo methods for simulations of the phase behavior of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state by Johnson et al., and results with L-J parameters of methane agree well with the experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between the variance and the number of Monte Carlo cycles, error propagation and random number generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported and compared with the experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis work lies in the study of the error analysis with respect to the number of Monte Carlo cycles and the number of particles in several interesting respects.
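
    The blocking analysis mentioned above can be written in a few lines; the sketch below applies a Flyvbjerg-Petersen style block-averaging error estimate to a synthetic correlated series standing in for Monte Carlo output.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic correlated data (AR(1) process) standing in for Monte Carlo output.
      n, rho = 2 ** 16, 0.9
      x = np.empty(n)
      x[0] = rng.normal()
      for i in range(1, n):
          x[i] = rho * x[i - 1] + rng.normal()

      def blocking_error(data):
          """Standard error of the mean at successive blocking levels."""
          data = np.asarray(data, dtype=float)
          errors = []
          while data.size >= 4:
              errors.append(data.std(ddof=1) / np.sqrt(data.size))
              # average neighbouring pairs, halving the series length
              data = 0.5 * (data[0::2][:data.size // 2] + data[1::2][:data.size // 2])
          return errors

      for level, err in enumerate(blocking_error(x)):
          print(f"block level {level:2d}: std error of mean = {err:.4f}")
      # The estimate grows with block size and plateaus once blocks are effectively
      # uncorrelated; the plateau value is the error to quote.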

  12. Risk Consideration and Cost Estimation in Construction Projects Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Claudius A. Peleskei

    2015-06-01

    Full Text Available Construction projects usually involve high investments. It is, therefore, a risky adventure for companies, as the actual costs of construction projects nearly always exceed the planned scenario. This is due to the various risks and the large uncertainty existing within this industry. Determination and quantification of risks and their impact on project costs within the construction industry are described as one of the most difficult areas. This paper analyses how the cost of construction projects can be estimated using Monte Carlo simulation. It investigates whether the different cost elements in a construction project follow a specific probability distribution. The research examines the effect of correlation between different project costs on the result of the Monte Carlo simulation. The paper finds that Monte Carlo simulation can be a helpful tool for risk managers and can be used for cost estimation of construction projects. The research has shown that cost distributions are positively skewed and that cost elements appear to be interdependent.
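
    The effect of correlation between cost elements that the paper investigates can be illustrated with a short simulation: skewed (lognormal) cost elements are drawn through a Gaussian copula with and without correlation, and the upper percentiles of the total cost are compared. All medians, spreads and correlation values below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      medians = np.array([1.0e6, 2.5e6, 0.8e6])   # three hypothetical cost elements
      sigmas = np.array([0.20, 0.35, 0.25])        # lognormal shape parameters

      corr = np.array([[1.0, 0.6, 0.3],            # assumed positive correlations
                       [0.6, 1.0, 0.4],
                       [0.3, 0.4, 1.0]])
      z = rng.multivariate_normal(np.zeros(3), corr, size=n)   # Gaussian copula
      total = (medians * np.exp(sigmas * z)).sum(axis=1)       # skewed marginals

      z_ind = rng.standard_normal((n, 3))                      # same marginals, independent
      total_ind = (medians * np.exp(sigmas * z_ind)).sum(axis=1)

      print(f"mean total cost               : {total.mean():,.0f}")
      print(f"95th percentile, correlated   : {np.percentile(total, 95):,.0f}")
      print(f"95th percentile, uncorrelated : {np.percentile(total_ind, 95):,.0f}")

    The correlated case typically shows a noticeably heavier upper tail, which is the practical reason the paper stresses modelling interdependence between cost elements.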

  13. Hamiltonian Monte Carlo algorithm for the characterization of hydraulic conductivity from the heat tracing data

    Science.gov (United States)

    Djibrilla Saley, A.; Jardani, A.; Soueid Ahmed, A.; Raphael, A.; Dupont, J. P.

    2016-11-01

    Estimating spatial distributions of the hydraulic conductivity in heterogeneous aquifers has always been an important and challenging task in hydrology. Generally, the hydraulic conductivity field is determined from hydraulic head or pressure measurements. In the present study, we propose to use temperature data as a source of information for characterizing the spatial distribution of the hydraulic conductivity field. To this end, we performed a laboratory sandbox experiment with the aim of imaging the heterogeneities of the hydraulic conductivity field from thermal monitoring. During the laboratory experiment, we injected a hot water pulse, which induces a heat plume motion in the sandbox. The induced plume was followed by a set of thermocouples placed in the sandbox. After the temperature data acquisition, we performed a hydraulic tomography using the stochastic Hybrid Monte Carlo approach, also called the Hamiltonian Monte Carlo (HMC) algorithm, to invert the temperature data. This algorithm is based on a combination of the Metropolis Monte Carlo method and the Hamiltonian dynamics approach. The parameterization of the inverse problem was done with the Karhunen-Loève (KL) expansion to reduce the dimensionality of the unknown parameters. Our approach provided a successful reconstruction of the hydraulic conductivity field with low computational effort.
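
    A generic Hamiltonian Monte Carlo step of the kind used in the study combines a sampled momentum, leapfrog integration of Hamiltonian dynamics and a Metropolis accept/reject test. The sketch below targets a simple correlated Gaussian standing in for the posterior over Karhunen-Loève coefficients; the target, step size and trajectory length are illustrative assumptions, not the paper's setup.

      import numpy as np

      rng = np.random.default_rng(0)

      def hmc_step(q, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20):
          # One HMC update: draw a momentum, integrate with the leapfrog
          # scheme, then accept or reject on the change in total energy.
          p = rng.standard_normal(q.shape)
          q_new, p_new = q.copy(), p.copy()
          p_new += 0.5 * step_size * grad_log_prob(q_new)      # half kick
          for _ in range(n_leapfrog - 1):
              q_new += step_size * p_new                        # drift
              p_new += step_size * grad_log_prob(q_new)         # full kick
          q_new += step_size * p_new
          p_new += 0.5 * step_size * grad_log_prob(q_new)       # final half kick
          h_old = -log_prob(q) + 0.5 * p @ p
          h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
          return (q_new, True) if rng.random() < np.exp(h_old - h_new) else (q, False)

      cov = np.array([[1.0, 0.8], [0.8, 1.0]])     # toy 2-parameter "posterior"
      prec = np.linalg.inv(cov)
      log_prob = lambda q: -0.5 * q @ prec @ q
      grad_log_prob = lambda q: -prec @ q

      q, samples, accepted = np.zeros(2), [], 0
      for _ in range(5000):
          q, ok = hmc_step(q, log_prob, grad_log_prob)
          accepted += ok
          samples.append(q.copy())
      samples = np.array(samples)
      print("acceptance rate :", accepted / len(samples))
      print("sample covariance:\n", np.cov(samples[1000:].T))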

  14. Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation

    Science.gov (United States)

    Bartel, Thomas; Stoudt, Sara; Possolo, Antonio

    2016-06-01

    An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
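
    The Monte Carlo propagation of calibration uncertainties can be sketched as follows: the calibration data are repeatedly perturbed within their stated uncertainties, the calibration function is refit each time, and the spread of the predictions gives the uncertainty of the measurement function. The data values, uncertainties and the ordinary quadratic fit below are placeholders (the paper's errors-in-variables fit would replace the fitting step); nothing here reproduces NIST's actual procedure.

      import numpy as np

      rng = np.random.default_rng(7)

      # hypothetical calibration data: applied forces (kN), readings and
      # their standard uncertainties (all values purely illustrative)
      force = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
      u_force = np.full_like(force, 0.02)
      reading = np.array([0.998, 2.003, 3.001, 4.006, 5.004])
      u_reading = np.full_like(reading, 0.003)

      n_mc = 10_000
      coefs = np.empty((n_mc, 3))
      for i in range(n_mc):
          f_i = force + u_force * rng.standard_normal(force.size)
          r_i = reading + u_reading * rng.standard_normal(reading.size)
          coefs[i] = np.polyfit(f_i, r_i, deg=2)   # quadratic stand-in for the calibration fit

      f_use = 35.0                                  # predict the reading at an intermediate force
      pred = coefs @ np.array([f_use ** 2, f_use, 1.0])
      print(f"predicted reading at {f_use} kN: {pred.mean():.4f} +/- {pred.std(ddof=1):.4f} (k=1)")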

  15. Monte Carlo calculations and experimental measurements of dosimetric parameters of the IRA-103Pd brachytherapy source.

    Science.gov (United States)

    Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S Hamed; Shavar, Arzhang

    2008-04-01

    This article presents a brachytherapy source, developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications, in which 103Pd is adsorbed onto a cylindrical silver rod. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed using TLD-GR200A circular chip dosimeters, following standard thermoluminescent-dosimetry methods, in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model 103Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-103Pd source in water was found to be 0.678 cGy h(-1) U(-1) with an approximate uncertainty of +/-0.1%. The anisotropy function, F(r, theta), and the radial dose function, g(r), of the IRA-103Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.

  16. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
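
    The telescoping structure of the multilevel Monte Carlo estimator described above can be written in a few lines. In the sketch below a cheap analytic "model" with a level-dependent discretisation error stands in for the GMsFEM solves; the functions, sample counts and random input are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(theta, level):
          # Stand-in for a PDE solve at resolution `level`: the exact quantity
          # of interest plus a discretisation error that shrinks with level.
          return np.sin(theta) + 2.0 ** (-level) * np.cos(3.0 * theta)

      def mlmc_estimate(n_samples_per_level):
          # E[Q_L] ~ E[Q_0] + sum_l E[Q_l - Q_{l-1}], using the same random
          # inputs for both levels inside each correction term so that the
          # correction variance is small.
          estimate = 0.0
          for level, n in enumerate(n_samples_per_level):
              theta = rng.normal(size=n)
              fine = model(theta, level)
              if level == 0:
                  estimate += fine.mean()
              else:
                  estimate += (fine - model(theta, level - 1)).mean()
          return estimate

      # many cheap coarse samples, few expensive fine ones
      print("MLMC estimate:", mlmc_estimate([20000, 4000, 800, 160]))
      print("reference E[sin(theta)], theta ~ N(0,1):", 0.0)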

  17. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  18. Finding organic vapors - a Monte Carlo approach

    Science.gov (United States)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.

  19. Reducing quasi-ergodicity in a double well potential by Tsallis Monte Carlo simulation

    OpenAIRE

    Iwamatsu, Masao; Okabe, Yutaka

    2000-01-01

    A new Monte Carlo scheme based on the system of Tsallis's generalized statistical mechanics is applied to a simple double well potential to calculate the canonical thermal average of the potential energy. Although we observed serious quasi-ergodicity when using the standard Metropolis Monte Carlo algorithm, this problem is largely reduced by the use of the new Monte Carlo algorithm. Therefore the ergodicity is guaranteed even for short Monte Carlo steps if we use this new canonical Monte Carlo scheme.

  20. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    Science.gov (United States)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    ..., neutron flux distribution. The validation of the measurement simulations with Monte Carlo transport codes for the design, optimization and data analysis of further P&DGNAA facilities is performed in collaboration with LMN CEA Cadarache. The performance of prompt gamma neutron activation analysis (PGNAA) for the nondestructive determination of actinides in small samples is investigated. The quantitative determination of actinides relies on the precise knowledge of partial neutron capture cross sections. To date, these cross sections are not sufficiently accurate for analytical purposes. The goal of the TANDEM (Trans-uranium Actinides' Nuclear Data - Evaluation and Measurement) Collaboration is the evaluation of these cross sections. Cross sections are measured using prompt gamma activation analysis facilities in Budapest and Munich. Geant4 is used to optimally design the detection system with Compton suppression. Furthermore, for the evaluation of the cross sections the results must be corrected for the self-attenuation of the prompt gammas within the sample. In the framework of a cooperation, RWTH Aachen University, Forschungszentrum Jülich and Siemens AG will study the feasibility of a compact Neutron Imaging System for Radioactive waste Analysis (NISRA). The system is based on a 14 MeV neutron source and an advanced detector system (a-Si flat panel) linked to an exclusive converter/scintillator for fast neutrons. For shielding and radioprotection studies the codes MCNPX and Geant4 were used. The two codes were benchmarked for processing time and for accuracy of the neutron and gamma fluxes. The detector response was also simulated with Geant4 to optimize components of the system.

  1. Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters

    Energy Technology Data Exchange (ETDEWEB)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made, and the bias is found to be at least linear with respect to the time step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  2. Quantum Monte Carlo methods and lithium cluster properties

    Energy Technology Data Exchange (ETDEWEB)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made, and the bias is found to be at least linear with respect to the time step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  3. Tests of General relativity with planetary orbits and Monte Carlo simulations

    CERN Document Server

    Fienga, A; Exertier, P; Manche, H; Gastineau, M

    2014-01-01

    Based on the newly developed planetary ephemerides INPOP13c, we determine acceptable intervals of General Relativity violation by considering simultaneously the PPN parameters $\beta$ and $\gamma$, the flattening of the Sun $J_{2}^\odot$ and the time variation of the gravitational mass of the Sun $\mu$, using Monte Carlo simulations coupled with a basic genetic algorithm. Possible time variations of the gravitational constant G are also deduced. We discuss the best choice of goodness-of-fit indicators for each run, and limits consistent with General Relativity are obtained simultaneously.

  4. The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs

    Science.gov (United States)

    Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca

    2008-09-01

    In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.

  5. Response and Monte Carlo evaluation of a reference ionization chamber for radioprotection level at calibration laboratories

    Science.gov (United States)

    Neves, Lucio P.; Vivolo, Vitor; Perini, Ana P.; Caldas, Linda V. E.

    2015-07-01

    A special parallel plate ionization chamber, inserted in a slab phantom for the determination of the personal dose equivalent Hp(10), was developed and characterized in this work. This ionization chamber has collecting electrodes and a window made of graphite, with the walls and phantom made of PMMA. The tests comprise an experimental evaluation following international standards and Monte Carlo simulations with the PENELOPE code to evaluate the design of this new dosimeter. The experimental tests were conducted employing the N-60 radioprotection-level radiation quality established at IPEN, and all results were within the recommended standards.

  6. Modeling and Monte Carlo simulation of nucleation and growth of UV/low-temperature-induced nanostructures

    Science.gov (United States)

    Flicstein, Jean; Pata, S.; Chun, L. S. H. K.; Palmier, Jean F.; Courant, J. L.

    1998-05-01

    A model for ultraviolet-induced chemical vapor deposition (UV CVD) of a-SiN:H is described. In the simulation of the UV CVD process, the creation of activated charged centers, species incorporation, surface diffusion, and desorption are considered as the elementary steps of the photonucleation and photodeposition mechanisms. The process is characterized by two surface sticking coefficients. Surface diffusion of species is modeled with a Gaussian distribution. A real-time Monte Carlo method is used to determine photonucleation and photodeposition rates in nanostructures. Comparison of experimental and simulation results for a-SiN:H shows that the model predicts the temporal evolution of the morphology under operating conditions down to atomistic resolution.

  7. STARlight: A Monte Carlo simulation program for ultra-peripheral collisions of relativistic ions

    Science.gov (United States)

    Klein, Spencer R.; Nystrand, Joakim; Seger, Janet; Gorbunov, Yuri; Butterworth, Joey

    2017-03-01

    Ultra-peripheral collisions (UPCs) have been a significant source of study at RHIC and the LHC. In these collisions, the two colliding nuclei interact electromagnetically, via two-photon or photonuclear interactions, but not hadronically; they effectively miss each other. Photonuclear interactions produce vector meson states or more general photonuclear final states, while two-photon interactions can produce lepton or meson pairs, or single mesons. In these interactions, the collision geometry plays a major role. We present a program, STARlight, that calculates the cross-sections for a variety of UPC final states and also creates, via Monte Carlo simulation, events for use in determining detector efficiency.

  8. Shuttle vertical fin flowfield by the direct simulation Monte Carlo method

    Science.gov (United States)

    Hueser, J. E.; Brock, F. J.; Melfi, L. T.

    1985-01-01

    The flow properties in a model flowfield, simulating the shuttle vertical fin, were determined using the Direct Simulation Monte Carlo method. The case analyzed corresponds to an orbit height of 225 km with the freestream velocity vector orthogonal to the fin surface. Contour plots of the flowfield distributions of density, temperature, velocity and flow angle are presented. The results also include the mean molecular collision frequency (which reaches 1/60 sec near the surface), the collision frequency density (which approaches 7 x 10^18 per cubic meter per second at the surface) and the mean free path (19 m at the surface).

  9. First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation

    Science.gov (United States)

    Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; Hirai, M.; Kumano, S.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-12-01

    We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive e+e- annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of fragmentation functions using the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  10. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S. [Paul-Scherrer-Institute Wuerenlingen and Villigen, Villigen (Switzerland); Arbuzov, A. [Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics; Balossini, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Pavia (IT)] (and others)

    2009-12-15

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low-energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on tau decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained with energy scans as well as with radiative return, to determine luminosities and tau decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  11. STARlight: A Monte Carlo simulation program for ultra-peripheral collisions of relativistic ions

    CERN Document Server

    Klein, Spencer R; Seger, Janet; Gorbunov, Yuri; Butterworth, Joey

    2016-01-01

    Ultra-peripheral collisions (UPCs) have been a significant source of study at RHIC and the LHC. In these collisions, the two colliding nuclei interact electromagnetically, via two-photon or photonuclear interactions, but not hadronically; they effectively miss each other. Photonuclear interactions produce vector meson states or more general photonuclear final states, while two-photon interactions can produce lepton or meson pairs, or single mesons. In these interactions, the collision geometry plays a major role. We present a program, STARlight, that calculates the cross-sections for a variety of UPC final states and also creates, via Monte Carlo simulation, events for use in determining detector efficiency.

  12. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S. [Paul Scherrer Inst., Wuerenlingen and Villigen, Villigen PSI (Switzerland); Arbuzov, A.; Kuraev, E.A. [Joint Inst. for Nuclear Research, Bogoliubov Lab. of Theoretical Physics, Dubna (Russian Federation); Balossini, G.; Bignamini, C.; Montagna, G. [Univ. di Pavia, Dipt. di Fisica Nucleare e Teorica, Pavia (Italy); INFN, Sezione di Pavia, Pavia (Italy); Beltrame, P. [CERN, Physics Dept., Geneve (Switzerland); Bonciani, R. [Univ. Joseph Fourier/CNRS-IN2P3/INPG, Lab. de Physique Subatomique et de Cosmologie, Grenoble (France); Carloni Calame, C.M. [Univ. of Southampton, School of Physics and Astronomy, Southampton (United Kingdom); Cherepanov, V.; Eidelman, S.; Fedotovich, G.V. [Budker Inst. of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirsk State Univ., Novosibirsk (Russian Federation); Czakon, M. [RWTH Aachen Univ., Inst. fuer Theoretische Physik E, Aachen (Germany); Czyz, H.; Gluza, J.; Gunia, M. [Univ. of Silesia, Inst. of Physics, Katowice (Poland); Denig, A.; Hafner, A.; Mueller, S.E. [Johannes Gutenberg-Univ. Mainz, Inst. fuer Kernphysik, Mainz (Germany); Ferroglia, A. [Johannes Gutenberg-Univ., Inst. fuer Physik, THEP, Mainz (Germany); Grzelinska, A.; Jadach, S.; Was, Z. [Inst. of Nuclear Physics Polish Academy of Sciences, Cracow (Poland); Ignatov, F.; Lukin, P.; Sibidanov, A.L. [Budker Inst. of Nuclear Physics, Novosibirsk (Russian Federation); Jegerlehner, F. [Inst. fuer Physik Humboldt-Univ. zu Berlin, Berlin (Germany); Univ. of Silesia, Inst. of Physics, Katowice (Poland); Deutsches Elektronen-Synchrotron, DESY, Zeuthen (Germany); Kalinowski, A. [LLR-Ecole Polytechnique, Palaiseau (France); Kluge, W. [Univ. Karlsruhe, Inst. fuer Experimentelle Kernphysik, Karlsruhe (Germany); Korchin, A. [National Science Center ' Kharkov Inst. of Physics and Technology' , Kharkov (Ukraine); Kuehn, J.H. [Univ. Karlsruhe, Inst. fuer Theoretische Teilchenphysik, Karlsruhe (Germany)] [and others

    2010-04-15

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low-energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on tau decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained with energy scans as well as with radiative return, to determine luminosities and tau decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  13. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    CERN Document Server

    Actis, S; Arbuzov, A; Balossini, G; Beltrame, P; Bignamini, C; Bonciani, R; Carloni Calame, C M; Cherepanov, V; Czakon, M; Czyz, H; Denig, A; Eidelman, S; Fedotovich, G V; Ferroglia, A; Gluza, J; Grzeli nska, A; Gunia, M; Hafner, A; Ignatov, F; Jadach, S; Jegerlehner, F; Kalinowski, A; Kluge, W; Korchin, A; Kuhn, J H; Kuraev, E A; Lukin, P; Mastrolia, P; Montagna, G; Muller, S E; Nguyen, F; Nicrosini, O; Nomura, D; Pakhlova, G; Pancheri, G; Passera, M; Penin, A; Piccinini, F; Placzek, W; Przedzinski, T; Remiddi, E; Riemann, T; Rodrigo, G; Roig, P; Shekhovtsova, O; Shen, C P; Sibidanov, A L; Teubner, T; Trentadue, L; Venanzoni, G; van der Bij, J J; Wang, P; Ward, B F L; Was, Z; Worek, M; Yuan, C Z

    2010-01-01

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on tau decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained with energy scans as well as with radiative return, to determine luminosities and tau decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed.

  14. A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling

    Science.gov (United States)

    Landau, D. P.; Tsai, Shan-Ho; Exler, M.

    2004-10-01

    We describe a Monte Carlo algorithm for doing simulations in classical statistical physics in a different way. Instead of sampling the probability distribution at a fixed temperature, a random walk is performed in energy space to extract an estimate for the density of states. The probability can be computed at any temperature by weighting the density of states by the appropriate Boltzmann factor. Thermodynamic properties can be determined from suitable derivatives of the partition function and, unlike "standard" methods, the free energy and entropy can also be computed directly. To demonstrate the simplicity and power of the algorithm, we apply it to models exhibiting first-order or second-order phase transitions.
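
    The random walk in energy space described above can be sketched for a small Ising lattice: the running estimate of the density of states biases the walk toward rarely visited energies, and once the histogram is roughly flat the modification factor is reduced. The lattice size, flatness criterion and stopping rule below are crude illustrative choices.

      import numpy as np

      rng = np.random.default_rng(5)

      L = 4                                       # tiny 2D Ising lattice
      spins = rng.choice([-1, 1], size=(L, L))

      def energy(s):
          return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

      e_min, e_max = -2 * L * L, 2 * L * L        # energies are spaced by 4
      n_bins = (e_max - e_min) // 4 + 1
      bin_of = lambda e: (e - e_min) // 4

      log_g = np.zeros(n_bins)                    # running ln(density of states)
      hist = np.zeros(n_bins)
      f, e = 1.0, energy(spins)                   # ln of the modification factor

      while f > 1e-3:
          for _ in range(10_000):
              i, j = rng.integers(L, size=2)
              dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              e_new = e + dE
              # Wang-Landau acceptance: favour energies visited less often so far
              if rng.random() < np.exp(log_g[bin_of(e)] - log_g[bin_of(e_new)]):
                  spins[i, j] *= -1
                  e = e_new
              log_g[bin_of(e)] += f
              hist[bin_of(e)] += 1
          visited = hist[hist > 0]                # crude flatness check
          if visited.min() > 0.8 * visited.mean():
              f *= 0.5
              hist[:] = 0

      # thermodynamics at any temperature by Boltzmann-weighting the density of states
      energies = e_min + 4 * np.arange(n_bins)
      mask = log_g > 0
      for T in (1.0, 2.269, 5.0):
          w = log_g[mask] - energies[mask] / T
          w -= w.max()
          mean_e = np.sum(energies[mask] * np.exp(w)) / np.exp(w).sum()
          print(f"T = {T}: <E> per spin ~ {mean_e / (L * L):+.3f}")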

  15. Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model

    Science.gov (United States)

    Heller, Urs M.

    1988-01-01

    An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings beta greater than about 2.2 agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it appear at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.

  16. First Monte Carlo analysis of fragmentation functions from single-inclusive $e^+ e^-$ annihilation

    CERN Document Server

    Sato, N; Melnitchouk, W; Hirai, M; Kumano, S; Accardi, A

    2016-01-01

    We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of fragmentation functions using the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  17. Fitting Spectral Energy Distributions of AGN - A Markov Chain Monte Carlo Approach

    CERN Document Server

    Rivera, Gabriela Calistro; Hennawi, Joseph F; Hogg, David W

    2014-01-01

    We present AGNfitter: a Markov Chain Monte Carlo algorithm developed to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGN) with different physical models of AGN components. This code is well suited to determine in a robust way multiple parameters and their uncertainties, which quantify the physical processes responsible for the panchromatic nature of active galaxies and quasars. We describe the technicalities of the code and test its capabilities in the context of X-ray selected obscured AGN using multiwavelength data from the XMM-COSMOS survey.

  18. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, it may be possible to push macrotasking to its limit, with each test flight, and each split test flight, being a separate task.

  19. VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Kartik Dwivedi

    2013-12-01

    Full Text Available In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of the different constituent parts. The proposed approach combines local information of individual body parts and other spatial constraints influenced by neighboring parts. The movement of the relative parts of the articulated body is modeled with local information of displacements from the Markov network and global information from other neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.

  20. Meaningful timescales from Monte Carlo simulations of molecular systems

    CERN Document Server

    Costa, Liborio I

    2016-01-01

    A new Markov Chain Monte Carlo method for simulating the dynamics of molecular systems with atomistic detail is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  1. Sequential Monte Carlo on large binary sampling spaces

    CERN Document Server

    Schäfer, Christian

    2011-01-01

    A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is binary parametric families which take correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binary...

  2. Introduction to the variational and diffusion Monte Carlo methods

    CERN Document Server

    Toulouse, Julien; Umrigar, C J

    2015-01-01

    We provide a pedagogical introduction to the two main variants of real-space quantum Monte Carlo methods for electronic-structure calculations: variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC). Assuming no prior knowledge on the subject, we review in depth the Metropolis-Hastings algorithm used in VMC for sampling the square of an approximate wave function, discussing details important for applications to electronic systems. We also review in detail the more sophisticated DMC algorithm within the fixed-node approximation, introduced to avoid the infamous Fermionic sign problem, which allows one to sample a more accurate approximation to the ground-state wave function. Throughout this review, we discuss the statistical methods used for evaluating expectation values and statistical uncertainties. In particular, we show how to estimate nonlinear functions of expectation values and their statistical uncertainties.
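
    The Metropolis sampling of the square of a trial wave function that the review walks through can be illustrated on the hydrogen atom, where the trial function exp(-alpha*r) has the analytic local energy E_L = -alpha^2/2 + (alpha - 1)/r (atomic units) and the exact ground state is recovered at alpha = 1. Step size, sample counts and equilibration length below are arbitrary illustrative choices, and the quoted error bar ignores autocorrelation.

      import numpy as np

      rng = np.random.default_rng(11)

      def vmc_hydrogen(alpha, n_steps=100_000, step=0.5):
          # Variational Monte Carlo: Metropolis sampling of |psi|^2 for
          # psi = exp(-alpha*r) and averaging of the local energy.
          r = np.array([1.0, 0.0, 0.0])
          local_energies = []
          for n in range(n_steps):
              r_new = r + step * rng.uniform(-1.0, 1.0, size=3)
              # acceptance ratio |psi(r_new)|^2 / |psi(r)|^2
              ratio = np.exp(-2.0 * alpha * (np.linalg.norm(r_new) - np.linalg.norm(r)))
              if rng.random() < ratio:
                  r = r_new
              if n > 10_000:                      # discard equilibration steps
                  local_energies.append(-0.5 * alpha ** 2 + (alpha - 1.0) / np.linalg.norm(r))
          e = np.array(local_energies)
          return e.mean(), e.std(ddof=1) / np.sqrt(e.size)   # naive error bar

      for alpha in (0.8, 1.0, 1.2):
          mean, err = vmc_hydrogen(alpha)
          print(f"alpha = {alpha:.1f}:  <E_L> = {mean:.4f} +/- {err:.4f} Ha  (exact minimum -0.5 at alpha = 1)")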

  3. Monte Carlo Methods for Tempo Tracking and Rhythm Quantization

    CERN Document Server

    Cemgil, A T; 10.1613/jair.1121

    2011-01-01

    We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest that the sequential methods perform better. The methods can be applied in both online and batch scenarios, such as tempo tracking and transcription.
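
    A minimal sequential Monte Carlo (particle filter) of the kind compared in the paper can be sketched on a toy linear-Gaussian state-space model standing in for the tempo variable; the switching-state structure and the actual tempo-tracking model of the paper are not reproduced, and all noise levels below are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      T, n_particles = 100, 500
      q_std, r_std = 0.05, 0.2                    # process and observation noise

      true_x = 1.0 + np.cumsum(q_std * rng.standard_normal(T))   # hidden "tempo"
      obs = true_x + r_std * rng.standard_normal(T)              # noisy observations

      particles = 1.0 + 0.1 * rng.standard_normal(n_particles)
      estimates = []
      for y in obs:
          particles = particles + q_std * rng.standard_normal(n_particles)      # propagate
          w = np.exp(-0.5 * ((y - particles) / r_std) ** 2)                      # likelihood weights
          w /= w.sum()
          estimates.append(np.sum(w * particles))                                # filtered estimate
          particles = particles[rng.choice(n_particles, size=n_particles, p=w)]  # resample

      rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
      print(f"particle-filter RMSE: {rmse:.3f}  (observation noise std {r_std})")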

  4. Efficiency of Monte Carlo sampling in chaotic systems.

    Science.gov (United States)

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance-sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of the finite-time Lyapunov exponent in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained in uniform-sampling simulations, and (ii) shows suboptimal polynomial scaling, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.

  5. Monte Carlo simulation of laser attenuation characteristics in fog

    Science.gov (United States)

    Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi

    2011-06-01

    Based on the Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of spherical fog drops is calculated. For the transmission attenuation of a laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in the MATLAB language. The results of the Monte Carlo method in this paper are compared with the results of the single scattering method. The results show that the influence of multiple scattering needs to be considered when the visibility is low, where single scattering calculations have larger errors. The phenomenon of multiple scattering can be interpreted better when the Monte Carlo method is used to calculate the attenuation ratio of laser transmission in fog.
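
    The difference between a single-scattering (Beer-Lambert) estimate and a multiple-scattering Monte Carlo estimate, which is the point made above, can be reproduced with a toy photon walk through a homogeneous slab of fog. Isotropic scattering is used instead of the Mie phase function, and the extinction, albedo and thickness values are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      def transmitted_fraction(extinction, albedo, thickness, n_photons=100_000):
          # Fraction of photons emerging from a homogeneous fog slab:
          # exponential free paths, absorption with probability (1 - albedo)
          # at each collision, isotropic scattering otherwise.
          transmitted = 0
          for _ in range(n_photons):
              z, mu = 0.0, 1.0                    # depth and direction cosine
              while True:
                  z += mu * rng.exponential(1.0 / extinction)
                  if z >= thickness:
                      transmitted += 1
                      break
                  if z < 0.0 or rng.random() > albedo:
                      break                       # left through the front face or absorbed
                  mu = rng.uniform(-1.0, 1.0)     # isotropic scatter
          return transmitted / n_photons

      single = np.exp(-1.0 * 2.0)                 # Beer-Lambert, unscattered light only
      multi = transmitted_fraction(extinction=1.0, albedo=0.9, thickness=2.0)
      print(f"single-scattering estimate: {single:.3f}   with multiple scattering: {multi:.3f}")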

  6. The Monte Carlo method in quantum field theory

    CERN Document Server

    Morningstar, C

    2007-01-01

    This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
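
    The local Metropolis updating of a Euclidean path integral that these lectures build up to can be shown on the simplest possible lattice "field theory", the harmonic oscillator in 0+1 dimensions. The lattice size, spacing and sweep counts below are illustrative, and the measured <x^2> differs slightly from the continuum value 1/(2*m*omega) because of discretisation.

      import numpy as np

      rng = np.random.default_rng(4)

      N, a, m, omega = 64, 0.5, 1.0, 1.0          # lattice sites, spacing, mass, frequency
      x = np.zeros(N)

      def action_diff(x, i, x_new):
          # Change in the Euclidean action S = sum_i [ m/(2a)*(x_{i+1}-x_i)^2
          # + a*(1/2)*m*omega^2*x_i^2 ] when site i is set to x_new.
          ip, im = (i + 1) % N, (i - 1) % N
          def s(xi):
              return (m / (2 * a)) * ((x[ip] - xi) ** 2 + (xi - x[im]) ** 2) \
                     + a * 0.5 * m * omega ** 2 * xi ** 2
          return s(x_new) - s(x[i])

      x2_samples = []
      for sweep in range(10_000):
          for i in range(N):                       # local Metropolis updates
              x_new = x[i] + rng.uniform(-1.0, 1.0)
              if rng.random() < np.exp(-action_diff(x, i, x_new)):
                  x[i] = x_new
          if sweep > 2_000 and sweep % 10 == 0:    # thin to tame autocorrelations
              x2_samples.append(np.mean(x ** 2))

      print(f"<x^2> = {np.mean(x2_samples):.3f}  (continuum value 0.500)")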

  7. Properties of Reactive Oxygen Species by Quantum Monte Carlo

    CERN Document Server

    Zen, Andrea; Guidoni, Leonardo

    2014-01-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of Chemistry, Biology and Atmospheric Science. Nevertheless, the electronic structure of such species is a challenge for ab-initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal ...

  8. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Science.gov (United States)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
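
    The iteration pattern described in the abstract, more histories and a smaller relaxation factor at each step, can be mimicked with toy placeholder solvers. The neutronics and thermal-hydraulics functions, the feedback strength and the particular history/relaxation schedules below are invented for illustration and do not reproduce the paper's optimised scheme.

      import numpy as np

      rng = np.random.default_rng(10)

      n_nodes = 20

      def monte_carlo_power(temperature, n_histories):
          # Placeholder Monte Carlo neutronics: an axial power shape with a weak
          # negative temperature feedback plus 1/sqrt(N) statistical noise.
          shape = 1.0 + 0.2 * np.sin(np.linspace(0.0, np.pi, n_nodes))
          feedback = -0.05 * (temperature - temperature.mean()) / 100.0
          return shape + feedback + rng.standard_normal(n_nodes) / np.sqrt(n_histories)

      def thermal_hydraulics(power):
          # Placeholder thermal-hydraulics: temperature follows the local power.
          return 300.0 + 50.0 * power

      power = np.ones(n_nodes)
      temperature = np.full(n_nodes, 350.0)
      base_histories = 1_000
      for it in range(1, 11):
          n_hist = base_histories * it             # growing number of neutron histories
          alpha = 1.0 / it                         # shrinking relaxation factor
          tally = monte_carlo_power(temperature, n_hist)
          power = (1.0 - alpha) * power + alpha * tally    # relaxed power update
          temperature = thermal_hydraulics(power)
          print(f"iteration {it:2d}: histories = {n_hist:6d}  relaxation = {alpha:.2f}  "
                f"peak power = {power.max():.3f}")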

  9. TAKING THE NEXT STEP WITH INTELLIGENT MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Booth, T.E.; Carlson, J.A. [and others

    2000-10-01

    For many scientific calculations, Monte Carlo is the only practical method available. Unfortunately, standard Monte Carlo methods converge slowly, as the square root of the computer time. We have shown, both numerically and theoretically, that the convergence rate can be increased dramatically if the Monte Carlo algorithm is allowed to adapt based on what it has learned from previous samples. As the learning continues, computational efficiency increases, often geometrically fast. The particle transport work achieved geometric convergence for a two-region problem as well as for problems with rapidly changing nuclear data. The statistics work provided theoretical proof of geometric convergence for continuous transport problems and promising initial results for airborne migration of particles. The statistical physics work applied adaptive methods to a variety of physical problems, including the three-dimensional Ising glass, quantum scattering, and eigenvalue problems.

  10. Monte Carlo tests of the ELIPGRID-PC algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination, called hot spots, has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
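
    The quantity being validated, the probability that a regular sampling grid detects a randomly placed and oriented elliptical hot spot, is itself easy to estimate by Monte Carlo, which is essentially what the validation does. The sketch below uses an infinite square grid and illustrative hot-spot dimensions; it is not the ELIPGRID-PC code or the exact test cases of the report.

      import numpy as np

      rng = np.random.default_rng(9)

      def hit_probability(semi_major, shape, grid_spacing, n_trials=50_000):
          # Probability that at least one node of a square sampling grid falls
          # inside an elliptical hot spot with random centre and orientation.
          semi_minor = shape * semi_major
          reach = int(np.ceil(semi_major / grid_spacing)) + 1
          gx, gy = np.meshgrid(np.arange(-reach, reach + 1) * grid_spacing,
                               np.arange(-reach, reach + 1) * grid_spacing)
          hits = 0
          for _ in range(n_trials):
              cx, cy = rng.uniform(0.0, grid_spacing, size=2)   # centre within one grid cell
              theta = rng.uniform(0.0, np.pi)                   # random orientation
              c, s = np.cos(theta), np.sin(theta)
              dx, dy = gx - cx, gy - cy
              u = (dx * c + dy * s) / semi_major                # rotate into the ellipse frame
              v = (dy * c - dx * s) / semi_minor
              hits += np.any(u * u + v * v <= 1.0)
          return hits / n_trials

      print("P(hot spot detected) ~", hit_probability(semi_major=0.6, shape=0.5, grid_spacing=1.0))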

  11. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid

    2012-01-01

    This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. Standard Monte Carlo (SMC) simulations may conceptually be used for this purpose as an alternative to the popular Peaks-Over-Threshold (POT) method. However, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy. … is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine, with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated …

  12. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  13. Applicability of Quasi-Monte Carlo for lattice systems

    CERN Document Server

    Ammon, Andreas; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Müller-Preussker, Micheal

    2013-01-01

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like $N^{-1/2}$, where $N$ is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
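
    The error-scaling comparison at the heart of the project can be reproduced with a hand-rolled low-discrepancy (Halton) sequence against pseudo-random sampling on a smooth two-dimensional test integral with a known value; the integrand and sample sizes below are arbitrary illustrative choices, not the lattice observables studied in the paper.

      import numpy as np
      from math import erf, sqrt, pi

      rng = np.random.default_rng(8)

      def halton(n, base):
          # First n points of the van der Corput / Halton sequence in one dimension.
          seq = np.zeros(n)
          for i in range(n):
              f, r, k = 1.0, 0.0, i + 1
              while k > 0:
                  f /= base
                  r += f * (k % base)
                  k //= base
              seq[i] = r
          return seq

      integrand = lambda x, y: np.exp(-(x ** 2 + y ** 2))     # smooth test integrand
      exact = (sqrt(pi) / 2 * erf(1.0)) ** 2                  # its integral over the unit square

      for n in (256, 1024, 4096, 16384):
          x_mc, y_mc = rng.random(n), rng.random(n)
          mc_err = abs(integrand(x_mc, y_mc).mean() - exact)
          x_q, y_q = halton(n, 2), halton(n, 3)               # bases 2 and 3 for the two dimensions
          qmc_err = abs(integrand(x_q, y_q).mean() - exact)
          print(f"n = {n:6d}   |MC error| = {mc_err:.2e}   |quasi-MC error| = {qmc_err:.2e}")

    The quasi-Monte Carlo column shrinks roughly like 1/N (up to logarithmic factors) while the pseudo-random column shrinks like 1/sqrt(N), which is the improvement the abstract refers to.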

  14. Localized Polycentric Orbital Basis Set for Quantum Monte Carlo Calculations Derived from the Decomposition of Kohn-Sham Optimized Orbitals

    Directory of Open Access Journals (Sweden)

    Claudio Amovilli

    2016-02-01

    Full Text Available In this work, we present a simple decomposition scheme of the Kohn-Sham optimized orbitals which is able to provide a reduced basis set, made of localized polycentric orbitals, specifically designed for Quantum Monte Carlo. The decomposition follows a standard Density functional theory (DFT calculation and is based on atomic connectivity and shell structure. The new orbitals are used to construct a compact correlated wave function of the Slater–Jastrow form which is optimized at the Variational Monte Carlo level and then used as the trial wave function for a final Diffusion Monte Carlo accurate energy calculation. We are able, in this way, to capture the basic information on the real system brought by the Kohn-Sham orbitals and use it for the calculation of the ground state energy within a strictly variational method. Here, we show test calculations performed on some small selected systems to assess the validity of the proposed approach in a molecular fragmentation, in the calculation of a barrier height of a chemical reaction and in the determination of intermolecular potentials. The final Diffusion Monte Carlo energies are in very good agreement with the best literature data within chemical accuracy.

  15. Implementation of Monte Carlo Simulations for the Gamma Knife System

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, W [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Huang, D [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Lee, L [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Feng, J [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Morris, K [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Calugaru, E [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Burman, C [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Li, J [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States); Ma, C-M [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States)

    2007-06-15

    Currently the Gamma Knife system is accompanied with a treatment planning system, Leksell GammaPlan (LGP) which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindric stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected in the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty two Patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogenous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone the dose in a 16cm diameter spherical QA phantom is measured with and without a 1.5mm Lead-covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5mm Lead-covering are 89.8% based on measurements and 89.2% according to Monte Carlo for a 18mm-collimator Helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 {+-} 0.90) % for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

  16. William Carlos Williams, Literacy, and the Imagination.

    Science.gov (United States)

    Kazemak, Francis E.

    1987-01-01

    Argues that the cultivation of the imagination in schools and colleges is largely ignored because of utilitarian biases in the education system, where achievement is determined by quantitative measures of cognitive skills. Discusses Williams' view that acts of the imagination transform reality and applies view to English education. (JG)

  17. Calculating Pi Using the Monte Carlo Method

    Science.gov (United States)

    Williamson, Timothy

    2013-01-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During…

  18. A standard Event Class for Monte Carlo Generators

    Institute of Scientific and Technical Information of China (English)

    L.A.Gerren; M.Fischler

    2001-01-01

    StdHepC++ [1] is a CLHEP [2] Monte Carlo event class library which provides a common interface to Monte Carlo event generators. This work is an extensive redesign of the StdHep Fortran interface to use the full power of object-oriented design. A generated event maps naturally onto the Directed Acyclic Graph concept, and we have used the HepMC classes to implement this. The full implementation allows the user to combine events to simulate beam pileup and to access them transparently as though they were a single event.

  19. Parallelization of Monte Carlo codes MVP/GMVP

    Energy Technology Data Exchange (ETDEWEB)

    Nagaya, Yasunobu; Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sasaki, Makoto

    1998-03-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel processing platforms. The platforms reported are a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and a distributed-memory scalar-parallel computer (Hitachi SR2201). In general, ideal speedup could be obtained for large-scale problems, but the parallelization efficiency worsened as the batch size per processing element (PE) became smaller. (author)

  20. Parton distribution functions in Monte Carlo factorisation scheme

    Science.gov (United States)

    Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.

    2016-12-01

    A next step in the development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied by a complete description of parton distribution functions in a dedicated Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.