WorldWideScience

Sample records for modeling effort monte

  1. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
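
    The reduction described above rests on the Hubbard-Stratonovich transformation (named explicitly in records 4 and 5 below). As a hedged sketch, for a single attractive two-body term -(1/2)|λ|Ô² and imaginary-time slice Δβ, the identity reads:

    ```latex
    % Schematic Hubbard-Stratonovich linearization of one attractive two-body term;
    % repulsive terms require an imaginary coupling, the origin of the sign problem
    % mentioned in records 4 and 5.
    e^{\frac{1}{2}\,\Delta\beta\,|\lambda|\,\hat{O}^{2}}
      = \sqrt{\frac{\Delta\beta\,|\lambda|}{2\pi}}
        \int_{-\infty}^{\infty} d\sigma\;
        e^{-\frac{1}{2}\,\Delta\beta\,|\lambda|\,\sigma^{2}}\,
        e^{\Delta\beta\,|\lambda|\,\sigma\,\hat{O}}
    ```

    The quadratic operator is thus traded for a one-body operator Ô coupled linearly to a fluctuating field σ, and the integral over σ is what is evaluated stochastically.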

  2. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  3. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these improve computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
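
    The forced-transition idea can be illustrated in a few lines. The sketch below is a hypothetical one-component toy, not the paper's formulation: a rare exponential failure before mission time T is importance-sampled by forcing the transition into [0, T] and carrying the weight 1 − e^(−λT).

    ```python
    import math
    import random

    # Toy illustration (hypothetical, one component) of the "forced transition"
    # variance-reduction technique: estimate P(failure before mission time T)
    # for an exponential failure with small rate lam, an event analog sampling
    # almost never observes when lam*T << 1.

    def analog_estimate(lam, T, n, rng):
        hits = sum(1 for _ in range(n) if rng.expovariate(lam) < T)
        return hits / n

    def forced_estimate(lam, T, n, rng):
        w = 1.0 - math.exp(-lam * T)          # weight: probability of the forced event
        total = 0.0
        for _ in range(n):
            u = rng.random()
            t = -math.log(1.0 - u * w) / lam  # failure time from Exp(lam) truncated to [0, T];
            total += w                        # in a full Markov walk, t would seed the next transition
        return total / n

    rng = random.Random(42)
    lam, T, n = 1e-4, 10.0, 20_000
    print(analog_estimate(lam, T, n, rng))    # usually 0.0 at this sample size
    print(forced_estimate(lam, T, n, rng))    # matches 1 - exp(-lam*T) ~ 1e-3
    ```

    In a multi-component Markov walk the weight multiplies along the path; in this one-component toy the forced estimator is exact, which is the point of the technique.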

  4. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  5. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  6. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
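
    As a hedged sketch of the kind of scheme involved (the paper's exact discretization may differ): under the risk-neutral measure the HJM drift is fixed by the volatility, and a one-factor Monte Carlo-Euler step for the forward curve f(t, T) on a maturity grid reads:

    ```latex
    % One-factor HJM drift condition and a schematic Euler step on a T-grid:
    df(t,T) = \sigma(t,T)\left(\int_{t}^{T}\sigma(t,u)\,du\right)dt + \sigma(t,T)\,dW_t ,
    \qquad
    f_{n+1}(T_j) = f_n(T_j)
      + \sigma_n(T_j)\left(\sum_{k \le j}\sigma_n(T_k)\,\Delta T\right)\Delta t
      + \sigma_n(T_j)\,\sqrt{\Delta t}\;Z_n ,
    \quad Z_n \sim \mathcal{N}(0,1)
    ```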

  7. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.

    2012-01-01

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.

  8. Monte Carlo models: Quo vadimus?

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xin-Nian

    2001-01-01

    Coherence, multiple scattering and the interplay between soft and hard processes are discussed. These physics phenomena are essential for understanding the nuclear dependences of rapidity density and p_T spectra in high-energy heavy-ion collisions. The RHIC data have shown the onset of hard processes and indications of high p_T spectra suppression due to parton energy loss. Within the pQCD parton model, the combination of azimuthal anisotropy (ν_2) and hadron spectra suppression at large p_T can help one to determine the initial gluon density in heavy-ion collisions at RHIC.

  9. Monte Carlo models: Quo vadimus?

    International Nuclear Information System (INIS)

    Wang, Xin-Nian

    2001-01-01

    Coherence, multiple scattering and the interplay between soft and hard processes are discussed. These physics phenomena are essential for understanding the nuclear dependences of rapidity density and p_T spectra in high-energy heavy-ion collisions. The RHIC data have shown the onset of hard processes and indications of high p_T spectra suppression due to parton energy loss. Within the pQCD parton model, the combination of azimuthal anisotropy (ν_2) and hadron spectra suppression at large p_T can help one to determine the initial gluon density in heavy-ion collisions at RHIC

  10. Radiation Modeling with Direct Simulation Monte Carlo

    Science.gov (United States)

    Carlson, Ann B.; Hassan, H. A.

    1991-01-01

    Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique, and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

  11. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  12. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at the Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core, with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as the S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  13. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    …model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order…

  14. Efforts and models of education for parents

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal

    2010-01-01

    The article reviews models of parent education used primarily in Denmark, and situates these models within broader perspectives on the education system and the current discourse on holding parents accountable. Publication date: March 2010.

  15. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  16. Monte Carlo modelling for neutron guide losses

    International Nuclear Information System (INIS)

    Cser, L.; Rosta, L.; Toeroek, Gy.

    1989-09-01

    In modern research reactors, neutron guides are commonly used for beam transport. A neutron guide is a well-polished or equivalently smooth glass tube coated inside with a sputtered or evaporated film of natural Ni or the 58Ni isotope, from which the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered, and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs
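
    The total reflection that makes the guide work occurs only below a critical glancing angle set by the coating. As a standard textbook reminder (not taken from this record), for a coating of scattering-length density N b_c and neutron wavelength λ:

    ```latex
    % Critical angle for total external reflection from the guide coating;
    % 58Ni has a larger N b_c than natural Ni, hence a larger acceptance angle,
    % which is why it is sputtered onto guide surfaces.
    \theta_c(\lambda) = \lambda \,\sqrt{\frac{N\,b_c}{\pi}}
    ```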

  17. The SGHWR version of the Monte Carlo code W-MONTE. Part 1. The theoretical model

    International Nuclear Information System (INIS)

    Allen, F.R.

    1976-03-01

    W-MONTE provides a multi-group model of neutron transport in the exact geometry of a reactor lattice using Monte Carlo methods. It is currently restricted to uniform axial properties. Material data are normally obtained from a preliminary WIMS lattice calculation in the transport group structure. The SGHWR version has been required for analysis of zero energy experiments and special aspects of power reactor lattices, such as the unmoderated lattice region above the moderator when drained to dump height. Neutron transport is modelled for a uniform infinite lattice, simultaneously treating the cases of no leakage, radial or axial leakage only, and the combined effects of radial and axial leakage. Multigroup neutron balance edits are incorporated for the separate effects of radial and axial leakage, to facilitate the analysis of leakage and to provide effective diffusion theory parameters for whole-core representation. (author)

  18. Aspects of perturbative QCD in Monte Carlo shower models

    International Nuclear Information System (INIS)

    Gottschalk, T.D.

    1986-01-01

    The perturbative QCD content of Monte Carlo models for high energy hadron-hadron scattering is examined. Particular attention is given to the recently developed backwards evolution formalism for initial state parton showers, and the merging of parton shower evolution with hard scattering cross sections. Shower estimates of K-factors are discussed, and a simple scheme is presented for incorporating 2 → 2 QCD cross sections into shower model calculations without double counting. Additional issues in the development of hard scattering Monte Carlo models are summarized. 69 references, 20 figures

  19. Automatic modeling for the Monte Carlo transport code Geant4

    International Nuclear Information System (INIS)

    Nie Fanzhi; Hu Liqin; Wang Guozhong; Wang Dianxi; Wu Yican; Wang Dong; Long Pengcheng; FDS Team

    2015-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Its geometry models can be described in the Geometry Description Markup Language (GDML), but it is time-consuming and error-prone to describe geometry models manually. This study implemented the conversion between computer-aided design (CAD) geometry models and GDML models. The method was studied based on the Multi-Physics Coupling Analysis Modeling Program (MCAM). The tests, including the FDS-II model, demonstrated its accuracy and feasibility. (authors)

  20. Monte Carlo simulation models of breeding-population advancement.

    Science.gov (United States)

    J.N. King; G.R. Johnson

    1993-01-01

    Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...

  1. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    International Nuclear Information System (INIS)

    Stotler, D.P.

    2005-01-01

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model

  2. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  3. Profit Forecast Model Using Monte Carlo Simulation in Excel

    Directory of Open Access Journals (Sweden)

    Petru BALOGH

    2014-01-01

    Profit forecasting is very important for any company. The purpose of this study is to provide a method to estimate the profit and the probability of obtaining the expected profit. Monte Carlo methods are stochastic techniques, meaning they are based on the use of random numbers and probability statistics to investigate problems. Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities with which they will occur for any choice of action. Our example of Monte Carlo simulation in Excel is a simplified profit forecast model. Each step of the analysis is described in detail. The input data for the case presented (the number of leads per month, the percentage of leads that result in sales, the cost of a single lead, the profit per sale, and the fixed cost) allow the profit and the associated probabilities of achieving it to be obtained.
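
    A minimal Python analogue of the Excel model described above; the distributions and all numbers below are illustrative assumptions, not the article's data.

    ```python
    import random

    # Hypothetical profit-forecast Monte Carlo: randomize the uncertain inputs
    # named in the abstract (leads per month, conversion rate), keep the rest fixed.
    def simulate_profit(rng):
        leads = rng.gauss(1200, 100)            # leads per month (assumed normal)
        conversion = rng.uniform(0.02, 0.05)    # share of leads that become sales
        cost_per_lead, profit_per_sale, fixed_cost = 5.0, 200.0, 8000.0
        sales = leads * conversion
        return sales * profit_per_sale - leads * cost_per_lead - fixed_cost

    rng = random.Random(0)
    runs = [simulate_profit(rng) for _ in range(100_000)]
    expected = sum(runs) / len(runs)
    p_target = sum(r >= 10_000 for r in runs) / len(runs)
    print(f"expected profit {expected:,.0f}, P(profit >= 10k) = {p_target:.2%}")
    ```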

  4. Monte Carlo investigation of the one-dimensional Potts model

    International Nuclear Information System (INIS)

    Karma, A.S.; Nolan, M.J.

    1983-01-01

    Monte Carlo results are presented for a variety of one-dimensional dynamical q-state Potts models. Our calculations confirm the expected universal value z = 2 for the dynamic scaling exponent. Our results also indicate that an increase in q at fixed correlation length drives the dynamics into the scaling regime

  5. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model; see e.g. Lipton et al. [Quant.

  6. Yet another Monte Carlo study of the Schwinger model

    International Nuclear Information System (INIS)

    Sogo, K.; Kimura, N.

    1986-01-01

    Some methodological improvements are introduced in the quantum Monte Carlo simulation of the 1 + 1 dimensional quantum electrodynamics (the Schwinger model). Properties at finite temperatures are investigated, concentrating on the existence of the chirality transition and of the deconfinement transition. (author)

  7. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear...

  8. Yet another Monte Carlo study of the Schwinger model

    International Nuclear Information System (INIS)

    Sogo, K.; Kimura, N.

    1986-03-01

    Some methodological improvements are introduced in the quantum Monte Carlo simulation of the 1 + 1 dimensional quantum electrodynamics (the Schwinger model). Properties at finite temperatures are investigated, concentrating on the existence of the chirality transition and of the deconfinement transition. (author)

  9. Automatic modeling for the monte carlo transport TRIPOLI code

    International Nuclear Information System (INIS)

    Zhang Junjun; Zeng Qin; Wu Yican; Wang Guozhong; FDS Team

    2010-01-01

    TRIPOLI, developed by CEA, France, is a Monte Carlo particle transport simulation code. It has been widely applied to nuclear physics, shielding design, and the evaluation of nuclear safety. However, it is time-consuming and error-prone to describe the TRIPOLI input file manually. This work implemented bi-directional conversion between CAD models and TRIPOLI models. Its feasibility and efficiency have been demonstrated by several benchmarking examples. (authors)

  10. City Logistics Modeling Efforts : Trends and Gaps - A Review

    NARCIS (Netherlands)

    Anand, N.R.; Quak, H.J.; Van Duin, J.H.R.; Tavasszy, L.A.

    2012-01-01

    In this paper, we present a review of city logistics modeling efforts reported in the literature for urban freight analysis. The review framework takes into account the diversity and complexity found in the present-day city logistics practice. Next, it covers the different aspects in the modeling

  11. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

    Auroral precipitation models have been valuable both in terms of space weather applications and space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of the models.

  12. Monte Carlo Modelling of Mammograms : Development and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Spyrou, G; Panayiotakis, G [University of Patras, School of Medicine, Medical Physics Department, 265 00 Patras (Greece); Bakas, A [Technological Educational Institution of Athens, Department of Radiography, 122 10 Athens (Greece); Tzanakos, G [University of Athens, Department of Physics, Division of Nuclear and Particle Physics, 157 71 Athens (Greece)

    1999-12-31

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors) 16 refs, 4 figs

  13. GPU based Monte Carlo for PET image reconstruction: detector modeling

    International Nuclear Information System (INIS)

    Légrády; Cserkaszky, Á; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

    Given the similarities between visible light transport and neutral particle trajectories, Monte Carlo (MC) calculations and Graphics Processing Units (GPUs) fit together almost as if the GPU were dedicated hardware designed for the task. A GPU-based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step and taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  14. Monte Carlo Modelling of Mammograms : Development and Validation

    International Nuclear Information System (INIS)

    Spyrou, G.; Panayiotakis, G.; Bakas, A.; Tzanakos, G.

    1998-01-01

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors)

  15. Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models

    Science.gov (United States)

    Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun

    2018-03-01

    The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.

  16. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters, including geometry, materials, and nuclear sources, are pre-defined, and the transportation and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probabilities of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied, and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models

  17. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Science.gov (United States)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  18. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out at DTU in Task 4, Physical modelling validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, entitled Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...

  19. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation of photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: first, two types of photons contribute to the information on morphological features and the noise in the OCT image of a human tooth, respectively. Second, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)

  20. Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling

    Directory of Open Access Journals (Sweden)

    Dennis Fok

    2014-02-01

    We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.

  1. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.

  2. An opportunity cost model of subjective effort and task performance

    Science.gov (United States)

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  3. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
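
    A minimal sketch of the Monte Carlo randomization test idea for two arms, using re-randomization with fixed group sizes (random allocation). The design-based version in the paper regenerates sequences from the actual randomization procedure used (permuted blocks, biased coins, etc.); the residual values here are made up.

    ```python
    import random

    # Monte Carlo randomization test (sketch): the observed statistic is a
    # difference in mean residuals between arms; its null distribution is
    # rebuilt by re-randomizing treatment labels many times.
    def mc_randomization_test(residuals, treatment, n_mc=10_000, seed=1):
        rng = random.Random(seed)
        def stat(assign):
            a = [r for r, t in zip(residuals, assign) if t == 1]
            b = [r for r, t in zip(residuals, assign) if t == 0]
            return abs(sum(a) / len(a) - sum(b) / len(b))
        observed = stat(treatment)
        n1 = sum(treatment)
        extreme = 0
        for _ in range(n_mc):
            labels = [1] * n1 + [0] * (len(treatment) - n1)
            rng.shuffle(labels)                   # one new randomization sequence
            extreme += stat(labels) >= observed
        return (extreme + 1) / (n_mc + 1)         # Monte Carlo p-value

    # toy usage with invented residuals
    res = [0.3, -0.1, 0.8, 1.2, -0.4, 0.9, 1.5, 0.1]
    trt = [0, 0, 0, 0, 1, 1, 1, 1]
    print(mc_randomization_test(res, trt))
    ```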

  4. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600 × 600 × 600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(−t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
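
    For orientation, a plain single-spin-flip Metropolis sketch on a small 2D lattice follows; it is not Rebbi's multispin coding, which packs many spins per machine word to make lattices like 600 × 600 × 600 feasible.

    ```python
    import math
    import random

    # Pedagogical single-spin-flip Metropolis for a small 2D Ising lattice;
    # the kinetics it generates is what the multispin-coded program accelerates.
    def metropolis_sweep(spins, L, beta, rng):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]

    L, beta, rng = 32, 0.5, random.Random(0)
    spins = [[1] * L for _ in range(L)]       # M(t=0) = 1, as in the study
    for sweep in range(100):                  # one sweep = one MC step per spin
        metropolis_sweep(spins, L, beta, rng)
    print("magnetization:", sum(map(sum, spins)) / (L * L))
    ```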

  5. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing the energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
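
    The extrapolation rests on the fact that the energy variance vanishes for an exact eigenstate. Schematically (a hedged sketch; the coefficients are fitted and the second-order term is optional):

    ```latex
    % Energy-variance extrapolation: for a sequence of approximate wave
    % functions |\Psi_k\rangle, plot energy against energy variance and
    % extrapolate to zero variance, where the exact eigenvalue must lie.
    \Delta E_k = \langle\Psi_k|\hat{H}^2|\Psi_k\rangle - \langle\Psi_k|\hat{H}|\Psi_k\rangle^2 ,
    \qquad
    \langle\hat{H}\rangle_k \approx E_{\mathrm{exact}} + c_1\,\Delta E_k + c_2\,\Delta E_k^2
    ```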

  6. Shell-model Monte Carlo studies of nuclei

    International Nuclear Information System (INIS)

    Dean, D.J.

    1997-01-01

    The pair content and structure of nuclei near N = Z are described in the framework of shell-model Monte Carlo (SMMC) calculations. Results include the enhancement of J=0, T=1 proton-neutron pairing at N=Z nuclei, and the marked difference in thermal properties between even-even and odd-odd N=Z nuclei. Additionally, a study of the rotational properties of the T=1 (ground state) and T=0 band mixing seen in 74Rb is presented

  7. Maintenance personnel performance simulation (MAPPS) model: overview and evaluation efforts

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.; Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Ryan, T.G.

    1984-01-01

    The development of the MAPPS model has been completed and the model is currently undergoing evaluation. These efforts are addressing a number of identified issues concerning practicality, acceptability, usefulness, and validity. Preliminary analysis of the evaluation data that has been collected indicates that MAPPS will provide comprehensive and reliable data for PRA purposes and for a number of other applications. The MAPPS computer simulation model provides the user with a sophisticated tool for gaining insights into tasks performed by NPP maintenance personnel. Its wide variety of input parameters and output data makes it extremely flexible for a number of diverse applications. With the demonstration of favorable model evaluation results, the MAPPS model will represent a valuable source of NPP maintainer reliability data and provide PRA studies with a source of data on maintainers that has previously not existed

  8. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, an effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
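
    As a minimal illustration of the analytical (Markov) side, the sketch below computes steady-state availability for one three-state unit from its transition-rate matrix; all rates are invented for the example, not taken from the article.

    ```python
    import numpy as np

    # Steady-state availability sketch for one unit with three states
    # (0 = healthy, 1 = degraded, 2 = failed). Solve pi Q = 0 with sum(pi) = 1.
    Q = np.array([
        [-0.010,  0.008,  0.002],   # healthy  -> degraded / failed
        [ 0.050, -0.070,  0.020],   # degraded -> repaired / failed
        [ 0.100,  0.000, -0.100],   # failed   -> repaired (perfect repair)
    ])
    A = np.vstack([Q.T, np.ones(3)])           # stationarity plus normalization
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("steady-state availability (not failed):", pi[0] + pi[1])
    ```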

  9. Monte Carlo modeling of the Fastscan whole body counter response

    International Nuclear Information System (INIS)

    Graham, H.R.; Waller, E.J.

    2015-01-01

    Monte Carlo N-Particle (MCNP) was used to make a model of the Fastscan for the purpose of calibration. Two models were made: one for the Pickering Nuclear Site, and one for the Darlington Nuclear Site. Once these models were benchmarked and found to be in good agreement, simulations were run to study the effect different sized phantoms had on the detected response; the shielding effect of torso fat was found to be not negligible. Simulations into the nature of a source being positioned externally on the anterior or posterior of a person were also conducted, to determine a ratio that could be used to establish whether a source is externally or internally placed. (author)

  10. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    International Nuclear Information System (INIS)

    Elçi, Eren Metin; Weigel, Martin

    2014-01-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications beyond cluster-update simulations as well, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.
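
    For contrast with the tailored structure, the sketch below shows insertion-only connectivity tracking with union-find. Sweeny dynamics also deletes bonds, which union-find cannot undo; closing that gap at poly-logarithmic cost per operation is exactly what the dynamic connectivity algorithm provides. This is an illustration of the bookkeeping problem, not the authors' module.

    ```python
    # Insertion-only union-find: enough to track clusters while bonds are added,
    # but bond deletions would naively force a full rebuild of the structure.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[ra] = rb
                return True   # bond merged two clusters
            return False      # bond closed a loop inside one cluster

    uf = UnionFind(4)          # vertices of a tiny graph
    print(uf.union(0, 1), uf.union(1, 2), uf.union(0, 2))  # True True False
    ```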

  11. Modeling granular phosphor screens by Monte Carlo methods

    International Nuclear Information System (INIS)

    Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.

    2006-01-01

    The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on the Gd2O2S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd2O2S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than conventional Gd2O2S:Tb screens, under similar conditions (x-ray incident energy, screen thickness)

  12. Preliminary validation of a Monte Carlo model for IMRT fields

    International Nuclear Information System (INIS)

    Wright, Tracy; Lye, Jessica; Mohammadi, Mohammad

    2011-01-01

    Full text: A Monte Carlo model of an Elekta linac, validated for medium to large (10-30 cm) symmetric fields, has been investigated for small, irregular and asymmetric fields suitable for IMRT treatments. The model has been validated with field segments using radiochromic film in solid water. The modelled positions of the multileaf collimator (MLC) leaves have been validated using EBT film. In the model, electrons with a narrow energy spectrum are incident on the target and all components of the linac head are included. The MLC is modelled using the EGSnrc MLCE component module. For the validation, a number of single complex IMRT segments with dimensions of approximately 1-8 cm were delivered to film in solid water (see Fig. 1). The same segments were modelled using EGSnrc by adjusting the MLC leaf positions in the model validated for 10 cm symmetric fields. Dose distributions along the centre of each MLC leaf as determined by both methods were compared. A picket fence test was also performed to confirm the MLC leaf positions. 95% of the points in the modelled dose distribution along the leaf axis agree with the film measurement to within 1%/1 mm for dose difference and distance to agreement. Areas of most deviation occur in the penumbra region. A system has been developed to calculate the MLC leaf positions in the model for any planned field size.

  13. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
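
    The role of the heating coefficient can be stated compactly. With power posteriors p_β(θ|y) ∝ p(y|θ)^β p(θ), the thermodynamic (path-sampling) identity gives the log marginal likelihood as an integral over β, approximated in practice by quadrature across the finitely many MCMC runs. A hedged sketch:

    ```latex
    % Thermodynamic integration over the heating coefficient \beta \in [0,1]:
    % \beta = 0 samples the prior, \beta = 1 the full posterior, and
    % the expectation is taken under the power posterior p_\beta.
    \log p(y) = \int_0^1 \mathbb{E}_{\theta \sim p_\beta}\big[\log p(y \mid \theta)\big]\, d\beta
    ```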

  14. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  15. Modeling Dynamic Objects in Monte Carlo Particle Transport Calculations

    International Nuclear Information System (INIS)

    Yegin, G.

    2008-01-01

    In this study, the Multi-Geometry modeling technique was improved in order to handle moving objects in a Monte Carlo particle transport calculation. In the Multi-Geometry technique, the geometry is a superposition of objects, not surfaces. By using this feature, we developed a new algorithm which allows a user to enable or disable geometry elements during particle transport. A disabled object can be ignored at a certain stage of a calculation, and switching among identical copies of the same object located at adjacent points during a particle simulation corresponds to the movement of that object in space. We call this powerful feature the Dynamic Multi-Geometry (DMG) technique, which is used for the first time in the BrachyDose Monte Carlo code to simulate HDR brachytherapy treatment systems. Our results showed that having disabled objects in a geometry does not affect calculated dose values. This technique is also suitable for use in other areas such as IMRT treatment planning systems

  16. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages. The four steps consist of the acquisition in the laboratory of measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.

  17. Monte Carlo modelling of Schottky diode for rectenna simulation

    Science.gov (United States)

    Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.

    2017-09-01

    Before designing a detector circuit, extraction of the electrical parameters of the Schottky diode is a critical step. This article is based on a Monte Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image force effect and tunneling. The weights of the tunneling and thermionic currents are quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic Balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current in the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted in the forward regime from the I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.
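
    For reference, the standard thermionic-emission diode law that the extracted parameters feed; this is the textbook form, not taken from the article:

    ```latex
    % Diode law with ideality factor n, saturation current I_s, and series
    % resistance R_s; the article's conclusion is that bias-independent n and
    % I_s, extracted in the forward regime, suffice for the HB circuit model.
    I(V) = I_s \left[ \exp\!\left( \frac{q\,(V - I R_s)}{n\,k_B T} \right) - 1 \right]
    ```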

  18. Shell model Monte Carlo investigation of rare earth nuclei

    International Nuclear Information System (INIS)

    White, J. A.; Koonin, S. E.; Dean, D. J.

    2000-01-01

    We utilize the shell model Monte Carlo method to study the structure of rare earth nuclei. This work demonstrates the first systematic full oscillator shell with intruder calculations in such heavy nuclei. Exact solutions of a pairing plus quadrupole Hamiltonian are compared with the static path approximation in several dysprosium isotopes from A=152 to 162, including the odd-mass A=153. Some comparisons are also made with Hartree-Fock-Bogoliubov results from Baranger and Kumar. Basic properties of these nuclei at various temperatures and spin are explored. These include energy, deformation, moments of inertia, pairing channel strengths, band crossing, and evolution of shell model occupation numbers. Exact level densities are also calculated and, in the case of 162Dy, compared with experimental data. (c) 2000 The American Physical Society

  19. Synergies Between Grace and Regional Atmospheric Modeling Efforts

    Science.gov (United States)

    Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.

    2014-12-01

    In the meteorological community, efforts converge towards implementation of high-resolution [...] precipitation, evapotranspiration and runoff data, confirming that the model does favorably at representing observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model. Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter/longer than 1 month, and spatial scales below/above ERA resolution. We find differences between the regional and global models more pronounced at high frequencies, with magnitudes at sub-grid scale and larger scales corresponding to 1-3 hPa (1-3 cm EWH), relevant for the assessment of post-GRACE concepts.

  20. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    ...in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species, i.e., so-called mixed fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches, and thus effort, between fishing fleets, while ensuring that the quotas...

  1. Monte Carlo model of diagnostic X-ray dosimetry

    International Nuclear Information System (INIS)

    Khrutchinsky, Arkady; Kutsen, Semion; Gatskevich, George

    2008-01-01

    Full text: A Monte Carlo simulation of the absorbed dose distribution in a patient's tissues is often used in dosimetry assessments of X-ray examinations. The results of such simulations in Belarus are presented in this report, based on an anthropomorphic tissue-equivalent Rando-like physical phantom. The phantom corresponds to an adult 173 cm tall weighing 73 kg and consists of a torso and a head made of tissue-equivalent plastics which model soft (muscular), bone, and lung tissues. It consists of 39 layers (each 25 mm thick): 10 head-and-neck, 16 chest, and 13 pelvis layers. A tomographic model of the phantom has been developed from its CT-scan images with a voxel size of 0.88 x 0.88 x 4 mm³. The necessary pixelization was carried out in a Mathematics-based in-house program for the phantom to be used in the radiation transport code MCNP-4b. A final voxel size of 14.2 x 14.2 x 8 mm³ was used to keep the computational cost of the absorbed dose calculations in tissues and organs reasonable for the various diagnostic X-ray examinations. MCNP point detectors, allocated through the body slices obtained as a result of the pixelization, were used to calculate the absorbed dose. X-ray spectra generated by the empirical TASMIP model were verified on the X-ray units MEVASIM and SIREGRAPH CF. Absorbed dose distributions in the phantom volume were determined by the corresponding Monte Carlo simulations with a set of point detectors. Doses in 22 standard organs of the adult phantom, computed from the absorbed dose distributions by another Mathematics-based in-house program, were estimated for various standard X-ray examinations. The results of the Monte Carlo simulations were compared with direct measurements of the absorbed dose in the phantom on the X-ray unit SIREGRAPH CF with the calibrated thermoluminescent dosimeter DTU-01. The measurements were carried out at specified locations in different layers in the heart, lungs, liver, pancreas, and stomach at a high voltage of...
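
    The pixelization step described here, coarsening the CT voxel grid to a size tractable for MCNP, amounts to block-wise aggregation of the voxel array. Below is a minimal sketch, assuming integer coarsening factors and a majority vote over material IDs inside each block; the function names are illustrative and this is not the in-house program referred to above.

```python
import numpy as np

def coarsen_phantom(voxels, factors):
    """Coarsen a 3-D phantom of material IDs by integer factors along each
    axis, assigning each coarse voxel the most frequent fine-voxel ID.

    voxels  -- integer array of material IDs, shape (nx, ny, nz)
    factors -- (fx, fy, fz); dimensions are truncated to whole blocks
    """
    fx, fy, fz = factors
    nx, ny, nz = (s // f for s, f in zip(voxels.shape, factors))
    blocks = voxels[:nx*fx, :ny*fy, :nz*fz].reshape(nx, fx, ny, fy, nz, fz)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nx, ny, nz, -1)
    out = np.empty((nx, ny, nz), dtype=voxels.dtype)
    for idx in np.ndindex(nx, ny, nz):
        ids, counts = np.unique(blocks[idx], return_counts=True)
        out[idx] = ids[np.argmax(counts)]   # majority vote within the block
    return out

# e.g. 0.88 x 0.88 x 4 mm voxels -> roughly 14.2 x 14.2 x 8 mm voxels
# coarse = coarsen_phantom(phantom, (16, 16, 2))
```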

  2. Modelling a gamma irradiation process using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    In gamma irradiation services, the evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. These models make it possible to predict the dose delivered to a specific product, irradiated in a specific position for a certain period of time, provided the models are validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results showed the applicability of the method, with a linear relation between the simulation and the experimental results. (author)
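
    The linear relation between simulated and measured doses reported here is the kind of result an ordinary least-squares fit quantifies directly. A minimal sketch with hypothetical paired doses (not the CDTN data):

```python
import numpy as np

# Hypothetical paired doses (kGy): Monte Carlo prediction vs Fricke dosimeter
simulated = np.array([5.1, 10.3, 15.2, 20.4, 25.6])
measured  = np.array([5.0, 10.0, 15.5, 20.1, 25.9])

slope, intercept = np.polyfit(simulated, measured, 1)
r = np.corrcoef(simulated, measured)[0, 1]   # correlation coefficient
print(f"measured ~= {slope:.3f} * simulated + {intercept:.3f}, r = {r:.4f}")
```

    A slope near one, an intercept near zero, and r close to one would support the validation claim.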

  3. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration, with the goal of investigating whether they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of a continuous potential, taking into account the thermal motion of the lattice atoms and using the Molière screening function. The energy of the particle's transverse motion determines whether or n...
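
    For orientation, the Molière screening function referred to here is the standard three-exponential approximation of the Thomas-Fermi screening, phi(x) = 0.35 e^(-0.3x) + 0.55 e^(-1.2x) + 0.10 e^(-6.0x), with x in units of the screening radius. The sketch below evaluates it and applies the simplest channeling criterion on transverse energy; it is an assumption-laden illustration (thermal motion omitted, well depth hypothetical), not the FLUKA implementation.

```python
import numpy as np

def moliere_screening(x):
    """Moliere approximation of the Thomas-Fermi screening function;
    x is the distance in units of the screening radius a_TF."""
    a = np.array([0.35, 0.55, 0.10])
    b = np.array([0.3, 1.2, 6.0])
    return np.sum(a * np.exp(-b * np.atleast_1d(x)[:, None]), axis=1)

def is_channeled(E_transverse_eV, U_max_eV):
    """A particle stays planar-channeled while its transverse energy
    remains below the depth of the interplanar potential well."""
    return E_transverse_eV < U_max_eV

print(moliere_screening(np.array([0.1, 1.0, 5.0])))
# Silicon (110) planes have a well depth of roughly 20 eV (assumed value)
print(is_channeled(E_transverse_eV=12.0, U_max_eV=20.0))
```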

  4. Modelling a gamma irradiation process using the Monte Carlo method

    International Nuclear Information System (INIS)

    Soares, Gabriela A.; Pereira, Marcio T.

    2011-01-01

    In gamma irradiation services, the evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. These models make it possible to predict the dose delivered to a specific product, irradiated in a specific position for a certain period of time, provided the models are validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results showed the applicability of the method, with a linear relation between the simulation and the experimental results. (author)

  5. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    Full Text Available We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.

  6. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
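
    The EM iteration for BMA training has a compact closed form when each ensemble member is given a Gaussian predictive kernel. A minimal sketch, assuming a single variance shared by all members (one common variant) and hypothetical inputs; this is not the DREAM algorithm compared in the paper:

```python
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """EM estimation of BMA weights and a common Gaussian variance.

    forecasts -- array (T, K): K bias-corrected member forecasts at T times
    obs       -- array (T,): verifying observations
    """
    T, K = forecasts.shape
    w = np.full(K, 1.0 / K)
    var = np.var(obs[:, None] - forecasts)
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation t
        dens = norm.pdf(obs[:, None], loc=forecasts, scale=np.sqrt(var))
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update the weights and the common variance
        w = z.mean(axis=0)
        var = np.sum(z * (obs[:, None] - forecasts) ** 2) / T
    return w, var
```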

  7. Monte Carlo simulations of lattice models for single polymer systems

    Science.gov (United States)

    Hsu, Hsiao-Ping

    2014-10-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ~ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
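
    As a toy counterpart to the non-reversible random walks studied here, the sketch below generates such walks on the simple cubic lattice and estimates the mean-square end-to-end distance, which without excluded volume should approach <R²>/N = (1 + 1/5)/(1 - 1/5) = 1.5. This is an assumption-level illustration, far simpler than the pruned-enriched Rosenbluth method used in the paper.

```python
import numpy as np

STEPS = np.array([[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]])

def non_reversal_walk(N, rng):
    """A random walk on the simple cubic lattice that never immediately
    reverses its previous step (no excluded-volume interaction)."""
    pos = np.zeros(3, dtype=int)
    last = None
    for _ in range(N):
        while True:
            k = rng.integers(6)
            if last is None or k != last ^ 1:  # directions come in +/- pairs
                break
        pos += STEPS[k]
        last = k
    return pos

rng = np.random.default_rng(0)
N, samples = 200, 2000
r2 = np.mean([np.sum(non_reversal_walk(N, rng) ** 2) for _ in range(samples)])
print(r2 / N)   # approaches 1.5 for the non-reversal random walk
```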

  8. Monte Carlo simulations of lattice models for single polymer systems

    International Nuclear Information System (INIS)

    Hsu, Hsiao-Ping

    2014-01-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior

  9. Introduction to the Monte Carlo project and the approach to the validation of probabilistic models of dietary exposure to selected food chemicals

    NARCIS (Netherlands)

    Gibney, M.J.; Voet, van der H.

    2003-01-01

    The Monte Carlo project was established to allow an international collaborative effort to define conceptual models for food chemical and nutrient exposure, to define and validate the software code to govern these models, to provide new or reconstructed databases for validation studies, and to use...

  10. Monte Carlo Modeling the UCN τ Magneto-Gravitational Trap

    Science.gov (United States)

    Holley, A. T.; UCNτ Collaboration

    2016-09-01

    The current uncertainty in our knowledge of the free neutron lifetime is dominated by the nearly 4σ discrepancy between the complementary "beam" and "bottle" measurement techniques. An incomplete assessment of systematic effects is the most likely explanation for this difference and must be addressed in order to realize the potential of both approaches. The UCNτ collaboration has constructed a large-volume magneto-gravitational trap that eliminates the material interactions which complicated the interpretation of previous bottle experiments. This is accomplished using permanent NdFeB magnets in a bowl-shaped Halbach array, which confine polarized UCN from the sides and below, and the Earth's gravitational field, which traps them from above. New in situ detectors that count surviving UCN provide a means of empirically assessing residual systematic effects. The interpretation of those data, and their implications for experimental configurations with enhanced precision, can be bolstered by Monte Carlo models of the current experiment which provide the capability for stable tracking of trapped UCN and detailed modeling of their polarization. Work to develop such models and their comparison with data acquired during our first extensive set of systematics studies will be discussed.

  11. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response has been investigated for a range of source sizes. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. An initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  12. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Reims, N; Sukowski, F; Uhlmann, N

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time-consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirectly converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed which matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. it describes the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light-spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time-consuming, both models have in common that a relatively small number of system manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.
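
    The cascade theory mentioned here has a well-known moments form: a quantum gain stage with mean gain g and gain variance σ_g² maps the mean signal q and noise power spectrum NPS as q → g·q, NPS → g²·NPS + σ_g²·q, while a stochastic blur stage with transfer function T(f) maps NPS → (NPS − q)·T² + q. The sketch below chains a few such stages; all stage parameters are hypothetical stand-ins, not the values of the Hamamatsu C9312SK.

```python
import numpy as np

def gain_stage(q, nps, g_mean, g_var):
    """Quantum gain stage (e.g. x-ray absorption or optical photon
    generation): scales the mean signal and adds gain noise."""
    return g_mean * q, g_mean**2 * nps + g_var * q

def blur_stage(q, nps, mtf):
    """Stochastic scattering stage (e.g. light spread in the scintillator):
    the correlated part of the noise is filtered, the Poisson part is not."""
    return q, (nps - q) * mtf**2 + q

f = np.linspace(0.0, 5.0, 101)             # spatial frequency (cycles/mm)
q0 = 1000.0                                # incident x-ray quanta per area
q, nps = q0, q0 * np.ones_like(f)          # Poisson input: NPS = q
scint_mtf = np.exp(-(f / 2.0) ** 2)        # assumed scintillator blur

q, nps = gain_stage(q, nps, g_mean=0.6, g_var=0.6 * 0.4)  # absorption
q, nps = gain_stage(q, nps, g_mean=500.0, g_var=1.0e4)    # light yield
q, nps = blur_stage(q, nps, scint_mtf)                    # light spread
q, nps = gain_stage(q, nps, g_mean=0.5, g_var=0.5 * 0.5)  # optical coupling

dqe = (q * scint_mtf) ** 2 / (q0 * nps)    # detective quantum efficiency
print(dqe[0], dqe[-1])                     # DQE degrades with frequency
```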

  13. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female called Rad-HUMAN was created with MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  14. Mesoscopic kinetic Monte Carlo modeling of organic photovoltaic device characteristics

    Science.gov (United States)

    Kimber, Robin G. E.; Wright, Edward N.; O'Kane, Simon E. J.; Walker, Alison B.; Blakesley, James C.

    2012-12-01

    Measured mobility and current-voltage characteristics of single layer and photovoltaic (PV) devices composed of poly{9,9-dioctylfluorene-co-bis[N,N'-(4-butylphenyl)]bis(N,N'-phenyl-1,4-phenylene)diamine} (PFB) and poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) have been reproduced by a mesoscopic model employing the kinetic Monte Carlo (KMC) approach. Our aim is to show how to avoid the uncertainties common in electrical transport models arising from the need to fit a large number of parameters when little information is available, for example, a single current-voltage curve. Here, simulation parameters are derived from a series of measurements using a self-consistent “building-blocks” approach, starting from data on the simplest systems. We found that site energies show disorder and that correlations in the site energies and a distribution of deep traps must be included in order to reproduce measured charge mobility-field curves at low charge densities in bulk PFB and F8BT. The parameter set from the mobility-field curves reproduces the unipolar current in single layers of PFB and F8BT and allows us to deduce charge injection barriers. Finally, by combining these disorder descriptions and injection barriers with an optical model, the external quantum efficiency and current densities of blend and bilayer organic PV devices can be successfully reproduced across a voltage range encompassing reverse and forward bias, with the recombination rate the only parameter to be fitted, found to be 1×10^7 s^-1. These findings demonstrate an approach that removes some of the arbitrariness present in transport models of organic devices, which validates the KMC as an accurate description of organic optoelectronic systems, and provides information on the microscopic origins of the device behavior.
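
    Kinetic Monte Carlo transport of this kind typically selects charge hops with rates of the Miller-Abrahams form and draws exponential waiting times from the total rate. A minimal sketch of one hop event follows; the parameters are hypothetical illustrations, not the paper's fitted values.

```python
import numpy as np

def miller_abrahams_rates(dE, dist, nu0=1e12, alpha=2e9, kT=0.025):
    """Hop rates from a site to its neighbours: exponential decay with
    distance and a Boltzmann penalty for hops upward in energy.

    dE    -- energy differences E_j - E_i (eV), array over neighbours
    dist  -- hop distances (m)
    nu0   -- attempt frequency (1/s); alpha -- inverse localization
             length (1/m); kT -- thermal energy (eV)
    """
    boltz = np.where(dE > 0, np.exp(-dE / kT), 1.0)
    return nu0 * np.exp(-2.0 * alpha * dist) * boltz

def kmc_step(rates, rng):
    """Choose the destination and the waiting time for one KMC event."""
    total = rates.sum()
    dest = rng.choice(len(rates), p=rates / total)
    dt = -np.log(rng.random()) / total       # exponential waiting time
    return dest, dt

rng = np.random.default_rng(1)
rates = miller_abrahams_rates(np.array([0.05, -0.02, 0.1]),
                              np.array([1e-9, 1e-9, 2e-9]))
print(kmc_step(rates, rng))
```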

  15. Characterization of infiltration rates from landfills: supporting groundwater modeling efforts.

    Science.gov (United States)

    Moo-Young, Horace; Johnson, Barnes; Johnson, Ann; Carson, David; Lew, Christine; Liu, Salley; Hancocks, Katherine

    2004-01-01

    The purpose of this paper is to review the literature to characterize infiltration rates from landfill liners, in support of groundwater modeling efforts. The focus of this investigation was on collecting studies that describe the performance of liners 'as installed' or 'as operated'. This document reviews the state of the science and practice on the infiltration rate through compacted clay liners (CCL) for 149 sites and geosynthetic clay liners (GCL) for 1 site. In addition, it reviews the leakage rate through geomembrane (GM) liners and composite liners for 259 sites. For compacted clay liners, there was limited information on infiltration rates (only 9 sites reported them), so it was difficult to develop a national distribution. The field hydraulic conductivities for natural clay liners range from 1 × 10^-9 cm s^-1 to 1 × 10^-4 cm s^-1, with an average of 6.5 × 10^-8 cm s^-1. There was limited information on geosynthetic clay liners. For composite-lined and geomembrane systems, the leak detection system flow rates were utilized. The average monthly flow rate for composite liners ranged from 0-32 lphd for geomembrane and GCL systems to 0-1410 lphd for geomembrane and CCL systems. The increased infiltration for the geomembrane and CCL systems may be attributed to consolidation water from the clay.

  16. New software library of geometrical primitives for modelling of solids used in Monte Carlo detector simulations

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present our effort to create a new software library of geometrical primitives, which are used for solid modelling in Monte Carlo detector simulations. We plan to replace and unify the current geometrical primitive classes in the CERN software projects Geant4 and ROOT with this library. Each solid is represented by a C++ class with methods suited for measuring the distance of particles from the surface of a solid and for determining whether the particles are located inside, outside or on the surface of the solid. We use a numerical tolerance for determining whether the particles are located on the surface. The class methods also contain basic support for visualization. We use dedicated test suites for validation of the shape codes. These also include special performance and numerical-value comparison tests to help with the analysis of possible candidates for class methods, as well as to verify that our new implementation proposals were designed and implemented properly. Currently, bridge classes are u...

  17. Investigation of a Monte Carlo model for chemical reactions

    International Nuclear Information System (INIS)

    Hamm, R.N.; Turner, J.E.; Stabin, M.G.

    1998-01-01

    Monte Carlo computer simulations are in use at a number of laboratories for calculating time-dependent yields, which can be compared with experiments in the radiolysis of water. We report here on calculations to investigate the validity and consistency of the procedures used for simulating chemical reactions in our code, RADLYS. Model calculations of the rate constants themselves were performed. The rates thus determined showed an expected rapid decline over the first few hundred ps and a very gradual decline thereafter, out to the termination of the calculations at 4.5 ns. Results are reported for different initial concentrations and numbers of reactive species. Generally, the calculated rate constants are smallest when the initial concentrations of the reactants are largest. It is found that inhomogeneities that quickly develop in the initial random spatial distribution of reactants persist in time as a result of subsequent chemical reactions, and thus conditions may poorly approximate those assumed by diffusion theory. We also investigated the reaction of a single species of one type placed among a large number of randomly distributed species of another type with which it could react. The distribution of survival times of the single species was calculated using three different combinations of the diffusion constants for the two species, as is sometimes discussed in diffusion theory. The three methods gave virtually identical results. (orig.)

  18. Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G

    2000-01-01

    An accelerator-driven subcritical cascade reactor, composed of a main thermal neutron reactor constructed analogously to the core of the VVER-1000 reactor and a booster reactor constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting the radioactive wastes produced (the neutron flux density in the thermal zone is PHI^{max}(r,z)=10^{14} n cm^{-2} s^{-1}, while the neutron flux in the fast zone is PHI^{max}(r,z)=2.25·10^{15} n cm^{-2} s^{-1} for k_{eff}=0.98 and a proton accelerator beam current of I=5.3 mA). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.

  19. Monte Carlo modeling of ion chamber performance using MCNP.

    Science.gov (United States)

    Wallace, J D

    2012-12-01

    Ion chambers have a generally flat energy response, with some deviations at very low and at high (>2 MeV) energies. Some improvements in the low-energy response can be achieved through the use of high atomic number gases, such as argon and xenon, and higher chamber pressures. This work looks at the energy response of high-pressure xenon-filled ion chambers, using the MCNP Monte Carlo package to develop geometric models of a commercially available high pressure ion chamber (HPIC). The use of the F6 tally as an estimator of the energy deposited in a region of interest per unit mass, and the underlying assumptions associated with its use, are described. The effects of gas composition, chamber gas pressure, chamber wall thickness, and chamber holder wall thickness on the energy response are investigated and reported. The predicted energy response curve for the HPIC was found to be similar to that reported by other investigators. These investigations indicate that the overall energy response of the HPIC could be flattened down to 70 keV through the use of 3 mm-thick stainless steel walls for the ion chamber.

  20. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    Science.gov (United States)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly Monte Carlo sampled model runs were completed, and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results showed a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than the Monte Carlo estimates. This reflects the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland; Trinidad Head, California; and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. Although...
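
    Quasi-random sampling of uncertain inputs, as described here, is commonly done with a low-discrepancy sequence. The sketch below uses scipy's Sobol sampler over made-up uncertainty ranges; it reproduces the spirit of the 98-run design, not the study's actual parameters.

```python
from scipy.stats import qmc

# Hypothetical uncertain inputs: scaling factors on NOx emissions, isoprene
# emissions, and a key photolysis rate, each with an assumed uniform range.
lower = [0.7, 0.5, 0.8]
upper = [1.3, 1.5, 1.2]

sampler = qmc.Sobol(d=3, scramble=True, seed=42)
# 98 runs as in the paper; Sobol sequences prefer powers of two, which
# scipy flags with a warning, but this is fine for a sketch.
samples = qmc.scale(sampler.random(n=98), lower, upper)

for run_id, (nox, isop, jrate) in enumerate(samples):
    # each row defines one chemistry-transport model run
    print(run_id, round(nox, 3), round(isop, 3), round(jrate, 3))
```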

  1. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho (SABR) model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
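
    For orientation, the baseline that such schemes improve on is a plain Euler discretisation of the SABR dynamics dF = σ F^β dW₁, dσ = α σ dW₂ with dW₁dW₂ = ρ dt. The sketch below implements that baseline with hypothetical parameters; it is not the paper's multiple-time-step method.

```python
import numpy as np

def sabr_euler_paths(F0, sigma0, alpha, beta, rho, T, n_steps, n_paths, seed=0):
    """Baseline Euler Monte Carlo for the SABR model (absorbing at F = 0)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F = np.full(n_paths, F0)
    sig = np.full(n_paths, sigma0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        # forward: Euler step, floored at zero (absorbing boundary)
        F = np.maximum(F + sig * F**beta * np.sqrt(dt) * z1, 0.0)
        # volatility: exact log-Euler step of geometric Brownian motion
        sig = sig * np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha**2 * dt)
    return F

# price a call with strike 0.05 under hypothetical SABR parameters
F_T = sabr_euler_paths(F0=0.05, sigma0=0.2, alpha=0.4, beta=0.7,
                       rho=-0.3, T=1.0, n_steps=250, n_paths=100_000)
print(np.mean(np.maximum(F_T - 0.05, 0.0)))   # undiscounted option value
```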

  2. A Monte Carlo reflectance model for soil surfaces with three-dimensional structure

    Science.gov (United States)

    Cooper, K. D.; Smith, J. A.

    1985-01-01

    A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of the incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. The resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding-method bidirectional canopy reflectance model are also given.

  3. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate the posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model, using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identifies one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis justifies the assumptions of the error model used in the Bayesian calibration. The results indicate that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and that the likelihood function constructed accordingly can alleviate the underestimation of parameter uncertainty usually caused by using uncorrelated error models.
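
    The error model enters such a calibration only through the likelihood inside the MCMC loop. A compact random-walk Metropolis sketch with a toy two-parameter model and a heteroscedastic Gaussian likelihood follows; DREAM itself runs multiple interacting chains, which is not reproduced here, and all model choices below are illustrative assumptions.

```python
import numpy as np

def log_likelihood(theta, t, y):
    """Heteroscedastic Gaussian errors: the error standard deviation is
    assumed proportional to the model prediction (toy assumption)."""
    pred = theta[0] * np.exp(-theta[1] * t)      # toy 2-parameter model
    sigma = 0.05 * np.abs(pred) + 1e-6
    return np.sum(-0.5 * ((y - pred) / sigma) ** 2 - np.log(sigma))

def metropolis(t, y, theta0, step, n_iter=20_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    logp = log_likelihood(theta, t, y)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_likelihood(prop, t, y)
        if np.log(rng.random()) < logp_prop - logp:   # accept/reject
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

t = np.linspace(0, 5, 50)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-0.7 * t) * (1 + 0.05 * rng.standard_normal(t.size))
chain = metropolis(t, y, theta0=[1.0, 1.0], step=np.array([0.02, 0.02]))
print(chain[10_000:].mean(axis=0))   # posterior means after burn-in
```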

  4. Sky-Radiance Models for Monte Carlo Radiative Transfer Applications

    Science.gov (United States)

    Santos, I.; Dalimonte, D.; Santos, J. P.

    2012-04-01

    Photon-tracing can be initialized through sky-radiance (Lsky) distribution models when executing Monte Carlo simulations for ocean color studies. To be effective, the Lsky model should: 1) properly represent the sky-radiance features of interest; 2) require low computing time; and 3) depend on a limited number of input parameters. The present study verifies the satisfiability of these prerequisites by comparing results from different Lsky formulations. Specifically, two Lsky models were considered as reference cases because of their different approaches among the solutions presented in the literature. The first model, developed by Harrison and Coombes (HC), is based on a parametric expression where the sun geometry is the unique input. The HC model is one of the analytical sky-radiance distributions applied in state-of-the-art simulations for ocean optics. The coefficients of the HC model were set upon broad-band field measurements, and the result is a model that requires few implementation steps. The second model, implemented by Zibordi and Voss (ZV), is based on physical expressions that account for the optical thickness of permanent gases, aerosol, ozone and water vapour at specific wavelengths. Inter-comparisons between the normalized Lsky^ZV and Lsky^HC distributions (i.e., with unitary scalar irradiance) are discussed by means of individual polar maps and percent differences between sky-radiance distributions. Sky-radiance cross-sections are presented as well. The considered cases include different sun zenith values and wavelengths (λ = 413, 490 and 665 nm, corresponding to selected center-bands of the MEdium Resolution Imaging Spectrometer MERIS). Results have shown a significant convergence between Lsky^HC and Lsky^ZV at 665 nm. Differences between the models increase with the sun zenith and mostly with wavelength. For instance, relative differences up to 50% between Lsky^HC and Lsky^ZV can be observed in the antisolar region for λ = 665 nm and θ* = 45°. The effects of these...

  5. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well...

  6. Monte Carlo study of superconductivity in the three-band Emery model

    International Nuclear Information System (INIS)

    Frick, M.; Pattnaik, P.C.; Morgenstern, I.; Newns, D.M.; von der Linden, W.

    1990-01-01

    We have examined the three-band Hubbard model for the copper oxide planes in high-temperature superconductors using the projector quantum Monte Carlo method. We find no evidence for s-wave superconductivity

  7. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.

    2003-01-01

    Fulfillment of the energy-momentum conservation law, as well as the charge, baryon and lepton number conservation is checked for the HIJING Monte Carlo program in $pp$-interactions at $\\sqrt{s}=$ 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved, the deviation from zero is at the level of 1--2 GeV/c, and it is connected with the hard jet production. The deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. Azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.
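
    The conservation checks described are straightforward to express: sum the four-momenta and quantum numbers of all final-state particles and compare with the initial state. A schematic version for one generated event follows; the event-record layout (columns for momentum components, energy, charge, baryon number) is a hypothetical convention, not the HIJING format.

```python
import numpy as np

def check_conservation(event, sqrt_s):
    """event: array (n_particles, 6) with columns px, py, pz, E (GeV)
    plus charge and baryon number; the initial pp state is taken at
    rest in the centre-of-mass frame."""
    px, py, pz, E = (event[:, i].sum() for i in range(4))
    charge, baryon = event[:, 4].sum(), event[:, 5].sum()
    return {
        "dE (GeV)":    E - sqrt_s,        # should vanish event by event
        "pT (GeV/c)":  np.hypot(px, py),  # transverse momentum imbalance
        "pz (GeV/c)":  pz,
        "charge - 2":  charge - 2,        # two incoming protons
        "baryon - 2":  baryon - 2,
    }
```

    Histogramming these residuals over many events would reproduce the kind of test reported above, e.g. the 1-2 GeV/c transverse-momentum imbalance attributed to hard jet production.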

  8. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
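
    A sketch of the Monte Carlo covariance estimation described here: propagate assumed random and systematic uncertainty sources through the comparison error, estimate the covariance matrix from the samples, and derive the 95% constant-probability contour from the chi-square quantile. The numbers below are toy values for two quantities of interest, not the paper's example.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_mc = 100_000

# Toy comparison error e = (experiment - model) for two quantities of
# interest, built from assumed random and shared systematic sources.
random_err = rng.normal(0.0, [0.8, 0.5], size=(n_mc, 2))
systematic = rng.normal(0.0, 0.6, size=(n_mc, 1)) * np.array([1.0, 0.7])
e = random_err + systematic       # the shared source correlates the errors

cov = np.cov(e, rowvar=False)     # Monte Carlo estimate of the covariance
# 95% constant-probability ellipse: e^T cov^-1 e <= chi2_0.95 with 2 dof
r2_95 = chi2.ppf(0.95, df=2)
eigvals, _ = np.linalg.eigh(cov)
print("covariance:\n", cov)
print("ellipse semi-axes:", np.sqrt(eigvals * r2_95))
```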

  9. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  10. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency from the inclusion of additional information. Which of the two proposed approaches to use, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares the performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power to the HCM in our data. • The costs of estimating the HCM seem justified when learning more about taste heterogeneity is a major study objective.

  11. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency from the inclusion of additional information. Which of the two proposed approaches to use, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares the performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power to the HCM in our data. • The costs of estimating the HCM seem justified when learning more about taste heterogeneity is a major study objective.

  12. Application of MCAM in generating Monte Carlo model for ITER port limiter

    International Nuclear Information System (INIS)

    Lu Lei; Li Ying; Ding Aiping; Zeng Qin; Huang Chenyu; Wu Yican

    2007-01-01

    On the basis of the pre-processing and conversion functions supplied by MCAM (Monte-Carlo Particle Transport Calculation Automatic Modeling System), this paper presents the generation of the ITER port limiter MC (Monte Carlo) calculation model from the CAD engineering model. The result was validated using the reverse function of MCAM and the MCNP PLOT 2D cross-section drawing program. The successful application of MCAM to the ITER port limiter demonstrates that MCAM can dramatically increase the efficiency and accuracy of generating MC calculation models from CAD engineering models with complex geometry, compared with the traditional manual modeling method. (authors)

  13. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    Science.gov (United States)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
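
    The Wolff cluster update used in this study can be written compactly: grow a cluster from a random seed by adding aligned neighbours with probability p = 1 − e^(−2K), then flip the whole cluster. A small-lattice sketch follows; the production runs additionally use histogram reweighting, cross-correlation analysis, and high-quality random number generators, none of which are reproduced here.

```python
import numpy as np
from collections import deque

def wolff_update(spins, K, rng):
    """One Wolff cluster flip for the simple cubic Ising model with
    periodic boundaries; K = J/(kT), p_add = 1 - exp(-2K)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * K)
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = spins[seed]          # remember the seed's orientation
    spins[seed] *= -1                   # flip sites as they join the cluster
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            nb = ((x+dx) % L, (y+dy) % L, (z+dz) % L)
            if spins[nb] == cluster_spin and rng.random() < p_add:
                spins[nb] *= -1
                queue.append(nb)

L, K = 16, 0.221654626                  # near the critical coupling
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L, L))
for _ in range(1000):
    wolff_update(spins, K, rng)
print(abs(spins.mean()))                # |magnetization| per site
```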

  14. Using a cloud to replenish parched groundwater modeling efforts.

    Science.gov (United States)

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  15. Using a cloud to replenish parched groundwater modeling efforts

    Science.gov (United States)

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  16. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k_eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k_eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k_eff values for individual generations in the computer simulation, not the standard deviation of the computed k_eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that all the k_eff values from the separate generations are not statistically independent, since the k_eff of a given generation is a function of the k_eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k_eff are needed.
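
    The gap between the naive per-generation standard deviation and one that respects the generation-to-generation correlation can be illustrated with a batch-means estimate on synthetic correlated data (this is a generic statistical illustration, not output from an actual criticality code):

```python
import numpy as np

def batch_means_stderr(x, n_batches=20):
    """Standard error of the mean from batch means: averaging within
    batches longer than the correlation length restores independence."""
    n = len(x) // n_batches
    batches = x[:n * n_batches].reshape(n_batches, n).mean(axis=1)
    return batches.std(ddof=1) / np.sqrt(n_batches)

rng = np.random.default_rng(0)
# Synthetic k_eff-per-generation series with AR(1) correlation (rho=0.8),
# mimicking each generation's dependence on its predecessor's source.
rho, n_gen = 0.8, 10_000
eps = rng.normal(0, 0.003, n_gen)
keff = np.empty(n_gen)
keff[0] = 1.0
for i in range(1, n_gen):
    keff[i] = 1.0 + rho * (keff[i-1] - 1.0) + eps[i]

naive = keff.std(ddof=1) / np.sqrt(n_gen)
print(f"naive stderr:       {naive:.2e}")
print(f"batch-means stderr: {batch_means_stderr(keff):.2e}")   # ~3x larger
```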

  17. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Science.gov (United States)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
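
    The random-walk Metropolis approach proposed here for sampling multi-dimensional vibrational energy distributions can be sketched for a set of quantized simple harmonic oscillator modes in thermal equilibrium, where the target is P(n) ∝ exp(−Σ_k n_k θ_k / T). The parameters below are rough textbook values, not the paper's implementation.

```python
import numpy as np

def metropolis_vibrational(theta_vib, T, n_samples, rng):
    """Random-walk Metropolis over the quantum numbers of several harmonic
    vibrational modes; target P(n) ~ exp(-sum_k n_k * theta_k / T).

    theta_vib -- characteristic vibrational temperatures of the modes (K)
    T         -- equilibrium temperature (K)
    """
    K = len(theta_vib)
    n = np.zeros(K, dtype=int)
    samples = np.empty((n_samples, K), dtype=int)
    for i in range(n_samples):
        k = rng.integers(K)                  # pick one mode
        prop = n.copy()
        prop[k] += rng.choice([-1, 1])       # symmetric random-walk step
        if prop[k] >= 0:                     # reject moves below the ground state
            log_ratio = -(prop[k] - n[k]) * theta_vib[k] / T
            if np.log(rng.random()) < log_ratio:
                n = prop
        samples[i] = n
    return samples

rng = np.random.default_rng(0)
# Roughly CO2-like vibrational temperatures (K); the doubly degenerate
# bending mode is represented by two equal entries (assumed values).
theta = np.array([960.0, 960.0, 1990.0, 3380.0])
samples = metropolis_vibrational(theta, T=2000.0, n_samples=100_000, rng=rng)
print(samples[10_000:].mean(axis=0))   # compare to 1/(exp(theta/T) - 1)
```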

  18. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    International Nuclear Information System (INIS)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-01-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn’s Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder

  19. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Energy Technology Data Exchange (ETDEWEB)

    Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de; Fasoulas, S., E-mail: fasoulas@irs.uni-stuttgart.de [Institute of Space Systems, University of Stuttgart, Pfaffenwaldring 29, D-70569 Stuttgart (Germany)

    2016-02-15

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn’s Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  20. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    NARCIS (Netherlands)

    Machguth, H.; Purves, R.S.; Oerlemans, J.; Hoelzle, M.; Paul, F.

    2008-01-01

    By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was

  1. Lessons learned from HRA and human-system modeling efforts

    International Nuclear Information System (INIS)

    Hallbert, B.P.

    1993-01-01

    Human-system modeling is not unique to the field of Human Reliability Analysis (HRA). Since human factors professionals first began their explorations of human activities, they have done so with the concept of a "system" in mind. Though the two - human and system - are distinct, they can be properly understood only in terms of each other: the system provides a context in which goals and objectives for work are defined, and the human plays either a pre-defined or an ad hoc role in meeting these goals. In this sense, every intervention which attempts to evaluate or improve upon some system parameter requires that an understanding of human-system interactions be developed. It is too often the case, however, that somewhere between the inception of a system and its implementation, the human-system relationships are overlooked, misunderstood, or inadequately framed. This results in mismatches between the demands placed on human operators and their capabilities, in systems which are difficult to operate, and in the obvious end product: human error. The lessons learned from human-system modeling provide a valuable feedback mechanism to the process of HRA and to the technologies which employ this form of modeling.

  2. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan

    2015-01-01

    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte-Carlo simulation. It describes various gating mechanisms based on the physical differences between the unscattered and scattered photons, and a method for microscopic image reconstruction using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging, in which no objective is used. This book serves as a valuable reference for engineers and scientists working on the microscopy of turbid tissue media.

  3. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors into treatment plans. The goal of this work was to implement a Monte Carlo based inverse planning model for clinical IMRT that minimizes these errors. The strategy was to precalculate the dose matrices of beamlets with a Monte Carlo based method and then optimize the beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport, dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand times. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided a simple and accurate representation of treatment beams. The dose matrix calculations were validated through measurements in phantoms; agreement was better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform-based optimization algorithm capable of escaping local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases, demonstrating the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Systematic errors in treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations; discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated.
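
    The precalculate-then-optimize strategy can be illustrated with a toy sketch: given a (random, stand-in) beamlet dose matrix, beamlet weights are fitted to a prescription by projected gradient descent. The paper's actual parallel optimizer is not reproduced here; matrix sizes and data are invented for illustration.

```python
# Toy beamlet-based inverse planning: precomputed dose matrix D (voxels x
# beamlets), then nonnegative beamlet weights w fitted by projected gradient
# descent on 0.5 * ||D w - prescription||^2. Stand-in data, not clinical.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 30
D = rng.random((n_voxels, n_beamlets))      # stand-in precomputed dose matrix
prescription = rng.random(n_voxels)         # stand-in desired dose per voxel

w = np.zeros(n_beamlets)
step = 1.0 / np.linalg.norm(D, 2) ** 2      # safe step size (1 / sigma_max^2)
for _ in range(2000):
    grad = D.T @ (D @ w - prescription)     # gradient of the quadratic misfit
    w = np.maximum(w - step * grad, 0.0)    # project onto w >= 0

print("residual norm:", np.linalg.norm(D @ w - prescription))
```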

  4. Efforts and Models of Education for Parents: the Danish Approach

    Directory of Open Access Journals (Sweden)

    Rosendal Jensen, Niels

    2009-12-01

    ...to underline that Danish welfare policy has been changing rather radically. The classic model understood welfare as social assurance and/or social distribution, based on social solidarity. The modern model treats welfare as a social service and/or a social investment. This means that citizens are changing role, from user and/or citizen to consumer and/or investor. In line with decisions taken by the government, the Danish state is investing in a national future shaped by global competition. The new models of welfare, "service" and "investment", imply severe changes in hitherto known concepts of family life, of the relationship between parents and children, etc. As an example, the investment model points to a new implementation of the relationship between social rights and the rights of freedom. The service model has demonstrated the weakness that access to qualified services in the fields of health and education is becoming more and more dependent on private purchasing power. The weakness of the investment model is that it represents a sort of "the winner takes it all", since a political majority is enabled to set agendas in societal fields formerly protected by the tripartite power and the citizens' rights of freedom. The outcome of the Danish development seems to be the establishment of a politically governed public service industry which on the one hand is capable of competing under market conditions and on the other can be governed by contracts. This represents a new form of close linking of politics, economy and professional work. Attempts at controlling education, pedagogy and thereby the population are not a recent invention; in European history we could easily point to several such experiments. The real news is the linking of political priorities to the exercise of public activities through economic incentives, by defining visible goals for the public servants, by introducing measurement of achievements and...

  5. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  6. Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation

    International Nuclear Information System (INIS)

    Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M

    2004-01-01

    The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams.

  7. Automatic modeling for the Monte Carlo transport code Geant4 in MCAM

    International Nuclear Information System (INIS)

    Nie Fanzhi; Hu Liqin; Wang Guozhong; Wang Dianxi; Wu Yican; Wang Dong; Long Pengcheng; FDS Team

    2014-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Its geometry models can be described in the Geometry Description Markup Language (GDML), but it is time-consuming and error-prone to write such descriptions manually. This study implemented the conversion between computer-aided design (CAD) geometry models and GDML models. The conversion program was integrated into the Multi-Physics Coupling Analysis Modeling Program (MCAM). The tests, including the FDS-II model, demonstrated its accuracy and feasibility. (authors)

  8. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  9. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that converts bidirectionally between CAD models and primitive solids. • The method improves on an earlier conversion method between CAD models and half-spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time-consuming and error-prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD modeling tools, an automatic method for accurate and prompt conversion between CAD models and primitive solids is needed. Such an automatic modeling method was developed for Monte Carlo geometry described by primitive solids, capable of converting in both directions between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. The method was integrated in SuperMC and benchmarked with the ITER benchmark model. Its correctness and efficiency were demonstrated.

  10. Nuclear Hybrid Energy Systems FY16 Modeling Efforts at ORNL

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Guler Yigitoglu, Askin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-01

    A nuclear hybrid system uses a nuclear reactor as the basic power generation unit. The power generated by the nuclear reactor is utilized by one or more power customers as either thermal power, electrical power, or both. In general, a nuclear hybrid system will couple the nuclear reactor to at least one thermal power user in addition to the power conversion system. The definition and architecture of a particular nuclear hybrid system is flexible depending on local market needs and opportunities. For example, locations in need of potable water may be best served by coupling a desalination plant to the nuclear system. Similarly, an area near oil refineries may have a need for emission-free hydrogen production. A nuclear hybrid system expands the nuclear power plant from its more familiar central power station role by diversifying its immediately and directly connected customer base. The definition, design, analysis, and optimization work currently performed with respect to nuclear hybrid systems represents the work of three national laboratories: Idaho National Laboratory (INL) is the lead lab, working with Argonne National Laboratory (ANL) and Oak Ridge National Laboratory (ORNL). Each laboratory provides modeling and simulation expertise for the integration of the hybrid system.

  11. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-05

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software that is thoroughly documented, supports complex molecules, is written in modern programming languages (Python, Cython, C and C++ where performance is needed) and complies with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves, supported by reinforcement machine learning, to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.
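
    For orientation, a bare-bones traditional RMC loop (the baseline that fullrmc generalizes) might look as follows. The "experimental" histogram and the goodness-of-fit function are placeholders, and none of this uses the fullrmc API.

```python
# Minimal Reverse Monte Carlo step: move one atom, recompute a chi^2 misfit
# against stand-in "experimental" data, and accept or reject the move.
import numpy as np

rng = np.random.default_rng(1)

def chi_squared(positions, g_exp, bins):
    """Toy chi^2 between a model pair-distance histogram and 'experiment'."""
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    hist, _ = np.histogram(d[np.triu_indices(len(positions), 1)], bins=bins)
    return np.sum((hist - g_exp) ** 2)

positions = rng.random((50, 3)) * 10.0         # 50 atoms in a 10x10x10 box
bins = np.linspace(0.0, 10.0, 21)
g_exp = np.full(20, 6.0)                       # placeholder "measured" data

chi2 = chi_squared(positions, g_exp, bins)
for _ in range(1000):
    trial = positions.copy()
    trial[rng.integers(50)] += rng.normal(scale=0.1, size=3)  # move one atom
    chi2_new = chi_squared(trial, g_exp, bins)
    # accept improving moves always, worsening moves with Boltzmann-like odds
    if chi2_new <= chi2 or rng.random() < np.exp(chi2 - chi2_new):
        positions, chi2 = trial, chi2_new
print("final chi2:", chi2)
```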

  12. Monte Carlo modeling for realizing optimized management of failed fuel replacement

    International Nuclear Information System (INIS)

    Morishita, Kazunori; Yamamoto, Yasunori; Nakasuji, Toshiki

    2014-01-01

    Fuel cladding is one of the key components in a fission reactor for keeping radioactive materials confined inside the fuel tube. During reactor operation, however, the cladding is sometimes breached, and radioactive materials leak from the ceramic fuel pellet into the coolant water through the breach. The primary coolant water is therefore monitored so that any leak is quickly detected: the coolant water is periodically sampled and the concentration of, for example, the radioactive iodine-131 (I-131) is measured. Depending on the measured concentration, the faulty fuel assembly with the leaking rod is removed from the reactor and replaced by a new one, either immediately or at the next refueling. In the present study, an effort has been made to develop a methodology for optimizing the management of failed fuel replacement due to cladding failures, using the I-131 concentration measured in the sampled coolant water. A model equation is proposed to describe the time evolution of the I-131 concentration due to fuel leaks, and it is then solved using the Monte Carlo method as a function of the sampling rate. Our results indicate that, in order to achieve rationalized management of failed fuels, a higher resolution for detecting small amounts of I-131 is not necessarily required, but more frequent sampling is favorable. (author)
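
    A heavily simplified toy version of the sampling-rate question is sketched below. The record's actual model equation is not reproduced, so the build-up curve, detection limit and time constants are invented for illustration only.

```python
# Toy Monte Carlo of detection delay vs. coolant sampling interval: a leak
# starts at a random time, the (normalized) I-131 concentration builds toward
# a plateau, and periodic samples detect it once it crosses a detection limit.
# All parameter values are invented placeholders.
import math
import random

def detection_delay(sample_interval_h, limit=0.2, rise_h=24.0, horizon_h=720.0):
    t_leak = random.uniform(0.0, horizon_h)   # leak onset time [h]
    t = 0.0
    while t < t_leak + 10 * rise_h:
        t += sample_interval_h                # take the next periodic sample
        if t > t_leak:
            conc = 1.0 - math.exp(-(t - t_leak) / rise_h)  # toy build-up curve
            if conc > limit:
                return t - t_leak             # delay from leak to detection
    return float("nan")

for interval in (6.0, 24.0, 72.0):
    delays = [detection_delay(interval) for _ in range(2000)]
    print(f"sampling every {interval:5.1f} h -> mean delay "
          f"{sum(delays) / len(delays):.1f} h")
```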

  13. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.

  14. Monte Carlo simulation of diblock copolymer microphases by means of a 'fast' off-lattice model

    DEFF Research Database (Denmark)

    Besold, Gerhard; Hassager, O.; Mouritsen, Ole G.

    1999-01-01

    We present a mesoscopic off-lattice model for the simulation of diblock copolymer melts by Monte Carlo techniques. A single copolymer molecule is modeled as a discrete Edwards chain consisting of two blocks with vertices of type A and B, respectively. The volume interaction is formulated in terms...

  15. Modelling of the RA-1 reactor using a Monte Carlo code

    International Nuclear Information System (INIS)

    Quinteiro, Guillermo F.; Calabrese, Carlos R.

    2000-01-01

    A model of the Argentine RA-1 reactor was developed for the first time using the MCNP Monte Carlo code. The model was validated using experimental neutron and gamma measurements at different energy ranges and locations. In addition, the resulting fluxes were compared with the data obtained using a 3D diffusion code. (author)

  16. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
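
    The core idea of treating many particle events at once can be sketched with array operations (NumPy standing in for vector hardware). The one-dimensional slab physics below is a toy, not the multigroup treatment of the paper.

```python
# Batch-oriented ("vectorized") Monte Carlo sketch: a whole batch of particles
# is advanced with array operations instead of one history at a time.
# 1-D slab, isotropic toy scattering; all physics parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
mu_t, c, slab, batch = 1.0, 0.8, 5.0, 100_000  # cross section, scatter prob,
                                               # slab thickness, batch size
x = np.zeros(batch)                  # particle positions
u = np.ones(batch)                   # directions (+1 or -1 in this 1-D toy)
alive = np.ones(batch, dtype=bool)
transmitted = 0
while alive.any():
    # sample free flights for every live particle in one vector operation
    x[alive] += u[alive] * rng.exponential(1.0 / mu_t, size=alive.sum())
    transmitted += int((alive & (x > slab)).sum())
    alive &= (x >= 0.0) & (x <= slab)          # leaked particles are finished
    idx = np.flatnonzero(alive)
    survives = rng.random(idx.size) < c        # scatter or absorb at collision
    alive[idx[~survives]] = False
    u[idx[survives]] = rng.choice([-1.0, 1.0], size=int(survives.sum()))

print("transmitted fraction:", transmitted / batch)
```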

  17. Parameter uncertainty and model predictions: a review of Monte Carlo results

    International Nuclear Information System (INIS)

    Gardner, R.H.; O'Neill, R.V.

    1979-01-01

    Studies of parameter variability by Monte Carlo analysis are reviewed. The model is simulated repeatedly with randomly selected parameter values: at the beginning of each simulation, parameter values are chosen from specified frequency distributions, and this process is continued for a number of iterations sufficient to converge on an estimate of the frequency distribution of the output variables. The purpose was to explore the general properties of error propagation in models. Testing the implicit assumptions of analytical methods and pointing out counter-intuitive results produced by the Monte Carlo approach are additional points covered
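
    The procedure described in this record reduces to a few lines; in this sketch the two-parameter model and the parameter distributions are invented for illustration.

```python
# Monte Carlo parameter-uncertainty propagation in miniature: draw parameters
# from assumed frequency distributions, run the model repeatedly, and build up
# the output distribution. Model and distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def model(k1, k2, t=10.0):
    """Toy two-parameter model, e.g. a decaying pool with two loss rates."""
    return 100.0 * np.exp(-(k1 + k2) * t)

k1 = rng.normal(0.05, 0.01, size=10_000)   # assumed parameter distributions
k2 = rng.uniform(0.01, 0.03, size=10_000)
out = model(k1, k2)                        # one model run per parameter draw

print(f"mean = {out.mean():.2f}, 95% interval = "
      f"({np.percentile(out, 2.5):.2f}, {np.percentile(out, 97.5):.2f})")
```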

  18. Monte Carlo Simulations of Compressible Ising Models: Do We Understand Them?

    Science.gov (United States)

    Landau, D. P.; Dünweg, B.; Laradji, M.; Tavazza, F.; Adler, J.; Cannavaccioulo, L.; Zhu, X.

    Extensive Monte Carlo simulations have begun to shed light on our understanding of phase transitions and universality classes for compressible Ising models. A comprehensive analysis of a Landau-Ginzburg-Wilson Hamiltonian for systems with elastic degrees of freedom resulted in the prediction that there should be four distinct cases that would have different behavior, depending upon symmetries and thermodynamic constraints. We shall provide an account of the results of careful Monte Carlo simulations for a simple compressible Ising model that can be suitably modified so as to replicate all four cases.

  19. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL; McManamay, Ryan A [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; Surendran Nair, Sujithkumar [ORNL

    2016-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques that depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Keywords: energy modeling, water modeling, Monte-Carlo simulation, uncertainty quantification.

  20. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2013-01-01

    The stochastic volatility (SV) model is one of the volatility models that infer the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility, and compare the SV model with the GARCH model, another volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
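
    A skeleton of a single HMC update (leapfrog integration plus a Metropolis accept/reject) is shown below on a standard normal target; an SV posterior would supply its own log density and gradient in place of the toy ones.

```python
# Bare-bones hybrid (Hamiltonian) Monte Carlo step for a generic log-density.
# Demonstrated on a standard normal target; step size and path length are
# illustrative choices, not tuned values from the paper.
import math
import random

def hmc_step(x, logp, grad_logp, eps=0.1, n_leap=20):
    p = random.gauss(0.0, 1.0)                     # fresh Gaussian momentum
    x_new, p_new = x, p + 0.5 * eps * grad_logp(x) # initial half kick
    for _ in range(n_leap):                        # leapfrog trajectory
        x_new = x_new + eps * p_new
        p_new = p_new + eps * grad_logp(x_new)
    p_new = p_new - 0.5 * eps * grad_logp(x_new)   # undo the extra half kick
    dH = (logp(x_new) - 0.5 * p_new**2) - (logp(x) - 0.5 * p**2)
    return x_new if random.random() < math.exp(min(0.0, dH)) else x

logp = lambda x: -0.5 * x * x   # standard normal target (up to a constant)
grad = lambda x: -x
x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x, logp, grad)
    samples.append(x)
print("sample mean (should be near 0):", sum(samples) / len(samples))
```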

  1. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic-regression-based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed with machine-learning algorithms. • The surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regression are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
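
    The surrogate idea can be sketched as follows, assuming scikit-learn and NetworkX are available; the grid graph, failure probabilities and sample sizes are illustrative stand-ins for the California network study.

```python
# Surrogate-accelerated connectivity Monte Carlo (sketch): train a classifier
# on a few exactly-checked failure realizations, then let it predict
# source-terminal connectivity for the bulk of the samples.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

rng = np.random.default_rng(4)
G = nx.grid_2d_graph(5, 5)                   # stand-in infrastructure graph
edges = list(G.edges())
s, t = (0, 0), (4, 4)                        # source and terminal nodes

def connected(state):
    """Exact (slow) connectivity check for one edge up/down realization."""
    H = nx.Graph(e for e, up in zip(edges, state) if up)
    return H.has_node(s) and H.has_node(t) and nx.has_path(H, s, t)

X = rng.random((500, len(edges))) < 0.7      # training realizations (70% up)
y = np.array([connected(row) for row in X])  # exact labels
clf = SVC(kernel="rbf").fit(X, y)            # SVM surrogate of the check

X_big = rng.random((5000, len(edges))) < 0.7 # bulk Monte Carlo samples
print("surrogate P(connected):", clf.predict(X_big).mean())
```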

  2. Modelling of the RA-1 reactor using a Monte Carlo code; Modelado del reactor RA-1 utilizando un codigo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Quinteiro, Guillermo F; Calabrese, Carlos R [Comision Nacional de Energia Atomica, General San Martin (Argentina). Dept. de Reactores y Centrales Nucleares

    2000-07-01

    A model of the Argentine RA-1 reactor was developed for the first time using the MCNP Monte Carlo code. The model was validated using experimental neutron and gamma measurements at different energy ranges and locations. In addition, the resulting fluxes were compared with the data obtained using a 3D diffusion code. (author)

  3. Converting boundary representation solid models to half-space representation models for Monte Carlo analysis

    International Nuclear Information System (INIS)

    Davis, J. E.; Eddy, M. J.; Sutton, T. M.; Altomari, T. J.

    2007-01-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces - a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation. (authors)

  4. Modeling to Mars: a NASA Model Based Systems Engineering Pathfinder Effort

    Science.gov (United States)

    Phojanamongkolkij, Nipa; Lee, Kristopher A.; Miller, Scott T.; Vorndran, Kenneth A.; Vaden, Karl R.; Ross, Eric P.; Powell, Bobby C.; Moses, Robert W.

    2017-01-01

    The NASA Engineering Safety Center (NESC) Systems Engineering (SE) Technical Discipline Team (TDT) initiated the Model Based Systems Engineering (MBSE) Pathfinder effort in FY16. The goals and objectives of the MBSE Pathfinder include developing and advancing MBSE capability across NASA, applying MBSE to real NASA issues, and capturing issues and opportunities surrounding MBSE. The Pathfinder effort consisted of four teams, with each team addressing a particular focus area. This paper focuses on Pathfinder team 1, whose focus area was architectures and mission campaigns. These efforts covered the timeframe of February 2016 through September 2016. The team comprised eight members from seven NASA Centers (Glenn Research Center, Langley Research Center, Ames Research Center, Goddard Space Flight Center IV&V Facility, Johnson Space Center, Marshall Space Flight Center, and Stennis Space Center). Collectively, the team had varying levels of knowledge, skills and expertise in systems engineering and MBSE. The team applied their existing and newly acquired system modeling knowledge and expertise to develop modeling products for a campaign (Program) of crew and cargo missions (Projects) to establish a human presence on Mars utilizing In-Situ Resource Utilization (ISRU). Pathfinder team 1 developed a subset of modeling products that are required for a Program System Requirement Review (SRR)/System Design Review (SDR) and Project Mission Concept Review (MCR)/SRR as defined in NASA Procedural Requirements. Additionally, Team 1 was able to perform and demonstrate some trades and constraint analyses. At the end of these efforts, over twenty lessons learned and recommended next steps had been identified.

  5. A Monte Carlo Investigation of the Box-Cox Model and a Nonlinear Least Squares Alternative.

    OpenAIRE

    Showalter, Mark H

    1994-01-01

    This paper reports a Monte Carlo study of the Box-Cox model and a nonlinear least squares alternative. Key results include the following: the transformation parameter in the Box-Cox model appears to be inconsistently estimated in the presence of conditional heteroskedasticity; the constant term in both the Box-Cox and the nonlinear least squares models is poorly estimated in small samples; conditional mean forecasts tend to underestimate their true value in the Box-Cox model when the transfor...

  6. Quasi-Monte Carlo methods: applications to modeling of light transport in tissue

    Science.gov (United States)

    Schafer, Steven A.

    1996-05-01

    Monte Carlo modeling of light propagation can accurately predict the distribution of light in scattering materials. A drawback of Monte Carlo methods is that they converge inversely with the square root of the number of iterations. Theoretical considerations suggest that convergence which scales inversely with the first power of the number of iterations is possible. We have previously shown that one can obtain at least a portion of that improvement by using van der Corput sequences in place of a conventional pseudo-random number generator. Here, we present our further analysis, and show that quasi-Monte Carlo methods do have limited applicability to light scattering problems. We also discuss potential improvements which may increase the applicability.
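
    A van der Corput sequence, of the kind substituted for a pseudo-random generator in the study, is easy to generate; this base-b radical-inverse routine is a standard construction, not the authors' code.

```python
# Van der Corput sequence (radical inverse in base b): the low-discrepancy
# numbers used in place of pseudo-random draws in quasi-Monte Carlo transport.
def van_der_corput(n, base=2):
    """Return the n-th element (n >= 1) of the base-b van der Corput sequence."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, rem = divmod(n, base)   # peel off digits of n in the given base
        denom *= base
        q += rem / denom           # mirror them across the radix point
    return q

# First few base-2 elements: 0.5, 0.25, 0.75, 0.125, 0.625
print([van_der_corput(i) for i in range(1, 6)])
```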

  7. Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center of mass energy of √(s)=13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach of these data which is complementary to dedicated analyses: By taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.

  8. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility

    International Nuclear Information System (INIS)

    Galford, J.E.

    2017-01-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. - Highlights: • A Monte Carlo alternative is proposed to replace empirical calibration procedures. • The proposed Monte Carlo alternative preserves the original API unit definition. • MCNP source and materials descriptions are provided for the API gamma ray pit. • Simulated results are presented for several wireline logging tool designs. • The proposed method can be adapted for use with logging-while-drilling tools.

  9. Competing probabilistic models for catch-effort relationships in wildlife censuses

    Energy Technology Data Exchange (ETDEWEB)

    Skalski, J.R.; Robson, D.S.; Matsuzaki, C.L.

    1983-01-01

    Two probabilistic models are presented for describing the chance that an animal is captured during a wildlife census, as a function of trapping effort. The models in turn are used to propose relationships between sampling intensity and catch-per-unit-effort (C.P.U.E.) that were field-tested on small mammal populations. Capture data suggest a model of diminishing C.P.U.E. with increasing levels of trapping intensity. The catch-effort model is used to illustrate optimization procedures in the design of mark-recapture experiments for censusing wild populations. 14 references, 2 tables.
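
    One plausible functional form consistent with the described behavior is a capture probability p(E) = 1 - exp(-aE), under which C.P.U.E. declines with effort E; the coefficient and population size below are illustrative, and this is not necessarily the paper's exact model.

```python
# Illustrative catch-effort relationship with diminishing returns:
# per-animal capture probability p(E) = 1 - exp(-a E), so expected
# catch-per-unit-effort N*p(E)/E falls as trapping effort E rises.
import math

def expected_cpue(effort, population=100, a=0.05):
    p = 1.0 - math.exp(-a * effort)   # per-animal capture probability
    return population * p / effort    # expected catch per unit of effort

for E in (10, 50, 100, 200):
    print(f"effort {E:4d} trap-nights -> CPUE {expected_cpue(E):.2f}")
```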

  10. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h...

  11. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior

  12. Monte Carlo study of the phase diagram for the two-dimensional Z(4) model

    International Nuclear Information System (INIS)

    Carneiro, G.M.; Pol, M.E.; Zagury, N.

    1982-05-01

    The phase diagram of the two-dimensional Z(4) model on a square lattice is determined using a Monte Carlo method. The results of this simulation confirm the general features of the phase diagram predicted theoretically for the ferromagnetic case, and show the existence of a new phase with perpendicular order. (Author)

  13. Hamiltonian Monte Carlo study of (1+1)-dimensional models with restricted supersymmetry on the lattice

    International Nuclear Information System (INIS)

    Ranft, J.; Schiller, A.

    1984-01-01

    Lattice versions with restricted supersymmetry of simple (1+1)-dimensional supersymmetric models are numerically studied using a local Hamiltonian Monte Carlo method. The pattern of supersymmetry breaking closely follows the expectations of Bartels and Bronzan obtained in an alternative lattice formulation. (orig.)

  14. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this opti

  15. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  16. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
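
    The decorator-based design can be sketched in a few lines; the class names below are illustrative, not SKIRT's actual API, and random positions are drawn here by simple rejection sampling rather than SKIRT's customized generators.

```python
# Decorator-style composition of density components (sketch): a basic profile
# is wrapped by a decorator that modulates it, and random positions are drawn
# by rejection sampling. Class names and parameters are illustrative only.
import math
import random

class Plummer:
    def density(self, x, y, z):
        r2 = x*x + y*y + z*z
        return (1.0 + r2) ** -2.5            # Plummer-like profile (scale a = 1)

class Clumpy:
    """Decorator: multiplies any component's density by a lumpy modulation."""
    def __init__(self, base, amplitude=0.5):
        self.base, self.amplitude = base, amplitude
    def density(self, x, y, z):
        mod = 1.0 + self.amplitude * math.sin(5*x) * math.sin(5*y) * math.sin(5*z)
        return self.base.density(x, y, z) * mod

def random_position(component, box=5.0, rho_max=1.5):
    """Rejection-sample a random position from the component's density."""
    while True:
        x, y, z = (random.uniform(-box, box) for _ in range(3))
        if random.uniform(0.0, rho_max) < component.density(x, y, z):
            return x, y, z

model = Clumpy(Plummer())                    # decorators chain freely
print(random_position(model))
```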

  17. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
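
    The spectrum-derivation step can be caricatured as a small Levenberg-Marquardt least-squares problem, here with SciPy and synthetic monoenergetic depth-dose curves standing in for the real basis data.

```python
# Sketch of spectrum unfolding by Levenberg-Marquardt least squares: find
# energy-bin weights whose mixture of monoenergetic depth-dose curves
# reproduces a "measured" PDD. All curves here are synthetic toys.
import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(1.0, 20.0, 40)                 # depths in water [cm]
mu = np.array([0.25, 0.15, 0.08])                  # toy attenuation per bin
basis = np.exp(-np.outer(depth, mu))               # monoenergetic "PDDs"
true_w = np.array([0.2, 0.5, 0.3])
measured = basis @ true_w                          # stand-in measurement

residual = lambda w: basis @ w - measured
fit = least_squares(residual, x0=np.ones(3) / 3, method="lm")
print("recovered weights:", np.round(fit.x, 3))    # should match true_w
```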

  18. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  19. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations to variations in the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating the long-range transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points (MCA-MSS) has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's ΛΠτ sequences, an adaptive Monte Carlo algorithm and the plain Monte Carlo algorithm, as well as the eFAST and Sobol sensitivity approaches implemented in the SIMLAB software. The analysis and numerical results show the advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties are provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by the variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
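
    For reference, a plain Monte Carlo estimator of first-order Sobol indices (the Saltelli column-swap scheme) is sketched below on the classic Ishigami test function; this is not the MCA-MSS algorithm itself.

```python
# Plain Monte Carlo estimation of first-order Sobol sensitivity indices using
# the Saltelli column-swap scheme, on the Ishigami test function (a=7, b=0.1).
import numpy as np

rng = np.random.default_rng(5)
N, D = 100_000, 3
f = lambda X: (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
               + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

A = rng.uniform(-np.pi, np.pi, (N, D))   # two independent input sample blocks
B = rng.uniform(-np.pi, np.pi, (N, D))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))   # total output variance

for i in range(D):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap one input column (Saltelli)
    S_i = np.mean(fB * (f(ABi) - fA)) / var
    print(f"S_{i + 1} ~ {S_i:.3f}")      # expected roughly 0.31, 0.44, 0.00
```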

  20. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    ...erated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be done easily, as in the linear case, but a forecast for a step greater than or equal to ... London. Franses, P. H. (1998). Time Series Models for Business and Economic Forecasting. Cambridge University Press.

  1. Monte Carlo modeling of a High-Sensitivity MOSFET dosimeter for low- and medium-energy photon sources

    International Nuclear Information System (INIS)

    Wang, Brian; Kim, C.-H.; Xu, X. George

    2004-01-01

    Metal-oxide-semiconductor field effect transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy and diagnostic radiology. While it is difficult to characterize the dosimeter responses for monoenergetic sources by experiment, this paper reports a detailed Monte Carlo simulation model of the High-Sensitivity MOSFET dosimeter using Monte Carlo N-Particle (MCNP) 4C. A dose estimator method was used to calculate the dose in the extremely thin sensitive volume. Efforts were made to validate the MCNP model using three experiments: (1) comparison of the simulated dose with the measurement for a Cs-137 source, (2) comparison of the simulated dose with analytical values, and (3) comparison of the simulated energy dependence with theoretical values. Our simulation results show that the MOSFET dosimeter has a maximum response at about 40 keV of photon energy. The energy dependence curve is also found to agree with the predicted value from theory within statistical uncertainties. The angular dependence study shows that the MOSFET dosimeter has a higher response (about 8%) when photons come from the epoxy side, compared with the Kapton side, for the Cs-137 source.

  2. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    ...Carlo method of forecasting using a special nonlinear time series model, called the logistic smooth transition ... We illustrate this new method using some simulation ... in MATLAB 7.5.0 ... process (DGP) using the logistic smooth transi...

  3. Perturbation analysis for Monte Carlo continuous cross section models

    International Nuclear Information System (INIS)

    Kennedy, Chris B.; Abdel-Khalik, Hany S.

    2011-01-01

    Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool for completing Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic, as opposed to stochastic, models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in a model's responses, e.g. detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out the PA analysis by reducing problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results, which propagate to the subsequent UQ and DA analyses. The introduced errors are problem-specific and are therefore expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method for completing PA that employs a continuous cross section stochastic model and is performed in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing for wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (the Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)

  4. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  5. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  6. Adaptable three-dimensional Monte Carlo modeling of imaged blood vessels in skin

    Science.gov (United States)

    Pfefer, T. Joshua; Barton, Jennifer K.; Chan, Eric K.; Ducros, Mathieu G.; Sorg, Brian S.; Milner, Thomas E.; Nelson, J. Stuart; Welch, Ashley J.

    1997-06-01

    In order to reach a higher level of accuracy in simulation of port wine stain treatment, we propose to discard the typical layered geometry and cylindrical blood vessel assumptions made in optical models and use imaging techniques to define actual tissue geometry. Two main additions to the typical 3D, weighted photon, variable step size Monte Carlo routine were necessary to achieve this goal. First, optical low coherence reflectometry (OLCR) images of rat skin were used to specify a 3D material array, with each entry assigned a label to represent the type of tissue in that particular voxel. Second, the Monte Carlo algorithm was altered so that when a photon crosses into a new voxel, the remaining path length is recalculated using the new optical properties, as specified by the material array. The model has shown good agreement with data from the literature. Monte Carlo simulations using OLCR images of asymmetrically curved blood vessels show various effects such as shading, scattering-induced peaks at vessel surfaces, and directionality-induced gradients in energy deposition. In conclusion, this augmentation of the Monte Carlo method can accurately simulate light transport for a wide variety of nonhomogeneous tissue geometries.
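
    The key algorithmic change, recalculating the remaining path length at each voxel crossing, can be shown in one dimension; the voxel coefficients below are arbitrary illustrative values.

```python
# Voxel-aware Monte Carlo step (1-D sketch): a sampled optical depth is
# consumed voxel by voxel, rescaling the remaining path length whenever the
# photon crosses into a voxel with different optical properties.
import math
import random

mu = [1.0, 1.0, 5.0, 5.0, 1.0]   # interaction coefficient per voxel [1/mm]
dx = 1.0                         # voxel size [mm]

def next_interaction(x=0.0):
    """Advance a photon moving in +x to its next interaction, voxel by voxel."""
    tau = -math.log(1.0 - random.random())   # sampled optical depth
    while True:
        v = int(x // dx)
        if v >= len(mu):
            return None                      # escaped the voxel grid
        to_boundary = (v + 1) * dx - x
        if tau < mu[v] * to_boundary:        # interaction inside this voxel
            return x + tau / mu[v]
        tau -= mu[v] * to_boundary           # recalculate the remaining path
        x = (v + 1) * dx                     # step to the voxel boundary

print([next_interaction() for _ in range(5)])
```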

  7. The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.

    Science.gov (United States)

    Temleitner, László; Pusztai, László; Schweika, Werner

    2007-08-22

    The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H2O and D2O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.
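
    For orientation, the toy reverse Monte Carlo loop below perturbs particle positions and accepts moves with the usual chi-squared criterion against a target pair-distance histogram; the "measured" data are synthetic and one-dimensional, far simpler than the neutron structure factors of the study.

```python
# Schematic reverse Monte Carlo step: perturb one particle, recompute a pair-
# distance histogram, and accept with the RMC chi^2 criterion.
import numpy as np

rng = np.random.default_rng(1)
L, N, sigma = 10.0, 64, 0.05
bins = np.linspace(0.0, L / 2, 26)

def histogram(pos):
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, L - d)                  # periodic 1D "box"
    return np.histogram(d[np.triu_indices(N, 1)], bins=bins)[0].astype(float)

target = histogram(rng.uniform(0, L, N))      # stand-in for measured data
pos = rng.uniform(0, L, N)
chi2 = np.sum((histogram(pos) - target) ** 2) / sigma**2
for _ in range(5000):
    trial = pos.copy()
    trial[rng.integers(N)] += rng.normal(0, 0.2)   # move one particle
    trial %= L
    chi2_new = np.sum((histogram(trial) - target) ** 2) / sigma**2
    if chi2_new < chi2 or rng.random() < np.exp((chi2 - chi2_new) / 2):
        pos, chi2 = trial, chi2_new
print(f"final chi^2 = {chi2:.1f}")
```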

  8. Absorbed dose in fibrotic microenvironment models employing Monte Carlo simulation

    International Nuclear Information System (INIS)

    Zambrano Ramírez, O.D.; Rojas Calderón, E.L.; Azorín Vega, E.P.; Ferro Flores, G.; Martínez Caballero, E.

    2015-01-01

    The presence or absence of fibrosis and, moreover, the multimeric and multivalent nature of the radiopharmaceutical have recently been reported to affect the radiation absorbed dose in tumor microenvironment models. Fibroblast and myofibroblast cells produce the extracellular matrix by secreting proteins which provide structural and biochemical support to cells. The reactive and reparative mechanisms triggered during the inflammatory process cause the production and deposition of extracellular matrix proteins, and the abnormally excessive growth of the connective tissue leads to fibrosis. In this work, microenvironment models (either fibrotic or not fibrotic) composed of seven spheres representing cancer cells of 10 μm in diameter, each with a 5 μm diameter inner sphere (cell nucleus), were created in two distinct radiation transport codes (PENELOPE and MCNP). The purpose of creating these models was to determine the radiation absorbed dose in the nucleus of cancer cells, based on previously reported radiopharmaceutical retention percentages (by HeLa cells) of the 177Lu-Tyr3-octreotate (monomeric) and 177Lu-Tyr3-octreotate-AuNP (multimeric) radiopharmaceuticals. The results of the PENELOPE and MCNP codes were compared and found to be in good agreement: the difference between the percentage increases of the absorbed dose in the not fibrotic model with respect to the fibrotic model was under 1% for both radiopharmaceuticals. (authors)

  9. Modelling the IRSN's radio-photo-luminescent dosimeter using the MCNPX Monte Carlo code

    International Nuclear Information System (INIS)

    Hocine, N.; Donadille, L.; Huet, Ch.; Itie, Ch.

    2010-01-01

    The authors report the modelling of the new radio-photo-luminescent (RPL) dosimeter of the IRSN using the MCNPX Monte Carlo code. The Hp(10) and Hp(0.07) dose equivalents are computed for different irradiation configurations involving photon beams (gamma and X rays) defined according to the ISO 4037-1 standard. Results are compared to experimental measurements performed on the RPL dosimeter. The agreement is good and the model is thus validated.

  10. Systematic Identification of Stakeholders for Engagement with Systems Modeling Efforts in the Snohomish Basin, Washington, USA

    Science.gov (United States)

    Even as stakeholder engagement in systems dynamic modeling efforts is increasingly promoted, the mechanisms for identifying which stakeholders should be included are rarely documented. Accordingly, for an Environmental Protection Agency’s Triple Value Simulation (3VS) mode...

  11. Monte Carlo based toy model for fission process

    International Nuclear Information System (INIS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-01-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by Gaussian distributions of random numbers that randomize distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound-nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, namely means (μ_CN, μ_L, μ_R) and standard deviations (σ_CN, σ_L, σ_R). By overlaying the three distributions, the numbers of particles (N_L, N_R) trapped by the central points can be obtained. This process is iterated until (N_L, N_R) become constant. The smashing process is then repeated with randomly changed σ_L and σ_R.
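
    A minimal sketch of one smashing step of such a toy model: particles drawn from the compound-nucleus Gaussian are assigned to the nearer fragment center, yielding trapped counts (N_L, N_R). All parameter values below are arbitrary.

```python
# One "smashing" step of the toy model: draw particles from the compound-
# nucleus Gaussian and count how many are trapped by each fragment center.
import numpy as np

rng = np.random.default_rng(2)
mu_CN, sigma_CN = 0.0, 10.0       # compound-nucleus distribution
mu_L, mu_R = -6.0, 6.0            # fragment centers after smashing
A = 236                           # number of particles (arbitrary)

particles = rng.normal(mu_CN, sigma_CN, A)
N_L = int(np.sum(np.abs(particles - mu_L) < np.abs(particles - mu_R)))
N_R = A - N_L
print(f"N_L = {N_L}, N_R = {N_R}")
```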

  12. Monte Carlo Analysis of Reservoir Models Using Seismic Data and Geostatistical Models

    Science.gov (United States)

    Zunino, A.; Mosegaard, K.; Lange, K.; Melnikova, Y.; Hansen, T. M.

    2013-12-01

    We present a study on the analysis of petroleum reservoir models consistent with seismic data and geostatistical constraints performed on a synthetic reservoir model. Our aim is to invert directly for structure and rock bulk properties of the target reservoir zone. To infer the rock facies, porosity and oil saturation seismology alone is not sufficient but a rock physics model must be taken into account, which links the unknown properties to the elastic parameters. We then combine a rock physics model with a simple convolutional approach for seismic waves to invert the "measured" seismograms. To solve this inverse problem, we employ a Markov chain Monte Carlo (MCMC) method, because it offers the possibility to handle non-linearity, complex and multi-step forward models and provides realistic estimates of uncertainties. However, for large data sets the MCMC method may be impractical because of a very high computational demand. To face this challenge one strategy is to feed the algorithm with realistic models, hence relying on proper prior information. To address this problem, we utilize an algorithm drawn from geostatistics to generate geologically plausible models which represent samples of the prior distribution. The geostatistical algorithm learns the multiple-point statistics from prototype models (in the form of training images), then generates thousands of different models which are accepted or rejected by a Metropolis sampler. To further reduce the computation time we parallelize the software and run it on multi-core machines. The solution of the inverse problem is then represented by a collection of reservoir models in terms of facies, porosity and oil saturation, which constitute samples of the posterior distribution. We are finally able to produce probability maps of the properties we are interested in by performing statistical analysis on the collection of solutions.
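
    The bare-bones sketch below shows the sampler structure described above: candidate models are drawn from a prior generator (standing in for the multiple-point-statistics algorithm) and accepted or rejected by a Metropolis rule against a data misfit. The prior, forward model, and data are all synthetic.

```python
# Independence Metropolis sampler: proposals from a "geostatistical" prior,
# accepted against a seismic-misfit likelihood (everything here is a toy).
import numpy as np

rng = np.random.default_rng(3)
n, noise = 20, 0.5

def prior_sample():
    """Stand-in for a geostatistical prior: smoothed Gaussian field."""
    return np.convolve(rng.normal(size=n), np.ones(5) / 5, mode="same")

def forward(m):
    """Stand-in for rock physics + convolutional seismic forward model."""
    return np.convolve(m, [0.5, 1.0, 0.5], mode="same")

d_obs = forward(prior_sample()) + rng.normal(0, noise, n)

def loglike(m):
    return -0.5 * np.sum((forward(m) - d_obs) ** 2) / noise**2

current = prior_sample()
ll = loglike(current)
samples, accepted = [], 0
for _ in range(5000):
    prop = prior_sample()                 # independence proposal from the prior
    ll_prop = loglike(prop)
    if np.log(rng.random()) < ll_prop - ll:   # acceptance = likelihood ratio
        current, ll = prop, ll_prop
        accepted += 1
    samples.append(current.copy())

print(f"acceptance rate: {accepted / 5000:.2%}")
```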

  13. New model for mines and transportation tunnels external dose calculation using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Allam, Kh. A.

    2017-01-01

    In this work, a new methodology is developed, based on Monte Carlo simulation, for external dose calculation in tunnels and mines. The tunnel external dose evaluation model is a cylindrical shape of finite thickness with an entrance and with or without an exit. A photon transport model was applied for exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed in the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, building material density and composition was studied. The new model is more flexible for real external dose calculations in any cylindrical tunnel structure. (authors)

  14. Setup of HDRK-Man voxel model in Geant4 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Hwi; Cho, Sung Koo; Kim, Chan Hyeong [Hanyang Univ., Seoul (Korea, Republic of); Choi, Sang Hyoun [Inha Univ., Incheon (Korea, Republic of); Cho, Kun Woo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2008-10-15

    Many different voxel models, developed using tomographic images of the human body, are used in various fields including both ionizing and non-ionizing radiation fields. Recently a high-quality voxel model, named HDRK-Man, was constructed at Hanyang University and used to calculate dose conversion coefficient (DCC) values for external photon and neutron beams using the MCNPX Monte Carlo code. The objective of the present study is to set up the HDRK-Man model in Geant4 in order to use it in more advanced calculations such as 4-D Monte Carlo simulations and space dosimetry studies involving very high energy particles. To that end, HDRK-Man was ported to Geant4 and used to calculate the DCC values for external photon beams. The calculated values were then compared with the results of the MCNPX code. In addition, a computational Linux cluster was built to improve the computing speed in Geant4.

  15. Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Leppaenen, J.

    2007-01-01

    High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
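
    For readers unfamiliar with delta-tracking, the sketch below shows the Woodcock rejection step on a toy one-dimensional medium with high-cross-section "kernels" embedded in a low-cross-section matrix; it illustrates the general technique, not the PSG implementation.

```python
# Woodcock (delta-) tracking sketch in 1D: sample flight distances with a
# majorant cross section and accept collisions with probability
# sigma_t(x)/sigma_maj, so material boundaries never have to be located.
import numpy as np

rng = np.random.default_rng(4)

def sigma_t(x):
    """Total cross section: dense kernels (10% of each cell) in a matrix."""
    return 5.0 if (x % 1.0) < 0.1 else 0.5            # units: 1/cm

sigma_maj = 5.0                                       # majorant cross section

def distance_to_collision(x0):
    x = x0
    while True:
        x += -np.log(rng.random()) / sigma_maj        # tentative flight
        if rng.random() < sigma_t(x) / sigma_maj:     # real (not virtual) hit
            return x - x0

d = [distance_to_collision(0.0) for _ in range(50000)]
print(f"mean distance to collision: {np.mean(d):.3f} cm")
```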

  16. Environmental dose rate heterogeneity of beta radiation and its implications for luminescence dating: Monte Carlo modelling and experimental validation

    DEFF Research Database (Denmark)

    Nathan, R.P.; Thomas, P.J.; Jain, M.

    2003-01-01

    …and identify the likely size of these effects on D_e distributions. The study employs the MCNP 4C Monte Carlo electron/photon transport model, supported by an experimental validation of the code in several case studies. We find good agreement between the experimental measurements and the Monte Carlo…

  17. Monte Carlo simulations of a model for opinion formation

    Science.gov (United States)

    Bordogna, C. M.; Albano, E. V.

    2007-04-01

    A model for opinion formation based on the Theory of Social Impact is presented and studied by means of numerical simulations. Individuals with two states of opinion are impacted by social interactions with: i) members of the society, ii) a strong leader with a well-defined opinion and iii) the mass media, which can either support or compete with the leader. Due to that competition, the average opinion of the social group exhibits phase-transition-like behaviour between different states of opinion.
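
    A toy sketch in the spirit of this model: binary opinions are updated under the combined impact of the society's mean opinion, a leader, and competing mass media. The impact weights are arbitrary choices, not the parameters of the cited study.

```python
# Toy opinion-formation sketch: society mean, leader, and media all pull on
# a randomly chosen individual; the sign of the total impact sets the opinion.
import numpy as np

rng = np.random.default_rng(5)
N, steps = 400, 20000
s = rng.choice([-1, 1], size=N)        # two opinion states
leader, media = +1, -1                 # media competes with the leader
w_soc, w_lead, w_med = 1.0, 0.3, 0.2   # arbitrary impact weights

for _ in range(steps):
    i = rng.integers(N)
    impact = w_soc * np.mean(s) + w_lead * leader + w_med * media
    s[i] = 1 if impact > 0 else -1

print(f"average opinion: {np.mean(s):+.2f}")
```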

  18. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    2014-01-01

    …densification by vacancy annihilation removes an isolated pore site at a grain boundary and collapses a column of sites extending from the vacancy to the surface of the sintering compact, through the center of mass of the nearest grain. Using this algorithm, the existing published kMC models are shown to produce anisotropic strains for homogeneous powder compacts with aspect ratios different from unity. It is shown that the line direction biases shrinkage strains in proportion to the compact dimension aspect ratios. A new algorithm that corrects this bias in strains is proposed; the direction for collapsing the column…

  19. Flat-histogram methods in quantum Monte Carlo simulations: Application to the t-J model

    International Nuclear Information System (INIS)

    Diamantis, Nikolaos G.; Manousakis, Efstratios

    2016-01-01

    We show that flat-histogram techniques can be applied to the sampling of quantum Monte Carlo simulations in order to improve the statistical quality of the results at long imaginary time or low excitation energy. Typical imaginary-time correlation functions calculated in quantum Monte Carlo are subject to exponentially growing errors as the range of imaginary time grows, and this smears the information on the low-energy excitations. We show that we can extract the low-energy physics by modifying the Monte Carlo sampling technique to one in which configurations that contribute to making the histogram of certain quantities flat are promoted. We apply the diagrammatic Monte Carlo (diag-MC) method to the motion of a single hole in the t-J model and show that the implementation of flat-histogram techniques allows us to calculate the Green's function over a wide range of imaginary time. In addition, we show that applying the flat-histogram technique alleviates the "sign" problem associated with the simulation of the single-hole Green's function at long imaginary time. (paper)
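
    The promotion idea can be illustrated with a compact Wang-Landau flat-histogram run on a 1D Ising chain, where a running estimate of 1/g(E) flattens the energy histogram; this is a generic flat-histogram illustration under invented parameters, not the diag-MC implementation of the record.

```python
# Wang-Landau sketch on a 1D periodic Ising chain: states are weighted by
# 1/g(E) and g is refined until the energy histogram is roughly flat.
import numpy as np

rng = np.random.default_rng(6)
N = 16
spins = rng.choice([-1, 1], N)
E = int(-np.sum(spins * np.roll(spins, 1)))           # nearest-neighbour energy

E_vals = range(-N, N + 1, 4)                          # allowed energy levels
log_g = {e: 0.0 for e in E_vals}
hist = {e: 0 for e in E_vals}
f = 1.0                                               # modification factor
while f > 1e-4:
    for _ in range(20000):
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        E_new = E + dE
        if np.log(rng.random()) < log_g[E] - log_g[E_new]:
            spins[i] *= -1                            # accept the flip
            E = E_new
        log_g[E] += f                                 # refine density of states
        hist[E] += 1
    if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):
        hist = {e: 0 for e in hist}                   # histogram flat: halve f
        f /= 2.0

rel = {e: round(v - log_g[-N], 1) for e, v in sorted(log_g.items())}
print("relative log g(E):", rel)
```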

  20. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
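
    The event-based idea translates naturally to array programming: the sketch below keeps all photon histories in arrays and applies every transport step to all active histories at once (NumPy standing in for the vector unit), for a toy 1D slab rather than the SPECT geometry of the record.

```python
# Event-based transport sketch: each step updates every active history at
# once. Toy 1D slab with isotropic rescattering and partial absorption.
import numpy as np

rng = np.random.default_rng(7)
n, mu_t, albedo, thickness = 200000, 0.2, 0.5, 10.0   # histories, 1/cm, -, cm

z = np.zeros(n)                       # photon depths
d = np.ones(n)                        # direction cosines (+1 = forward)
alive = np.ones(n, dtype=bool)
transmitted = np.zeros(n, dtype=bool)

while alive.any():                    # every operation below is array-wide
    step = -np.log(rng.random(n)) / mu_t
    z = np.where(alive, z + d * step, z)
    escaped = alive & ((z < 0.0) | (z > thickness))
    transmitted |= escaped & (z > thickness)
    alive &= ~escaped
    absorbed = alive & (rng.random(n) >= albedo)      # collision: absorb or
    alive &= ~absorbed                                #   rescatter
    d = np.where(alive, np.where(rng.random(n) < 0.5, -1.0, 1.0), d)

print(f"transmitted fraction: {transmitted.mean():.4f}")
```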

  1. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The results of the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple scattering processes. Excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If the geometry of the optical device is not taken into account, the model-estimated backscattered luminance differs from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and experimental measurements is smaller than the experimental uncertainty.

  2. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  3. Monte Carlo modeling of ion beam induced secondary electrons

    Energy Technology Data Exchange (ETDEWEB)

    Huh, U., E-mail: uhuh@vols.utk.edu [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Cho, W. [Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100 (United States); Joy, D.C. [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Center for Nanophase Materials Science, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2016-09-15

    Ion induced secondary electrons (iSE) can produce high-resolution images over a wide range of materials at incident beam energies ranging from a few eV to 100 keV. The interpretation of such images requires knowledge of the secondary electron yields (iSE δ) for each of the elements and materials present, as a function of the incident beam energy. Experimental data for helium ions are currently limited to 40 elements and six compounds, while other ions are not well represented. To overcome this limitation, we propose a simple procedure based on the comprehensive work of Berger et al. Here we show that in the energy range of 10-100 keV the Berger et al. data for elements and compounds can be accurately represented by a single universal curve. The agreement between the limited experimental data that are available and the predictive model is good, and the model has been found to provide reliable yield data for a wide range of elements and compounds.
    Highlights: • The Universal ASTAR Yield Curve was derived from data recently published by NIST. • IONiSE incorporated with the Curve will predict iSE yield for elements and compounds. • This approach can also handle other ion beams by changing the basic scattering profile.

  4. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of a testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually described by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect, so that new faults may be introduced. In this paper, we first show how to incorporate the testing effort function and fault introduction into the FDP, and then develop the FCP as a delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions about fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed.

  5. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network; these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network of uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system under various energy uncertainties.
    Highlights: • A Monte Carlo model of the combined GB gas and electricity network was developed. • Reliability indices are calculated for the combined GB gas and electricity system. • The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. • Integrated reliability indices could be used to assess the impact of investment in energy assets.

  6. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic as, at infinite testing time, test effort will be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model which will describe both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.
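
    One plausible reading of an unbounded (log-power) testing-effort function inside an NHPP mean value function is sketched below, fitted to simulated failure counts with nonlinear least squares; the functional form and data are assumptions for illustration, and the ANN training of the record is not reproduced.

```python
# NHPP mean value with an assumed unbounded log-power testing-effort function:
# m(t) = a * (1 - exp(-b * W(t))), with W(t) = log(1 + t)**beta.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b, beta):
    return a * (1.0 - np.exp(-b * np.log1p(t) ** beta))

rng = np.random.default_rng(8)
t = np.arange(1.0, 41.0)                      # testing weeks
true_mean = mean_value(t, 120.0, 0.25, 1.5)
observed = rng.poisson(true_mean)             # toy cumulative failure counts

params, _ = curve_fit(mean_value, t, observed, p0=(100.0, 0.2, 1.2), maxfev=20000)
print("fitted (a, b, beta):", np.round(params, 3))
```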

  7. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons

    International Nuclear Information System (INIS)

    Peterson, S W; Polf, J; Archambault, L; Beddar, S; Bues, M; Ciangaru, G; Smith, A

    2009-01-01

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. The 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.

  8. Monte Carlo evaluation of path integral for the nuclear shell model

    International Nuclear Information System (INIS)

    Lang, G.H.

    1993-01-01

    The authors present a path-integral formulation of the nuclear shell model using auxiliary fields; the path integral is evaluated by Monte Carlo methods. The method scales favorably with valence-nucleon number and shell-model basis: full-basis calculations are demonstrated up to the rare-earth region, which cannot be treated by other methods. Observables are calculated for the ground state and in a thermal ensemble. Dynamical correlations are obtained, from which strength functions are extracted through the Maximum Entropy method. Examples in the s-d shell, where exact diagonalization can be carried out, compared well with exact results. The "sign problem" generic to quantum Monte Carlo calculations is found to be absent in the attractive pairing-plus-multipole interactions. The formulation is general for interacting fermion systems and is well suited for parallel computation. The authors have implemented it on the Intel Touchstone Delta System, achieving better than 99% parallelization.

  9. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universes hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  10. Use of Monte Carlo modeling approach for evaluating risk and environmental compliance

    International Nuclear Information System (INIS)

    Higley, K.A.; Strenge, D.L.

    1988-09-01

    Evaluating compliance with environmental regulations, specifically those regulations that pertain to human exposure, can be a difficult task. Historically, maximum individual or worst-case exposures have been calculated as a basis for evaluating risk or compliance with such regulations. However, these calculations may significantly overestimate exposure and may not provide a clear understanding of the uncertainty in the analysis. The use of Monte Carlo modeling techniques can provide a better understanding of the potential range of exposures and the likelihood of high (worst-case) exposures. This paper compares the results of standard exposure estimation techniques with the Monte Carlo modeling approach. The authors discuss the potential application of this approach for demonstrating regulatory compliance, along with the strengths and weaknesses of the approach. Suggestions on implementing this method as a routine tool in exposure and risk analyses are also presented. 16 refs., 5 tabs
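
    A minimal sketch of the comparison described: a worst-case point estimate of annual dose versus a Monte Carlo distribution over sampled inputs. All distributions and the dose limit here are invented for illustration.

```python
# Worst-case point estimate versus Monte Carlo distribution for a toy dose
# calculation: dose = intake * days * dose coefficient (all inputs invented).
import numpy as np

rng = np.random.default_rng(9)
n = 100000
intake = rng.lognormal(mean=np.log(50), sigma=0.5, size=n)   # Bq/day
dose_coeff = rng.triangular(1e-8, 2e-8, 4e-8, size=n)        # Sv/Bq
dose = intake * 365 * dose_coeff * 1000                      # mSv/year

# Worst case: every input pushed toward its high end simultaneously.
worst_case = 50 * np.exp(2 * 0.5) * 365 * 4e-8 * 1000
limit = 1.0                                                  # mSv/year (assumed)
print(f"worst case: {worst_case:.3f} mSv")
print(f"95th percentile: {np.percentile(dose, 95):.3f} mSv")
print(f"P(dose > limit): {(dose > limit).mean():.4f}")
```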

  11. Implementation of a Monte Carlo method to model photon conversion for solar cells

    International Nuclear Information System (INIS)

    Canizo, C. del; Tobias, I.; Perez-Bedmar, J.; Pan, A.C.; Luque, A.

    2008-01-01

    A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different sources of photons involved (the sun and the luminescence centers). The Monte Carlo simulator presented in this paper is proposed as a tool to help in the evaluation of candidate materials for up- and down-conversion. Some application examples are presented, exploring the range of values that the most relevant parameters describing the converter should have in order to give significant gain in photocurrent

  12. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales in the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
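
    For reference, a minimal Metropolis simulation of the ordinary 2D Ising model with the Binder cumulant U = 1 - <m^4>/(3<m^2>^2), whose crossing across lattice sizes is the critical-temperature estimator mentioned above; lattice sizes and run lengths are kept deliberately tiny.

```python
# Metropolis sketch of the standard 2D Ising model with the Binder cumulant.
import numpy as np

rng = np.random.default_rng(10)

def binder(L, T, sweeps=2000, therm=500):
    s = rng.choice([-1, 1], size=(L, L))
    m2, m4, count = 0.0, 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):                        # one Metropolis sweep
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        if sweep >= therm:                            # accumulate moments
            m = s.mean()
            m2 += m * m
            m4 += m ** 4
            count += 1
    m2, m4 = m2 / count, m4 / count
    return 1.0 - m4 / (3.0 * m2 * m2)

for L in (4, 8):
    print(f"L = {L}: U(T = 2.27) = {binder(L, 2.27):.3f}")
```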

  13. Monte Carlo study of radiation-induced demagnetization using the two-dimensional Ising model

    International Nuclear Information System (INIS)

    Samin, Adib; Cao, Lei

    2015-01-01

    A simple radiation-damage model based on the Ising model for magnets is proposed to study the effects of radiation on the magnetism of permanent magnets. The model is studied in two dimensions using a Monte Carlo simulation, and it accounts for the radiation through the introduction of a localized heat pulse. The model exhibits qualitative agreement with experimental results, and it clearly elucidates the role that the coercivity and the radiation particle’s energy play in the process. A more quantitative agreement with experiment will entail accounting for the long-range dipole–dipole interactions and the crystalline anisotropy.

  15. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast...... and efficient framework for estimation. These advantages are used to, for instance, estimate stochastic volatility models with leverage effect or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...

  16. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)

  17. State-to-state models of vibrational relaxation in Direct Simulation Monte Carlo (DSMC)

    Science.gov (United States)

    Oblapenko, G. P.; Kashkovsky, A. V.; Bondar, Ye A.

    2017-02-01

    In the present work, the application of state-to-state models of vibrational energy exchange to the Direct Simulation Monte Carlo (DSMC) method is considered. A state-to-state model for VT transitions of vibrational energy in nitrogen and oxygen, based on the application of the inverse Laplace transform to results of quasiclassical trajectory calculations (QCT) of vibrational energy transitions, along with the Forced Harmonic Oscillator (FHO) state-to-state model, is implemented in a DSMC code and applied to flows around blunt bodies. Comparisons are made with the widely used Larsen-Borgnakke model, and the influence of multi-quantum VT transitions is assessed.

  18. Optical roughness BRDF model for reverse Monte Carlo simulation of real material thermal radiation transfer.

    Science.gov (United States)

    Su, Peiran; Eri, Qitai; Wang, Qiang

    2014-04-10

    Optical roughness was introduced into the bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical roughness BRDF model stems from the influence of surface roughness and wavelength on the ray reflectance calculation. This model was adopted to simulate real metal emissivity. The reverse Monte Carlo method was used to display the distribution of reflected rays. The numerical simulations showed that the optical roughness BRDF model can capture the wavelength effect on emissivity and simulate the variation of real metal emissivity with incidence angle.

  19. Topological excitations and Monte-Carlo simulation of the Abelian-Higgs model

    International Nuclear Information System (INIS)

    Ranft, J.

    1981-01-01

    The phase structure and topological excitations, in particular the magnetic monopole current density, are investigated in a Monte-Carlo simulation of the lattice version of the four-dimensional Abelian-Higgs model. The monopole current density is found to be large in the confinement phase and rapidly decreasing in the Coulomb and Higgs phases. This result supports the view that confinement is connected with the condensation of monopole-antimonopole pairs.

  20. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    Science.gov (United States)

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
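
    A short random-walk sketch along the lines described: unbiased unit steps plus a drift term, with reflection at a boundary. The numerical values are arbitrary.

```python
# Random-walk Monte Carlo sketch with drift and a reflecting boundary.
import numpy as np

rng = np.random.default_rng(11)
n_walkers, n_steps, drift, wall = 10000, 500, 0.02, 15.0

x = np.zeros(n_walkers)
for _ in range(n_steps):
    x += rng.choice([-1.0, 1.0], n_walkers) + drift   # step plus drift
    x = np.where(x > wall, 2 * wall - x, x)           # reflect at the boundary

# Without drift or walls, the variance should grow like n_steps (diffusion).
print(f"mean = {x.mean():.2f}, var = {x.var():.1f} (steps = {n_steps})")
```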

  1. Extrapolation method in the Monte Carlo Shell Model and its applications

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-01-01

    We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni in the pf shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as 72Ge in the f5pg9 shell. The structure of 72Se is also studied, including a discussion of the shape-coexistence phenomenon.

  2. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    International Nuclear Information System (INIS)

    Pérez-Andújar, Angélica; Zhang, Rui; Newhauser, Wayne

    2013-01-01

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w_R, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.

  3. Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2011-07-01

    The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps, because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump can provide high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for a reduced-scale model pump obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed by KIT. The simulation model includes the most important mechanical structures, such as the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species, i.e., deuterium, protium and helium, are simulated. The pumping characteristics have been obtained. The result is in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit for free molecular flow. This means that ProVac3D is a useful tool in the design of the prototype cryopump of ITER. Meanwhile, the capture factors at different critical positions are calculated. They can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation at higher gas throughput.
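
    As a flavor of test-particle Monte Carlo in the free molecular regime, the sketch below estimates the transmission probability of a cylindrical duct with diffuse (cosine-law) wall re-emission; this is a generic illustration, not the ProVac3D model of the cryopump.

```python
# Test-particle Monte Carlo: free-molecular transmission through a cylinder.
import numpy as np

rng = np.random.default_rng(13)

def cosine_dir(normal):
    """Unit vector distributed by Lambert's cosine law about `normal`."""
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    phi = 2.0 * np.pi * rng.random()
    st = np.sqrt(rng.random())              # sin(theta) ~ sqrt(u) gives cosine law
    ct = np.sqrt(1.0 - st * st)
    return ct * normal + st * (np.cos(phi) * t1 + np.sin(phi) * t2)

def is_transmitted(L, R=1.0):
    """Follow one test particle through the duct; True if it exits at z = L."""
    r, ang = R * np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
    p = np.array([r * np.cos(ang), r * np.sin(ang), 0.0])   # uniform over inlet
    d = cosine_dir(np.array([0.0, 0.0, 1.0]))
    while True:
        a = d[0] ** 2 + d[1] ** 2                           # ray-cylinder hit
        b = 2.0 * (p[0] * d[0] + p[1] * d[1])
        c = p[0] ** 2 + p[1] ** 2 - R ** 2
        t_wall = (-b + np.sqrt(max(b * b - 4 * a * c, 0.0))) / (2 * a) if a > 1e-12 else np.inf
        t_end = (L - p[2]) / d[2] if d[2] > 0 else (-p[2] / d[2] if d[2] < 0 else np.inf)
        if t_end < t_wall:
            return d[2] > 0                                 # left via an end
        p = p + t_wall * d
        d = cosine_dir(np.array([-p[0] / R, -p[1] / R, 0.0]))  # diffuse re-emission

L_over_R = 5.0
W = np.mean([is_transmitted(L_over_R) for _ in range(5000)])
print(f"transmission probability for L/R = {L_over_R}: {W:.3f}")
```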

  4. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With speed-up by vector or parallel computation in the future, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).

  5. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p (data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.

  6. [Psychosocial factors at work and cardiovascular diseases: contribution of the Effort-Reward Imbalance model].

    Science.gov (United States)

    Niedhammer, I; Siegrist, J

    1998-11-01

    The effect of psychosocial factors at work on health, especially cardiovascular health, has given rise to growing concern in occupational epidemiology over the last few years. Two theoretical models, Karasek's model and the Effort-Reward Imbalance model, have been developed to evaluate psychosocial factors at work within specific conceptual frameworks, in an attempt to take into account the serious methodological difficulties inherent in the evaluation of such factors. Karasek's model, the most widely used, measures three factors: psychological demands, decision latitude and social support at work. Many studies have shown the predictive effects of these factors on cardiovascular diseases independently of well-known cardiovascular risk factors. More recently, the Effort-Reward Imbalance model has taken into account the role of individual coping characteristics, which was neglected in the Karasek model. The Effort-Reward Imbalance model focuses on the reciprocity of exchange in occupational life, where high-cost/low-gain conditions are considered particularly stressful. Three dimensions of rewards are distinguished: money, esteem and gratification in terms of promotion prospects and job security. Some studies already support that high-effort/low-reward conditions are predictive of cardiovascular diseases.

  7. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence…

  8. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger...[], Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2011-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools....... To identify promising models (or processes) for which the tools have not yet been constructed and start filling up these gaps. To propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider...

  9. Criticality assessment for prismatic high temperature reactors by fuel stochastic Monte Carlo modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zakova, Jitka [Department of Nuclear and Reactor Physics, Royal Institute of Technology, KTH, Roslagstullsbacken 21, S-10691 Stockholm (Sweden)], E-mail: jitka.zakova@neutron.kth.se; Talamo, Alberto [Nuclear Engineering Division, Argonne National Laboratory, ANL, 9700 South Cass Avenue, Argonne, IL 60439 (United States)], E-mail: alby@anl.gov

    2008-05-15

    Modeling of prismatic high temperature reactors requires a high-precision description due to the triple heterogeneity of the core and also to the random distribution of fuel particles inside the fuel pins. On the latter issue, even with the most advanced Monte Carlo techniques, some approximation often arises while assessing the criticality level: first, a regular lattice of TRISO particles inside the fuel pins and, second, the cutting of TRISO particles by the fuel boundaries. We utilized two of the most accurate Monte Carlo codes, MONK and MCNP, which are used for licensing nuclear power plants in the United Kingdom and in the USA, respectively, to evaluate the influence of the two previous approximations on estimating the criticality level of the Gas Turbine Modular Helium Reactor. The two codes shared exactly the same geometry and nuclear data library, ENDF/B, and only modeled different lattices of TRISO particles inside the fuel pins. More precisely, we investigated the difference between a regular lattice that cuts TRISO particles and a random lattice that axially repeats a region containing over 3000 non-cut particles. We have found that both Monte Carlo codes provide similar excesses of reactivity, provided that they share the same approximations.

  10. Modeling Replenishment of Ultrathin Liquid Perfluoro polyether Z Films on Solid Surfaces Using Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Mayeed, M.S.; Kato, T.

    2014-01-01

    Applying the reptation algorithm to a simplified perfluoropolyether Z off-lattice polymer model, an NVT Monte Carlo simulation has been performed. The bulk condition was simulated first to compare the average radius of gyration with bulk experimental results. The model was then tested for its ability to describe dynamics. After this, it was applied to observe the replenishment of nanoscale ultrathin liquid films on flat solid carbon surfaces. The replenishment rate for trenches of different widths (8, 12, and 16 nm, for several molecular weights) between two films of perfluoropolyether Z from the Monte Carlo simulation is compared to that obtained by solving the diffusion equation using the experimental diffusion coefficients of Ma et al. (1999), at room conditions in both cases. Replenishment per Monte Carlo cycle appears to be a constant multiple of replenishment per second, at least up to a 2 nm replenished film thickness of the trenches over the carbon surface. Considerably good agreement has been achieved between the experimental results and the dynamics of molecules using reptation moves in ultrathin liquid films on solid surfaces.

  11. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-01-01

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  12. Obligatory Effort [Hishtadlut] as an Explanatory Model: A Critique of Reproductive Choice and Control.

    Science.gov (United States)

    Teman, Elly; Ivry, Tsipy; Goren, Heela

    2016-06-01

    Studies on reproductive technologies often examine women's reproductive lives in terms of choice and control. Drawing on 48 accounts of procreative experiences of religiously devout Jewish women in Israel and the US, we examine their attitudes, understandings and experiences of pregnancy, reproductive technologies and prenatal testing. We suggest that the concept of hishtadlut ("obligatory effort") works as an explanatory model that organizes Haredi women's reproductive careers and their negotiations of reproductive technologies. As an elastic category with negotiable and dynamic boundaries, hishtadlut gives ultra-orthodox Jewish women room for effort without the assumption of control; it allows them to exercise discretion in relation to medical issues without framing their efforts in terms of individual choice. Haredi women hold themselves responsible for making their obligatory effort and not for pregnancy outcomes. We suggest that an alternative paradigm to autonomous choice and control emerges from cosmological orders where reproductive duties constitute "obligatory choices."

  13. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  14. A Monte Carlo Simulation approach for the modeling of free-molecule squeeze-film damping of flexible microresonators

    KAUST Repository

    Leung, Roger; Cheung, Howard; Gang, Hong; Ye, Wenjing

    2010-01-01

    Squeeze-film damping on microresonators is a significant damping source even when the surrounding gas is highly rarefied. This article presents a general modeling approach based on Monte Carlo (MC) simulations for the prediction of squeeze

  15. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  16. Progress towards an effective model for FeSe from high-accuracy first-principles quantum Monte Carlo

    Science.gov (United States)

    Busemeyer, Brian; Wagner, Lucas K.

    While the origin of superconductivity in the iron-based materials is still controversial, the proximity of the superconductivity to magnetic order suggests that magnetism may be important. Our previous work has suggested that first-principles fixed-node diffusion Monte Carlo (FN-DMC) can capture magnetic properties of iron-based superconductors that density functional theory (DFT) misses, but which are consistent with experiment. We report on the progress of efforts to find simple effective models consistent with the FN-DMC description of the low-lying Hilbert space of the iron-based superconductor FeSe. We utilize a procedure outlined by Changlani et al. [1], which produces both parameter values and indications of whether the model is a good description of the first-principles Hamiltonian. Using this procedure, we evaluate several models of the magnetic part of the Hilbert space found in the literature, as well as the Hubbard model and a spin-fermion model. We discuss which interaction parameters are important for this material, and how the material-specific properties give rise to these interactions. Supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award No. FG02-12ER46875, as well as the NSF Graduate Research Fellowship Program.

  17. Modeling dose-rate on/over the surface of cylindrical radio-models using Monte Carlo methods

    International Nuclear Information System (INIS)

    Xiao Xuefu; Ma Guoxue; Wen Fuping; Wang Zhongqi; Wang Chaohui; Zhang Jiyun; Huang Qingbo; Zhang Jiaqiu; Wang Xinxing; Wang Jun

    2004-01-01

    Objective: To determine the dose-rates on/over the surface of 10 cylindrical radio-models belonging to the Metrology Station of the Radio-Geological Survey of CNNC. Methods: The dose-rates on/over the surface of the 10 cylindrical radio-models were modeled using the Monte Carlo code MCNP, and were also measured with a high-pressure gas ionization chamber dose-rate meter. The dose-rate values modeled with the MCNP code were compared with those obtained by the authors in the present experimental measurements, and with those obtained previously by other workers. Some factors causing the discrepancy between the data obtained by the authors using the MCNP code and the data obtained using other methods are discussed in this paper. Results: The dose-rates on/over the surface of the 10 cylindrical radio-models obtained using the MCNP code were in good agreement with those obtained by other workers using the theoretical method; the discrepancy was generally within ±5%, with a maximum of less than 10%. Conclusions: Provided that each factor needed for the Monte Carlo code is correct, the dose-rates on/over the surface of cylindrical radio-models modeled using the Monte Carlo code are correct, with an uncertainty of 3%.

  18. Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models

    Science.gov (United States)

    Mitchell, S. J.; Landau, D. P.

    2006-03-01

    Using high resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).

  19. History and future perspectives of the Monte Carlo shell model -from Alphleet to K computer-

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Otsuka, Takaharu; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio; Abe, Takashi

    2013-01-01

    We report on the history of the developments of the Monte Carlo shell model (MCSM). The MCSM was proposed in order to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. Beginning in 1999, PC clusters were introduced for parallel computation of the MCSM. Since 2011 we have participated in the High Performance Computing Infrastructure Strategic Program and developed a new MCSM code for current massively parallel computers such as the K computer. We discuss future perspectives concerning a new framework and parallel computation of the MCSM, incorporating the conjugate gradient method and energy-variance extrapolation.

  20. Monte Carlo modelling of the Belgian materials testing reactor BR2: present status

    International Nuclear Information System (INIS)

    Verboomen, B.; Aoust, Th.; Raedt, Ch. de; Beeckmans de West-Meerbeeck, A.

    2001-01-01

    A very detailed 3-D MCNP-4B model of the BR2 reactor was developed to perform all neutron and gamma calculations needed for the design of new experimental irradiation rigs. The Monte Carlo model of BR2 includes a nearly exact geometrical representation of the fuel elements (now with their axially varying burn-up), of the partially inserted control and regulating rods, of the experimental devices and of the radioisotope production rigs. The multiple-level geometry capabilities of MCNP-4B are fully exploited to obtain tools flexible enough to cope with the frequently changing core loading. (orig.)

  1. Quantum Monte Carlo simulation for S=1 Heisenberg model with uniaxial anisotropy

    International Nuclear Information System (INIS)

    Tsukamoto, Mitsuaki; Batista, Cristian; Kawashima, Naoki

    2007-01-01

    We perform quantum Monte Carlo simulations for the S=1 Heisenberg model with a uniaxial anisotropy. The system exhibits a phase transition as we vary the anisotropy, and long-range order appears at a finite temperature when the exchange interaction J is comparable to the uniaxial anisotropy D. We investigate the quantum critical phenomena of this model and obtain the phase transition line, which approaches a power law with logarithmic corrections at low temperature. We derive the form of the logarithmic corrections analytically and compare it to our simulation results.

  2. Monte Carlo simulations of the NJL model near the nonzero temperature phase transition

    International Nuclear Information System (INIS)

    Strouthos, Costas; Christofi, Stavros

    2005-01-01

    We present results from numerical simulations of the Nambu-Jona-Lasinio model with an SU(2)xSU(2) chiral symmetry and N_c = 4, 8, and 16 quark colors at nonzero temperature. We performed the simulations by utilizing the hybrid Monte Carlo and hybrid Molecular Dynamics algorithms. We show that the model undergoes a second-order phase transition. The critical exponents measured are consistent with the classical 3d O(4) universality class and hence in accordance with the dimensional reduction scenario. We also show that the Ginzburg region is suppressed by a factor of 1/N_c in accordance with previous analytical predictions. (author)

  3. Modeling of the YALINA booster facility by the Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Kondev, F.; Kiyavitskaya, H.; Serafimovich, I.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2007-01-01

    The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, an extension of MCNP with burnup capability and an additional feature for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  4. Modeling the cathode region of noble gas mixture discharges using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Donko, Z.; Janossy, M.

    1992-10-01

    A model of the cathode dark space of DC glow discharges was developed in order to study the effects caused by mixing small amounts (≤2%) of other noble gases (Ne, Ar, Kr and Xe) to He. The motion of charged particles was described by Monte Carlo simulation. Several discharge parameters (electron and ion energy distribution functions, electron and ion current densities, reduced ionization coefficients, and current density-voltage characteristics) were obtained. Small amounts of admixtures were found to modify significantly the discharge parameters. Current density-voltage characteristics obtained from the model showed good agreement with experimental data. (author) 40 refs.; 14 figs

  5. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2014-01-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
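
    For illustration, the sketch below runs a hybrid Monte Carlo update with a plain leapfrog integrator on a toy standard-normal target; the RSV posterior and the 2MNI integrator used in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x):          # toy target: standard normal, U(x) = -log p(x) up to a constant
    return 0.5 * x**2

def grad_U(x):
    return x

def hmc_step(x, eps=0.2, n_leap=20):
    """One hybrid Monte Carlo update using a leapfrog integrator."""
    p = rng.normal()                      # refresh the momentum
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_U(x_new)    # initial half step in momentum
    for _ in range(n_leap):
        x_new += eps * p_new              # full step in position
        p_new -= eps * grad_U(x_new)      # full step in momentum
    p_new += 0.5 * eps * grad_U(x_new)    # correct the last update to a half step
    dH = (U(x_new) + 0.5 * p_new**2) - (U(x) + 0.5 * p**2)
    return x_new if rng.random() < np.exp(-dH) else x   # Metropolis test

x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
print(np.mean(samples), np.std(samples))  # should be close to 0 and 1
```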

  6. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    International Nuclear Information System (INIS)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg

    2015-01-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and can therefore be adapted to varying studies or be used for educational purposes. A dedicated user-friendly graphical interface was developed, allowing for easy setup of the simulation parameters and visualization of the results. The EGSnrc Monte Carlo code package was used for the Monte Carlo simulations, and the geometry was built with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed good agreement, within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes, achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the CT dataset, tube potential, filter material/thickness and applicator size used.

  7. Competition for marine space: modelling the Baltic Sea fisheries and effort displacement under spatial restrictions

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Eigaard, Ole Ritzau

    2015-01-01

    ... (DISPLACE model) to combine stochastic variations in spatial fishing activities with harvested resource dynamics in scenario projections. The assessment computes economic and stock status indicators by modelling the activity of Danish, Swedish, and German vessels (>12 m) in the international western Baltic Sea commercial fishery, together with the underlying size-based distribution dynamics of the main fishery resources of sprat, herring, and cod. The outcomes of alternative scenarios for spatial effort displacement are exemplified by evaluating the fishers' abilities to adapt to spatial plans under various constraints. Interlinked spatial, technical, and biological dynamics of vessels and stocks in the scenarios result in stable profits, which compensate for the additional costs from effort displacement and release pressure on the fish stocks. The effort is further redirected away from sensitive...

  8. Effort dynamics in a fisheries bioeconomic model: A vessel level approach through Game Theory

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2007-09-01

    Red shrimp, Aristeus antennatus (Risso, 1816), is one of the most important resources for the bottom-trawl fleets in the northwestern Mediterranean, in terms of both landings and economic value. A simple bioeconomic model introducing Game Theory for the prediction of effort dynamics at the vessel level is proposed. The game is played by the twelve vessels exploiting red shrimp in Blanes. Within the game, two solutions are considered: non-cooperation and cooperation. The first is proposed as a realistic method for the prediction of individual effort strategies and the second is used to illustrate the potential profitability of the analysed fishery. The effort strategy for each vessel is the number of fishing days per year, and the objective is profit maximisation: individual profits for the non-cooperative solution and total profits for the cooperative one. In the present analysis, strategic conflicts arise from the differences between vessels in technical efficiency (catchability coefficient) and economic efficiency (defined here). The ten-year and 1000-iteration stochastic simulations performed for the two effort solutions show that the best strategy from both an economic and a conservationist perspective is homogeneous effort cooperation. However, the results under non-cooperation are more similar to the observed data on effort strategies and landings.

  9. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely referred to as nonstationarity. Accordingly, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposes a nonstationary generalized logistic model with time-dependent parameters. The parameters of the proposed model are estimated by maximum likelihood using the Newton-Raphson method. In addition, the proposed model is compared against the alternatives by Monte Carlo simulation to investigate its characteristics and applicability.
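
    A hedged sketch of the estimation idea follows: a time-dependent location parameter is fitted by maximum likelihood to synthetic annual maxima. SciPy's genlogistic parameterization stands in for the hydrological GLO of the paper, and a derivative-free optimizer stands in for the Newton-Raphson iteration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)

# Synthetic "annual maxima" with a linear trend in the location parameter.
t = np.arange(50)
data = stats.genlogistic.rvs(c=1.5, loc=100 + 0.8 * t, scale=10, random_state=rng)

def neg_log_lik(theta):
    """Negative log-likelihood with time-dependent location mu(t) = b0 + b1*t."""
    c, b0, b1, scale = theta
    if c <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.genlogistic.logpdf(data, c=c, loc=b0 + b1 * t, scale=scale))

res = optimize.minimize(neg_log_lik, x0=[1.0, data.mean(), 0.0, data.std()],
                        method="Nelder-Mead")
print(res.x)  # estimated (c, b0, b1, scale); b1 captures the nonstationarity
```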

  10. Investigation of Psychological Health and Migraine Headaches Among Personnel According to Effort-Reward Imbalance Model

    Directory of Open Access Journals (Sweden)

    Z. Darami

    2012-05-01

    Background and aims: The relationship between physical and mental health, migraine headaches and stress, especially job stress, is well known. Many factors can create job stress in work settings. The factor that has gained much attention recently is the imbalance between employees' effort and the reward they gain. The aim of the current study was to investigate the validity of the effort-reward imbalance model and to relate this model to migraine headaches and psychological well-being among subjects in balance and imbalance groups. Methods: Participants were 180 personnel of an oil distribution company located in Isfahan, and the instruments used were the General Health Questionnaire (Goldberg & Hillier), the Social Readjustment Rating Scale (Holmes & Rahe), the Ahvaz Migraine Questionnaire (Najariyan) and the Effort-Reward Imbalance Scale (Van Vegchel et al.). Results: Exploratory and confirmatory factor analyses investigating the construct validity of the effort-reward imbalance model both confirmed the two-factor model. Moreover, the findings indicate that the balance group was in better psychological (p<0.01) and physical (migraine, p<0.05) status compared to the imbalance group. These findings indicate the importance for personnel health of justice in providing rewards appropriate to personnel performance. Conclusion: Applying these findings can improve the physical and psychological health of Iranian industrial personnel.

  11. Hindsight Bias Doesn't Always Come Easy: Causal Models, Cognitive Effort, and Creeping Determinism

    Science.gov (United States)

    Nestler, Steffen; Blank, Hartmut; von Collani, Gernot

    2008-01-01

    Creeping determinism, a form of hindsight bias, refers to people's hindsight perceptions of events as being determined or inevitable. This article proposes, on the basis of a causal-model theory of creeping determinism, that the underlying processes are effortful, and hence creeping determinism should disappear when individuals lack the cognitive…

  12. A Covariance Structure Model Test of Antecedents of Adolescent Alcohol Misuse and a Prevention Effort.

    Science.gov (United States)

    Dielman, T. E.; And Others

    1989-01-01

    Questionnaires were administered to 4,157 junior high school students to determine levels of alcohol misuse, exposure to peer use and misuse of alcohol, susceptibility to peer pressure, internal health locus of control, and self-esteem. A conceptual model of antecedents of adolescent alcohol misuse and the effectiveness of a prevention effort was…

  13. MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES

    Energy Technology Data Exchange (ETDEWEB)

    Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)

    2013-08-15

    A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., it can affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.

  14. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, including a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model, first considering only parameter uncertainty and then simultaneously considering parameter and input uncertainty, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high dimensional problems. The study also indicated that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.
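
    As a minimal sketch of the generic SMC sampler structure that PEM-SMC builds on, the following tempering example uses reweight/resample/move steps with plain Metropolis moves on a bimodal toy target; PEM-SMC replaces the move step with genetic and differential-evolution moves, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_prior(x):   # broad prior
    return -0.5 * (x / 5.0)**2

def log_like(x):    # bimodal "likelihood": mixture of two normals
    return np.logaddexp(-0.5 * (x + 3)**2, -0.5 * (x - 3)**2)

n = 2000
betas = np.linspace(0.0, 1.0, 21)        # likelihood-tempering schedule
x = rng.normal(0.0, 5.0, n)              # particles start from the prior

for b_prev, b in zip(betas[:-1], betas[1:]):
    # 1) Reweight for the next tempered target pi_b = prior * like^b.
    logw = (b - b_prev) * log_like(x)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # 2) Resample (multinomial) to equalize the weights.
    x = x[rng.choice(n, size=n, p=w)]
    # 3) Move: a few Metropolis steps targeting pi_b rejuvenate diversity.
    for _ in range(3):
        prop = x + 0.5 * rng.normal(size=n)
        log_acc = (log_prior(prop) + b * log_like(prop)) \
                - (log_prior(x) + b * log_like(x))
        accept = np.log(rng.random(n)) < log_acc
        x = np.where(accept, prop, x)

print("fraction of particles per mode:", np.mean(x > 0), np.mean(x < 0))
```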

  15. Monte Carlo Modelling of Single-Crystal Diffuse Scattering from Intermetallics

    Directory of Open Access Journals (Sweden)

    Darren J. Goossens

    2016-02-01

    Single-crystal diffuse scattering (SCDS) reveals detailed structural insights into materials. In particular, it is sensitive to two-body correlations, whereas traditional Bragg peak-based methods are sensitive to single-body correlations. This means that diffuse scattering is sensitive to ordering that persists for just a few unit cells: nanoscale order, sometimes referred to as “local structure”, which is often crucial for understanding a material and its function. Metals and alloys were early candidates for SCDS studies because of the availability of large single crystals. While great progress has been made in areas like ab initio modelling and molecular dynamics, a place remains for Monte Carlo modelling of model crystals because of its ability to model very large systems; this is important when correlations are relatively long (though still finite) in range. This paper briefly outlines, and gives examples of, some Monte Carlo methods appropriate for the modelling of SCDS from metallic compounds, and considers data collection as well as analysis. Even if the interest in the material is driven primarily by magnetism or transport behaviour, an understanding of the local structure can underpin such studies and give an indication of nanoscale inhomogeneity.

  16. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-01-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  17. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort

    Directory of Open Access Journals (Sweden)

    Eliana Vassena

    2017-06-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  18. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
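
    The method can be sketched in a few lines: simulate data from an assumed mediation model, compute a percentile bootstrap confidence interval for the indirect effect in each replication, and take the rejection rate as the power. This is a hedged illustration of the approach, not the interface of the bmem package; all path values and sample sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_mediation(n, a=0.3, b=0.3):
    """Simple mediation model: X -> M -> Y with paths a and b."""
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = b * m + rng.normal(size=n)
    return x, m, y

def boot_ci_indirect(x, m, y, n_boot=500, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect a*b."""
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a_hat = np.polyfit(xb, mb, 1)[0]                   # slope of M on X
        b_hat = np.linalg.lstsq(np.column_stack([mb, xb, np.ones(n)]),
                                yb, rcond=None)[0][0]      # slope of Y on M, given X
        ab[i] = a_hat * b_hat
    return np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Power = fraction of Monte Carlo replications whose bootstrap CI excludes zero.
n_rep, hits = 200, 0
for _ in range(n_rep):
    lo, hi = boot_ci_indirect(*simulate_mediation(n=100))
    hits += (lo > 0) or (hi < 0)
print("estimated power:", hits / n_rep)
```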

  19. New-generation Monte Carlo shell model for the K computer era

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Yoshida, Tooru; Otsuka, Takaharu; Tsunoda, Yusuke; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio

    2012-01-01

    We present a newly enhanced version of the Monte Carlo shell-model (MCSM) method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new-generation framework of the MCSM provides us with a powerful tool to perform very advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken. (author)

  20. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    Science.gov (United States)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Applications of data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of the DUS combined with various SMC methods, such as SIR, ASIR and RPF.

  1. A Monte Carlo model for 3D grain evolution during welding

    Science.gov (United States)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
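
    A Bézier curve is evaluated by repeated linear interpolation of its control points (the de Casteljau algorithm). The sketch below shows how moving the control points trades a narrow/deep pool cross-section for a wide/shallow one; the control points are hypothetical, not taken from the paper.

```python
import numpy as np

def bezier(control_points, n=50):
    """Evaluate a Bezier curve by the de Casteljau algorithm."""
    pts = np.asarray(control_points, dtype=float)
    ts = np.linspace(0.0, 1.0, n)
    out = np.empty((n, pts.shape[1]))
    for i, t in enumerate(ts):
        p = pts.copy()
        while len(p) > 1:                       # repeated linear interpolation
            p = (1 - t) * p[:-1] + t * p[1:]
        out[i] = p[0]
    return out

# Hypothetical half cross-sections of a weld pool: (x, depth) control points.
deep_pool = bezier([(0.0, 0.0), (0.4, -2.5), (0.8, -2.5), (1.0, 0.0)])
shallow_pool = bezier([(0.0, 0.0), (1.5, -0.6), (2.5, -0.6), (3.0, 0.0)])
print(deep_pool.min(axis=0))     # most negative value = maximum pool depth
```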

  2. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a blast quarry. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
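
    The Kuz-Ram model describes fragment sizes with a Rosin-Rammler distribution, P(x) = 1 - exp(-(x/x_c)^n), which is straightforward to sample by inverse-CDF in a Monte Carlo simulator. A minimal sketch with illustrative parameters (in Kuz-Ram, x_c and the uniformity index n would come from the Kuznetsov and Cunningham equations, which depend on rock-mass, explosive, and drilling-pattern inputs):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_fragments(x_c, n_uniformity, size=10_000):
    """Inverse-CDF sampling from the Rosin-Rammler distribution
    P(x) = 1 - exp(-(x / x_c)**n) used by the Kuz-Ram model."""
    u = rng.random(size)
    return x_c * (-np.log(1.0 - u)) ** (1.0 / n_uniformity)

# Illustrative parameters: characteristic size 0.30 m, uniformity index 1.4.
frags = sample_fragments(x_c=0.30, n_uniformity=1.4)
for x in (0.1, 0.3, 0.5, 1.0):
    print(f"P(size < {x} m) = {np.mean(frags < x):.2f}")
```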

  3. Fission yield calculation using toy model based on Monte Carlo simulation

    International Nuclear Information System (INIS)

    Jubaidah; Kurniadi, Rizal

    2015-01-01

    The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution, while variation in the iteration coefficient changes only the frequency of the fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
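
    A strongly simplified sketch of the sampling idea: the light-fragment mass is drawn from one Gaussian hump and the heavy fragment is fixed by nucleon conservation. The parameter values are illustrative, and the marble dynamics of the actual toy model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative values only; the paper's R_c, mu, and sigma are not given here.
mu_L, sigma_L = 96.0, 6.0     # mean and deviation of the left (light) curve
A_total = 236                 # nucleons conserved: mu_R = A_total - mu_L = 140

def sample_yields(n=100_000):
    """Draw light-fragment masses from the left Gaussian; the heavy fragment
    follows from nucleon conservation (energy entanglement neglected)."""
    light = rng.normal(mu_L, sigma_L, n)
    return light, A_total - light

light, heavy = sample_yields()
print("mean light fragment A:", round(light.mean(), 1))   # near mu_L (~90-100)
print("mean heavy fragment A:", round(heavy.mean(), 1))   # near A_total - mu_L
```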

  4. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution, while variation in the iteration coefficient changes only the frequency of the fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90

  5. Effort-reward imbalance and organisational injustice among aged nurses: a moderated mediation model.

    Science.gov (United States)

    Topa, Gabriela; Guglielmi, Dina; Depolo, Marco

    2016-09-01

    To test the effort-reward imbalance model among older nurses, expanding it to include the moderation of overcommitment and age in the stress-health complaints relationship, mediated by organisational injustice. The theoretical framework included the effort-reward imbalance, the uncertainty management and the socio-emotional selectivity models. Employing a two-wave design, the participants were 255 nurses aged 45 years and over, recruited from four large hospitals in Spain (Madrid and Basque Country). The direct effect of imbalance on health complaints was supported: it was significant when overcommitment was low but not when it was high. Organisational injustice mediated the influence of effort-reward imbalance on health complaints. The conditional effect of the mediation of organisational injustice was significant in three of the overcommitment/age conditions but it weakened, becoming non-significant, when the level of overcommitment was low and age was high. The study tested the model in nursing populations and expanded it to the settings of occupational health and safety at work. The results of this study highlight the importance of effort-reward imbalance and organisational justice for creating healthy work environments. © 2016 John Wiley & Sons Ltd.

  6. Studies on top-quark Monte Carlo modelling for Top2016

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note summarises recent studies on Monte Carlo simulation setups of top-quark pair production used by the ATLAS experiment and presents a new method to deal with interference effects for the $Wt$ single-top-quark production which is compared against previous techniques. The main focus for the top-quark pair production is on the improvement of the modelling of the Powheg generator interfaced to the Pythia8 and Herwig7 shower generators. The studies are done using unfolded data at centre-of-mass energies of 7, 8, and 13 TeV.

  7. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) method to potential energy curves (PECs) for the excited states of C₂, N₂, and O₂ to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs over a wide range, obviating problems concerning quasi-degeneracy.

  8. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.

    Science.gov (United States)

    Galford, J E

    2017-04-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Integrating multiple distribution models to guide conservation efforts of an endangered toad

    Science.gov (United States)

    Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.

    2015-01-01

    Species distribution models are used for numerous purposes such as predicting changes in species' ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species' current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate the results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focusing on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced the modeled suitability of sites, and thus many areas modeled as potential habitat were not modeled as current habitat. We propose that such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.

  10. The Effect of the Demand Control and Effort Reward Imbalance Models on the Academic Burnout of Korean Adolescents

    Science.gov (United States)

    Lee, Jayoung; Puig, Ana; Lee, Sang Min

    2012-01-01

    The purpose of this study was to examine the effects of the Demand Control Model (DCM) and the Effort Reward Imbalance Model (ERIM) on academic burnout for Korean students. Specifically, this study identified the effects of the predictor variables based on DCM and ERIM (i.e., demand, control, effort, reward, Demand Control Ratio, Effort Reward…

  11. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  12. Monte Carlo modeling of Standard Model multi-boson production processes for √s = 13 TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    We present the Monte Carlo (MC) setup used by ATLAS to model multi-boson processes in √s = 13 TeV proton-proton collisions. The baseline Monte Carlo generators are compared with each other in key kinematic distributions of the processes under study. Sample normalization and systematic uncertainties are discussed.

  13. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    Science.gov (United States)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
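
    A minimal multi-chain sketch follows: several independent Metropolis chains started from dispersed points, with the Gelman-Rubin statistic used to judge joint convergence. The cheap Gaussian log-posterior stands in for an expensive forward model such as the Community Land Model.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(theta):
    """Stand-in for an expensive forward model plus likelihood."""
    return -0.5 * np.sum((theta - 2.0) ** 2)

def metropolis_chain(n_steps, theta0, step=0.5):
    chain, lp = [theta0], log_post(theta0)
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.normal(size=theta0.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            chain.append(prop); lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Chains from dispersed starting points explore the posterior in parallel;
# each could run on its own node when log_post is expensive.
chains = [metropolis_chain(2000, rng.uniform(-10, 10, size=2)) for _ in range(4)]

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat, per parameter."""
    x = np.stack([c[len(c) // 2:] for c in chains])   # discard burn-in halves
    m, n = x.shape[0], x.shape[1]
    w = x.var(axis=1, ddof=1).mean(axis=0)            # within-chain variance
    b = n * x.mean(axis=1).var(axis=0, ddof=1)        # between-chain variance
    return np.sqrt(((n - 1) / n * w + b / n) / w)

print(gelman_rubin(chains))   # values near 1 indicate convergence
```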

  14. Three-dimensional Monte Carlo model of pulsed-laser treatment of cutaneous vascular lesions

    Science.gov (United States)

    Milanič, Matija; Majaron, Boris

    2011-12-01

    We present a three-dimensional Monte Carlo model of optical transport in skin with a novel approach to the treatment of the side boundaries of the volume of interest. This represents an effective way to overcome the inherent limitations of "escape" and "mirror" boundary conditions and enables high-resolution modeling of skin inclusions with complex geometries and arbitrary irradiation patterns. The optical model correctly reproduces measured values of diffuse reflectance for normal skin. When coupled with a sophisticated model of thermal transport and tissue coagulation kinetics, it also reproduces realistic values of radiant exposure thresholds for epidermal injury and for photocoagulation of port wine stain blood vessels in various skin phototypes, with or without application of cryogen spray cooling.
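
    For orientation, the core of any such photon-transport code is the sampled random walk: an exponential free path, weight deposition by implicit capture, and Henyey-Greenstein scattering. The sketch below shows that kernel only, with illustrative dermis-like optical coefficients and no boundaries or layered geometry.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative optical properties (per mm), roughly dermis-like; the paper's
# skin model and its layer geometry are far more detailed.
mu_a, mu_s, g = 0.02, 20.0, 0.85   # absorption, scattering, anisotropy
mu_t = mu_a + mu_s

def rotate(d, cos_t, phi):
    """Rotate direction d by polar angle acos(cos_t) and azimuth phi (MCML form)."""
    sin_t = np.sqrt(1 - cos_t**2)
    if abs(d[2]) > 0.9999:                     # avoid the polar singularity
        return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi),
                         np.sign(d[2]) * cos_t])
    den = np.sqrt(1 - d[2]**2)
    return np.array([
        sin_t * (d[0]*d[2]*np.cos(phi) - d[1]*np.sin(phi)) / den + d[0]*cos_t,
        sin_t * (d[1]*d[2]*np.cos(phi) + d[0]*np.sin(phi)) / den + d[1]*cos_t,
        -sin_t * np.cos(phi) * den + d[2]*cos_t,
    ])

def propagate_photon(max_steps=1000):
    """Random walk of one photon packet with Henyey-Greenstein scattering."""
    pos, direction, weight = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    absorbed = 0.0
    for _ in range(max_steps):
        s = -np.log(rng.random()) / mu_t       # sampled free path length
        pos = pos + s * direction
        absorbed += weight * mu_a / mu_t       # implicit capture
        weight *= mu_s / mu_t
        if weight < 1e-4:                      # a full code would roulette here
            break
        u = rng.random()                       # Henyey-Greenstein polar angle
        cos_t = (1 + g**2 - ((1 - g**2) / (1 - g + 2*g*u))**2) / (2*g)
        direction = rotate(direction, cos_t, 2 * np.pi * rng.random())
    return absorbed

print(np.mean([propagate_photon() for _ in range(200)]))  # mean absorbed weight
```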

  15. Collectivity in heavy nuclei in the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Özen, C.; Alhassid, Y.; Nakada, H.

    2014-01-01

    The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase transitions. (author)

  16. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-03-01

    This thesis proceeded in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  17. Treatment plan evaluation for interstitial photodynamic therapy in a mouse model by Monte Carlo simulation with FullMonte

    Directory of Open Access Journals (Sweden)

    Jeffrey eCassidy

    2015-02-01

    Full Text Available Monte Carlo (MC simulation is recognized as the gold standard for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations particularly are attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have other implementation features including inherent parallelism, and permit a continuously-variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT to generate dose-volume histograms on a tetrahedral mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods and is therefore applicable to a very broad cross-section of anatomy and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right which greatly reduces the number of packets required and hence runtime required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.

  18. Clinical Management and Burden of Prostate Cancer: A Markov Monte Carlo Model

    Science.gov (United States)

    Sanyal, Chiranjeev; Aprikian, Armen; Cury, Fabio; Chevalier, Simone; Dragomir, Alice

    2014-01-01

    Background: Prostate cancer (PCa) is the most common non-skin cancer among men in developed countries. Several novel treatments have been adopted by healthcare systems to manage PCa. Most of the observational studies and randomized trials on PCa have concurrently evaluated fewer treatments over short follow-up. Further, preceding decision analytic models on PCa management have not evaluated various contemporary management options. Therefore, a contemporary decision analytic model was necessary to address limitations of the literature by synthesizing the evidence on novel treatments, thereby forecasting short and long-term clinical outcomes. Objectives: To develop and validate a Markov Monte Carlo model for the contemporary clinical management of PCa, and to assess the clinical burden of the disease from diagnosis to end-of-life. Methods: A Markov Monte Carlo model was developed to simulate the management of PCa in men 65 years and older from diagnosis to end-of-life. The health states modeled were: risk at diagnosis, active surveillance, active treatment, PCa recurrence, PCa recurrence-free, metastatic castrate-resistant prostate cancer, and overall and PCa death. Treatment trajectories were based on state transition probabilities derived from the literature. Validation and sensitivity analyses assessed the accuracy and robustness of model-predicted outcomes. Results: Validation indicated model-predicted rates were comparable to observed rates in the published literature. The simulated distribution of clinical outcomes for the base case was consistent with sensitivity analyses. Predicted rates of clinical outcomes and mortality varied across risk groups. The life expectancy and health-adjusted life expectancy predicted for the simulated cohort were 20.9 years (95% CI 20.5-21.3) and 18.2 years (95% CI 17.9-18.5), respectively. Conclusion: Study findings indicated contemporary management strategies improved survival and quality of life in patients with PCa. This model could be used
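
    The underlying mechanics of such a Markov Monte Carlo microsimulation can be sketched compactly: each simulated patient walks through the health states according to an annual transition matrix until absorption in death. The simplified state set and matrix below are purely illustrative; the paper's transition probabilities are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(13)

states = ["recurrence-free", "recurrence", "mCRPC", "dead"]
P = np.array([
    [0.92, 0.05, 0.01, 0.02],   # from recurrence-free
    [0.00, 0.85, 0.10, 0.05],   # from recurrence
    [0.00, 0.00, 0.75, 0.25],   # from mCRPC
    [0.00, 0.00, 0.00, 1.00],   # dead is absorbing
])

def simulate_patient(max_years=40):
    """Monte Carlo walk of one patient through the Markov model."""
    state, years_alive = 0, 0
    for _ in range(max_years):
        if states[state] == "dead":
            break
        state = rng.choice(4, p=P[state])       # annual transition
        years_alive += states[state] != "dead"
    return years_alive

life_years = [simulate_patient() for _ in range(10_000)]
print("mean life expectancy (years):", np.mean(life_years))
```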

  19. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes; Quasi-Monte Carlo Methoden fuer Optimierungsmodelle der Energiewirtschaft mit Preis- und Last-Prozessen

    Energy Technology Data Exchange (ETDEWEB)

    Leoevey, H.; Roemisch, W. [Humboldt-Univ., Berlin (Germany)

    2015-07-01

    We discuss progress in quasi-Monte Carlo methods for the numerical calculation of integrals and expected values, and justify why these methods are more efficient than classic Monte Carlo methods. Quasi-Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. We therefore also discuss the concept of effective dimension and show, on the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi-Monte Carlo methods are therefore very promising for such models.
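
    The efficiency gain is easy to demonstrate: on an integrand dominated by its first few coordinates (low effective dimension), a scrambled Sobol sequence typically beats plain Monte Carlo at equal sample size. A sketch using SciPy's quasi-Monte Carlo module, with an illustrative integrand:

```python
import numpy as np
from scipy.stats import qmc

# Integrand with low effective dimension: later coordinates barely matter.
def f(x):
    weights = 1.0 / (1.0 + np.arange(x.shape[1])) ** 2
    return np.cos(x @ weights)

dim, n = 20, 2**12
rng = np.random.default_rng(0)
ref = f(rng.random((2**18, dim))).mean()        # brute-force reference value

mc_est = f(rng.random((n, dim))).mean()         # plain Monte Carlo, n points
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
qmc_est = f(sobol.random(n)).mean()             # scrambled Sobol, n points

print("MC  error:", abs(mc_est - ref))
print("QMC error:", abs(qmc_est - ref))         # typically much smaller
```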

  20. Parameter sensitivity and uncertainty of the forest carbon flux model FORUG : a Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Verbeeck, H.; Samson, R.; Lemeur, R. [Ghent Univ., Ghent (Belgium). Laboratory of Plant Ecology; Verdonck, F. [Ghent Univ., Ghent (Belgium). Dept. of Applied Mathematics, Biometrics and Process Control

    2006-06-15

    The FORUG model is a multi-layer process-based model that simulates carbon dioxide (CO{sub 2}) and water exchange between forest stands and the atmosphere. The main model outputs are net ecosystem exchange (NEE), total ecosystem respiration (TER), gross primary production (GPP) and evapotranspiration. This study used a sensitivity analysis to identify the parameters contributing most to NEE uncertainty in the FORUG model. The aim was to determine whether it is necessary to estimate the uncertainty of all parameters of a model in order to determine overall output uncertainty. Data used in the study were the meteorological and flux data of beech trees in Hesse. The Monte Carlo method, in combination with multiple linear regression, was used to rank parameters by sensitivity and uncertainty. Simulations were run in which parameters were assigned probability distributions and the effect of variance in the parameters on the output distribution was assessed. The uncertainty of the output for NEE was then estimated. Based on the arbitrary uncertainty assigned to 10 key parameters, a standard deviation of 0.88 Mg C per year was found for NEE, equal to 24 per cent of the mean value of NEE. The sensitivity analysis showed that the overall output uncertainty of the FORUG model could be determined by accounting for only a few key parameters, which corresponded to critical parameters identified in the literature: the 10 most important parameters determined more than 90 per cent of the output uncertainty. High-ranking parameters were related to soil respiration, photosynthesis and crown architecture. It was concluded that the Monte Carlo technique is a useful tool for ranking the uncertainty of parameters of process-based forest flux models. 48 refs., 2 tabs., 2 figs.
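
    The ranking procedure described above (Monte Carlo sampling combined with multiple linear regression) can be illustrated with a short sketch. The toy response surface and parameter names below are hypothetical stand-ins for FORUG's parameters; the point is the standardized-regression-coefficient ranking, not the model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_flux_model(p):
    """Stand-in for FORUG's NEE output (hypothetical response surface)."""
    return 2.0 * p[:, 0] - 1.2 * p[:, 1] + 0.3 * p[:, 2] + 0.05 * p[:, 3]

n, names = 5000, ["soil_resp", "photosynth", "crown_arch", "other"]
# Assign each parameter a distribution: 10% scatter around nominal 1.0.
params = rng.normal(1.0, 0.1, size=(n, len(names)))
nee = toy_flux_model(params)

# Standardized regression coefficients apportion the output variance
# among parameters, as in the paper's regression-based ranking.
X = (params - params.mean(0)) / params.std(0)
y = (nee - nee.mean()) / nee.std()
src, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in sorted(zip(names, src), key=lambda t: -abs(t[1])):
    print(f"{name:10s} SRC={c:+.3f}  variance share ~ {c**2:.2%}")
```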

  1. Fast Monte Carlo-simulator with full collimator and detector response modelling for SPECT

    International Nuclear Information System (INIS)

    Sohlberg, A.O.; Kajaste, M.T.

    2012-01-01

    Monte Carlo (MC) simulations have proved to be a valuable tool in studying single photon emission computed tomography (SPECT) reconstruction algorithms. Despite their popularity, the use of Monte Carlo simulations is still often limited by their large computational demand. This is especially true in situations where full collimator and detector modelling with septal penetration, scatter and X-ray fluorescence needs to be included. This paper presents a rapid and simple MC simulator, which can effectively reduce the computation times. The simulator was built on the convolution-based forced detection principle, which can markedly lower the number of simulated photons. Full collimator and detector response look-up tables are pre-simulated and then later used in the actual MC simulations to model the system response. The developed simulator was validated by comparing it against 123I point source measurements made with a clinical gamma camera system and against 99mTc software phantom simulations made with the SIMIND MC package. The results showed good agreement between the new simulator, the measurements and the SIMIND package. The new simulator provided near noise-free projection data in approximately 1.5 min per projection with 99mTc, which was less than one-tenth of SIMIND's time. The developed MC simulator can markedly decrease the simulation time without sacrificing image quality. (author)

  2. Kinetic Monte-Carlo modeling of hydrogen retention and re-emission from Tore Supra deposits

    International Nuclear Information System (INIS)

    Rai, A.; Schneider, R.; Warrier, M.; Roubin, P.; Martin, C.; Richou, M.

    2009-01-01

    A multi-scale model has been developed to study the reactive-diffusive transport of hydrogen in porous graphite [A. Rai, R. Schneider, M. Warrier, J. Nucl. Mater. (submitted for publication). http://dx.doi.org/10.1016/j.jnucmat.2007.08.013.]. The deposits found on the leading edge of the neutralizer of Tore Supra are multi-scale in nature, consisting of micropores with a typical size below 2 nm (∼11%), mesopores (∼5%) and macropores with a typical size above 50 nm [C. Martin, M. Richou, W. Sakaily, B. Pegourie, C. Brosset, P. Roubin, J. Nucl. Mater. 363-365 (2007) 1251]. Kinetic Monte-Carlo (KMC) has been used to study the hydrogen transport at meso-scales. The recombination rate and the diffusion coefficient calculated at the meso-scale were used as input to scale up and analyze the hydrogen transport at the macro-scale. A combination of KMC and MCD (Monte-Carlo diffusion) methods was used at macro-scales. The flux dependence of hydrogen recycling has been studied. The retention and re-emission analysis of the model has been extended to study the chemical erosion process based on the Kueppers-Hopf cycle [M. Wittmann, J. Kueppers, J. Nucl. Mater. 227 (1996) 186].

  3. Study on Quantification for Multi-unit Seismic PSA Model using Monte Carlo Sampling

    International Nuclear Information System (INIS)

    Oh, Kyemin; Han, Sang Hoon; Jang, Seung-cheol; Park, Jin Hee; Lim, Ho-Gon; Yang, Joon Eon; Heo, Gyunyoung

    2015-01-01

    In existing PSA, frequencies have been estimated for accident sequences occurring in a single unit, whereas multi-unit PSA has to consider various combinations because the accident sequence in each unit can differ. However, it is difficult to quantify all combinations between units using traditional methods such as the Minimal Cut Upper Bound (MCUB). For this reason, we used Monte Carlo sampling to quantify the multi-unit PSA model. The advantage of this method is that it considers all combinations as the number of units increases and calculates nearly exact values compared to other methods. However, it is difficult to obtain detailed information such as minimal cut sets and accident sequences. To partially address this problem, FTeMC was modified. In multi-unit PSA, quantification of both internal and external multi-unit accidents is a significant issue. Although the result mentioned above is one case study checking the applicability of the method suggested in this paper, it is expected that this method can be used in practical assessments of multi-unit risk
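
    A minimal sketch of the idea, under assumed numbers: sampling all basic events at once captures the dependence between units introduced by shared (e.g., seismic) failures, which a naive per-unit combination misses. The cut sets and probabilities below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical minimal cut sets per unit over basic events.
# Event 0 is a shared (e.g., seismic) failure common to both units.
P_EVENT = np.array([1e-3, 5e-3, 5e-3, 2e-3, 2e-3])   # illustrative probabilities
CUTSETS = {
    "unit1": [(0,), (1, 2)],     # unit 1 fails if event 0, or events 1 and 2
    "unit2": [(0,), (3, 4)],
}

n = 1_000_000
events = rng.random((n, len(P_EVENT))) < P_EVENT     # sample all basic events once

def unit_fails(name):
    return np.any([events[:, list(cs)].all(axis=1) for cs in CUTSETS[name]], axis=0)

u1, u2 = unit_fails("unit1"), unit_fails("unit2")
print(f"P(both units fail), Monte Carlo:   {(u1 & u2).mean():.2e}")

# Multiplying per-unit results (treating the units as independent)
# misses the shared-cause coupling and badly underestimates the risk.
print(f"Naive independent-unit product:    {u1.mean() * u2.mean():.2e}")
```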

  4. Monte Carlo modeling of time-resolved fluorescence for depth-selective interrogation of layered tissue.

    Science.gov (United States)

    Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A

    2011-11-01

    Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
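
    As a sketch of the multi-exponential analysis step (ignoring convolution with the instrument response function, which a full iterative deconvolution would include), the following fits a bi-exponential decay to synthetic noisy data with SciPy; the lifetimes and fractional contributions are illustrative, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic two-layer decay: two fluorophore populations with different
# lifetimes (illustrative values) plus Poisson counting noise.
t = np.linspace(0, 20, 400)                       # time in ns
true = dict(a1=0.6, tau1=1.5, a2=0.4, tau2=5.0)   # fractions and lifetimes
decay = true["a1"] * np.exp(-t / true["tau1"]) + true["a2"] * np.exp(-t / true["tau2"])
counts = np.random.default_rng(3).poisson(decay * 2000)

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

popt, _ = curve_fit(biexp, t, counts / 2000, p0=(0.5, 1.0, 0.5, 4.0))
a1, tau1, a2, tau2 = popt
print(f"lifetimes: {tau1:.2f} ns, {tau2:.2f} ns")
print(f"fractional contributions: {a1/(a1+a2):.2f}, {a2/(a1+a2):.2f}")
```

    The abstract's 40-60% criterion corresponds to the fitted fractional contributions printed on the last line: when either layer dominates the signal, the weaker component's lifetime becomes poorly constrained.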

  5. Nonlinear Monte Carlo model of superdiffusive shock acceleration with magnetic field amplification

    Science.gov (United States)

    Bykov, Andrei M.; Ellison, Donald C.; Osipov, Sergei M.

    2017-03-01

    Fast collisionless shocks in cosmic plasmas convert their kinetic energy flow into the hot downstream thermal plasma with a substantial fraction of energy going into a broad spectrum of superthermal charged particles and magnetic fluctuations. The superthermal particles can penetrate into the shock upstream region producing an extended shock precursor. The cold upstream plasma flow is decelerated by the force provided by the superthermal particle pressure gradient. In high Mach number collisionless shocks, efficient particle acceleration is likely coupled with turbulent magnetic field amplification (MFA) generated by the anisotropic distribution of accelerated particles. This anisotropy is determined by fast particle transport, making the problem strongly nonlinear and multiscale. Here, we present a nonlinear Monte Carlo model of collisionless shock structure with superdiffusive propagation of high-energy Fermi accelerated particles coupled to particle acceleration and MFA, which affords a consistent description of strong shocks. A distinctive feature of the Monte Carlo technique is that it includes the full angular anisotropy of the particle distribution at all precursor positions. The model reveals that the superdiffusive transport of energetic particles (i.e., Lévy-walk propagation) generates a strong quadrupole anisotropy in the precursor particle distribution. The resultant pressure anisotropy of the high-energy particles produces a nonresonant mirror-type instability that amplifies compressible wave modes with wavelengths longer than the gyroradii of the highest-energy protons produced by the shock.

  6. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since

  7. Transport appraisal and Monte Carlo simulation by use of the CBA-DK model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2011-01-01

    This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based on both a deterministic calculation following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport and on a stochastic calculation, where risk analysis is carried out using Monte Carlo simulation. Special emphasis has been placed on the separation between inherent randomness in the modeling system and lack of knowledge. These two concepts have been defined in terms of variability (ontological uncertainty) and uncertainty (epistemic uncertainty). After a short introduction to the deterministic calculation resulting in some evaluation criteria, a more comprehensive evaluation of the stochastic calculation is made. Especially, the risk analysis part of CBA-DK, with considerations about which probability distributions should be used...
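
    A toy version of the stochastic side of such a model: uncertain costs and benefits are drawn from assumed probability distributions and propagated by Monte Carlo to a distribution of the benefit-cost ratio. All distributions and figures below are invented for illustration and are not CBA-DK's.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions (illustrative only): right-skewed
# construction cost, normally distributed time savings, triangular
# accident savings, all in the same monetary unit.
cost = rng.lognormal(mean=np.log(1000), sigma=0.25, size=n)
time_savings = rng.normal(900, 150, size=n)
accident_savings = rng.triangular(100, 200, 350, size=n)

bcr = (time_savings + accident_savings) / cost        # benefit-cost ratio
print(f"mean BCR            : {bcr.mean():.2f}")
print(f"P(BCR > 1)          : {(bcr > 1).mean():.1%}")
print(f"5th-95th percentile : {np.percentile(bcr, [5, 95])}")
```

    The probability that the ratio exceeds one, rather than a single deterministic ratio, is the kind of decision criterion the risk-analysis part of the paper discusses.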

  8. Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer.

    Science.gov (United States)

    Castonguay, Thomas C; Wang, Feng

    2008-03-28

    In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and modeling a heated desorption problem where exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
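
    A minimal sketch of the KMC-TBT idea as described in the abstract: heat is carried by discrete "thermal bits" that hop stochastically between adjacent grid points. The rates, lattice size and boundary treatment below are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D rod: each site holds an integer number of "thermal bits"; one KMC
# event moves a single bit to a randomly chosen neighbouring site.
n_sites, n_steps = 50, 100_000
bits = np.zeros(n_sites, dtype=int)
bits[n_sites // 2] = 10_000          # initial hot spot in the middle

for _ in range(n_steps):
    # Pick a bit-weighted site: hotter sites fire events more often.
    site = rng.choice(n_sites, p=bits / bits.sum())
    nbr = site + rng.choice([-1, 1])
    if 0 <= nbr < n_sites:           # insulated ends: out-of-range hops rejected
        bits[site] -= 1
        bits[nbr] += 1

# The profile relaxes toward the Gaussian solution of the heat equation.
print("temperature profile (every 5th site):", bits[::5])
```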

  9. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether modeling Value-at-Risk (VaR) through Monte Carlo simulation with GARCH-family volatility is supported by the efficient market hypothesis. The results show that the static evaluation is inferior to the dynamic one, indicating that the dynamic analysis supports the efficient market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since they are capable of capturing its strong dynamics.
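
    A compact sketch of Monte Carlo VaR with GARCH(1,1) volatility dynamics, in the spirit of the study. The parameter values are assumptions standing in for values fitted to the portfolio returns; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed GARCH(1,1) parameters (hypothetical fitted values).
omega, alpha, beta = 1e-6, 0.08, 0.90
sigma2_t = 4e-4          # today's conditional variance
last_eps2 = 3e-4         # yesterday's squared return innovation

n_paths, horizon = 100_000, 10
portfolio_value = 1_000_000.0

# Simulate conditional volatility and log returns over the horizon.
log_ret = np.zeros(n_paths)
s2 = np.full(n_paths, omega + alpha * last_eps2 + beta * sigma2_t)
for _ in range(horizon):
    eps = np.sqrt(s2) * rng.standard_normal(n_paths)
    log_ret += eps
    s2 = omega + alpha * eps**2 + beta * s2   # GARCH(1,1) variance update

pnl = portfolio_value * (np.exp(log_ret) - 1.0)
var_99 = -np.percentile(pnl, 1)               # 99% VaR = 1st percentile loss
print(f"10-day 99% Monte Carlo VaR: {var_99:,.0f}")
```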

  10. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and the calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
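
    Under the continuous-time assumption, recruitment across centres behaves as a superposed Poisson process, so the time to reach a target enrolment can be simulated directly, as in this sketch with invented centre counts and rates.

```python
import numpy as np

rng = np.random.default_rng(5)

def recruitment_time(target, center_rates, n_rep=10_000):
    """Months until `target` patients are recruited, with each centre
    enrolling as an independent Poisson process (rates per month).
    The sum of exponential inter-arrival times is Gamma distributed."""
    total_rate = sum(center_rates)
    return rng.gamma(shape=target, scale=1.0 / total_rate, size=n_rep)

# Hypothetical trial: 200 patients, 10 centres at 0.9 patients/month each.
t = recruitment_time(200, [0.9] * 10)
print(f"expected recruitment duration: {t.mean():.1f} months")
print(f"95th percentile:               {np.percentile(t, 95):.1f} months")
```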

  11. Monte Carlo impurity transport modeling in the DIII-D transport

    International Nuclear Information System (INIS)

    Evans, T.E.; Finkenthal, D.F.

    1998-04-01

    A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed

  12. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    In recent years, there has been an increase in the application of distributed, physically-based and integrated hydrological models. Many questions regarding how to properly calibrate and validate distributed models and assess the uncertainty of the estimated parameters and the spatially... In this study, we develop, through an application, a comprehensive framework for multi-criteria calibration and uncertainty assessment of distributed physically-based, integrated hydrological models. A revised version of the generalized likelihood uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining... Multi-site validation must complement the usual time validation.

  13. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    International Nuclear Information System (INIS)

    Gasparro, Joel; Hult, Mikael; Johnston, Peter N.; Tagziria, Hamid

    2008-01-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (in the order of per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher) particularly for gamma-ray energies below 100 keV

  14. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    Energy Technology Data Exchange (ETDEWEB)

    Gasparro, Joel [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium); Hult, Mikael [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium)], E-mail: mikael.hult@ec.europa.eu; Johnston, Peter N. [Applied Physics, Royal Melbourne Institute of Technology, GPO Box 2476V, Melbourne 3001 (Australia); Tagziria, Hamid [EC-JRC-IPSC, Institute for the Protection and the Security of the Citizen, Via E. Fermi 1, I-21020 Ispra (Italy)

    2008-09-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (in the order of per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher) particularly for gamma-ray energies below 100 keV.

  15. Monte Carlo modeling of fiber-scintillator flow-cell radiation detector geometry

    International Nuclear Information System (INIS)

    Rucker, T.L.; Ross, H.H.; Tennessee Univ., Knoxville; Schweitzer, G.K.

    1988-01-01

    A Monte Carlo computer calculation is described which models the geometric efficiency of a fiber-scintillator flow-cell radiation detector designed to detect radiolabeled compounds in liquid chromatography eluates. By using special mathematical techniques, an efficiency prediction with a precision of 1% is obtained after generating only 1000 random events. Good agreement is seen between predicted and experimental efficiency except for very low energy beta emission where the geometric limitation on efficiency is overcome by pulse height limitations which the model does not consider. The modeling results show that in the test system, the detection efficiency for low energy beta emitters is limited primarily by light generation and collection rather than geometry. (orig.)
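
    A bare-bones version of such a geometric-efficiency calculation: decay positions are sampled uniformly in a cylindrical flow cell, emission directions isotropically, and the fraction of rays reaching the side wall (standing in for the surrounding fibre bundle) before escaping through the open ends is counted. The geometry is illustrative, not the detector described above, and it ignores the light-generation limits the abstract identifies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Flow cell: cylinder of radius R and length L (cm), side wall = detector.
R, L, n = 0.1, 5.0, 1_000_000

# Uniform decay positions inside the cell.
r = R * np.sqrt(rng.random(n))
phi = 2 * np.pi * rng.random(n)
x, y, z = r * np.cos(phi), r * np.sin(phi), L * rng.random(n)

# Isotropic emission directions.
cos_t = 2 * rng.random(n) - 1
sin_t = np.sqrt(1 - cos_t**2)
psi = 2 * np.pi * rng.random(n)
dx, dy, dz = sin_t * np.cos(psi), sin_t * np.sin(psi), cos_t

# Distance along the ray to the side wall (solve |xy + t*dxy| = R) ...
a = dx**2 + dy**2
b = x * dx + y * dy
t_wall = (-b + np.sqrt(b**2 - a * (x**2 + y**2 - R**2))) / a
# ... and to whichever open end the ray is heading towards.
t_end = np.where(dz > 0, (L - z) / dz, np.where(dz < 0, -z / dz, np.inf))

print(f"geometric efficiency: {(t_wall < t_end).mean():.3f}")
```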

  16. Modelling phase separation in Fe-Cr system using different atomistic kinetic Monte Carlo techniques

    International Nuclear Information System (INIS)

    Castin, N.; Bonny, G.; Terentyev, D.; Lavrentiev, M.Yu.; Nguyen-Manh, D.

    2011-01-01

    Atomistic kinetic Monte Carlo (AKMC) simulations were performed to study α-α' phase separation in Fe-Cr alloys. Two different energy models and two approaches to estimate the local vacancy migration barriers were used. The energy models considered are a two-band model Fe-Cr potential and a cluster expansion, both fitted to ab initio data. The classical Kang-Weinberg decomposition, based on the total energy change of the system, and an Artificial Neural Network (ANN), employed as a regression tool, were used to predict the local vacancy migration barriers 'on the fly'. The results are compared with experimental thermal annealing data and differences between the applied AKMC approaches are discussed. The ability of the ANN regression method to accurately predict migration barriers not present in the training list is also addressed by performing cross-check calculations using the nudged elastic band method.

  17. Fermi-level effects in semiconductor processing: A modeling scheme for atomistic kinetic Monte Carlo simulators

    Science.gov (United States)

    Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.

    2005-09-01

    Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.

  18. Terry Turbopump Analytical Modeling Efforts in Fiscal Year 2016 - Progress Report.

    Energy Technology Data Exchange (ETDEWEB)

    Osborn, Douglas; Ross, Kyle; Cardoni, Jeffrey N

    2018-04-01

    This document details the Fiscal Year 2016 modeling efforts to define the true operating limitations (margins) of the Terry turbopump systems used in the nuclear industry for Milestone 3 (full-scale component experiments) and Milestone 4 (Terry turbopump basic science experiments) experiments. The overall multinational-sponsored program creates the technical basis to: (1) reduce and defer additional utility costs, (2) simplify plant operations, and (3) provide a better understanding of the true margin which could reduce overall risk of operations.

  19. Fundamental Drop Dynamics and Mass Transfer Experiments to Support Solvent Extraction Modeling Efforts

    International Nuclear Information System (INIS)

    Christensen, Kristi; Rutledge, Veronica; Garn, Troy

    2011-01-01

    In support of the Nuclear Energy Advanced Modeling Simulation Safeguards and Separations (NEAMS SafeSep) program, the Idaho National Laboratory (INL) worked in collaboration with Los Alamos National Laboratory (LANL) to further a modeling effort designed to predict mass transfer behavior for selected metal species between individual dispersed drops and a continuous phase in a two-phase liquid-liquid extraction (LLE) system. The purpose of the model is to understand the fundamental processes of mass transfer that occur at the drop interface. This fundamental understanding can be extended to support modeling of larger LLE equipment such as mixer settlers, pulse columns, and centrifugal contactors. The work performed at the INL involved gathering the necessary experimental data to support the modeling effort. A custom experimental apparatus was designed and built for performing drop contact experiments to measure mass transfer coefficients as a function of contact time. A high speed digital camera was used in conjunction with the apparatus to measure size, shape, and velocity of the drops. In addition to drop data, the physical properties of the experimental fluids were measured to be used as input data for the model. Physical properties measurements included density, viscosity, surface tension and interfacial tension. Additionally, self-diffusion coefficients for the selected metal species in each experimental solution were measured, and the distribution coefficient for the metal partitioning between phases was determined. At the completion of this work, the INL has determined the mass transfer coefficient and a velocity profile for drops rising by buoyancy through a continuous medium under a specific set of experimental conditions. Additionally, a complete set of experimentally determined fluid properties has been obtained. All data will be provided to LANL to support the modeling effort.

  20. A Monte Carlo-based model for simulation of digital chest tomo-synthesis

    International Nuclear Information System (INIS)

    Ullman, G.; Dance, D. R.; Sandborg, M.; Carlsson, G. A.; Svalkvist, A.; Baath, M.

    2010-01-01

    The aim of this work was to calculate synthetic digital chest tomo-synthesis projections using a computer simulation model based on the Monte Carlo method. An anthropomorphic chest phantom was scanned in a computed tomography scanner, segmented and included in the computer model to allow for simulation of realistic high-resolution X-ray images. The input parameters to the model were adapted to correspond to the VolumeRAD chest tomo-synthesis system from GE Healthcare. Sixty tomo-synthesis projections were calculated with projection angles ranging from +15° to -15°. The images from primary photons were calculated using an analytical model of the anti-scatter grid and a pre-calculated detector response function. The contributions from scattered photons were calculated using an in-house Monte Carlo-based model employing a number of variance reduction techniques such as the collision density estimator. Tomographic section images were reconstructed by transferring the simulated projections into the VolumeRAD system. The reconstruction was performed for three types of images using: (i) noise-free primary projections, (ii) primary projections including contributions from scattered photons and (iii) projections as in (ii) with added correlated noise. The simulated section images were compared with corresponding section images from projections taken with the real, anthropomorphic phantom from which the digital voxel phantom was originally created. The present article describes a work in progress aiming towards developing a model intended for optimisation of chest tomo-synthesis, allowing for simulation of both existing and future chest tomo-synthesis systems. (authors)

  1. Using a Monte Carlo model to predict dosimetric properties of small radiotherapy photon fields

    International Nuclear Information System (INIS)

    Scott, Alison J. D.; Nahum, Alan E.; Fenwick, John D.

    2008-01-01

    Accurate characterization of small-field dosimetry requires measurements to be made with precisely aligned specialized detectors and is thus time consuming and error prone. This work explores measurement differences between detectors by using a Monte Carlo model matched to large-field data to predict properties of smaller fields. Measurements made with a variety of detectors have been compared with calculated results to assess their validity and explore reasons for differences. Unshielded diodes are expected to produce some of the most useful data, as their small sensitive cross sections give good resolution whilst their energy dependence is shown to vary little with depth in a 15 MV linac beam. Their response is shown to be constant with field size over the range 1-10 cm, with a correction of 3% needed for a field size of 0.5 cm. BEAMnrc has been used to create a 15 MV beam model, matched to dosimetric data for square fields larger than 3 cm, and producing small-field profiles and percentage depth doses (PDDs) that agree well with unshielded diode data for field sizes down to 0.5 cm. For field sizes of 1.5 cm and above, little detector-to-detector variation exists in measured output factors; however, for a 0.5 cm field a relative spread of 18% is seen between output factors measured with different detectors, with the values from the diamond and pinpoint detectors lying below that of the unshielded diode and the shielded diode value being higher. Relative to the corrected unshielded diode measurement, the Monte Carlo modeled output factor is 4.5% low, a discrepancy that is probably due to the focal spot fluence profile and source occlusion modeling. The large-field Monte Carlo model can, therefore, currently be used to predict small-field profiles and PDDs measured with an unshielded diode. However, determination of output factors for the smallest fields requires a more detailed model of focal spot fluence and source occlusion.

  2. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    International Nuclear Information System (INIS)

    Merheb, C; Petegnief, Y; Talbot, J N

    2007-01-01

    Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic(TM) animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic(TM) system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed

  3. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD based MC geometry modeling. • We describe convex based void description based and quality-based space subdivision. • The results showed improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in the CAD based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled while MC codes such as MCNP need all the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjointed regions using Quality based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting with that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time

  4. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walks. Further applications for modeling of the nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)
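
    Two of the biasing techniques named above, splitting and Russian roulette, share one bookkeeping rule: the expected particle weight must be conserved so the tally stays unbiased. A minimal sketch of that rule (the thresholds are arbitrary, not MORSE's):

```python
import random

def roulette_or_split(weight, w_low=0.2, w_high=2.0, survival=1.0):
    """Split heavy particles into copies and roulette light ones,
    keeping the expected total weight unchanged in both branches."""
    if weight > w_high:                        # splitting
        n = int(weight / survival)
        return [weight / n] * n                # n copies, same total weight
    if weight < w_low:                         # Russian roulette
        if random.random() < weight / survival:
            return [survival]                  # survivor carries boosted weight
        return []                              # particle killed
    return [weight]                            # inside the window: unchanged

# Check unbiasedness empirically for a light particle:
random.seed(0)
mean_w = sum(sum(roulette_or_split(0.05)) for _ in range(100_000)) / 100_000
print(f"mean surviving weight for w=0.05: {mean_w:.4f} (expected 0.05)")
```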

  5. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, long simulation times are needed to reduce the statistical uncertainty of the results. When generating particles from the cobalt sources in a simulation, many particles are cut off, so accurate simulation is time consuming. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single gamma knife channel. We performed simulations using virtual sources on all 201 channels and compared measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in simulated results.

  6. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, long simulation times are needed to reduce the statistical uncertainty of the results. When generating particles from the cobalt sources in a simulation, many particles are cut off, so accurate simulation is time consuming. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single gamma knife channel. We performed simulations using virtual sources on all 201 channels and compared measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in simulated results

  7. Monte Carlo climate change forecasts with a global coupled ocean-atmosphere model

    International Nuclear Information System (INIS)

    Cubasch, U.; Santer, B.D.; Hegerl, G.; Hoeck, H.; Maier-Reimer, E.; Mikolajwicz, U.; Stoessel, A.; Voss, R.

    1992-01-01

    The Monte Carlo approach, which has increasingly been used during the last decade in the field of extended range weather forecasting, has been applied for climate change experiments. Four integrations with a global coupled ocean-atmosphere model have been started from different initial conditions, but with the same greenhouse gas forcing according to the IPCC scenario A. All experiments have been run for a period of 50 years. The results indicate that the time evolution of the global mean warming depends strongly on the initial state of the climate system. It can vary between 6 and 31 years. The Monte Carlo approach delivers information about both the mean response and the statistical significance of the response. While the individual members of the ensemble show a considerable variation in the climate change pattern of temperature after 50 years, the ensemble mean climate change pattern closely resembles the pattern obtained in a 100 year integration and is, at least over most of the land areas, statistically significant. The ensemble averaged sea-level change due to thermal expansion is significant in the global mean and locally over wide regions of the Pacific. The hydrological cycle is also significantly enhanced in the global mean, but locally the changes in precipitation and soil moisture are masked by the variability of the experiments. (orig.)

  8. A three-dimensional self-learning kinetic Monte Carlo model: application to Ag(111)

    International Nuclear Information System (INIS)

    Latz, Andreas; Brendel, Lothar; Wolf, Dietrich E

    2012-01-01

    The reliability of kinetic Monte Carlo (KMC) simulations depends on accurate transition rates. The self-learning KMC method (Trushin et al 2005 Phys. Rev. B 72 115401) combines the accuracy of rates calculated from a realistic potential with the efficiency of a rate catalog, using a pattern recognition scheme. This work expands the original two-dimensional method to three dimensions. The concomitant huge increase in the number of rate calculations needed on the fly can be avoided by setting up an initial database, containing exact activation energies calculated for processes gathered from a simpler KMC model. To provide two representative examples, the model is applied to the diffusion of Ag monolayer islands on Ag(111), and the homoepitaxial growth of Ag on Ag(111) at low temperatures.

  9. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
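
    For contrast with the paper's holographic boundary construction, the baseline it improves on is ordinary Metropolis sampling of the Ising model on a finite periodic lattice, whose boundary artefacts distort critical correlations. A standard sketch of that baseline:

```python
import numpy as np

rng = np.random.default_rng(8)

# Metropolis sampling of the planar Ising model on a periodic lattice.
L, beta = 32, 0.4406868   # beta near the exact critical point ln(1+sqrt(2))/2
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s):
    """One Metropolis sweep: L*L single-spin-flip proposals."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

for _ in range(200):        # equilibration sweeps
    sweep(spins)

samples = []
for _ in range(200):        # measurement sweeps
    sweep(spins)
    samples.append(abs(spins.mean()))
print(f"<|m|> at beta={beta:.4f}: {np.mean(samples):.3f}")
```

    On a finite periodic lattice the magnetisation and correlators measured this way carry finite-size corrections; removing those long-range boundary effects is precisely what the proposed method addresses.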

  10. Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    Directory of Open Access Journals (Sweden)

    Oettingen Mikołaj

    2017-01-01

    Full Text Available The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands much advanced research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of thorium-lead (Th-Pb) fuel assembly numerical models for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous-energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.

  11. Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO

    Science.gov (United States)

    Chenhall, Jeffrey; Moses, Gregory

    2017-10-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  12. Measurement and Monte Carlo modeling of the spatial response of scintillation screens

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)

    2007-11-01

    In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.

  13. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    International Nuclear Information System (INIS)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-01-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  14. Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.

    Science.gov (United States)

    Kinjo, Akira R

    2017-01-01

    A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.

  15. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    Science.gov (United States)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  16. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    Energy Technology Data Exchange (ETDEWEB)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO{sub 2}(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  17. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in analysis of PK/PD data. Unobservable state variables are usually estimated with extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by maximum likelihood estimator. However, EKF is inadequate for nonlinear PK/PD models, and MLE is known to be biased downwards. A density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters in this paper, where a genetic algorithm is designed to search the optimal values of pharmacokinetic parameters. The performances of EKF and DMF are compared through simulations for discrete time and continuous time systems respectively, and it is found that the results based on DMF are more accurate than those given by EKF with respect to mean absolute error. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Advanced modeling in positron emission tomography using Monte Carlo simulations for improving reconstruction and quantification

    International Nuclear Information System (INIS)

    Stute, Simon

    2010-01-01

    Positron Emission Tomography (PET) is a medical imaging technique that plays a major role in oncology, especially using 18F-Fluoro-Deoxyglucose. However, PET images suffer from a modest spatial resolution and from high noise. As a result, there is still no consensus on how tumor metabolically active volume and tumor uptake should be characterized. In the meantime, research groups keep producing new methods for such characterizations that need to be assessed. A Monte Carlo simulation based method has been developed to produce simulated PET images of patients suffering from cancer, indistinguishable from clinical images, and for which all parameters are known. The method uses high resolution PET images from patient acquisitions, from which the physiological heterogeneous activity distribution can be modeled. It was shown that the performance of quantification methods on such highly realistic simulated images are significantly lower and more variable than using simple phantom studies. Fourteen different quantification methods were also compared in realistic conditions using a group of such simulated patients. In addition, the proposed method was extended to simulate serial PET scans in the context of patient monitoring, including a modeling of the tumor changes, as well as the variability over time of non-tumoral physiological activity distribution. Monte Carlo simulations were also used to study the detection probability inside the crystals of the tomograph. A model of the crystal response was derived and included in the system matrix involved in tomographic reconstruction. The resulting reconstruction method was compared with other sophisticated methods for modeling the detector response in the image space, proposed in the literature. We demonstrated the superiority of the proposed method over equivalent approaches on simulated data, and illustrated its robustness on clinical data. For a same noise level, it is possible to reconstruct PET images offering a

  19. William, a voxel model of child anatomy from tomographic images for Monte Carlo dosimetry calculations

    International Nuclear Information System (INIS)

    Caon, M.

    2010-01-01

    Full text: Medical imaging provides two-dimensional pictures of the human internal anatomy from which may be constructed a three-dimensional model of organs and tissues suitable for calculation of dose from radiation. Diagnostic CT provides the greatest exposure to radiation per examination and the frequency of CT examination is high. Estimates of dose from diagnostic radiography are still determined from data derived from geometric models (rather than anatomical models), models scaled from adult bodies (rather than bodies of children) and CT scanner hardware that is no longer used. The aim of anatomical modelling is to produce a mathematical representation of internal anatomy that has organs of realistic size, shape and positioning. The organs and tissues are represented by a great many cuboidal volumes (voxels). The conversion of medical images to voxels is called segmentation and on completion every pixel in an image is assigned to a tissue or organ. Segmentation is time consuming. An image processing package is used to identify organ boundaries in each image. Thirty to forty tomographic voxel models of anatomy have been reported in the literature. Each model is of an individual, or a composite from several individuals. Images of children are particularly scarce. So there remains a need for more paediatric anatomical models. I am working on segmenting ''William'' who is 368 PET-CT images from head to toe of a seven year old boy. William will be used for Monte Carlo dose calculations of dose from CT examination using a simulated modern CT scanner.

  20. A model of reward- and effort-based optimal decision making and motor control.

    Directory of Open Access Journals (Sweden)

    Lionel Rigoux

    Full Text Available Costs (e.g., energetic expenditure) and benefits (e.g., food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e., the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and for detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control.
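    A minimal sketch of the underlying trade-off: pick the movement duration that maximizes temporally discounted reward minus motor effort. The discount rate, the d²/T³ effort scaling, and all constants below are illustrative assumptions, not the paper's actual cost function:

    ```python
    import numpy as np

    # Reward/effort trade-off sketch: choose the movement duration T that
    # maximizes discounted reward minus effort (all constants assumed).
    def utility(T, reward=1.0, gamma=0.5, k=0.05, d=0.3):
        discounted_reward = reward * np.exp(-gamma * T)  # later reward is worth less
        effort = k * d**2 / T**3                         # faster movements cost more
        return discounted_reward - effort

    T = np.linspace(0.1, 3.0, 1000)          # candidate movement durations (s)
    best = T[np.argmax(utility(T))]
    print(f"optimal movement duration: {best:.2f} s")
    ```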

  1. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it still falls short in convergence speed and solution quality. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-Based Optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out software estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. Among the many methods for effort estimation, COCOMO II is one of the most widely used models. However, the model is limited because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. Experiments were conducted on a NASA software project dataset, and the results indicated that the optimized parameters provided better estimation capability than the original COCOMO II model.
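    For orientation, the COCOMO II post-architecture form estimates effort as A · Size^B · ∏EM. The sketch below tunes (A, B) against the mean magnitude of relative error on toy data, using plain random search as a stand-in for the paper's ABC/Teaching-Learning hybrid; the project numbers are invented, not the NASA dataset:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy project data: size in KSLOC, actual effort in person-months
    # (illustrative numbers only).
    size = np.array([10.0, 25.0, 4.0, 60.0, 15.0])
    actual = np.array([40.0, 120.0, 12.0, 350.0, 65.0])

    def cocomo_effort(size, A, B, em=1.0):
        """COCOMO II post-architecture form: Effort = A * Size**B * prod(EM)."""
        return A * size**B * em

    def mmre(A, B):
        """Mean magnitude of relative error, a standard effort-estimation metric."""
        pred = cocomo_effort(size, A, B)
        return np.mean(np.abs(actual - pred) / actual)

    # Random search stands in for the ABC/TLBO hybrid: any population-based
    # optimizer can tune (A, B) against MMRE the same way.
    best = min((mmre(A, B), A, B)
               for A, B in zip(rng.uniform(1, 5, 5000), rng.uniform(0.9, 1.3, 5000)))
    print(f"MMRE={best[0]:.3f} at A={best[1]:.2f}, B={best[2]:.2f}")
    ```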

  2. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our quantitative Threat/Effort calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same kind of controller but also that of different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The test results, which are consistent with the outcomes of traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  3. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite-sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous-equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  4. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  5. Implementation of 3D models in the Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    Lopes, Vivaldo; Millian, Felix M.; Guevara, Maria Victoria M.; Garcia, Fermin; Sena, Isaac; Menezes, Hugo

    2009-01-01

    In the area of numerical dosimetry applied to medical physics, the scientific community is focused on the development of new hybrid models based on 3D models. However, several steps of the process of simulation with 3D models need improvement and optimization in order to speed up the calculations and improve the accuracy of this methodology. This project was developed with the aim of optimizing the process of introducing 3D models into the Monte Carlo radiation transport code MCNP. The fast implementation of these models in the simulation code allows the dose deposited in the patient's organs to be estimated in a more personalized way, increasing the accuracy of the estimates and reducing the health risks caused by ionizing radiation. The models were introduced into MCNP through an input file constructed from a sequence of two-dimensional images of the 3D model, generated using the program '3DSMAX', imported by the program 'TOMO MC', and thus introduced as the INPUT FILE of the MCNP code. (author)

  6. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods
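    In the analog stage described in the report, each atom carries its true per-step reaction probability and is followed individually. A toy transmutation chain A → B → C (rates and step count assumed purely for illustration) captures the idea:

    ```python
    from collections import Counter

    import numpy as np

    rng = np.random.default_rng(2)

    # Analog Monte Carlo for a toy decay/transmutation chain A -> B -> C:
    # every atom is tracked with its true per-step reaction probability.
    LAMBDA = {"A": 0.02, "B": 0.05}      # per-step transition probabilities (assumed)
    CHAIN = {"A": "B", "B": "C"}         # C is stable

    atoms = ["A"] * 10_000
    for step in range(200):
        atoms = [CHAIN[s] if s in LAMBDA and rng.random() < LAMBDA[s] else s
                 for s in atoms]

    print(Counter(atoms))                # surviving inventory after 200 steps
    ```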

  7. ICF target 2D modeling using Monte Carlo SNB electron thermal transport in DRACO

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2016-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup diffusion electron thermal transport method is adapted into a Monte Carlo (MC) transport method to better model angular and long mean free path non-local effects. The MC model was first implemented in the 1D LILAC code to verify consistency with the iSNB model. Implementation of the MC SNB model in the 2D DRACO code enables higher fidelity non-local thermal transport modeling in 2D implosions such as polar drive experiments on NIF. The final step is to optimize the MC model by hybridizing it with a MC version of the iSNB diffusion method. The hybrid method will combine the efficiency of a diffusion method in intermediate mean free path regions with the accuracy of a transport method in long mean free path regions, allowing for improved computational efficiency while maintaining accuracy. Work to date on the method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  8. 3D Monte Carlo model with direct photon flux recording for optimal optogenetic light delivery

    Science.gov (United States)

    Shin, Younghoon; Kim, Dongmok; Lee, Jihoon; Kwon, Hyuk-Sang

    2017-02-01

    Configuring the light power emitted from the optical fiber is an essential first step in planning in-vivo optogenetic experiments. However, diffusion theory, which was adopted for optogenetic research, precluded accurate estimates of light intensity in the semi-diffusive region where the primary locus of the stimulation is located. We present a 3D Monte Carlo model that provides an accurate and direct solution for light distribution in this region. Our method directly records the photon trajectory in separate volumetric grid planes to gain recording efficiency near the source, and it incorporates a 3D brain mesh to support both homogeneous and heterogeneous brain tissue. We investigated the light emitted from optical fibers in brain tissue in 3D, and we applied the results to design optimal light delivery parameters for precise optogenetic manipulation by considering the fiber output power, wavelength, fiber-to-target distance, and the area of neural tissue activation.
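    The essentials of such a voxelized photon Monte Carlo fit in a few lines: exponential free paths, weighted absorption deposited in a fluence grid, and re-scattering. The sketch below assumes isotropic scattering and homogeneous optical properties (simplifications the paper's model does not make), with made-up coefficients:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Minimal voxelized photon Monte Carlo with assumed optical properties.
    mu_a, mu_s = 0.6, 10.0               # absorption/scattering (1/mm), assumed
    mu_t = mu_a + mu_s
    grid = np.zeros((40, 40, 40))        # 1 mm voxels; fiber tip at the center
    origin = np.array([20.0, 20.0, 20.0])

    for _ in range(5_000):
        pos, w = origin.copy(), 1.0
        d = np.array([0.0, 0.0, 1.0])    # initial direction along the fiber axis
        while w > 1e-2:
            pos += d * rng.exponential(1.0 / mu_t)    # free path to next event
            if not np.all((pos >= 0) & (pos < 40)):
                break                                  # photon left the volume
            i, j, k = pos.astype(int)
            grid[i, j, k] += w * mu_a / mu_t           # absorbed fraction
            w *= mu_s / mu_t                           # surviving weight
            v = rng.normal(size=3)                     # new isotropic direction
            d = v / np.linalg.norm(v)

    print("peak deposited weight:", grid.max())
    ```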

  9. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    This paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function playing two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei, irrespective of the considered interaction. (orig.)

  10. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (²⁴¹Am, ¹³³Ba, ²²Na, ⁶⁰Co, ⁵⁷Co, ¹³⁷Cs and ¹⁵²Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were used to obtain theoretical efficiency curves. Since discrepancies were found between the experimental and calculated data using the manufacturer's parameters for the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.

  11. Monte Carlo evidence for the gluon-chain model of QCD string formation

    International Nuclear Information System (INIS)

    Greensite, J.; San Francisco State Univ., CA

    1988-08-01

    The Monte Carlo method is used to calculate the overlaps ⟨Ψ_string|n gluons⟩, where Ψ_string[A] is the Yang-Mills wavefunctional due to a static quark-antiquark pair, and the |n gluons⟩ are orthogonal trial states containing n = 0, 1, or 2 gluon operators multiplying the true ground state. The calculation is carried out for SU(2) lattice gauge theory in Coulomb gauge, in D=4 dimensions. It is found that the string state is dominated at small quark-antiquark separations by the vacuum ('no-gluon') state, at larger separations by the 1-gluon state, and, at the largest separations attempted, the 2-gluon state begins to dominate. This behavior is in qualitative agreement with the gluon-chain model, which is a large-N_colors-motivated theory of QCD string formation. (orig.)

  12. Calibration of lung counter using a CT model of Torso phantom and Monte Carlo method

    International Nuclear Information System (INIS)

    Zhang Binquan; Ma Jizeng; Yang Duanjie; Liu Liye; Cheng Jianping

    2006-01-01

    Tomographic images of a torso phantom were obtained from a CT scan. The torso phantom represents the trunk of an adult man, 170 cm tall and weighing 65 kg. After these images were segmented, cropped, and resized, a 3-dimensional voxel phantom was created. The voxel phantom includes more than 2 million voxels, each of size 2.73 mm × 2.73 mm × 3 mm. This model can be used for the calibration of a lung counter with the Monte Carlo method. On the assumption that radioactive material was homogeneously distributed throughout the lung, counting efficiencies of an HPGe detector at different positions were calculated for different values of the adipose mass fraction (AMF) in the soft tissue of the chest. The results showed that the counting efficiencies of the lung counter changed by up to 67% for the 17.5 keV γ ray and 20% for the 25 keV γ ray when the AMF changed from 0 to 40%. (authors)

  13. A Monte Carlo calculation model of electronic portal imaging device for transit dosimetry through heterogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu [Department of Physics, East Carolina University, Greenville, North Carolina 27858 (United States); Kim, Jong Oh [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania 15232 (United States); Yeo, Inhwan [Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92354 (United States)

    2016-05-15

    Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of an electronic portal imaging device (EPID) based on effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code with density scaling of EPID structures, was modified by additionally considering the effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations of the model with measurements in the EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing the calculated dose to water with ion chamber (IC) array measurements in the plane of the EPID. Results: In the EPID plane, the dose to water calculated by the code agreed with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by the proximity of the maximum points to the penumbra and by MC noise. The EPID model agreed with measured EPID images within 1.3%, with a maximum point difference of 1.9%. The difference dropped below that of the bare code by employing a calibration, dependent on field size and thickness, for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend in the calibration factors, unlike that of the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving the agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of EPID has been developed, and its performance has been rigorously

  14. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
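    The ±2%/2 mm and ±3%/2 mm criteria above are gamma-index tests. A minimal 1D version (global normalization, brute-force search over reference points; toy profiles rather than IROC-H data) reads:

    ```python
    import numpy as np

    def gamma_1d(x, ref, x_eval, eval_dose, dose_tol=0.03, dist_tol=2.0):
        """1D gamma index: for each evaluated point, the minimum over reference
        points of the combined dose-difference / distance-to-agreement metric.
        A point passes when gamma <= 1 (here 3% of max dose, 2 mm)."""
        dmax = ref.max()
        gam = np.empty_like(eval_dose)
        for n, (xe, de) in enumerate(zip(x_eval, eval_dose)):
            dd = (de - ref) / (dose_tol * dmax)      # dose difference term
            dx = (xe - x) / dist_tol                 # distance term
            gam[n] = np.sqrt(dd**2 + dx**2).min()
        return gam

    # Toy example: a reference profile vs. a slightly shifted/scaled calculation.
    x = np.linspace(-50, 50, 201)                    # position (mm)
    ref = np.exp(-(x / 30.0)**4)                     # reference "profile"
    calc = 1.01 * np.exp(-((x - 0.5) / 30.0)**4)     # 1% high, 0.5 mm shifted
    g = gamma_1d(x, ref, x, calc)
    print(f"gamma pass rate: {100 * np.mean(g <= 1):.1f}%")
    ```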

  15. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed excellent agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained, with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are being considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
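    The canonical-ensemble workhorse behind such isotherms is Metropolis Monte Carlo. A minimal NVT sketch for a Lennard-Jones fluid in reduced units (particle count, density, temperature and move size all chosen for illustration, with no cutoff or tail corrections) looks like:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Minimal canonical (NVT) Metropolis Monte Carlo for a Lennard-Jones fluid.
    N, L, T = 64, 8.0, 2.0                      # particles, box length, temperature
    pos = rng.uniform(0, L, size=(N, 3))

    def pair_energy(i, r):
        """LJ energy of particle i at position r against all others."""
        d = pos - r
        d -= L * np.round(d / L)                # minimum-image convention
        r2 = np.einsum("ij,ij->i", d, d)
        r2[i] = np.inf                          # skip self-interaction
        inv6 = 1.0 / r2**3
        return np.sum(4.0 * (inv6**2 - inv6))   # sum_j 4(1/r^12 - 1/r^6)

    acc = 0
    for step in range(20_000):
        i = rng.integers(N)
        trial = (pos[i] + rng.uniform(-0.2, 0.2, 3)) % L
        dE = pair_energy(i, trial) - pair_energy(i, pos[i])
        if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis criterion
            pos[i] = trial
            acc += 1
    print(f"acceptance ratio: {acc / 20_000:.2f}")
    ```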

  16. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization, which was proposed by the authors, is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. As to the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis; therefore, a projection operator is used to obtain eigenstates of angular momentum. The dynamically determined space is treated mainly stochastically, and the many-body energies for the resulting basis are evaluated and selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of ⁵⁰Mn nuclei. The level structure of ⁴⁸Cr, whose exact energies are known, can be calculated with absolute energies accurate to within 200 keV. ⁵⁶Ni is a self-conjugate nucleus with Z=N=28. The results of shell model calculations of the ⁵⁶Ni nuclear structure using the interactions of nuclear models are reported. (K.I.)

  17. STUDY OF INSTRUCTIONAL MODELS AND SYNTAX AS AN EFFORT FOR DEVELOPING ‘OIDDE’ INSTRUCTIONAL MODEL

    Directory of Open Access Journals (Sweden)

    Atok Miftachul Hudha

    2016-07-01

    Full Text Available The 21st century requires the availability of human resources with seven skills or competences (Maftuh, 2016), namely: (1) critical thinking and problem-solving skills, (2) creativity and innovation, (3) ethical behavior, (4) flexibility and quick adaptation, (5) competence in ICT and literacy, (6) interpersonal and collaborative capabilities, and (7) social skills and cross-cultural interaction. One of these 21st-century competences, behaving ethically, should be established and developed through learning that includes the study of ethics, because ethical behavior cannot simply be created and possessed by humans as a given, but must develop through problem solving, especially the solving of ethical dilemmas within ethical problems or the problematics of ethics. The fundamental problem, if ethical behavior competence is to be achieved through learning, is that teachers have not yet found the right learning model for implementing learning associated with ethical values as expected in character education (Hudha, et al., 2014a, 2014b, 2014c). Therefore, a decent learning model (valid, practical, and effective) is needed so that ethics learning, to form human resources that behave ethically, can be realized. Thus, it is necessary to study (analyze and modify) the learning steps (syntax) of existing learning models in order to develop a new learning model syntax. One feasible, practical, and effective learning model in question results from the analysis and modification of the syntax of social learning models, namely the syntax of the behavioral systems learning model (Joyce and Weil, 1980; Joyce, et al., 2009), as well as the syntax of the Tri Prakoro learning model (Akbar, 2013). The modified syntax yields the 'OIDDE' learning model, an acronym of orientation, identify, discussion, decision, and engage in behavior.

  18. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections into different intervals within the fault, using the SIMFIP probe, to explore the conditions for the mechanical and seismic stability of the fault. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay including S-C bands and microfolds occurs in larger zones at the top and bottom of the Main Fault. A centimeter-thin layer of gouge, known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks, bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes, even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  19. Simulation and Modeling Efforts to Support Decision Making in Healthcare Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Eman AbuKhousa

    2014-01-01

    Full Text Available Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the decision-making processes involved. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial-and-error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision-making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  20. Simulation and modeling efforts to support decision making in healthcare supply chain management.

    Science.gov (United States)

    AbuKhousa, Eman; Al-Jaroodi, Jameela; Lazarova-Molnar, Sanja; Mohamed, Nader

    2014-01-01

    Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the decision-making processes involved. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial-and-error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision-making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  1. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate into network hubs; backward migration of the same number of individuals was random, not biased by body height. Network hubs were defined by the importance of a district within the spatial network, evaluated by the centrality measures. In the null model there were no road connections, so height information could not be transferred between the districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later
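    A toy version of the biased-migration step, on a ring of districts plus a single hub instead of the Swiss road network, illustrates the mechanism; population sizes, mover counts and the generation count are all illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Height-biased migration on a toy network: a ring of districts plus one
    # central hub. Tall agents move toward the best-connected neighbor
    # (degree as a stand-in centrality); return migration is height-blind.
    n = 12
    adj = {i: [(i - 1) % n, (i + 1) % n, n] for i in range(n)}  # ring + hub
    adj[n] = list(range(n))                                     # hub connects to all
    heights = {d: list(rng.normal(170, 6.5, 500)) for d in adj}

    for gen in range(30):
        for d in list(adj):
            hub = max(adj[d], key=lambda k: len(adj[k]))        # most-connected neighbor
            movers = sorted(heights[d], reverse=True)[:10]      # tallest 10 emigrate
            back = [heights[hub].pop(rng.integers(len(heights[hub])))
                    for _ in movers]                            # random return flow
            heights[d] = [h for h in heights[d] if h not in movers] + back
            heights[hub].extend(movers)

    print(f"hub mean height:  {np.mean(heights[n]):.1f} cm")
    print(f"ring mean height: {np.mean([h for d in range(n) for h in heights[d]]):.1f} cm")
    ```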

  2. Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)

    2012-06-15

    Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB® code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes, composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB® code was able to grow semi-realistic cell distributions (~2 × 10⁸ cells in 1 cm³) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.

  3. A Monte Carlo risk assessment model for acrylamide formation in French fries.

    Science.gov (United States)

    Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel

    2009-10-01

    The objective of this study is to estimate the likely human exposure to the group 2a carcinogen, acrylamide, from French fries by Irish consumers by developing a quantitative risk assessment model using Monte Carlo simulation techniques. Various stages in the French-fry-making process were modeled, from initial potato harvest through storage and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package and was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to lower levels of acrylamide than males (mean exposure of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 and 0.029 for males and females, respectively. A sensitivity analysis highlighted the importance of selecting appropriate cultivars with known low reducing-sugar levels for French fry production. Strict control of cooking conditions (correlation coefficients of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
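    The structure of such an @Risk-style exposure model is easy to reproduce: sample each input from a distribution, combine, and read off exceedance probabilities. The distributions below are illustrative stand-ins, not the paper's fitted inputs:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Monte Carlo exposure sketch with assumed input distributions.
    n = 10_000
    acrylamide = rng.lognormal(mean=np.log(300), sigma=0.6, size=n)  # ug/kg in fries
    intake = rng.triangular(0, 60, 200, size=n) / 1000.0             # kg fries/day
    bodyweight = rng.normal(80, 12, size=n)                          # kg

    exposure = acrylamide * intake / bodyweight                      # ug/kg bw/day
    print(f"mean exposure: {exposure.mean():.2f} ug/kg bw/day")
    print(f"P(exposure > 1 ug/kg bw/day) = {np.mean(exposure > 1):.3f}")
    ```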

  4. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo.

    Science.gov (United States)

    Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong

    2014-12-21

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N₂(¹Σg⁺)-N₂(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  5. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    International Nuclear Information System (INIS)

    Parsons, Neal; Levin, Deborah A.; Duin, Adri C. T. van; Zhu, Tong

    2014-01-01

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N₂(¹Σg⁺)-N₂(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  6. Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.

    Science.gov (United States)

    Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood

    2016-01-01

    Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. A conventional CT scanner with a single-array detector was modeled using the MCNPX MC code. The MC-calculated photon fluence in the detector arrays was used for image reconstruction of a simple water phantom as well as of the polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method, using a Hann filter and spline interpolation. Using the MC results, we obtained the dose-response curve for images of the irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in CT number with increasing absorbed dose for the studied gel. Our results also showed that the current MC model of a CT scanner can be used for further studies of the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.

  7. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    Science.gov (United States)

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
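    A generic version of such a stopping rule adds simulations in batches until the relative 95% confidence half-width of the output mean falls below a tolerance. The paper combines several criteria, so this single-criterion sketch (with a stand-in "model") only illustrates the idea:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def mc_until_converged(simulate, tol=0.01, batch=100, max_runs=100_000):
        """Run Monte Carlo in batches and stop once the 95% confidence
        half-width of the output mean is below `tol` relative to the mean."""
        out = []
        while len(out) < max_runs:
            out.extend(simulate() for _ in range(batch))
            x = np.asarray(out)
            half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
            if half / abs(x.mean()) < tol:
                break
        return x.mean(), half, len(x)

    # Stand-in "wastewater model": a noisy effluent-quality output.
    mean, half, n = mc_until_converged(lambda: rng.normal(12.0, 3.0))
    print(f"mean={mean:.2f} +/- {half:.2f} after {n} simulations")
    ```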

  8. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations

    Science.gov (United States)

    Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.

    2010-11-01

    We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to the analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approach of the popular multi-layer model by Wang et al. [1] to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI; the energy deposition outside of the VOI is not computed or recorded. After discussing the selection of tissue parameters, we apply the model to the analysis of blood photocoagulation and collateral thermal damage in the treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.

  9. EURADOS intercomparison exercise on Monte Carlo modelling of a medical linear accelerator.

    Science.gov (United States)

    Caccia, Barbara; Le Roy, Maïwenn; Blideanu, Valentin; Andenna, Claudio; Arun, Chairmadurai; Czarnecki, Damian; El Bardouni, Tarek; Gschwind, Régine; Huot, Nicolas; Martin, Eric; Zink, Klemens; Zoubair, Mariam; Price, Robert; de Carlan, Loïc

    2017-01-01

    In radiotherapy, Monte Carlo (MC) methods are considered a gold standard for calculating accurate dose distributions, particularly in heterogeneous tissues. EURADOS organized an international comparison with six participants applying different MC models to a real medical linear accelerator and to one homogeneous and four heterogeneous dosimetric phantoms. The aim of this exercise was to identify, by comparing different MC models against a complete experimental dataset, critical aspects useful for MC users in building and calibrating a simulation and performing a dosimetric analysis. Results show, on average, good agreement between simulated and experimental data. However, some significant differences were observed, especially in the presence of heterogeneities. Moreover, the results are critically dependent on the choices of the initial electron source parameters. This intercomparison allowed the participants to identify some critical issues in MC modelling of a medical linear accelerator. The complete experimental dataset assembled for this intercomparison will therefore be made available to all MC users, providing them with an opportunity to build and calibrate a model of a real medical linear accelerator.

  10. Kinetic Monte Carlo Potts Model for Simulating a High Burnup Structure in UO2

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    A Potts model, based on the kinetic Monte Carlo method, was originally developed for magnetic domain evolution, but it was also proposed as a model for grain growth in polycrystals due to similarities between Potts domain structures and grain structures. It has been used to model various microstructural phenomena such as grain growth, recrystallization, sintering, and so on. A high burnup structure (HBS) is observed in the periphery of high burnup UO₂ fuel. Although its formation mechanism is not clearly understood yet, its characteristics are well recognized: the HBS microstructure consists of very small grains and large bubbles instead of the original as-sintered grains. A threshold burnup for the HBS is observed at a local burnup of 60-80 GWd/tM, and the threshold temperature is 1000-1200 °C. Concerning energy stability, the HBS can be created if the system energy of the HBS is lower than that of the original structure in irradiated UO₂. In this paper, a Potts model was implemented for simulating the HBS by calculating system energies, and the simulation results were compared with the HBS characteristics mentioned above.
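    The grain-growth core of a Potts simulation is compact: spins are grain orientations, the energy counts unlike nearest-neighbour pairs, and Metropolis flips anneal the structure. Lattice size, Q and temperature below are illustrative; the HBS application adds bubbles and burnup-dependent terms on top of this skeleton:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Minimal 2D Potts grain-growth sketch (parameters chosen for illustration).
    L, Q, kT = 64, 32, 0.3
    spins = rng.integers(0, Q, size=(L, L))

    def site_energy(s, i, j):
        """Energy of orientation s at (i, j): one unit per unlike neighbour."""
        nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        return sum(s != q for q in nbrs)

    for step in range(200_000):
        i, j = rng.integers(L), rng.integers(L)
        new = rng.integers(Q)
        dE = site_energy(new, i, j) - site_energy(spins[i, j], i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):   # Metropolis acceptance
            spins[i, j] = new

    print("grain orientations remaining:", len(np.unique(spins)))
    ```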

  11. Direct Simulation Monte Carlo Application of the Three Dimensional Forced Harmonic Oscillator Model

    Science.gov (United States)

    2017-12-07

    A Direct Simulation Monte Carlo application of the three-dimensional forced harmonic oscillator model is proposed. The implementation employs precalculated lookup tables for transition probabilities and is suitable for the direct simulation Monte Carlo method. It takes into account the microscopic reversibility between the excitation and deexcitation processes, and it satisfies the detailed balance.

  12. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow for uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends, and the statistical relevance of the resulting fraction estimations was rigorously assessed. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we extend our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples, and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unaccounted for by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
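    The key advantage named here, propagating end-member uncertainty into the fraction estimates, can be shown with a two-end-member δ18O example (all values invented): sample the end-members from their error distributions, solve the mass balance, and keep the physically valid draws:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Two-end-member Monte Carlo mixing: end-member delta-18O values carry
    # uncertainty, and only fractions in [0, 1] are retained.
    n = 100_000
    snow = rng.normal(-22.0, 1.0, n)      # end-member 1: snow melt (per mil)
    ice = rng.normal(-17.0, 0.8, n)       # end-member 2: glacial ice (per mil)
    obs = rng.normal(-19.0, 0.3, n)       # mixed sample measurement

    f_snow = (obs - ice) / (snow - ice)   # mass balance: obs = f*snow + (1-f)*ice
    valid = f_snow[(f_snow >= 0) & (f_snow <= 1)]
    lo, med, hi = np.percentile(valid, [2.5, 50, 97.5])
    print(f"snow fraction: {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")
    ```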

  13. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    This report describes the creation of a digital elevation model of the Olkiluoto area incorporating a large area of seabed. The modeled area covers 960 square kilometers, and the apparent resolution of the created elevation model was specified to be 2.5 × 2.5 meters. Various elevation data, such as contour lines and irregular elevation measurements, were used as source data in the process. The precision and reliability of the available source data varied widely. A digital elevation model (DEM) is a representation, in digital format, of the elevation of the surface of the earth in a particular area. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data, and is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1,000 (20,000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape over a time range of 10,000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
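    The Monte Carlo step can be sketched directly: perturb the source points according to their assumed errors, re-interpolate each realization with a thin plate spline, and take per-cell percentiles. The sketch assumes SciPy's RBFInterpolator (SciPy >= 1.7) and uses invented survey points:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator  # assumes SciPy >= 1.7

    rng = np.random.default_rng(10)

    # Invented survey points with per-point elevation accuracy (m).
    pts = rng.uniform(0, 1000, size=(40, 2))
    z_true = 20 * np.sin(pts[:, 0] / 200) + 0.01 * pts[:, 1]
    sigma = rng.uniform(0.1, 2.0, size=40)

    grid = np.stack(np.meshgrid(np.linspace(0, 1000, 25),
                                np.linspace(0, 1000, 25)), axis=-1).reshape(-1, 2)

    stack = []
    for _ in range(200):                 # 200 realizations (the report used 1,000)
        z = z_true + rng.normal(0, sigma)              # perturb source data
        stack.append(RBFInterpolator(pts, z, kernel="thin_plate_spline")(grid))
    stack = np.array(stack)

    width = np.percentile(stack, 97.5, axis=0) - np.percentile(stack, 2.5, axis=0)
    print(f"median 95% CI width across the grid: {np.median(width):.2f} m")
    ```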

  14. A model to estimate cost-savings in diabetic foot ulcer prevention efforts.

    Science.gov (United States)

    Barshes, Neal R; Saedi, Samira; Wrobel, James; Kougias, Panos; Kundakcioglu, O Erhun; Armstrong, David G

    2017-04-01

    Sustained efforts at preventing diabetic foot ulcers (DFUs) and subsequent leg amputations are sporadic in most health care systems despite the high costs associated with such complications. We sought to estimate effectiveness targets at which cost-savings (i.e., improved health outcomes at decreased total costs) might occur. A Markov model with probabilistic sensitivity analyses was used to simulate the five-year survival, incidence of foot complications, and total health care costs in a hypothetical population of 100,000 people with diabetes. Clinical event and cost estimates were obtained from previously published trials and studies. A population without previous DFU but with 17% neuropathy and 11% peripheral artery disease (PAD) prevalence was assumed. Primary prevention (PP) was defined as reducing initial DFU incidence. PP was more than 90% likely to provide cost-savings when annual prevention costs are less than $50/person and/or annual DFU incidence is reduced by at least 25%. Efforts directed at patients with diabetes who were at moderate or high risk for DFUs were very likely to provide cost-savings if DFU incidence was decreased by at least 10% and/or the cost was less than $150 per person per year. Low-cost DFU primary prevention efforts producing even small decreases in DFU incidence may provide the best opportunity for cost-savings, especially if focused on patients with neuropathy and/or PAD. Mobile-phone-based reminders, self-identification of risk factors (e.g., the Ipswich touch test), and written brochures may be among such low-cost interventions that should be investigated for cost-savings potential. Published by Elsevier Inc.
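    A stripped-down Markov cohort model shows how the cost-savings comparison works: simulate annual state transitions with and without prevention, then compare total five-year costs. All transition probabilities and costs below are illustrative, not the paper's calibrated inputs:

    ```python
    import numpy as np

    # States: 0 = no ulcer, 1 = active DFU, 2 = amputation, 3 = dead.
    # Annual transition matrix and per-state costs are assumed for illustration.
    P = np.array([[0.92, 0.05, 0.00, 0.03],
                  [0.55, 0.25, 0.12, 0.08],
                  [0.00, 0.10, 0.75, 0.15],
                  [0.00, 0.00, 0.00, 1.00]])
    annual_cost = np.array([500.0, 12_000.0, 25_000.0, 0.0])

    def five_year_cost(P, prevention_cost=0.0):
        cohort = np.array([100_000.0, 0.0, 0.0, 0.0])
        total = 0.0
        for year in range(5):
            total += cohort @ annual_cost + cohort[:3].sum() * prevention_cost
            cohort = cohort @ P                 # advance the cohort one year
        return total

    base = five_year_cost(P)
    P2 = P.copy()
    P2[0, 1] *= 0.75                            # prevention cuts DFU incidence by 25%
    P2[0, 0] = 1 - P2[0, 1:].sum()              # keep the row stochastic
    prev = five_year_cost(P2, prevention_cost=50.0)
    print(f"savings per 100k patients over 5 y: ${base - prev:,.0f}")
    ```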

  15. Shell-model Monte Carlo simulations of the BCS-BEC crossover in few-fermion systems

    DEFF Research Database (Denmark)

    Zinner, Nikolaj Thomas; Mølmer, Klaus; Özen, C.

    2009-01-01

    We study a trapped system of fermions with a zero-range two-body interaction using the shell-model Monte Carlo method, providing ab initio results for the low particle number limit where mean-field theory is not applicable. We present results for the N-body energies as a function of interaction

  16. EURADOS intercomparison on measurements and Monte Carlo modelling for the assessment of Americium in a USTUR leg phantom

    International Nuclear Information System (INIS)

    Lopez, M. A.; Broggio, D.; Capello, K.; Cardenas-Mendez, E.; El-Faramawy, N.; Franck, D.; James, A. C.; Kramer, G. H.; Lacerenza, G.; Lynch, T. P.; Navarro, J. F.; Navarro, T.; Perez, B.; Ruehm, W.; Tolmachev, S. Y.; Weitzenegger, E.

    2011-01-01

    A collaboration of the EURADOS working group on 'Internal Dosimetry' and the United States Transuranium and Uranium Registries (USTUR) has taken place to carry out an intercomparison on measurements and Monte Carlo modelling to determine the americium deposited in the bone of a USTUR leg phantom. Preliminary results and conclusions of this intercomparison exercise are presented here. (authors)

  17. Monte Carlo simulations with Symanzik's improved actions in the lattice O(3) non-linear sigma-model

    International Nuclear Information System (INIS)

    Berg, B.; Montvay, I.; Meyer, S.

    1983-10-01

    The scaling properties of the lattice O(3) non-linear σ-model are studied. The mass gap, energy-momentum dispersion, and correlation functions are measured by numerical Monte Carlo methods. Symanzik's tree-level and 1-loop improved actions are compared to the standard (nearest-neighbour) action. (orig.)

  18. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte, H.W.J.

    1996-01-01

    The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity

  19. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  20. Modeling of the 3RS tau protein with self-consistent field method and Monte Carlo simulation

    NARCIS (Netherlands)

    Leermakers, F.A.M.; Jho, Y.S.; Zhulina, E.B.

    2010-01-01

    Using a model with amino acid resolution of the 196 aa N-terminus of the 3RS tau protein, we performed both a Monte Carlo study and a complementary self-consistent field (SCF) analysis to obtain detailed information on conformational properties of these moieties near a charged plane (mimicking the

  1. Kinetic Monte Carlo modeling of the efficiency roll-off in a multilayer white organic light-emitting device

    NARCIS (Netherlands)

    Mesta, M.; van Eersel, H.; Coehoorn, R.; Bobbert, P.A.

    2016-01-01

    Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance

  2. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression.

    Science.gov (United States)

    Walker, Jeffrey A

    2016-01-01

    Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F^2-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS distributions suggest that the GLS results in

  3. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F^2-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS

  4. AN ENHANCED MODEL TO ESTIMATE EFFORT, PERFORMANCE AND COST OF THE SOFTWARE PROJECTS

    Directory of Open Access Journals (Sweden)

    M. Pauline

    2013-04-01

    Full Text Available The authors have proposed a model that first captures the fundamentals of software metrics in phase 1, consisting of three primitive primary software engineering metrics: person-months (PM), function points (FP), and lines of code (LOC). Phase 2 consists of the proposed function point, which is obtained by grouping the adjustment factors to simplify the process of adjustment and to ensure more consistency in the adjustments. In the proposed method, fuzzy logic is used to quantify the quality of requirements, which is added as one of the adjustment factors; thus a fuzzy-based approach to the Enhanced General System Characteristics for estimating the effort of software projects using productivity has been obtained. Phase 3 takes the calculated function point and gives it as input to the static single-variable models (the Intermediate COCOMO and COCOMO II) for cost estimation. The authors have tailored the cost factors in Intermediate COCOMO, and both the cost and scale factors in COCOMO II, to suit the individual development environment, which is very important for the accuracy of the cost estimates. The software performance indicators (project duration, schedule predictability, requirements completion ratio and post-release defect density) are also measured for the software projects. A comparative study of effort, performance measurement and cost estimation of the software projects is made between the existing model and the authors' proposed work, which also analyzes the interactional process through which the estimation tasks were collectively accomplished.
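
    As context for phase 3, the sketch below evaluates the textbook Intermediate COCOMO equations (effort E = a * KLOC^b * EAF person-months, duration = 2.5 * E^d months) with their standard published coefficients; the paper's tailored cost drivers and fuzzy function-point adjustment are not reproduced here, and the sample inputs are arbitrary.

        # Textbook Intermediate COCOMO; coefficients (a, b, d) per project mode.
        MODES = {"organic":      (3.2, 1.05, 0.38),
                 "semidetached": (3.0, 1.12, 0.35),
                 "embedded":     (2.8, 1.20, 0.32)}

        def cocomo(kloc, mode="semidetached", eaf=1.0):
            a, b, d = MODES[mode]
            effort = a * kloc ** b * eaf    # person-months
            duration = 2.5 * effort ** d    # months
            return effort, duration

        e, t = cocomo(50, "semidetached", eaf=1.15)
        print(f"effort ~ {e:.1f} person-months, duration ~ {t:.1f} months")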

  5. Multiresolution Modeling of Semidilute Polymer Solutions: Coarse-Graining Using Wavelet-Accelerated Monte Carlo

    Directory of Open Access Journals (Sweden)

    Animesh Agarwal

    2017-09-01

    Full Text Available We present a hierarchical coarse-graining framework for modeling semidilute polymer solutions, based on the wavelet-accelerated Monte Carlo (WAMC) method. This framework forms a hierarchy of resolutions to model polymers at length scales that cannot be reached via atomistic or even standard coarse-grained simulations. Previously, it was applied to simulations examining the structure of individual polymer chains in solution using up to four levels of coarse-graining (Ismail et al., J. Chem. Phys., 2005, 122, 234901 and Ismail et al., J. Chem. Phys., 2005, 122, 234902), recovering the correct scaling behavior in the coarse-grained representation. In the present work, we extend this method to the study of polymer solutions, deriving the bonded and non-bonded potentials between coarse-grained superatoms from the single chain statistics. A universal scaling function is obtained, which does not require recalculation of the potentials as the scale of the system is changed. To model semidilute polymer solutions, we assume the intermolecular potential between the coarse-grained beads to be equal to the non-bonded potential, which is a reasonable approximation in the case of semidilute systems. Thus, a minimal input of microscopic data is required for simulating the systems at the mesoscopic scale. We show that coarse-grained polymer solutions can reproduce results obtained from the more detailed atomistic system without a significant loss of accuracy.

  6. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
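
    The idea of quantifying a multiunit model by sampling rather than by minimal cut sets can be shown in a few lines. The sketch below is a hypothetical two-unit site, not the six-unit example model of the paper: a shared initiating failure combines with unit-level system failures whose probabilities are deliberately high, the regime where the rare-event approximation breaks down and direct Monte Carlo remains exact up to sampling error.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 1_000_000                      # Monte Carlo trials

        # Illustrative probabilities only: a shared initiating failure and,
        # per unit, two redundant mitigating systems in OR logic.
        p_shared, p_sys = 1e-2, 0.3
        shared = rng.random(N) < p_shared
        u1 = shared & ((rng.random(N) < p_sys) | (rng.random(N) < p_sys))
        u2 = shared & ((rng.random(N) < p_sys) | (rng.random(N) < p_sys))

        print("P(unit 1 core damage) ~", u1.mean())        # exact: 0.0051
        print("P(both units damaged) ~", (u1 & u2).mean()) # exact: 0.002601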

  7. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    Science.gov (United States)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

    Ebola virus infection is a severe infectious disease with the highest case fatality rate, and it has become a global public health threat. What makes the disease the worst of all is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of the Ebola epidemic has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is verified and the basic reproduction number is calculated. Stability conditions are also checked, and finally simulation is done using both the Euler method and one of the ten most influential algorithms, the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination are used to predict the effect of vaccination on the infected individuals over time, and the same is done for quarantine. The results show that quarantine and vaccination are very effective ways to control an Ebola epidemic. From our study it was also seen that an individual is less likely to contract the Ebola virus a second time after surviving a first infection. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of an Ebola epidemic.
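
    The MCMC fitting step mentioned above can be sketched with a random-walk Metropolis sampler for a single transmission-rate parameter of a toy discrete SIR model. The "observed" case counts, prior bounds, step size, and all epidemiological constants below are invented for illustration; they are not the Ebola data or the model of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        obs = np.array([3., 7., 15., 28., 41., 45., 38., 25.])  # synthetic weekly cases

        def sir_cases(beta, gamma=0.7, n=1000.0, i0=3.0, weeks=8):
            s, i, out = n - i0, i0, []
            for _ in range(weeks):
                new = beta * s * i / n          # new infections this week
                s, i = s - new, i + new - gamma * i
                out.append(new)
            return np.array(out)

        def log_post(beta, sigma=5.0):
            if not 0.0 < beta < 5.0:
                return -np.inf                  # flat prior on (0, 5)
            r = obs - sir_cases(beta)
            return -0.5 * np.sum((r / sigma) ** 2)  # Gaussian likelihood

        beta, chain = 1.0, []
        for _ in range(20_000):                 # random-walk Metropolis
            prop = beta + 0.05 * rng.standard_normal()
            if np.log(rng.random()) < log_post(prop) - log_post(beta):
                beta = prop
            chain.append(beta)
        chain = np.array(chain[5_000:])         # discard burn-in
        print(f"posterior beta ~ {chain.mean():.3f} +/- {chain.std():.3f}")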

  8. Modeling Monte Carlo of multileaf collimators using the code GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy, providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the production of the radiation beam by the Linac is simulated and the phase space is generated. The phase space contains information such as the energy, position and direction of millions of particles (photons, electrons, positrons). In the second part, the transport of particles (sampled from the phase space) through given irradiation field configurations is simulated to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the code Geant4. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  9. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

    Full Text Available Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, a high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run-time, since the pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and the reproducibility of performance represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro's equations. The code is coupled with a Monte Carlo algorithm to evaluate statistics and the propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up for the reproduction of a small-scale rocket motor, discussing a set of parametric investigations on uncertainty propagation across the ballistic model.
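
    The paper's coupling of a ballistic model with a Monte Carlo loop can be illustrated with a much simpler steady-state relation. The sketch below propagates assumed tolerances on the burn-rate coefficient, burning area and throat area through the classical chamber-pressure equation Pc = (rho_p * a * c* * Ab/At)^(1/(1-n)); every number is a round illustrative value, not a parameter of the motor studied.

        import numpy as np

        rng = np.random.default_rng(3)
        N = 100_000

        rho_p, cstar, n = 1800.0, 1500.0, 0.35   # density, c*, pressure exponent
        a  = 2.3e-5 * (1 + 0.03  * rng.standard_normal(N))  # burn-rate coeff. (SI)
        Ab = 0.50   * (1 + 0.01  * rng.standard_normal(N))  # burning area, m^2
        At = 8.0e-4 * (1 + 0.005 * rng.standard_normal(N))  # throat area, m^2

        Pc = (rho_p * a * cstar * Ab / At) ** (1.0 / (1.0 - n))
        print(f"mean Pc = {Pc.mean() / 1e6:.1f} MPa, "
              f"3-sigma = {3 * Pc.std() / 1e6:.1f} MPa")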

  10. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    Science.gov (United States)

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.

  11. Monte Carlo simulation of atomic short range order and cluster formation in two dimensional model alloys

    International Nuclear Information System (INIS)

    Rojas T, J.; Instituto Peruano de Energia Nuclear, Lima; Manrique C, E.; Torres T, E.

    2002-01-01

    Using Monte Carlo simulation, an atomistic description of the structure and ordering processes in the Cu-Au system has been carried out in a two-dimensional model. The ABV model of the alloy is a system of N atoms A and B located on a rigid lattice with some vacant sites. In the model we assume pairwise interactions between nearest neighbors with constant ordering energy J = 0.03 eV. The dynamics was introduced by means of a vacancy that exchanges places with any of its neighboring atoms. The simulations were carried out on a square lattice with 1024 and 4096 particles, using periodic boundary conditions to avoid border effects. We calculate the first two Warren-Cowley short-range order parameters as functions of concentration and temperature. The probabilities of formation of different nine-atom clusters were also studied as functions of the alloy concentration and temperature over a wide range of values. In some regions of temperature and concentration, compositional and thermal polymorphism was observed.
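
    The first Warren-Cowley parameter used in this study is straightforward to evaluate from a lattice configuration: alpha_1 = 1 - P(B neighbour | A site) / c_B. The sketch below computes it on a randomly occupied square lattice (so alpha_1 ~ 0); in the actual study the configurations would come from the vacancy-exchange Monte Carlo dynamics described above.

        import numpy as np

        rng = np.random.default_rng(4)
        L, cB = 32, 0.25
        lat = (rng.random((L, L)) < cB).astype(int)   # 1 = B atom, 0 = A atom

        def warren_cowley_alpha1(lat, cB):
            # number of B atoms among the 4 nearest neighbours of each site
            nb = (np.roll(lat, 1, 0) + np.roll(lat, -1, 0) +
                  np.roll(lat, 1, 1) + np.roll(lat, -1, 1))
            A = lat == 0
            pB_given_A = nb[A].sum() / (4.0 * A.sum())
            return 1.0 - pB_given_A / cB

        print("alpha_1 =", warren_cowley_alpha1(lat, cB))  # ~0 for a random alloy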

  12. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as the uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.

  13. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
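
    The kind of simulation described above can be reproduced in miniature: perturb both the factor settings and the response of a toy two-factor design, refit the model many times, and read the coefficient spread directly from the Monte Carlo sample. All design points, true coefficients and noise levels below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)  # 2^2 design
        b_true = np.array([10.0, 2.0, -1.5])                       # b0, b1, b2

        coefs = []
        for _ in range(20_000):
            Xn = X + 0.05 * rng.standard_normal(X.shape)       # input variation
            y = b_true[0] + Xn @ b_true[1:] + 0.2 * rng.standard_normal(4)
            A = np.column_stack([np.ones(4), X])   # fit against nominal settings
            coefs.append(np.linalg.lstsq(A, y, rcond=None)[0])

        print("MC coefficient SDs:", np.asarray(coefs).std(axis=0).round(4))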

  14. Restricted primitive model for electrical double layers: modified HNC theory of density profiles and Monte Carlo study of differential capacitance

    International Nuclear Information System (INIS)

    Ballone, P.; Pastore, G.; Tosi, M.P.

    1986-02-01

    Interfacial properties of an ionic fluid next to a uniformly charged planar wall are studied in the restricted primitive model by both theoretical and Monte Carlo methods. The system is a 1:1 fluid of equisized charged hard spheres in a state appropriate to 1M aqueous electrolyte solutions. The interfacial density profiles of counterions and coions are evaluated by extending the hypernetted chain approximation (HNC) to include the leading bridge diagrams for the wall-ion correlations. The theoretical results compare well with those of grand canonical Monte Carlo computations of Torrie and Valleau over the whole range of surface charge density considered by these authors, thus resolving the earlier disagreement between statistical mechanical theories and simulation data at large charge densities. In view of the importance of the model as a testing ground for theories of the diffuse layer, the Monte Carlo calculations are tested by considering alternative choices for the basic simulation cell and are extended so as to allow an evaluation of the differential capacitance of the model interface by two independent methods. These involve numerical differentiation of the mean potential drop as a function of the surface charge density or alternatively an appropriate use of a fluctuation theory formula for the capacitance. The results of these two Monte Carlo approaches consistently indicate an initially smooth increase of the diffuse layer capacitance followed by structure at large charge densities, this behaviour being connected with layering of counterions as already revealed in the density profiles reported by Torrie and Valleau. (author)

  15. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. - Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  16. Monte Carlo modelling of damage to haemopoietic stem cells from internally deposited alpha-emitters

    International Nuclear Information System (INIS)

    Utteridge, T.D.; University of South Australia, Pooraka, SA; Charlton, D.E.; Turner, M.S.; Beddoe, A.H.; Leong, A. S-Y.; Milios, J.; Fazzalari, N.; To, L.B.

    1996-01-01

    Full text: Monte Carlo modelling of the alpha particle radiation dose to haemopoietic stem cells from radon decay in human marrow fat cells was undertaken following Richardson et al.'s (Brit J Radiol, 64, 608-624, 1991) proposition that such exposure could induce leukaemia, and epidemiological observations that uranium miners have not developed an excess of leukaemia (Tomasek L. et al, Lancet, 341, 919-923, 1993). The dose to haemopoietic stem cells from alpha-emitting radiopharmaceuticals proposed for radiotherapy is also important in risk assessment. Haemopoietic stem cells (presumed to be the targets for leukaemia) were identified as CD34+CD38- mononuclear cells (Terstappen LWMM et al, Blood, 77, 1218-1227, 1991) and their diameters measured using image analysis. The distribution of stem cell distances from fat cells was also measured. The model was used with Monte Carlo treatment of the alpha particle flux from radon and its short-lived decay products to estimate (a) the dose and LET distributions for the three stem cell diameters; (b) the number of passages per hit; and (c) stem cell survival. The stem cell population exhibited a trimodal distribution, with mean diameters of 5.7, 11.6 and 14.8 μm; a trimodal distribution has previously been identified in mice (Visser J et al, Exper Hematol Today, 21-27, 1977). At 40% fat in a human lumbar vertebra 3 section, approximately half the stem cells were located on, or very close to, the edge of fat cells in marrow sections. This agrees with the predicted distribution of distances between fat and stem cells obtained using a 3-D model with randomly distributed stem cells. At an air activity of 20 Bq m⁻³ (i.e. the UK average indoor radon concentration used by Richardson et al mentioned above) about 0.1 stem cells per person-year were hit and survived; at 100 Bq m⁻³ about 1 stem cell per person-year was hit and survived. Across the range of radon concentrations encountered in residential and underground miner exposures

  17. A Direct Simulation Monte Carlo Model Of Thermal Escape From Titan

    Science.gov (United States)

    Johnson, Robert E.; Tucker, O. J.

    2008-09-01

    Recent analysis of density profiles vs. altitude from the Ion Neutral Mass Spectrometer (INMS) on Cassini (Waite et al. 2005) suggests that Titan could have lost a significant amount of atmosphere in 4 Gyr at present escape rates (e.g., Johnson 2008). Strobel (2008) applied a slow hydrodynamic escape model to Titan's atmosphere using solar heating below the exobase to drive upward thermal conduction and power escape. However, near the exobase continuum models become problematic as a result of the increasing rarefaction of the atmosphere. The microscopic nature of DSMC makes it directly suitable for modeling atmospheric flow in the nominal exobase region (e.g., Michael et al. 2005). Our preliminary DSMC models have shown no evidence for slow hydrodynamic escape of N2 and CH4 from Titan's atmosphere using boundary conditions normalized to the atmospheric properties in Strobel (2008). In this paper we use a 1D radial Direct Simulation Monte Carlo (DSMC) model of heating in Titan's upper atmosphere to estimate the escape rate as a function of the Jeans parameter. In this way we can test under what conditions the suggested deviations from Jeans escape would occur. In addition, we will be able to extract the necessary energy deposition to power the heavy molecule loss rates suggested in recent models (Strobel 2008; Yelle et al. 2008). References: Michael, M., Johnson, R.E. 2005 Energy Deposition of pickup ions and heating of Titan's atmosphere. Planet. Sp. Sci. 53, 1510-1514; Johnson, R.E., "Sputtering and Heating of Titan's Upper Atmosphere", Proc. Royal Soc. (London) (2008); Strobel, D.F. 2008 Titan's hydrodynamically escaping atmosphere. Icarus 193, 588-594; Yelle, R.V., J. Cui and I.C.F. Muller-Wodarg 2008 Methane Escape from Titan's Atmosphere. J. Geophys. Res., in press; Waite, J.H., Jr., Niemann, H.B., Yelle, R.V. et al. 2005 Ion Neutral Mass Spectrometer Results from the First Flyby of Titan. Science 308, 982-986.
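
    As a point of reference for the escape rates discussed above, the classical Jeans parameter and Jeans escape flux can be evaluated directly; the exobase radius, temperature and CH4 density below are round illustrative values, not the boundary conditions of the cited models.

        import numpy as np

        k, G = 1.380649e-23, 6.674e-11
        M_titan = 1.345e23           # Titan mass, kg
        r_exo = 4.0e6                # assumed exobase radius, m
        T = 150.0                    # assumed exobase temperature, K
        m = 16 * 1.6605e-27          # CH4 molecular mass, kg
        n_exo = 1.0e13               # assumed CH4 density at exobase, m^-3

        lam = G * M_titan * m / (k * T * r_exo)     # Jeans parameter
        v_p = np.sqrt(2 * k * T / m)                # most probable speed
        phi = n_exo * v_p / (2 * np.sqrt(np.pi)) * (1 + lam) * np.exp(-lam)
        print(f"lambda = {lam:.1f}, Jeans flux = {phi:.2e} m^-2 s^-1")

    With these inputs the Jeans parameter is large and the thermal flux correspondingly tiny, which is why any measured deviation from Jeans escape is diagnostically interesting.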

  18. The effort-reward imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women.

    Science.gov (United States)

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-09-17

    We examined the influence of work-related effort-reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort-Reward Imbalance (ERI) model, on the hypothalamic-pituitary-adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort-reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort-reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort-reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported.

  19. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2016-08-15

    Purpose: A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm² agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data

  20. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kadoura, Ahmad, E-mail: ahmad.kadoura@kaust.edu.sa; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa [Computational Transport Phenomena Laboratory, The Earth Sciences and Engineering Department, The Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia); Siripatana, Adil, E-mail: adil.siripatana@kaust.edu.sa; Hoteit, Ibrahim, E-mail: ibrahim.hoteit@kaust.edu.sa [Earth Fluid Modeling and Predicting Group, The Earth Sciences and Engineering Department, The Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia); Knio, Omar, E-mail: omar.knio@kaust.edu.sa [Uncertainty Quantification Center, The Applied Mathematics and Computational Science Department, The Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia)

    2016-06-07

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
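
    The surrogate idea itself is compact: fit an orthogonal-polynomial expansion to a handful of expensive model evaluations, then query the polynomial instead of the model. The sketch below does this in one dimension with a Legendre basis (the orthogonal family for uniformly distributed inputs), replacing the MC molecular simulation with a cheap stand-in function purely for illustration.

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(6)

        def expensive_model(eps):            # stand-in for an MC simulation
            return np.exp(0.5 * eps) * np.sin(3 * eps)

        xs = rng.uniform(-1, 1, 64)          # "training" runs
        coef = legendre.legfit(xs, expensive_model(xs), deg=8)

        xt = rng.uniform(-1, 1, 100_000)     # cheap surrogate evaluations
        err = np.abs(legendre.legval(xt, coef) - expensive_model(xt)).max()
        print("max surrogate error over 1e5 samples:", err)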

  1. A Monte Carlo model for the intermittent plasticity of micro-pillars

    International Nuclear Information System (INIS)

    Ng, K S; Ngan, A H W

    2008-01-01

    Earlier compression experiments on micrometre-sized aluminium pillars, fabricated by focused-ion beam milling, using a flat-punch nanoindenter revealed that post-yield deformation during constant-rate loading was jerky with interspersing strain bursts and linear elastic segments. Under load hold, the pillars crept mainly by means of sporadic strain bursts. In this work, a Monte Carlo simulation model is developed, with two statistics gathered from the load-ramp experiments as input, to account for the jerky deformation during the load ramp as well as load hold. Under load-ramp conditions, the simulations successfully captured other experimental observations made independently from the two inputs, namely, the diverging behaviour of the jerky stress–strain response at higher stresses, the increase in burst frequency and burst size with stress and the overall power-law distribution of the burst size. The model also predicts creep behaviour agreeable with the experimental observations, namely, the occurrence of sporadic bursts with frequency depending on stress, creep time and pillar dimensions

  2. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
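
    The analytical theory referred to above yields, for independent identically distributed points, a closed-form probability of winning a game from the point-win probability p, and a direct Monte Carlo simulation of games provides a check. The sketch below implements both; the closed form is the standard iid-points result, not the dissertation's non-iid extension.

        import numpy as np

        def p_game(p):
            # game-win probability from point-win probability p (iid points)
            q = 1 - p
            deuce = p**2 / (1 - 2 * p * q)       # win from deuce
            return p**4 * (1 + 4*q + 10*q**2) + 20 * p**3 * q**3 * deuce

        def mc_game(p, n=100_000, seed=7):
            rng, wins = np.random.default_rng(seed), 0
            for _ in range(n):
                a = b = 0
                while True:
                    if rng.random() < p:
                        a += 1
                    else:
                        b += 1
                    if a >= 4 and a - b >= 2:
                        wins += 1
                        break
                    if b >= 4 and b - a >= 2:
                        break
            return wins / n

        print(p_game(0.6), mc_game(0.6))   # both ~0.736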

  3. Monte Carlo modeling of neutron and gamma-ray imaging systems

    International Nuclear Information System (INIS)

    Hall, J.

    1996-04-01

    Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex, energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with ''real world'' complexity, specify detailed elemental and isotopic distributions and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage and cargo-scanning systems, and international treaty verification

  4. Neutron and gamma sensitivities of self-powered detectors: Monte Carlo modelling

    Energy Technology Data Exchange (ETDEWEB)

    Vermeeren, Ludo [SCK-CEN, Nuclear Research Centre, Boeretang 200, B-2400 Mol, (Belgium)

    2015-07-01

    This paper deals with the development of a detailed Monte Carlo approach for the calculation of the absolute neutron sensitivity of SPNDs, which makes use of the MCNP code. We will explain the calculation approach, including the activation and beta emission steps, the gamma-electron interactions, the charge deposition in the various detector parts and the effect of the space charge field in the insulator. The model can also be applied for the calculation of the gamma sensitivity of self-powered detectors and for the radiation-induced currents in signal cables. The model yields detailed information on the various contributions to the sensor currents, with distinct response times. Results for the neutron sensitivity of various types of SPNDs are in excellent agreement with experimental data obtained at the BR2 research reactor. For typical neutron to gamma flux ratios, the calculated gamma induced SPND currents are significantly lower than the neutron induced currents. The gamma sensitivity depends very strongly upon the immediate detector surroundings and on the gamma spectrum. Our calculation method opens the way to a reliable on-line determination of the absolute in-pile thermal neutron flux. (authors)

  5. Application of a Monte Carlo linac model in routine verifications of dose calculations

    International Nuclear Information System (INIS)

    Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.

    2015-01-01

    The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code. As reference for the simulations, the optimal beam parameter values (energy and FWHM) previously obtained were used. Deposited dose calculations in water phantoms were done for the typical complex geometries commonly used in acceptance and quality control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between them were analyzed from the calculations in water. Similarly, simulations were performed on phantoms obtained from CT studies of real patients, comparing the dose distribution calculated with EGSnrc against the dose distribution obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All the results showed great agreement with measurements, all of them falling within tolerance limits. These results open the possibility of using the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)

  6. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.

  7. Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah

    2013-01-01

    Full-text: The aim of this research is to stimulate the effective use of a general purpose irradiation chamber to contain pure neutron particles obtained from a research reactor. The secondary neutron and gamma particle dose discharged from the chamber layers will be used as a platform to estimate the safe dimensions of the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial grade aluminium (Al) cladding, is proposed for use in interacting samples with pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation based on the general Monte Carlo N-Particle transport code, using the Los Alamos MCNPX software. Simulations were performed on the model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model used is based on the neutron source found in the PUSPATI TRIGA MARK II research reactor, which holds a maximum flux value of 1 × 10¹² neutron/cm²·s. The expected outcomes of this research are zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day discharged from the chamber system. (author)

  8. Nanostructure evolution of neutron-irradiated reactor pressure vessel steels: Revised Object kinetic Monte Carlo model

    Energy Technology Data Exchange (ETDEWEB)

    Chiapetto, M., E-mail: mchiapet@sckcen.be [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium); Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Messina, L. [DEN-Service de Recherches de Métallurgie Physique, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France); KTH Royal Institute of Technology, Roslagstullsbacken 21, SE-114 21 Stockholm (Sweden); Becquart, C.S. [Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Olsson, P. [KTH Royal Institute of Technology, Roslagstullsbacken 21, SE-114 21 Stockholm (Sweden); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium)

    2017-02-15

    This work presents a revised set of parameters to be used in an Object kinetic Monte Carlo model to simulate the microstructure evolution under neutron irradiation of reactor pressure vessel steels at the operational temperature of light water reactors (∼300 °C). Within a “grey-alloy” approach, a more physical description than in a previous work is used to translate the effect of Mn and Ni solute atoms on the defect cluster diffusivity reduction. The slowing down of self-interstitial clusters, due to the interaction between solutes and crowdions in Fe is now parameterized using binding energies from the latest DFT calculations and the solute concentration in the matrix from atom-probe experiments. The mobility of vacancy clusters in the presence of Mn and Ni solute atoms was also modified on the basis of recent DFT results, thereby removing some previous approximations. The same set of parameters was seen to predict the correct microstructure evolution for two different types of alloys, under very different irradiation conditions: an Fe-C-MnNi model alloy, neutron irradiated at a relatively high flux, and a high-Mn, high-Ni RPV steel from the Swedish Ringhals reactor surveillance program. In both cases, the predicted self-interstitial loop density matches the experimental solute cluster density, further corroborating the surmise that the MnNi-rich nanofeatures form by solute enrichment of immobilized small interstitial loops, which are invisible to the electron microscope.

  9. Monte Carlo modelling of a-Si EPID response: The effect of spectral variations with field size and position

    International Nuclear Information System (INIS)

    Parent, Laure; Seco, Joao; Evans, Phil M.; Fielding, Andrew; Dance, David R.

    2006-01-01

    This study focused on predicting the electronic portal imaging device (EPID) image of intensity modulated radiation treatment (IMRT) fields in the absence of attenuation material in the beam with Monte Carlo methods. As IMRT treatments consist of a series of segments of various sizes that are not always delivered on the central axis, large spectral variations may be observed between the segments. The effect of these spectral variations on the EPID response was studied with fields of various sizes and off-axis positions. A detailed description of the EPID was implemented in a Monte Carlo model. The EPID model was validated by comparing the EPID output factors for field sizes between 1 x 1 and 26 x 26 cm² at the isocenter. The Monte Carlo simulations agreed with the measurements to within 1.5%. The Monte Carlo model succeeded in predicting the EPID response at the center of the fields of various sizes and offsets to within 1% of the measurements. Large variations (up to 29%) of the EPID response were observed between the various offsets. The EPID response increased with field size and with field offset for most cases. The Monte Carlo model was then used to predict the image of a simple test IMRT field delivered on the beam axis and with an offset. A variation of EPID response up to 28% was found between the on- and off-axis delivery. Finally, two clinical IMRT fields were simulated and compared to the measurements. For all IMRT fields, simulations and measurements agreed within 3%/0.2 cm for 98% of the pixels. The spectral variations were quantified by extracting from the spectra at the center of the fields the total photon yield (Y_total), the photon yield below 1 MeV (Y_low), and the percentage of photons below 1 MeV (P_low). For the studied cases, a correlation was shown between the EPID response variation and Y_total, Y_low, and P_low

  10. A least-effort principle based model for heterogeneous pedestrian flow considering overtaking behavior

    Science.gov (United States)

    Liu, Chi; Ye, Rui; Lian, Liping; Song, Weiguo; Zhang, Jun; Lo, Siuming

    2018-05-01

    In the context of global aging, how to design traffic facilities for a population with a different age composition is of high importance. For this purpose, we propose a model based on the least effort principle to simulate heterogeneous pedestrian flow. In the model, the pedestrian is represented by a three-disc shaped agent. We add a new parameter to realize pedestrians' preference to avoid changing their direction of movement too quickly. The model is validated with numerous experimental data on unidirectional pedestrian flow. In addition, we investigate the influence of corridor width and velocity distribution of crowds on unidirectional heterogeneous pedestrian flow. The simulation results reflect that widening corridors could increase the specific flow for the crowd composed of two kinds of pedestrians with significantly different free velocities. Moreover, compared with a unified crowd, the crowd composed of pedestrians with great mobility differences requires a wider corridor to attain the same traffic efficiency. This study could be beneficial in providing a better understanding of heterogeneous pedestrian flow, and quantified outcomes could be applied in traffic facility design.

  11. Modeling of continuous free-radical butadiene-styrene copolymerization process by the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    T. A. Mikhailova

    2016-01-01

    Full Text Available This paper presents an algorithm, based on the Monte Carlo method, for modeling the continuous low-temperature free-radical butadiene-styrene copolymerization process in emulsion. This process is the cornerstone of the industrial production of butadiene-styrene synthetic rubber, the most widespread large-capacity general-purpose rubber. The basis of the modeling algorithm is the simulation of the growth of each macromolecule of the formed copolymer and the tracking of the processes it undergoes. Modeling is carried out taking into account the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds in a battery of serially connected polymerization reactors, each represented as a continuous stirred tank reactor. Since the process is continuous, the continuous addition of portions to the reaction mixture in the first reactor of the battery is considered. The constructed model allows one to study the molecular-weight and viscous characteristics of the formed copolymerization product, to predict the mass content of butadiene and styrene in the copolymer, and to calculate the molecular-weight distribution of the product at any moment of the process. Computational experiments were used to analyze how the mode of introducing the regulator during the process influences the characteristics of the formed butadiene-styrene copolymer. As the considered process involves monomers of two types, the model also allows one to study the compositional heterogeneity of the product, that is, to calculate the composition distribution and the distribution of macromolecules by size and structure. On the basis of the proposed algorithm, a software tool was created that allows tracking the changes in the characteristics of the resulting product over time.
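
    The macromolecule-growth step at the heart of such an algorithm can be sketched with the terminal model of copolymerization, where the next monomer added depends only on the current chain end. The reactivity ratios below are typical literature values for the butadiene/styrene pair and the feed composition is arbitrary; the residence-time distribution, reactor battery and regulator chemistry of the full model are not reproduced.

        import numpy as np

        rng = np.random.default_rng(8)
        r1, r2 = 1.4, 0.5            # assumed reactivity ratios (1 = butadiene)

        def grow_chain(f1, length, rng):
            # fraction of monomer 1 in one chain grown from feed fraction f1
            f2 = 1.0 - f1
            last = 0 if rng.random() < f1 else 1
            counts = [0, 0]
            counts[last] += 1
            for _ in range(length - 1):
                if last == 0:        # chain ends in monomer 1
                    p11 = r1 * f1 / (r1 * f1 + f2)
                    last = 0 if rng.random() < p11 else 1
                else:                # chain ends in monomer 2
                    p22 = r2 * f2 / (r2 * f2 + f1)
                    last = 1 if rng.random() < p22 else 0
                counts[last] += 1
            return counts[0] / length

        comp = [grow_chain(0.6, 1000, rng) for _ in range(500)]
        print("mean butadiene fraction in copolymer:", round(float(np.mean(comp)), 3))

    For this feed the simulated composition reproduces the Mayo-Lewis prediction of roughly 0.70 butadiene.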

  12. Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics.

    Science.gov (United States)

    Yang, Qian; Sing-Long, Carlos A; Reed, Evan J

    2017-08-01

    We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.

  13. Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models.

    Science.gov (United States)

    Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George

    2015-09-01

    To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of 99mTc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel-hole lead collimator, with an image resolution of 3.54 × 3.54 mm². Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was
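
    The first-order features named in the record (mean, SD, skewness, kurtosis) are straightforward to compute over an organ mask; the sketch below is a minimal illustration on toy data, with the spatially varying "attenuation correction" factor purely illustrative.

        import numpy as np
        from scipy import stats

        def first_order_features(slice_img, organ_mask):
            """First-order textural features over the pixels of one organ."""
            vals = slice_img[organ_mask]
            return {"mean": float(vals.mean()),
                    "sd": float(vals.std(ddof=1)),
                    "skewness": float(stats.skew(vals)),
                    "kurtosis": float(stats.kurtosis(vals))}

        rng = np.random.default_rng(1)
        mask = np.zeros((64, 64), dtype=bool)
        mask[20:40, 20:40] = True                      # toy organ mask
        no_ac = rng.gamma(2.0, 1.0, (64, 64))          # toy slice without AC
        with_ac = no_ac * np.linspace(1.2, 2.5, 64)    # depth-dependent correction (toy)

        for name, img in [("no AC", no_ac), ("with AC", with_ac)]:
            print(name, first_order_features(img, mask))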

  14. The structure of molten CuCl: Reverse Monte Carlo modeling with high-energy X-ray diffraction data and molecular dynamics of a polarizable ion model

    International Nuclear Information System (INIS)

    Alcaraz, Olga; Trullàs, Joaquim; Tahara, Shuta; Kawakita, Yukinobu; Takeda, Shin’ichi

    2016-01-01

    The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å⁻¹ related to intermediate range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate range copper-copper correlations caused by the polarized chlorides.

  15. The structure of molten CuCl: Reverse Monte Carlo modeling with high-energy X-ray diffraction data and molecular dynamics of a polarizable ion model

    Energy Technology Data Exchange (ETDEWEB)

    Alcaraz, Olga; Trullàs, Joaquim, E-mail: quim.trullas@upc.edu [Departament de Física i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Campus Nord UPC B4-B5, 08034 Barcelona (Spain); Tahara, Shuta [Department of Physics and Earth Sciences, Faculty of Science, University of the Ryukyus, Okinawa 903-0213 (Japan); Kawakita, Yukinobu [J-PARC Center, Japan Atomic Energy Agency (JAEA), Ibaraki 319-1195 (Japan); Takeda, Shin’ichi [Department of Physics, Faculty of Sciences, Kyushu University, Fukuoka 819-0395 (Japan)

    2016-09-07

    The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å⁻¹ related to intermediate range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate range copper-copper correlations caused by the polarized chlorides.

  16. A Collaborative Effort Between Caribbean States for Tsunami Numerical Modeling: Case Study CaribeWave15

    Science.gov (United States)

    Chacón-Barrantes, Silvia; López-Venegas, Alberto; Sánchez-Escobar, Rónald; Luque-Vergara, Néstor

    2018-04-01

    Historical records have shown that tsunamis have affected the Caribbean region in the past. Although infrequent, recent studies have demonstrated that they pose a latent hazard for countries within this basin. The Hazard Assessment Working Group of the ICG/CARIBE-EWS (Intergovernmental Coordination Group of the Early Warning System for Tsunamis and Other Coastal Threats for the Caribbean Sea and Adjacent Regions) of IOC/UNESCO has a modeling subgroup, which seeks to develop a modeling platform to assess the effects of possible tsunami sources within the basin. The CaribeWave tsunami exercise is carried out annually in the Caribbean region to increase awareness and test tsunami preparedness of countries within the basin. In this study we present results of tsunami inundation using the CaribeWave15 exercise scenario for four selected locations within the Caribbean basin (Colombia, Costa Rica, Panamá and Puerto Rico), performed by tsunami modeling researchers from those selected countries. The purpose of this study was to provide the states with additional results for the exercise. The results obtained here were compared to co-seismic deformation and tsunami heights within the basin (energy plots) provided for the exercise to assess the performance of the decision support tools distributed by PTWC (Pacific Tsunami Warning Center), the tsunami service provider for the Caribbean basin. However, comparison of coastal tsunami heights was not possible, due to inconsistencies between the provided fault parameters and the modeling results within the provided exercise products. Still, the modeling performed here allowed us to analyze tsunami characteristics at the mentioned states from sources within the North Panamá Deformed Belt. The occurrence of a tsunami in the Caribbean may affect several countries because many of them share coastal zones in this basin. Therefore, collaborative efforts similar to the one presented in this study, particularly between neighboring

  17. Free energy and phase equilibria for the restricted primitive model of ionic fluids from Monte Carlo simulations

    International Nuclear Information System (INIS)

    Orkoulas, G.; Panagiotopoulos, A.Z.

    1994-01-01

    In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T_c* = 0.053, ρ_c* = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids
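
    The distance-biased insertion idea can be illustrated in isolation; the sketch below draws only the radial part of a counterion placement from a Boltzmann-weighted proposal (reduced units, illustrative Bjerrum length), and notes the reweighting such a bias requires. It is a schematic of the sampling trick, not the authors' full grand canonical scheme.

        import numpy as np

        rng = np.random.default_rng(2)
        LB, R_CUT = 5.0, 3.0   # Bjerrum length and bias cutoff (illustrative; hard core at r = 1)

        def sample_pair_separation(n_grid=1000):
            """Draw an unlike-ion separation r from p(r) ~ r^2 exp(+LB/r) on [1, R_CUT].

            Returns (r, w) with w = p_bias(r)/p_uniform(r); detailed balance requires
            dividing the usual Metropolis acceptance probability by this weight."""
            r = np.linspace(1.0, R_CUT, n_grid)
            p = r**2 * np.exp(LB / r)          # volume element times Boltzmann factor
            p /= p.sum()
            i = rng.choice(n_grid, p=p)
            return r[i], p[i] * n_grid         # n_grid = 1 / p_uniform

        r, w = sample_pair_separation()
        print(f"proposed separation r = {r:.2f}, bias weight = {w:.2f}")

    The point of the bias is exactly what the abstract states: short unlike-ion distances, which dominate at low temperature, are proposed far more often than uniform placement would achieve.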

  18. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    Full Text Available In this paper, we compare different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme, and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed for a simplified high-temperature gas-cooled reactor (HTGR) system, with and without modeling of control-rod withdrawal. Useful conclusions are formulated on the basis of the results.
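
    The difference between a staircase step and a slope-type (predictor-corrector) step is easy to see on a one-nuclide toy problem; the sketch below is a schematic of the step logic only and does not reflect the MCB5 implementation details.

        import math

        SIGMA, PHI0 = 1.0, 1.0
        rate = lambda n: -SIGMA * PHI0 * n      # toy depletion: dN/dt = -sigma*phi*N

        def staircase(n0, dt, steps):
            """Reaction rates frozen at their beginning-of-step values (staircase model)."""
            n = n0
            for _ in range(steps):
                n += dt * rate(n)
            return n

        def slope_step(n0, dt, steps):
            """Predictor-corrector: average beginning-of-step and predicted end-of-step rates."""
            n = n0
            for _ in range(steps):
                r1 = rate(n)
                r2 = rate(n + dt * r1)          # rate at the predicted end of step
                n += dt * 0.5 * (r1 + r2)
            return n

        print("staircase:", staircase(1.0, 0.25, 4))   # 0.316...
        print("slope:    ", slope_step(1.0, 0.25, 4))  # 0.372...
        print("exact:    ", math.exp(-1.0))            # 0.368...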

  19. Index of Effort: An Analytical Model for Evaluating and Re-Directing Student Recruitment Activities for a Local Community College.

    Science.gov (United States)

    Landini, Albert J.

    This index of effort is proposed as a means by which those in charge of student recruitment activities at community colleges can be sure that their efforts are being directed toward all of the appropriate population. The index is an analytical model based on the concept of socio-economic profiles, using small area 1970 census data, and is the…

  20. Effects of fishing effort allocation scenarios on energy efficiency and profitability: an individual-based model applied to Danish fisheries

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Andersen, Bo Sølgaard

    2010-01-01

    to the harbour, and (C) allocating effort towards optimising the expected area-specific profit per trip. The model is informed by data from each Danish fishing vessel >15 m after coupling its high resolution spatial and temporal effort data (VMS) with data from logbook landing declarations, sales slips, vessel...... effort allocation has actually been sub-optimal because increased profits from decreased fuel consumption and larger landings could have been obtained by applying a different spatial effort allocation. Based on recent advances in VMS and logbooks data analyses, this paper contributes to improve...

  1. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim

    2016-01-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones particles.

  2. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    Science.gov (United States)

    Zhang, D.; Liao, Q.

    2016-12-01

    The Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since the MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids, which takes into account the different importance of the parameters under the condition of high random dimensions in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of
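
    A stripped-down version of the surrogate-accelerated MCMC loop is sketched below; an ordinary polynomial fit stands in for the sparse-grid stochastic collocation surrogate, and the forward model, noise level, and prior are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-in for an expensive forward model, and a cheap polynomial surrogate
        # fitted on normalized collocation nodes.
        forward = lambda k: np.exp(k) + k
        u_nodes = np.linspace(-1.0, 1.0, 15)
        coef = np.polyfit(u_nodes, forward(2.0 * u_nodes), deg=8)
        surrogate = lambda k: np.polyval(coef, k / 2.0)     # valid on k in [-2, 2]

        k_true, sigma = 0.7, 0.1
        d_obs = forward(k_true) + rng.normal(0.0, sigma)
        log_post = lambda k: (-0.5 * ((d_obs - surrogate(k)) / sigma) ** 2
                              - 0.5 * k ** 2)               # Gaussian prior N(0, 1)

        # Random-walk Metropolis on the surrogate posterior: each step costs one
        # polynomial evaluation instead of one full forward simulation.
        k, chain = 0.0, []
        for _ in range(20000):
            k_new = k + rng.normal(0.0, 0.3)
            if np.log(rng.random()) < log_post(k_new) - log_post(k):
                k = k_new
            chain.append(k)
        print("posterior mean:", np.mean(chain[2000:]))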

  3. Current Observational Constraints to Holographic Dark Energy Model with New Infrared cut-off via Markov Chain Monte Carlo Method

    OpenAIRE

    Wang, Yuting; Xu, Lixin

    2010-01-01

    In this paper, the holographic dark energy model with new infrared (IR) cut-off for both the flat case and the non-flat case are confronted with the combined constraints of current cosmological observations: type Ia Supernovae, Baryon Acoustic Oscillations, current Cosmic Microwave Background, and the observational hubble data. By utilizing the Markov Chain Monte Carlo (MCMC) method, we obtain the best fit values of the parameters with $1\\sigma, 2\\sigma$ errors in the flat model: $\\Omega_{b}h...

  4. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
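
    The two-stage strategy (derivative screening, then a Monte Carlo response surface) can be condensed as follows; the three-parameter "model" is a stand-in for the plant growth model, and the quadratic surface is fitted by ordinary least squares.

        import numpy as np

        rng = np.random.default_rng(4)

        # Stand-in model: strongly sensitive to p0, moderately to p1, barely to p2.
        model = lambda p: 5.0 * p[0] + 0.5 * p[1] + 0.01 * p[2] + 0.3 * p[0] * p[1]
        p_nom = np.array([1.0, 1.0, 1.0])

        # Stage 1: screening via numerical linear sensitivity coefficients.
        eps = 1e-4
        sens = np.array([(model(p_nom + eps * np.eye(3)[i]) - model(p_nom)) / eps
                         for i in range(3)])
        keep = np.argsort(-np.abs(sens))[:2]        # retain the two most sensitive
        print("sensitivities:", np.round(sens, 3), "-> keep parameters", keep)

        # Stage 2: Monte Carlo sampling of the retained parameters, then a
        # quadratic response surface fitted to the simulated outputs.
        n = 500
        P = np.tile(p_nom, (n, 1))
        P[:, keep] = rng.uniform(0.5, 1.5, size=(n, 2))
        y = np.array([model(p) for p in P])
        x1, x2 = P[:, keep[0]], P[:, keep[1]]
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("response-surface coefficients:", np.round(beta, 3))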

  5. A Monte-Carlo simulation of the behaviour of electron swarms in hydrogen using an anisotropic scattering model

    International Nuclear Information System (INIS)

    Blevin, H.A.; Fletcher, J.; Hunter, S.R.

    1978-05-01

    In a recent paper, a Monte-Carlo simulation of electron swarms in hydrogen using an isotropic scattering model was reported. In this previous work discrepancies between the predicted and measured electron transport parameters were observed. In this paper a far more realistic anisotropic scattering model is used. Good agreement between predicted and experimental data is observed and the simulation code has been used to calculate various parameters which are not directly measurable

  6. Habitat models to assist plant protection efforts in Shenandoah National Park, Virginia, USA

    Science.gov (United States)

    Van Manen, F.T.; Young, J.A.; Thatcher, C.A.; Cass, W.B.; Ulrey, C.

    2005-01-01

    During 2002, the National Park Service initiated a demonstration project to develop science-based law enforcement strategies for the protection of at-risk natural resources, including American ginseng (Panax quinquefolius L.), bloodroot (Sanguinaria canadensis L.), and black cohosh (Cimicifuga racemosa (L.) Nutt. [syn. Actaea racemosa L.]). Harvest pressure on these species is increasing because of the growing herbal remedy market. We developed habitat models for Shenandoah National Park and the northern portion of the Blue Ridge Parkway to determine the distribution of favorable habitats of these three plant species and to demonstrate the use of that information to support plant protection activities. We compiled locations for the three plant species to delineate favorable habitats with a geographic information system (GIS). We mapped potential habitat quality for each species by calculating a multivariate statistic, Mahalanobis distance, based on GIS layers that characterized the topography, land cover, and geology of the plant locations (10-m resolution). We tested model performance with an independent dataset of plant locations, which indicated a significant relationship between Mahalanobis distance values and species occurrence. We also generated null models by examining the distribution of the Mahalanobis distance values had plants been distributed randomly. For all species, the habitat models performed markedly better than their respective null models. We used our models to direct field searches to the most favorable habitats, resulting in a sizeable number of new plant locations (82 ginseng, 73 bloodroot, and 139 black cohosh locations). The odds of finding new plant locations based on the habitat models were 4.5 (black cohosh) to 12.3 (American ginseng) times greater than random searches; thus, the habitat models can be used to improve the efficiency of plant protection efforts (e.g., marking of plants, law enforcement activities). The field searches also

  7. Modeling the Movement of Homicide by Type to Inform Public Health Prevention Efforts.

    Science.gov (United States)

    Zeoli, April M; Grady, Sue; Pizarro, Jesenia M; Melde, Chris

    2015-10-01

    We modeled the spatiotemporal movement of hotspot clusters of homicide by motive in Newark, New Jersey, to investigate whether different homicide types have different patterns of clustering and movement. We obtained homicide data from the Newark Police Department Homicide Unit's investigative files from 1997 through 2007 (n = 560). We geocoded the address at which each homicide victim was found and recorded the date of and the motive for the homicide. We used cluster detection software to model the spatiotemporal movement of statistically significant homicide clusters by motive, using census tract and month of occurrence as the spatial and temporal units of analysis. Gang-motivated homicides showed evidence of clustering and diffusion through Newark. Additionally, gang-motivated homicide clusters overlapped to a degree with revenge and drug-motivated homicide clusters. Escalating dispute and nonintimate familial homicides clustered; however, there was no evidence of diffusion. Intimate partner and robbery homicides did not cluster. By tracking how homicide types diffuse through communities and determining which places have ongoing or emerging homicide problems by type, we can better inform the deployment of prevention and intervention efforts.

  8. A virtual source model for Monte Carlo simulation of helical tomotherapy.

    Science.gov (United States)

    Yuan, Jiankui; Rong, Yi; Chen, Quan

    2015-01-08

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in patients for helical tomotherapy without the need of calculating phase-space files (PSFs). Current studies on the tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency contained in the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criterion. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent

  9. Estimation of snow albedo reduction by light absorbing impurities using Monte Carlo radiative transfer model

    Science.gov (United States)

    Sengupta, D.; Gao, L.; Wilcox, E. M.; Beres, N. D.; Moosmüller, H.; Khlystov, A.

    2017-12-01

    wavelength range (300 nm - 2000 nm). Results will be compared with the SNICAR model to better understand the differences in snow albedo computation between plane-parallel methods and the statistical Monte Carlo methods.

  10. Verification of the VEF photon beam model for dose calculations by the voxel-Monte-Carlo-algorithm

    International Nuclear Information System (INIS)

    Kriesen, S.; Fippel, M.

    2005-01-01

    The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tuebingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning. (orig.)

  11. [Verification of the VEF photon beam model for dose calculations by the Voxel-Monte-Carlo-Algorithm].

    Science.gov (United States)

    Kriesen, Stephan; Fippel, Matthias

    2005-01-01

    The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tübingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning.

  12. Economic effort management in multispecies fisheries: the FcubEcon model

    DEFF Research Database (Denmark)

    Hoff, Ayoe; Frost, Hans; Ulrich, Clara

    2010-01-01

    in the development of management tools based on fleets, fisheries, and areas, rather than on unit fish stocks. A natural consequence of this has been to consider effort rather than quota management, a final effort decision being based on fleet-harvest potential and fish-stock-preservation considerations. Effort...... allocation between fleets should not be based on biological considerations alone, but also on the economic behaviour of fishers, because fisheries management has a significant impact on human behaviour as well as on ecosystem development. The FcubEcon management framework for effort allocation between fleets......

  13. Comparison of serological and milk tests for bovine brucellosis using a Monte Carlo simulation model

    Directory of Open Access Journals (Sweden)

    V. Caporale

    2004-01-01

    Full Text Available European Union (EU) Directive 97/12/EC allows the trade within the EU of cattle originating from an 'officially brucellosis-free herd'. To qualify for this status, a number of different programmes must be implemented, and each EU Member Country is free to decide which procedure to use to qualify herds. The authors conducted a study to compare the merits and costs of the testing programmes given in the Directive and of some alternative testing strategies. The effectiveness of the testing programmes was evaluated by a Monte Carlo simulation model. The programmes listed in the Directive do not appear to have identical sensitivity and specificity. Simulations of the programmes showed that milk testing may be more effective and efficient than blood testing in identifying infected herds. The results indicate that it could be advisable for legislation, rather than defining very detailed procedures for both laboratory tests and testing programmes, to establish minimal requirements in terms of the efficacy of testing procedures (i.e., the probability of detecting an infected herd).

  14. Monte Carlo simulations of the Spin-2 Blume-Emery-Griffiths model

    International Nuclear Information System (INIS)

    Iwashita, Takashi; Uragami, Kakuko; Muraoka, Yoshinori; Kinoshita, Takehiro; Idogaki, Toshihiro

    2010-01-01

    The magnetic properties of the spin S = 2 Ising system with the bilinear exchange interaction J_1 S_iz S_jz, the biquadratic exchange interaction J_2 S_iz² S_jz² and the single-ion anisotropy D S_iz² are discussed by making use of the Monte Carlo (MC) simulation for the magnetization ⟨S_z⟩, the sub-lattice magnetizations ⟨S_z(A)⟩ and ⟨S_z(B)⟩, the magnetic specific heat C_M and the spin structures. This Ising spin system of S = 2 with interactions J_1 and J_2 and with anisotropy D corresponds to the spin-2 Blume-Emery-Griffiths model. The phase diagram of this Ising spin system on a two-dimensional square lattice has been obtained as a function of the exchange parameter J_2/J_1 and the anisotropy parameter D/J_1. The shapes of the temperature dependence of the sublattice magnetizations ⟨S_z(A)⟩ and ⟨S_z(B)⟩ are related to the anomalous behavior of the temperature dependence of ⟨S_z⟩ at low temperatures and are affected significantly by the single-ion anisotropy D. The staggered quadrupolar (SQ) ordering turns out to differ considerably between Ising systems with the single-ion anisotropy (D ≠ 0) and without it (D = 0).

  15. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combining this with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.
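
    A minimal hybrid simulation in that spirit is sketched below; the triangular credibility distribution and its inverse follow the standard credibility-theory definition (credibility as the average of possibility and necessity), while the safety-factor expression and all load statistics are illustrative placeholders.

        import numpy as np

        rng = np.random.default_rng(5)

        def triangular_credibility_inverse(u, a, b, c):
            """Inverse credibility distribution of a triangular fuzzy variable (a, b, c)."""
            return np.where(u < 0.5,
                            a + 2.0 * u * (b - a),
                            2.0 * b - c + 2.0 * u * (c - b))

        n = 100_000
        f = triangular_credibility_inverse(rng.random(n), 0.55, 0.70, 0.85)  # fuzzy friction
        W = rng.normal(1000.0, 50.0, n)     # vertical load (random, illustrative)
        P = rng.normal(550.0, 120.0, n)     # horizontal load (random, illustrative)
        K = f * W / P                       # toy sliding safety factor
        print("estimated failure risk:", np.mean(K < 1.0))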

  16. Optimization of dual-wavelength intravascular photoacoustic imaging of atherosclerotic plaques using Monte Carlo optical modeling

    Science.gov (United States)

    Dana, Nicholas; Sowers, Timothy; Karpiouk, Andrei; Vanderlaan, Donald; Emelianov, Stanislav

    2017-10-01

    Coronary heart disease (the presence of coronary atherosclerotic plaques) is a significant health problem in the industrialized world. A clinical method to accurately visualize and characterize atherosclerotic plaques is needed. Intravascular photoacoustic (IVPA) imaging is being developed to fill this role, but questions remain regarding optimal imaging wavelengths. We utilized a Monte Carlo optical model to simulate IVPA excitation in coronary tissues, identifying optimal wavelengths for plaque characterization. Near-infrared wavelengths (≤1800 nm) were simulated, and single- and dual-wavelength data were analyzed for accuracy of plaque characterization. Results indicate light penetration is best in the range of 1050 to 1370 nm, where 5% residual fluence can be achieved at clinically relevant depths of ≥2 mm in arteries. Across the arterial wall, fluence may vary by over 10-fold, confounding plaque characterization. For single-wavelength results, plaque segmentation accuracy peaked at 1210 and 1720 nm, though correlation was poor (blood, a primary and secondary wavelength near 1210 and 1350 nm, respectively, may offer the best implementation of dual-wavelength IVPA imaging. These findings could guide the development of a cost-effective clinical system by highlighting optimal wavelengths and improving plaque characterization.

  17. Monte Carlo Modeling of Sodium in Mercury's Exosphere During the First Two MESSENGER Flybys

    Science.gov (United States)

    Burger, Matthew H.; Killen, Rosemary M.; Vervack, Ronald J., Jr.; Bradley, E. Todd; McClintock, William E.; Sarantos, Menelaos; Benna, Mehdi; Mouawad, Nelly

    2010-01-01

    We present a Monte Carlo model of the distribution of neutral sodium in Mercury's exosphere and tail using data from the Mercury Atmospheric and Surface Composition Spectrometer (MASCS) on the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft during the first two flybys of the planet in January and September 2008. We show that the dominant source mechanism for ejecting sodium from the surface is photon-stimulated desorption (PSD) and that the desorption rate is limited by the diffusion rate of sodium from the interior of grains in the regolith to the topmost few monolayers where PSD is effective. In the absence of ion precipitation, we find that the sodium source rate is limited to approximately 10^6-10^7 per square centimeter per second, depending on the sticking efficiency of exospheric sodium that returns to the surface. The diffusion rate must be at least a factor of 5 higher in regions of ion precipitation to explain the MASCS observations during the second MESSENGER flyby. We estimate that impact vaporization of micrometeoroids may provide up to 15% of the total sodium source rate in the regions observed. Although sputtering by precipitating ions was found not to be a significant source of sodium during the MESSENGER flybys, ion precipitation is responsible for increasing the source rate at high latitudes through ion-enhanced diffusion.

  18. Monte Carlo study of the double and super-exchange model with lattice distortion

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, J R; Vallejo, E; Navarro, O [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico, Apartado Postal 70-360, 04510 Mexico D. F. (Mexico); Avignon, M, E-mail: jrsuarez@iim.unam.m [Institut Neel, Centre National de la Recherche Scientifique (CNRS) and Universite Joseph Fourier, BP 166, 38042 Grenoble Cedex 9 (France)

    2009-05-01

    In this work a magneto-elastic phase transition was obtained in a linear chain due to the interplay between magnetism and lattice distortion in a double- and super-exchange model. We consider a linear chain of localized classical spins interacting with itinerant electrons. Due to the double-exchange interaction, localized spins tend to align ferromagnetically. This ferromagnetic tendency is expected to be frustrated by anti-ferromagnetic super-exchange interactions between neighboring localized spins. Additionally, the lattice parameter is allowed small changes, which contribute harmonically to the energy of the system. The phase diagram is obtained as a function of the electron density and the super-exchange interaction using a Monte Carlo minimization. At low super-exchange interaction energy, a phase transition occurs between an electron-full ferromagnetic distorted phase and an electron-empty anti-ferromagnetic undistorted phase; in this case all electrons and lattice distortions are found within the ferromagnetic domain. At high super-exchange interaction energy, a phase transition was found between a two-site-distorted periodic arrangement of independent magnetic polarons ordered anti-ferromagnetically and the electron-empty anti-ferromagnetic undistorted phase. For this high interaction energy, Wigner crystallization, lattice distortion, and charge distribution inside two-site polarons were obtained.

  19. Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations

    Science.gov (United States)

    Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-09-01

    The relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. An incident carbon-ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE at 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.

  20. Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source

    Energy Technology Data Exchange (ETDEWEB)

    McArthur, Matthew S., E-mail: matthew.s.mcarthur@gmail.com; Rees, Lawrence B., E-mail: Lawrence_Rees@byu.edu; Czirr, J. Bart, E-mail: czirr@juno.com

    2016-08-11

    Using the combination of a neutron-sensitive ⁶Li glass scintillator detector with a neutron-insensitive ⁷Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on ⁶Li. We used this detector with a ²⁵²Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.

  1. Monte Carlo modeling of transport in PbSe nanocrystal films

    Energy Technology Data Exchange (ETDEWEB)

    Carbone, I., E-mail: icarbone@ucsc.edu; Carter, S. A. [University of California, Santa Cruz, California 95060 (United States); Zimanyi, G. T. [University of California, Davis, California 95616 (United States)

    2013-11-21

    A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations confirm measurements that increase monotonically with particle size over two orders of magnitude.
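
    A one-dimensional caricature of such a hopping model is sketched below: kinetic Monte Carlo with Miller-Abrahams-type thermally activated rates on a disordered chain. The energy scales, and the neglect of charging and size-dependent electronic structure, make it an illustration rather than the paper's model.

        import numpy as np

        rng = np.random.default_rng(6)

        # Chain of "nanocrystal" sites with Gaussian energetic disorder and a small
        # applied field; all energies in eV (illustrative values).
        N, KT, FIELD, DISORDER, NU0 = 200, 0.025, 0.005, 0.05, 1.0
        energy = rng.normal(0.0, DISORDER, N)

        pos, x, t = 0, 0, 0.0
        for _ in range(50_000):
            dirs = (-1, +1)
            dE = np.array([energy[(pos + d) % N] - energy[pos] - FIELD * d
                           for d in dirs])
            rates = NU0 * np.where(dE > 0.0, np.exp(-dE / KT), 1.0)  # Miller-Abrahams form
            total = rates.sum()
            t += rng.exponential(1.0 / total)          # kinetic MC waiting time
            step = dirs[rng.choice(2, p=rates / total)]
            pos = (pos + step) % N                     # periodic disorder landscape
            x += step                                  # unwrapped displacement
        print("drift velocity proxy:", x / t)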

  2. Two models at work : A study of interactions and specificity in relation to the Demand-Control Model and the Effort-Reward Imbalance Model

    NARCIS (Netherlands)

    Vegchel, N.

    2005-01-01

    To investigate the relation between work and employee health, several work stress models, e.g., the Demand-Control (DC) Model and the Effort-Reward Imbalance (ERI) Model, have been developed. Although these models focus on job demands and job resources, relatively little attention has been devoted

  3. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approx. 10²⁹. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of ¹⁶²Dy and found it to agree well with experiments
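
    The projection machinery behind such spin distributions can be written compactly; a schematic Fourier-projection form (standard in the SMMC literature, though not necessarily the authors' exact discretization) is

        \mathrm{Tr}_{M}\, e^{-\beta \hat{H}}
          = \frac{1}{2\pi} \int_{0}^{2\pi} d\varphi \,
            e^{-i\varphi M}\, \mathrm{Tr}\!\left( e^{i\varphi \hat{J}_{z}}\, e^{-\beta \hat{H}} \right),
        \qquad
        \rho_{J}(E_x) = \rho_{M=J}(E_x) - \rho_{M=J+1}(E_x),

    where the first relation projects the thermal trace onto the fixed J_z eigenvalue M, and the second converts M-projected state densities into spin-resolved densities.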

  4. A Monte Carlo Approach to Modeling Wildfire Risk on Changing Landscapes

    Science.gov (United States)

    Burzynski, A. M.; Beavers, A.

    2016-12-01

    The U.S. Department of Defense (DoD) maintains approximately 28 million acres of land across 420 of their largest installations. These sites harbored 425 federally listed Threatened and Endangered species as of 2013, representing a density of rare species that is several times greater than any other land management agency in the U.S. This is a major driver of DoD natural resources policy and many of these species are affected by wildland fire, both positively and negatively. Military installations collectively experience thousands of wildfires per year, and the majority of ignitions are caused by mission and training activities that can be planned to accommodate fire risk. Motivated by the need for accurately modeled wildfire under the unique land-use conditions of military installations and the assessment of risk exposure at installations throughout the U.S., we developed custom, FARSITE-based scientific software that applies a Monte Carlo approach to wildfire risk analysis. This simulation accounts for the dynamics of vegetation and weather over time, as well as the spatial and temporal distribution of wildfire ignitions, and can be applied to landscapes up to several million acres in size. The data-driven simulation provides insight that feeds directly into mitigation decision-making and can be used to assess future risk scenarios, both real and hypothetical. We highlight an example of a future scenario comparing wildfire behavior between unmitigated fuels and one in which a prescribed burn program is implemented. The same process can be used for a variety of scenarios including changes in vegetation (e.g. new or altered grazing regimes, extreme weather, or drought) and changes in spatiotemporal ignition probability. The modeling capabilities that we apply to predicting wildfire risk on military lands are also relevant to the greater scientific community for modeling wildland fire in the context of environmental change, historical ecology, or climate change.

  5. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
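
    The inverse transform sampling step mentioned above is simple to reproduce for a tabulated distribution; the sketch below samples a toy energy spectrum standing in for a PSF-derived histogram (the spectral shape is a placeholder, not the 6 MV beam data).

        import numpy as np

        rng = np.random.default_rng(7)

        def inverse_transform_sampler(values, weights):
            """Build a sampler for a tabulated 1-D PDF via its inverse CDF."""
            cdf = np.cumsum(weights, dtype=float)
            cdf /= cdf[-1]
            return lambda n: values[np.searchsorted(cdf, rng.random(n))]

        # Toy energy histogram standing in for a PSF-derived photon spectrum.
        e_bins = np.linspace(0.1, 6.0, 60)           # MeV
        spectrum = e_bins * np.exp(-e_bins)          # illustrative shape only
        sample_energy = inverse_transform_sampler(e_bins, spectrum)
        print("sampled photon energies [MeV]:", np.round(sample_energy(5), 2))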

  6. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Directory of Open Access Journals (Sweden)

    Obioma Nwankwo

    Full Text Available To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  7. Monte Carlo-based dose reconstruction in a rat model for scattered ionizing radiation investigations.

    Science.gov (United States)

    Kirkby, Charles; Ghasroddashti, Esmaeel; Kovalchuk, Anna; Kolb, Bryan; Kovalchuk, Olga

    2013-09-01

    In radiation biology, rats are often irradiated, but the precise dose distributions are often lacking, particularly in areas that receive scatter radiation. We used a non-dedicated set of resources to calculate detailed dose distributions, including doses to peripheral organs well outside of the primary field, in common rat exposure settings. We conducted a detailed dose reconstruction in a rat through an analog to the conventional human treatment planning process. The process consisted of: (i) Characterizing source properties of an X-ray irradiator system, (ii) acquiring a computed tomography (CT) scan of a rat model, and (iii) using a Monte Carlo (MC) dose calculation engine to generate the dose distribution within the rat model. We considered cranial and liver irradiation scenarios where the rest of the body was protected by a lead shield. Organs of interest were the brain, liver and gonads. The study also included paired scenarios where the dose to adjacent, shielded rats was determined as a potential control for analysis of bystander effects. We established the precise doses and dose distributions delivered to the peripheral organs in single and paired rats. Mean doses to non-targeted organs in irradiated rats ranged from 0.03-0.1% of the reference platform dose. Mean doses to the adjacent rat peripheral organs were consistent to within 10% those of the directly irradiated rat. This work provided details of dose distributions in rat models under common irradiation conditions and established an effective scenario for delivering only scattered radiation consistent with that in a directly irradiated rat.

  8. A Monte Carlo multiple source model applied to radiosurgery narrow photon beams

    International Nuclear Information System (INIS)

    Chaves, A.; Lopes, M.C.; Alves, C.C.; Oliveira, C.; Peralta, L.; Rodrigues, P.; Trindade, A.

    2004-01-01

    Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source Phase Space Data (PSD) that enable a dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD, also reducing the time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for virtual sources representing the accelerator head components whereas, for the virtual sources representing the additional collimators, they were recorded into histograms. All these histograms were included in the MC code DPM and, using a sampling procedure that reconstructed the PSDs, dose distributions were calculated in a water phantom divided into 20,000 voxels of 1×1×5 mm³. The model accurately calculates dose distributions in the water phantom for all the additional collimators; for depth dose curves, associated errors at 2σ were lower than 2.5% until a depth of 202.5 mm for all the additional collimators and for profiles at various depths, deviations between measured

  9. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
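
    The replacement of the MCMC chain by surrogate evaluations at quasi-Monte Carlo points can be compressed into a few lines; the sketch below uses a one-dimensional polynomial fit in place of the sparse-grid surrogate and a Sobol sequence from scipy, with the forward model and noise level as illustrative stand-ins.

        import numpy as np
        from scipy.stats import qmc

        # Stand-in forward model and a cheap polynomial surrogate of it, fitted
        # on normalized nodes.
        forward = lambda k: np.exp(-k) + 0.5 * k
        u_nodes = np.cos(np.pi * np.arange(12) / 11.0)      # Chebyshev-like nodes on [-1, 1]
        coef = np.polyfit(u_nodes, forward(3.0 * u_nodes), deg=8)
        surrogate = lambda k: np.polyval(coef, k / 3.0)     # valid on k in [-3, 3]

        d_obs, sigma = forward(0.3) + 0.02, 0.05
        posterior = lambda k: np.exp(-0.5 * ((d_obs - surrogate(k)) / sigma) ** 2)

        # Quasi-Monte Carlo (Sobol) samples over the prior range replace the chain;
        # prediction statistics are accumulated with posterior-density weights.
        u = qmc.Sobol(d=1, seed=1).random(2**12).ravel()
        k = -3.0 + 6.0 * u                                  # map [0, 1) to the prior range
        w = posterior(k)
        pred = surrogate(k)
        mean = np.sum(w * pred) / np.sum(w)
        sd = np.sqrt(np.sum(w * (pred - mean) ** 2) / np.sum(w))
        print(f"posterior-weighted prediction: {mean:.3f} +/- {sd:.3f}")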

  10. A Monte Carlo model for mean glandular dose evaluation in spot compression mammography.

    Science.gov (United States)

    Sarno, Antonio; Dance, David R; van Engen, Ruben E; Young, Kenneth C; Russo, Paolo; Di Lillo, Francesca; Mettivier, Giovanni; Bliznakova, Kristina; Fei, Baowei; Sechopoulos, Ioannis

    2017-07-01

    To characterize the dependence of normalized glandular dose (DgN) on various breast model and image acquisition parameters during spot compression mammography and other partial breast irradiation conditions, and evaluate alternative previously proposed dose-related metrics for this breast imaging modality. Using Monte Carlo simulations with both simple homogeneous breast models and patient-specific breasts, three different dose-related metrics for spot compression mammography were compared: the standard DgN, the normalized glandular dose to only the directly irradiated portion of the breast (DgNv), and the DgN obtained by the product of the DgN for full field irradiation and the ratio of the mid-height area of the irradiated breast to the entire breast area (DgN_M). How these metrics vary with field-of-view size, spot area thickness, x-ray energy, spot area and position, breast shape and size, and system geometry was characterized for the simple breast model and a comparison of the simple model results to those with patient-specific breasts was also performed. The DgN in spot compression mammography can vary considerably with breast area. However, the difference in breast thickness between the spot compressed area and the uncompressed area does not introduce a variation in DgN. As long as the spot compressed area is completely within the breast area and only the compressed breast portion is directly irradiated, its position and size does not introduce a variation in DgN for the homogeneous breast model. As expected, DgN is lower than DgNv for all partial breast irradiation areas, especially when considering spot compression areas within the clinically used range. DgN_M underestimates DgN by 6.7% for a W/Rh spectrum at 28 kVp and for a 9 × 9 cm² compression paddle. As part of the development of a new breast dosimetry model, a task undertaken by the American Association of Physicists in Medicine and the European Federation of Organizations of Medical Physics

  11. Cutting Edge PBPK Models and Analyses: Providing the Basis for Future Modeling Efforts and Bridges to Emerging Toxicology Paradigms

    Directory of Open Access Journals (Sweden)

    Jane C. Caldwell

    2012-01-01

    Full Text Available Physiologically based Pharmacokinetic (PBPK) models are used for predictions of internal or target dose from environmental and pharmacologic chemical exposures. Their use in human risk assessment is dependent on the nature of the databases (animal or human) used to develop and test them, and includes extrapolations across species and experimental paradigms, and determination of variability of response within human populations. Integration of state-of-the-science PBPK modeling with emerging computational toxicology models is critical for extrapolation between in vitro exposures, in vivo physiologic exposure, whole organism responses, and long-term health outcomes. This special issue contains papers that can provide the basis for future modeling efforts and provide bridges to emerging toxicology paradigms. In this overview paper, we present an overview of the field and an introduction to these papers, including discussions of model development, best practices, risk-assessment applications of PBPK models, and limitations and bridges of modeling approaches for future applications. Specifically, issues addressed include: (a) increased understanding of human variability of pharmacokinetics and pharmacodynamics in the population, (b) exploration of mode of action (MOA) hypotheses, (c) application of biological modeling in the risk assessment of individual chemicals and chemical mixtures, and (d) identification and discussion of uncertainties in the modeling process.

  12. Monte Carlo Simulation Modeling of a Regional Stroke Team's Use of Telemedicine.

    Science.gov (United States)

    Torabi, Elham; Froehle, Craig M; Lindsell, Christopher J; Moomaw, Charles J; Kanter, Daniel; Kleindorfer, Dawn; Adeoye, Opeolu

    2016-01-01

    The objective of this study was to evaluate operational policies that may improve the proportion of eligible stroke patients within a population who would receive intravenous recombinant tissue plasminogen activator (rt-PA) and minimize time to treatment in eligible patients. In the context of a regional stroke team, the authors examined the effects of staff location and telemedicine deployment policies on the timeliness of thrombolytic treatment, and estimated the efficacy and cost-effectiveness of six different policies. A process map comprising the steps from recognition of stroke symptoms to intravenous administration of rt-PA was constructed using data from published literature combined with expert opinion. Six scenarios were investigated: telemedicine deployment (none, all, or outer-ring hospitals only) and staff location (center of region or anywhere in region). Physician locations were randomly generated based on their zip codes of residence and work. The outcomes of interest were onset-to-treatment (OTT) time, door-to-needle (DTN) time, and the proportion of patients treated within 3 hours. A Monte Carlo simulation of the stroke team care-delivery system was constructed based on a primary data set of 121 ischemic stroke patients who were potentially eligible for treatment with rt-PA. With the physician located randomly in the region, deploying telemedicine at all hospitals in the region (compared with partial or no telemedicine) would result in the highest rates of treatment within 3 hours (80% vs. 75% vs. 70%) and the shortest OTT (148 vs. 164 vs. 176 minutes) and DTN (45 vs. 61 vs. 73 minutes) times. However, locating the on-call physician centrally coupled with partial telemedicine deployment (five of the 17 hospitals) would be most cost-effective with comparable eligibility and treatment times. Given the potential societal benefits, continued efforts to deploy telemedicine appear warranted. Aligning the incentives between those who would have to fund
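
    A minimal sketch of the kind of policy comparison described above; the time distributions, hospital count and telemedicine effect below are invented placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, HOSPITALS = 10_000, 17

def simulate(telemedicine_frac):
    onset_to_door = rng.lognormal(mean=4.3, sigma=0.5, size=N)   # minutes
    has_tm = rng.random(N) < telemedicine_frac                   # hospital has telemedicine?
    travel = rng.uniform(10, 40, size=N)                         # physician travel time
    door_to_needle = rng.normal(45, 10, N) + np.where(has_tm, 0, travel)
    ott = onset_to_door + door_to_needle                         # onset-to-treatment
    return np.mean(ott <= 180), np.median(ott), np.median(door_to_needle)

for frac, label in [(1.0, "all hospitals"), (5 / HOSPITALS, "outer ring"), (0.0, "none")]:
    treated, ott, dtn = simulate(frac)
    print(f"{label:>13}: treated<=3h {treated:5.1%}  OTT {ott:5.1f} min  DTN {dtn:5.1f} min")
```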

  13. Upending the social ecological model to guide health promotion efforts toward policy and environmental change.

    Science.gov (United States)

    Golden, Shelley D; McLeroy, Kenneth R; Green, Lawrence W; Earp, Jo Anne L; Lieberman, Lisa D

    2015-04-01

    Efforts to change policies and the environments in which people live, work, and play have gained increasing attention over the past several decades. Yet health promotion frameworks that illustrate the complex processes that produce health-enhancing structural changes are limited. Building on the experiences of health educators, community activists, and community-based researchers described in this supplement and elsewhere, as well as several political, social, and behavioral science theories, we propose a new framework to organize our thinking about producing policy, environmental, and other structural changes. We build on the social ecological model, a framework widely employed in public health research and practice, by turning it inside out, placing health-related and other social policies and environments at the center, and conceptualizing the ways in which individuals, their social networks, and organized groups produce a community context that fosters healthy policy and environmental development. We conclude by describing how health promotion practitioners and researchers can foster structural change by (1) conveying the health and social relevance of policy and environmental change initiatives, (2) building partnerships to support them, and (3) promoting more equitable distributions of the resources necessary for people to meet their daily needs, control their lives, and freely participate in the public sphere. © 2015 Society for Public Health Education.

  14. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

    Full Text Available Abstract Background Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte Carlo simulations, to measure the completeness grades of GRNs. Results We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can
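
    A minimal sketch of the random-parameter heuristic described above, applied to a hypothetical two-gene activator-target motif (not the sea urchin model): simulate a perturbation under many randomly sampled parameter sets and check whether its effect is parameter-invariant:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def rhs(t, x, k, knockout):
    a, b = x
    da = (0.0 if knockout else k[0]) - k[1] * a      # gene A: production and decay
    db = k[2] * a / (k[3] + a) - k[4] * b            # A activates B (Hill-type term)
    return [da, db]

def steady_b(k, knockout):
    sol = solve_ivp(rhs, (0, 200), [0.1, 0.1], args=(k, knockout), rtol=1e-6)
    return sol.y[1, -1]                              # B level near steady state

# sign of the knockout effect on B, across random parameter sets
signs = []
for _ in range(200):
    k = rng.uniform(0.1, 2.0, size=5)
    signs.append(np.sign(steady_b(k, True) - steady_b(k, False)))
print("parameter-invariant effect:", np.all(np.array(signs) == signs[0]))
```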

  15. Target dose conversion modeling from pencil beam (PB) to Monte Carlo (MC) for lung SBRT

    International Nuclear Information System (INIS)

    Zheng, Dandan; Zhu, Xiaofeng; Zhang, Qinghui; Liang, Xiaoying; Zhen, Weining; Lin, Chi; Verma, Vivek; Wang, Shuo; Wahl, Andrew; Lei, Yu; Zhou, Sumin; Zhang, Chi

    2016-01-01

    A challenge preventing routine clinical implementation of Monte Carlo (MC)-based lung SBRT is the difficulty of reinterpreting historical outcome data calculated with inaccurate dose algorithms, because the target dose was found to decrease to varying degrees when recalculated with MC. The large variability was previously found to be affected by factors such as tumour size, location, and lung density, usually through sub-group comparisons. We hereby conducted a pilot study to systematically and quantitatively analyze these patient factors and explore accurate target dose conversion models, so that large-scale historical outcome data can be correlated with more accurate MC dose without recalculation. Twenty-one patients that underwent SBRT for early-stage lung cancer were replanned with 6 MV 360° dynamic conformal arcs using pencil-beam (PB) and recalculated with MC. The percent D95 difference (PB-MC) was calculated for the PTV and GTV. Using single linear regression, this difference was correlated with the following quantitative patient indices: maximum tumour diameter (MaxD); PTV and GTV volumes; minimum distance from tumour to soft tissue (dmin); and mean density and standard deviation of the PTV, GTV, PTV margin, lung, and 2 mm, 15 mm, 50 mm shells outside the PTV. Multiple linear regression and an artificial neural network (ANN) were employed to model multiple factors and improve dose conversion accuracy. Single linear regression with PTV D95 deficiency identified the strongest correlation on mean-density (location) indices, weaker on lung density, and the weakest on size indices, with the following R² values in decreasing order: shell2mm (0.71), PTV (0.68), PTV margin (0.65), shell15mm (0.62), shell50mm (0.49), lung (0.40), dmin (0.22), GTV (0.19), MaxD (0.17), PTV volume (0.15), and GTV volume (0.08). A multiple linear regression model yielded the significance factor of 3.0E-7 using two independent features: mean density of shell2mm (P = 1.6E-7) and PTV volume
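
    A minimal sketch of the single- and multiple-linear-regression step described above; the feature values and D95 differences below are synthetic stand-ins for the 21-patient dataset:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 21
shell2mm_density = rng.uniform(0.2, 0.9, n)     # mean density of the 2 mm shell
ptv_volume = rng.uniform(5, 60, n)              # cm^3
d95_diff = 8 * (1 - shell2mm_density) + 0.05 * ptv_volume + rng.normal(0, 0.5, n)

# single linear regression: R^2 for one index
X1 = np.column_stack([np.ones(n), shell2mm_density])
_, res1, *_ = np.linalg.lstsq(X1, d95_diff, rcond=None)
r2 = 1 - res1[0] / np.sum((d95_diff - d95_diff.mean()) ** 2)

# multiple linear regression on the two retained features
X2 = np.column_stack([np.ones(n), shell2mm_density, ptv_volume])
beta2, *_ = np.linalg.lstsq(X2, d95_diff, rcond=None)
print(f"single-feature R^2 = {r2:.2f}, multi-feature coefficients = {beta2}")
```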

  16. SU-F-T-371: Development of a Linac Monte Carlo Model to Calculate Surface Dose

    Energy Technology Data Exchange (ETDEWEB)

    Prajapati, S; Yan, Y; Gifford, K [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: To generate and validate a linac Monte Carlo (MC) model for surface dose prediction. Methods: BEAMnrc V4-2.4.0 was used to model 6 and 18 MV photon beams for a commercially available linac. DOSXYZnrc V4-2.4.0 calculated 3D dose distributions in water. Percent depth dose (PDD) and beam profiles were extracted for comparison to measured data. Dose at the surface and at depths in the buildup region was measured with radiochromic film at 100 cm SSD for 4 × 4 cm² and 10 × 10 cm² collimator settings for open and MLC-collimated fields. For the 6 MV beam, films were placed at depths ranging from 0.015 cm to 2 cm and for 18 MV, 0.015 cm to 3.5 cm in Solid Water™. Films were calibrated for both photon energies at their respective dmax. PDDs and profiles were extracted from the film and compared to the MC data. The MC model was adjusted to match measured PDDs and profiles. Results: For the 6 MV beam, the mean error (ME) in PDD between film and MC for open fields was 1.9%, whereas it was 2.4% for MLC. For the 18 MV beam, the ME in PDD for open fields was 2% and was 3.5% for MLC. For the 6 MV beam, the average root mean square (RMS) deviation for the central 80% of the beam profile for open fields was 1.5%, whereas it was 1.6% for MLC. For the 18 MV beam, the maximum RMS for open fields was 3%, and was 3.1% for MLC. Conclusion: The MC model of a linac agreed to within 4% of film measurements for depths ranging from the surface to dmax. Therefore, the MC linac model can predict surface dose for clinical applications. Future work will focus on adjusting the linac MC model to reduce RMS error and improve accuracy.

  17. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Ledieu, A.

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion depend sharply on the boron content through a percolation mechanism. For some glass compositions and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and reprecipitation of silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  18. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Ledieu, A.

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion depend sharply on the boron content through a percolation mechanism. For some glass compositions and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and reprecipitation of silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  19. A Monte Carlo modeling on charging effect for structures with arbitrary geometries

    Science.gov (United States)

    Li, C.; Mao, S. F.; Zou, Y. B.; Li, Yong Gang; Zhang, P.; Li, H. M.; Ding, Z. J.

    2018-04-01

    Insulating materials usually suffer charging effects when irradiated by charged particles. In this paper, we present a Monte Carlo study of the charging effect caused by electron beam irradiation of sample structures with any complex geometry. When transporting in an insulating solid, electrons encounter elastic and inelastic scattering events; the Mott cross section and a Lorentz-type dielectric function are respectively employed to describe such scatterings. In addition, the band gap and the electron-longitudinal optical phonon interaction are taken into account. The electronic excitation in inelastic scattering causes generation of electron-hole pairs; these negative and positive charges establish an inner electric field, which in turn induces the drift of charges to be trapped by impurities, defects, vacancies etc. in the solid, where the distributions of trapping sites are assumed to have uniform density. Under charging conditions, the inner electric field distorts electron trajectories, and the surface electric potential dynamically alters secondary electron emission. We present, in this work, an iterative modeling method for a self-consistent calculation of the electric potential; the method has the advantage of treating any structure with arbitrary complex geometry, in comparison with the image charge method, which is limited to quite simple boundary geometries. Our modeling is based on the combination of the finite triangle mesh method for arbitrary geometry construction, a self-consistent method for the spatial potential calculation, and a full dynamic description of the motion of deposited charges. Example calculations have been done to simulate the secondary electron yield of SiO2 for a semi-infinite solid, the charging of a heterostructure of SiO2 film grown on an Au substrate, and SEM imaging of a SiO2 line structure with rough surfaces and of SiO2 nanoparticles with irregular shapes. The simulations have explored interesting interlaced charge layer distribution

  20. Comprehensive modeling of solid phase epitaxial growth using Lattice Kinetic Monte Carlo

    International Nuclear Information System (INIS)

    Martin-Bragado, Ignacio

    2013-01-01

    Damage evolution of irradiated silicon has been a topic of interest for decades because of its applications to the semiconductor industry. In particular, the damage is sometimes heavy enough to collapse the lattice and locally amorphize the silicon, while in other cases amorphization is introduced explicitly to improve other implanted profiles. Subsequent annealing of the implanted samples heals the amorphized regions through Solid Phase Epitaxial Regrowth (SPER). SPER is a complicated process. It is anisotropic, it generates defects in the recrystallized silicon, and it has a different amorphous/crystalline (A/C) roughness for each orientation, leaving pits in Si(1 1 0), while in Si(1 1 1) it produces two modes of recrystallization with different rates. The recently developed code MMonCa has been used to introduce a physically-based comprehensive model using Lattice Kinetic Monte Carlo that explains all the above singularities of silicon SPER. The model operates by having, as building blocks, the silicon lattice microconfigurations and their four twins. It detects the local configurations, assigns microscopic growth rates, and reconstructs the positions of the lattice locally with one of those building blocks. The overall results reproduce the (a) anisotropy as a result of the different growth rates, (b) localization of SPER-induced defects, (c) roughness trends of the A/C interface, (d) pits on Si(1 1 0) regrown surfaces, and (e) bimodal Si(1 1 1) growth. It also provides physical insights into the nature and shape of deposited defects and how they assist in the occurrence of all the above effects
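
    A minimal sketch of the event-selection loop at the heart of a lattice kinetic Monte Carlo regrowth model like the one above: pick the next recrystallization event with probability proportional to its local rate, then advance the clock. The configuration classes and rates below are invented placeholders, not MMonCa's tables:

```python
import numpy as np

rng = np.random.default_rng(10)

# per-site rates keyed by local configuration class; in this toy table
# (1 0 0)-like facets regrow faster than (1 1 1)-like ones
rate_table = {"100-like": 10.0, "110-like": 4.0, "111-like": 1.0}
sites = rng.choice(list(rate_table), size=1000)       # A/C interface sites
rates = np.array([rate_table[s] for s in sites])

t = 0.0
for _ in range(5000):
    total = rates.sum()
    i = np.searchsorted(np.cumsum(rates), rng.random() * total)  # pick event
    t += -np.log(rng.random()) / total                           # advance clock
    sites[i] = rng.choice(list(rate_table))  # stand-in for local reconstruction
    rates[i] = rate_table[sites[i]]
print(f"simulated time after 5000 events: {t:.3e}")
```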

  1. Organic scintillators response function modeling for Monte Carlo simulation of Time-of-Flight measurements

    Energy Technology Data Exchange (ETDEWEB)

    Carasco, C., E-mail: cedric.carasco@cea.fr [CEA, DEN, Cadarache, Nuclear Measurement Laboratory, F-13108 Saint-Paul-lez-Durance (France)

    2012-07-15

    In neutron Time-of-Flight (TOF) measurements performed with fast organic scintillation detectors, both pulse arrival time and amplitude are relevant. Monte Carlo simulation can be used to calculate the time-energy-dependent neutron flux at the detector position. To convert the flux into a pulse height spectrum, one must calculate the detector response function for mono-energetic neutrons. MCNP can be used to design TOF systems, but standard MCNP versions cannot reliably calculate the energy deposited by fast neutrons in the detector, since multiple scattering effects must be taken into account in an analog way, the individual recoil particles' energy deposits being summed with the appropriate scintillation efficiency. In this paper, the energy response function of 2″ × 2″ and 5″ × 5″ liquid scintillation BC-501 A (Bicron) detectors to fast neutrons ranging from 20 keV to 5.0 MeV is computed with GEANT4, to be coupled with MCNPX through the 'MCNP Output Data Analysis' software developed under ROOT. - Highlights: • GEANT4 has been used to model the organic scintillators' response to neutrons up to 5 MeV. • The response of 2″ × 2″ and 5″ × 5″ BC501A detectors has been parameterized with simple functions. • Parameterization will allow the modeling of neutron Time-of-Flight measurements with MCNP using tools based on CERN's ROOT.

  2. Assessment of mean annual flood damage using simple hydraulic modeling and Monte Carlo simulation

    Science.gov (United States)

    Oubennaceur, K.; Agili, H.; Chokmani, K.; Poulin, J.; Marceau, P.

    2016-12-01

    Floods are the most frequent and most damaging natural disaster in Canada. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions heavily affected by this disaster because of frequent overflows of the Yamaska River, occurring two to three times per year. Since Hurricane Irene struck the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To this end, a preliminary study to evaluate the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g., floodplain extent, flood depth) are adopted to study the risk of flooding. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects has been developed. This approach integrates three main components: (1) hydrologic modelling, which establishes a probability-discharge function associating each measured discharge with its probability of occurrence; (2) hydraulic modeling, which establishes the relationship between the discharge and the water stage at each building; and (3) a damage study, which assesses building damage using damage functions. The damage is estimated according to the water depth, defined as the difference between the water level and the elevation of the building's first floor. The application of the proposed approach allows estimating the annual average cost of damage caused by floods on buildings. The obtained results will be useful for authorities to support their decisions on risk management and prevention against this disaster.
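
    A minimal sketch of the three-component Monte Carlo chain described above (probability, discharge, stage, damage); all curves below are invented placeholders, not the Brigham calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_discharge(n):                        # (1) hydrologic: inverse-CDF sampling
    u = rng.random(n)
    return 50 * (-np.log(1 - u)) ** 0.8         # hypothetical flow quantiles, m^3/s

def stage(q):                                   # (2) hydraulic: rating curve
    return 0.3 * q ** 0.5                       # water level, m

def building_damage(depth):                     # (3) depth-damage function
    return np.clip(40_000 * depth, 0, 120_000)  # dollars, capped at structure value

first_floor = 2.5                               # first-floor elevation, m (assumed)
q = sample_discharge(100_000)
depth = np.maximum(stage(q) - first_floor, 0.0)
print(f"mean annual damage estimate: ${building_damage(depth).mean():,.0f}")
```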

  3. A kinetic Monte Carlo model with improved charge injection model for the photocurrent characteristics of organic solar cells

    Science.gov (United States)

    Kipp, Dylan; Ganesan, Venkat

    2013-06-01

    We develop a kinetic Monte Carlo model for photocurrent generation in organic solar cells that demonstrates improved agreement with experimental illuminated and dark current-voltage curves. In our model, we introduce a charge injection rate prefactor to correct for the electrode grid-size and electrode charge density biases apparent in the coarse-grained approximation of the electrode as a grid of single occupancy, charge-injecting reservoirs. We use the charge injection rate prefactor to control the portion of dark current attributed to each of four kinds of charge injection. By shifting the dark current between electrode-polymer pairs, we align the injection timescales and expand the applicability of the method to accommodate ohmic energy barriers. We consider the device characteristics of the ITO/PEDOT/PSS:PPDI:PBTT:Al system and demonstrate the manner in which our model captures the device charge densities unique to systems with small injection energy barriers. To elucidate the defining characteristics of our model, we first demonstrate the manner in which charge accumulation and band bending affect the shape and placement of the various current-voltage regimes. We then discuss the influence of various model parameters upon the current-voltage characteristics.

  4. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  5. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    Full Text Available The need to choose appropriate interaction models is among the major disadvantages of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criterion. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
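
    A minimal sketch of the hybrid acceptance rule described above: the usual RMC chi-square-to-data criterion plus an energy penalty from an assumed potential. The weighting factors below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(4)

def chi2(g_model, g_exp, sigma=0.02):
    # data misfit between model and experimental pair distribution functions
    return np.sum((g_model - g_exp) ** 2) / sigma ** 2

def accept(chi2_old, chi2_new, e_old, e_new, w_energy=0.1, kT=1.0):
    # HRMC: penalize moves by data misfit *and* potential energy change
    delta = 0.5 * (chi2_new - chi2_old) + w_energy * (e_new - e_old) / kT
    return delta <= 0 or rng.random() < np.exp(-delta)
```

    In a full HRMC loop, each proposed atomic displacement would update both the model g(r) and the combined Coulomb plus Lennard-Jones energy before calling accept.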

  6. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    Science.gov (United States)

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations are based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
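
    A minimal sketch of the idea described above: replace the exact logarithm used when sampling photon step lengths with a cheap polynomial of the floating-point mantissa. The degree-4 fit below is computed on the fly and is illustrative, not the paper's published approximation:

```python
import numpy as np

# fit ln(m) for mantissa m in [0.5, 1) once, at start-up
m = np.linspace(0.5, 1.0, 201)
poly = np.polynomial.Polynomial.fit(m, np.log(m), deg=4)

LN2 = float(np.log(2.0))

def fast_log(u):
    mant, expo = np.frexp(u)            # u = mant * 2**expo, mant in [0.5, 1)
    return poly(mant) + expo * LN2

u = np.random.default_rng(5).random(100_000) + 1e-12
step = -fast_log(u)                     # sampled free path (in mean free paths)
err = np.max(np.abs(fast_log(u) - np.log(u)))
print(f"max absolute error of the approximation: {err:.2e}")
```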

  7. Early efforts in modeling the incubation period of infectious diseases with an acute course of illness

    Directory of Open Access Journals (Sweden)

    Nishiura Hiroshi

    2007-05-01

    Full Text Available Abstract The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows a lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming a lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of incubation period, and an understanding of the underlying disease mechanisms.

  8. Early efforts in modeling the incubation period of infectious diseases with an acute course of illness.

    Science.gov (United States)

    Nishiura, Hiroshi

    2007-05-11

    The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows a lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming a lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of incubation period, and an understanding of the underlying disease mechanisms.
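
    A minimal sketch of fitting and comparing incubation-period distributions in the spirit recommended above, assuming a short, well-defined exposure window; the observations below are invented:

```python
import numpy as np
from scipy import stats

days = np.array([1.5, 2.0, 2.0, 2.5, 3.0, 3.0, 3.5, 4.0, 5.0, 6.5])  # onset - exposure

shape, loc, scale = stats.lognorm.fit(days, floc=0)   # fix location at zero
print(f"median incubation {scale:.1f} d, dispersion factor {np.exp(shape):.2f}")

# compare candidate distributions by AIC, as the paper suggests
for dist in (stats.lognorm, stats.gamma, stats.weibull_min):
    params = dist.fit(days, floc=0)
    k = len(params) - 1                               # loc was fixed, not estimated
    aic = 2 * k - 2 * np.sum(dist.logpdf(days, *params))
    print(f"{dist.name:12s} AIC = {aic:.1f}")
```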

  9. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
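
    A minimal sketch of the two initial designs compared above: random LHS places a uniform draw inside each stratum, midpoint LHS places the stratum center; either design would then be fed to the OLHS optimizer. The space-filling criterion below (minimum pairwise distance) is one simple choice among many:

```python
import numpy as np

rng = np.random.default_rng(6)

def lhs(n, d, midpoint=False):
    u = 0.5 if midpoint else rng.random((n, d))
    # one random permutation of the n strata per dimension
    perms = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (perms + u) / n

def min_pairwise_distance(x):            # simple space-filling criterion
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(x), k=1)].min()

for midpoint in (False, True):
    crit = [min_pairwise_distance(lhs(20, 5, midpoint)) for _ in range(200)]
    print(f"midpoint={midpoint}: mean min-distance criterion {np.mean(crit):.3f}")
```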

  10. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to model oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower

  11. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    Science.gov (United States)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on parameterization and the problems undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noised data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noised data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects.
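
    A minimal sketch in the spirit of 'Method 2' above: parameterize the unknowns by a few leading principal components of an assumed prior covariance, then sample the reduced coefficients with random-walk Metropolis MCMC. The convolution forward model and all sizes are illustrative, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 50, 5                                   # full and reduced dimensions

# prior covariance and its leading eigenvectors (the PCA basis)
idx = np.arange(n)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)
vals, vecs = np.linalg.eigh(C)
basis = vecs[:, -k:] * np.sqrt(vals[-k:])      # scaled principal components

kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
def forward(model):                            # simple convolution forward model
    return np.convolve(model, kernel, mode="same")

m_true = basis @ rng.normal(size=k)
data = forward(m_true) + rng.normal(0, 0.05, n)

def log_post(a):                               # Gaussian likelihood + N(0,1) prior
    r = data - forward(basis @ a)
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2 - 0.5 * np.sum(a ** 2)

a, lp, chain = np.zeros(k), -np.inf, []
for _ in range(5000):
    prop = a + 0.1 * rng.normal(size=k)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance
        a, lp = prop, lp_prop
    chain.append(a.copy())
posterior_mean_model = basis @ np.mean(chain[1000:], axis=0)
```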

  12. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    Energy Technology Data Exchange (ETDEWEB)

    Hulett, David T. [Hulett and Associates, LLC (United States); Nosbisch, Michael R. [Project Time and Cost, Inc. (United States)

    2012-07-01

    Good-quality risk data are usually collected in risk interviews of the project team, management and others knowledgeable about the risks of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. A Monte Carlo simulation software program is used that can simulate schedule risk, burn rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
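
    A minimal sketch of a Monte Carlo simulation over a small CPM network with one risk driver; the four-activity network, the risk probability and all duration distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
N = 20_000

# triangular duration estimates (low, mode, high), in days
dur = lambda lo, mode, hi: rng.triangular(lo, mode, hi, N)
a = dur(8, 10, 15)                     # A: design
b = dur(18, 20, 30)                    # B: fabrication   (after A)
c = dur(12, 15, 25)                    # C: procurement   (after A)
d = dur(4, 5, 9)                       # D: installation  (after B and C)

# risk driver: 30% chance it occurs and stretches fabrication by 10-30%
risk = np.where(rng.random(N) < 0.3, rng.uniform(1.1, 1.3, N), 1.0)
finish = a + np.maximum(b * risk, c) + d   # critical path per iteration

print(f"deterministic CPM duration: {10 + 20 + 5} days")
print(f"P-80 completion           : {np.percentile(finish, 80):.1f} days")
```

    The gap between the deterministic date and the P-80 value is the schedule contingency reserve the paper recommends carrying.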

  13. Overview 2004 of NASA Stirling-Convertor CFD-Model Development and Regenerator R&D Efforts

    Science.gov (United States)

    Tew, Roy C.; Dyson, Rodger W.; Wilson, Scott D.; Demko, Rikako

    2005-01-01

    This paper reports on 2004 accomplishments in three areas: development of a Stirling-convertor CFD model at NASA GRC and via a NASA grant; a Stirling regenerator-research effort conducted via a NASA grant (a follow-on to an earlier DOE contract); and a regenerator-microfabrication contract for development of a "next-generation Stirling regenerator." Cleveland State University is the lead organization for all three grant/contractual efforts, with the University of Minnesota and Gedeor Associates as subcontractors. Also, the Stirling Technology Co. and Sunpower, Inc. are both involved in all three efforts, either as funded or unfunded participants. International Mezzo Technologies of Baton Rouge, LA is the regenerator fabricator for the regenerator-microfabrication contract. Results of the efforts in these three areas are summarized.

  14. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    Full Text Available The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed, and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.
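
    A minimal sketch of simulation-based capital allocation for correlated lines of business, using expected-shortfall capital with Euler allocation; the three lines, their lognormal losses and the correlation matrix are invented, and no Swiss Solvency Test calibration is implied:

```python
import numpy as np

rng = np.random.default_rng(9)
N, alpha = 200_000, 0.99

corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
z = rng.standard_normal((N, 3)) @ np.linalg.cholesky(corr).T
losses = np.exp(1.0 + 0.5 * z)                # lognormal one-year losses per line
total = losses.sum(axis=1)

var = np.quantile(total, alpha)
tail = total >= var
es = total[tail].mean()                       # expected shortfall of the total
euler = losses[tail].mean(axis=0)             # per-line Euler contributions
print(f"ES capital {es:.2f}, allocations {euler.round(2)} (sum {euler.sum():.2f})")
```

    The Euler contributions sum to the total expected shortfall by construction, which is what makes the allocation coherent.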

  15. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class of mixtures of translation processes includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions; moreover, these processes can match a broader range of second-order statistics. We illustrate the proposed Monte Carlo algorithm and compare features of translation processes and mixtures of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes

  16. Monte Carlo simulation of the Leksell Gamma Knife: I. Source modelling and calculations in homogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Moskvin, Vadim [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]. E-mail: vmoskvin@iupui.edu; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2002-06-21

    The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife, a precision method for treating intracranial lesions. Radiation from a single ⁶⁰Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet were calculated for photons and electrons. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife dosimetry, has been computed and compared with experimental results, with calculations performed by other authors using the EGS4 Monte Carlo code, and with data provided by the treatment planning system Gamma Plan. Good agreement was found between these data and the results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery. (author)

  17. Ultra high energy interaction models for Monte Carlo calculations: what model is the best fit

    Energy Technology Data Exchange (ETDEWEB)

    Stanev, Todor [Bartol Research Institute, University of Delaware, Newark DE 19716 (United States)

    2006-01-15

    We briefly outline two methods for extension of hadronic interaction models to extremely high energy. Then we compare the main characteristics of representative computer codes that implement the different models and give examples of air shower parameters predicted by those codes.

  18. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold, E-mail: hli@radonc.wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, Missouri 63110 (United States)

    2016-07-15

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this

  19. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    International Nuclear Information System (INIS)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this

  20. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    Science.gov (United States)

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and
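
    A minimal sketch of the 2%/2 mm gamma criterion quoted above, evaluated on 1D dose profiles for brevity; real commissioning uses full 3D dose grids, and the synthetic profiles below are illustrative:

```python
import numpy as np

def gamma_pass_rate(ref, evl, x, dose_tol=0.02, dist_tol=2.0, cutoff=0.1):
    ref_max = ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        if di < cutoff * ref_max:               # skip low-dose region
            continue
        dd = (evl - di) / (dose_tol * ref_max)  # dose-difference term (global)
        dx = (x - xi) / dist_tol                # distance-to-agreement term, mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50, 50, 201)                   # mm
ref = np.exp(-x ** 2 / (2 * 20 ** 2))           # synthetic reference profile
evl = 1.01 * np.exp(-(x - 0.5) ** 2 / (2 * 20 ** 2))
print(f"gamma passing rate: {gamma_pass_rate(ref, evl, x):.1%}")
```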

  1. Validating a virtual source model based in Monte Carlo Method for profiles and percent deep doses calculation

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications and, more specifically, has received increasing attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator's head is necessary, and commonly the supplier does not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications. It is the most efficient method for particle generation and can provide accuracy similar to that obtained when the phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for calculation of profiles and percent depth doses. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments with total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. This current model is easy to build and test. (author)

  2. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling

    International Nuclear Information System (INIS)

    Schaefer, C.; Jansen, A. P. J.

    2013-01-01

    We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at the molecular scale to transport equations at the macroscopic scale. The method is applicable to steady-state reactors. We use a finite-difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so unphysical accumulation of mass could occur in the reactor. We have therefore developed a mass-balance correction method, based on a stoichiometry matrix and a least-squares problem reduced to a non-singular set of linear equations, that is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high oxygen coverages. This reaction model is based on ab initio density functional theory calculations from the literature.
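
    The mass-balance correction can be pictured as a minimum-norm least-squares projection of the raw kinetic Monte Carlo rates onto the mass-conserving subspace defined by the stoichiometry matrix. The sketch below, with a hypothetical two-channel stoichiometry, illustrates the idea rather than the authors' exact reduction to a non-singular system:

    ```python
    import numpy as np

    def mass_balance_correction(rates, stoich):
        """Find the smallest correction d such that stoich @ (rates + d) = 0,
        i.e. project raw KMC rates onto the mass-conserving subspace."""
        residual = stoich @ rates                  # net species production (should be 0)
        d, *_ = np.linalg.lstsq(stoich, -residual, rcond=None)
        return rates + d

    # Hypothetical A -> B surface reaction: consumption and production channels.
    S = np.array([[-1.0, 1.0],
                  [ 1.0, -1.0]])
    print(mass_balance_correction(np.array([1.02, 0.97]), S))  # -> [0.995 0.995]
    ```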

  3. Modeling of FREYA fast critical experiments with the Serpent Monte Carlo code

    International Nuclear Information System (INIS)

    Fridman, E.; Kochetkov, A.; Krása, A.

    2017-01-01

    Highlights: • FREYA – the EURATOM project executed to support fast lead-based reactor systems. • Critical experiments in the VENUS-F facility during the FREYA project. • Characterization of the critical VENUS-F cores with Serpent. • Comparison of the numerical Serpent results to the experimental data. - Abstract: The FP7 EURATOM project FREYA was executed between 2011 and 2016 with the aim of supporting the design of fast lead-cooled reactor systems such as MYRRHA and ALFRED. During the project, a number of critical experiments were conducted in the VENUS-F facility located at SCK·CEN, Mol, Belgium. The Monte Carlo code Serpent was one of the codes applied for the characterization of the critical VENUS-F cores. Four critical configurations were modeled with Serpent, namely the reference critical core, the clean MYRRHA mock-up, the full MYRRHA mock-up, and the critical core with the ALFRED island. This paper briefly presents the VENUS-F facility, provides a detailed description of the aforementioned critical VENUS-F cores, and compares the numerical results calculated by Serpent to the available experimental data. The compared parameters include keff, point kinetics parameters, fission rate ratios of important actinides to that of U235 (spectral indices), axial and radial distributions of fission rates, and the lead void reactivity effect. The reported results show generally good agreement between the calculated and experimental values. Nevertheless, the paper also reveals some noteworthy issues requiring further attention, including the systematic overprediction of reactivity and the systematic underestimation of the U238 to U235 fission rate ratio.

  4. A Monte Carlo Model for Neutron Coincidence Counting with Fast Organic Liquid Scintillation Detectors

    International Nuclear Information System (INIS)

    Gamage, Kelum A.A.; Joyce, Malcolm J.; Cave, Frank D.

    2013-06-01

    Neutron coincidence counting is an established, nondestructive method for the qualitative and quantitative analysis of nuclear materials. Several even-numbered nuclei of the actinide isotopes, and especially the even-numbered plutonium isotopes, undergo spontaneous fission, resulting in the emission of neutrons which are correlated in time. The characteristics of this emission, i.e. the multiplicity, can be used to identify the isotope in question. Similarly, the corresponding characteristics of isotopes that are susceptible to stimulated fission are somewhat isotope-specific, and also depend on the energy of the incident neutron that stimulates the fission event; this can hence be used to identify and quantify isotopes as well. Most neutron coincidence counters currently in use are based on 3He gas tubes. In a 3He-filled gas proportional counter, the (n, p) reaction is largely responsible for the detection of slow neutrons, so neutrons have to be slowed down to thermal energies. As a result, moderator and shielding materials are essential components of many systems designed to assess quantities of fissile materials. The use of a moderator, however, extends the die-away time of the detector, necessitating a larger coincidence window; furthermore, 3He is now in short supply and expensive. In this paper, a simulation based on the Monte Carlo method, performed with MCNPX 2.6.0, is described which models the geometry of a sector-shaped liquid scintillation detector in response to coincident neutron events. The detection of neutrons from a mixed-oxide (MOX) fuel pellet using an organic liquid scintillator has been simulated for different scintillator thicknesses. In this new neutron detector, a layer of lead is used to reduce the gamma-ray fluence reaching the scintillator; the effect of the lead on neutron detection has also been estimated for different lead-layer thicknesses. (authors)
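
    The time correlation exploited by coincidence counting can be illustrated with a toy 'doubles' tally over a fixed coincidence gate; the gate width is an arbitrary example, and a real shift-register analysis also subtracts an accidentals gate:

    ```python
    import numpy as np

    def doubles_in_gate(times_ns, gate_ns=64.0):
        """Count pulse pairs in which the second pulse arrives within the
        coincidence gate opened by the first (simplified doubles tally)."""
        times = np.sort(np.asarray(times_ns, dtype=float))
        # For each pulse, number of pulses recorded up to the end of its gate.
        in_gate = np.searchsorted(times, times + gate_ns, side="right")
        return int(np.sum(in_gate - np.arange(len(times)) - 1))
    ```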

  5. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides was studied. The leaching kinetics show that the leaching rate and the final degree of corrosion depend sharply on the boron content through a percolation mechanism. For some glass compositions and leaching conditions, the layer that appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements were included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and reprecipitation of silicon. Kinetic Monte Carlo simulations have been used to test several concepts such as boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. It has then been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is to show that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms.

  6. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-15

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides was studied. The leaching kinetics show that the leaching rate and the final degree of corrosion depend sharply on the boron content through a percolation mechanism. For some glass compositions and leaching conditions, the layer that appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements were included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and reprecipitation of silicon. Kinetic Monte Carlo simulations have been used to test several concepts such as boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. It has then been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is to show that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms.

  9. Application of the Monte Carlo method for building models of the octanol-water partition coefficient of platinum complexes

    Science.gov (United States)

    Toropov, Andrey A.; Toropova, Alla P.

    2018-06-01

    A predictive model of logP for Pt(II) and Pt(IV) complexes, built with the Monte Carlo method using the CORAL software, has been validated with six different splits into training and validation sets. For all six splits, the predictive potential of the models was improved using the so-called index of ideality of correlation. The suggested models make it possible to extract the molecular features that increase or, conversely, decrease logP.

  10. Studies of Top Quark Monte Carlo Modelling with the ATLAS Detector

    CERN Document Server

    Asquith, Lily; The ATLAS collaboration

    2017-01-01

    The status of recent studies of modern Monte Carlo generator setups for top quark pair production at the LHC is presented. Samples at a center-of-mass energy of 13 TeV have been generated for a variety of generators and with different generator configurations. The predictions from these samples are compared to ATLAS data for a variety of kinematic observables.

  11. Co-combustion of peanut hull and coal blends: Artificial neural networks modeling, particle swarm optimization and Monte Carlo simulation.

    Science.gov (United States)

    Buyukada, Musa

    2016-09-01

    Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization (PSO), and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was achieved by the ANN61 multi-layer perceptron model with an R² of 0.99994. A blend ratio of 90:10 (PH to coal, wt%), a temperature of 305 °C, and a heating rate of 49 °C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. The validation experiments resulted in yields of 87.5 ± 0.2% after three replications. Monte Carlo simulations were used for probabilistic assessment of the stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.

  12. Monte Carlo simulations of phase transitions and lattice dynamics in an atom-phonon model for spin transition compounds

    International Nuclear Information System (INIS)

    Apetrei, Alin Marian; Enachescu, Cristian; Tanasa, Radu; Stoleriu, Laurentiu; Stancu, Alexandru

    2010-01-01

    We apply the Monte Carlo Metropolis method to a known atom-phonon coupling model for 1D spin transition compounds (STC). These inorganic molecular systems can switch, under thermal or optical excitation, between two states in thermodynamic competition, i.e. high spin (HS) and low spin (LS). In the model, the ST units (molecules) are linked by springs whose elastic constants depend on the spin states of the neighboring atoms and can take only three possible values. Several previous analytical papers considered a unique average value for the elastic constants (mean-field approximation) and obtained phase diagrams and thermal hysteresis loops. More recent Monte Carlo simulation papers, taking into account all three values of the elastic constants, obtained thermal hysteresis loops but no phase diagrams. Employing Monte Carlo simulation, in this work we obtain the phase diagram at T = 0 K; it is fully consistent with earlier analytical work, but more complex. The main difference is the existence of two supplementary critical curves that mark a hysteresis zone in the phase diagram. This explains the pressure hysteresis curves observed experimentally at low temperature and predicts a 'chemical' hysteresis in STC at very low temperatures. The formation and dynamics of the domains are also discussed.
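
    As a schematic of the Metropolis approach to such bistable systems, the sketch below performs one sweep of a deliberately simplified 1D high-spin/low-spin chain; the toy Hamiltonian (HS-LS gap, like-state neighbour coupling, HS degeneracy as an entropic field) and all parameter values are illustrative stand-ins for the atom-phonon model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def metropolis_sweep(spins, T, delta=1.0, J=0.2, g=5.0):
        """One Metropolis sweep of a toy 1D two-state chain (LS=0, HS=1) with
        energy gap delta, neighbour coupling J and HS degeneracy g entering
        as an effective entropic field -T*ln(g)."""
        n = len(spins)
        for i in rng.integers(0, n, n):
            s_new = 1 - spins[i]
            nn = spins[(i - 1) % n] + spins[(i + 1) % n]
            dE = (s_new - spins[i]) * (delta - T * np.log(g) - J * nn)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i] = s_new
        return spins
    ```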

  13. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hui-Jun Guo

    2014-09-01

    Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool for studying surface kinetics in crystal growth; however, existing lattice models for kinetic Monte Carlo simulations cannot handle the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions, without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results on polytype growth in SiC suggest that retaining the step growth mode is an important factor in maintaining a stable single 4H polytype during SiC growth.
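
    At the core of such kinetic Monte Carlo simulations is rejection-free event selection: the next event is drawn with probability proportional to its rate, and the clock advances by an exponential waiting time. A generic sketch follows (the two rates are placeholders, not the SiC model's event catalog):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def kmc_step(event_rates):
        """Standard n-fold-way selection: pick event k with probability
        r_k / sum(r) and advance time by dt ~ Exp(sum(r))."""
        rates = np.asarray(event_rates, dtype=float)
        total = rates.sum()
        k = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
        dt = -np.log(rng.random()) / total
        return k, dt

    print(kmc_step([1.0e3, 4.0e4]))  # e.g. deposition vs. surface diffusion
    ```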

  14. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B [Northwestern Memorial Hospital, Chicago, IL (United States); Georgia Institute of Technology, Atlanta, GA (Georgia); Wang, C [Georgia Institute of Technology, Atlanta, GA (Georgia)

    2016-06-15

    Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm using a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was used to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed with the DBSCAN clustering algorithm to assess the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for the probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs for different particle types indicates more severe damage by high-LET than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the linear-quadratic behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation of simulated DNA damage with cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell
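
    The clustering step described in the Methods can be sketched with scikit-learn's DBSCAN; the eps and min_samples values below are illustrative, not the authors' parameters:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def candidate_dsb_clusters(event_xyz_nm, eps_nm=3.2, min_events=2):
        """Group ionization/excitation positions (in nm) into clusters whose
        spatial extent is commensurate with double-strand-break damage."""
        labels = DBSCAN(eps=eps_nm, min_samples=min_events).fit_predict(event_xyz_nm)
        return [event_xyz_nm[labels == k] for k in set(labels) if k != -1]
    ```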

  15. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  16. A neuronal model of a global workspace in effortful cognitive tasks.

    Science.gov (United States)

    Dehaene, S; Kerszberg, M; Changeux, J P

    1998-11-24

    A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.

  17. Improved quantitative 90Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships and for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy-window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. The scatter estimate converged after just two updates, greatly reducing the computational requirements. In the phantom study, compared with reconstruction without scatter correction, MC scatter modeling substantially improved activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%), with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling contrast improved substantially both visually and in

  18. DISPLACE: a dynamic, individual-based model for spatial fishing planning and effort displacement: Integrating underlying fish population models

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Miethe, Tanja

    We previously developed a spatially explicit, individual-based model (IBM) evaluating the bio-economic efficiency of fishing vessel movements between regions according to the catching and targeting of different species, based on the most recent high-resolution spatial fishery data. The main purpose was to test the effects of alternative fishing effort allocation scenarios related to fuel consumption, energy efficiency (value per litre of fuel), sustainable fish stock harvesting, and profitability of the fisheries. The assumption there was constant underlying resource availability. Now, an advanced version integrates the underlying fish population models, so that stock abundance can react ... or to the alteration of individual fishing patterns. We demonstrate that integrating the spatial activity of vessels and local fish stock abundance dynamics allows for interactions and more realistic predictions of fishermen behaviour, revenues and stock abundance.

  19. One State's Systems Change Efforts to Reduce Child Care Expulsion: Taking the Pyramid Model to Scale

    Science.gov (United States)

    Vinh, Megan; Strain, Phil; Davidon, Sarah; Smith, Barbara J.

    2016-01-01

    This article describes the efforts funded by the state of Colorado to address unacceptably high rates of expulsion from child care. Based on the results of a 2006 survey, the state of Colorado launched two complementary policy initiatives in 2009 to impact expulsion rates and to improve the use of evidence-based practices related to challenging…

  20. Bodily Effort Enhances Learning and Metacognition: Investigating the Relation Between Physical Effort and Cognition Using Dual-Process Models of Embodiment.

    Science.gov (United States)

    Skulmowski, Alexander; Rey, Günter Daniel

    2017-01-01

    Recent embodiment research revealed that cognitive processes can be influenced by bodily cues. Some of these cues were found to elicit disparate effects on cognition. For instance, weight sensations can inhibit problem-solving performance, but were shown to increase judgments regarding recall probability (judgments of learning; JOLs) in memory tasks. We investigated the effects of physical effort on learning and metacognition by conducting two studies in which we varied whether a backpack was worn or not while 20 nouns were to be learned. Participants entered a JOL for each word and completed a recall test. Experiment 1 (N = 18) revealed that exerting physical effort by wearing a backpack led to higher JOLs for easy nouns, without a notable effect on difficult nouns. Participants who wore a backpack reached higher recall scores. Therefore, physical effort may act as a form of desirable difficulty during learning. In Experiment 2 (N = 30), the influence of physical effort on JOLs and learning disappeared when more difficult nouns were to be learned, implying that a high cognitive load may diminish bodily effects. These findings suggest that physical effort mainly influences superficial modes of thought and raise doubts concerning the explanatory power of metaphor-centered accounts of embodiment for higher-level cognition.

  1. An assessment of the feasibility of using Monte Carlo calculations to model a combined neutron/gamma electronic personal dosemeter

    International Nuclear Information System (INIS)

    Tanner, J.E.; Witts, D.; Tanner, R.J.; Bartlett, D.T.; Burgess, P.H.; Edwards, A.A.; More, B.R.

    1995-01-01

    A Monte Carlo facility has been developed for modelling the response of semiconductor devices to mixed neutron-photon fields. This utilises the code MCNP for neutron and photon transport and a new code, STRUGGLE, which has been developed to model the secondary charged particle transport. It is thus possible to predict the pulse height distribution expected from prototype electronic personal detectors, given the detector efficiency factor. Initial calculations have been performed on a simple passivated implanted planar silicon detector. This device has also been irradiated in neutron, gamma and X ray fields to verify the accuracy of the predictions. Good agreement was found between experiment and calculation. (author)

  2. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows answers to be obtained by simulating individual particles and recording aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (γ-ray beams, 1-10 MeV) and HIMAC and ISIS-800 (high-energy neutron transport, 20-800 MeV, in iron and concrete). The results were then compared with experimental data.

  3. Monte Carlo modeling and optimization of contrast-enhanced radiotherapy of brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lopez, C E; Garnica-Garza, H M, E-mail: hgarnica@cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, Via del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL CP 66600 (Mexico)

    2011-07-07

    Contrast-enhanced radiotherapy involves the use of a kilovoltage x-ray beam to impart a tumoricidal dose to a target into which a radiological contrast agent has previously been loaded in order to increase the x-ray absorption efficiency. In this treatment modality the selection of the proper x-ray spectrum is important, since at the energy range of interest the penetration ability of the x-ray beam is limited. For the treatment of brain tumors, the situation is further complicated by the presence of the skull, which also absorbs kilovoltage x-rays very efficiently. In this work, using Monte Carlo simulation, a realistic patient model and the Cimmino algorithm, several irradiation techniques and x-ray spectra are evaluated for two possible clinical scenarios with respect to the location of the target: a tumor located at the center of the head and one close to the surface of the head. It is shown that x-ray spectra such as those produced by a conventional x-ray generator are capable of producing absorbed dose distributions with excellent uniformity in the target, as well as a dose differential of at least 20% of the prescribed tumor dose between the target and the surrounding brain tissue, when the tumor is located at the center of the head. However, for tumors laterally displaced from the center and close to the skull, while the absorbed dose distribution in the target is also quite uniform and the dose to the surrounding brain tissue is within an acceptable range, hot spots arise in the skull that are above what is considered a safe limit. A comparison with previously reported results using mono-energetic x-ray beams, such as those produced by a radiation synchrotron, is also presented, and it is shown that the absorbed dose distributions rendered by this type of beam are very similar to those obtained with a conventional x-ray beam.
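
    The Cimmino algorithm referred to above is a simultaneous-projection method for linear feasibility problems. Below is a generic sketch for constraints of the form A @ x >= b (rows as voxel dose bounds, x as beam weights; all names are hypothetical), not the authors' exact formulation:

    ```python
    import numpy as np

    def cimmino(A, b, x0, iters=200, relax=1.0):
        """Iteratively move x toward the feasible set {x : A @ x >= b} by
        averaging its projections onto every violated half-space."""
        x = np.array(x0, dtype=float)
        row_norm2 = np.einsum("ij,ij->i", A, A)
        for _ in range(iters):
            residual = b - A @ x                # > 0 where a constraint is violated
            step = np.where(residual > 0, residual / row_norm2, 0.0)
            x += relax * (A.T @ step) / len(b)  # equal-weight average of projections
            x = np.maximum(x, 0.0)              # keep beam weights non-negative
        return x
    ```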

  4. A monte carlo simulation model for the steady-state plasma in the scrape-off layer

    International Nuclear Information System (INIS)

    Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.; Ohyabu, N.

    1995-12-01

    A new Monte Carlo simulation model for the scrape-off layer (SOL) plasma is proposed to investigate the feasibility of so-called 'high temperature divertor operation'. In the model, the Coulomb collision effect is accurately described by a nonlinear Monte Carlo collision operator; a conductive heat flux into the SOL is effectively modelled by randomly exchanging source particles and SOL particles; and secondary electrons are included. The steady state of the SOL plasma, which satisfies particle and energy balances and the neutrality constraint, is determined in terms of the total particle and heat fluxes across the separatrix, the edge plasma temperature, the secondary electron emission rate, and the SOL size. The model gives gross features of the SOL, such as plasma temperatures and densities, the total sheath potential drop, and the sheath energy transmission factor. Simulations were performed for a collisional SOL plasma to confirm the validity of the proposed model. It is found that the potential drop and the electron energy transmission factor are in close agreement with theoretical predictions. The present model can provide useful information primarily for collisionless SOL plasmas, which are difficult to understand analytically. (author)

  5. Characterization of an Ar/O2 magnetron plasma by a multi-species Monte Carlo model

    International Nuclear Information System (INIS)

    Bultinck, E; Bogaerts, A

    2011-01-01

    A combined Monte Carlo (MC)/analytical surface model is developed to study the plasma processes occurring during reactive sputter deposition of TiOx thin films. This model describes the important plasma species with a MC approach (i.e. electrons, Ar+ ions, O2+ ions, fast Ar atoms and sputtered Ti atoms). The deposition of the TiOx film is treated by an analytical surface model. The implementation of our so-called multi-species MC model is presented, and some typical calculation results are shown, such as densities, fluxes, energies and collision rates. The advantages and disadvantages of the multi-species MC model are illustrated by a comparison with a particle-in-cell/Monte Carlo collisions (PIC/MCC) model. Disadvantages include the fact that certain input values and assumptions are needed. However, when these are accounted for, the results are in good agreement with the PIC/MCC simulations, and the calculation time is drastically decreased, which enables us to simulate large and complicated reactor geometries. To illustrate this, the effect of larger target-substrate distances on the film properties is investigated. It is shown that a stoichiometric film is deposited at all investigated target-substrate distances (24, 40, 60 and 80 mm). Moreover, a larger target-substrate distance promotes film uniformity, but the deposition rate is much lower.

  6. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes

    International Nuclear Information System (INIS)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-01-01

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Extensions have also been added for compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. (paper)

  7. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life in financial, environmental and security terms. Annual maximum river flow data from Sabah were fitted to a generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution; however, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters from the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, which yields better predictions of maximum river flow in Sabah.
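
    A minimal random-walk Metropolis-Hastings sampler over the three GEV parameters illustrates the approach, assuming flat priors and scipy's genextreme density; the step size, priors and parameterization are assumptions, not the study's exact setup:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)

    def mh_gev(data, n_iter=20000, step=0.05):
        """Random-walk Metropolis-Hastings over (mu, log sigma, xi)."""
        theta = np.array([np.mean(data), np.log(np.std(data)), 0.1])

        def logpost(t):
            mu, logsig, xi = t
            # scipy's shape parameter is c = -xi in the climatological convention
            return genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(logsig)).sum()

        lp, chain = logpost(theta), []
        for _ in range(n_iter):
            prop = theta + rng.normal(0.0, step, 3)
            lp_prop = logpost(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)
    ```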

  8. LPM-Effect in Monte Carlo Models of Radiative Energy Loss

    CERN Document Server

    Zapp, Korinna C; Wiedemann, Urs Achim

    2009-01-01

    Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.

  9. QCD Monte-Carlo model tuning studies with CMS data at 13 TeV

    CERN Document Server

    Sunar Cerci, Deniz

    2018-01-01

    New CMS PYTHIA 8 event tunes are presented. The new tunes are obtained from minimum-bias and underlying-event observables, using Monte Carlo configurations with consistent parton distribution functions and strong coupling constant values in the matrix element and the parton shower. Validation and performance studies are presented by comparing the predictions of the new tunes to various soft- and hard-QCD measurements at 7, 8 and 13 TeV with CMS.

  10. Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport

    International Nuclear Information System (INIS)

    Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.

    2002-01-01

    After a nuclear accident releasing radioactive material into the environment, external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations, the external gamma exposure from radionuclides deposited on the surfaces of the urban-industrial environment yields the dominant contribution to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculating the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods have turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods, a complex environment can be set up in which the photon transport can be solved reliably. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; good examples are a factory or a supermarket. Industrial environments can differ considerably from urban ones in the types, structures and dimensions of their buildings. These variations affect the radiation field of the environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments.

  11. Tracking student progress in a game-like physics learning environment with a Monte Carlo Bayesian knowledge tracing model

    Science.gov (United States)

    Gweon, Gey-Hong; Lee, Hee-Sun; Dorsey, Chad; Tinker, Robert; Finzer, William; Damelin, Daniel

    2015-03-01

    In tracking student learning in online learning systems, the Bayesian knowledge tracing (BKT) model is popular. However, the model has well-known problems, such as the identifiability problem and the empirical degeneracy problem; understanding of these problems remains limited, and proposed solutions remain subjective. Here, we analyze the log data from an online physics learning program with our new model, a Monte Carlo BKT model. With this new approach, we are able to perform a completely unbiased analysis, which can then be used for classifying student learning patterns and performances. Furthermore, a theoretical analysis of the BKT model and our computational work shed new light on the nature of the aforementioned problems. This material is based upon work supported by the National Science Foundation under Grants REC-1147621 and REC-1435470.
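
    For reference, the deterministic core of BKT is a two-step Bayes update per response; a Monte Carlo BKT analysis, as described above, would propagate many sampled parameter sets through this update rather than a single point estimate. The parameter values here are illustrative:

    ```python
    def bkt_update(p_know, correct, p_learn=0.1, p_guess=0.2, p_slip=0.1):
        """Posterior P(known) given one observed answer, followed by the
        learning transition of Bayesian knowledge tracing."""
        if correct:
            evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
            posterior = p_know * (1 - p_slip) / evidence
        else:
            evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
            posterior = p_know * p_slip / evidence
        return posterior + (1 - posterior) * p_learn

    print(bkt_update(0.3, correct=True))  # posterior rises, then learning is applied
    ```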

  12. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Background: The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood, in two widely studied Hydrophobic Polar (HP) lattice models. Results: We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can relatively easily fold sequences which exhibit significant interaction between termini in the hydrophobic core. Conclusion: We demonstrate that REMC utilizing the pull move
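
    The replica-exchange ingredient can be sketched independently of the HP move set: neighbouring temperature replicas attempt swaps with the detailed-balance acceptance probability min(1, exp((beta_i - beta_j) * (E_i - E_j))). A minimal sketch that swaps temperatures rather than configurations, which is equivalent:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def try_swap(energies, betas, i):
        """Attempt a replica-exchange swap between replicas i and i+1."""
        log_acc = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if np.log(rng.random()) < log_acc:
            betas[i], betas[i + 1] = betas[i + 1], betas[i]  # accepted swap
        return betas
    ```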

  13. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility of the Joint Institute for Power and Nuclear Research of Belarus, with the long-term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions; in the latter two cases a deuteron beam is used to generate the neutrons. This study is part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility, and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, an extension of MCNP with burnup capability and additional features for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  15. Health Promotion Efforts as Predictors of Physical Activity in Schools: An Application of the Diffusion of Innovations Model

    Science.gov (United States)

    Glowacki, Elizabeth M.; Centeio, Erin E.; Van Dongen, Daniel J.; Carson, Russell L.; Castelli, Darla M.

    2016-01-01

    Background: Implementing a comprehensive school physical activity program (CSPAP) effectively addresses public health issues by providing opportunities for physical activity (PA). Grounded in the Diffusion of Innovations model, the purpose of this study was to identify how health promotion efforts facilitate opportunities for PA. Methods: Physical…

  16. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    Science.gov (United States)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large fluxes of energetic electrons ejected from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing computation of ionization rate altitude profiles between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library provides an end-user interface to the model.

  17. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, estimating the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach to log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R.

  18. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires identifying the most appropriate number of mixture components, so that the resulting mixture model fits the data following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with the Markov Chain Monte Carlo (MCMC) concept and is used by some researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the concepts of birth/death and split-merge with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split/merge of components, and birth/death of empty components. The development of the RJMCMC algorithm needs to be adapted to the case under study. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of components is not known with certainty.

  19. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors

    International Nuclear Information System (INIS)

    Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy

    2015-01-01

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves

  20. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Thilo; Jäger, Christof M. [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Jordan, Meredith J. T. [School of Chemistry, University of Sydney, Sydney, NSW 2006 (Australia); Clark, Timothy, E-mail: tim.clark@fau.de [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Centre for Molecular Design, University of Portsmouth, Portsmouth PO1 2DY (United Kingdom)

    2015-07-28

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
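
    As a rough illustration of the "Monte Carlo moves as pseudo-time" idea described above, the sketch below moves carriers along a 1D channel with a linear source-drain potential, injects and depletes them at electrode sites, and accepts hops with a Metropolis-like rule in which the source-gate voltage plays a temperature-like role. The geometry, the exact role of V_sg, and all numbers are invented placeholders, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 100                                  # lattice sites from source to drain
V_sd, V_sg = 1.0, 0.5                    # source-drain and source-gate voltages
potential = -V_sd * np.arange(L) / L     # linear potential drop along the channel

carriers = set(int(s) for s in rng.choice(L // 2, size=10, replace=False))
drain_events = 0

for move in range(100_000):              # Monte Carlo moves act as pseudo-time
    if not carriers:
        break
    i = rng.choice(list(carriers))
    j = i + rng.choice([-1, 1])          # nearest-neighbour hop
    if j < 0 or j in carriers:
        continue                         # blocked: carrier stays put
    if j >= L:                           # carrier reaches the drain: deplete it,
        carriers.remove(i)               # count a transfer event, and re-inject
        drain_events += 1                # a carrier at the source electrode
        carriers.add(int(rng.integers(0, 5)))
        continue
    dE = potential[j] - potential[i]
    if rng.uniform() < min(1.0, np.exp(-dE / V_sg)):   # Metropolis-like criterion
        carriers.remove(i)
        carriers.add(j)

print("drain events per 1e5 moves (pseudo-current):", drain_events)
```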

  1. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
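
    The nested (double) Monte Carlo idea can be illustrated outside the point-process setting. The sketch below calibrates a plug-in Monte Carlo GOF p-value for a simple exponential model with a Kolmogorov-Smirnov statistic; the spatial point process machinery of the paper is replaced by this toy model purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def ks_stat(data, rate):
    """Kolmogorov-Smirnov distance between data and Exponential(rate)."""
    x = np.sort(data)
    cdf = 1.0 - np.exp(-rate * x)
    return np.max(np.abs(cdf - np.arange(1, len(x) + 1) / len(x)))

def plug_in_pvalue(data, n_mc=100):
    """Plug-in Monte Carlo GOF test: the MLE is used to simulate new data."""
    rate = 1.0 / data.mean()
    t_obs = ks_stat(data, rate)
    t_sim = []
    for _ in range(n_mc):
        sim = rng.exponential(1.0 / rate, size=len(data))
        t_sim.append(ks_stat(sim, 1.0 / sim.mean()))   # re-estimate each time
    return np.mean(np.array(t_sim) >= t_obs)

def nested_pvalue(data, n_outer=100, n_inner=100):
    """Nested Monte Carlo: calibrate the distribution of the plug-in p-value."""
    p_obs = plug_in_pvalue(data, n_inner)
    rate = 1.0 / data.mean()
    p_sim = [plug_in_pvalue(rng.exponential(1.0 / rate, size=len(data)), n_inner)
             for _ in range(n_outer)]
    return np.mean(np.array(p_sim) <= p_obs)

data = rng.exponential(2.0, size=50)
print("plug-in p-value:", plug_in_pvalue(data))
print("nested p-value :", nested_pvalue(data))
```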

  2. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within {+-}4%) than the MCNP4c2 results and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
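
    For context, the dosimetric parameters discussed above (dose-rate constant Λ, radial dose function g_L, and 2D anisotropy function F) enter the 2D dose-rate equation of the AAPM TG-43 formalism; this is the standard form of the protocol, not reproduced from this paper:

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad (r_0,\theta_0) = (1\,\mathrm{cm},\ \pi/2),
```

    where S_K is the air-kerma strength and G_L is the line-source geometry function.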

  3. Monte Carlo simulations on the Black-Litterman model with absolute views: a comparison with the Markowitz model and an equal weight asset allocation strategy

    OpenAIRE

    Fernández Pibrall, Eric

    2015-01-01

    The focus of this degree thesis is on the Black-Litterman asset allocation model applied to recent popular investment vehicles such as Exchange Traded Funds (ETFs), with absolute views generated by Monte Carlo simulations that allow the inclusion of correlations. The sensitivity of the scalar (which is a measure of the investor's confidence in the prior estimates) contained in the Black-Litterman model will be analyzed over several periods of time and the results obtained compared with t...
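
    For reference, the posterior expected returns in the Black-Litterman model follow the standard master formula; below is a minimal numpy sketch with invented two-asset inputs, where `tau` is the scalar whose sensitivity the thesis studies:

```python
import numpy as np

tau = 0.05                                        # confidence scalar
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])    # prior covariance of 2 ETFs
pi = np.array([0.05, 0.07])                       # equilibrium (prior) returns
P = np.eye(2)                                     # absolute views: one per asset
Q = np.array([0.06, 0.05])                        # view returns (e.g. MC draws)
Omega = np.diag(np.diag(tau * P @ Sigma @ P.T))   # common choice of view variance

# Posterior mean: [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
post_mean = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ Q)
print("posterior expected returns:", post_mean)
```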

  4. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    Science.gov (United States)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach for the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. The results of imitating human skin reflectance spectra, the corresponding skin colours, and examples of 3D face rendering are presented and compared with the results of phantom studies.
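
    At the core of such BSSRDF modelling is a Monte Carlo photon random walk. The sketch below is a deliberately simplified version (semi-infinite homogeneous slab, isotropic scattering, no refractive-index mismatch or Russian roulette; all coefficients are invented placeholders) that estimates diffuse reflectance:

```python
import numpy as np

rng = np.random.default_rng(3)

mu_a, mu_s = 0.1, 10.0              # absorption / scattering coefficients [1/mm]
mu_t = mu_a + mu_s
albedo = mu_s / mu_t

n_photons, reflected_weight = 5000, 0.0
for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0        # depth, direction cosine, photon weight
    while w > 1e-4:
        z += -np.log(rng.uniform()) / mu_t * uz   # free path to next event
        if z <= 0.0:                # photon escaped through the surface
            reflected_weight += w
            break
        w *= albedo                 # partial absorption at each interaction
        uz = 2.0 * rng.uniform() - 1.0            # isotropic re-scattering

print("diffuse reflectance ~", reflected_weight / n_photons)
```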

  5. Test of statistical models of the ν-delayed neutron emission by application of the Monte Carlo method

    International Nuclear Information System (INIS)

    Ohm, H.

    1982-01-01

    Using the example of the delayed neutron spectrum of 137I (half-life 24 s), the statistical model is tested with regard to its applicability. A computer code was developed which simulates delayed neutron spectra by the Monte Carlo method under the assumption that the transition probabilities of the β and neutron decays obey the Porter-Thomas distribution, while the spacings of the neutron-emitting levels follow a Wigner distribution. Gamow-Teller β-transitions and first-forbidden β-transitions from the precursor nucleus to the emitting nucleus were considered. (orig./HSI) [de
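
    The two statistical-model ingredients named above are both easy to sample. The sketch below (a toy level scheme, not the thesis code) draws transition widths from a Porter-Thomas distribution, which is a chi-squared distribution with one degree of freedom, and level spacings from the Wigner surmise P(s) = (πs/2) exp(-πs²/4) via its inverse CDF:

```python
import numpy as np

rng = np.random.default_rng(4)

n_levels = 1000
mean_spacing = 10.0                 # eV, illustrative value

# Porter-Thomas: relative widths follow chi-squared with 1 degree of freedom.
rel_widths = rng.chisquare(df=1, size=n_levels)

# Wigner surmise: CDF(s) = 1 - exp(-pi*s^2/4)  =>  s = sqrt(-(4/pi) ln(1-u)).
u = rng.uniform(size=n_levels)
spacings = np.sqrt(-4.0 / np.pi * np.log1p(-u)) * mean_spacing
level_energies = np.cumsum(spacings)

print("mean spacing:", spacings.mean(), " mean relative width:", rel_widths.mean())
```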

  6. Monte Carlo semi-empirical model for Si(Li) x-ray detector: Differences between nominal and fitted parameters

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D' Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)

    2013-05-06

    A detailed characterization of an x-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector made it possible to determine the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.

  7. FLUKA Monte Carlo Modelling of the CHARM Facility’s Test Area: Update of the Radiation Field Assessment

    CERN Document Server

    Infantino, Angelo

    2017-01-01

    The present Accelerator Note is a follow-up to the previous report CERN-ACC-NOTE-2016-12345. In the present work, the FLUKA Monte Carlo model of CERN's CHARM facility has been updated to the most recent configuration of the facility, including new test positions, a global refinement of the FLUKA geometry, and a careful review of the transport and physics parameters. Several configurations of the facility, in terms of target material and movable shielding, have been simulated. The full set of results is reported below and can serve as a reference guide for any potential user of the facility.

  8. Multi-step magnetization of the Ising model on a Shastry-Sutherland lattice: a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Huang, W C; Huo, L; Tian, G; Qian, H R; Gao, X S; Qin, M H; Liu, J-M

    2012-01-01

    The magnetization behaviors and spin configurations of the classical Ising model on a Shastry-Sutherland lattice are investigated using Monte Carlo simulations, in order to understand the fascinating magnetization plateaus observed in TmB4 and other rare-earth tetraborides. The simulations reproduce the 1/2 magnetization plateau by taking into account the dipole-dipole interaction. In addition, a narrow 2/3 magnetization step at low temperature is predicted in our simulation. The multi-step magnetization can be understood as the consequence of the competition among the spin-exchange interaction, the dipole-dipole interaction, and the static magnetic energy.

  9. Effortful echolalia.

    Science.gov (United States)

    Hadano, K; Nakamura, H; Hamanaka, T

    1998-02-01

    We report three cases of effortful echolalia in patients with cerebral infarction. The clinical picture of the speech disturbance is associated with Type 1 Transcortical Motor Aphasia (TCMA, Goldstein, 1915). The patients always spoke nonfluently with loss of speech initiative, dysarthria, dysprosody, agrammatism, and increased effort, and were unable to repeat sentences longer than four to six words. In conversation, they first repeated a few words spoken to them, and then produced self-initiated speech. The initial repetition as well as the subsequent self-initiated speech, which were realized equally laboriously, can be regarded as mitigated echolalia (Pick, 1924). They were always aware of their own echolalia and tried, without success, to control it. These cases demonstrate that neither the ability to repeat nor fluent speech is always necessary for echolalia. The possibility that a lesion in the left medial frontal lobe, including the supplementary motor area, plays an important role in effortful echolalia is discussed.

  10. Evaluation and Monte Carlo modelling of the response function of the Leake neutron area survey instrument

    International Nuclear Information System (INIS)

    Tagziria, H.; Tanner, R.J.; Bartlett, D.T.; Thomas, D.J.

    2004-01-01

    All available measured data on the response characteristics of the Leake counter have been gathered together. These data, augmented by previously unpublished work, have been compared with Monte Carlo simulations of the instrument's response characteristics in the energy range from thermal to 20 MeV. A response function has been derived, which is recommended as the best currently available for the instrument. Folding this function with workplace energy distributions enabled an assessment of the impact of the new response function. Similar work, to be published separately, has been carried out for the NM2 and Studsvik 2202D neutron area survey instruments.
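
    The folding step mentioned above is a simple quadrature: the predicted instrument reading is the integral of the response function weighted by the workplace fluence spectrum. A minimal numerical sketch (both curves are invented placeholders, not the Leake counter data):

```python
import numpy as np

E = np.logspace(-8, 1.3, 200)                 # energy grid [MeV], thermal-20 MeV
response = 1.0 / (1.0 + (E / 0.1) ** 0.5)     # hypothetical response per fluence
spectrum = E * np.exp(-E / 2.0)               # hypothetical workplace spectrum
spectrum /= np.trapz(spectrum, E)             # normalise to unit fluence

# Predicted reading per unit fluence: integral of response * spectrum over E.
reading = np.trapz(response * spectrum, E)
print("predicted instrument reading per unit fluence:", reading)
```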

  11. Monte Carlo study of four-spinon dynamic structure function in antiferromagnetic Heisenberg model

    International Nuclear Information System (INIS)

    Si-Lakhal, B.; Abada, A.

    2003-11-01

    Using Monte Carlo integration methods, we describe the behavior of the exact four-spinon dynamic structure function S4 in the antiferromagnetic spin-1/2 Heisenberg quantum spin chain as a function of the neutron energy ω and momentum transfer k. We also determine the four-spinon continuum, the region in the (k, ω) plane outside which S4 is identically zero. In each case, the behavior of S4 is shown to be consistent with the four-spinon continuum and compared with that of the exact two-spinon dynamic structure function S2. Overall shape similarity is noted. (author)

  12. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    Science.gov (United States)

    Shang, Yu; Li, Ting; Chen, Lei; Lin, Yu; Toborek, Michal; Yu, Guoqiang

    2014-05-01

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αDB) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αDB. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse stroke model. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αDB, and the errors in extracting αDB from noisy data were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αDB using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those from the simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  13. Study into critical properties of 3D frustrated Heisenberg model on triangular lattice by the use of Monte Carlo methods

    International Nuclear Information System (INIS)

    Murtazaev, A.K.; Ramazanov, M.K.; Badiev, M.K.

    2009-01-01

    The critical properties of the 3D frustrated antiferromagnetic Heisenberg model on a triangular lattice are investigated by the replica Monte Carlo method. The static magnetic and chiral critical exponents of the heat capacity α = 0.05(2), magnetization β = 0.30(1), β_k = 0.52(2), susceptibility γ = 1.36(2), γ_k = 0.93(3), and correlation radius ν = 0.64(1), ν_k = 0.64(2) are calculated using finite-size scaling theory. The critical Fisher exponents η = -0.06(3), η_k = 0.63(4) for this model are estimated for the first time. The 3D frustrated Heisenberg model on the triangular lattice is shown to form a new universality class of critical behavior. The type of interlayer exchange interaction is found to influence the universality class of the antiferromagnetic Heisenberg model on a triangular lattice.
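
    For context, exponent ratios such as those above are typically extracted from the size dependence of observables at the critical point via the standard finite-size scaling relations (standard textbook forms, not taken from this paper):

```latex
m(L, T_c) \propto L^{-\beta/\nu}, \qquad
\chi(L, T_c) \propto L^{\gamma/\nu}, \qquad
C(L, T_c) \propto L^{\alpha/\nu},
```

    where L is the linear lattice size; the chiral exponents follow from the analogous chiral order parameter and susceptibility.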

  14. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Science.gov (United States)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  15. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Directory of Open Access Journals (Sweden)

    Lin Hui

    2017-01-01

    Full Text Available Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  16. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-01-01

    This thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: 60Co. Finally, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  17. Diagrammatic Monte Carlo simulations of staggered fermions at finite coupling

    CERN Document Server

    Vairinhos, Helvio

    2016-01-01

    Diagrammatic Monte Carlo has been a very fruitful tool for taming, and in some cases even solving, the sign problem in several lattice models. We have recently proposed a diagrammatic model for simulating lattice gauge theories with staggered fermions at arbitrary coupling, which extends earlier successful efforts to simulate lattice QCD at finite baryon density in the strong-coupling regime. Here we present the first numerical simulations of our model, using worm algorithms.

  18. Nonlinear joint models for individual dynamic prediction of risk of death using Hamiltonian Monte Carlo: application to metastatic prostate cancer

    Directory of Open Access Journals (Sweden)

    Solène Desmée

    2017-07-01

    Full Text Available Abstract Background Joint models of longitudinal and time-to-event data are increasingly used to perform individual dynamic prediction of the risk of an event. However, the difficulty of performing inference in nonlinear models and of calculating the distribution of individual parameters has long limited this approach to linear mixed-effect models for the longitudinal part. Here we use a Bayesian algorithm and a nonlinear joint model to calculate individual dynamic predictions. We apply this approach to predict the risk of death in metastatic castration-resistant prostate cancer (mCRPC) patients with frequent Prostate-Specific Antigen (PSA) measurements. Methods A joint model is built using a large population of 400 mCRPC patients in which PSA kinetics is described by a biexponential function and the hazard function is PSA-dependent. Using the Hamiltonian Monte Carlo algorithm implemented in the Stan software, with the population parameters estimated in this population as priors, the a posteriori distribution of the hazard function is computed for a new patient given his PSA measurements up to a given landmark time. Time-dependent area under the ROC curve (AUC) and Brier score are derived to assess the discrimination and calibration of the model predictions, first on 200 simulated patients and then on 196 real patients not included in building the model. Results Satisfactory coverage probabilities of Monte Carlo prediction intervals were obtained for the longitudinal and hazard functions. Individual dynamic predictions provided good predictive performance for landmark times larger than 12 months and horizon times of up to 18 months, for both simulated and real data. Conclusions As nonlinear joint models can characterize the kinetics of biomarkers and their link with a time-to-event, this approach could be useful to improve patients' follow-up and the early detection of the patients most at risk.

  19. Economic effort management in multispecies fisheries: the FcubEcon model

    DEFF Research Database (Denmark)

    Hoff, Ayoe; Frost, Hans; Ulrich, Clara

    2010-01-01

    Applying single-species assessment and quotas in multispecies fisheries can lead to overfishing or quota underutilization, because advice can be conflicting when different stocks are caught within the same fishery. During the past decade, increased focus on this issue has resulted in the development of management frameworks that allocate fishing effort in an economically optimal manner, in both effort-management and single-quota management settings.

  20. The Effort-reward Imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women

    Science.gov (United States)

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-01-01

    We examined the influence of work-related effort–reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort–Reward Imbalance (ERI) model, on the hypothalamic–pituitary–adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort–reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort–reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort–reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported. PMID:25228138

  1. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    Science.gov (United States)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical technique with which we can accurately track photon trajectories through the simulation region.
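
    In such photon-transport simulations, the asymmetry factor g mentioned above is usually handled by sampling scattering angles from the Henyey-Greenstein phase function; the sketch below uses the standard inversion formula (a generic technique, not this paper's code):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_hg_costheta(g, n):
    """Sample scattering-angle cosines from the Henyey-Greenstein phase
    function with asymmetry factor g; g = 0 reduces to isotropic scattering."""
    xi = rng.uniform(size=n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

cos_t = sample_hg_costheta(g=0.9, n=100_000)
print("sample mean cosine (should be ~0.9):", cos_t.mean())
```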

  2. A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models

    Science.gov (United States)

    Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele

    2017-11-01

    Extreme and rare events are a challenging topic in the field of turbulence. Investigating such instances with traditional numerical tools is a notoriously difficult task, as those tools fail to systematically sample the fluctuations around them. We propose instead that an importance-sampling Monte Carlo method can selectively highlight extreme events in remote areas of phase space and induce their occurrence. We present a new computational approach, based on the path-integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers equation, subjected to random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
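
    The HMC algorithm at the heart of this approach alternates momentum refreshment, leapfrog integration of Hamiltonian dynamics, and a Metropolis accept/reject step. A bare-bones sketch for a toy one-dimensional action S(x) = x⁴/4 (standing in for the discretized path-integral action; all tuning values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def grad_S(x):
    return x ** 3            # gradient of the toy action S(x) = x**4 / 4

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.standard_normal(x.shape)         # refresh momenta
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_S(x_new)       # leapfrog integration
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new -= eps * grad_S(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_S(x_new)
    dH = (0.25 * x_new**4 + 0.5 * p_new**2) - (0.25 * x**4 + 0.5 * p**2)
    return x_new if np.log(rng.uniform()) < -dH.sum() else x   # accept/reject

x, samples = np.zeros(1), []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x[0])
print("<x^2> estimate:", np.mean(np.array(samples[1000:]) ** 2))
```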

  3. Monte Carlo modelling of impurity ion transport for a limiter source/sink

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Farrell, C.; Hoskins, S.; Wood, L.

    1988-01-01

    In relating the impurity influx Φ_I(0) (atoms per second) into a plasma from the edge to the central impurity ion density n_I(0) (ions·m⁻³), it is necessary to know the value of τ_I^SOL, the average dwell time of impurity ions in the scrape-off layer. It is usually assumed that τ_I^SOL = L_c/c_s, the hydrogenic dwell time, where L_c is the limiter connection length and c_s is the hydrogenic ion acoustic speed. Monte Carlo ion transport results are reported here which show that, for a wall (uniform) influx, τ_I^SOL is longer than L_c/c_s, while for a limiter influx it is shorter. Thus, for a limiter influx, n_I(0) is predicted to be smaller than the reference value. Impurities released from the limiter form ever larger 'clouds' of successively higher ionization stages. These are reproduced by the Monte Carlo code, as are the cloud shapes for a localized impurity injection far from the limiter. (author). 23 refs, 18 figs, 6 tabs

  4. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling

  5. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans

    2015-01-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
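
    The percentile-ratio logic behind the chemical-specific adjustment factor (CSAF) described above is straightforward to reproduce schematically. The sketch below propagates invented lognormal interindividual variation through two toy metabolic steps and takes percentile ratios of the resulting dose metric; it illustrates the method only, not the paper's PBK model or parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 100_000
activation = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # e.g. P450-mediated step
sulfation = rng.lognormal(mean=0.0, sigma=0.4, size=n)    # e.g. SULT-mediated step
metabolite_formation = activation * sulfation             # toy internal dose metric

# CSAF: percentile of the population distribution divided by the median.
p50, p90, p99 = np.percentile(metabolite_formation, [50, 90, 99])
print("CSAF (90th/50th):", p90 / p50)   # compare with the default factor 3.16
print("CSAF (99th/50th):", p99 / p50)
```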

  6. Reviewing the effort-reward imbalance model: drawing up the balance of 45 empirical studies

    NARCIS (Netherlands)

    Vegchel, van N.; Jonge, de J.; Bosma, H.; Schaufeli, W.B.

    2005-01-01

    The present paper provides a review of 45 studies on the Effort–Reward Imbalance (ERI) Model published from 1986 to 2003 (inclusive). In 1986, the ERI Model was introduced by Siegrist et al. (Biological and Psychological Factors in Cardiovascular Disease, Springer, Berlin, 1986, pp. 104–126; Social

  7. Initial validation of 4D-model for a clinical PET scanner using the Monte Carlo code gate

    International Nuclear Information System (INIS)

    Vieira, Igor F.; Lima, Fernando R.A.; Gomes, Marcelo S.; Vieira, Jose W.; Pacheco, Ludimila M.; Chaves, Rosa M.

    2011-01-01

    Several dedicated computing tools based on Monte Carlo techniques (SimSET, SORTEO, SIMIND, GATE) are currently available for building exposure computational models (ECMs) for emission tomography (PET and SPECT). This paper is divided into two steps: (1) using the dedicated code GATE (Geant4 Application for Tomographic Emission) to build a 4D model (where the fourth dimension is time) of a clinical PET scanner from General Electric, the GE ADVANCE, simulating the geometric and electronic structures of this scanner as well as some 4D phenomena, for example the rotating gantry; (2) evaluating the performance of the model built here in reproducing the noise equivalent count rate (NEC) test based on the NEMA Standards Publication NU 2-2007 protocols for this tomograph. The results of steps (1) and (2) will be compared with experimental and theoretical values from the literature, showing the actual state of the art of validation. (author)

  8. Development and experimental validation of a monte carlo modeling of the neutron emission from a d-t generator

    Energy Technology Data Exchange (ETDEWEB)

    Remetti, Romolo; Lepore, Luigi [Sapienza University of Rome, Dept. SBAI, Via Antonio Scarpa 14, 00161 Rome (Italy); Cherubini, Nadia [ENEA CRE Casaccia, Nuclear Material Characterization Laboratory and Nuclear Waste Management, Via Anguillarese 301, 00123 Rome (Italy)

    2017-01-11

    Extensive use of Monte Carlo simulations led to the development of an MCNPX input deck for the Thermo Scientific MP320 neutron generator. This input deck is currently utilized at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular explosives and drug detection by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components and to the potentialities offered by the MCNPX code. Validation of the model was obtained by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to those results, suggesting a procedure that might be extended to different models of neutron generators.

  9. Initial validation of 4D-model for a clinical PET scanner using the Monte Carlo code gate

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Igor F.; Lima, Fernando R.A.; Gomes, Marcelo S., E-mail: falima@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Vieira, Jose W.; Pacheco, Ludimila M. [Instituto Federal de Educacao, Ciencia e Tecnologia (IFPE), Recife, PE (Brazil); Chaves, Rosa M. [Instituto de Radium e Supervoltagem Ivo Roesler, Recife, PE (Brazil)

    2011-07-01

    Several dedicated computing tools based on Monte Carlo techniques (SimSET, SORTEO, SIMIND, GATE) are currently available for building exposure computational models (ECMs) for emission tomography (PET and SPECT). This paper is divided into two steps: (1) using the dedicated code GATE (Geant4 Application for Tomographic Emission) to build a 4D model (where the fourth dimension is time) of a clinical PET scanner from General Electric, the GE ADVANCE, simulating the geometric and electronic structures of this scanner as well as some 4D phenomena, for example the rotating gantry; (2) evaluating the performance of the model built here in reproducing the noise equivalent count rate (NEC) test based on the NEMA Standards Publication NU 2-2007 protocols for this tomograph. The results of steps (1) and (2) will be compared with experimental and theoretical values from the literature, showing the actual state of the art of validation. (author)

  10. Modeling premartensitic effects in Ni2MnGa: A mean-field and Monte Carlo simulation study

    DEFF Research Database (Denmark)

    Castan, T.; Vives, E.; Lindgård, Per-Anker

    1999-01-01

    The degenerate Blume-Emery-Griffiths model for martensitic transformations is extended by including both structural and magnetic degrees of freedom in order to elucidate premartensitic effects. Special attention is paid to the effect of the magnetoelastic coupling in Ni2MnGa. The microscopic model is constructed and justified based on the analysis of the experimentally observed strain variables and precursor phenomena. The description includes the (local) tetragonal distortion, the amplitude of the plane-modulating strain, and the magnetization. The model is solved by means of mean-field theory and Monte Carlo simulation, which reveal anomalies in quantities such as the specific heat, not always associated with a true phase transition. The main conclusion is that premartensitic effects result from the interplay between the softness of the anomalous phonon driving the modulation and the magnetoelastic coupling. In particular, the premartensitic transition occurs when...

  11. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    Directory of Open Access Journals (Sweden)

    M. Pecchia

    2011-01-01

    Full Text Available The geometrical complexity and the peculiarities of the Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of the Atucha-2 PHWR were developed using both the MCNP5 and KENO-VI codes. The developed models were applied to calculating reactor criticality states at beginning of life, reactor cell constants, and control rod volumes. The last two applications were relevant for performing subsequent three-dimensional neutron kinetic analyses, since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code HELIOS. The results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.

  12. Development and experimental validation of a monte carlo modeling of the neutron emission from a d-t generator

    Science.gov (United States)

    Remetti, Romolo; Lepore, Luigi; Cherubini, Nadia

    2017-01-01

    Extensive use of Monte Carlo simulations led to the development of an MCNPX input deck for the Thermo Scientific MP320 neutron generator. This input deck is currently utilized at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular explosives and drug detection by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components and to the potentialities offered by the MCNPX code. Validation of the model was obtained by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to those results, suggesting a procedure that might be extended to different models of neutron generators.

  13. Kinetic Monte Carlo simulations of travelling pulses and spiral waves in the lattice Lotka-Volterra model.

    Science.gov (United States)

    Makeev, Alexei G; Kurkina, Elena S; Kevrekidis, Ioannis G

    2012-06-01

    Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed sometimes persist for hundreds of rotations, but they are ultimately unstable and break up (because of fluctuations and interactions between neighboring fronts), giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. Interestingly, travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, owing to the non-local reaction kinetics.
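
    A minimal stochastic lattice Lotka-Volterra sketch with random sequential updates is shown below; the rules (prey reproduction into empty neighbors, predation with offspring, spontaneous predator death) and all rates are simplified placeholders, not the rates of the published kinetic MC model:

```python
import numpy as np

rng = np.random.default_rng(8)

EMPTY, PREY, PRED = 0, 1, 2
L = 64
lattice = rng.choice([EMPTY, PREY, PRED], size=(L, L), p=[0.6, 0.3, 0.1])
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

for step in range(200 * L * L):
    i, j = rng.integers(L), rng.integers(L)
    di, dj = MOVES[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L          # periodic boundaries
    s, t = lattice[i, j], lattice[ni, nj]
    if s == PREY and t == EMPTY and rng.uniform() < 0.5:
        lattice[ni, nj] = PREY                   # prey reproduction
    elif s == PRED and t == PREY and rng.uniform() < 0.5:
        lattice[ni, nj] = PRED                   # predation with offspring
    elif s == PRED and rng.uniform() < 0.02:
        lattice[i, j] = EMPTY                    # spontaneous predator death

print("prey fraction:", (lattice == PREY).mean())
print("predator fraction:", (lattice == PRED).mean())
```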

  14. A mathematical model for the kidney and estimative of the specific absorbed fractions by Monte Carlo method

    International Nuclear Information System (INIS)

    Todo, A.S.

    1980-01-01

    Presently, estimates of specific absorbed fractions in various organs of a heterogeneous phantom are based on Monte Carlo calculations for monoenergetic photons uniformly distributed in the organs of an adult phantom. However, it is known that the kidney and some other organs (for example, the skeleton) do not retain radionuclides uniformly throughout their internal regions. We therefore developed a model of the kidney that includes the cortex, medulla, and collecting region. This model was used to estimate the specific absorbed fractions, for monoenergetic photons or electrons, in various organs of a heterogeneous phantom, with sources uniformly distributed in each region of the kidney. All results obtained in this work were compared with those using a homogeneous kidney model as presented in ORNL-5000. (Author) [pt

  15. LMDzT-INCA dust forecast model developments and associated validation efforts

    International Nuclear Information System (INIS)

    Schulz, M; Cozic, A; Szopa, S

    2009-01-01

    The nudged-atmosphere global climate model LMDzT-INCA is used to forecast global dust fields. Evaluation is undertaken retrospectively for the forecast results of the year 2006. For this purpose, AERONET/Photons sites in Northern Africa and on the Arabian Peninsula are chosen, where aerosol optical depth is dominated by dust. Despite its coarse resolution, the model captures 48% of the day-to-day dust variability near Dakar on the initial day of the forecast. On weekly and monthly scales the model captures 62% and 68% of the variability, respectively. Correlation coefficients between daily AOD values observed and modelled at Dakar decrease from 0.69 for the initial forecast day to 0.59 and 0.41 for two days ahead and five days ahead, respectively. If one requires the model to issue a warning for an exceedance of aerosol optical depth of 0.5 and to issue no warning otherwise, then the model was wrong in 29% of the cases for day 0, 32% for day 2, and 35% for day 5. A reanalysis run with archived ECMWF winds is only slightly better (r = 0.71) but was in error in 25% of the cases. Both the improved simulation of monthly versus daily variability and the deterioration of the forecast with time can be explained by the model's failure to simulate the exact timing of dust events.

  16. Evaluation of Thin Plate Hydrodynamic Stability through a Combined Numerical Modeling and Experimental Effort

    Energy Technology Data Exchange (ETDEWEB)

    Tentner, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Bojanowski, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Feldman, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Wilson, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Solbrekken, G [Univ. of Missouri, Columbia, MO (United States); Jesse, C. [Univ. of Missouri, Columbia, MO (United States); Kennedy, J. [Univ. of Missouri, Columbia, MO (United States); Rivers, J. [Univ. of Missouri, Columbia, MO (United States); Schnieders, G. [Univ. of Missouri, Columbia, MO (United States)

    2017-05-01

    An experimental and computational effort was undertaken to evaluate the capability of fluid-structure interaction (FSI) simulation tools to describe the deflection, due to hydrodynamic forces, of a Missouri University Research Reactor (MURR) fuel element plate redesigned for conversion to low-enriched uranium (LEU) fuel. Experiments involving both flat and curved plates were conducted in a water flow test loop located at the University of Missouri (MU), at conditions and geometries that can be related to the MURR LEU fuel element. A wider channel gap on one side of the test plate and a narrower one on the other represent the differences that could be encountered in a MURR element due to allowed fabrication variability. The difference in the channel gaps leads to a pressure differential across the plate and hence to plate deflection. The plate deflection induced by the pressure difference was measured at specified locations using a laser measurement technique. High-fidelity 3-D simulations of the experiments were performed at MU using the computational fluid dynamics code STAR-CCM+ coupled with the structural mechanics code ABAQUS. Independent simulations of the experiments were performed at Argonne National Laboratory (ANL) using the STAR-CCM+ code and its built-in structural mechanics solver. The simulation results obtained at MU and ANL were compared with the corresponding measured plate deflections.

  17. Are current health behavioral change models helpful in guiding prevention of weight gain efforts?

    Science.gov (United States)

    Baranowski, Tom; Cullen, Karen W; Nicklas, Theresa; Thompson, Deborah; Baranowski, Janice

    2003-10-01

    Effective procedures are needed to prevent the substantial increases in adiposity that have been occurring among children and adults. Behavioral change may occur as a result of changes in variables that mediate interventions. These mediating variables have typically come from the theories or models used to understand behavior. Seven categories of theories and models are reviewed to define the concepts and to identify the motivational mechanism(s), the resources that a person needs for change, the processes by which behavioral change is likely to occur, and the procedures necessary to promote change. Although each model has something to offer obesity prevention, the early promise can be achieved only with substantial additional research in which these models are applied to diet and physical activity in regard to obesity. The most promising avenues for such research seem to be using the latest variants of the Theory of Planned Behavior and Social Ecology. Synergy may be achieved by taking the most promising concepts from each model and integrating them for use with specific populations. Biology-based steps in an eating or physical activity event are identified, and research issues are suggested to integrate behavioral and biological approaches to understanding eating and physical activity behaviors. Social marketing procedures have much to offer in terms of organizing and strategizing behavioral change programs to incorporate these theoretical ideas. More research is needed to assess the true potential for these models to contribute to our understanding of obesity-related diet and physical activity practices, and in turn, to obesity prevention.

  18. Monte Carlo steps per spin vs. time in the master equation II: Glauber kinetics for the infinite-range ising model in a static magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Suhk Kun [Chungbuk National University, Chungbuk (Korea, Republic of)

    2006-01-15

    As an extension of our previous work on the relationship between time in Monte Carlo simulation and time in the continuous master equation for the infinite-range Glauber kinetic Ising model in the absence of a magnetic field, we explored the same model in the presence of a static magnetic field. Monte Carlo steps per spin, as time in the MC simulations, again turn out to be proportional to time in the master equation for the model in relatively large static magnetic fields at any temperature. At and near the critical point in a relatively small magnetic field, the model exhibits a significant finite-size dependence, and the solution to the Suzuki-Kubo differential equation stemming from the master equation needs to be rescaled to fit the Monte Carlo steps per spin for systems with different numbers of spins.
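
    For the infinite-range Glauber model, the Suzuki-Kubo equation referred to above takes the mean-field form dm/dt = -m + tanh((Jm + h)/T). A minimal Euler-stepping sketch (toy parameter values, not those of the paper) against which MC steps per spin would be rescaled:

```python
import numpy as np

J, h, T = 1.0, 0.2, 1.2          # coupling, field, temperature (illustrative)
dt, n_steps = 0.01, 5000
m = 0.9                          # initial magnetization

trajectory = []
for _ in range(n_steps):
    # Suzuki-Kubo mean-field kinetic equation, explicit Euler step.
    m += dt * (-m + np.tanh((J * m + h) / T))
    trajectory.append(m)

print("relaxed magnetization m(t -> inf) ~", trajectory[-1])
```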

  19. Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter

    CERN Document Server

    Adloff, C.; Blaising, J.J.; Drancourt, C.; Espargiliere, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Schlereth, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S.T.; Sosebee, M.; White, A.P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N.K.; Mavromanolakis, G.; Thomson, M.A.; Ward, D.R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Apostolakis, J.; Dotti, A.; Folger, G.; Ivantchenko, V.; Uzhinskiy, V.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G.C.; Dyshkant, A.; Lima, J.G.R.; Zutshi, V.; Hostachy, J.Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Gottlicher, P.; Gunter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.Ch.; Shen, W.; Stamen, R.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G.W.; Kawagoe, K.; Dauncey, P.D.; Magnan, A.M.; Bartsch, V.; Wing, M.; Salvatore, F.; Alamillo, E.Calvo; Fouz, M.C.; Puerta-Pelayo, J.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Tikhomirov, V.; Kiesling, C.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Amjad, M.S.; Bonis, J.; Callier, S.; Conforti di Lorenzo, S.; Cornebise, P.; Doublet, Ph.; Dulucq, F.; Fleury, J.; Frisson, T.; van der Kolk, N.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch.; Poschl, R.; Raux, L.; Rouene, J.; Seguin-Moreau, N.; Anduze, M.; Boudry, V.; Brient, J-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Gotze, M.; Hartbrich, O.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2013-01-01

    Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.

  20. Simulation on Mechanical Properties of Tungsten Carbide Thin Films Using Monte Carlo Model

    Directory of Open Access Journals (Sweden)

    Liliam C. Agudelo-Morimitsu

    2012-12-01

    Full Text Available The aim of this paper is to study the mechanical behavior of a substrate-coating system using simulation methods. The contact stresses and the elastic deformation were analyzed by applying a normal load to the surface of a system consisting of a tungsten carbide (WC) thin film, which is used as a wear-resistant material, and a stainless steel substrate. The analysis is based on Monte Carlo simulations using the Metropolis algorithm. The phenomenon was simulated from an fcc (face-centered cubic) crystalline structure for both the coating and the substrate, assuming that the uniaxial strain is taken along the z-axis. Results were obtained for different values of the normal load applied to the surface of the coating, yielding stress-strain curves. From these curves, a Young's modulus of 600 GPa was obtained, similar to reported values.