WorldWideScience

Sample records for modeling effort monte

  1. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  2. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependencies between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
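
    The failure-biasing idea in this record can be illustrated with a small sketch: bias the embedded Markov-chain choice toward failure in the degraded state and carry the likelihood ratio as a weight, so rare system failures are sampled more often without biasing the estimator. The two-component system, rates, and mission time below are illustrative assumptions, not the example from the paper.

```python
import random
import math

# Hypothetical two-component parallel system: the system is "failed" only when
# both components are down. Rates and mission time are illustrative.
LAM = 1e-3          # component failure rate (per hour)
MU = 1e-1           # component repair rate (per hour)
T_MISSION = 1000.0  # mission time (hours)


def walk(bias=None):
    """One random walk; returns weight * indicator(system failure before T_MISSION).

    If `bias` is given, the failure/repair choice in the one-down state is
    sampled with probability `bias` for failure and the likelihood ratio is
    carried along as a weight (a simple form of failure biasing).
    """
    t, down, weight = 0.0, 0, 1.0
    while t < T_MISSION:
        if down == 0:
            t += random.expovariate(2 * LAM)   # first failure of either component
            down = 1
        else:
            total = LAM + MU
            t += random.expovariate(total)     # holding time in the one-down state
            p_fail = LAM / total               # true probability the next event is a failure
            if bias is None:
                fail = random.random() < p_fail
            else:
                fail = random.random() < bias
                weight *= p_fail / bias if fail else (1 - p_fail) / (1 - bias)
            if fail:
                return weight if t < T_MISSION else 0.0   # system failure
            down = 0                                       # repair completed
    return 0.0


def estimate(n, bias=None):
    scores = [walk(bias) for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, math.sqrt(var / n)   # estimate and its standard error


random.seed(1)
print("analog        :", estimate(20000))
print("failure-biased:", estimate(20000, bias=0.5))
```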

  3. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, and an extrapolation method for realistic Hamiltonians is described. In addition, applications at finite temperature are outlined.

  4. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, and an extrapolation method for realistic Hamiltonians is described. In addition, applications at finite temperature are outlined.

  5. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
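
    The record's setup pairs an Euler time discretization with Monte Carlo sampling, so the overall error splits into a time-discretization bias and a statistical error from finite sampling. Below is a minimal sketch of that split for a one-dimensional SDE with a known exact expectation (geometric Brownian motion), not the infinite-dimensional HJM dynamics treated in the paper; drift, volatility, and step counts are illustrative assumptions.

```python
import numpy as np

def mc_euler(payoff, x0, mu, sigma, T, n_steps, n_paths, rng):
    """Weak Euler-Maruyama approximation of E[payoff(X_T)] for
    dX = mu*X dt + sigma*X dW, plus a statistical-error estimate."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + mu * x * dt + sigma * x * dw             # explicit Euler step
    vals = payoff(x)
    mean = vals.mean()
    stat_err = 1.96 * vals.std(ddof=1) / np.sqrt(n_paths)  # 95% statistical error
    return mean, stat_err

rng = np.random.default_rng(0)
x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
exact = x0 * np.exp(mu * T)                              # E[X_T] known in closed form
for n_steps in (4, 16, 64):
    est, err = mc_euler(lambda x: x, x0, mu, sigma, T, n_steps, 200_000, rng)
    print(f"steps={n_steps:3d}  estimate={est:.5f}  +/-{err:.5f}  bias~{est - exact:+.5f}")
```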

  6. Monte Carlo modeling of eye iris color

    Science.gov (United States)

    Koblova, Ekaterina V.; Bashkatov, Alexey N.; Dolotov, Leonid E.; Sinichkin, Yuri P.; Kamenskikh, Tatyana G.; Genina, Elina A.; Tuchin, Valery V.

    2007-05-01

    Based on the presented two-layer eye iris model, the iris diffuse reflectance has been calculated by a Monte Carlo technique in the spectral range 400-800 nm. The diffuse reflectance spectra have been recalculated in the L*a*b* color coordinate system. The obtained results demonstrate that the iris color coordinates (hue and chroma) can be used for estimation of melanin content in the range of small melanin concentrations, i.e. for estimation of melanin content in blue and green eyes.

  7. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  8. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  9. Monte Carlo modeling and meteor showers

    International Nuclear Information System (INIS)

    Kulikova, N.V.

    1987-01-01

    Prediction of short-lived increases in the cosmic dust influx, of the concentration in the lower thermosphere of atoms and ions of meteor origin, and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  10. Monte Carlo modelling of TRIGA research reactor

    International Nuclear Information System (INIS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-01-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucleaires de la Maamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  11. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  12. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)

    In this paper, we propose a new method of forecasting with a nonlinear time series model using the Monte-Carlo Bootstrap method. This new method gives better results in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and the Monte-Carlo method of forecasting using a ...

  13. The sine Gordon model perturbation theory and cluster Monte Carlo

    CERN Document Server

    Hasenbusch, M; Pinn, K

    1994-01-01

    We study the expansion of the surface thickness in the 2-dimensional lattice Sine Gordon model in powers of the fugacity z. Using the expansion to order z**2, we derive lines of constant physics in the rough phase. We describe and test a VMR cluster algorithm for the Monte Carlo simulation of the model. The algorithm shows nearly no critical slowing down. We apply the algorithm in a comparison of our perturbative results with Monte Carlo data.

  14. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  15. Forecasting with nonlinear time series model: a Monte-Carlo ...

    African Journals Online (AJOL)

    with nonlinear time series model by comparing the RMSE with the traditional bootstrap and Monte-Carlo method of forecasting. We use the logistic smooth transition autoregressive (LSTAR) model as a case study. We first consider a linear model called the AR(p) model of order p which satisfies the following linear ...

  16. Aspects of perturbative QCD in Monte Carlo shower models

    International Nuclear Information System (INIS)

    Gottschalk, T.D.

    1986-01-01

    The perturbative QCD content of Monte Carlo models for high energy hadron-hadron scattering is examined. Particular attention is given to the recently developed backwards evolution formalism for initial state parton showers, and the merging of parton shower evolution with hard scattering cross sections. Shower estimates of K-factors are discussed, and a simple scheme is presented for incorporating 2 → 2 QCD cross sections into shower model calculations without double counting. Additional issues in the development of hard scattering Monte Carlo models are summarized. 69 references, 20 figures

  17. Efforts and models of education for parents

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal

    2010-01-01

    The article reviews models of parent education that are primarily used in Denmark. It places these models within broader perspectives on the education system and the current discourse on holding parents responsible. Publication date: March 2010...

  18. Monte Carlo simulation models of breeding-population advancement.

    Science.gov (United States)

    J.N. King; G.R. Johnson

    1993-01-01

    Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...

  19. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    2014-01-01

    Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate dens...

  20. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    International Nuclear Information System (INIS)

    Stotler, D.P.

    2005-01-01

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model

  1. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  2. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    van der Stoep, A.W.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant. Finance,

  3. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear, multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from geostatistics. The solution is represented by a collection of reservoir models which constitute samples of the posterior distribution.

  4. How to use COSMIC Functional Size in Effort Estimation Models?

    OpenAIRE

    Gencel, Cigdem

    2008-01-01

    Although Functional Size Measurement (FSM) methods have become widely used by software organizations, functional size based effort estimation still needs further investigation. Most of the studies on effort estimation consider the total functional size of the software as the primary input to estimation models, and they mostly focus on identifying the project parameters which might have a significant effect on the size-effort relationship. This study brings suggestions on how to use COSMIC ...

  5. Incorporating Responsiveness to Marketing Efforts When Modeling Brand Choice

    NARCIS (Netherlands)

    D. Fok (Dennis); Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    2001-01-01

    In this paper we put forward a brand choice model which incorporates responsiveness to marketing efforts as a form of structural heterogeneity. We introduce two latent segments of households. The households in the first segment are assumed to respond to marketing efforts while households

  6. Monte Carlo Study of the 3D Thirring Model

    OpenAIRE

    Hands, Simon

    1997-01-01

    I review three different non-perturbative approaches to the three dimensional Thirring model: the 1/N_f expansion, Schwinger-Dyson equations, and Monte Carlo simulation. Simulation results are presented to support the existence of a non-perturbative fixed point at a chiral symmetry breaking phase transition for N_f=2 and 4, but not for N_f=6. Spectrum calculations for N_f=2 reveal conventional level ordering near the transition.

  7. Monte Carlo Modelling of Mammograms: Development and Validation

    International Nuclear Information System (INIS)

    Spyrou, G.; Panayiotakis, G.; Bakas, A.; Tzanakos, G.

    1998-01-01

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors)

  8. Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models

    Science.gov (United States)

    Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun

    2018-03-01

    The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.

  9. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided by using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, the neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transportation and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied, and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.

  10. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)

  11. Monte Carlo modeling of human tooth optical coherence tomography imaging

    Science.gov (United States)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-07-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth.

  12. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear, multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from geostatistics. The geostatistical algorithm learns the multiple-point statistics from prototype models, then generates proposal models which are tested by a Metropolis sampler. The solution of the inverse problem is finally represented by a collection of reservoir models in terms of facies and porosity, which constitute samples of the posterior distribution.

  13. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
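
    As a rough sketch of the design-based idea described here, the following re-generates assignment sequences from the same randomization procedure (complete randomization) and recomputes a statistic built from null-model regression residuals to obtain a Monte Carlo p-value. The data, covariate, and effect size are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial: outcome y, a baseline covariate x, and an observed
# treatment assignment under complete randomization (n/2 per arm).
n = 100
x = rng.normal(size=n)
assign = rng.permutation(np.repeat([0, 1], n // 2))
y = 0.5 * x + 0.3 * assign + rng.normal(size=n)        # true treatment effect 0.3

def statistic(y, x, treat):
    """Treatment/control difference in residuals from the null regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # null-model fit ignores treatment
    resid = y - X @ beta
    return resid[treat == 1].mean() - resid[treat == 0].mean()

t_obs = statistic(y, x, assign)

# Monte Carlo re-randomization: regenerate assignment sequences from the same
# randomization procedure and recompute the statistic under the null.
n_mc = 5000
t_null = np.array([statistic(y, x, rng.permutation(assign)) for _ in range(n_mc)])
p_value = (np.sum(np.abs(t_null) >= abs(t_obs)) + 1) / (n_mc + 1)
print(f"observed statistic = {t_obs:.3f}, Monte Carlo p-value = {p_value:.4f}")
```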

  14. Monte Carlo Shell Model for ab initio nuclear structure

    Directory of Open Access Journals (Sweden)

    Abe T.

    2014-03-01

    We report on our recent application of the Monte Carlo Shell Model to no-core calculations. At the initial stage of the application, we have performed benchmark calculations in the p-shell region. Results are compared with those of the Full Configuration Interaction and No-Core Full Configuration methods. These are found to be consistent with each other within quoted uncertainties when they could be quantified. The preliminary results in N_shell = 5 reveal the onset of a systematic convergence pattern.

  15. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out in Task 4 at DTU, Physical modelling - validation, on the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, with the title Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...

  16. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays

    2016-03-01

    Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov-Chain Monte-Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but additionally they provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines off-line tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well-suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
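
    A minimal sketch of a tempered SMC sampler of the kind described here (off-line likelihood tempering with resampling and a Metropolis rejuvenation step, which also yields a marginal-likelihood estimate) is given below for a toy Gaussian model; it is not the TNT algorithm, and the data, temperature ladder, and proposal scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 10^2).
y = rng.normal(2.0, 1.0, size=50)
prior_mu, prior_sd = 0.0, 10.0

def log_prior(theta):
    return -0.5 * ((theta - prior_mu) / prior_sd) ** 2 - np.log(prior_sd * np.sqrt(2 * np.pi))

def log_lik(theta):
    return -0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1) - 0.5 * len(y) * np.log(2 * np.pi)

n_particles = 2000
temps = np.linspace(0.0, 1.0, 21)       # tempering ladder from prior (0) to posterior (1)
theta = rng.normal(prior_mu, prior_sd, n_particles)
log_evidence = 0.0

for t_prev, t_next in zip(temps[:-1], temps[1:]):
    ll = log_lik(theta)
    logw = (t_next - t_prev) * ll       # incremental importance weights
    log_evidence += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
    w = np.exp(logw - logw.max())
    w /= w.sum()
    theta = theta[rng.choice(n_particles, size=n_particles, p=w)]   # multinomial resampling
    # One random-walk Metropolis rejuvenation step targeting the tempered posterior.
    prop = theta + rng.normal(0.0, 0.5, n_particles)
    log_acc = (log_prior(prop) + t_next * log_lik(prop)) - (log_prior(theta) + t_next * log_lik(theta))
    accept = np.log(rng.uniform(size=n_particles)) < log_acc
    theta = np.where(accept, prop, theta)

print(f"posterior mean ~ {theta.mean():.3f} (sample mean {y.mean():.3f})")
print(f"log marginal likelihood estimate ~ {log_evidence:.2f}")
```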

  17. Quantum Monte Carlo study of the Rabi-Hubbard model

    Science.gov (United States)

    Flottat, Thibaut; Hébert, Frédéric; Rousseau, Valéry G.; Batrouni, George Ghassan

    2016-10-01

    We study, using quantum Monte Carlo (QMC) simulations, the ground state properties of a one-dimensional Rabi-Hubbard model. The model consists of a lattice of Rabi systems coupled by a photon hopping term between near neighbor sites. For large enough coupling between photons and atoms, the phase diagram generally consists of only two phases: a coherent phase and a compressible incoherent one separated by a quantum phase transition (QPT). We show that, as one goes deeper in the coherent phase, the system becomes unstable, exhibiting a divergence of the number of photons. The Mott phases which are present in the Jaynes-Cummings-Hubbard model are not observed in these cases due to the presence of non-negligible counter-rotating terms. We show that these two models become equivalent only when the detuning is negative and large enough, or if the counter-rotating terms are small enough.

  18. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  19. A Monte Carlo methodology for modelling ashfall hazards

    Science.gov (United States)

    Hurst, Tony; Smith, Warwick

    2004-12-01

    We have developed a methodology for quantifying the probability of particular thicknesses of tephra at any given site, using Monte Carlo methods. This is a part of the development of a probabilistic volcanic hazard model (PVHM) for New Zealand, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo procedure allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. This method can handle the effects of multiple volcanic sources, each source with its own characteristics. We accumulate the tephra thicknesses from all sources to estimate the combined ashfall hazard, expressed as the frequency with which any given depth of tephra is likely to be deposited at selected sites. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from sediment cores in Auckland give useful bounds for the likely total volumes erupted from Egmont Volcano (Mt. Taranaki), 280 km away, during the last 130,000 years.
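
    A sketch of the Monte Carlo layer of such a methodology is shown below: eruptive volume, wind, and dispersal parameters are drawn repeatedly from assumed distributions, a thickness at the site is computed for each synthetic eruption, and exceedance probabilities are accumulated. The simple exponential-thinning thickness formula stands in for a dispersal code such as ASHFALL, and all distributions and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative Monte Carlo estimate of ashfall exceedance probabilities at one site.
site_distance_km = 280.0
annual_eruption_rate = 1.0 / 300.0        # assumed mean eruptions per year

def simulate_thickness(n):
    """Tephra thickness (mm) at the site for n synthetic eruptions."""
    volume_km3 = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=n)   # eruptive volume
    wind_toward_site = rng.uniform(size=n) < 0.25                     # wind blows toward site?
    wind_factor = np.where(wind_toward_site, rng.uniform(0.5, 1.5, n), 0.05)
    thinning_km = rng.uniform(50.0, 150.0, n)                         # e-folding distance
    return 1000.0 * volume_km3 * wind_factor * np.exp(-site_distance_km / thinning_km)

thick = simulate_thickness(200_000)
for threshold in (0.1, 1.0, 10.0):        # mm of tephra at the site
    p_per_eruption = np.mean(thick >= threshold)
    annual_p = annual_eruption_rate * p_per_eruption
    if annual_p > 0:
        print(f">= {threshold:5.1f} mm: annual probability {annual_p:.2e}, "
              f"mean return period {1.0 / annual_p:,.0f} yr")
    else:
        print(f">= {threshold:5.1f} mm: not exceeded in this sample")
```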

  20. Quantum Monte Carlo method for models of molecular nanodevices

    Science.gov (United States)

    Arrachea, Liliana; Rozenberg, Marcelo J.

    2005-07-01

    We introduce a quantum Monte Carlo technique to calculate exactly at finite temperatures the Green function of a fermionic quantum impurity coupled to a bosonic field. While the algorithm is general, we focus on the single impurity Anderson model coupled to a Holstein phonon as a schematic model for a molecular transistor. We compute the density of states at the impurity in a large range of parameters, to demonstrate the accuracy and efficiency of the method. We also obtain the conductance of the impurity model and analyze different regimes. The results show that even in the case when the effective attractive phonon interaction is larger than the Coulomb repulsion, a Kondo-like conductance behavior might be observed.

  1. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing the Markov Model and Monte Carlo (MC) simulation techniques. In this article, an effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.

  2. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    Science.gov (United States)

    Metin Elçi, Eren; Weigel, Martin

    2014-05-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.
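
    To make the role of the connectivity query concrete, the sketch below runs heat-bath Sweeny updates of the random-cluster model on a small square lattice but answers each query with a naive breadth-first search; that O(volume) search is precisely the operation the tailored dynamic connectivity algorithm replaces with poly-logarithmic cost. Lattice size and couplings are illustrative assumptions, and this is not the Python module referred to in the record.

```python
import random
from collections import deque

L = 8                 # linear lattice size (open boundaries); illustrative
q, v = 2.0, 1.0       # random-cluster parameters (v = e^K - 1); illustrative

# Edges of the square lattice as pairs of site indices.
def site(x, y): return x * L + y
edges = []
for x in range(L):
    for y in range(L):
        if x + 1 < L: edges.append((site(x, y), site(x + 1, y)))
        if y + 1 < L: edges.append((site(x, y), site(x, y + 1)))

occupied = set()      # currently occupied edges

def connected_without(a, b, skip):
    """BFS connectivity query between a and b over occupied edges, ignoring `skip`.
    This O(volume) search is the bottleneck that a dynamic connectivity
    structure reduces to poly-logarithmic cost."""
    adj = {}
    for e in occupied:
        if e == skip:
            continue
        u, w = e
        adj.setdefault(u, []).append(w)
        adj.setdefault(w, []).append(u)
    seen, queue = {a}, deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def sweeny_sweep():
    """One heat-bath sweep of single-edge updates (Sweeny dynamics)."""
    for e in random.sample(edges, len(edges)):
        p = v / (1.0 + v) if connected_without(e[0], e[1], e) else v / (q + v)
        if random.random() < p:
            occupied.add(e)
        else:
            occupied.discard(e)

random.seed(3)
for sweep in range(200):
    sweeny_sweep()
print(f"edge density after 200 sweeps: {len(occupied) / len(edges):.3f}")
```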

  3. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    International Nuclear Information System (INIS)

    Elçi, Eren Metin; Weigel, Martin

    2014-01-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.

  4. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  5. Monte Carlo model of diagnostic X-ray dosimetry

    International Nuclear Information System (INIS)

    Khrutchinsky, Arkady; Kutsen, Semion; Gatskevich, George

    2008-01-01

    A Monte Carlo simulation of absorbed dose distribution in patient's tissues is often used in a dosimetry assessment of X-ray examinations. The results of such simulations in Belarus are presented in the report, based on an anthropomorphic tissue-equivalent Rando-like physical phantom. The phantom corresponds to an adult 173 cm high and of 73 kg and consists of a torso and a head made of tissue-equivalent plastics which model soft (muscular), bone, and lung tissues. It consists of 39 layers (each 25 mm thick), including 10 head and neck ones, 16 chest and 13 pelvis ones. A tomographic model of the phantom has been developed from its CT-scan images with a voxel size of 0.88 x 0.88 x 4 mm³. A necessary pixelization in a Mathematics-based in-house program was carried out for the phantom to be used in the radiation transport code MCNP-4b. The final voxel size of 14.2 x 14.2 x 8 mm³ was used for the reasonable computer-consuming calculations of absorbed dose in tissues and organs in various diagnostic X-ray examinations. MCNP point detectors allocated through body slices obtained as a result of the pixelization were used to calculate the absorbed dose. X-ray spectra generated by the empirical TASMIP model were verified on the X-ray units MEVASIM and SIREGRAPH CF. Absorbed dose distributions in the phantom volume were determined by the corresponding Monte Carlo simulations with a set of point detectors. Doses in organs of the adult phantom, computed from the absorbed dose distributions by another Mathematics-based in-house program, were estimated for 22 standard organs for various standard X-ray examinations. The results of Monte Carlo simulations were compared with the results of direct measurements of the absorbed dose in the phantom on the X-ray unit SIREGRAPH CF with the calibrated thermo-luminescent dosimeter DTU-01. The measurements were carried out in specified locations of different layers in heart, lungs, liver, pancreas, and stomach at high voltage of

  6. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
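
    The end-use Monte Carlo idea can be sketched as follows: per-fixture parameters are sampled from probability distributions for each household and combined into an annual demand estimate. All distributions and fixture values below are invented placeholders, not the calibrated San Ramon inputs.

```python
import numpy as np

rng = np.random.default_rng(11)

# Monte Carlo sketch of annual indoor water use for a neighborhood of
# single-family homes, built from per-end-use parameter distributions.
n_households = 500

def simulate_annual_use(n):
    occupants = rng.poisson(2.7, n) + 1                        # people per household
    shower_min = rng.normal(7.8, 2.0, n).clip(2, 20)           # minutes per shower
    shower_flow = rng.uniform(6.0, 9.5, n)                     # liters per minute
    showers_per_day = 0.7 * occupants
    toilet_flush_l = rng.choice([4.8, 6.0, 13.0], n, p=[0.3, 0.4, 0.3])
    flushes_per_day = 5.0 * occupants
    washer_l_per_load = rng.choice([50.0, 150.0], n, p=[0.4, 0.6])
    loads_per_week = 0.9 * occupants
    daily_l = (showers_per_day * shower_min * shower_flow
               + flushes_per_day * toilet_flush_l
               + loads_per_week * washer_l_per_load / 7.0)
    return daily_l * 365.0 / 1000.0                            # m^3 per year

use = simulate_annual_use(n_households)
print(f"mean annual indoor use: {use.mean():.1f} m^3/household")
print(f"10th-90th percentile  : {np.percentile(use, 10):.1f} - {np.percentile(use, 90):.1f} m^3")
```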

  7. Monte Carlo Computational Modeling of Atomic Oxygen Interactions

    Science.gov (United States)

    Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.

    2017-01-01

    Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environment durability. Successful modeling requires that the characteristics of the environment such as atomic oxygen energy distribution, flux, and angular distribution be properly represented in the model. Thus whether the atomic oxygen is arriving normal to or inclined to a surface and whether it arrives in a consistent direction or is sweeping across the surface such as in the case of polymeric solar array blankets is important to determine durability. When atomic oxygen impacts a polymer surface it can react removing a certain volume per incident atom (called the erosion yield), recombine, or be ejected as an active oxygen atom to potentially either react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides such as SiO2 to prevent atomic oxygen erosion. Such protective films also have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival and how it reacts with a representative polymer (polyimide Kapton H) and how it reacts at defect sites in an oxide protective coating, such as SiO2 on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place. The computational model is a two dimensional model

  8. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating whether they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of a continuous potential taking into account thermal motion of the lattice atoms and using the Molière screening function. The energy of the particle transverse motion determines whether or n...

  9. A valence force field-Monte Carlo algorithm for quantum dot growth modeling

    DEFF Research Database (Denmark)

    Barettin, Daniele; Kadkhodazadeh, Shima; Pecchia, Alessandro

    2017-01-01

    We present a novel kinetic Monte Carlo version of the atomistic valence force fields algorithm in order to model a self-assembled quantum dot growth process. We show that our atomistic model is both computationally favorable and captures more details compared to traditional kinetic Monte Carlo models...

  10. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

    Science.gov (United States)

    Saini, P. Sri; Prince, Shanthi

    2011-10-01

    At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is being used. However, underwater communication is moving towards optical communication, which has higher bandwidth than acoustic communication but a comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI) of the signal; however, the Inter-Symbol Interference is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and for low-chlorophyll conditions the blue wavelength is absorbed least, whereas for a chlorophyll-rich environment the red wavelength signal is absorbed less compared to the blue and green wavelengths.
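
    A minimal photon-packet sketch of such a channel model is given below: free paths are sampled from the beam attenuation coefficient, interactions are split into absorption and scattering according to the single-scattering albedo, and scattering angles follow a Henyey-Greenstein phase function. The coefficients, asymmetry parameter, link range, and the assumption of an infinite receiver plane are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative water optical properties (per meter); not taken from the paper.
a, b = 0.053, 0.257          # absorption and scattering coefficients
c = a + b                    # beam attenuation coefficient
g = 0.924                    # Henyey-Greenstein asymmetry parameter
link_range = 20.0            # receiver plane at z = link_range (m), infinite extent
n_photons = 50_000

def hg_cos_theta():
    """Sample one scattering-angle cosine from the Henyey-Greenstein phase function."""
    u = rng.uniform()
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u)) ** 2) / (2 * g)

received = 0
for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])        # launched along +z
    while True:
        step = rng.exponential(1.0 / c)          # free path to next interaction
        # Does the photon cross the receiver plane during this step?
        if direction[2] > 0 and pos[2] + step * direction[2] >= link_range:
            received += 1
            break
        pos = pos + step * direction
        if rng.uniform() > b / c:                # interaction is an absorption
            break
        # Scattering: rotate direction by the HG polar angle and a uniform azimuth.
        cos_t = hg_cos_theta()
        sin_t = np.sqrt(max(0.0, 1 - cos_t**2))
        phi = rng.uniform(0, 2 * np.pi)
        w = direction
        u_vec = np.cross(w, [1.0, 0.0, 0.0])
        if np.linalg.norm(u_vec) < 1e-8:
            u_vec = np.cross(w, [0.0, 1.0, 0.0])
        u_vec /= np.linalg.norm(u_vec)
        v_vec = np.cross(w, u_vec)
        direction = cos_t * w + sin_t * (np.cos(phi) * u_vec + np.sin(phi) * v_vec)

print(f"fraction reaching z = {link_range} m : {received / n_photons:.4f}")
print(f"unscattered (Beer-Lambert) fraction  : {np.exp(-c * link_range):.4f}")
```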

  11. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the Expectation-Maximization (EM) algorithm was recommended for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
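
    A rough sketch of MCMC estimation of BMA weights and variance for a two-member Gaussian mixture is given below, using a plain random-walk Metropolis sampler with flat priors on transformed parameters rather than DREAM; the synthetic forecasts and observations are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training data: two ensemble members forecasting an observation.
n, true_w, true_sigma = 300, np.array([0.7, 0.3]), 1.5
truth = rng.normal(10.0, 3.0, n)
forecasts = np.column_stack([truth + rng.normal(0, 2.0, n),
                             truth + rng.normal(1.0, 3.0, n)])
member = rng.choice(2, n, p=true_w)
obs = forecasts[np.arange(n), member] + rng.normal(0, true_sigma, n)

def log_post(params):
    """Log-posterior (flat priors) of the BMA mixture sum_k w_k N(obs; f_k, sigma^2),
    parameterized by an unconstrained logit for w_1 and log sigma."""
    logit_w, log_sigma = params
    w1 = 1.0 / (1.0 + np.exp(-logit_w))
    w = np.array([w1, 1.0 - w1])
    sigma = np.exp(log_sigma)
    comp = -0.5 * ((obs[:, None] - forecasts) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return np.sum(np.log(np.exp(comp) @ w))

# Random-walk Metropolis over (logit w_1, log sigma).
current = np.array([0.0, 0.0])
lp = log_post(current)
samples = []
for it in range(20_000):
    proposal = current + rng.normal(0, 0.1, 2)
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = proposal, lp_prop
    if it >= 5_000:                     # discard burn-in
        samples.append(current.copy())

samples = np.array(samples)
w1 = 1.0 / (1.0 + np.exp(-samples[:, 0]))
sigma = np.exp(samples[:, 1])
print(f"posterior mean weight_1 = {w1.mean():.3f} (+/- {w1.std():.3f})")
print(f"posterior mean sigma    = {sigma.mean():.3f} (+/- {sigma.std():.3f})")
```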

  12. Monte Carlo simulations of lattice models for single polymer systems

    International Nuclear Information System (INIS)

    Hsu, Hsiao-Ping

    2014-01-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
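
    A minimal sketch of plain Rosenbluth sampling of self-avoiding walks on the simple cubic lattice is shown below; it omits the pruning and enrichment steps of the PERM algorithm used in the record and estimates the mean squared end-to-end distance for short chains. Chain lengths and sample counts are illustrative.

```python
import random

# Plain Rosenbluth sampling of self-avoiding walks on the simple cubic lattice.
NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rosenbluth_walk(n_steps):
    """Grow one self-avoiding walk; return (Rosenbluth weight, squared end-to-end
    distance), or (0, None) if the walk gets trapped."""
    pos = (0, 0, 0)
    visited = {pos}
    weight = 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy, pos[2] + dz)
                for dx, dy, dz in NEIGHBORS
                if (pos[0] + dx, pos[1] + dy, pos[2] + dz) not in visited]
        if not free:
            return 0.0, None               # trapped: zero weight
        weight *= len(free)                # Rosenbluth weight factor
        pos = random.choice(free)
        visited.add(pos)
    r2 = pos[0] ** 2 + pos[1] ** 2 + pos[2] ** 2
    return weight, r2

def mean_r2(n_steps, n_samples):
    """Weighted average <R^2> over Rosenbluth-sampled walks."""
    num = den = 0.0
    for _ in range(n_samples):
        w, r2 = rosenbluth_walk(n_steps)
        if w > 0:
            num += w * r2
            den += w
    return num / den

random.seed(2)
for n in (20, 40, 80):
    print(f"N = {n:3d}  <R^2> ~ {mean_r2(n, 5000):.1f}   (good-solvent scaling ~ N^1.18)")
```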

  13. Introduction to the Monte Carlo project and the approach to the validation of probabilistic models of dietary exposure to selected food chemicals

    NARCIS (Netherlands)

    Gibney, M.J.; Voet, van der H.

    2003-01-01

    The Monte Carlo project was established to allow an international collaborative effort to define conceptual models for food chemical and nutrient exposure, to define and validate the software code to govern these models, to provide new or reconstructed databases for validation studies, and to use

  14. Monte Carlo Modeling the UCN τ Magneto-Gravitational Trap

    Science.gov (United States)

    Holley, A. T.; UCNτ Collaboration

    2016-09-01

    The current uncertainty in our knowledge of the free neutron lifetime is dominated by the nearly 4σ discrepancy between complementary "beam" and "bottle" measurement techniques. An incomplete assessment of systematic effects is the most likely explanation for this difference and must be addressed in order to realize the potential of both approaches. The UCNτ collaboration has constructed a large-volume magneto-gravitational trap that eliminates the material interactions which complicated the interpretation of previous bottle experiments. This is accomplished using permanent NdFeB magnets in a bowl-shaped Halbach array to confine polarized UCN from the sides and below and the earth's gravitational field to trap them from above. New in situ detectors that count surviving UCN provide a means of empirically assessing residual systematic effects. The interpretation of that data, and its implication for experimental configurations with enhanced precision, can be bolstered by Monte Carlo models of the current experiment which provide the capability for stable tracking of trapped UCN and detailed modeling of their polarization. Work to develop such models and their comparison with data acquired during our first extensive set of systematics studies will be discussed.

  15. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Reims, N; Sukowski, F; Uhlmann, N

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time-consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour for different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively little effort, a simulation model can be developed that matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light-spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that only a relatively small number of system manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.
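
    A toy cascade-style sketch in the spirit of the second model described above, treating detection as a chain of gain, light-spread and collection stages; the gain, spread and pixel pitch values are assumptions for illustration, not Hamamatsu C9312SK parameters:

```python
# Hedged sketch: each detected X-ray is converted to a Poisson number of optical
# photons which are spread laterally in the scintillator and binned into pixels.
import numpy as np

rng = np.random.default_rng(2)
pixel_pitch_um, n_pixels = 50.0, 201
gain_mean, light_spread_um = 500, 80.0     # optical photons per X-ray, lateral spread (assumed)

image = np.zeros(n_pixels)
for _ in range(2000):                      # X-rays hitting the centre pixel (pencil beam)
    n_optical = rng.poisson(gain_mean)     # conversion gain stage
    x_um = rng.normal(0.0, light_spread_um, n_optical)   # light-spread (scattering) stage
    idx = np.round(x_um / pixel_pitch_um).astype(int) + n_pixels // 2
    idx = idx[(idx >= 0) & (idx < n_pixels)]
    np.add.at(image, idx, 1)               # collection stage: bin photons into pixels

lsf = image / image.sum()                  # crude line-spread function of the cascade
print("FWHM estimate (pixels):", np.count_nonzero(lsf >= lsf.max() / 2))
```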

  16. Monte Carlo and phantom study in the brain edema models

    Directory of Open Access Journals (Sweden)

    Yubing Liu

    2017-05-01

    Because brain edema has a crucial impact on morbidity and mortality, it is important to develop a noninvasive method to monitor the progression of brain edema effectively. When brain edema occurs, the optical properties of the brain change. The goal of this study is to assess the feasibility and reliability of using a noninvasive near-infrared spectroscopy (NIRS) monitoring method to measure brain edema. Specifically, three models, covering water content changes in the cerebrospinal fluid (CSF), gray matter and white matter, were explored. These models were numerically simulated in Monte Carlo studies. Phantom experiments were then performed to investigate the light intensity measured at different detecting radii on the tissue surface. The results indicated that the light intensity correlated well with the state of the brain edema and with the detecting radius. Briefly, at detecting radii of 3.0 cm and 4.0 cm, the light intensity responds strongly to changes in tissue parameters and optical properties. Thus, it is possible to monitor brain edema noninvasively by the NIRS method, and the light intensity is a reliable and simple parameter for assessing brain edema.
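
    A strongly simplified photon Monte Carlo sketch of the idea, tracking photons in a homogeneous semi-infinite medium and binning re-emitted weight by source-detector radius; the optical properties are assumed values and isotropic scattering replaces a realistic phase function and layered head geometry:

```python
# Hedged sketch: photon random walk in a scattering/absorbing half-space,
# scoring detected weight versus distance from the source on the surface.
import numpy as np

rng = np.random.default_rng(3)
mu_a, mu_s = 0.01, 1.0                  # absorption and (reduced) scattering coefficients (1/mm), assumed
mu_t = mu_a + mu_s
bins = np.zeros(60)                     # detected photon weight per 1 mm radius bin

for _ in range(3000):                   # photons launched downward at the origin
    pos = np.array([0.0, 0.0, 0.0]); direction = np.array([0.0, 0.0, 1.0]); w = 1.0
    for _ in range(600):
        pos = pos + direction * rng.exponential(1.0 / mu_t)
        if pos[2] < 0.0:                # photon re-emerges at the surface: score it by radius
            r = np.hypot(pos[0], pos[1])
            if r < len(bins):
                bins[int(r)] += w
            break
        w *= mu_s / mu_t                # partial absorption at each interaction
        if w < 1e-3:                    # crude weight cut-off
            break
        costh = 2.0 * rng.random() - 1.0; phi = 2.0 * np.pi * rng.random()
        sinth = np.sqrt(1.0 - costh ** 2)
        direction = np.array([sinth * np.cos(phi), sinth * np.sin(phi), costh])

print("normalized detected weight in the first 10 radius bins (mm):")
print(np.round(bins[:10] / bins.sum(), 4))
```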

  17. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and of the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and to decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created with MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in the Treatment Plan System (TPS), as well as radiation exposure for the human body in radiation protection. (authors)

  18. Image based Monte Carlo Modeling for Computational Phantom

    Science.gov (United States)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

    The evaluation of the effects of ionizing radiation and of the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and to decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created with MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in the Treatment Plan System (TPS), as well as radiation exposure for the human body in radiation protection.

  19. Mesoscopic kinetic Monte Carlo modeling of organic photovoltaic device characteristics

    Science.gov (United States)

    Kimber, Robin G. E.; Wright, Edward N.; O'Kane, Simon E. J.; Walker, Alison B.; Blakesley, James C.

    2012-12-01

    Measured mobility and current-voltage characteristics of single layer and photovoltaic (PV) devices composed of poly{9,9-dioctylfluorene-co-bis[N,N'-(4-butylphenyl)]bis(N,N'-phenyl-1,4-phenylene)diamine} (PFB) and poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) have been reproduced by a mesoscopic model employing the kinetic Monte Carlo (KMC) approach. Our aim is to show how to avoid the uncertainties common in electrical transport models arising from the need to fit a large number of parameters when little information is available, for example, a single current-voltage curve. Here, simulation parameters are derived from a series of measurements using a self-consistent “building-blocks” approach, starting from data on the simplest systems. We found that site energies show disorder and that correlations in the site energies and a distribution of deep traps must be included in order to reproduce measured charge mobility-field curves at low charge densities in bulk PFB and F8BT. The parameter set from the mobility-field curves reproduces the unipolar current in single layers of PFB and F8BT and allows us to deduce charge injection barriers. Finally, by combining these disorder descriptions and injection barriers with an optical model, the external quantum efficiency and current densities of blend and bilayer organic PV devices can be successfully reproduced across a voltage range encompassing reverse and forward bias, with the recombination rate the only parameter to be fitted, found to be 1×10^7 s^-1. These findings demonstrate an approach that removes some of the arbitrariness present in transport models of organic devices, which validates the KMC as an accurate description of organic optoelectronic systems, and provides information on the microscopic origins of the device behavior.
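
    A minimal kinetic Monte Carlo sketch of the variable-time-step hopping idea, for a single carrier on a disordered one-dimensional chain with Miller-Abrahams rates; the disorder strength and attempt frequency are assumptions for the sketch, not the fitted PFB/F8BT parameters:

```python
# Hedged sketch: Gillespie-type KMC for one charge hopping on a 1D chain with
# Gaussian energetic disorder and Miller-Abrahams hop rates.
import numpy as np

rng = np.random.default_rng(4)
n_sites, kT, nu0 = 200, 0.025, 1e12        # sites, thermal energy (eV), attempt rate (1/s), assumed
energies = rng.normal(0.0, 0.1, n_sites)   # site energies with 0.1 eV disorder (periodic chain)

x, t = 0, 0.0                              # unwrapped position and elapsed time
for _ in range(10000):
    cand = [x - 1, x + 1]                                       # nearest-neighbour target sites
    dE = energies[np.mod(cand, n_sites)] - energies[x % n_sites]
    rates = nu0 * np.exp(-np.clip(dE, 0.0, None) / kT)          # Miller-Abrahams rates
    total = rates.sum()
    t += rng.exponential(1.0 / total)                           # variable time step (Gillespie)
    x = cand[0] if rng.random() < rates[0] / total else cand[1]

print("net displacement (sites):", x, " elapsed time (s):", t)
```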

  20. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    Since the implementation of the Common Fisheries Policy of the European Union in 1983, the management of EU fisheries has been enormously challenging. The abundance of many fish stocks has declined because too much fishing capacity has been utilised on healthy fish stocks. Today, this decline in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species—i.e., so-called mixed fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches—and thus effort—between fishing fleets, while ensuring that the quotas

  1. New software library of geometrical primitives for modelling of solids used in Monte Carlo detector simulations

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present our effort for the creation of a new software library of geometrical primitives, which are used for solid modelling in Monte Carlo detector simulations. We plan to replace and unify current geometrical primitive classes in the CERN software projects Geant4 and ROOT with this library. Each solid is represented by a C++ class with methods suited for measuring distances of particles from the surface of a solid and for determination as to whether the particles are located inside, outside or on the surface of the solid. We use numerical tolerance for determining whether the particles are located on the surface. The class methods also contain basic support for visualization. We use dedicated test suites for validation of the shape codes. These include also special performance and numerical value comparison tests for help with analysis of possible candidates of class methods as well as to verify that our new implementation proposals were designed and implemented properly. Currently, bridge classes are u...

  2. Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G

    2000-01-01

    An accelerator-driven subcritical cascade reactor, composed of a main thermal-neutron reactor constructed analogously to the core of the VVER-1000 reactor and a booster reactor constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting the produced radioactive wastes (the neutron flux density in the thermal zone is PHI^{max}(r,z) = 10^{14} n/(cm^2 s), and the neutron flux in the fast zone is PHI^{max}(r,z) = 2.25·10^{15} n/(cm^2 s) for k_{eff} = 0.98 and a proton accelerator beam current of I = 5.3 mA). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.

  3. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.

  4. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
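
    For context, a plain single-grid Euler-type Monte Carlo sketch of the SABR dynamics that the multiple time step method in these records improves upon; all market parameters below are illustrative assumptions:

```python
# Hedged sketch: naive Euler Monte Carlo for SABR,
#   dF = sigma * F^beta dW1,  dsigma = alpha * sigma dW2,  corr(dW1, dW2) = rho.
import numpy as np

rng = np.random.default_rng(5)
F0, sigma0, alpha, beta, rho = 100.0, 0.3, 0.4, 0.7, -0.5   # assumed SABR parameters
T, K, n_steps, n_paths = 1.0, 100.0, 200, 50000
dt = T / n_steps

F = np.full(n_paths, F0)
sigma = np.full(n_paths, sigma0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
    F = np.maximum(F + sigma * np.maximum(F, 0.0) ** beta * np.sqrt(dt) * z1, 0.0)  # absorb at zero
    sigma = sigma * np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha ** 2 * dt)        # exact lognormal step

payoff = np.maximum(F - K, 0.0)
print("MC forward call value:", round(payoff.mean(), 4),
      "+/-", round(payoff.std() / np.sqrt(n_paths), 4))
```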

  5. Bayesian specification analysis and estimation of simultaneous equation models using Monte Carlo methods

    NARCIS (Netherlands)

    A. Zellner (Arnold); L. Bauwens (Luc); H.K. van Dijk (Herman)

    1988-01-01

    Bayesian procedures for specification analysis or diagnostic checking of modeling assumptions for structural equations of econometric models are developed and applied using Monte Carlo numerical methods. Checks on the validity of identifying restrictions, exogeneity assumptions and other

  6. A Monte Carlo reflectance model for soil surfaces with three-dimensional structure

    Science.gov (United States)

    Cooper, K. D.; Smith, J. A.

    1985-01-01

    A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. Resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding bidirectional canopy reflectance model are also given.

  7. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.

  8. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    D. Lu

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
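
    As a stand-in for the DREAM sampler used in these two records, a single-chain random-walk Metropolis sketch calibrating a toy two-parameter model against synthetic data; DREAM additionally runs multiple interacting chains with differential-evolution proposals:

```python
# Hedged sketch: random-walk Metropolis calibration of y = a * exp(-b * t)
# against noisy synthetic observations (not the DALEC model or DREAM itself).
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 50)
true_theta = np.array([2.0, 0.3])
obs = true_theta[0] * np.exp(-true_theta[1] * t) + rng.normal(0.0, 0.05, t.size)

def log_post(theta, sigma_obs=0.05):
    if np.any(theta <= 0.0):                         # flat prior on positive parameters
        return -np.inf
    resid = obs - theta[0] * np.exp(-theta[1] * t)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

theta = np.array([1.0, 1.0]); lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.02, 2)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                       # discard burn-in
print("posterior means:", chain.mean(axis=0), " posterior std:", chain.std(axis=0))
```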

  9. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well

  10. Sky-Radiance Models for Monte Carlo Radiative Transfer Applications

    Science.gov (United States)

    Santos, I.; Dalimonte, D.; Santos, J. P.

    2012-04-01

    Photon-tracing can be initialized through sky-radiance (Lsky) distribution models when executing Monte Carlo simulations for ocean color studies. To be effective, the Lsky model should: 1) properly represent sky-radiance features of interest; 2) require low computing time; and 3) depend on a limited number of input parameters. The present study verifies the satisfiability of these prerequisites by comparing results from different Lsky formulations. Specifically, two Lsky models were considered as reference cases because of their different approaches among the solutions presented in the literature. The first model, developed by Harrisson and Coombes (HC), is based on a parametric expression where the sun geometry is the unique input. The HC model is one of the analytical sky-radiance distributions applied in state-of-the-art simulations for ocean optics. The coefficients of the HC model were set upon broad-band field measurements and the result is a model that requires a few implementation steps. The second model, implemented by Zibordi and Voss (ZV), is based on physical expressions that account for the optical thickness of permanent gases, aerosol, ozone and water vapour at specific wavelengths. Inter-comparisons between the normalized Lsky^ZV and Lsky^HC distributions (i.e., with unitary scalar irradiance) are discussed by means of individual polar maps and percent differences between sky-radiance distributions. Sky-radiance cross-sections are presented as well. Considered cases include different sun zenith values and wavelengths (i.e., λ=413, 490 and 665 nm, corresponding to selected center-bands of the MEdium Resolution Imaging Spectrometer MERIS). Results have shown a significant convergence between Lsky^HC and Lsky^ZV at 665 nm. Differences between the models increase with the sun zenith and mostly with wavelength. For instance, relative differences up to 50% between Lsky^HC and Lsky^ZV can be observed in the antisolar region for λ=665 nm and θ*=45°. The effects of these

  11. Characterization of infiltration rates from landfills: supporting groundwater modeling efforts.

    Science.gov (United States)

    Moo-Young, Horace; Johnson, Barnes; Johnson, Ann; Carson, David; Lew, Christine; Liu, Salley; Hancocks, Katherine

    2004-01-01

    The purpose of this paper is to review the literature to characterize infiltration rates from landfill liners to support groundwater modeling efforts. The focus of this investigation was on collecting studies that describe the performance of liners 'as installed' or 'as operated'. This document reviews the state of the science and practice on the infiltration rate through compacted clay liner (CCL) for 149 sites and geosynthetic clay liner (GCL) for 1 site. In addition, it reviews the leakage rate through geomembrane (GM) liners and composite liners for 259 sites. For compacted clay liners (CCL), there was limited information on infiltration rates (i.e., only 9 sites reported infiltration rates); thus, it was difficult to develop a national distribution. The field hydraulic conductivities for natural clay liners range from 1 x 10^-9 cm s^-1 to 1 x 10^-4 cm s^-1, with an average of 6.5 x 10^-8 cm s^-1. There was limited information on geosynthetic clay liners. For composite lined and geomembrane systems, the leak detection system flow rates were utilized. The average monthly flow rate for composite liners ranged from 0 to 32 lphd for geomembrane and GCL systems to 0 to 1410 lphd for geomembrane and CCL systems. The increased infiltration for the geomembrane and CCL system may be attributed to consolidation water from the clay.

  12. Numerical Demons in Monte Carlo Estimation of Bayesian Model Evidence with Application to Soil Respiration Models

    Science.gov (United States)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2016-12-01

    Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between the goodness-of-fit and the model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling space. Estimating BME using Monte Carlo numerical methods is preferred, as the methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although few studies alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that the arithmetic precision can become a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating-point number that a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, namely the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, which are the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME by investing in computational effort, we show that arithmetic underflow can
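
    A small sketch of the underflow issue and its standard remedy: the arithmetic-mean evidence estimator collapses to zero when individual likelihoods underflow double precision, whereas a log-sum-exp reduction of log-likelihoods stays finite (the data and model here are synthetic, not the soil respiration models of the record):

```python
# Hedged sketch: prior-sampling arithmetic-mean estimate of model evidence,
# naive versus log-space (log-sum-exp) evaluation.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(7)
n_data, n_prior_samples = 2000, 10000
data = rng.normal(1.0, 1.0, n_data)

theta = rng.normal(0.0, 5.0, n_prior_samples)        # samples of the mean from a wide prior
loglik = np.array([-0.5 * np.sum((data - m) ** 2) - 0.5 * n_data * np.log(2 * np.pi)
                   for m in theta])

naive = np.mean(np.exp(loglik))                       # exp(loglik) underflows to 0.0 for every sample
log_bme = logsumexp(loglik) - np.log(n_prior_samples) # stable log-space arithmetic mean
print("naive arithmetic mean:", naive)                # prints 0.0 (arithmetic underflow)
print("log BME via log-sum-exp:", log_bme)
```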

  13. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1985-01-01

    In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used

  14. Monte Carlo study of superconductivity in the three-band Emery model

    International Nuclear Information System (INIS)

    Frick, M.; Pattnaik, P.C.; Morgenstern, I.; Newns, D.M.; von der Linden, W.

    1990-01-01

    We have examined the three-band Hubbard model for the copper oxide planes in high-temperature superconductors using the projector quantum Monte Carlo method. We find no evidence for s-wave superconductivity.

  15. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.

    2003-01-01

    Fulfillment of the energy-momentum conservation law, as well as the charge, baryon and lepton number conservation is checked for the HIJING Monte Carlo program in $pp$-interactions at $\sqrt{s}=$ 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved, the deviation from zero is at the level of 1--2 GeV/c, and it is connected with the hard jet production. The deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. Azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.

  16. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
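
    A hedged sketch of the Monte Carlo construction of the comparison-error covariance and the associated 95% contour check; the nominal values and uncertainty magnitudes below are invented for illustration, not taken from the paper's example:

```python
# Hedged sketch: sample systematic and random uncertainty sources, accumulate the
# comparison error (prediction minus experiment), and form its covariance matrix.
import numpy as np

rng = np.random.default_rng(8)
n_mc = 20000
pred_nominal = np.array([10.0, 4.0])        # model predictions for two quantities of interest
exp_nominal = np.array([9.6, 4.3])          # measured values

err = np.empty((n_mc, 2))
for i in range(n_mc):
    bias = rng.normal(0.0, [0.3, 0.1])                          # systematic (bias) uncertainty draw
    pred = pred_nominal + bias + rng.normal(0.0, [0.2, 0.05])   # model-side random uncertainty
    exp = exp_nominal + rng.normal(0.0, [0.25, 0.08])           # experimental random uncertainty
    err[i] = pred - exp

cov = np.cov(err, rowvar=False)             # covariance matrix of the comparison error
mean_err = err.mean(axis=0)
# Mahalanobis radius^2 enclosing ~95% of a bivariate normal: chi2(2, 0.95) = 5.991
d2 = np.einsum('ij,jk,ik->i', err - mean_err, np.linalg.inv(cov), err - mean_err)
print("error covariance:\n", cov)
print("fraction inside 95% contour:", np.mean(d2 <= 5.991))
```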

  17. Colloids and Radionuclide Transport: A Field, Experimental and Modeling Effort

    Science.gov (United States)

    Zhao, P.; Zavarin, M.; Sylwester, E. E.; Allen, P. G.; Williams, R. W.; Kersting, A. B.

    2002-05-01

    The contribution of natural inorganic colloids (colloid-facilitated transport) to the transport of low-solubility actinides, such as Pu, is still not well understood. In an effort to better understand the dominant geochemical mechanisms that control Pu transport, we have performed a series of sorption/desorption experiments using mineral colloids. We focused on natural colloidal minerals present in water samples collected from both saturated and vadose zone waters at the Nevada Test Site. These minerals include zeolites, clays, silica, Mn-oxides, Fe-oxides, and calcite. X-ray absorption fine-structure spectroscopy (both XANES and EXAFS) was performed in order to characterize the speciation of sorbed plutonium. The XANES spectra show that only Pu(IV) was detected (within experimental error) on these mineral surfaces when the starting Pu oxidation state was +5, indicating that Pu(V) was reduced to Pu(IV) during sorption. The EXAFS detected Pu-M and Pu-C interactions (where M=Fe, Mn, or Si) indicating Pu(IV) surface complexation along with carbonate ternary complex formation on most of the minerals tested. Although the plutonium sorption as Pu(IV) species is mineral independent, the actual sorption paths are different for different minerals. The sorption rates were compared to the rates of plutonium disproportionation under similar conditions. The batch sorption/desorption experiments of Pu(IV) and Pu(V) onto colloidal zeolite (clinoptilolite, colloid particle size 171 ± 25 nm) were conducted in synthetic groundwater (similar to J-13, Yucca Mountain standard) with a pH range from 4 to 10 and an initial plutonium concentration of 10^-9 M. The results show that Pu(IV) sorption takes place within an hour, while the rate of Pu(V) sorption onto the colloids is much slower and mineral dependent. The kinetic results from the batch sorption/desorption experiments, coupled with redox kinetics of plutonium in solution will be used in geochemical modeling of Pu surface complexation to colloids and

  18. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    Science.gov (United States)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221 654 626(5) and the critical exponent of the correlation length ν = 0.629 912(86) with precision that exceeds all previous Monte Carlo estimates.
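
    A small-lattice sketch of the Wolff cluster update for the simple cubic Ising model; the record's simulations use the same algorithm at far larger sizes together with careful random number generators, histogram reweighting and quadruple-precision analysis:

```python
# Hedged sketch: Wolff single-cluster updates on an 8x8x8 simple cubic Ising lattice.
import numpy as np

rng = np.random.default_rng(9)
L, K = 8, 0.221654626                       # lattice size and coupling K = J/kT near criticality
spins = rng.choice([-1, 1], size=(L, L, L))
p_add = 1.0 - np.exp(-2.0 * K)              # Wolff bond-activation probability

def wolff_step(s):
    seed = tuple(rng.integers(0, L, 3))
    cluster, stack, s0 = {seed}, [seed], s[seed]
    while stack:
        x, y, z = stack.pop()
        for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = ((x + d[0]) % L, (y + d[1]) % L, (z + d[2]) % L)
            if nb not in cluster and s[nb] == s0 and rng.random() < p_add:
                cluster.add(nb); stack.append(nb)
    for site in cluster:                     # flip the whole cluster at once
        s[site] = -s0

mags = []
for sweep in range(2000):
    wolff_step(spins)
    if sweep > 500:                          # crude equilibration cut
        mags.append(abs(spins.mean()))
print("<|m|> at K =", K, ":", np.mean(mags))
```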

  19. Progress and applications of MCAM. Monte Carlo automatic modeling program for particle transport simulation

    International Nuclear Information System (INIS)

    Wang Guozhong; Zhang Junjun; Xiong Jian

    2010-01-01

    MCAM (Monte Carlo Automatic Modeling program for particle transport simulation) was developed by FDS Team as a CAD based bi-directional interface program between general CAD systems and Monte Carlo particle transport simulation codes. The physics and material modeling and void space modeling functions were improved and the free form surfaces processing function was developed recently. The applications to the ITER (International Thermonuclear Experimental Reactor) building model and FFHR (Force Free Helical Reactor) model have demonstrated the feasibility, effectiveness and maturity of MCAM latest version for nuclear applications with complex geometry. (author)

  20. A Monte Carlo model of complex spectra of opacity calculations

    International Nuclear Information System (INIS)

    Klapisch, M.; Duffy, P.; Goldstein, W.H.

    1991-01-01

    We are developing a Monte Carlo method for calculating opacities of complex spectra. It should be faster than atomic structure codes and is more accurate than the UTA method. We use the idea that wavelength-averaged opacities depend on the overall properties, but not the details, of the spectrum; our spectra have the same statistical properties as real ones but the strength and energy of each line is random. In preliminary tests we can get Rosseland mean opacities within 20% of actual values. (orig.)

  1. Optical Monte Carlo modeling of a true portwine stain anatomy

    Science.gov (United States)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  2. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become common-place. Although results frequently published in the literature report calculated k_eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k_eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k_eff values for individual generations in the computer simulation, not the standard deviation of the computed k_eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that all the k_eff values from the separate generations are not statistically independent since the k_eff of a given generation is a function of the k_eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k_eff are needed.

  3. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Science.gov (United States)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  4. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more on taste heterogeneity is a major study objective.

  5. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more on taste heterogeneity is a major study objective.

  6. Hybrid discrete choice models: Gained insights versus increasing effort.

    Science.gov (United States)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan

    2015-01-01

    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte-Carlo simulation. It describes various gating mechanisms based on the physical differences between the unscattered and scattered photons and method for microscopic image reconstruction, using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy of tissue turbid media.

  8. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  9. Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation

    International Nuclear Information System (INIS)

    Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M

    2004-01-01

    The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4 based Monte Carlo user code BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams

  10. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  11. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half-spaces. • The method was tested with the ITER model, which validated its correctness and efficiency. • The method was integrated in SuperMC and can produce models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD modeling tools, an automatic modeling method for accurate and prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between a CAD model and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model was decomposed into several convex solid sets, and then the corresponding primitive solids were generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids were created and the related operations were performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of this method were demonstrated.

  12. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.

  13. Monte Carlo modeling for realizing optimized management of failed fuel replacement

    International Nuclear Information System (INIS)

    Morishita, Kazunori; Yamamoto, Yasunori; Nakasuji, Toshiki

    2014-01-01

    Fuel cladding is one of the key components in a fission reactor to keep confining radioactive materials inside a fuel tube. During reactor operation, the cladding is, however, sometimes breached and radioactive materials leak from the fuel ceramic pellet into the coolant water through the breach. The primary coolant water is therefore monitored so that any leak is quickly detected: the coolant water is periodically sampled and the concentration of, for example, radioactive iodine-131 (I-131) is measured. Depending on the measured concentration, the faulty fuel assembly with the leaking rod is removed from the reactor and replaced by a new one immediately or at the next refueling. In the present study, an effort has been made to develop a methodology to optimize the management of failed-fuel replacement due to cladding failures using the I-131 concentration measured in the sampled coolant water. A model numerical equation is proposed to describe the time evolution of the I-131 concentration due to fuel leaks, and is then solved using the Monte-Carlo method as a function of sampling rate. Our results have indicated that, in order to achieve a rationalized management of failed fuels, a higher resolution for detecting small amounts of I-131 is not necessarily required, but more frequent sampling is favorable. (author)

  14. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-05

    A new Reverse Monte Carlo (RMC) package "fullrmc" for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software that is thoroughly documented, supports complex molecules, is written in a modern programming language (Python, Cython, C and C++ when performance is needed) and complies with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, with almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.

  15. Monte Carlo simulation of diblock copolymer microphases by means of a 'fast' off-lattice model

    DEFF Research Database (Denmark)

    Besold, Gerhard; Hassager, O.; Mouritsen, Ole G.

    1999-01-01

    We present a mesoscopic off-lattice model for the simulation of diblock copolymer melts by Monte Carlo techniques. A single copolymer molecule is modeled as a discrete Edwards chain consisting of two blocks with vertices of type A and B, respectively. The volume interaction is formulated in terms...

  16. A model for Monte Carlo simulation of low angle photon scattering in biological tissues

    CERN Document Server

    Tartari, A; Bonifazzi, C

    2001-01-01

    In order to include the molecular interference effect, a simple procedure is proposed and demonstrated to be able to update the usual cross section database for photon coherent scattering modelling in Monte Carlo codes. This effect was evaluated by measurement of coherent scattering distributions and by means of a model based on four basic materials composing biological tissues.

  17. Monte Carlo Simulations of Compressible Ising Models: Do We Understand Them?

    Science.gov (United States)

    Landau, D. P.; Dünweg, B.; Laradji, M.; Tavazza, F.; Adler, J.; Cannavaccioulo, L.; Zhu, X.

    Extensive Monte Carlo simulations have begun to shed light on our understanding of phase transitions and universality classes for compressible Ising models. A comprehensive analysis of a Landau-Ginsburg-Wilson hamiltonian for systems with elastic degrees of freedom resulted in the prediction that there should be four distinct cases that would have different behavior, depending upon symmetries and thermodynamic constraints. We shall provide an account of the results of careful Monte Carlo simulations for a simple compressible Ising model that can be suitably modified so as to replicate all four cases.

  18. Markov chain Monte Carlo methods for state-space models with point process observations.

    Science.gov (United States)

    Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

    2012-06-01

    This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.

  19. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL]; McManamay, Ryan A [ORNL]; Nagle, Nicholas N [ORNL]; Piburn, Jesse O [ORNL]; Stewart, Robert N [ORNL]; Surendran Nair, Sujithkumar [ORNL]

    2016-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high resolution spatially explicit estimates for energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification. Acknowledgment: This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  20. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly-generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed by machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machine and logistic regression are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99%, and are 1–2 orders of magnitude faster than MCS.
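
    A minimal sketch of the surrogate idea, assuming scikit-learn and NetworkX are available: an exact source-terminal connectivity check labels a modest training set of random failure realizations, an SVM learns the mapping, and the trained classifier then screens a much larger Monte Carlo sample. The toy grid graph, edge survival probability and sample sizes are assumptions for illustration, not the network or data used in the article.

        import numpy as np
        import networkx as nx
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        G = nx.grid_2d_graph(5, 5)                 # toy 5x5 grid as the infrastructure graph
        edges = list(G.edges)
        source, terminal = (0, 0), (4, 4)
        P_EDGE_UP = 0.7                            # assumed per-edge survival probability

        def connected(state):
            """Exact source-terminal connectivity check for one failure realization."""
            H = nx.Graph()
            H.add_nodes_from(G.nodes)
            H.add_edges_from(e for e, up in zip(edges, state) if up)
            return nx.has_path(H, source, terminal)

        # Label a modest training set with the exact (slow) connectivity check
        X_train = rng.random((3000, len(edges))) < P_EDGE_UP
        y_train = np.array([connected(s) for s in X_train])
        clf = SVC(kernel="rbf").fit(X_train, y_train)
        print("surrogate accuracy on training data:", clf.score(X_train, y_train))

        # Use the trained surrogate to screen a much larger Monte Carlo sample
        X_mc = rng.random((50000, len(edges))) < P_EDGE_UP
        p_disconnect = 1.0 - clf.predict(X_mc).mean()
        print("estimated disconnection probability:", round(p_disconnect, 4))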

  1. Nuclear Hybrid Energy Systems FY16 Modeling Efforts at ORNL

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Guler Yigitoglu, Askin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-01

    A nuclear hybrid system uses a nuclear reactor as the basic power generation unit. The power generated by the nuclear reactor is utilized by one or more power customers as either thermal power, electrical power, or both. In general, a nuclear hybrid system will couple the nuclear reactor to at least one thermal power user in addition to the power conversion system. The definition and architecture of a particular nuclear hybrid system is flexible depending on local market needs and opportunities. For example, locations in need of potable water may be best served by coupling a desalination plant to the nuclear system. Similarly, an area near oil refineries may have a need for emission-free hydrogen production. A nuclear hybrid system expands the nuclear power plant from its more familiar central power station role by diversifying its immediately and directly connected customer base. The definition, design, analysis, and optimization work currently performed with respect to nuclear hybrid systems represents the work of three national laboratories. Idaho National Laboratory (INL) is the lead lab, working with Argonne National Laboratory (ANL) and Oak Ridge National Laboratory (ORNL). Each laboratory is providing modeling and simulation expertise for the integration of the hybrid system.

  2. Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center of mass energy of √(s)=13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach to these data which is complementary to dedicated analyses: by taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two-step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.

  3. Quasi-Monte Carlo methods: applications to modeling of light transport in tissue

    Science.gov (United States)

    Schafer, Steven A.

    1996-05-01

    Monte Carlo modeling of light propagation can accurately predict the distribution of light in scattering materials. A drawback of Monte Carlo methods is that they converge inversely with the square root of the number of iterations. Theoretical considerations suggest that convergence which scales inversely with the first power of the number of iterations is possible. We have previously shown that one can obtain at least a portion of that improvement by using van der Corput sequences in place of a conventional pseudo-random number generator. Here, we present our further analysis, and show that quasi-Monte Carlo methods do have limited applicability to light scattering problems. We also discuss potential improvements which may increase the applicability.
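
    A minimal illustration of the substitution mentioned above: the radical-inverse (van der Corput) sequence replaces the pseudo-random stream, here in a toy two-dimensional integration rather than a photon transport code. The pairing of bases 2 and 3 (a small Halton set) and the π-estimation example are illustrative choices, not the authors' simulation setup.

        import math
        import random

        def van_der_corput(n, base=2):
            """n-th element of the van der Corput low-discrepancy sequence in the given base."""
            q, denom = 0.0, 1.0
            while n > 0:
                n, remainder = divmod(n, base)
                denom *= base
                q += remainder / denom
            return q

        def estimate_pi(n_samples, quasi=True):
            """Quarter-circle area integration with quasi- or pseudo-random points."""
            inside = 0
            for i in range(1, n_samples + 1):
                if quasi:
                    x, y = van_der_corput(i, 2), van_der_corput(i, 3)  # 2-D Halton pair
                else:
                    x, y = random.random(), random.random()
                inside += x * x + y * y <= 1.0
            return 4.0 * inside / n_samples

        if __name__ == "__main__":
            for n in (1000, 10000, 100000):
                err_q = abs(estimate_pi(n, quasi=True) - math.pi)
                err_p = abs(estimate_pi(n, quasi=False) - math.pi)
                print(f"n={n:6d}  quasi error={err_q:.5f}  pseudo error={err_p:.5f}")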

  4. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  5. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2011-01-01

    already exist for the study of low energy supersymmetry and the MSSM in particular, this workshop will instead focus on tools for alternative TeV-scale physics models. The main goals of the workshop are: To survey what is available. To provide feedback on user experiences with Monte Carlo tools for BSM...

  6. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction...

  7. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior

  8. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  9. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanner available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
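
    To give a flavor of the spectrum-derivation step, the sketch below uses SciPy's Levenberg-Marquardt solver to recover the relative weights of a few hypothetical mono-energetic depth-dose basis curves from a synthetic PDD. The exponential basis shapes, energies and noise level are invented placeholders, not the clinical measurement data or the source-model parameterization of the study.

        import numpy as np
        from scipy.optimize import least_squares

        # Hypothetical mono-energetic depth-dose basis curves D_E(z) (exponential stand-ins)
        depths = np.linspace(0.0, 20.0, 40)                  # depth in water [cm]
        energies_mev = np.array([0.03, 0.06, 0.09, 0.12])
        mu_eff = 1.0 / (2.0 + 20.0 * energies_mev)           # made-up effective attenuation [1/cm]
        basis = np.exp(-np.outer(mu_eff, depths))            # shape (n_energies, n_depths)

        # Synthetic "measured" PDD built from known weights plus a little noise
        true_w = np.array([0.10, 0.40, 0.35, 0.15])
        pdd_meas = true_w @ basis + np.random.default_rng(0).normal(0.0, 0.002, depths.size)

        def residuals(w):
            return w @ basis - pdd_meas

        fit = least_squares(residuals, x0=np.full(4, 0.25), method="lm")  # Levenberg-Marquardt
        weights = fit.x / fit.x.sum()
        print("recovered spectrum weights:", np.round(weights, 3))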

  10. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
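
    The decorator idea described above translates naturally into code: each geometry component only has to supply random positions, and decorators wrap any component to shift it or add clumps without touching its internals. The sketch below is a generic Python illustration of that design pattern, not SKIRT's actual C++ class hierarchy; all class names and parameters are invented.

        import random

        class Geometry:
            """Base interface: a component only needs to return a random position."""
            def random_position(self):
                raise NotImplementedError

        class UniformSphere(Geometry):
            def __init__(self, radius):
                self.radius = radius
            def random_position(self):
                while True:  # rejection sampling inside the sphere
                    p = [random.uniform(-self.radius, self.radius) for _ in range(3)]
                    if sum(c * c for c in p) <= self.radius ** 2:
                        return p

        class OffsetDecorator(Geometry):
            """Decorator: shifts any wrapped geometry without touching its code."""
            def __init__(self, inner, offset):
                self.inner, self.offset = inner, offset
            def random_position(self):
                return [c + o for c, o in zip(self.inner.random_position(), self.offset)]

        class ClumpyDecorator(Geometry):
            """Decorator: relocates a fraction of positions into small random clumps."""
            def __init__(self, inner, fraction=0.3, clump_radius=0.1, n_clumps=5):
                self.inner = inner
                self.fraction = fraction
                self.clump_radius = clump_radius
                self.clumps = [inner.random_position() for _ in range(n_clumps)]
            def random_position(self):
                if random.random() < self.fraction:
                    centre = random.choice(self.clumps)
                    return [c + random.gauss(0.0, self.clump_radius) for c in centre]
                return self.inner.random_position()

        # Decorators chain freely: a clumpy, shifted sphere built from simple blocks
        model = ClumpyDecorator(OffsetDecorator(UniformSphere(1.0), [2.0, 0.0, 0.0]))
        positions = [model.random_position() for _ in range(10000)]
        print("mean x =", round(sum(p[0] for p in positions) / len(positions), 2))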

  11. Forecasting with nonlinear time series model: a Monte-Carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    ...erated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be done easily, as in the linear case, but a forecast for a step greater than or equal to ... London. Franses, P. H. (1998). Time series models for business and economic forecasting, Cambridge University Press.

  12. Perturbation analysis for Monte Carlo continuous cross section models

    International Nuclear Information System (INIS)

    Kennedy, Chris B.; Abdel-Khalik, Hany S.

    2011-01-01

    Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool to complete Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic as opposed to stochastic models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in the model's responses, e.g. detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out the PA analysis by reducing problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results which propagate to the subsequent UQ and DA analyses. The introduced errors are problem specific and are therefore expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method to complete PA employing a continuous cross section stochastic model, performed in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing for wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)

  13. Adaptable three-dimensional Monte Carlo modeling of imaged blood vessels in skin

    Science.gov (United States)

    Pfefer, T. Joshua; Barton, Jennifer K.; Chan, Eric K.; Ducros, Mathieu G.; Sorg, Brian S.; Milner, Thomas E.; Nelson, J. Stuart; Welch, Ashley J.

    1997-06-01

    In order to reach a higher level of accuracy in simulation of port wine stain treatment, we propose to discard the typical layered geometry and cylindrical blood vessel assumptions made in optical models and use imaging techniques to define actual tissue geometry. Two main additions to the typical 3D, weighted photon, variable step size Monte Carlo routine were necessary to achieve this goal. First, optical low coherence reflectometry (OLCR) images of rat skin were used to specify a 3D material array, with each entry assigned a label to represent the type of tissue in that particular voxel. Second, the Monte Carlo algorithm was altered so that when a photon crosses into a new voxel, the remaining path length is recalculated using the new optical properties, as specified by the material array. The model has shown good agreement with data from the literature. Monte Carlo simulations using OLCR images of asymmetrically curved blood vessels show various effects such as shading, scattering-induced peaks at vessel surfaces, and directionality-induced gradients in energy deposition. In conclusion, this augmentation of the Monte Carlo method can accurately simulate light transport for a wide variety of nonhomogeneous tissue geometries.
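
    The key modification described above, recomputing the remaining path length whenever a photon enters a voxel with different optical properties, can be illustrated in one dimension: a sampled optical depth is spent voxel by voxel along the photon direction, so the free path automatically adapts to the local tissue label. The voxel labels, sizes and attenuation coefficients below are invented placeholders, and the sketch ignores scattering direction changes, photon weights and full 3D geometry.

        import math
        import random

        # Hypothetical voxel labels along the photon axis and their total attenuation [1/mm]
        MU_T = {"epidermis": 25.0, "dermis": 20.0, "blood": 60.0}
        voxels = ["epidermis"] * 2 + ["dermis"] * 5 + ["blood"] * 3 + ["dermis"] * 10
        DX = 0.05   # voxel thickness [mm]

        def interaction_depth(rng=random):
            """Depth of first interaction: the sampled optical depth is spent voxel by voxel,
            so the remaining path length is effectively recalculated at every boundary."""
            tau = -math.log(rng.random())          # sampled optical depth
            depth = 0.0
            for label in voxels:
                mu = MU_T[label]
                if tau > mu * DX:                  # photon crosses the whole voxel
                    tau -= mu * DX
                    depth += DX
                else:                              # interaction inside this voxel
                    return depth + tau / mu
            return float("inf")                    # photon escapes the voxel array

        if __name__ == "__main__":
            depths = [interaction_depth() for _ in range(50000)]
            finite = [d for d in depths if d != float("inf")]
            print("mean interaction depth [mm]:", round(sum(finite) / len(finite), 4))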

  14. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4®, on the ITER A-lite model, was carried out; the neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4® is shown; the discrepancies are mainly within the statistical error.

  15. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    Science.gov (United States)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved, and should thus improve the reliability of predicting the in-space durability of materials based on ground laboratory testing. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between the observed LDEF results and the Monte Carlo modeling predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  16. Evaluating the equation-of-state models of nitrogen in the dissociation regime: an experimental effort

    Science.gov (United States)

    Li, Jiangtao; Chen, Qifeng; Fu, Zhijian; Gu, Yunjun; Zheng, Jun; Li, Chengjun

    2017-06-01

    A number of experiments were designed in which pre-compressed nitrogen (20 MPa) was shock-compressed reverberatively into a regime where molecular dissociation is expected to influence significantly the equation-of-state and transport properties. The equation of state of nitrogen after each compression process was probed jointly by a multichannel optical pyrometer (MCOP) and a Doppler pin system (DPS). The equation-of-state data thereby obtained span a pressure-density range of about 0.02-130 GPa and 0.22-5.9 g/cc. Furthermore, based on the uncertainties of the measurements, a Monte Carlo method was employed to evaluate the probability distribution of the thermodynamic state after each compression. From the Monte Carlo results, a number of equation-of-state models and calculations for nitrogen in the dissociation regime were assessed.
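
    The Monte Carlo evaluation of the state probability distribution can be illustrated with a much simpler single-shock example: measured velocities are drawn from normal distributions reflecting their uncertainties and pushed through the Rankine-Hugoniot jump relations. The numbers below are placeholders, and the actual study uses reverberating multiple-shock compression, so this is only a sketch of the propagation idea, not the paper's analysis.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100000

        # Hypothetical measured quantities (mean, 1-sigma), not the paper's data
        rho0 = 0.25                                    # initial density [g/cc], pre-compressed gas
        Us = rng.normal(12.0, 0.3, N)                  # shock velocity [km/s]
        up = rng.normal(9.0, 0.25, N)                  # particle velocity [km/s]

        # Single-shock Rankine-Hugoniot jump relations (1 g/cc * (km/s)^2 = 1 GPa)
        P = rho0 * Us * up                             # pressure [GPa]
        rho = rho0 * Us / (Us - up)                    # compressed density [g/cc]

        print(f"P   = {P.mean():.1f} +/- {P.std():.1f} GPa")
        print(f"rho = {rho.mean():.2f} +/- {rho.std():.2f} g/cc")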

  17. Absorbed dose in fibrotic microenvironment models employing Monte Carlo simulation

    International Nuclear Information System (INIS)

    Zambrano Ramírez, O.D.; Rojas Calderón, E.L.; Azorín Vega, E.P.; Ferro Flores, G.; Martínez Caballero, E.

    2015-01-01

    The presence or absence of fibrosis and, even more, the multimeric and multivalent nature of the radiopharmaceutical have recently been reported to affect the radiation absorbed dose in tumor microenvironment models. Fibroblast and myofibroblast cells produce the extracellular matrix by the secretion of proteins which provide structural and biochemical support to cells. The reactive and reparative mechanisms triggered during the inflammatory process cause the production and deposition of extracellular matrix proteins, and the abnormal excessive growth of the connective tissue leads to fibrosis. In this work, microenvironment models (either fibrotic or not fibrotic) composed of seven spheres representing cancer cells of 10 μm in diameter, each with a 5 μm diameter inner sphere (cell nucleus), were created in two distinct radiation transport codes (PENELOPE and MCNP). The purpose of creating these models was to determine the radiation absorbed dose in the nucleus of cancer cells, based on previously reported radiopharmaceutical retention percentages (by HeLa cells) of the 177Lu-Tyr3-octreotate (monomeric) and 177Lu-Tyr3-octreotate-AuNP (multimeric) radiopharmaceuticals. A comparison of the results from PENELOPE and MCNP was made, and good agreement between the codes was found. The percent difference between the increase percentages of the absorbed dose in the not fibrotic model with respect to the fibrotic model for the PENELOPE and MCNP codes was found to be under 1% for both radiopharmaceuticals. (authors)

  18. Monte Carlo model of light transport in scintillating fibers and large scintillators

    International Nuclear Information System (INIS)

    Chakarova, R.

    1995-01-01

    A Monte Carlo model is developed which simulates the light transport in a scintillator surrounded by a transparent layer with different surface properties. The model is applied to analyse the light collection properties of scintillating fibers and a large scintillator wrapped in aluminium foil. The influence of the fiber interface characteristics on the light yield is investigated in detail. Light output results as well as time distributions are obtained for the large scintillator case. 15 refs, 16 figs

  19. Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model

    OpenAIRE

    Dong, Bei; Zhang, Ling; Lu, Xuan

    2008-01-01

    At the end of 2005, the State Council of China passed "The Decision on adjusting the Individual Account of Basic Pension System", which adjusted the individual account in the 1997 basic pension system. In this essay, we analyze the adjustment above, and use life annuity actuarial theory to establish the basic pension substitution rate model. Monte Carlo simulation is also used to prove the rationality of the model. Some suggestions are put forward associated with the substitution rate ac...

  20. Efficient Markov Chain Monte Carlo Sampling for Hierarchical Hidden Markov Models

    OpenAIRE

    Turek, Daniel; de Valpine, Perry; Paciorek, Christopher J.

    2016-01-01

    Traditional Markov chain Monte Carlo (MCMC) sampling of hidden Markov models (HMMs) involves latent states underlying an imperfect observation process, and generates posterior samples for top-level parameters concurrently with nuisance latent variables. When potentially many HMMs are embedded within a hierarchical model, this can result in prohibitively long MCMC runtimes. We study combinations of existing methods, which are shown to vastly improve computational efficiency for these hierarchi...

  1. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    The last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for

  2. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell

    NARCIS (Netherlands)

    Spada, F.M.; Krol, M.C.; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth’s radius, and can

  3. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    NARCIS (Netherlands)

    Spada, F.; Krol, M.C.; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can

  4. Dispersion of radionuclides released into a stable planetary boundary layer using a Monte Carlo model

    International Nuclear Information System (INIS)

    Basit, Abdul; Raza, S Shoaib; Irfan, Naseem

    2006-01-01

    In this paper a Monte Carlo model for describing the atmospheric dispersion of radionuclides (represented by Lagrangian particles/neutral tracers) continuously released into a stable planetary boundary layer is presented. The effect of variation in release height and wind directional shear on plume dispersion is studied. The resultant plume concentration and dose rate at the ground are also calculated. The turbulent atmospheric parameters, such as the vertical profiles of the fluctuating wind velocity components and the eddy lifetime, were calculated using empirical relations for a stable atmosphere. The horizontal and vertical dispersion coefficients calculated by the numerical Lagrangian model are compared with the original and modified Pasquill-Gifford and Briggs empirical σ values. The comparison shows that the Monte Carlo model can successfully predict dispersion in a stable atmosphere using the empirical turbulent parameters. The predicted ground concentration and dose rate contours indicate a significant increase in the affected area when wind shear is accounted for in the calculations.
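
    A stripped-down version of such a Lagrangian particle model is sketched below: each particle's turbulent vertical velocity follows a discrete Langevin (Ornstein-Uhlenbeck) update with a fixed velocity variance and Lagrangian time scale, and particles reflect at the ground. The height-dependent profiles, wind shear and dose-rate calculations of the paper are omitted, and all parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative stable-boundary-layer parameters (not the paper's profiles)
        N, DT, STEPS = 5000, 1.0, 3600            # particles, time step [s], duration [steps]
        U_MEAN = 3.0                              # mean wind along x [m/s]
        SIGMA_W = 0.2                             # vertical velocity standard deviation [m/s]
        TAU_L = 50.0                              # Lagrangian time scale [s]
        H_RELEASE = 50.0                          # release height [m]

        z = np.full(N, H_RELEASE)
        x = np.zeros(N)
        w = rng.normal(0.0, SIGMA_W, N)           # initial turbulent vertical velocities

        for _ in range(STEPS):
            # Langevin update: memory term plus random forcing keeps sigma_w stationary
            a = np.exp(-DT / TAU_L)
            w = a * w + np.sqrt(1.0 - a * a) * rng.normal(0.0, SIGMA_W, N)
            z = np.abs(z + w * DT)                # perfect reflection at the ground
            x += U_MEAN * DT

        print("plume centre height [m]:", round(z.mean(), 1))
        print("vertical spread sigma_z [m]:", round(z.std(), 1))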

  5. New model for mines and transportation tunnels external dose calculation using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Allam, Kh. A.

    2017-01-01

    In this work, a new methodology based on Monte Carlo simulation is developed for calculating the external dose in tunnels and mines. The tunnel is modeled as a cylindrical shape of finite thickness, with an entrance and with or without an exit. A photon transport model was applied for the exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, and with the building material density and composition, was studied. The new model is more flexible for calculating the real external dose in any cylindrical tunnel structure. (authors)

  6. Finite element model updating using the shadow hybrid Monte Carlo technique

    Science.gov (United States)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method is tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and is compared to the application of the HMC algorithm on the same structures.
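
    For orientation, the sketch below implements the plain Hybrid (Hamiltonian) Monte Carlo step that SHMC modifies: momenta are refreshed, a leapfrog trajectory is integrated using the gradient of the negative log-posterior, and the endpoint is accepted with a Metropolis test on the change in the Hamiltonian. The toy Gaussian target, step size and trajectory length are arbitrary choices, and the shadow-Hamiltonian correction of SHMC itself is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        def neg_log_post(q):            # toy posterior: standard 2-D Gaussian
            return 0.5 * np.dot(q, q)

        def grad(q):                    # gradient of the negative log-posterior
            return q

        def hmc_step(q, eps=0.1, n_leap=20):
            """One Hybrid Monte Carlo step: leapfrog dynamics plus Metropolis accept/reject."""
            p = rng.normal(size=q.shape)                 # fresh momenta
            H_old = neg_log_post(q) + 0.5 * np.dot(p, p)
            q_new, p_new = q.copy(), p.copy()
            p_new -= 0.5 * eps * grad(q_new)             # initial half kick
            for _ in range(n_leap - 1):
                q_new += eps * p_new                     # drift
                p_new -= eps * grad(q_new)               # kick
            q_new += eps * p_new
            p_new -= 0.5 * eps * grad(q_new)             # final half kick
            H_new = neg_log_post(q_new) + 0.5 * np.dot(p_new, p_new)
            if rng.random() < np.exp(H_old - H_new):     # Metropolis rule on Hamiltonian error
                return q_new, True
            return q, False

        q, accepted, samples = np.zeros(2), 0, []
        for _ in range(5000):
            q, ok = hmc_step(q)
            accepted += ok
            samples.append(q)
        print("acceptance rate:", accepted / 5000)
        print("sample variance per dimension:", np.var(samples, axis=0).round(2))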

  7. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    Science.gov (United States)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
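
    The essence of MLMC is a telescoping sum: a cheap coarse-timestep estimate plus corrections computed from pairs of fine and coarse paths driven by the same Brownian increments, with fewer samples needed on the expensive fine levels. The sketch below applies this to a toy Ornstein-Uhlenbeck SDE rather than an atmospheric dispersion model; the level count, sample sizes and quantity of interest are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)

        def paths(n, dt, steps, dw=None):
            """Euler paths of the toy SDE dX = -X dt + dW, returning X at the final time."""
            if dw is None:
                dw = rng.normal(0.0, np.sqrt(dt), size=(steps, n))
            x = np.zeros(n)
            for k in range(steps):
                x += -x * dt + dw[k]
            return x, dw

        def mlmc_estimate(levels=4, n0=20000, T=1.0):
            """Multilevel estimator of E[X_T^2]: coarsest estimate plus level corrections."""
            total = 0.0
            for level in range(levels):
                n = max(n0 >> level, 1000)            # fewer samples on finer (costlier) levels
                steps_f = 2 ** (level + 1)
                dt_f = T / steps_f
                xf, dw_f = paths(n, dt_f, steps_f)
                if level == 0:
                    total += np.mean(xf ** 2)
                else:
                    # coarse path driven by the *same* Brownian increments, pairwise summed
                    dw_c = dw_f[0::2] + dw_f[1::2]
                    xc, _ = paths(n, 2.0 * dt_f, steps_f // 2, dw_c)
                    total += np.mean(xf ** 2 - xc ** 2)
            return total

        print("MLMC estimate of E[X_T^2]:", round(mlmc_estimate(), 3))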

  8. Monte Carlo simulations of a model for opinion formation

    Science.gov (United States)

    Bordogna, C. M.; Albano, E. V.

    2007-04-01

    A model for opinion formation based on the Theory of Social Impact is presented and studied by means of numerical simulations. Individuals with two states of opinion are impacted due to social interactions with: i) members of the society, ii) a strong leader with a well-defined opinion and iii) the mass media that could either support or compete with the leader. Due to that competition, the average opinion of the social group exhibits phase-transition like behaviour between different states of opinion.

  9. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    have primarily been based on a Bayesian paradigm, i.e. prior information on the parameters is a prerequisite, but questions about undesirable side effects from the priors are raised. We present a method, based on MCMC methods, that approximates profile log-likelihood functions in directed graphical ... a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both...

  10. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
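
    A heavily simplified sketch of such a reliability calculation is given below: each Monte Carlo run draws a week of hourly wind output, random thermal plant outages and a possible gas supply interruption, and loss-of-load probability (LOLP) and expected energy not served are accumulated from the hourly shortfalls. The capacities, availabilities and demand profile are invented round numbers, all thermal plant is assumed gas-fired, and the network, storage and cost-minimization detail of the actual model is omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        HOURS, N_RUNS = 24 * 7, 2000                     # one winter week, number of MC runs

        # Illustrative capacities [GW] and availabilities -- not the GB study's data
        demand = 35.0 + 8.0 * np.sin(np.arange(HOURS) * 2.0 * np.pi / 24.0)   # daily cycle
        WIND_CAP, GAS_LIMIT = 15.0, 42.0
        N_UNITS, UNIT_SIZE = 12, 5.0                     # gas-fired thermal fleet

        eens, lol_hours = 0.0, 0
        for _ in range(N_RUNS):
            wind = WIND_CAP * rng.beta(2.0, 5.0, HOURS)                  # variable wind output
            units_up = rng.random((HOURS, N_UNITS)) > 0.05               # independent unit outages
            thermal_avail = UNIT_SIZE * units_up.sum(axis=1)
            gas_ok = rng.random() > 0.02                                 # rare gas supply interruption
            gas_cap = GAS_LIMIT if gas_ok else 0.5 * GAS_LIMIT           # curtailed gas deliverability
            generation = wind + np.minimum(thermal_avail, gas_cap)
            shortfall = np.maximum(demand - generation, 0.0)
            eens += shortfall.sum()
            lol_hours += int((shortfall > 0.0).sum())

        print("loss of load probability:", round(lol_hours / (N_RUNS * HOURS), 4))
        print("expected energy not served [GWh/week]:", round(eens / N_RUNS, 2))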

  11. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a hierarchy of universes, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and here it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  12. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of studying quantum gravity with Lorentz invariance as an emergent phenomenon at low energy scales, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.

  13. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)

  14. Interstitial void structure in Cu-Sn liquid alloy as revealed from reverse Monte Carlo modelling

    Science.gov (United States)

    Hoyer, W.; Kleinhempel, R.; Lorinczi, A.; Pohlers, A.; Popescu, M.; Sava, F.

    2005-02-01

    A model for the structure of copper-tin liquid alloy has been developed using the standard reverse Monte Carlo method. The interstitial void structure (size distribution) was analysed. The effects of various kinds of voids (small size and large size) on the interference function and radial distribution function were investigated. Predictions related to the formation of some ternary alloys by filling the interstices of the basic alloy were advanced.

  15. Investigation of Multicritical Phenomena in ANNNI Model by Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    A. K. Murtazaev

    2012-01-01

    The anisotropic Ising model with competing interactions is investigated over a wide range of temperatures and |J1/J| parameters by means of Monte Carlo methods. Static critical exponents of the magnetization, susceptibility, heat capacity, and correlation radius are calculated in the neighborhood of the Lifshitz point. From the obtained results, a phase diagram is plotted, the coordinates of the Lifshitz point are determined, and the character of the multicritical behavior of the system is identified.

  16. Systematic Identification of Stakeholders for Engagement with Systems Modeling Efforts in the Snohomish Basin, Washington, USA

    Science.gov (United States)

    Even as stakeholder engagement in systems dynamic modeling efforts is increasingly promoted, the mechanisms for identifying which stakeholders should be included are rarely documented. Accordingly, for an Environmental Protection Agency’s Triple Value Simulation (3VS) mode...

  17. Adaptive effort investment in cognitive and physical tasks: a neurocomputational model.

    Science.gov (United States)

    Verguts, Tom; Vassena, Eliana; Silvetti, Massimo

    2015-01-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species.

  18. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...

  19. Monte Carlo simulation for statistical mechanics model of ion-channel cooperativity in cell membranes

    Science.gov (United States)

    Erdem, Riza; Aydiner, Ekrem

    2009-03-01

    Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. It was shown that the results match well the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.

  20. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2011-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools....... To identify promising models (or processes) for which the tools have not yet been constructed and start filling up these gaps. To propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider...

  1. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    Science.gov (United States)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods

  2. Adaptive Effort Investment in Cognitive and Physical Tasks: A Neurocomputational Model

    Directory of Open Access Journals (Sweden)

    Tom eVerguts

    2015-03-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising the anterior cingulate cortex and the ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species.

  3. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  4. Modeling dose-rate on/over the surface of cylindrical radio-models using Monte Carlo methods

    International Nuclear Information System (INIS)

    Xiao Xuefu; Ma Guoxue; Wen Fuping; Wang Zhongqi; Wang Chaohui; Zhang Jiyun; Huang Qingbo; Zhang Jiaqiu; Wang Xinxing; Wang Jun

    2004-01-01

    Objective: To determine the dose-rates on/over the surface of 10 cylindrical radio-models, which belong to the Metrology Station of Radio-Geological Survey of CNNC. Methods: The dose-rates on/over the surface of the 10 cylindrical radio-models were modeled using the Monte Carlo code MCNP, and were also measured with a high-pressure gas ionization chamber dose-rate meter. The dose-rate values modeled using the MCNP code were compared with those obtained by the authors in the present experimental measurements and with those obtained previously by other workers. Some factors causing the discrepancy between the data obtained by the authors using the MCNP code and the data obtained using other methods are discussed. Results: The dose-rates on/over the surface of the 10 cylindrical radio-models obtained using the MCNP code were in good agreement with those obtained by other workers using theoretical methods; the discrepancy was within ±5% in general, with a maximum of less than 10%. Conclusions: Provided that each input required by the Monte Carlo code is correct, the dose-rates on/over the surface of cylindrical radio-models modeled using the Monte Carlo code are accurate to within an uncertainty of 3%

  5. Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters

    International Nuclear Information System (INIS)

    Zagni, F.; Cicoria, G.; Lucconi, G.; Infantino, A.; Lodi, F.; Marengo, M.

    2014-01-01

    Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely the “PENELOPE” EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with the discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first instance determination of new calibration factors for non-standard radionuclides, for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regards to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the containers configuration. - Highlights: • We developed a Monte Carlo model of a radionuclide activity meter using Geant4. • The model was validated using several

  6. Development of a Monte Carlo model for the Brainlab microMLC

    Energy Technology Data Exchange (ETDEWEB)

    Belec, Jason; Patrocinio, Horacio; Verhaegen, Frank [Medical Physics Department, McGill University Health Centre, McGill University, Montreal General Hospital, 1650 Cedar Avenue, Montreal, Quebec, H3G1A4 (Canada)

    2005-03-07

    Stereotactic radiosurgery with several static conformal beams shaped by a micro multileaf collimator (µMLC) is used to treat small irregularly shaped brain lesions. Our goal is to perform Monte Carlo calculations of dose distributions for certain treatment plans as a verification tool. A dedicated µMLC component module for the BEAMnrc code was developed as part of this project and was incorporated in a model of the Varian CL2300 linear accelerator 6 MV photon beam. As an initial validation of the code, the leaf geometry was visualized by tracing particles through the component module and recording their position each time a leaf boundary was crossed. The leaf dimensions were measured and the leaf material density and interleaf air gap were chosen to match the simulated leaf leakage profiles with film measurements in a solid water phantom. A comparison between Monte Carlo calculations and measurements (diode, radiographic film) was performed for square and irregularly shaped fields incident on flat and homogeneous water phantoms. Results show that Monte Carlo calculations agree with measured dose distributions to within 2% and/or 1 mm, except for field sizes smaller than 1.2 cm diameter where agreement is within 5% due to uncertainties in measured output factors.

  7. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    International Nuclear Information System (INIS)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg

    2015-01-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user-friendly graphical interface was developed allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed good agreement, within 4-5% deviation, down to depths where the dose falls to 20% of the depth-dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. The typical Monte Carlo calculation time for these simulations was about 10 minutes on a standard PC, achieving an average statistical uncertainty of 2%. However, this calculation time depends strongly on the CT dataset used, the tube potential, the filter material/thickness and the applicator size.
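
    The track-length KERMA estimator mentioned above can be illustrated with a minimal sketch: each photon step contributes (track length × energy-fluence-to-kerma coefficient) to the voxel it traverses. The 1-D slab geometry, the coefficient table and the photon steps below are simplified placeholders for illustration only, not the EGSnrc implementation.

```python
import numpy as np

# Placeholder mass energy-absorption coefficients mu_en/rho [cm^2/g] vs energy [MeV]
# (rough water values for illustration; real data would come from standard tables).
ENERGY_GRID = np.array([0.05, 0.10, 0.20, 0.30])
MU_EN_RHO   = np.array([0.0422, 0.0255, 0.0297, 0.0319])

def kerma_track_length(photon_tracks, voxel_edges, density=1.0):
    """Score collision KERMA per voxel with a track-length estimator.

    photon_tracks: list of (z_start, z_end, energy, weight) tuples in a 1-D slab geometry.
    voxel_edges:   array of slab boundaries [cm].
    Returns an unnormalized KERMA score per voxel [MeV/g].
    """
    kerma = np.zeros(len(voxel_edges) - 1)
    for z0, z1, energy, weight in photon_tracks:
        mu_en = np.interp(energy, ENERGY_GRID, MU_EN_RHO)
        lo, hi = sorted((z0, z1))
        for i in range(len(kerma)):
            # Portion of this photon step that falls inside voxel i
            seg = max(0.0, min(hi, voxel_edges[i + 1]) - max(lo, voxel_edges[i]))
            # Track length times the energy fluence-to-kerma conversion factor
            kerma[i] += weight * seg * mu_en * energy / density
    return kerma

# Toy usage: two photon steps crossing a three-voxel slab
tracks = [(0.0, 2.5, 0.10, 1.0), (0.5, 3.0, 0.20, 1.0)]
print(kerma_track_length(tracks, np.array([0.0, 1.0, 2.0, 3.0])))
```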

  8. Quantum Monte Carlo simulation for S=1 Heisenberg model with uniaxial anisotropy

    International Nuclear Information System (INIS)

    Tsukamoto, Mitsuaki; Batista, Cristian; Kawashima, Naoki

    2007-01-01

    We perform quantum Monte Carlo simulations for the S=1 Heisenberg model with a uniaxial anisotropy. The system exhibits a phase transition as we vary the anisotropy, and long-range order appears at a finite temperature when the exchange interaction J is comparable to the uniaxial anisotropy D. We investigate the quantum critical phenomena of this model and obtain the phase-transition line, which approaches a power law with logarithmic corrections at low temperature. We derive the form of the logarithmic corrections analytically and compare it to our simulation results.
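
    For reference, the spin-1 Hamiltonian studied in this record is conventionally written as below; sign conventions for J and D vary between papers, so this form is only indicative of the model, not a quotation of the paper.

```latex
H = J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j + D \sum_i \left( S_i^z \right)^2
```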

  9. Modeling the cathode region of noble gas mixture discharges using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Donko, Z.; Janossy, M.

    1992-10-01

    A model of the cathode dark space of DC glow discharges was developed in order to study the effects caused by mixing small amounts (≤2%) of other noble gases (Ne, Ar, Kr and Xe) into He. The motion of charged particles was described by Monte Carlo simulation. Several discharge parameters (electron and ion energy distribution functions, electron and ion current densities, reduced ionization coefficients, and current density-voltage characteristics) were obtained. Small amounts of admixtures were found to significantly modify the discharge parameters. Current density-voltage characteristics obtained from the model showed good agreement with experimental data. (author) 40 refs.; 14 figs

  10. Monte Carlo modelling of the Belgian materials testing reactor BR2: present status

    International Nuclear Information System (INIS)

    Verboomen, B.; Aoust, Th.; Raedt, Ch. de; Beeckmans de West-Meerbeeck, A.

    2001-01-01

    A very detailed 3-D MCNP-4B model of the BR2 reactor was developed to perform all neutron and gamma calculations needed for the design of new experimental irradiation rigs. The Monte Carlo model of BR2 includes a nearly exact geometrical representation of the fuel elements (now with their axially varying burn-up), of the partially inserted control and regulating rods, of the experimental devices and of the radioisotope production rigs. The multi-level geometry capabilities of MCNP-4B are fully exploited to obtain sufficiently flexible tools to cope with the frequently changing core loading. (orig.)

  11. Coupled Monte Carlo simulation and Copula theory for uncertainty analysis of multiphase flow simulation models.

    Science.gov (United States)

    Jiang, Xue; Na, Jin; Lu, Wenxi; Zhang, Yu

    2017-11-01

    Simulation-optimization techniques are effective in identifying an optimal remediation strategy. Simulation models with uncertainty, primarily in the form of parameter uncertainty with different degrees of correlation, influence the reliability of the optimal remediation strategy. In this study, an approach coupling Monte Carlo simulation and Copula theory is proposed for uncertainty analysis of a simulation model when parameters are correlated. Using the self-adaptive weight particle swarm optimization Kriging method, a surrogate model was constructed to replace the simulation model and reduce the computational burden and time consumption resulting from repeated and multiple Monte Carlo simulations. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were employed to identify whether the t Copula function or the Gaussian Copula is the optimal Copula function to match the relevant structure of the parameters. The results show that both the AIC and BIC values of the t Copula function are less than those of the Gaussian Copula function. This indicates that the t Copula function is the optimal function for matching the relevant structure of the parameters. The outputs of the simulation model when parameter correlation was considered and when it was ignored were compared. The results show that the amplitude of the fluctuation interval when parameter correlation was considered is less than the corresponding amplitude when parameter correlation was ignored. Moreover, it was demonstrated that considering the correlation among parameters is essential for uncertainty analysis of a simulation model, and the results of uncertainty analysis should be incorporated into the remediation strategy optimization process.
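
    As a hedged illustration of the coupling described above (not the authors' code), the sketch below draws correlated parameter sets through a Gaussian copula and pushes them through a placeholder simulation model; the marginal distributions, correlation matrix and model function are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Assumed correlation structure between two model parameters (illustrative).
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])

# Assumed marginal distributions (e.g. conductivity ~ lognormal, porosity ~ beta).
marginals = [stats.lognorm(s=0.5, scale=10.0), stats.beta(a=2.0, b=5.0)]

def toy_simulation_model(k, phi):
    # Placeholder standing in for the (surrogate) multiphase flow model output.
    return k * (1.0 - phi) ** 2

def copula_monte_carlo(n_samples=5000):
    # 1. Draw correlated standard normals (Gaussian copula).
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n_samples)
    # 2. Map to uniforms via the normal CDF.
    u = stats.norm.cdf(z)
    # 3. Transform uniforms to the assumed marginals via inverse CDFs.
    k = marginals[0].ppf(u[:, 0])
    phi = marginals[1].ppf(u[:, 1])
    # 4. Propagate through the model and summarize the output uncertainty.
    y = toy_simulation_model(k, phi)
    return np.percentile(y, [2.5, 50, 97.5])

print("output 2.5/50/97.5 percentiles:", copula_monte_carlo())
```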

  12. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data arising from climate change. This study proposed a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics and applicability of the models.

  13. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981, Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
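
    The survival biasing and rouletting mentioned above are standard Monte Carlo variance-reduction devices; the following generic sketch (not MCV code) shows how a particle's weight is reduced at each collision instead of the particle being killed by absorption, with Russian roulette terminating low-weight histories.

```python
import random

def survival_biasing_history(sigma_t=1.0, sigma_a=0.3, w_cut=0.05, w_survive=0.5):
    """Follow one particle history in an infinite medium (illustrative only).

    Instead of absorbing the particle with probability sigma_a/sigma_t,
    its weight is multiplied by the scattering probability (survival biasing).
    Russian roulette is played when the weight drops below w_cut.
    Returns the number of collisions in the history.
    """
    weight, collisions = 1.0, 0
    while True:
        collisions += 1
        # Implicit capture: keep the particle, reduce its weight.
        weight *= 1.0 - sigma_a / sigma_t
        if weight < w_cut:
            # Russian roulette: survive with probability weight / w_survive.
            if random.random() < weight / w_survive:
                weight = w_survive          # survivor carries the boosted weight
            else:
                return collisions           # history terminated

# Toy usage: average number of collisions per history
print(sum(survival_biasing_history() for _ in range(1000)) / 1000)
```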

  14. Equilibrium resurfacing of Venus: Results from new Monte Carlo modeling and implications for Venus surface histories

    Science.gov (United States)

    Bjonnes, E. E.; Hansen, V. L.; James, B.; Swenson, J. B.

    2012-02-01

    Venus' impact crater population imposes two observational constraints that must be met by possible model surface histories: (1) near-random spatial distribution of ~975 craters, and (2) few obviously modified impact craters. Catastrophic resurfacing obviously meets these constraints, but equilibrium resurfacing histories require a balance between crater distribution and modification to be viable. Equilibrium resurfacing scenarios with small incremental resurfacing areas meet constraint 1 but not 2, whereas those with large incremental resurfacing areas meet constraint 2 but not 1. Results of Monte Carlo modeling of equilibrium resurfacing (Strom et al., 1994) are widely cited as support for catastrophic resurfacing hypotheses and as evidence against hypotheses of equilibrium resurfacing. However, the Monte Carlo models did not consider intermediate-size incremental resurfacing areas, nor did they consider histories in which the era of impact crater formation outlasts an era of equilibrium resurfacing. We construct three suites of Monte Carlo experiments that examine incremental resurfacing areas not previously considered (5%, 1%, 0.7%, and 0.1%), and that vary the duration of resurfacing relative to impact crater formation time (1:1 [suite A], 5:6 [suite B], and 2:3 [suite C]). We test the model results against the two impact crater constraints. Several experiments met both constraints. The shorter the time period of equilibrium resurfacing, or the longer the time of crater formation following the cessation of equilibrium resurfacing, the larger the possible areas of incremental resurfacing that satisfy both constraints. Equilibrium resurfacing is statistically viable for suite A at 0.1%, suite B at 0.1%, and suite C for 1%, 0.7%, and 0.1% areas of incremental resurfacing.
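
    A stripped-down version of this kind of experiment can be written in a few lines: craters accumulate at random positions and times, random patches covering a fixed fraction of the surface erase craters formed before them, and the surviving population is then tested against the constraints. The 1-D geometry, patch fraction, patch count and duration ratio below are placeholders for illustration, not the suite A-C parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def resurfacing_experiment(n_craters=975, patch_fraction=0.01, n_patches=300,
                           resurf_end=0.75):
    """Toy 1-D equilibrium-resurfacing Monte Carlo experiment.

    Craters appear at random (time, position) over a unit interval; resurfacing
    patches of width `patch_fraction` occur at random times up to `resurf_end`
    (fraction of the crater-formation era) and erase craters formed before them.
    Returns the number of surviving craters and the fraction of survivors that
    were clipped by a patch edge (a crude proxy for "modified" craters).
    """
    t_crater = rng.random(n_craters)
    x_crater = rng.random(n_craters)
    alive = np.ones(n_craters, dtype=bool)
    modified = np.zeros(n_craters, dtype=bool)

    for _ in range(n_patches):
        t_patch = rng.random() * resurf_end
        x0 = rng.random()
        older = t_crater < t_patch
        inside = (np.abs(x_crater - x0) < patch_fraction / 2) & older
        edge = (np.abs(np.abs(x_crater - x0) - patch_fraction / 2)
                < 0.1 * patch_fraction) & older
        alive &= ~inside          # craters fully inside an earlier patch are erased
        modified |= edge & alive  # craters clipped by a patch edge count as modified

    n_alive = int(alive.sum())
    return n_alive, modified[alive].sum() / max(n_alive, 1)

print(resurfacing_experiment())
```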

  15. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we presented a new SMC sampler entitled the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the works of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, differential evolution algorithm and Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, including a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model, first considering only parameter uncertainty and then simultaneously considering parameter and input uncertainty, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicated that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.

  16. Monte Carlo Modelling of Single-Crystal Diffuse Scattering from Intermetallics

    Directory of Open Access Journals (Sweden)

    Darren J. Goossens

    2016-02-01

    Full Text Available Single-crystal diffuse scattering (SCDS) reveals detailed structural insights into materials. In particular, it is sensitive to two-body correlations, whereas traditional Bragg peak-based methods are sensitive to single-body correlations. This means that diffuse scattering is sensitive to ordering that persists for just a few unit cells: nanoscale order, sometimes referred to as “local structure”, which is often crucial for understanding a material and its function. Metals and alloys were early candidates for SCDS studies because of the availability of large single crystals. While great progress has been made in areas like ab initio modelling and molecular dynamics, a place remains for Monte Carlo modelling of model crystals because of its ability to model very large systems; this is important when correlations are relatively long (though still finite) in range. This paper briefly outlines, and gives examples of, some Monte Carlo methods appropriate for the modelling of SCDS from metallic compounds, and considers data collection as well as analysis. Even if the interest in the material is driven primarily by magnetism or transport behaviour, an understanding of the local structure can underpin such studies and give an indication of nanoscale inhomogeneity.

  17. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    International Nuclear Information System (INIS)

    Abhold, M.E.; Baker, M.C.

    1999-01-01

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP) predict neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions

  18. Modeling of radiation-induced bystander effect using Monte Carlo methods

    Science.gov (United States)

    Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

    2009-03-01

    Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, or even whole biological organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which the cells were sparsely located in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.

  19. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact and jointed rock properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on the rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
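
    The core of such a simulator is repeated sampling of the Kuz-Ram inputs followed by evaluation of a Rosin-Rammler size distribution. The sketch below keeps only that skeleton: the median fragment size and uniformity index are drawn from assumed input distributions rather than computed from the full Kuznetsov and Cunningham equations, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rosin_rammler_passing(x, x50, n):
    """Cumulative fraction passing size x for a Rosin-Rammler distribution
    parameterized by the median size x50 and uniformity index n."""
    return 1.0 - np.exp(-0.693 * (x / x50) ** n)

def fragmentation_monte_carlo(n_trials=5000, sieve_sizes=(0.1, 0.25, 0.5, 1.0)):
    """Propagate uncertainty in the Kuz-Ram parameters to the size distribution.

    x50 [m] and n are sampled from assumed distributions that stand in for the
    Kuznetsov mean-size and Cunningham uniformity equations."""
    x50 = rng.normal(0.35, 0.05, n_trials)       # assumed median fragment size scatter
    n_idx = rng.uniform(1.2, 1.8, n_trials)      # assumed uniformity index range
    passing = np.array([rosin_rammler_passing(np.array(sieve_sizes), a, b)
                        for a, b in zip(x50, n_idx)])
    return passing.mean(axis=0), passing.std(axis=0)

mean_p, std_p = fragmentation_monte_carlo()
print("mean fraction passing each sieve:", np.round(mean_p, 3))
print("std of fraction passing:", np.round(std_p, 3))
```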

  20. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
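
    For readers unfamiliar with the SIR baseline that LRPF is compared against, a generic sequential importance resampling filter for a scalar state is sketched below; the linear-Gaussian toy state-space model is an assumption for illustration, not the WEP hydrologic model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_particle_filter(observations, n_particles=500,
                        process_std=0.5, obs_std=1.0):
    """Basic sequential importance resampling (SIR) filter for the toy model
    x_t = 0.9 * x_{t-1} + process noise,  y_t = x_t + observation noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the (assumed) state transition model.
        particles = 0.9 * particles + rng.normal(0.0, process_std, n_particles)
        # Weight by the observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample (multinomial) to fight weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Toy usage with synthetic observations
true_x = np.cumsum(rng.normal(0, 0.3, 50))
obs = true_x + rng.normal(0, 1.0, 50)
print(sir_particle_filter(obs)[:5])
```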

  1. Transport appraisal and Monte Carlo simulation by use of the CBA-DK model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2011-01-01

    calculation, where risk analysis is carried out using Monte Carlo simulation. Special emphasis has been placed on the separation between inherent randomness in the modeling system and lack of knowledge. These two concepts have been defined in terms of variability (ontological uncertainty) and uncertainty...... (epistemic uncertainty). After a short introduction to deterministic calculation resulting in some evaluation criteria a more comprehensive evaluation of the stochastic calculation is made. Especially, the risk analysis part of CBA-DK, with considerations about which probability distributions should be used...

  2. A threaded Java concurrent implementation of the Monte-Carlo Metropolis Ising model

    Science.gov (United States)

    Castañeda-Marroquín, Carlos; de la Puente, Alfonso Ortega; Alfonseca, Manuel; Glazier, James A.; Swat, Maciej

    2010-01-01

    This paper describes a concurrent Java implementation of the Metropolis Monte-Carlo algorithm that is used in 2D Ising model simulations. The presented method uses threads, monitors, shared variables and high level concurrent constructs that hide the low level details. In our algorithm we assign one thread to handle one spin flip attempt at a time. We use a special lattice site selection algorithm to avoid two or more threads working concurrently in the region of the lattice that “belongs” to two or more different spins undergoing spin-flip transformations. Our approach does not depend on the current platform and maximizes concurrent use of the available resources. PMID:21814633
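
    The single-spin-flip Metropolis kernel at the heart of that implementation is compact enough to show in full; the serial Python sketch below reproduces the physics only (not the threading or lattice-partitioning scheme of the paper), with lattice size and coupling chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis_ising(L=16, beta=0.44, n_sweeps=100):
    """Single-spin-flip Metropolis dynamics for the 2-D Ising model
    on an L x L lattice with periodic boundaries (J = 1, no field)."""
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps * L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy change for flipping spin (i, j): dE = 2 * s_ij * (sum of neighbours)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        # Metropolis acceptance rule
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return abs(spins.mean())   # magnetization per spin

print("|m| near the critical coupling:", metropolis_ising())
```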

  3. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.

    Science.gov (United States)

    Galford, J E

    2017-04-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Empirical Study of Homogeneous and Heterogeneous Ensemble Models for Software Development Effort Estimation

    Directory of Open Access Journals (Sweden)

    Mahmoud O. Elish

    2013-01-01

    Full Text Available Accurate estimation of software development effort is essential for effective management and control of software development projects. Many software effort estimation methods have been proposed in the literature including computational intelligence models. However, none of the existing models proved to be suitable under all circumstances; that is, their performance varies from one dataset to another. The goal of an ensemble model is to manage each of its individual models’ strengths and weaknesses automatically, leading to the best possible decision being taken overall. In this paper, we have developed different homogeneous and heterogeneous ensembles of optimized hybrid computational intelligence models for software development effort estimation. Different linear and nonlinear combiners have been used to combine the base hybrid learners. We have conducted an empirical study to evaluate and compare the performance of these ensembles using five popular datasets. The results confirm that individual models are not reliable as their performance is inconsistent and unstable across different datasets. Although none of the ensemble models was consistently the best, many of them were frequently among the best models for each dataset. The homogeneous ensemble of support vector regression (SVR), with the nonlinear combiner adaptive neurofuzzy inference systems-subtractive clustering (ANFIS-SC), was the best model when considering the average rank of each model across the five datasets.

  5. Monte Carlo Radiative Transfer Modeling of Lightning Observed in Galileo Images of Jupiter

    Science.gov (United States)

    Dyudine, U. A.; Ingersoll, Andrew P.

    2002-01-01

    We study lightning on Jupiter and the clouds illuminated by the lightning using images taken by the Galileo orbiter. The Galileo images have a resolution of 25 km/pixel and are able to resolve the shape of the single lightning spots in the images, which have full widths at half the maximum intensity in the range of 90-160 km. We compare the measured lightning flash images with simulated images produced by our 3-D Monte Carlo light-scattering model. The model calculates Monte Carlo scattering of photons in a 3-D opacity distribution. During each scattering event, light is partially absorbed. The new direction of the photon after scattering is chosen according to a Henyey-Greenstein phase function. An image from each direction is produced by accumulating photons emerging from the cloud in a small range (bins) of emission angles. Lightning bolts are modeled either as points or vertical lines. Our results suggest that some of the observed scattering patterns are produced in a 3-D cloud rather than in a plane-parallel cloud layer. Lightning is estimated to occur at least as deep as the bottom of the expected water cloud. For the six cases studied, we find that the clouds above the lightning are optically thick (tau > 5). Jovian flashes are more regular and circular than the largest terrestrial flashes observed from space. On Jupiter there is nothing equivalent to the 30-40-km horizontal flashes which are seen on Earth.
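
    The Henyey-Greenstein scattering step referred to above has a standard closed-form inverse-CDF sampler; the snippet below shows just that step with an illustrative asymmetry parameter g, not the full 3-D radiative transfer code of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_hg_cos_theta(g, n):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    using the standard inverse-CDF formula (isotropic in the limit g -> 0)."""
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

cos_t = sample_hg_cos_theta(g=0.85, n=100000)
print("mean cosine (should be close to g):", cos_t.mean())
```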

  6. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  7. Monte carlo inference for state-space models of wild animal populations.

    Science.gov (United States)

    Newman, Ken B; Fernández, Carmen; Thomas, Len; Buckland, Stephen T

    2009-06-01

    We compare two Monte Carlo (MC) procedures, sequential importance sampling (SIS) and Markov chain Monte Carlo (MCMC), for making Bayesian inferences about the unknown states and parameters of state-space models for animal populations. The procedures were applied to both simulated and real pup count data for the British grey seal metapopulation, as well as to simulated data for a Chinook salmon population. The MCMC implementation was based on tailor-made proposal distributions combined with analytical integration of some of the states and parameters. SIS was implemented in a more generic fashion. For the same computing time, MCMC tended to yield posterior distributions with less MC variation across different runs of the algorithm than the SIS implementation, with the exception, in the seal model, of some states and one of the parameters that mixed quite slowly. The efficiency of the SIS sampler was greatly increased by analytically integrating out unknown parameters in the observation model. We consider that a careful implementation of MCMC for cases where data are informative relative to the priors sets the gold standard, but that SIS samplers are a viable alternative that can be programmed more quickly. Our SIS implementation is particularly competitive in situations where the data are relatively uninformative; in other cases, SIS may require substantially more computer power than an efficient implementation of MCMC to achieve the same level of MC error.

  8. Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Capizzo, M C; Sperandeo-Mineo, R M; Zarcone, M [UoP-PERG, University of Palermo Physics Education Research Group and Dipartimento di Fisica e Tecnologie Relative, Universita di Palermo (Italy)], E-mail: sperandeo@difter.unipa.it

    2008-05-15

    We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels.

  9. Treatment plan evaluation for interstitial photodynamic therapy in a mouse model by Monte Carlo simulation with FullMonte

    Directory of Open Access Journals (Sweden)

    Jeffrey eCassidy

    2015-02-01

    Full Text Available Monte Carlo (MC simulation is recognized as the gold standard for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations particularly are attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have other implementation features including inherent parallelism, and permit a continuously-variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT to generate dose-volume histograms on a tetrahedral mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods and is therefore applicable to a very broad cross-section of anatomy and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right which greatly reduces the number of packets required and hence runtime required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.

  10. Clinical Management and Burden of Prostate Cancer: A Markov Monte Carlo Model

    Science.gov (United States)

    Sanyal, Chiranjeev; Aprikian, Armen; Cury, Fabio; Chevalier, Simone; Dragomir, Alice

    2014-01-01

    Background Prostate cancer (PCa) is the most common non-skin cancer among men in developed countries. Several novel treatments have been adopted by healthcare systems to manage PCa. Most of the observational studies and randomized trials on PCa have concurrently evaluated only a few treatments over short follow-up periods. Further, preceding decision analytic models on PCa management have not evaluated various contemporary management options. Therefore, a contemporary decision analytic model was necessary to address limitations of the literature by synthesizing the evidence on novel treatments, thereby forecasting short- and long-term clinical outcomes. Objectives To develop and validate a Markov Monte Carlo model for the contemporary clinical management of PCa, and to assess the clinical burden of the disease from diagnosis to end-of-life. Methods A Markov Monte Carlo model was developed to simulate the management of PCa in men 65 years and older from diagnosis to end-of-life. Health states modeled were: risk at diagnosis, active surveillance, active treatment, PCa recurrence, PCa recurrence free, metastatic castrate resistant prostate cancer, overall and PCa death. Treatment trajectories were based on state transition probabilities derived from the literature. Validation and sensitivity analyses assessed the accuracy and robustness of model predicted outcomes. Results Validation indicated model predicted rates were comparable to observed rates in the published literature. The simulated distribution of clinical outcomes for the base case was consistent with sensitivity analyses. Predicted rates of clinical outcomes and mortality varied across risk groups. Life expectancy and health-adjusted life expectancy predicted for the simulated cohort were 20.9 years (95% CI 20.5–21.3) and 18.2 years (95% CI 17.9–18.5), respectively. Conclusion Study findings indicated contemporary management strategies improved survival and quality of life in patients with PCa. This model could be used
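
    A Markov Monte Carlo (microsimulation) model of this kind boils down to sampling individual patient trajectories through a transition matrix. The states and yearly probabilities below are invented placeholders, purely to show the mechanics; they do not reproduce the published model or its parameter values.

```python
import numpy as np

rng = np.random.default_rng(5)

STATES = ["active_surveillance", "active_treatment", "recurrence", "metastatic", "dead"]

# Assumed yearly transition probabilities (rows sum to 1) - illustrative only.
P = np.array([
    [0.80, 0.15, 0.02, 0.01, 0.02],   # active surveillance
    [0.00, 0.85, 0.08, 0.03, 0.04],   # active treatment
    [0.00, 0.00, 0.80, 0.14, 0.06],   # recurrence
    [0.00, 0.00, 0.00, 0.75, 0.25],   # metastatic
    [0.00, 0.00, 0.00, 0.00, 1.00],   # dead (absorbing)
])

def simulate_cohort(n_patients=2000, horizon_years=35):
    """Monte Carlo microsimulation: returns mean life-years from diagnosis."""
    life_years = np.zeros(n_patients)
    for p in range(n_patients):
        state = 0                                    # everyone starts under surveillance
        for _ in range(horizon_years):
            if STATES[state] == "dead":
                break
            life_years[p] += 1.0
            state = rng.choice(len(STATES), p=P[state])
    return life_years.mean()

print("mean life-years over the horizon:", round(simulate_cohort(), 1))
```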

  11. Effort dynamics in a fisheries bioeconomic model: A vessel level approach through Game Theory

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2007-09-01

    Full Text Available Red shrimp, Aristeus antennatus (Risso, 1816) is one of the most important resources for the bottom-trawl fleets in the northwestern Mediterranean, in terms of both landings and economic value. A simple bioeconomic model introducing Game Theory for the prediction of effort dynamics at vessel level is proposed. The game is performed by the twelve vessels exploiting red shrimp in Blanes. Within the game, two solutions are computed: non-cooperation and cooperation. The first is proposed as a realistic method for the prediction of individual effort strategies and the second is used to illustrate the potential profitability of the analysed fishery. The effort strategy for each vessel is the number of fishing days per year and their objective is profit maximisation, individual profits for the non-cooperative solution and total profits for the cooperative one. In the present analysis, strategic conflicts arise from the differences between vessels in technical efficiency (catchability coefficient) and economic efficiency (defined here). The ten-year and 1000-iteration stochastic simulations performed for the two effort solutions show that the best strategy from both an economic and a conservationist perspective is homogeneous effort cooperation. However, the results under non-cooperation are more similar to the observed data on effort strategies and landings.

  12. A Covariance Structure Model Test of Antecedents of Adolescent Alcohol Misuse and a Prevention Effort.

    Science.gov (United States)

    Dielman, T. E.; And Others

    1989-01-01

    Questionnaires were administered to 4,157 junior high school students to determine levels of alcohol misuse, exposure to peer use and misuse of alcohol, susceptibility to peer pressure, internal health locus of control, and self-esteem. A conceptual model of antecedents of adolescent alcohol misuse and effectiveness of a prevention effort was…

  13. Commonalities in WEPP and WEPS and efforts towards a single erosion process model

    NARCIS (Netherlands)

    Visser, S.M.; Flanagan, D.C.

    2004-01-01

    Since the late 1980's, the Agricultural Research Service (ARS) of the United States Department of Agriculture (USDA) has been developing process-based erosion models to predict water erosion and wind erosion. During much of that time, the development efforts of the Water Erosion Prediction Project

  14. Development of new physical models devoted to internal dosimetry using the EGS4 Monte Carlo code

    International Nuclear Information System (INIS)

    Clairand, I.

    1999-01-01

    In the framework of diagnostic and therapeutic applications of nuclear medicine, the calculation of the absorbed dose at the organ scale is necessary for the evaluation of the risks incurred by patients after the intake of radiopharmaceuticals. The classical calculation methods supply only a very approximate estimate of this dose because they use dosimetric models based on anthropomorphic phantoms of average corpulence (reference adult man and woman). The aim of this work is to improve these models by a better consideration of the physical characteristics of the patient in order to refine the dosimetric estimations. Several mathematical anthropomorphic phantoms representative of the morphological variations encountered in the adult population have been developed. The corresponding dosimetric parameters have been determined using the Monte Carlo method. The calculation code, based on the EGS4 Monte Carlo code, has been validated against literature data for reference phantoms. Several phantoms of different corpulence have been developed based on the analysis of anthropometric data from medico-legal autopsies. The corresponding dosimetric estimations show the influence of morphological variations on the absorbed dose. Two examples of application, based on clinical data, confirm the interest of this approach with respect to classical methods. (J.S.)

  15. Fast Monte Carlo-simulator with full collimator and detector response modelling for SPECT

    International Nuclear Information System (INIS)

    Sohlberg, A.O.; Kajaste, M.T.

    2012-01-01

    Monte Carlo (MC)-simulations have proved to be a valuable tool in studying single photon emission computed tomography (SPECT)-reconstruction algorithms. Despite their popularity, the use of Monte Carlo-simulations is still often limited by their large computation demand. This is especially true in situations where full collimator and detector modelling with septal penetration, scatter and X-ray fluorescence needs to be included. This paper presents a rapid and simple MC-simulator, which can effectively reduce the computation times. The simulator was built on the convolution-based forced detection principle, which can markedly lower the number of simulated photons. Full collimator and detector response look-up tables are pre-simulated and then later used in the actual MC-simulations to model the system response. The developed simulator was validated by comparing it against 123I point source measurements made with a clinical gamma camera system and against 99mTc software phantom simulations made with the SIMIND MC-package. The results showed good agreement between the new simulator, measurements and the SIMIND-package. The new simulator provided near noise-free projection data in approximately 1.5 min per projection with 99mTc, which was less than one-tenth of SIMIND's time. The developed MC-simulator can markedly decrease the simulation time without sacrificing image quality. (author)

  16. Optical model for port-wine stain skin and its Monte Carlo simulation

    Science.gov (United States)

    Xu, Lanqing; Xiao, Zhengying; Chen, Rong; Wang, Ying

    2008-12-01

    Laser irradiation is the most widely accepted therapy for PWS patients at present. Its efficacy is highly dependent on the energy deposition in the skin. To achieve optimal PWS treatment parameters, a better understanding of light propagation in PWS skin is indispensable. Traditional Monte Carlo simulations using simple geometries such as a planar layered tissue model cannot provide the energy deposition in skin with enlarged blood vessels. In this paper the structure of normal skin and the pathological character of PWS skin were analyzed in detail, and the true structure was simplified into a hybrid layered mathematical model to characterize the two most important aspects of PWS skin: the layered structure and the overabundant dermal vessels. The basic laser-tissue interaction mechanisms in skin were investigated, together with the optical parameters of PWS skin tissue at the therapeutic wavelength. Monte Carlo (MC) based techniques were chosen to calculate the energy deposition in the skin. The results can be used in choosing the optical dosage. Further simulations can be used to predict optimal laser parameters to achieve high-efficacy laser treatment of PWS.

  17. Kinetic Monte-Carlo modeling of hydrogen retention and re-emission from Tore Supra deposits

    International Nuclear Information System (INIS)

    Rai, A.; Schneider, R.; Warrier, M.; Roubin, P.; Martin, C.; Richou, M.

    2009-01-01

    A multi-scale model has been developed to study the reactive-diffusive transport of hydrogen in porous graphite [A. Rai, R. Schneider, M. Warrier, J. Nucl. Mater. (submitted for publication). http://dx.doi.org/10.1016/j.jnucmat.2007.08.013.]. The deposits found on the leading edge of the neutralizer of Tore Supra are multi-scale in nature, consisting of micropores with typical size lower than 2 nm (∼11%), mesopores (∼5%) and macropores with a typical size more than 50 nm [C. Martin, M. Richou, W. Sakaily, B. Pegourie, C. Brosset, P. Roubin, J. Nucl. Mater. 363-365 (2007) 1251]. Kinetic Monte-Carlo (KMC) has been used to study the hydrogen transport at meso-scales. Recombination rate and the diffusion coefficient calculated at the meso-scale was used as an input to scale up and analyze the hydrogen transport at macro-scale. A combination of KMC and MCD (Monte-Carlo diffusion) method was used at macro-scales. Flux dependence of hydrogen recycling has been studied. The retention and re-emission analysis of the model has been extended to study the chemical erosion process based on the Kueppers-Hopf cycle [M. Wittmann, J. Kueppers, J. Nucl. Mater. 227 (1996) 186].
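
    The KMC layer of such a multi-scale model is typically a residence-time (BKL/Gillespie-type) algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. The event list (hop, trap, detrap, recombine) and its rates below are placeholders for illustration only, not the actual rates used for the Tore Supra deposits.

```python
import math
import random

def kmc_step(rates):
    """One residence-time kinetic Monte Carlo step.

    rates: dict mapping event name -> rate [1/s].
    Returns (chosen event, time increment dt)."""
    total = sum(rates.values())
    # Pick an event with probability proportional to its rate.
    r = random.random() * total
    acc = 0.0
    chosen = None
    for event, rate in rates.items():
        acc += rate
        if r <= acc:
            chosen = event
            break
    if chosen is None:                    # guard against floating-point round-off
        chosen = event
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - random.random()) / total
    return chosen, dt

# Toy usage: follow one hydrogen atom for a fixed number of events
rates = {"hop": 1.0e6, "trap": 1.0e4, "detrap": 5.0e3, "recombine": 1.0e2}
t, counts = 0.0, {k: 0 for k in rates}
for _ in range(10000):
    ev, dt = kmc_step(rates)
    t += dt
    counts[ev] += 1
print("elapsed time [s]:", t, "event counts:", counts)
```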

  18. Investigation of SIBM driven recrystallization in alpha Zirconium based on EBSD data and Monte Carlo modeling

    Science.gov (United States)

    Jedrychowski, M.; Bacroix, B.; Salman, O. U.; Tarasiuk, J.; Wronski, S.

    2015-08-01

    The work focuses on the influence of moderate plastic deformation on subsequent partial recrystallization of hexagonal zirconium (Zr702). In the considered case, strain-induced boundary migration (SIBM) is assumed to be the dominating recrystallization mechanism. This hypothesis is analyzed and tested in detail using experimental EBSD-OIM data and Monte Carlo computer simulations. An EBSD investigation is performed on zirconium samples, which were channel-die compressed in two perpendicular directions: the normal direction (ND) and transverse direction (TD) of the initial material sheet. The maximal applied strain was below 17%. Then, samples were briefly annealed in order to achieve a partly recrystallized state. The obtained EBSD data were analyzed in terms of texture evolution associated with a microstructural characterization, including: kernel average misorientation (KAM), grain orientation spread (GOS), twinning, grain size distributions, and a description of grain boundary regions. In parallel, a Monte Carlo Potts model combined with experimental microstructures was employed to verify two main recrystallization scenarios: SIBM-driven growth from deformed sub-grains and classical growth of recrystallization nuclei. It is concluded that simulation results provided by the SIBM model are in good agreement with experimental data in terms of texture as well as microstructural evolution.

  19. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since

  20. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Peplow, Douglas E. [ORNL; Miller, Thomas Martin [ORNL; Patton, Bruce W [ORNL; Wagner, John C [ORNL

    2013-01-01

    The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.

  1. Monte Carlo impurity transport modeling in the DIII-D transport

    International Nuclear Information System (INIS)

    Evans, T.E.; Finkenthal, D.F.

    1998-04-01

    A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed.

  2. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and the calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
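
    In the continuous-time setting, recruitment per centre is commonly modelled as a Poisson process; the sketch below estimates the distribution of the time needed to reach a target sample size under assumed per-centre rates. The rates and target are placeholders for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def recruitment_time(target_n=200, centre_rates=(0.5, 0.8, 1.2), n_sim=5000):
    """Monte Carlo estimate of the time to recruit `target_n` patients.

    Each centre recruits as an independent Poisson process with the given
    rate [patients/week]; the pooled process is Poisson with the summed rate,
    so the completion time is a sum of exponential inter-arrival gaps, i.e. a
    gamma random variable."""
    total_rate = sum(centre_rates)
    times = rng.gamma(shape=target_n, scale=1.0 / total_rate, size=n_sim)
    return np.percentile(times, [5, 50, 95])

print("recruitment time (weeks), 5/50/95th percentiles:", recruitment_time())
```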

  3. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining......-distributed responses are, however, still quite unexplored. Especially for complex models, rigorous parameterization, reduction of the parameter space and use of efficient and effective algorithms are essential to facilitate the calibration process and make it more robust. Moreover, for these models multi...... the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates...

  4. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    Energy Technology Data Exchange (ETDEWEB)

    Gasparro, Joel [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium); Hult, Mikael [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium)], E-mail: mikael.hult@ec.europa.eu; Johnston, Peter N. [Applied Physics, Royal Melbourne Institute of Technology, GPO Box 2476V, Melbourne 3001 (Australia); Tagziria, Hamid [EC-JRC-IPSC, Institute for the Protection and the Security of the Citizen, Via E. Fermi 1, I-21020 Ispra (Italy)

    2008-09-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (in the order of per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher) particularly for gamma-ray energies below 100 keV.

  5. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    International Nuclear Information System (INIS)

    Gasparro, Joel; Hult, Mikael; Johnston, Peter N.; Tagziria, Hamid

    2008-01-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (in the order of per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher) particularly for gamma-ray energies below 100 keV

  6. Level densities of heavy nuclei in the shell model Monte Carlo approach

    Directory of Open Access Journals (Sweden)

    Alhassid Y.

    2016-01-01

    Full Text Available Nuclear level densities are necessary input to the Hauser-Feshbach theory of compound nuclear reactions. However, the microscopic calculation of level densities in the presence of correlations is a challenging many-body problem. The configurationinteraction shell model provides a suitable framework for the inclusion of correlations and shell effects, but the large dimensionality of the many-particle model space has limited its application in heavy nuclei. The shell model Monte Carlo method enables calculations in spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods and has proven to be a powerful tool in the microscopic calculation of level densities. We discuss recent applications of the method in heavy nuclei.

  7. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach...... from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly...... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently...

  8. A Monte Carlo model for photoneutron generation by a medical LINAC

    Science.gov (United States)

    Sumini, M.; Isolan, L.; Cucchi, G.; Sghedoni, R.; Iori, M.

    2017-11-01

    For an optimal tuning of the radiation protection planning, a Monte Carlo model using the MCNPX code has been built, allowing an accurate estimate of the spectrometric and geometrical characteristics of photoneutrons generated by a Varian TrueBeam Stx© medical linear accelerator. We considered in our study a device working at the reference energy for clinical applications of 15 MV, derived from a Varian Clinac©2100 modeled starting from data collected from several papers available in the literature. The model results were compared with neutron and photon dose measurements inside and outside the bunker hosting the accelerator, obtaining a complete dose map. Normalized neutron fluences were tallied in different positions at the patient plane and at different depths. A sensitivity analysis with respect to the flattening filter material was performed to highlight aspects that could influence the photoneutron production.

  9. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with GARCH-family volatility is supported by the efficient-market hypothesis. The results show that the static evaluation is inferior to the dynamic one, evidencing that the dynamic analysis supports the efficient-market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since they are capable of capturing its strong dynamics.
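    As a rough illustration of the dynamic-VaR idea discussed above, the sketch below estimates one-day Value-at-Risk for a single asset by Monte Carlo under GARCH(1,1) volatility. All parameter values (omega, alpha, beta, the last return and variance) are made-up placeholders, not estimates from the Petrobras/Telemar/Vale data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GARCH(1,1) parameters (omega, alpha, beta) -- illustrative only.
omega, alpha, beta = 1e-6, 0.08, 0.90

def simulate_var(last_return, last_var, horizon=1, n_paths=100_000, level=0.95):
    """Monte Carlo VaR of a single asset under GARCH(1,1) conditional volatility."""
    var_t = omega + alpha * last_return**2 + beta * last_var   # variance forecast
    losses = np.empty(n_paths)
    for i in range(n_paths):
        v, r_total = var_t, 0.0
        for _ in range(horizon):
            r = np.sqrt(v) * rng.standard_normal()             # simulated return
            r_total += r
            v = omega + alpha * r**2 + beta * v                # update variance along the path
        losses[i] = -r_total                                   # loss = negative return
    return np.quantile(losses, level)                          # VaR at the chosen level

print("95% 1-day VaR:", simulate_var(last_return=-0.02, last_var=0.0004))
```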

  10. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    International Nuclear Information System (INIS)

    Merheb, C; Petegnief, Y; Talbot, J N

    2007-01-01

    Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic(TM) animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic(TM) system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18 F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed

  11. Using a Monte Carlo model to predict dosimetric properties of small radiotherapy photon fields

    International Nuclear Information System (INIS)

    Scott, Alison J. D.; Nahum, Alan E.; Fenwick, John D.

    2008-01-01

    Accurate characterization of small-field dosimetry requires measurements to be made with precisely aligned specialized detectors and is thus time consuming and error prone. This work explores measurement differences between detectors by using a Monte Carlo model matched to large-field data to predict properties of smaller fields. Measurements made with a variety of detectors have been compared with calculated results to assess their validity and explore reasons for differences. Unshielded diodes are expected to produce some of the most useful data, as their small sensitive cross sections give good resolution whilst their energy dependence is shown to vary little with depth in a 15 MV linac beam. Their response is shown to be constant with field size over the range 1-10 cm, with a correction of 3% needed for a field size of 0.5 cm. BEAMnrc has been used to create a 15 MV beam model, matched to dosimetric data for square fields larger than 3 cm, and producing small-field profiles and percentage depth doses (PDDs) that agree well with unshielded diode data for field sizes down to 0.5 cm. For field sizes of 1.5 cm and above, little detector-to-detector variation exists in measured output factors; however, for a 0.5 cm field a relative spread of 18% is seen between output factors measured with different detectors, with values measured with the diamond and pinpoint detectors lying below that of the unshielded diode and the shielded diode value lying higher. Relative to the corrected unshielded diode measurement, the Monte Carlo modeled output factor is 4.5% low, a discrepancy that is probably due to the focal spot fluence profile and source occlusion modeling. The large-field Monte Carlo model can, therefore, currently be used to predict small-field profiles and PDDs measured with an unshielded diode. However, determination of output factors for the smallest fields requires a more detailed model of focal spot fluence and source occlusion.

  12. Integrating multiple distribution models to guide conservation efforts of an endangered toad

    Science.gov (United States)

    Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.

    2015-01-01

    Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with the data necessary to develop appropriate distribution models.
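    The record above describes fitting two Random Forest distribution models on different covariate sets and then intersecting their predictions. A minimal sketch of that workflow, using scikit-learn and purely synthetic stand-in covariates (the variable names, thresholds, and data are illustrative assumptions, not the study's actual inputs):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic stand-ins for environmental covariates at candidate sites.
n_sites = 2000
longterm = rng.normal(size=(n_sites, 4))     # e.g. long-term climate/terrain variables (hypothetical)
remote   = rng.normal(size=(n_sites, 3))     # e.g. recent remotely sensed variables (hypothetical)
presence = (longterm[:, 0] + 0.5 * remote[:, 0] + rng.normal(scale=0.5, size=n_sites)) > 1.0

# Model 1: potential habitat from long-term variables only.
potential_rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(longterm, presence)
# Model 2: current habitat from long-term plus remotely sensed variables.
current_rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    np.hstack([longterm, remote]), presence)

p_potential = potential_rf.predict_proba(longterm)[:, 1]
p_current = current_rf.predict_proba(np.hstack([longterm, remote]))[:, 1]

# Sites modeled as suitable in the long term but not under current conditions
# would be candidates for active management.
restoration_candidates = (p_potential > 0.5) & (p_current <= 0.5)
print("candidate sites for active management:", restoration_candidates.sum())
```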

  13. Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations

    Science.gov (United States)

    Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.

    2018-04-01

    One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We show how to estimate the statistical uncertainty given the output of just a single radiative-transfer simulation in which the number of photon packets follows a Poisson distribution and the weight (e.g. energy or luminosity) of a single packet may follow an arbitrary distribution. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for large numbers of packets show that we generalise existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
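    A minimal sketch of the kind of posterior described above, assuming the packet count in a bin is Poisson and, as a simplification not made in the paper, treating the mean packet weight as fixed at its sample value; the Gamma prior shape is an illustrative choice.

```python
import numpy as np

def luminosity_posterior(weights, prior_shape=0.5, n_draws=10_000, rng=None):
    """Posterior draws of the expected binned luminosity from a single MC run.

    Assumes the packet count in the bin is Poisson; the mean packet weight is
    treated as fixed at its sample estimate (a simplifying assumption, not the
    paper's full treatment of arbitrary weight distributions).
    """
    rng = rng or np.random.default_rng(0)
    n = len(weights)
    mean_w = np.mean(weights) if n > 0 else 0.0
    # Gamma(prior_shape) prior for the Poisson rate (prior_shape=0.5 is Jeffreys-like);
    # with unit exposure the posterior of the rate is Gamma(prior_shape + n, scale=1).
    rate_draws = rng.gamma(shape=prior_shape + n, scale=1.0, size=n_draws)
    return rate_draws * mean_w   # posterior of the expected luminosity in the bin

draws = luminosity_posterior(weights=np.array([2.1, 1.8, 2.4]))   # three packets in the bin
print("posterior mean, 68% interval:", draws.mean(), np.percentile(draws, [16, 84]))
```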

  14. Monte Carlo climate change forecasts with a global coupled ocean-atmosphere model

    International Nuclear Information System (INIS)

    Cubasch, U.; Santer, B.D.; Hegerl, G.; Hoeck, H.; Maier-Reimer, E.; Mikolajwicz, U.; Stoessel, A.; Voss, R.

    1992-01-01

    The Monte Carlo approach, which has increasingly been used during the last decade in the field of extended range weather forecasting, has been applied for climate change experiments. Four integrations with a global coupled ocean-atmosphere model have been started from different initial conditions, but with the same greenhouse gas forcing according to the IPCC scenario A. All experiments have been run for a period of 50 years. The results indicate that the time evolution of the global mean warming depends strongly on the initial state of the climate system. It can vary between 6 and 31 years. The Monte Carlo approach delivers information about both the mean response and the statistical significance of the response. While the individual members of the ensemble show a considerable variation in the climate change pattern of temperature after 50 years, the ensemble mean climate change pattern closely resembles the pattern obtained in a 100 year integration and is, at least over most of the land areas, statistically significant. The ensemble averaged sea-level change due to thermal expansion is significant in the global mean and locally over wide regions of the Pacific. The hydrological cycle is also significantly enhanced in the global mean, but locally the changes in precipitation and soil moisture are masked by the variability of the experiments. (orig.)

  15. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution are optimized. The learning process is iteratively performed after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walks. Further applications for modeling of nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)

  16. Measurement and Monte Carlo modeling of the spatial response of scintillation screens

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)

    2007-11-01

    In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.
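    A minimal sketch of a localized random search of the sort described above, here fitting two optical coefficients of a toy forward model; the toy line-spread function below stands in for the full Geant4 scintillator simulation and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def localized_random_search(simulate, measured, x0, step=0.1, n_iter=500):
    """Minimal localized random search over optical parameters (mu_s, mu_a).

    `simulate(params)` is a stand-in for the Monte Carlo model of the screen
    response; here it can be any callable returning a curve comparable to `measured`.
    """
    best_x = np.asarray(x0, dtype=float)
    best_err = np.sum((simulate(best_x) - measured) ** 2)
    for _ in range(n_iter):
        candidate = np.abs(best_x * (1.0 + step * rng.standard_normal(best_x.shape)))
        err = np.sum((simulate(candidate) - measured) ** 2)
        if err < best_err:                        # accept only improvements (greedy local search)
            best_x, best_err = candidate, err
    return best_x, best_err

# Toy forward model standing in for the full simulation (purely illustrative).
def toy_lsf(params, x=np.linspace(-1, 1, 101)):
    mu_s, mu_a = params
    return np.exp(-np.abs(x) * (mu_s + mu_a)) * mu_s / (mu_s + mu_a)

target = toy_lsf((8.0, 1.5))                      # synthetic "measurement"
print(localized_random_search(toy_lsf, target, x0=(5.0, 1.0)))
```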

  17. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
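    For orientation, the sketch below is a standard single-spin-flip Metropolis sampler for the planar Ising model on a finite periodic lattice near criticality; it does not implement the paper's holographic boundary construction, which is the novel part of the method.

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_ising(L=32, beta=0.4406868, n_sweeps=200):
    """Single-spin-flip Metropolis sampler for the 2D Ising model on an L x L
    periodic lattice; beta is chosen close to the critical coupling."""
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb            # energy change of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1                  # accept the flip
    return spins

spins = metropolis_ising()
print("magnetization per site:", spins.mean())
```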

  18. Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    Directory of Open Access Journals (Sweden)

    Oettingen Mikołaj

    2017-01-01

    Full Text Available The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands much advanced research before its industrial application in commercial nuclear reactors can start. The paper presents the development of the thorium-lead (Th-Pb) fuel assembly numerical models for the integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for the numerical simulations with the continuous energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.

  19. Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.

    Science.gov (United States)

    Kinjo, Akira R

    2017-01-01

    A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.

  20. Modeling of ventilation experiment in opalinus clay formation of Mont Terri argillaceous rock tunnel

    International Nuclear Information System (INIS)

    Liu Xiaoyan; Liu Quansheng; Zhang Chengyuan

    2010-01-01

    Deep geological disposal is one of the most realistic methods of nuclear waste disposal, and argillaceous rocks are being considered as potential host rocks for deep geological disposal. Our study starts with performing simulations of a laboratory drying test and a ventilation experiment for the Mont Terri underground laboratory. It is a main interest of D2011, the 5th stage of DECOVALEX. A 3-phase and 3-constituent hydraulic model is introduced to simulate the processes occurring during ventilation, including desaturation/resaturation in the rock, real phase change and the air/rock interface, and to explore the Opalinus Clay parameter set. There is a good agreement with experimental observations and calculation results from other D2011 participant teams. It means that the 3-phase and 2-constituent hydraulic model is accurate enough and it is a good start for full HMC understanding of the ventilation experiment on argillaceous rock. (authors)

  1. A three-dimensional self-learning kinetic Monte Carlo model: application to Ag(111)

    International Nuclear Information System (INIS)

    Latz, Andreas; Brendel, Lothar; Wolf, Dietrich E

    2012-01-01

    The reliability of kinetic Monte Carlo (KMC) simulations depends on accurate transition rates. The self-learning KMC method (Trushin et al 2005 Phys. Rev. B 72 115401) combines the accuracy of rates calculated from a realistic potential with the efficiency of a rate catalog, using a pattern recognition scheme. This work expands the original two-dimensional method to three dimensions. The concomitant huge increase in the number of rate calculations needed on the fly can be avoided by setting up an initial database, containing exact activation energies calculated for processes gathered from a simpler KMC model. To provide two representative examples, the model is applied to the diffusion of Ag monolayer islands on Ag(111), and the homoepitaxial growth of Ag on Ag(111) at low temperatures.
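    A minimal sketch of one rejection-free kinetic Monte Carlo step with a precomputed rate catalog, the basic machinery that self-learning KMC builds on; the process names and rates are made-up examples, not values from the Ag(111) study.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmc_step(processes, rates, time):
    """One rejection-free kinetic Monte Carlo (BKL-type) step.

    `processes` is a list of possible moves and `rates` the corresponding rates,
    e.g. looked up from a precomputed catalog keyed by the local atomic
    environment, as in self-learning KMC.
    """
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    # Select a process with probability proportional to its rate.
    chosen = np.searchsorted(np.cumsum(rates), rng.random() * total)
    # Advance the clock by an exponentially distributed waiting time.
    time += -np.log(rng.random()) / total
    return processes[chosen], time

# Illustrative catalog: hop rates (1/s) for three local environments (made-up numbers).
catalog = {"terrace_hop": 1.0e9, "edge_hop": 5.0e7, "detach": 1.0e4}
move, t = kmc_step(list(catalog.keys()), list(catalog.values()), time=0.0)
print("executed:", move, "new time:", t)
```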

  2. The Effect of the Demand Control and Effort Reward Imbalance Models on the Academic Burnout of Korean Adolescents

    Science.gov (United States)

    Lee, Jayoung; Puig, Ana; Lee, Sang Min

    2012-01-01

    The purpose of this study was to examine the effects of the Demand Control Model (DCM) and the Effort Reward Imbalance Model (ERIM) on academic burnout for Korean students. Specifically, this study identified the effects of the predictor variables based on DCM and ERIM (i.e., demand, control, effort, reward, Demand Control Ratio, Effort Reward…

  3. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the Portfolio-balance model of the exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  4. SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS

    International Nuclear Information System (INIS)

    Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D

    2014-01-01

    Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation in wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in the two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.

  5. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods

  6. Two-dimensional hybrid Monte Carlo–fluid modelling of dc glow discharges: Comparison with fluid models, reliability, and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Eylenceoğlu, E.; Rafatov, I., E-mail: rafatov@metu.edu.tr [Department of Physics, Middle East Technical University, Ankara (Turkey); Kudryavtsev, A. A. [Saint Petersburg State University, St.Petersburg (Russian Federation)

    2015-01-15

    A two-dimensional hybrid Monte Carlo–fluid numerical code is developed and applied to model the dc glow discharge. The model is based on the separation of electrons into two parts: the low energetic (slow) and high energetic (fast) electron groups. Ions and slow electrons are described within the fluid model using the drift-diffusion approximation for particle fluxes. Fast electrons, represented by a suitable number of super-particles emitted from the cathode, are responsible for ionization processes in the discharge volume, which are simulated by the Monte Carlo collision method. The electrostatic field is obtained from the solution of the Poisson equation. The test calculations were carried out for an argon plasma. Main properties of the glow discharge are considered. Current-voltage curves, the electric field reversal phenomenon, and vortex current formation are presented and discussed. The results are compared to those obtained from the simple and extended fluid models. Contrary to reports in the literature, the analysis does not reveal significant advantages of existing hybrid methods over the extended fluid model.

  7. Fundamental Drop Dynamics and Mass Transfer Experiments to Support Solvent Extraction Modeling Efforts

    International Nuclear Information System (INIS)

    Christensen, Kristi; Rutledge, Veronica; Garn, Troy

    2011-01-01

    In support of the Nuclear Energy Advanced Modeling Simulation Safeguards and Separations (NEAMS SafeSep) program, the Idaho National Laboratory (INL) worked in collaboration with Los Alamos National Laboratory (LANL) to further a modeling effort designed to predict mass transfer behavior for selected metal species between individual dispersed drops and a continuous phase in a two phase liquid-liquid extraction (LLE) system. The purpose of the model is to understand the fundamental processes of mass transfer that occur at the drop interface. This fundamental understanding can be extended to support modeling of larger LLE equipment such as mixer settlers, pulse columns, and centrifugal contactors. The work performed at the INL involved gathering the necessary experimental data to support the modeling effort. A custom experimental apparatus was designed and built for performing drop contact experiments to measure mass transfer coefficients as a function of contact time. A high speed digital camera was used in conjunction with the apparatus to measure size, shape, and velocity of the drops. In addition to drop data, the physical properties of the experimental fluids were measured to be used as input data for the model. Physical properties measurements included density, viscosity, surface tension and interfacial tension. Additionally, self diffusion coefficients for the selected metal species in each experimental solution were measured, and the distribution coefficient for the metal partitioning between phases was determined. At the completion of this work, the INL has determined the mass transfer coefficient and a velocity profile for drops rising by buoyancy through a continuous medium under a specific set of experimental conditions. Additionally, a complete set of experimentally determined fluid properties has been obtained. All data will be provided to LANL to support the modeling effort.

  8. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    Monte Carlo Method is one of the most powerful techniques to model different processes, such as polymerization reactions. By this method, without any need to solve moment equations, very detailed information on the structure and properties of polymers is obtained. The number of algorithm repetitions (the selected volumes of the reactor for modelling, which represent the number of initial molecules) is very important in this method, because Monte Carlo calculations are based on the generation of random numbers and the determination of reaction probabilities. In this paper, the initiation reaction was considered alone and the importance of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not big enough, because in that case the selected volume would not be representative of the whole system.
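    A minimal sketch of the point made above: a stochastic simulation of first-order initiator decomposition converges to the analytic survival fraction only when enough molecules are tracked. The rate constant and times below are arbitrary illustrative values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_initiation(n_initiator, k_d=1e-4, t_end=10_000.0):
    """Stochastic (Gillespie-style) simulation of first-order initiator decomposition.

    Returns the simulated fraction of initiator remaining at t_end, to be
    compared with the analytic value exp(-k_d * t_end).  Accuracy improves
    with the number of molecules tracked, which is the abstract's point.
    """
    remaining, t = n_initiator, 0.0
    while remaining > 0:
        rate = k_d * remaining
        t += -np.log(rng.random()) / rate      # time to the next decomposition event
        if t > t_end:
            break
        remaining -= 1
    return remaining / n_initiator

exact = np.exp(-1e-4 * 10_000.0)
for n in (100, 10_000, 100_000):
    print(n, simulate_initiation(n), "exact:", round(exact, 4))
```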

  9. Model of electronic energy relaxation in the test-particle Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Roblin, P.; Rosengard, A. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes d`Enrichissement; Nguyen, T.T. [Compagnie Internationale de Services en Informatique (CISI) - Centre d`Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1994-12-31

    We previously presented a new test-particle Monte Carlo method (1) (which we call PTMC), an iterative method for solving the Boltzmann equation, which is now improved and very well suited to collisional steady gas flows. Here, we apply a statistical method, described by Anderson (2), to treat electronic translational energy transfer by a collisional process, to atomic uranium vapor. For our study, only three levels of its multiple energy states are considered: 0, 620 cm{sup -1}, and an average level grouping the upper levels. After presenting two-dimensional results, we apply this model to the evaporation of uranium by electron bombardment and show that the PTMC results, for given initial electronic temperatures, are in good agreement with experimental radial velocity measurements. (author). 12 refs., 1 fig.

  10. Monte Carlo evidence for the gluon-chain model of QCD string formation

    International Nuclear Information System (INIS)

    Greensite, J.; San Francisco State Univ., CA

    1988-08-01

    The Monte Carlo method is used to calculate the overlaps <Ψ{sub string}|n gluons>, where Ψ{sub string}[A] is the Yang-Mills wavefunctional due to a static quark-antiquark pair, and |n gluons> are orthogonal trial states containing n=0, 1, or 2 gluon operators multiplying the true ground state. The calculation is carried out for SU(2) lattice gauge theory in Coulomb gauge, in D=4 dimensions. It is found that the string state is dominated, at small q anti q separations, by the vacuum ('no-gluon') state, at larger separations by the 1-gluon state, and, at the largest separations attempted, the 2-gluon state begins to dominate. This behavior is in qualitative agreement with the gluon-chain model, which is a large-N{sub colors} motivated theory of QCD string formation. (orig.)

  11. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper intends to present an extension of the constrained-path quantum Monte Carlo approach allowing to reconstruct non-yrast states in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  12. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from the experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.

  13. Calibration of lung counter using a CT model of Torso phantom and Monte Carlo method

    International Nuclear Information System (INIS)

    Zhang Binquan; Ma Jizeng; Yang Duanjie; Liu Liye; Cheng Jianping

    2006-01-01

    Tomography images of a Torso phantom were obtained from a CT scan. The Torso phantom represents the trunk of an adult man who is 170 cm tall and weighs 65 kg. After these images were segmented, cropped, and resized, a 3-dimensional voxel phantom was created. The voxel phantom includes more than 2 million voxels, whose size was 2.73 mm x 2.73 mm x 3 mm. This model could be used for the calibration of a lung counter with the Monte Carlo method. On the assumption that radioactive material was homogeneously distributed throughout the lung, counting efficiencies of an HPGe detector at different positions were calculated as the Adipose Mass Fraction (AMF) in the soft tissue of the chest was varied. The results showed that counting efficiencies of the lung counter changed by up to 67% for the 17.5 keV γ ray and 20% for the 25 keV γ ray when the AMF changed from 0 to 40%. (authors)

  14. Markov Chain Monte Carlo Simulation to Assess Uncertainty in Models of Naturally Deformed Rock

    Science.gov (United States)

    Davis, J. R.; Titus, S.; Giorgis, S. D.; Horsman, E. M.

    2015-12-01

    Field studies in tectonics and structural geology involve many kinds of data, such as foliation-lineation pairs, folded and boudinaged veins, deformed clasts, and lattice preferred orientations. Each data type can inform a model of deformation, for example by excluding certain geometries or constraining model parameters. In past work we have demonstrated how to systematically integrate a wide variety of data types into the computation of best-fit deformations. However, because even the simplest deformation models tend to be highly non-linear in their parameters, evaluating the uncertainty in the best fit has been difficult. In this presentation we describe an approach to rigorously assessing the uncertainty in models of naturally deformed rock. Rather than finding a single vector of parameter values that fits the data best, we use Bayesian Markov chain Monte Carlo methods to generate a large set of vectors of varying fitness. Taken together, these vectors approximate the probability distribution of the parameters given the data. From this distribution, various auxiliary statistical quantities and conclusions can be derived. Further, the relative probability of differing models can be quantified. We apply this approach to two example data sets, from the Gem Lake shear zone and western Idaho shear zone. Our findings address shear zone geometry, magnitude of deformation, strength of field fabric, and relative viscosity of clasts. We compare our model predictions to those of earlier studies.
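    A minimal sketch of the Bayesian MCMC idea described above, using a generic random-walk Metropolis sampler on a toy two-parameter model; the parameter names (shear strain, fabric strength), the synthetic observations, and the priors are illustrative assumptions, not the authors' actual deformation model.

```python
import numpy as np

rng = np.random.default_rng(6)

def metropolis(log_post, x0, step, n_samples=20_000):
    """Random-walk Metropolis sampler; returns draws approximating the posterior."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy example: infer a shear strain gamma and a fabric strength kappa from
# noisy synthetic "field" observations (names and model purely illustrative).
true_gamma = 1.8
obs = true_gamma * np.ones(25) + rng.normal(scale=0.3, size=25)

def log_post(params):
    gamma, kappa = params
    if kappa <= 0:
        return -np.inf                               # flat prior on gamma, kappa constrained positive
    return -0.5 * np.sum((obs - gamma) ** 2) / 0.3**2 - 0.5 * (kappa - 3.0) ** 2

chain = metropolis(log_post, x0=[1.0, 1.0], step=0.1)
print("posterior mean and std of gamma:", chain[5000:, 0].mean(), chain[5000:, 0].std())
```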

  15. Revisiting the hybrid quantum Monte Carlo method for Hubbard and electron-phonon models

    Science.gov (United States)

    Beyl, Stefan; Goth, Florian; Assaad, Fakher F.

    2018-02-01

    A unique feature of the hybrid quantum Monte Carlo (HQMC) method is the potential to simulate negative sign free lattice fermion models with subcubic scaling in system size. Here we will revisit the algorithm for various models. We will show that for the Hubbard model the HQMC suffers from ergodicity issues and unbounded forces in the effective action. Solutions to these issues can be found in terms of a complexification of the auxiliary fields. This implementation of the HQMC that does not attempt to regularize the fermionic matrix so as to circumvent the aforementioned singularities does not outperform single spin flip determinantal methods with cubic scaling. On the other hand we will argue that there is a set of models for which the HQMC is very efficient. This class is characterized by effective actions free of singularities. Using the Majorana representation, we show that models such as the Su-Schrieffer-Heeger Hamiltonian at half filling and on a bipartite lattice belong to this class. For this specific model subcubic scaling is achieved.

  16. Bayesian parameter estimation in dynamic population model via particle Markov chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Meng Gao

    2012-12-01

    Full Text Available In nature, population dynamics are subject to multiple sources of stochasticity. State-space models (SSMs) provide an ideal framework for incorporating both environmental noises and measurement errors into dynamic population models. In this paper, we present a recently developed method, Particle Markov Chain Monte Carlo (Particle MCMC), for parameter estimation in nonlinear SSMs. We use one effective Particle MCMC algorithm, the Particle Gibbs sampling algorithm, to estimate the parameters of a state-space model of population dynamics. The posterior distributions of parameters are derived given the conjugate prior distribution. Numerical simulations showed that the model parameters can be accurately estimated, no matter whether the deterministic model is stable, periodic or chaotic. Moreover, we fit the model to 16 representative time series from the Global Population Dynamics Database (GPDD). It is verified that the results of parameter and state estimation using the Particle Gibbs sampling algorithm are satisfactory for a majority of the time series. For other time series, the quality of parameter estimation can also be improved, if prior knowledge is constrained. In conclusion, the Particle Gibbs sampling algorithm provides a new Bayesian parameter inference method for studying population dynamics.
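    Particle MCMC builds on sequential Monte Carlo state estimation; the sketch below shows only that building block, a bootstrap particle filter for a stochastic Ricker-type population model (particle Gibbs adds conditional resampling and parameter updates, omitted here). The model form, parameter values, and observation series are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(10)

def bootstrap_particle_filter(obs, n_particles=500, sigma_proc=0.1, sigma_obs=0.2, r=1.5, K=10.0):
    """Bootstrap particle filter for a stochastic Ricker-type population model
    with log-normal observation error; returns filtered means and log-likelihood."""
    particles = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n_particles)
    log_lik = 0.0
    filtered_mean = []
    for y in obs:
        # Propagate particles through the stochastic population dynamics.
        particles = particles * np.exp(r * (1 - particles / K)
                                       + sigma_proc * rng.standard_normal(n_particles))
        # Weight by the log-normal measurement density.
        w = (np.exp(-0.5 * ((np.log(y) - np.log(particles)) / sigma_obs) ** 2)
             / (y * sigma_obs * np.sqrt(2 * np.pi)))
        log_lik += np.log(w.mean() + 1e-300)
        w /= w.sum()
        filtered_mean.append(np.sum(w * particles))
        # Multinomial resampling.
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(filtered_mean), log_lik

obs = np.array([5.2, 7.9, 9.6, 10.4, 9.8, 10.1])       # made-up abundance series
means, ll = bootstrap_particle_filter(obs)
print("filtered means:", np.round(means, 2), "log-likelihood:", round(ll, 2))
```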

  17. Evaluation of Arroyo Channel Restoration Efforts using Hydrological Modeling: Rancho San Bernardino, Sonora, MX

    Science.gov (United States)

    Jemison, N. E.; DeLong, S.; Henderson, W. M.; Adams, J.

    2012-12-01

    In the drylands of the southwestern U.S. and northwestern Mexico, historical river channel incision (arroyo cutting) has led to the destruction of riparian ecological systems and cieñega wetlands in many locations. Along Silver Creek on the Arizona-Sonora border, the Cuenca Los Ojos Foundation has been installing rock gabions and concrete and earthen berms with a goal of slowing flash floods, raising groundwater levels, and refilling arroyo channels with sediment in an area that changed from a broad, perennially wet cieñega to a narrow sand- and gravel-dominated arroyo channel with an average depth of ~6 m. The engineering efforts hope to restore desert wetlands, regrow riparian vegetation, and promote sediment deposition along the arroyo floor. Hydrological modeling allows us to predict how rare flood events interact with the restoration efforts and may guide future approaches to dryland ecological restoration. This modeling is complemented by detailed topographic surveying and use of streamflow sensors to monitor hydrological processes in the restoration project. We evaluate the inundation associated with model 10-, 50-, 100-, 500-, and 1,000-year floods through the study area using FLO-2D and HEC-RAS modeling environments in order to evaluate the possibility of returning surface inundation to the former cieñega surface. According to HEC-RAS model predictions, given current channel configuration, it would require a 500-year flood to overtop the channel banks and reinundate the cieñega (now terrace) surface, though the 100-year flood may lead to limited terrace surface inundation. Based on our models, 10-year floods were ~2 m from overtopping the arroyo walls, 50-year floods came ~1.5 m from overtopping the arroyos, 100-year floods were ~1.2 m from overtopping, and 500- and 1,000-year floods at least partially inundated the cieñega surface. The current topography of Silver Creek does not allow for frequent flooding of the former cieñega; model predictions

  18. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.

  19. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard‐Jones (L‐J) and Buckingham exponential‐6 (exp‐6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed an elegant agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L‐J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications on more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
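    A minimal sketch of a canonical-ensemble Metropolis Monte Carlo simulation of a small Lennard-Jones fluid in reduced units, the kind of calculation used above to generate supercritical isotherm points; the system size, density, and temperature are arbitrary illustrative choices, and the Gibbs-ensemble and exp-6 variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def lj_energy(pos, box, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy with minimum-image periodic boundaries."""
    e = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum image convention
        r2 = np.sum(d * d, axis=1)
        sr6 = (sigma * sigma / r2) ** 3
        e += np.sum(4.0 * eps * (sr6 * sr6 - sr6))
    return e

def nvt_mc(n=64, rho=0.5, T=2.0, n_moves=2000, dmax=0.2):
    """Canonical (NVT) Metropolis Monte Carlo for a small LJ fluid (reduced units)."""
    box = (n / rho) ** (1.0 / 3.0)
    m = int(np.ceil(n ** (1.0 / 3.0)))
    grid = np.array([(i, j, k) for i in range(m) for j in range(m) for k in range(m)])[:n]
    pos = (grid + 0.5) * box / m                  # start on a lattice to avoid overlaps
    e = lj_energy(pos, box)
    for _ in range(n_moves):
        i = rng.integers(n)
        old = pos[i].copy()
        pos[i] = (pos[i] + dmax * (rng.random(3) - 0.5)) % box
        e_new = lj_energy(pos, box)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / T):
            e = e_new                             # accept the trial move
        else:
            pos[i] = old                          # reject and restore the old position
    return e / n

print("potential energy per particle:", nvt_mc())
```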

  20. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    The attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization, which was proposed by the authors, is a more practical method, and it has become known that it can solve the problem with sufficiently good accuracy. As to the treatment of angular momentum, in the method of the authors a deformed Slater determinant is used as the basis; therefore, a projection operator is used to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the energy of the many-body state formed from the resulting basis is evaluated and selectively adopted. The symmetry is discussed, and a method of decomposing the shell model space into the dynamically determined space and the product of spin and isospin spaces was devised. The calculation process is illustrated with the example of {sup 50}Mn nuclei. The calculation of the level structure of {sup 48}Cr, for which the exact energies are known, can be done with the absolute energy values accurate to within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of the shell model calculation of the {sup 56}Ni nucleus structure using the interactions of nuclear models are reported. (K.I.)

  1. Rapid creation, Monte Carlo simulation, and visualization of realistic 3D cell models.

    Science.gov (United States)

    Czech, Jacob; Dittrich, Markus; Stiles, Joel R

    2009-01-01

    Spatially realistic diffusion-reaction simulations supplement traditional experiments and provide testable hypotheses for complex physiological systems. To date, however, the creation of realistic 3D cell models has been difficult and time-consuming, typically involving hand reconstruction from electron microscopic images. Here, we present a complementary approach that is much simpler and faster, because the cell architecture (geometry) is created directly in silico using 3D modeling software like that used for commercial film animations. We show how a freely available open source program (Blender) can be used to create the model geometry, which then can be read by our Monte Carlo simulation and visualization softwares (MCell and DReAMM, respectively). This new workflow allows rapid prototyping and development of realistic computational models, and thus should dramatically accelerate their use by a wide variety of computational and experimental investigators. Using two self-contained examples based on synaptic transmission, we illustrate the creation of 3D cellular geometry with Blender, addition of molecules, reactions, and other run-time conditions using MCell's Model Description Language (MDL), and subsequent MCell simulations and DReAMM visualizations. In the first example, we simulate calcium influx through voltage-gated channels localized on a presynaptic bouton, with subsequent intracellular calcium diffusion and binding to sites on synaptic vesicles. In the second example, we simulate neurotransmitter release from synaptic vesicles as they fuse with the presynaptic membrane, subsequent transmitter diffusion into the synaptic cleft, and binding to postsynaptic receptors on a dendritic spine.

  2. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body height dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate more into network hubs; backward migration using the same number of individuals was random, not biased towards body height. Network hubs were defined by the importance of a district within the spatial network. The importance of a district was evaluated by various centrality measures. In the null model there were no road connections, so height information could not be delivered between the districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later

  3. Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods

    Science.gov (United States)

    Hurst, T.; Smith, W. D.; Bibby, H. M.

    2003-12-01

    We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
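    A minimal sketch of the Monte Carlo exceedance calculation described above, with a deliberately crude thickness model standing in for ASHFALL; the volume and wind distributions, the scaling constant, and the eruption rate are illustrative assumptions, not values from the New Zealand study.

```python
import numpy as np

rng = np.random.default_rng(8)

def ashfall_exceedance(n_sims=50_000, thickness_mm=10.0, annual_rate=1 / 1000.0):
    """Monte Carlo estimate of the annual probability of exceeding a given ash
    thickness at a site, for a single hypothetical volcano.

    The deposit model is a crude stand-in for ASHFALL: thickness scales with
    erupted volume and is modulated by a wind factor toward the site.
    """
    # Sample eruptive volume (km^3) from a log-normal and a wind factor from a
    # uniform distribution (both distributions are illustrative assumptions).
    volume = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=n_sims)
    wind = rng.uniform(0.1, 1.0, size=n_sims)          # fraction of plume advected toward the site
    thickness = 50.0 * volume * wind                    # mm of ash at the site per eruption
    p_exceed_per_eruption = np.mean(thickness > thickness_mm)
    # Combine with the eruption frequency to get an annual exceedance probability.
    return 1.0 - np.exp(-annual_rate * p_exceed_per_eruption)

p = ashfall_exceedance()
print(f"annual exceedance probability: {p:.2e}, mean return period: {1 / p:.0f} yr")
```

    Aggregation over several volcanoes would follow the same pattern, summing the per-source annual rates of exceedance before converting to a probability.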

  4. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    International Nuclear Information System (INIS)

    Parsons, Neal; Levin, Deborah A.; Duin, Adri C. T. van; Zhu, Tong

    2014-01-01

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N{sub 2}({sup 1}Σ{sub g}{sup +})-N{sub 2}({sup 1}Σ{sub g}{sup +}) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  5. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Neal, E-mail: neal.parsons@cd-adapco.com; Levin, Deborah A., E-mail: deblevin@illinois.edu [Department of Aerospace Engineering, The Pennsylvania State University, 233 Hammond Building, University Park, Pennsylvania 16802 (United States); Duin, Adri C. T. van, E-mail: acv13@engr.psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States); Zhu, Tong, E-mail: tvz5037@psu.edu [Department of Aerospace Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States)

    2014-12-21

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N₂(¹Σg⁺)-N₂(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
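
    The "established variable hard sphere total cross sections" used as a reference above follow a simple power law in relative speed, which can be written as a short function; the reference cross section, reference speed and viscosity exponent below are rough N2-like placeholders rather than values from the paper.

        import numpy as np

        def vhs_total_cross_section(c_r, sigma_ref=5.5e-19, c_ref=300.0, omega=0.74):
            """VHS total cross section (m^2): sigma = sigma_ref * (c_ref / c_r)**(2*omega - 1).
            sigma_ref, c_ref and omega are rough N2-like placeholders, not fitted values."""
            return sigma_ref * (c_ref / np.asarray(c_r)) ** (2.0 * omega - 1.0)

        # In a DSMC collision step (no-time-counter scheme), candidate pairs are accepted
        # with probability proportional to sigma_T * c_r:
        rng = np.random.default_rng(1)
        c_r = rng.uniform(500.0, 8000.0, size=10)       # relative speeds (m/s)
        sigma_cr = vhs_total_cross_section(c_r) * c_r
        accept = rng.random(10) < sigma_cr / sigma_cr.max()
        print(accept)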

  6. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations

    Science.gov (United States)

    Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.

    2010-11-01

    We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al. [1] to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI; the energy deposition outside of the VOI is not computed or recorded. After discussing the selection of tissue parameters, we apply the model to analysis of blood photocoagulation and collateral thermal damage in treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.

  7. Kinetic Monte Carlo Potts Model for Simulating a High Burnup Structure in UO2

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    A Potts model, based on the kinetic Monte Carlo method, was originally developed for magnetic domain evolution, but it was also proposed as a model for grain growth in polycrystals due to similarities between Potts domain structures and grain structures. It has been used to model various microstructural phenomena such as grain growth, recrystallization, and sintering. A high burnup structure (HBS) is observed in the periphery of high burnup UO2 fuel. Although its formation mechanism is not yet clearly understood, its characteristics are well recognized: the HBS microstructure consists of very small grains and large bubbles instead of the original as-sintered grains. A threshold burnup for the HBS is observed at a local burnup of 60-80 GWd/tM, and the threshold temperature is 1000-1200 °C. Concerning energy stability, the HBS can be created if the system energy of the HBS is lower than that of the original structure in irradiated UO2. In this paper, a Potts model was implemented for simulating the HBS by calculating system energies, and the simulation results were compared with the HBS characteristics mentioned above.
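
    The system-energy bookkeeping of a Potts grain model of this kind reduces to counting unlike-orientation nearest-neighbour bonds and accepting or rejecting trial re-orientations. The sketch below is a generic Metropolis-style Potts loop with arbitrary lattice size, number of orientations and temperature; it omits bubbles and all HBS-specific physics.

        import numpy as np

        rng = np.random.default_rng(0)
        L, Q, kT = 64, 32, 0.5                    # lattice size, grain orientations, temperature (J = 1)
        spins = rng.integers(0, Q, size=(L, L))   # each site carries a grain-orientation label

        def site_energy(s, i, j):
            """Potts energy of one site: +1 per unlike nearest neighbour."""
            nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
            return sum(n != s[i, j] for n in nbrs)

        for _ in range(200_000):                  # Metropolis trial re-orientations
            i, j = rng.integers(0, L, size=2)
            old = spins[i, j]
            e_old = site_energy(spins, i, j)
            spins[i, j] = rng.integers(0, Q)      # propose a new orientation
            e_new = site_energy(spins, i, j)
            if e_new > e_old and rng.random() >= np.exp(-(e_new - e_old) / kT):
                spins[i, j] = old                 # reject: restore the old orientation

        bonds = sum(site_energy(spins, i, j) for i in range(L) for j in range(L)) // 2
        print("unlike-neighbour (grain-boundary) bonds:", bonds)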

  8. Optimization of a Monte Carlo Model of the Transient Reactor Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kristin; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that do not have representations which can be easily evaluated, while minimizing the amount of computational resources. With the advances during the last twenty years in large-scale computing centers, researchers have been able to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.

  9. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are consistent with traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  10. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in the speed of convergence and the quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning based optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, which is a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. There are a large number of methods for effort estimation, of which COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized yet. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. The experiments were conducted on the NASA software project dataset, and the obtained results indicated that the improvement of parameters provided better estimation capabilities compared to the original COCOMO II model.
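
    For context, the COCOMO II post-architecture effort equation whose parameters such hybrids tune has the form PM = A · Size^E · ∏ EM_i with E = B + 0.01 · ΣSF_j. The sketch below uses the published nominal constants A = 2.94 and B = 0.91; the scale-factor and effort-multiplier values are illustrative only.

        def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
            """COCOMO II post-architecture effort in person-months. A and B are the nominal
            calibration constants that parameter-tuning approaches adjust against project data."""
            E = B + 0.01 * sum(scale_factors)       # scale exponent from the five scale factors
            pm = A * ksloc ** E
            for em in effort_multipliers:           # seventeen effort multipliers in the full model
                pm *= em
            return pm

        # Illustrative project: 40 KSLOC, roughly nominal scale factors, a few multipliers.
        print(round(cocomo2_effort(40, [3.72, 3.04, 4.24, 3.29, 4.68], [1.10, 0.95, 1.00]), 1))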

  11. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  12. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
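
    The core of a Bayesian Monte Carlo mixing calculation can be reduced to a few lines for the simplest case of two end-members and one isotope ratio: draw end-member values from their uncertainty distributions, draw candidate fractions from a prior, and keep the draws consistent with the measured mixture. All numbers below are invented; the published model handles more isotopes, more sources and a proper likelihood.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000

        # Hypothetical end-member d18O values (permil) with 1-sigma uncertainties.
        snow = rng.normal(-22.0, 1.5, n)           # end-member A (e.g. snow melt)
        ice  = rng.normal(-18.0, 1.0, n)           # end-member B (e.g. glacial ice melt)
        mix_obs, mix_sd = -19.5, 0.3               # measured bulk sample

        f = rng.uniform(0.0, 1.0, n)               # prior on the fraction of end-member A
        predicted = f * snow + (1.0 - f) * ice
        keep = np.abs(predicted - mix_obs) < 2.0 * mix_sd    # crude acceptance window

        posterior = f[keep]
        print(f"fraction A: median {np.median(posterior):.2f}, "
              f"95% interval [{np.percentile(posterior, 2.5):.2f}, {np.percentile(posterior, 97.5):.2f}]")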

  13. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report, the creation of a digital elevation model of the Olkiluoto area, incorporating a large area of seabed, is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, such as contour lines and irregular elevation measurements, were used as source data in the process. The precision and reliability of the available source data varied considerably. A digital elevation model (DEM) is a digital representation of the elevation of the earth's surface in a particular area. A DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data. A DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points were removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and evolution of the landscape over a time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
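
    The uncertainty-propagation workflow described (perturb each source point within its own distribution, re-interpolate, and collect the spread of the resulting surfaces) can be sketched as follows. The point data, per-point errors and the use of SciPy's generic radial-basis interpolator with a thin-plate kernel are illustrative stand-ins for the actual Olkiluoto data and software.

        import numpy as np
        from scipy.interpolate import Rbf

        rng = np.random.default_rng(7)

        # Hypothetical elevation points (x, y, z) with per-point 1-sigma errors (m).
        x = rng.uniform(0, 1000, 40)
        y = rng.uniform(0, 1000, 40)
        z = 5.0 + 0.01 * x - 0.005 * y + rng.normal(0, 0.5, 40)
        sigma = rng.uniform(0.1, 2.0, 40)          # less reliable sources get larger errors

        gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))

        realizations = []
        for _ in range(100):                       # the report used 1 000 draws; fewer here for speed
            z_draw = z + rng.normal(0, sigma)      # perturb each point within its own distribution
            tps = Rbf(x, y, z_draw, function="thin_plate")
            realizations.append(tps(gx, gy))

        stack = np.array(realizations)
        dem_mean, dem_std = stack.mean(axis=0), stack.std(axis=0)   # surface and confidence band
        print(dem_mean.shape, float(dem_std.max()))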

  14. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F²-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS

  15. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression.

    Science.gov (United States)

    Walker, Jeffrey A

    2016-01-01

    Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F²-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS distributions suggest that the GLS results in

  16. Monte Carlo simulation of atomic short range order and cluster formation in two dimensional model alloys

    International Nuclear Information System (INIS)

    Rojas T, J.; Instituto Peruano de Energia Nuclear, Lima; Manrique C, E.; Torres T, E.

    2002-01-01

    Using Monte Carlo simulation, an atomistic description of the structure and ordering processes in the Cu-Au system has been carried out in a two-dimensional model. The ABV model of the alloy is a system of N atoms A and B located on a rigid lattice with some vacant sites. In the model we assume pairwise interactions between nearest neighbors with a constant ordering energy J = 0.03 eV. The dynamics was introduced by means of a vacancy that exchanges places with any of its neighboring atoms. The simulations were carried out on a square lattice with 1024 and 4096 particles, using periodic boundary conditions to avoid border effects. We calculate the first two Warren-Cowley short-range order parameters as a function of concentration and temperature. The probabilities of formation of different atomic clusters consisting of 9 atoms were also studied as a function of the alloy concentration and temperature over a wide range of values. In some regions of temperature and concentration, compositional and thermal polymorphism was observed.
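
    A minimal version of such a vacancy-driven lattice Monte Carlo, including the first Warren-Cowley parameter, might look like the sketch below; the lattice size, step count and temperature are arbitrary, and only a 50/50 composition is shown.

        import numpy as np

        rng = np.random.default_rng(3)
        L, J, kT = 32, 0.03, 0.025                  # lattice size, ordering energy (eV), kT (eV)
        lattice = rng.integers(0, 2, size=(L, L))   # 0 = A atom, 1 = B atom
        vac = (L // 2, L // 2)                      # one vacancy
        lattice[vac] = -1
        moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

        def bond_energy(s, i, j):
            """Unlike A-B nearest-neighbour bonds of the atom at (i, j), times J (the vacancy is ignored)."""
            if s[i, j] < 0:
                return 0.0
            return J * sum(1 for di, dj in moves
                           if s[(i + di) % L, (j + dj) % L] >= 0
                           and s[(i + di) % L, (j + dj) % L] != s[i, j])

        for _ in range(100_000):                    # vacancy-exchange Monte Carlo
            di, dj = moves[rng.integers(4)]
            tgt = ((vac[0] + di) % L, (vac[1] + dj) % L)
            e_old = bond_energy(lattice, *tgt)                  # atom's bonds before the jump
            lattice[vac], lattice[tgt] = lattice[tgt], lattice[vac]
            e_new = bond_energy(lattice, *vac)                  # same atom's bonds after the jump
            if e_new > e_old and rng.random() >= np.exp(-(e_new - e_old) / kT):
                lattice[vac], lattice[tgt] = lattice[tgt], lattice[vac]   # reject the swap
            else:
                vac = tgt

        # First Warren-Cowley parameter: alpha1 = 1 - P(B neighbour of A) / c_B
        a_sites = np.argwhere(lattice == 0)
        c_b = (lattice == 1).sum() / (L * L - 1)
        p_ab = np.mean([np.mean([lattice[(i + d) % L, (j + e) % L] == 1 for d, e in moves])
                        for i, j in a_sites])
        print("alpha_1 ~", round(1.0 - p_ab / c_b, 3))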

  17. Modeling Monte Carlo of multileaf collimators using the code GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the production of the radiation beam by the linac is simulated and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the transport of particles (sampled from the phase space) through a given irradiation field configuration is simulated to assess the dose distribution in the patient (or phantom). Accurate modeling of the linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  18. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

    Full Text Available Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run time, since the pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performances represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro's equations. The code is coupled with a Monte Carlo algorithm to evaluate the statistics and propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up for the reproduction of a small-scale rocket motor, and a set of parametric investigations on uncertainty propagation across the ballistic model is discussed.
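
    The Monte Carlo propagation of design uncertainties to a performance parameter can be condensed to sampling the uncertain inputs and pushing each sample through a ballistic relation. The sketch below uses the textbook steady-state chamber-pressure relation p_c = (a·ρ_p·c*·A_b/A_t)^(1/(1-n)) rather than the paper's quasi-1D model, and all input distributions are invented.

        import numpy as np

        rng = np.random.default_rng(11)
        N = 50_000

        # Uncertain inputs (mean, 1-sigma), all illustrative:
        a      = rng.normal(5.0e-5, 1.5e-6, N)   # burn-rate coefficient (m/s at 1 Pa^n)
        n      = rng.normal(0.35, 0.01, N)       # burn-rate pressure exponent
        rho_p  = rng.normal(1750.0, 20.0, N)     # propellant density (kg/m^3)
        c_star = rng.normal(1550.0, 15.0, N)     # characteristic velocity (m/s)
        Ab_At  = rng.normal(220.0, 5.0, N)       # burning-area to throat-area ratio

        # Steady-state chamber pressure: p_c = (a * rho_p * c* * Ab/At)^(1/(1-n))
        p_c = (a * rho_p * c_star * Ab_At) ** (1.0 / (1.0 - n))

        print(f"p_c mean = {p_c.mean() / 1e6:.2f} MPa, "
              f"coefficient of variation = {100 * p_c.std() / p_c.mean():.1f}%")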

  19. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    Science.gov (United States)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

    Ebola virus infection is a severe infectious disease with an extremely high case fatality rate, and it has become a global public health threat. What makes the disease particularly serious is that no specific effective treatment is available and its dynamics are not well researched or understood. In this article, a new mathematical model incorporating both vaccination and quarantine to study the dynamics of an Ebola epidemic has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is verified and the basic reproduction number is calculated. Besides, stability conditions are checked and finally simulation is carried out using both the Euler method and one of the most influential algorithms, the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination and quarantine are discussed to predict their effect on the infected population over time. The results show that quarantine and vaccination are very effective ways to control an Ebola epidemic. Our study also indicates that an individual who survives a first infection is less likely to contract the Ebola virus a second time. Finally, real data have been fitted to the model, showing that it can be used to predict the dynamics of an Ebola epidemic.
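
    The deterministic core of such a compartmental model, before any MCMC parameter fitting, amounts to integrating a system of rate equations. The sketch below uses a generic SEIR structure extended with constant vaccination and quarantine rates; the structure and every rate constant are illustrative assumptions, not the model or parameters of the paper.

        def seir_vq(days=300, dt=0.1, beta=0.3, sigma=1/9, gamma=1/10,
                    v=0.01, q=0.05, N=1_000_000):
            """Forward-Euler integration of an SEIR model with vaccination (v) and
            quarantine (q) rates; all parameter values are purely illustrative."""
            S, E, I, Q, R = N - 10.0, 0.0, 10.0, 0.0, 0.0
            peak = I
            for _ in range(int(days / dt)):
                new_inf = beta * S * I / N
                dS = -new_inf - v * S
                dE = new_inf - sigma * E
                dI = sigma * E - gamma * I - q * I
                dQ = q * I - gamma * Q
                dR = gamma * (I + Q) + v * S
                S, E, I, Q, R = (S + dt * dS, E + dt * dE, I + dt * dI,
                                 Q + dt * dQ, R + dt * dR)
                peak = max(peak, I)
            return peak

        print("peak infectious, no quarantine  :", int(seir_vq(q=0.0)))
        print("peak infectious, with quarantine:", int(seir_vq(q=0.05)))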

  20. Monte-Carlo modelling to determine optimum filter choices for sub-microsecond optical pyrometry

    Science.gov (United States)

    Ota, Thomas A.; Chapman, David J.; Eakins, Daniel E.

    2017-04-01

    When designing a spectral-band pyrometer for use at high time resolutions (sub-μs), there is ambiguity regarding the optimum characteristics for a spectral filter(s). In particular, while prior work has discussed uncertainties in spectral-band pyrometry, there has been little discussion of the effects of noise which is an important consideration in time-resolved, high speed experiments. Using a Monte-Carlo process to simulate the effects of noise, a model of collection from a black body has been developed to give insights into the optimum choices for centre wavelength and passband width. The model was validated and then used to explore the effects of centre wavelength and passband width on measurement uncertainty. This reveals a transition centre wavelength below which uncertainties in calculated temperature are high. To further investigate system performance, simultaneous variation of the centre wavelength and bandpass width of a filter is investigated. Using data reduction, the effects of temperature and noise levels are illustrated and an empirical approximation is determined. The results presented show that filter choice can significantly affect instrument performance and, while best practice requires detailed modelling to achieve optimal performance, the expression presented can be used to aid filter selection.

  1. Two electric field Monte Carlo models of coherent backscattering of polarized light.

    Science.gov (United States)

    Doronin, Alexander; Radosevich, Andrew J; Backman, Vadim; Meglinski, Igor

    2014-11-01

    Modeling of coherent polarized light propagation in turbid scattering medium by the Monte Carlo method provides an ultimate understanding of coherent effects of multiple scattering, such as enhancement of coherent backscattering and peculiarities of laser speckle formation in dynamic light scattering (DLS) and optical coherence tomography (OCT) diagnostic modalities. In this report, we consider two major ways of modeling the coherent polarized light propagation in scattering tissue-like turbid media. The first approach is based on tracking transformations of the electric field along the ray propagation. The second one is developed in analogy to the iterative procedure of the solution of the Bethe-Salpeter equation. To achieve a higher accuracy in the results and to speed up the modeling, both codes utilize the implementation of parallel computing on NVIDIA Graphics Processing Units (GPUs) with Compute Unified Device Architecture (CUDA). We compare these two approaches through simulations of the enhancement of coherent backscattering of polarized light and evaluate the accuracy of each technique with the results of a known analytical solution. The advantages and disadvantages of each computational approach and their further developments are discussed. Both codes are available online and are ready for immediate use or download.

  2. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
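
    The difference between cut-set quantification with the rare-event approximation and direct Monte Carlo sampling of basic events can be seen on a toy two-unit example with one shared high-probability (seismic-like) failure; the fault-tree structure and probabilities below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        N = 1_000_000

        # Toy two-unit site: a shared failure with high probability, plus unit-specific failures;
        # core damage in a unit requires the shared event AND its own event.
        p_shared, p_unit = 0.4, 0.3
        shared = rng.random(N) < p_shared
        unit1 = shared & (rng.random(N) < p_unit)
        unit2 = shared & (rng.random(N) < p_unit)

        site_cd = unit1 | unit2                            # site-level core damage (either unit)
        print("Monte Carlo estimate     :", site_cd.mean())
        print("Rare-event approximation :", 2 * p_shared * p_unit)              # sum of cut sets
        print("Exact value              :", p_shared * (1 - (1 - p_unit) ** 2))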

  3. Restricted primitive model for electrical double layers: modified HNC theory of density profiles and Monte Carlo study of differential capacitance

    International Nuclear Information System (INIS)

    Ballone, P.; Pastore, G.; Tosi, M.P.

    1986-02-01

    Interfacial properties of an ionic fluid next to a uniformly charged planar wall are studied in the restricted primitive model by both theoretical and Monte Carlo methods. The system is a 1:1 fluid of equisized charged hard spheres in a state appropriate to 1M aqueous electrolyte solutions. The interfacial density profiles of counterions and coions are evaluated by extending the hypernetted chain approximation (HNC) to include the leading bridge diagrams for the wall-ion correlations. The theoretical results compare well with those of grand canonical Monte Carlo computations of Torrie and Valleau over the whole range of surface charge density considered by these authors, thus resolving the earlier disagreement between statistical mechanical theories and simulation data at large charge densities. In view of the importance of the model as a testing ground for theories of the diffuse layer, the Monte Carlo calculations are tested by considering alternative choices for the basic simulation cell and are extended so as to allow an evaluation of the differential capacitance of the model interface by two independent methods. These involve numerical differentiation of the mean potential drop as a function of the surface charge density or alternatively an appropriate use of a fluctuation theory formula for the capacitance. The results of these two Monte Carlo approaches consistently indicate an initially smooth increase of the diffuse layer capacitance followed by structure at large charge densities, this behaviour being connected with layering of counterions as already revealed in the density profiles reported by Torrie and Valleau. (author)

  4. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  5. A Monte-Carlo Model for Microstructure-Induced Ultrasonic Signal Fluctuations in Titanium Alloy Inspections

    International Nuclear Information System (INIS)

    Yu Linxiao; Thompson, R.B.; Margetan, F.J.; Wang Yurong

    2004-01-01

    In ultrasonic inspections of some jet-engine alloys, microstructural inhomogeneities act to significantly distort the amplitude and phase profiles of the incident sonic beam, and these distortions lead in turn to ultrasonic amplitude variations. For example, in pulse/echo inspections the back-wall signal amplitude is often seen to fluctuate dramatically when scanning a transducer parallel to a flat specimen. The stochastic nature of the ultrasonic response has obvious implications for both flaw characterization and probability of detection, and tools to estimate fluctuation levels are needed. In this study, as a first step, we develop a quantitative Monte-Carlo model to predict the back-wall amplitude fluctuations seen in ultrasonic pulse/echo inspections. Inputs to the model include statistical descriptions of various beam distortion effects, namely: the lateral 'drift' of the center-of-energy about its expected position; the distortion of pressure amplitude about its expected pattern; and two types of wave-front distortion ('wrinkling' and 'tilting'). The model inputs are deduced by analyzing through-transmission measurements in which the sonic beam emerging from an immersed metal specimen is mapped using a small receiver. The mapped field is compared to the model prediction for a homogeneous metal, and statistical parameters describing the differences are deduced using the technique of 'maximum likelihood estimation' (MLE). Our modeling approach is demonstrated using rectangular coupons of jet-engine Titanium alloys, and predicted back-wall fluctuation levels are shown to be in good agreement with experiment. As a new way of modeling ultrasonic signal fluctuations, the approach outlined in this paper suggests many possibilities for future research

  6. Neutron and gamma sensitivities of self-powered detectors: Monte Carlo modelling

    International Nuclear Information System (INIS)

    Vermeeren, Ludo

    2015-01-01

    This paper deals with the development of a detailed Monte Carlo approach for the calculation of the absolute neutron sensitivity of SPNDs, which makes use of the MCNP code. We will explain the calculation approach, including the activation and beta emission steps, the gamma-electron interactions, the charge deposition in the various detector parts and the effect of the space charge field in the insulator. The model can also be applied for the calculation of the gamma sensitivity of self-powered detectors and for the radiation-induced currents in signal cables. The model yields detailed information on the various contributions to the sensor currents, with distinct response times. Results for the neutron sensitivity of various types of SPNDs are in excellent agreement with experimental data obtained at the BR2 research reactor. For typical neutron to gamma flux ratios, the calculated gamma induced SPND currents are significantly lower than the neutron induced currents. The gamma sensitivity depends very strongly upon the immediate detector surroundings and on the gamma spectrum. Our calculation method opens the way to a reliable on-line determination of the absolute in-pile thermal neutron flux. (authors)

  7. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, an excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.

  8. Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah

    2013-01-01

    Full-text: The aim of this research is to simulate the effective use of a general purpose irradiation chamber to contain pure neutron particles obtained from a research reactor. The secondary neutron and gamma dose emerging from the chamber layers will be used as a basis to estimate the safe dimensions of the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial grade aluminium (Al) cladding, is proposed for exposing samples to pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation with the general-purpose Monte Carlo N-Particle transport code, using the Los Alamos MCNPX software. Simulations were performed on the model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model is based on the neutron source found in the PUSPATI TRIGA MARK II research reactor, which holds a maximum flux value of 1 x 10¹² neutrons/cm²·s. The expected outcomes of this research are zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day emerging from the chamber system. (author)

  9. Application of a Monte Carlo linac model in routine verifications of dose calculations

    International Nuclear Information System (INIS)

    Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.

    2015-01-01

    The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code. The optimal beam parameter values (energy and FWHM) previously obtained were used as the reference for the simulations. Deposited dose calculations in water phantoms were carried out for typical complex geometries commonly used in acceptance and quality control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between them were analyzed from calculations in water. Similarly, simulations were performed on phantoms obtained from CT studies of real patients, comparing the dose distribution calculated with EGSnrc and the dose distribution obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All the results showed good agreement with measurements, with all of them within tolerance limits. These results open up the possibility of using the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)

  10. Nanostructure evolution of neutron-irradiated reactor pressure vessel steels: Revised Object kinetic Monte Carlo model

    Energy Technology Data Exchange (ETDEWEB)

    Chiapetto, M., E-mail: mchiapet@sckcen.be [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium); Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Messina, L. [DEN-Service de Recherches de Métallurgie Physique, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France); KTH Royal Institute of Technology, Roslagstullsbacken 21, SE-114 21 Stockholm (Sweden); Becquart, C.S. [Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Olsson, P. [KTH Royal Institute of Technology, Roslagstullsbacken 21, SE-114 21 Stockholm (Sweden); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium)

    2017-02-15

    This work presents a revised set of parameters to be used in an Object kinetic Monte Carlo model to simulate the microstructure evolution under neutron irradiation of reactor pressure vessel steels at the operational temperature of light water reactors (∼300 °C). Within a “grey-alloy” approach, a more physical description than in a previous work is used to translate the effect of Mn and Ni solute atoms on the defect cluster diffusivity reduction. The slowing down of self-interstitial clusters, due to the interaction between solutes and crowdions in Fe is now parameterized using binding energies from the latest DFT calculations and the solute concentration in the matrix from atom-probe experiments. The mobility of vacancy clusters in the presence of Mn and Ni solute atoms was also modified on the basis of recent DFT results, thereby removing some previous approximations. The same set of parameters was seen to predict the correct microstructure evolution for two different types of alloys, under very different irradiation conditions: an Fe-C-MnNi model alloy, neutron irradiated at a relatively high flux, and a high-Mn, high-Ni RPV steel from the Swedish Ringhals reactor surveillance program. In both cases, the predicted self-interstitial loop density matches the experimental solute cluster density, further corroborating the surmise that the MnNi-rich nanofeatures form by solute enrichment of immobilized small interstitial loops, which are invisible to the electron microscope.

  11. Monte Carlo model to describe depth selective fluorescence spectra of epithelial tissue

    Science.gov (United States)

    Pavlova, Ina; Weber, Crystal Redden; Schwarz, Richard A.; Williams, Michelle; El-Naggar, Adel; Gillenwater, Ann; Richards-Kortum, Rebecca

    2008-01-01

    We present a Monte Carlo model to predict fluorescence spectra of the oral mucosa obtained with a depth-selective fiber optic probe as a function of tissue optical properties. A model sensitivity analysis determines how variations in optical parameters associated with neoplastic development influence the intensity and shape of spectra, and elucidates the biological basis for differences in spectra from normal and premalignant oral sites. Predictions indicate that spectra of oral mucosa collected with a depth-selective probe are affected by variations in epithelial optical properties, and to a lesser extent, by changes in superficial stromal parameters, but not by changes in the optical properties of deeper stroma. The depth selective probe offers enhanced detection of epithelial fluorescence, with 90% of the detected signal originating from the epithelium and superficial stroma. Predicted depth-selective spectra are in good agreement with measured average spectra from normal and dysplastic oral sites. Changes in parameters associated with dysplastic progression lead to a decreased fluorescence intensity and a shift of the spectra to longer emission wavelengths. Decreased fluorescence is due to a drop in detected stromal photons, whereas the shift of spectral shape is attributed to an increased fraction of detected photons arising in the epithelium. PMID:19123659

  12. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    Science.gov (United States)

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
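
    The event-selection step of a lattice kinetic Monte Carlo scheme of this type is usually the standard residence-time (BKL) algorithm, sketched below; the toy rate catalogue stands in for the classified transition levels discussed in the abstract.

        import numpy as np

        rng = np.random.default_rng(9)

        def kmc_step(rates):
            """One residence-time (BKL) kinetic Monte Carlo step.
            rates: rates of all currently possible transition events (1/s)."""
            total = rates.sum()
            event = np.searchsorted(np.cumsum(rates), rng.random() * total)  # P(k) = rate_k / total
            dt = -np.log(rng.random()) / total                               # exponential waiting time
            return event, dt

        # Toy catalogue: a few fast "fluctuation-like" events and one slow, rate-limiting one.
        rates = np.array([1e9, 8e8, 5e8, 2e3])
        t, counts = 0.0, np.zeros(len(rates))
        for _ in range(100_000):
            event, dt = kmc_step(rates)
            counts[event] += 1
            t += dt
        print("event frequencies:", counts / counts.sum(), " simulated time (s):", t)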

  13. Neutron and gamma sensitivities of self-powered detectors: Monte Carlo modelling

    Energy Technology Data Exchange (ETDEWEB)

    Vermeeren, Ludo [SCK-CEN, Nuclear Research Centre, Boeretang 200, B-2400 Mol, (Belgium)

    2015-07-01

    This paper deals with the development of a detailed Monte Carlo approach for the calculation of the absolute neutron sensitivity of SPNDs, which makes use of the MCNP code. We will explain the calculation approach, including the activation and beta emission steps, the gamma-electron interactions, the charge deposition in the various detector parts and the effect of the space charge field in the insulator. The model can also be applied for the calculation of the gamma sensitivity of self-powered detectors and for the radiation-induced currents in signal cables. The model yields detailed information on the various contributions to the sensor currents, with distinct response times. Results for the neutron sensitivity of various types of SPNDs are in excellent agreement with experimental data obtained at the BR2 research reactor. For typical neutron to gamma flux ratios, the calculated gamma induced SPND currents are significantly lower than the neutron induced currents. The gamma sensitivity depends very strongly upon the immediate detector surroundings and on the gamma spectrum. Our calculation method opens the way to a reliable on-line determination of the absolute in-pile thermal neutron flux. (authors)

  14. Modeling charged defects, dopant diffusion and activation mechanisms for TCAD simulations using kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Bragado, Ignacio [Synopsys Inc, 700 East Middlefield Road, Mountain View, 94043 CA (United States)]. E-mail: Ignacio.martin-bragado@synopsys.com; Tian, S. [Synopsys Inc, 700 East Middlefield Road, Mountain View, 94043 CA (United States); Johnson, M. [Synopsys Inc, 700 East Middlefield Road, Mountain View, 94043 CA (United States); Castrillo, P. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain); Pinacho, R. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain); Rubio, J. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain); Jaraiz, M. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain)

    2006-12-15

    This work will show how the kinetic Monte Carlo (KMC) technique is able to successfully model the defects and diffusion of dopants in Si-based materials for advanced microelectronic devices, especially for non-equilibrium conditions. Charge states of point defects and paired dopants are also simulated, including the dependency of the diffusivities on the Fermi level and charged particle drift coming from the electric field. The KMC method is used to simulate the diffusion of the point defects, and formation and dissolution of extended defects, whereas a quasi-atomistic approach is used to take into account the carrier densities. The simulated mechanisms include the kick-out diffusion mechanism, extended defect formation and the activation/deactivation of dopants through the formation of impurity clusters. Damage accumulation and amorphization are also taken into account. Solid phase epitaxy regrowth is included, and also the dopants redistribution during recrystallization of the amorphized regions. Regarding the charged defects, the model considers the dependencies of charge reactions, electric bias, pairing and break-up reactions according to the local Fermi level. Some aspects of the basic physical mechanisms have also been taken into consideration: how to smooth out the atomistic dopant point charge distribution, avoiding very abrupt and unphysical charge profiles and how to implement the drift of charged particles into the existing electric field. The work will also discuss the efficiency, accuracy and relevance of the method, together with its implementation in a technology computer aided design process simulator.

  15. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included along with a discussion on ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
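
    The Newton-Keller style building block referred to above, a player's probability of winning a single game given a fixed probability p of winning each point, can be obtained either in closed form or by direct Monte Carlo, as in the simplified iid sketch below (the dissertation's contribution is precisely to go beyond this iid assumption).

        import numpy as np

        def p_game_analytic(p):
            """Probability of winning a game when each point is won independently with probability p."""
            q = 1.0 - p
            p_deuce = p * p / (1.0 - 2.0 * p * q)              # probability of winning from deuce
            return (p**4 * (1 + 4*q + 10*q*q)                   # win to 0, 15 or 30
                    + 20 * p**3 * q**3 * p_deuce)               # reach deuce (3-3), then win

        def p_game_monte_carlo(p, n=100_000, seed=0):
            rng = np.random.default_rng(seed)
            wins = 0
            for _ in range(n):
                a = b = 0
                while True:
                    if rng.random() < p:
                        a += 1
                    else:
                        b += 1
                    if a >= 4 and a - b >= 2:
                        wins += 1
                        break
                    if b >= 4 and b - a >= 2:
                        break
            return wins / n

        print(p_game_analytic(0.55), p_game_monte_carlo(0.55))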

  16. Monte Carlo modeling of the absolute solid angle acceptance in the LAMPF P10 spectrometer

    International Nuclear Information System (INIS)

    Redmon, J.A.; Isenhower, L.D.; Sadler, M.E.

    1992-01-01

    The P10 Spectrometer is used to measure the individual energies of the two gamma rays produced by π⁰ decay. With these energies and the opening angle between the two gamma rays, the energy of the π⁰ can be calculated. An absolute determination of the solid angle is needed for measurements of the differential cross section for π⁻p → η⁰n. Using Monte Carlo modeling, the laboratory setup is scrutinized by changing various parameters in the simulation runs. This model is compared with actual data obtained by experiment. Using this comparison, it is possible to determine any deviations in the laboratory setup. With the deviations known, an absolute solid angle acceptance can be computed. Parameters varied are the following: (1) Beam Energy, (2) Beam Position in the X and Y direction relative to the target, and (3) Target Position in the Z direction relative to the pivot between the two arms of the spectrometer. These variations are made for beam energies of 10, 20, and 40 MeV at laboratory scattering angles of 0 and 180 degrees

  17. Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics.

    Science.gov (United States)

    Yang, Qian; Sing-Long, Carlos A; Reed, Evan J

    2017-08-01

    We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.

  18. Modeling of continuous free-radical butadiene-styrene copolymerization process by the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    T. A. Mikhailova

    2016-01-01

    Full Text Available In this paper, an algorithm for modeling the continuous low-temperature free-radical butadiene-styrene copolymerization process in emulsion, based on the Monte Carlo method, is presented. This process is the cornerstone of the industrial production of butadiene-styrene synthetic rubber, the most widespread large-capacity general-purpose rubber. The basis of the modeling algorithm is the simulation of the growth of each macromolecule of the formed copolymer and the tracking of the processes it undergoes. Modeling is carried out taking into account the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds in a battery of serially connected polymerization reactors. Each polymerization reactor is treated as a continuous stirred-tank reactor. Since the process is continuous, the continuous addition of fresh portions of the reaction mixture to the first reactor of the battery is considered. The constructed model makes it possible to study the molecular-weight and viscosity characteristics of the copolymerization product, to predict the mass content of butadiene and styrene in the copolymer, and to calculate the molecular-weight distribution of the product at any moment of the process. Computational experiments were used to analyze how the mode of introducing the regulator during the process influences the characteristics of the formed butadiene-styrene copolymer. Since the considered process involves two types of monomers, the model also makes it possible to study the compositional heterogeneity of the product, that is, to calculate the composition distribution and the distribution of macromolecules by size and structure. On the basis of the proposed algorithm, a software tool was created that allows tracking changes in the characteristics of the resulting product over time.

  19. Simulation and modeling efforts to support decision making in healthcare supply chain management.

    Science.gov (United States)

    AbuKhousa, Eman; Al-Jaroodi, Jameela; Lazarova-Molnar, Sanja; Mohamed, Nader

    2014-01-01

    Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the associated decision-making processes. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  20. Simulation and Modeling Efforts to Support Decision Making in Healthcare Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Eman AbuKhousa

    2014-01-01

    Full Text Available Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the associated decision-making processes. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  1. Competition for marine space: modelling the Baltic Sea fisheries and effort displacement under spatial restrictions

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Eigaard, Ole Ritzau

    2015-01-01

    to fishery and from vessel to vessel. The impact assessment of new spatial plans involving fisheries should be based on quantitative bioeconomic analyses that take into account individual vessel decisions, and trade-offs in cross-sector conflicting interests. We use a vessel-oriented decision-support tool (the...... DISPLACE model) to combine stochastic variations in spatial fishing activities with harvested resource dynamics in scenario projections. The assessment computes economic and stock status indicators by modelling the activity of Danish, Swedish, and German vessels (.12 m) in the international western Baltic...... Sea commercial fishery, together with the underlying size-based distribution dynamics of the main fishery resources of sprat, herring, and cod. The outcomes of alternative scenarios for spatial effort displacement are exemplified by evaluating the fishers' abilities to adapt to spatial plans under...

  2. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    Full Text Available In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using the simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.

  3. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D., Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
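    As an illustration of this two-stage strategy, the hedged sketch below screens the parameters of a toy model with finite-difference linear sensitivity coefficients and then fits a quadratic response surface to Monte Carlo samples of the retained parameters; the toy function and sampling ranges are placeholders for the soybean growth model.

```python
import numpy as np

# Illustrative sketch of the two-stage strategy on a toy model y = f(p); the plant
# growth model itself is not reproduced here.
def f(p):
    return p[0] ** 2 + 0.1 * p[1] + 0.01 * np.sin(p[2])

p0 = np.array([1.0, 2.0, 3.0])

# Stage 1: screen parameters with finite-difference linear sensitivity coefficients.
eps = 1e-4
sens = np.array([(f(p0 + eps * np.eye(3)[i]) - f(p0)) / eps for i in range(3)])
keep = np.argsort(np.abs(sens))[::-1][:2]       # keep the two most sensitive parameters

# Stage 2: Monte Carlo sample the retained parameters and fit a quadratic response surface.
rng = np.random.default_rng(1)
samples = p0 + rng.uniform(-0.2, 0.2, size=(500, 3)) * np.isin(np.arange(3), keep)
y = np.array([f(p) for p in samples])
x1, x2 = samples[:, keep[0]], samples[:, keep[1]]
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("response-surface coefficients:", coeffs)
```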

  4. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    Science.gov (United States)

    Zhang, D.; Liao, Q.

    2016-12-01

    Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since the MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids, which takes into account the different importance of the parameters under the condition of high random dimensions in the stochastic space. Furthermore, in case of low regularity such as discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of
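    The core idea of surrogate-accelerated MCMC can be sketched as follows: a cheap polynomial surrogate is fitted to the forward model on a few collocation points and then drives a Metropolis-Hastings sampler. The one-dimensional forward model, prior and noise level below are illustrative assumptions, and the nested sparse grids and transformation step of the paper are not reproduced.

```python
import numpy as np

# Minimal sketch: replace an expensive forward model by a polynomial surrogate
# fitted on a few collocation points, then run Metropolis-Hastings on the surrogate.
def forward(m):                       # "expensive" forward model (stand-in)
    return np.exp(-m) + 0.5 * m ** 2

# Build the surrogate on a small set of collocation points.
nodes = np.linspace(-2.0, 2.0, 9)
coeff = np.polyfit(nodes, forward(nodes), deg=4)
surrogate = lambda m: np.polyval(coeff, m)

d_obs, sigma = 1.3, 0.05              # observation and noise std (assumed)
def log_post(m):                      # Gaussian likelihood times standard normal prior
    return -0.5 * ((surrogate(m) - d_obs) / sigma) ** 2 - 0.5 * m ** 2

rng = np.random.default_rng(2)
m, chain = 0.0, []
for _ in range(20000):
    prop = m + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    chain.append(m)
print("posterior mean (surrogate-based):", np.mean(chain[5000:]))
```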

  5. [Verification of the VEF photon beam model for dose calculations by the Voxel-Monte-Carlo-Algorithm].

    Science.gov (United States)

    Kriesen, Stephan; Fippel, Matthias

    2005-01-01

    The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tübingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning.

  6. Verification of the VEF photon beam model for dose calculations by the voxel-Monte-Carlo-algorithm

    International Nuclear Information System (INIS)

    Kriesen, S.; Fippel, M.

    2005-01-01

    The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tuebingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning. (orig.)

  7. Monte Carlo modelling of the effect of an absorber on an electron beam

    International Nuclear Information System (INIS)

    Li, L.; Stewart, A.T.; Round, W.H.

    2004-01-01

    Full text: The electron beam from a linear accelerator is essentially spatially uniform in energy and intensity. Hence it may not be suitable for treating a patient where it is desirable for the treatment depth to vary across the field. Using an absorber to shield the part of the beam where a shallower treatment depth is required may provide a solution. But the absorber will cause energy degradation, spectrum spreading and scattering of the incident beam. This situation was investigated using Monte Carlo simulation to determine the changes in the incident beam under, and near the edge of, a sheet absorber made of a low atomic number material. The EGSnrc system, along with user code written in Mortran, was used to perform the Monte Carlo simulations. A situation where a thin absorber was placed in a pure 15 MeV 10 cm wide electron beam from a point source was modelled. The absorber was placed to cover half of the beam. This was repeated for different thicknesses of aluminium. It was further repeated for absorbers where the edge at the middle of the beam is chamfered. The dose distributions were plotted, and compared to measured distributions from a clinical accelerator. In addition, the effects of energy degradation, spectrum spreading and scattering were also investigated. This was done by analysing the energies and angles of the simulated electrons after passing through the absorber. Knowledge of energy loss versus scattering angle for different thicknesses of different materials allows for a better choice of absorber. The simulations predicted that at the edge of the shadow of the absorber a hot spot appeared outside the shadow and a cold spot inside the shadow. This was confirmed by measurement. Chamfering the edge of the absorber was seen to reduce this effect, with the significance of the effect being dependent on the absorber thickness and the shape of the chamfer. The choice of thickness of the absorber should take into account the effects of energy spectrum

  8. Modeling Efforts to Aid in the Determination of Process Enrichment Levels for Identifying Potential Material Diversion

    International Nuclear Information System (INIS)

    Guenther, C F; Elayat, H A; O'Connell, W J

    2006-01-01

    Efforts have been under way at Lawrence Livermore National Laboratory (LLNL) to develop detailed analytical models that simulate enrichment and conversion facilities for the purpose of aiding in the detection of material diversion as part of an overall safeguards strategy. These models could be used to confirm proper accountability of the nuclear materials at facilities worldwide. Operation of an enrichment process for manufacturing commercial reactor fuel presents proliferation concerns including both diversion and the potential for further enrichment to make weapons grade material. While inspections of foreign reprocessing facilities by the International Atomic Energy Agency (IAEA) are meant to ensure that such diversion is not occurring, it must be verified that such diversion is not taking place through both examination of the facility and taking specific measurements such as the radiation fields outside of various process lines. Our current effort is developing algorithms that would be incorporated into the current process models and that would provide both neutron and gamma radiation fields outside any process line for the purpose of determining the most effective locations for placing in-plant monitoring equipment. These algorithms, while providing dose and spectral information, could also be designed to provide detector responses that could be physically measured at various points on the process line. Such information could be used to optimize detector locations in support of real-time on-site monitoring to determine the enrichment levels within a process stream. The results of parametric analyses to establish expected variations for several different process streams and configurations are presented. Based upon these results, the capability of a sodium iodide (NaI(Tl)), high-purity germanium (HPGe), or neutron detection system is being investigated from the standpoint of their viability in quantitatively measuring and discerning the enrichment and potential

  9. AN ENHANCED MODEL TO ESTIMATE EFFORT, PERFORMANCE AND COST OF THE SOFTWARE PROJECTS

    Directory of Open Access Journals (Sweden)

    M. Pauline

    2013-04-01

    Full Text Available The authors have proposed a model that first captures the fundamentals of software metrics in phase 1, consisting of three primitive primary software engineering metrics: person-months (PM), function points (FP), and lines of code (LOC). Phase 2 consists of the proposed function point, which is obtained by grouping the adjustment factors to simplify the process of adjustment and to ensure more consistency in the adjustments. In the proposed method, fuzzy logic is used for quantifying the quality of requirements and is added as one of the adjustment factors; thus a fuzzy-based approach for the Enhanced General System Characteristics to estimate the effort of software projects using productivity has been obtained. Phase 3 takes the calculated function point from this work and gives it as input to the static single-variable model (i.e. to the Intermediate COCOMO and COCOMO II) for cost estimation. The authors have tailored the cost factors in Intermediate COCOMO, and both cost and scale factors are tailored in COCOMO II, to suit the individual development environment, which is very important for the accuracy of the cost estimates. The software performance indicators, namely project duration, schedule predictability, requirements completion ratio and post-release defect density, are also measured for the software projects in this work. A comparative study for effort, performance measurement and cost estimation of the software projects is done between the existing model and the authors' proposed work. Thus this work analyzes the interactional process through which the estimation tasks were collectively accomplished.
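    The Intermediate COCOMO step in phase 3 can be illustrated with the sketch below, which uses the published nominal (a, b) coefficients; the function point count, the FP-to-KLOC conversion factor and the effort multipliers (EAF) are placeholders, not the adjusted values produced by the paper's fuzzy approach.

```python
# Illustrative sketch of an Intermediate COCOMO effort estimate driven by an
# (adjusted) function point count. The (a, b) pairs are the published nominal
# Intermediate COCOMO coefficients; loc_per_fp and eaf below are assumptions.
COCOMO_COEFFS = {          # effort (person-months) = a * KLOC**b * EAF
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_cocomo_effort(function_points, loc_per_fp=50, eaf=1.0,
                               mode="semi-detached"):
    """Estimate effort in person-months from an adjusted function point count."""
    kloc = function_points * loc_per_fp / 1000.0
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b * eaf

# Example: 250 adjusted function points, an assumed 50 LOC per FP, EAF of 1.1.
print(round(intermediate_cocomo_effort(250, eaf=1.1), 1), "person-months")
```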

  10. Modelling Protein-induced Membrane Deformation using Monte Carlo and Langevin Dynamics Simulations

    Science.gov (United States)

    Radhakrishnan, R.; Agrawal, N.; Ramakrishnan, N.; Kumar, P. B. Sunil; Liu, J.

    2010-11-01

    In eukaryotic cells, internalization of extracellular cargo via the cellular process of endocytosis is orchestrated by a variety of proteins, many of which are implicated in membrane deformation/bending. We model the energetics of membrane deformations with the Helfrich Hamiltonian using two different formalisms: (i) the Cartesian or Monge gauge using Langevin dynamics; (ii) a curvilinear coordinate system using Monte Carlo (MC). The Monge gauge approach, which has been extensively studied, is limited to small deformations of the membrane and cannot describe extreme deformations. The curvilinear coordinate approach can handle large deformation limits as well as finite-temperature membrane fluctuations; here we employ an unstructured triangular mesh to compute the local curvature tensor, and we evolve the membrane surface using an MC method. In our application, we compare the two approaches (i and ii above) to study how the spatial assembly of curvature-inducing proteins leads to vesicle budding from a planar membrane. We also quantify how the curvature field of the membrane impacts the spatial segregation of proteins.

  11. Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations

    Science.gov (United States)

    Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-09-01

    The relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport in media. Incident carbon-ion beams with energies in the range between 220 MeV/u and 290 MeV/u were chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE of 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.

  12. Optimization of dual-wavelength intravascular photoacoustic imaging of atherosclerotic plaques using Monte Carlo optical modeling

    Science.gov (United States)

    Dana, Nicholas; Sowers, Timothy; Karpiouk, Andrei; Vanderlaan, Donald; Emelianov, Stanislav

    2017-10-01

    Coronary heart disease (the presence of coronary atherosclerotic plaques) is a significant health problem in the industrialized world. A clinical method to accurately visualize and characterize atherosclerotic plaques is needed. Intravascular photoacoustic (IVPA) imaging is being developed to fill this role, but questions remain regarding optimal imaging wavelengths. We utilized a Monte Carlo optical model to simulate IVPA excitation in coronary tissues, identifying optimal wavelengths for plaque characterization. Near-infrared wavelengths (≤1800 nm) were simulated, and single- and dual-wavelength data were analyzed for accuracy of plaque characterization. Results indicate light penetration is best in the range of 1050 to 1370 nm, where 5% residual fluence can be achieved at clinically relevant depths of ≥2 mm in arteries. Across the arterial wall, fluence may vary by over 10-fold, confounding plaque characterization. For single-wavelength results, plaque segmentation accuracy peaked at 1210 and 1720 nm, though correlation was poor (primary wavelength (≈1.0). Results suggest that, without flushing the luminal blood, a primary and secondary wavelength near 1210 and 1350 nm, respectively, may offer the best implementation of dual-wavelength IVPA imaging. These findings could guide the development of a cost-effective clinical system by highlighting optimal wavelengths and improving plaque characterization.

  13. Monte Carlo modeling the phase diagram of magnets with the Dzyaloshinskii - Moriya interaction

    Science.gov (United States)

    Belemuk, A. M.; Stishov, S. M.

    2017-11-01

    We use classical Monte Carlo calculations to model the high-pressure behavior of the phase transition in the helical magnets. We vary values of the exchange interaction constant J and the Dzyaloshinskii-Moriya interaction constant D, which is equivalent to changing spin-spin distances, as occurs in real systems under pressure. The system under study is self-similar at D/J = constant, and its properties are defined by the single variable J/T, where T is temperature. The existence of the first order phase transition critically depends on the ratio D/J. A variation of J strongly affects the phase transition temperature and width of the fluctuation region (the "hump") as follows from the system self-similarity. The high-pressure behavior of the spin system depends on the evolution of the interaction constants J and D on compression. Our calculations are relevant to the high pressure phase diagrams of helical magnets MnSi and Cu2OSeO3.

  14. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for the analysis of stability failure risk for gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.

  15. Monte Carlo Modeling of Sodium in Mercury's Exosphere During the First Two MESSENGER Flybys

    Science.gov (United States)

    Burger, Matthew H.; Killen, Rosemary M.; Vervack, Ronald J., Jr.; Bradley, E. Todd; McClintock, William E.; Sarantos, Menelaos; Benna, Mehdi; Mouawad, Nelly

    2010-01-01

    We present a Monte Carlo model of the distribution of neutral sodium in Mercury's exosphere and tail using data from the Mercury Atmospheric and Surface Composition Spectrometer (MASCS) on the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft during the first two flybys of the planet in January and September 2008. We show that the dominant source mechanism for ejecting sodium from the surface is photon-stimulated desorption (PSD) and that the desorption rate is limited by the diffusion rate of sodium from the interior of grains in the regolith to the topmost few monolayers where PSD is effective. In the absence of ion precipitation, we find that the sodium source rate is limited to approximately 10^6 - 10^7 per square centimeter per second, depending on the sticking efficiency of exospheric sodium that returns to the surface. The diffusion rate must be at least a factor of 5 higher in regions of ion precipitation to explain the MASCS observations during the second MESSENGER flyby. We estimate that impact vaporization of micrometeoroids may provide up to 15% of the total sodium source rate in the regions observed. Although sputtering by precipitating ions was found not to be a significant source of sodium during the MESSENGER flybys, ion precipitation is responsible for increasing the source rate at high latitudes through ion-enhanced diffusion.

  16. Monte Carlo Technique Used to Model the Degradation of Internal Spacecraft Surfaces by Atomic Oxygen

    Science.gov (United States)

    Banks, Bruce A.; Miller, Sharon K.

    2004-01-01

    Atomic oxygen is one of the predominant constituents of Earth's upper atmosphere. It is created by the photodissociation of molecular oxygen (O2) into single O atoms by ultraviolet radiation. It is chemically very reactive because a single O atom readily combines with another O atom or with other atoms or molecules that can form a stable oxide. The effects of atomic oxygen on the external surfaces of spacecraft in low Earth orbit can have dire consequences for spacecraft life, and this is a well-known and much studied problem. Much less information is known about the effects of atomic oxygen on the internal surfaces of spacecraft. This degradation can occur when openings in components of the spacecraft exterior exist that allow the entry of atomic oxygen into regions that may not have direct atomic oxygen attack but rather scattered attack. Openings can exist because of spacecraft venting, microwave cavities, and apertures for Earth viewing, Sun sensors, or star trackers. The effects of atomic oxygen erosion of polymers interior to an aperture on a spacecraft were simulated at the NASA Glenn Research Center by using Monte Carlo computational techniques. A two-dimensional model was used to provide quantitative indications of the attenuation of atomic oxygen flux as a function of the distance into a parallel-walled cavity. The model allows the atomic oxygen arrival direction, the Maxwell-Boltzmann temperature, and the ram energy to be varied along with the interaction parameters of the degree of recombination upon impact with polymer or nonreactive surfaces, the initial reaction probability, the reaction probability dependence upon energy and angle of attack, the degree of specularity of scattering of reactive and nonreactive surfaces, and the degree of thermal accommodation upon impact with reactive and non-reactive surfaces, to allow the model to produce atomic oxygen erosion geometries that replicate actual experimental results from space. The degree of
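    The following is a simplified two-dimensional sketch of the idea (not the Glenn model itself): atoms entering a parallel-walled cavity are traced between the walls with diffuse re-emission and a fixed reaction probability per impact, and the tally of wall impacts versus depth gives a rough flux-attenuation profile; all parameters are assumptions for illustration.

```python
import numpy as np

# Illustrative 2-D sketch: atoms enter the mouth of a parallel-walled cavity
# (walls at x = 0 and x = width) and bounce between the walls with diffuse
# (cosine-law) re-emission; each wall impact reacts with a fixed probability.
rng = np.random.default_rng(3)
width, depth, p_react, n_atoms = 1.0, 10.0, 0.1, 50000
impacts = np.zeros(int(depth))

for _ in range(n_atoms):
    x, y = rng.uniform(0, width), 0.0
    ang = rng.uniform(0.1, np.pi - 0.1)            # entry direction, pointing into the cavity
    vx, vy = np.cos(ang), np.sin(ang)
    while 0.0 <= y < depth:
        # advance to the next wall (x = 0 or x = width)
        t = (width - x) / vx if vx > 0 else -x / vx
        x, y = (width if vx > 0 else 0.0), y + t * vy
        if not (0.0 <= y < depth):
            break                                   # escaped out the mouth or past max depth
        impacts[int(y)] += 1
        if rng.uniform() < p_react:
            break                                   # reacted with (eroded) the wall
        # diffuse cosine-law re-emission from the wall back into the cavity
        phi = np.arcsin(2.0 * rng.uniform() - 1.0)  # angle measured from the wall normal
        nx = -1.0 if x == width else 1.0            # inward-pointing wall normal
        vx, vy = nx * np.cos(phi), np.sin(phi)

print("relative wall-impact flux per depth bin:", impacts / impacts.max())
```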

  17. Comparison of deep inelastic electron-photon scattering data with the HERWIG and PHOJET Monte Carlo models

    CERN Document Server

    Achard, P.; Braccini, S.; Chamizo, M.; Cowan, G.; de Roeck, A.; Field, J.H.; Finch, A.J.; Lin, C.H.; Lauber, J.A.; Lehto, M.H.; Kienzle-Focacci, M.N.; Miller, D.J.; Nisius, R.; Saremi, S.; Soldner-Rembold, S.; Surrow, B.; Taylor, R.J.; Wadhwa, M.; Wright, A.E.

    2002-01-01

    Deep inelastic electron-photon scattering is studied in the $Q^2$ range from 1.2 to 30 GeV$^2$ using the LEP1 data taken with the ALEPH, L3 and OPAL detectors at centre-of-mass energies close to the mass of the Z boson. Distributions of the measured hadronic final state are corrected to the hadron level and compared to the predictions of the HERWIG and PHOJET Monte Carlo models. For large regions in most of the distributions studied the results of the different experiments agree with one another. However, significant differences are found between the data and the models. Therefore the combined LEP data serve as an important input to improve on the Monte Carlo models.

  18. Determinants for a successful Sémont maneuver: an in-vitro study with a semicircular canal model

    Directory of Open Access Journals (Sweden)

    Dominik Obrist

    2016-09-01

    Full Text Available Objective: To evaluate the effect of time between the movements/steps, the angle of body movements as well as the angular velocity of the maneuvers in an in-vitro model of a semicircular canal (SCC) to improve the efficacy of the Sémont maneuver in benign paroxysmal positional vertigo (BPPV). Methods: Sémont maneuvers were performed on an in-vitro SCC model. Otoconia trajectories were captured by a video camera. The effects of time between the movements, angles of motion (0°, 10°, 20°, 30° below the horizontal line), different angular velocities (90, 135, 180°/s) and otoconia size (36 and 50 µm) on the final position of the otoconia in the SCC were tested. Results: Without extension of the movements beyond the horizontal, the in-vitro experiments (with particles corresponding to 50 µm diameter) did not yield successful canalith repositioning. If the movements were extended by 20° beyond the horizontal position, Sémont maneuvers were successful with resting times of at least 16 s. For larger extension angles the required time decreased. However, for smaller particles (36 µm) the required time doubled. The angular maneuver velocity (tested between 90 and 180°/s) did not have a major impact on the final position of the otoconia. Interpretation: The two primary determinants for success of the Sémont maneuver are the time between the movements and the extension of the movements beyond the horizontal. The time between the movements should be at least 45 s. Angles of 20° or more below the horizontal line (the so-called Sémont ++) should increase the success rate of SM.

  19. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approx. 10^29. We find good agreement with experimental results for both state densities and <J^2> (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments

  20. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    Science.gov (United States)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 x 10^6 model runs randomly drawn from an equally distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R^2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
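    A minimal GLUE-style sketch on a toy exponential model is given below; the coupled CMF/PMF simulator, its 19 parameters and the field observations are not reproduced, and simple percentile bounds are used in place of likelihood-weighted quantiles.

```python
import numpy as np

# Minimal GLUE-style sketch on a toy model standing in for the coupled simulator.
rng = np.random.default_rng(4)
t = np.linspace(0, 10, 50)
obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)   # synthetic "observations"

def model(a, k):                       # toy forward model
    return a * np.exp(-k * t)

def nse(sim, obs):                     # Nash-Sutcliffe efficiency
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo sampling from an equally distributed (uniform) parameter space
params = rng.uniform([0.5, 0.05], [4.0, 1.0], size=(20000, 2))
scores = np.array([nse(model(a, k), obs) for a, k in params])

behavioral = params[scores > 0.7]      # retain "behavioral" parameter sets
sims = np.array([model(a, k) for a, k in behavioral])
lower, upper = np.percentile(sims, [5, 95], axis=0)          # GLUE uncertainty bounds
print(len(behavioral), "behavioral sets; bounds at t=0:", lower[0], upper[0])
```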

  1. A Pilot Study to Compare Programming Effort for Two Parallel Programming Models (PREPRINT)

    National Research Council Canada - National Science Library

    Hochstein, Lorin; Basili, Victor R; Vishkin, Uzi; Gilbert, John

    2007-01-01

    CONTEXT: Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance. OBJECTIVE...

  2. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.

  3. Linking sociological with physiological data: the model of effort-reward imbalance at work.

    Science.gov (United States)

    Siegrist, J; Klein, D; Voigt, K H

    1997-01-01

    While socio-epidemiologic studies have documented impressive associations of indicators of chronic psychosocial stress with cardiovascular (c.v.) disease, evidence on patho-physiologic processes is still limited. In this regard, the concept of heightened c.v. and hormonal reactivity (RE) to mental stress was proposed and explored. While this concept is a static one, we suggest a more dynamic two-stage model of RE where recurrent high responsiveness (stage 1) in the long run results in attenuated, reduced maximal RE due to functional adaptation (stage 2). We present results of an indirect test of this hypothesis in a group of 68 healthy middle-aged men undergoing a modified Stroop Test: in men suffering from high chronic work stress in terms of effort-reward imbalance, significantly reduced RE in heart rate, adrenaline and cortisol was found after adjusting for relevant confounders. In conclusion, results underscore the potential of linking sociological with physiological data in stress research.

  4. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586

  5. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling.

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β (+) emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  6. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Ledieu, A.

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  7. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Ledieu, A.

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  8. Target dose conversion modeling from pencil beam (PB) to Monte Carlo (MC) for lung SBRT

    International Nuclear Information System (INIS)

    Zheng, Dandan; Zhu, Xiaofeng; Zhang, Qinghui; Liang, Xiaoying; Zhen, Weining; Lin, Chi; Verma, Vivek; Wang, Shuo; Wahl, Andrew; Lei, Yu; Zhou, Sumin; Zhang, Chi

    2016-01-01

    A challenge preventing routine clinical implementation of Monte Carlo (MC)-based lung SBRT is the difficulty of reinterpreting historical outcome data calculated with inaccurate dose algorithms, because the target dose was found to decrease to varying degrees when recalculated with MC. The large variability was previously found to be affected by factors such as tumour size, location, and lung density, usually through sub-group comparisons. We hereby conducted a pilot study to systematically and quantitatively analyze these patient factors and explore accurate target dose conversion models, so that large-scale historical outcome data can be correlated with more accurate MC dose without recalculation. Twenty-one patients who underwent SBRT for early-stage lung cancer were replanned with 6 MV 360° dynamic conformal arcs using pencil-beam (PB) and recalculated with MC. The percent D95 difference (PB-MC) was calculated for the PTV and GTV. Using single linear regression, this difference was correlated with the following quantitative patient indices: maximum tumour diameter (MaxD); PTV and GTV volumes; minimum distance from tumour to soft tissue (dmin); and mean density and standard deviation of the PTV, GTV, PTV margin, lung, and 2 mm, 15 mm, 50 mm shells outside the PTV. Multiple linear regression and artificial neural network (ANN) were employed to model multiple factors and improve dose conversion accuracy. Single linear regression with PTV D95 deficiency identified the strongest correlation on mean-density (location) indices, weaker on lung density, and the weakest on size indices, with the following R^2 values in decreasing order: shell2mm (0.71), PTV (0.68), PTV margin (0.65), shell15mm (0.62), shell50mm (0.49), lung (0.40), dmin (0.22), GTV (0.19), MaxD (0.17), PTV volume (0.15), and GTV volume (0.08). A multiple linear regression model yielded the significance factor of 3.0E-7 using two independent features: mean density of shell2mm (P = 1.6E-7) and PTV volume
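    The regression step can be illustrated with synthetic data as below: a multiple linear regression predicts the PB-minus-MC D95 difference from two indices (mean density of the 2 mm shell and PTV volume); the coefficients and feature values are placeholders, not the paper's fitted model.

```python
import numpy as np

# Illustrative sketch of the dose-conversion idea: fit a multiple linear regression
# predicting the PB-minus-MC D95 difference from two patient indices. All numbers
# below are synthetic placeholders.
rng = np.random.default_rng(5)
n = 21
shell2mm_density = rng.uniform(0.2, 0.9, n)     # mean density of 2 mm shell (synthetic)
ptv_volume = rng.uniform(5.0, 60.0, n)          # PTV volume in cc (synthetic)
d95_diff = 12.0 - 10.0 * shell2mm_density - 0.05 * ptv_volume + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), shell2mm_density, ptv_volume])
beta, res, rank, sv = np.linalg.lstsq(X, d95_diff, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((d95_diff - pred) ** 2) / np.sum((d95_diff - d95_diff.mean()) ** 2)
print("fitted coefficients:", beta, " R^2:", r2)
```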

  9. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

    Full Text Available Background: Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte-Carlo simulations, to measure completeness grades of GRNs. Results: We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion: The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can
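    The completeness heuristic can be sketched on a two-gene toy network as below: a knock-down perturbation is simulated under many randomly sampled parameter sets and the sign of its effect on a downstream gene is checked for parameter invariance; the network, kinetics and thresholds are illustrative assumptions, not the sea urchin endomesoderm model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conceptual sketch of the heuristic on a 2-gene toy network: simulate a knock-down
# of gene A under many randomly sampled parameter sets and check whether the sign
# of the effect on gene B is parameter-invariant.
rng = np.random.default_rng(7)

def rhs(t, x, k_act, k_deg, knockdown):
    a, b = x
    prod_a = 0.0 if knockdown else 1.0           # production of A (knocked down or not)
    da = prod_a - k_deg * a
    db = k_act * a / (1.0 + a) - k_deg * b       # A activates B (saturating kinetics)
    return [da, db]

signs = []
for _ in range(200):
    k_act, k_deg = rng.uniform(0.1, 5.0, 2)
    b_levels = [solve_ivp(rhs, (0, 50), [0.1, 0.1], args=(k_act, k_deg, kd),
                          t_eval=[50]).y[1, -1] for kd in (False, True)]
    signs.append(np.sign(b_levels[1] - b_levels[0]))   # effect of the knock-down on B

print("fraction of samples where the knock-down lowers B:",
      np.mean(np.array(signs) < 0))
```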

  10. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    Full Text Available The choice of appropriate interaction models is among the major disadvantages of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm, which introduces an energy penalty term into the acceptance criteria. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined potential model of Coulomb and Lennard-Jones in a fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or a similar system. We also suggest that HRMC could be useful as a tool for testing the interaction potential models, as well as for conventional applications.
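    A conceptual sketch of an HRMC-style acceptance test is given below: a trial atomic move is accepted on a combined criterion mixing the change in the fit to experimental data (chi-squared) with an energy penalty from the assumed interaction potential; the weighting, temperature factor and placeholder chi2/energy values are assumptions, not the exact criterion used in the paper.

```python
import numpy as np

# Conceptual sketch of a Hybrid Reverse Monte Carlo acceptance test: a trial atomic
# move is accepted on a criterion combining the change in chi^2 against the
# experimental structure data with an energy penalty from the assumed potential
# (e.g. combined Coulomb + Lennard-Jones). w and kT are assumed weights.
rng = np.random.default_rng(6)

def hrmc_accept(chi2_old, chi2_new, e_old, e_new, kT=1.0, w=1.0):
    """Return True if the trial configuration is accepted."""
    delta = 0.5 * (chi2_new - chi2_old) + w * (e_new - e_old) / kT
    return delta <= 0.0 or rng.uniform() < np.exp(-delta)

# Usage: inside the RMC loop, propose a random atomic displacement, recompute the
# partial pair distribution functions and the potential energy, then call hrmc_accept.
print(hrmc_accept(chi2_old=105.0, chi2_new=103.5, e_old=-50.0, e_new=-49.7))
```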

  11. Hybrid method for fast Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with tumor-like heterogeneities.

    Science.gov (United States)

    Zhu, Caigang; Liu, Quan

    2012-01-01

    We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
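    The second step relies on a perturbation Monte Carlo weight update of the standard form (after Hayakawa and co-workers); the sketch below shows this generic rescaling for a photon whose recorded trajectory crosses the tumor-like heterogeneity, and is not necessarily the authors' exact implementation.

```python
import numpy as np

# Generic perturbation Monte Carlo weight update, shown as an illustration of the
# second step of the hybrid method; this is the standard pMC formula, not the
# authors' exact code.
def perturb_weight(w, n_collisions, path_len, mus_base, mua_base, mus_pert, mua_pert):
    """Rescale a photon's exit weight for a perturbed region it traversed.

    n_collisions : scattering events recorded inside the perturbed (tumor) region
    path_len     : total path length of the photon inside that region
    """
    mut_base = mus_base + mua_base
    mut_pert = mus_pert + mua_pert
    return (w
            * (mus_pert / mus_base) ** n_collisions
            * np.exp(-(mut_pert - mut_base) * path_len))

# Example: a photon with baseline exit weight 0.8, 3 collisions and 0.4 cm of path
# inside a tumor whose optical properties differ from the background layer.
print(perturb_weight(0.8, 3, 0.4, mus_base=100.0, mua_base=0.1,
                     mus_pert=120.0, mua_pert=0.3))
```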

  12. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  13. A Collaborative Effort Between Caribbean States for Tsunami Numerical Modeling: Case Study CaribeWave15

    Science.gov (United States)

    Chacón-Barrantes, Silvia; López-Venegas, Alberto; Sánchez-Escobar, Rónald; Luque-Vergara, Néstor

    2017-10-01

    Historical records have shown that tsunami have affected the Caribbean region in the past. However infrequent, recent studies have demonstrated that they pose a latent hazard for countries within this basin. The Hazard Assessment Working Group of the ICG/CARIBE-EWS (Intergovernmental Coordination Group of the Early Warning System for Tsunamis and Other Coastal Threats for the Caribbean Sea and Adjacent Regions) of IOC/UNESCO has a modeling subgroup, which seeks to develop a modeling platform to assess the effects of possible tsunami sources within the basin. The CaribeWave tsunami exercise is carried out annually in the Caribbean region to increase awareness and test tsunami preparedness of countries within the basin. In this study we present results of tsunami inundation using the CaribeWave15 exercise scenario for four selected locations within the Caribbean basin (Colombia, Costa Rica, Panamá and Puerto Rico), performed by tsunami modeling researchers from those selected countries. The purpose of this study was to provide the states with additional results for the exercise. The results obtained here were compared to co-seismic deformation and tsunami heights within the basin (energy plots) provided for the exercise to assess the performance of the decision support tools distributed by PTWC (Pacific Tsunami Warning Center), the tsunami service provider for the Caribbean basin. However, comparison of coastal tsunami heights was not possible, due to inconsistencies between the provided fault parameters and the modeling results within the provided exercise products. Still, the modeling performed here allowed to analyze tsunami characteristics at the mentioned states from sources within the North Panamá Deformed Belt. The occurrence of a tsunami in the Caribbean may affect several countries because a great variety of them share coastal zones in this basin. Therefore, collaborative efforts similar to the one presented in this study, particularly between neighboring

  14. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  15. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO 2 )]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO 2 ), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO 2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO 2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower
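
    The random-sampling step described above can be pictured with the following minimal sketch (not the published model; the grid axes, units and table values are placeholders): oxygen tensions are drawn by sampling the three vascular variables and interpolating trilinearly in a precomputed table.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    rng = np.random.default_rng(1)

    # Hypothetical lookup table: oxygen tension (mmHg) simulated for three settings
    # each of blood velocity, vessel proximity and inflowing pO2 (27 combinations).
    velocity = np.array([0.5, 1.0, 2.0])      # mm/s
    proximity = np.array([50.0, 100.0, 200.0])  # micrometres
    inflow_po2 = np.array([20.0, 40.0, 60.0])   # mmHg
    table = rng.uniform(0.0, 60.0, size=(3, 3, 3))  # placeholder for simulated DOC data

    interp = RegularGridInterpolator((velocity, proximity, inflow_po2), table)

    def sample_po2(n):
        """Draw n tumour points with randomly sampled vascular conditions and
        return trilinearly interpolated oxygen tensions."""
        v = rng.uniform(velocity[0], velocity[-1], n)
        d = rng.uniform(proximity[0], proximity[-1], n)
        p = rng.uniform(inflow_po2[0], inflow_po2[-1], n)
        return interp(np.column_stack([v, d, p]))  # default "linear" method = trilinear

    po2_distribution = sample_po2(100000)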

  16. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, were close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. A single 2D Gaussian distribution or a mixture of multiple Gaussian components was employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  17. Development of an accurate 3D Monte Carlo broadband atmospheric radiative transfer model

    Science.gov (United States)

    Jones, Alexandra L.

    Radiation is the ultimate source of energy that drives our weather and climate. It is also the fundamental quantity detected by satellite sensors from which earth's properties are inferred. Radiative energy from the sun and emitted from the earth and atmosphere is redistributed by clouds in one of their most important roles in the atmosphere. Without accurately representing these interactions we greatly decrease our ability to successfully predict climate change, weather patterns, and to observe our environment from space. The remote sensing algorithms and dynamic models used to study and observe earth's atmosphere all parameterize radiative transfer with approximations that reduce or neglect horizontal variation of the radiation field, even in the presence of clouds. Despite having complete knowledge of the underlying physics at work, these approximations persist due to perceived computational expense. In the current context of high resolution modeling and remote sensing observations of clouds, from shallow cumulus to deep convective clouds, and given our ever advancing technological capabilities, these approximations have been exposed as inappropriate in many situations. This presents a need for accurate 3D spectral and broadband radiative transfer models to provide bounds on the interactions between clouds and radiation to judge the accuracy of similar but less expensive models and to aid in new parameterizations that take into account 3D effects when coupled to dynamic models of the atmosphere. Developing such a state of the art model based on the open source, object-oriented framework of the I3RC Monte Carlo Community Radiative Transfer ("IMC-original") Model is the task at hand. It has involved incorporating (1) thermal emission sources of radiation ("IMC+emission model"), allowing it to address remote sensing problems involving scattering of light emitted at earthly temperatures as well as spectral cooling rates, (2) spectral integration across an arbitrary

  18. Index of Effort: An Analytical Model for Evaluating and Re-Directing Student Recruitment Activities for a Local Community College.

    Science.gov (United States)

    Landini, Albert J.

    This index of effort is proposed as a means by which those in charge of student recruitment activities at community colleges can be sure that their efforts are being directed toward all of the appropriate population. The index is an analytical model based on the concept of socio-economic profiles, using small area 1970 census data, and is the…

  19. Prediction Model for Object Oriented Software Development Effort Estimation Using One Hidden Layer Feed Forward Neural Network with Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Chandra Shekhar Yadav

    2014-01-01

    Full Text Available The budget computation for software development is affected by the prediction of software development effort and schedule. Software development effort and schedule can be predicted precisely on the basis of past software project data sets. In this paper, a model for object-oriented software development effort estimation using a one hidden layer feed forward neural network (OHFNN) has been developed. The model has been further optimized with the help of a genetic algorithm by taking the weight vector obtained from the OHFNN as the initial population for the genetic algorithm. Convergence has been obtained by minimizing the sum of squared errors of each input vector, and the optimal weight vector has been determined to predict the software development effort. The model has been empirically validated on the PROMISE software engineering repository dataset. The model's predictions are more accurate than those of the well-established constructive cost model (COCOMO).
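
    A minimal sketch of the two ingredients named in the abstract, assuming nothing about the actual network size or dataset: a one-hidden-layer feed-forward network evaluated from a flat weight vector, and a sum-of-squared-errors fitness whose minimiser seeds a genetic-algorithm population. All data and dimensions below are toy values.

    import numpy as np

    rng = np.random.default_rng(2)

    def forward(weights, X, n_hidden):
        """One-hidden-layer feed-forward network: X -> sigmoid hidden -> linear output."""
        n_in = X.shape[1]
        W1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
        b1 = weights[n_in * n_hidden: n_in * n_hidden + n_hidden]
        W2 = weights[n_in * n_hidden + n_hidden: -1]
        b2 = weights[-1]
        hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        return hidden @ W2 + b2

    def sse(weights, X, y, n_hidden):
        """Sum of squared errors: the training objective and (negated) GA fitness."""
        return float(np.sum((forward(weights, X, n_hidden) - y) ** 2))

    # Toy data standing in for a project repository (size/complexity metrics -> effort).
    X = rng.uniform(size=(50, 4))
    y = X @ np.array([2.0, 1.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(50)

    n_hidden = 6
    n_weights = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    trained = rng.standard_normal(n_weights) * 0.1            # stand-in for trained OHFNN weights
    population = trained + 0.05 * rng.standard_normal((30, n_weights))  # GA initial population
    fitness = np.array([sse(ind, X, y, n_hidden) for ind in population])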

  20. Diffusion Monte Carlo determination of the binding energy of the sup 4 He nucleus for model Wigner potentials

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F. (Manchester Univ. (United Kingdom). Inst. of Science and Technology); Buendia, E. (Granada Univ. (Spain). Dept. de Fisica Moderna); Flynn, M.F. (Kent State Univ., OH (United States). Dept. of Physics); Guardiola, R. (Valencia Univ. (Spain). Dept. de Fisica Atomica y Nuclear)

    1992-02-01

    The diffusion Monte Carlo method is used to integrate the four-body Schroedinger equation corresponding to the {sup 4}He nucleus for several model potentials of Wigner type. Good importance sampling trial functions are used, and the sampling is large enough to obtain the ground-state energy with an error of only 0.01 to 0.02 MeV. (author).

  1. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    Full Text Available The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used are based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed, and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.
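
    The paper's pseudo-marginal Sequential Monte Carlo machinery is beyond a short example, but the underlying simulation-based allocation idea can be sketched as follows (the Gaussian-copula loss model and all figures are invented for illustration): simulate correlated line-of-business losses and allocate capital by each line's average contribution to the total loss in the tail scenarios.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical correlated annual losses for three lines of business, simulated
    # with a Gaussian copula and lognormal marginals (placeholder for a real model).
    corr = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
    n_sim = 200000
    z = rng.multivariate_normal(np.zeros(3), corr, n_sim)
    losses = np.exp(0.5 * z + np.log([10.0, 5.0, 2.0]))   # per-line losses

    total = losses.sum(axis=1)
    alpha = 0.99
    var = np.quantile(total, alpha)
    tail = total >= var

    # Euler-style allocation under expected shortfall: each line's capital is its
    # average contribution to the total loss in the tail scenarios.
    es_total = total[tail].mean()
    allocation = losses[tail].mean(axis=0)
    print("ES(99%):", round(es_total, 2),
          "allocations:", np.round(allocation, 2),
          "sum:", round(allocation.sum(), 2))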

  2. Monte Carlo simulation of the Leksell Gamma Knife: I. Source modelling and calculations in homogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Moskvin, Vadim [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]. E-mail: vmoskvin@iupui.edu; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2002-06-21

    The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife, a precision method for treating intracranial lesions. Radiation from a single {sup 60}Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet for photons and electrons were calculated. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife dosimetry, has been computed and compared with experimental results, with calculations performed by other authors with the use of the EGS4 Monte Carlo code, and data provided by the treatment planning system Gamma Plan. Good agreement was found between these data and results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery. (author)

  3. Habitat models to assist plant protection efforts in Shenandoah National Park, Virginia, USA

    Science.gov (United States)

    Van Manen, F.T.; Young, J.A.; Thatcher, C.A.; Cass, W.B.; Ulrey, C.

    2005-01-01

    During 2002, the National Park Service initiated a demonstration project to develop science-based law enforcement strategies for the protection of at-risk natural resources, including American ginseng (Panax quinquefolius L.), bloodroot (Sanguinaria canadensis L.), and black cohosh (Cimicifuga racemosa (L.) Nutt. [syn. Actaea racemosa L.]). Harvest pressure on these species is increasing because of the growing herbal remedy market. We developed habitat models for Shenandoah National Park and the northern portion of the Blue Ridge Parkway to determine the distribution of favorable habitats of these three plant species and to demonstrate the use of that information to support plant protection activities. We compiled locations for the three plant species to delineate favorable habitats with a geographic information system (GIS). We mapped potential habitat quality for each species by calculating a multivariate statistic, Mahalanobis distance, based on GIS layers that characterized the topography, land cover, and geology of the plant locations (10-m resolution). We tested model performance with an independent dataset of plant locations, which indicated a significant relationship between Mahalanobis distance values and species occurrence. We also generated null models by examining the distribution of the Mahalanobis distance values had plants been distributed randomly. For all species, the habitat models performed markedly better than their respective null models. We used our models to direct field searches to the most favorable habitats, resulting in a sizeable number of new plant locations (82 ginseng, 73 bloodroot, and 139 black cohosh locations). The odds of finding new plant locations based on the habitat models were 4.5 (black cohosh) to 12.3 (American ginseng) times greater than random searches; thus, the habitat models can be used to improve the efficiency of plant protection efforts, (e.g., marking of plants, law enforcement activities). The field searches also
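
    The core computation, a Mahalanobis distance between each raster cell and the multivariate mean of the environmental conditions at known plant locations, can be sketched as follows (the layers and values are synthetic stand-ins, not the park data):

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical GIS layers flattened to one row per raster cell:
    # columns = elevation, slope, canopy cover, distance to stream (standardised).
    cells = rng.normal(size=(100000, 4))
    # Environmental conditions at known plant locations (placeholder values).
    plants = rng.normal(loc=[0.5, -0.2, 0.8, -0.5], scale=0.3, size=(200, 4))

    mean = plants.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(plants, rowvar=False))

    def mahalanobis_sq(x, mean, cov_inv):
        """Squared Mahalanobis distance of each cell from the 'typical' plant habitat."""
        d = x - mean
        return np.einsum("ij,jk,ik->i", d, cov_inv, d)

    d2 = mahalanobis_sq(cells, mean, cov_inv)
    # Smaller distances = more favourable habitat; e.g. search the best 5% of cells first.
    favourable = d2 <= np.quantile(d2, 0.05)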

  4. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    Energy Technology Data Exchange (ETDEWEB)

    Hulett, David T. [Hulett and Associates, LLC (United States); Nosbisch, Michael R. [Project Time and Cost, Inc. (United States)

    2012-07-01

    - Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed. Of
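
    A toy version of the integrated cost-schedule simulation described here (not the authors' software; the durations, costs and risk-driver probability are invented) illustrates how a risk driver, percentile contingency levels and the joint cost-time probability all come out of the same set of Monte Carlo iterations:

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy serial schedule: three activities with triangular duration uncertainty (days),
    # a daily "burn rate" cost, and one risk driver that, when it occurs, multiplies
    # the duration of activity B (the risk-driver idea in miniature).
    n = 20000
    dur_a = rng.triangular(8, 10, 14, n)
    dur_b = rng.triangular(15, 20, 30, n)
    dur_c = rng.triangular(5, 6, 9, n)
    risk_occurs = rng.random(n) < 0.30                        # 30% probability risk driver
    dur_b = np.where(risk_occurs, dur_b * rng.triangular(1.05, 1.15, 1.4, n), dur_b)

    duration = dur_a + dur_b + dur_c      # serial path; a real CPM model would take the
                                          # longest path through the activity network
    cost = duration * 12_000 + rng.normal(80_000, 10_000, n)  # burn-rate + fixed-cost risk

    p80_finish = np.percentile(duration, 80)
    p80_cost = np.percentile(cost, 80)
    joint = np.mean((duration <= p80_finish) & (cost <= p80_cost))
    print(f"P80 duration {p80_finish:.1f} d, P80 cost {p80_cost:,.0f}, "
          f"joint probability of meeting both {joint:.2f}")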

  5. Validating a virtual source model based in Monte Carlo Method for profiles and percent deep doses calculation

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications. More specifically, it has received growing attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase-space data file (phsp) is widely used. However, obtaining precise results requires detailed information about the accelerator's head, and commonly the supplier does not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications. It is the most efficient method for particle generation and can provide accuracy similar to that obtained when a phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for profiles and percent depth dose calculations. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. The current model is easy to build and test. (author)

  6. The 3-Attractor Water Model: Monte-Carlo Simulations with a New, Effective 2-Body Potential (BMW)

    Directory of Open Access Journals (Sweden)

    Francis Muguet

    2003-02-01

    Full Text Available According to the precepts of the 3-attractor (3-A) water model, effective 2-body water potentials should feature as local minima the bifurcated and inverted water dimers in addition to the well-known linear water dimer global minimum. In order to test the 3-A model, a new pairwise effective intermolecular rigid water potential has been designed. The new potential is part of a new class of potentials called BMW (Bushuev-Muguet-Water), which is built by modifying existing empirical potentials. This version (BMW v. 0.1) has been designed by modifying the SPC/E empirical water potential. It is a preliminary version well suited for exploratory Monte-Carlo simulations. The shape of the potential energy surface (PES) around each local minimum has been approximated with the help of Gaussian functions. Classical Monte Carlo simulations have been carried out for liquid water in the NPT ensemble for a very wide range of state parameters up to the supercritical water regime. Thermodynamic properties are reported. The radial distribution functions (RDFs) have been computed and are compared with the RDFs obtained from Neutron Scattering experimental data. Our preliminary Monte-Carlo simulations show that the seemingly unconventional hypotheses of the 3-A model are most plausible. The simulation has also uncovered a totally new role for 2-fold H-bonds.

  7. Modeling the Movement of Homicide by Type to Inform Public Health Prevention Efforts.

    Science.gov (United States)

    Zeoli, April M; Grady, Sue; Pizarro, Jesenia M; Melde, Chris

    2015-10-01

    We modeled the spatiotemporal movement of hotspot clusters of homicide by motive in Newark, New Jersey, to investigate whether different homicide types have different patterns of clustering and movement. We obtained homicide data from the Newark Police Department Homicide Unit's investigative files from 1997 through 2007 (n = 560). We geocoded the address at which each homicide victim was found and recorded the date of and the motive for the homicide. We used cluster detection software to model the spatiotemporal movement of statistically significant homicide clusters by motive, using census tract and month of occurrence as the spatial and temporal units of analysis. Gang-motivated homicides showed evidence of clustering and diffusion through Newark. Additionally, gang-motivated homicide clusters overlapped to a degree with revenge and drug-motivated homicide clusters. Escalating dispute and nonintimate familial homicides clustered; however, there was no evidence of diffusion. Intimate partner and robbery homicides did not cluster. By tracking how homicide types diffuse through communities and determining which places have ongoing or emerging homicide problems by type, we can better inform the deployment of prevention and intervention efforts.

  8. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Science.gov (United States)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of

  9. Modeling of FREYA fast critical experiments with the Serpent Monte Carlo code

    International Nuclear Information System (INIS)

    Fridman, E.; Kochetkov, A.; Krása, A.

    2017-01-01

    Highlights: • FREYA – the EURATOM project executed to support fast lead-based reactor systems. • Critical experiments in the VENUS-F facility during the FREYA project. • Characterization of the critical VENUS-F cores with Serpent. • Comparison of the numerical Serpent results to the experimental data. - Abstract: The FP7 EURATOM project FREYA has been executed between 2011 and 2016 with the aim of supporting the design of fast lead-cooled reactor systems such as MYRRHA and ALFRED. During the project, a number of critical experiments were conducted in the VENUS-F facility located at SCK·CEN, Mol, Belgium. The Monte Carlo code Serpent was one of the codes applied for the characterization of the critical VENUS-F cores. Four critical configurations were modeled with Serpent, namely the reference critical core, the clean MYRRHA mock-up, the full MYRRHA mock-up, and the critical core with the ALFRED island. This paper briefly presents the VENUS-F facility, provides a detailed description of the aforementioned critical VENUS-F cores, and compares the numerical results calculated by Serpent to the available experimental data. The compared parameters include keff, point kinetics parameters, fission rate ratios of important actinides to that of U235 (spectral indices), axial and radial distribution of fission rates, and lead void reactivity effect. The reported results show generally good agreement between the calculated and experimental values. Nevertheless, the paper also reveals some noteworthy issues requiring further attention. This includes the systematic overprediction of reactivity and systematic underestimation of the U238 to U235 fission rate ratio.

  10. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    Science.gov (United States)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry. This limited the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that in the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
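
    The navigation primitive implied by quadratic bounding surfaces is the distance from a point along a direction to a general quadric; a self-contained sketch (not the package's GPU code) is:

    import numpy as np

    def quadric_distance(p, u, A, b, c):
        """Distance along unit direction u from point p to the quadric
        x^T A x + b.x + c = 0, or inf if the surface is not hit going forward."""
        qa = u @ A @ u
        qb = 2.0 * (p @ A @ u) + b @ u
        qc = p @ A @ p + b @ p + c
        if abs(qa) < 1e-12:                       # degenerate case: effectively a plane
            return -qc / qb if qb != 0 and -qc / qb > 0 else np.inf
        disc = qb * qb - 4.0 * qa * qc
        if disc < 0.0:
            return np.inf
        r = np.sqrt(disc)
        candidates = [t for t in ((-qb - r) / (2 * qa), (-qb + r) / (2 * qa)) if t > 1e-9]
        return min(candidates) if candidates else np.inf

    # Example: unit cylinder about the z-axis (x^2 + y^2 - 1 = 0).
    A = np.diag([1.0, 1.0, 0.0])
    b = np.zeros(3)
    c = -1.0
    p = np.array([0.0, 0.0, 0.0])
    u = np.array([1.0, 0.0, 0.0])
    print(quadric_distance(p, u, A, b, c))        # -> 1.0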

  11. A Monte Carlo Model for Neutron Coincidence Counting with Fast Organic Liquid Scintillation Detectors

    International Nuclear Information System (INIS)

    Gamage, Kelum A.A.; Joyce, Malcolm J.; Cave, Frank D.

    2013-06-01

    Neutron coincidence counting is an established, nondestructive method for the qualitative and quantitative analysis of nuclear materials. Several even-numbered nuclei of the actinide isotopes, and especially even-numbered plutonium isotopes, undergo spontaneous fission, resulting in the emission of neutrons which are correlated in time. The characteristics of this i.e. the multiplicity can be used to identify each isotope in question. Similarly, the corresponding characteristics of isotopes that are susceptible to stimulated fission are somewhat isotope-related, and also dependent on the energy of the incident neutron that stimulates the fission event, and this can hence be used to identify and quantify isotopes also. Most of the neutron coincidence counters currently used are based on 3 He gas tubes. In the 3 He-filled gas proportional-counter, the (n, p) reaction is largely responsible for the detection of slow neutrons and hence neutrons have to be slowed down to thermal energies. As a result, moderator and shielding materials are essential components of many systems designed to assess quantities of fissile materials. The use of a moderator, however, extends the die-away time of the detector necessitating a larger coincidence window and, further, 3 He is now in short supply and expensive. In this paper, a simulation based on the Monte Carlo method is described which has been performed using MCNPX 2.6.0, to model the geometry of a sector-shaped liquid scintillation detector in response to coincident neutron events. The detection of neutrons from a mixed-oxide (MOX) fuel pellet using an organic liquid scintillator has been simulated for different thicknesses of scintillators. In this new neutron detector, a layer of lead has been used to reduce the gamma-ray fluence reaching the scintillator. The effect of lead for neutron detection has also been estimated by considering different thicknesses of lead layers. (authors)

  12. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-15

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion so that the percolation properties of the boron sub-network is no more a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms
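
    The kinetic Monte Carlo machinery referred to above can be reduced to a generic event-selection step: pick an event with probability proportional to its rate and advance time by an exponential waiting time. The sketch below uses an invented three-event rate table, not the thesis' dissolution/recondensation model:

    import numpy as np

    rng = np.random.default_rng(7)

    def kmc_step(rates, t):
        """One kinetic Monte Carlo step: choose an event with probability proportional
        to its rate and advance the clock by an exponentially distributed waiting time."""
        total = rates.sum()
        event = rng.choice(len(rates), p=rates / total)
        dt = rng.exponential(1.0 / total)
        return event, t + dt

    # Toy rate table for surface sites: index 0 = dissolve a silicon site,
    # 1 = recondense dissolved silicon, 2 = release a soluble boron site (hypothetical).
    rates = np.array([5.0, 2.0, 8.0])   # events per unit time
    t, counts = 0.0, np.zeros(3, dtype=int)
    for _ in range(10000):
        event, t = kmc_step(rates, t)
        counts[event] += 1
    print(counts / counts.sum(), "after t =", round(t, 2))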

  13. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion so that the percolation properties of the boron sub-network is no more a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  14. Modeling the structure of amorphous MoS3: a neutron diffraction and reverse Monte Carlo study.

    Science.gov (United States)

    Hibble, Simon J; Wood, Glenn B

    2004-01-28

    A model for the structure of amorphous molybdenum trisulfide, a-MoS3, has been created using reverse Monte Carlo methods. This model, which consists of chains of MoS6 units sharing three sulfurs with each of its two neighbors and forming alternate long, nonbonded, and short, bonded, Mo-Mo separations, is a good fit to the neutron diffraction data and is chemically and physically realistic. The paper identifies the limitations of previous models based on Mo3 triangular clusters in accounting for the available experimental data.
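
    Reverse Monte Carlo acceptance is driven by the fit to the measured data rather than by an energy; a generic sketch of that test (with synthetic stand-in data, not the neutron-diffraction measurements) is:

    import numpy as np

    rng = np.random.default_rng(8)

    def rmc_accept(chi2_old, chi2_new, rng):
        """Standard reverse Monte Carlo test: always accept moves that improve the fit
        to the data, otherwise accept with probability exp(-(chi2_new - chi2_old)/2)."""
        if chi2_new <= chi2_old:
            return True
        return rng.random() < np.exp(-(chi2_new - chi2_old) / 2.0)

    def chi_squared(g_model, g_data, sigma):
        """Goodness of fit of a model pair-correlation function to the measured one."""
        return float(np.sum(((g_model - g_data) / sigma) ** 2))

    # Toy data standing in for a pair-correlation function from diffraction.
    r = np.linspace(1.0, 10.0, 50)
    g_data = 1.0 + np.exp(-r) * np.sin(3 * r)
    sigma = 0.02

    g_old = g_data + rng.normal(0, 0.05, r.size)
    g_new = g_data + rng.normal(0, 0.04, r.size)   # model after a trial atom move
    accepted = rmc_accept(chi_squared(g_old, g_data, sigma),
                          chi_squared(g_new, g_data, sigma), rng)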

  15. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    Science.gov (United States)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data (2) the certainty with which regression coefficients are known and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most
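
    The algorithm outlined above amounts to sampling pressure, oxygen fugacity and regression coefficients within their uncertainties and counting how many draws reproduce the observed core/mantle ratios. A stripped-down, single-element sketch (all coefficients, ranges and the liquidus relation are hypothetical; the real analysis requires several elements simultaneously) is:

    import numpy as np

    rng = np.random.default_rng(9)

    # Hypothetical regression for a metal/silicate partition coefficient:
    # log D = a + b/T + c*P/T + d*log(fO2), with 1-sigma uncertainties on each term.
    coef_mean = np.array([1.5, 3000.0, -50.0, -0.5])
    coef_sig = np.array([0.2, 400.0, 10.0, 0.05])

    # Acceptable core/mantle ratio (expressed as log D) for this element, with tolerance.
    logD_target, logD_tol = 2.0, 0.3

    n_trials, solutions = 50000, 0
    for _ in range(n_trials):
        a, b, c, d = rng.normal(coef_mean, coef_sig)      # sample regression coefficients
        P = rng.uniform(10.0, 60.0)                       # GPa
        fO2 = rng.uniform(-4.0, -1.0)                     # log units relative to IW
        T = 1800.0 + 30.0 * P                             # simple stand-in for a liquidus
        logD = a + b / T + c * P / T + d * fO2
        if abs(logD - logD_target) < logD_tol:
            solutions += 1

    print(f"{solutions} of {n_trials} sampled (P, fO2, coefficient) combinations "
          "are consistent with the target core/mantle ratio")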

  16. Environmental dose rate heterogeneity of beta radiation and its implications for luminescence dating: Monte Carlo modelling and experimental validation

    DEFF Research Database (Denmark)

    Nathan, R.P.; Thomas, P.J.; Jain, M.

    2003-01-01

    D-e distributions, and it is important to characterise this effect, both to ensure that dose distributions are not misinterpreted, and that an accurate beta dose rate is employed in dating calculations. In this study, we make a first attempt at providing a description of potential problems in heterogeneous environments ... and identify the likely size of these effects on D-e distributions. The study employs the MCNP 4C Monte Carlo electron/photon transport model, supported by an experimental validation of the code in several case studies. We find good agreement between the experimental measurements and the Monte Carlo simulations. It is concluded that the effect of beta heterogeneity in complex environments for luminescence dating is twofold: (i) the infinite matrix dose rate is not universally applicable; its accuracy depends on the scale of the heterogeneity, and (ii) the interpretation of D-e distributions is complex...

  17. Monte Carlo simulations of phase transitions and lattice dynamics in an atom-phonon model for spin transition compounds

    International Nuclear Information System (INIS)

    Apetrei, Alin Marian; Enachescu, Cristian; Tanasa, Radu; Stoleriu, Laurentiu; Stancu, Alexandru

    2010-01-01

    We apply here the Monte Carlo Metropolis method to a known atom-phonon coupling model for 1D spin transition compounds (STC). These inorganic molecular systems can switch under thermal or optical excitation, between two states in thermodynamical competition, i.e. high spin (HS) and low spin (LS). In the model, the ST units (molecules) are linked by springs, whose elastic constants depend on the spin states of the neighboring atoms, and can only have three possible values. Several previous analytical papers considered a unique average value for the elastic constants (mean-field approximation) and obtained phase diagrams and thermal hysteresis loops. Recently, Monte Carlo simulation papers, taking into account all three values of the elastic constants, obtained thermal hysteresis loops, but no phase diagrams. Employing Monte Carlo simulation, in this work we obtain the phase diagram at T=0 K, which is fully consistent with earlier analytical work; however it is more complex. The main difference is the existence of two supplementary critical curves that mark a hysteresis zone in the phase diagram. This explains the pressure hysteresis curves at low temperature observed experimentally and predicts a 'chemical' hysteresis in STC at very low temperatures. The formation and the dynamics of the domains are also discussed.
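
    A drastically simplified stand-in for the atom-phonon model (lattice positions integrated out, each bond contributing a spin-state-dependent energy, and the high-spin degeneracy entering as an entropy term) shows what Metropolis sampling of such a chain looks like in practice; all parameter values below are invented:

    import numpy as np

    rng = np.random.default_rng(10)

    N = 100                  # chain length
    kB = 1.0
    delta = 1.0              # ligand-field energy of a HS site relative to LS (arbitrary units)
    g = 5.0                  # degeneracy ratio g_HS / g_LS
    k_elastic = {0: 2.0, 1: 1.5, 2: 1.0}   # bond energy by number of HS atoms on the bond

    def local_energy(s, i):
        """Ligand-field term of site i plus the elastic energy of its two bonds."""
        e = delta * s[i]
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                e += k_elastic[s[i] + s[j]]   # LS-LS bonds are the stiffest in this toy model
        return e

    def sweep(s, T):
        for _ in range(N):
            i = rng.integers(N)
            e_old = local_energy(s, i)
            s[i] ^= 1                          # trial flip LS <-> HS
            dE = local_energy(s, i) - e_old
            dS = kB * np.log(g) * (1 if s[i] == 1 else -1)   # HS degeneracy as entropy
            if rng.random() >= np.exp(min(0.0, -(dE - T * dS) / (kB * T))):
                s[i] ^= 1                      # reject and restore the old spin state

    for T in (0.2, 0.5, 1.0):
        s = np.zeros(N, dtype=int)             # 0 = low spin, 1 = high spin
        for _ in range(500):
            sweep(s, T)
        print(f"T = {T}: high-spin fraction {s.mean():.2f}")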

  18. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hui-Jun Guo

    2014-09-01

    Full Text Available Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.

  19. Studies of Top Quark Monte Carlo Modelling with the ATLAS Detector

    CERN Document Server

    Asquith, Lily; The ATLAS collaboration

    2017-01-01

    The status of recent studies of modern Monte Carlo generator setups for the pair production of top quarks at the LHC is presented. Samples at a center-of-mass energy of 13 TeV have been generated for a variety of generators and with different generator configurations. The predictions from these samples are compared to ATLAS data for a variety of kinematic observables.

  20. Examination of a Process Model of Adolescent Smoking Self-Change Efforts in Relation to Gender

    Science.gov (United States)

    MacPherson, Laura; Myers, Mark G.

    2010-01-01

    Little information describes how adolescents change their smoking behavior. This study investigated the role of gender in the relationship of motivation and cognitive variables with adolescent smoking self-change efforts. Self-report and semi-structured interview data from a prospective study of smoking self-change efforts were examined among 98…

  1. MODELING THE STRUCTURAL RELATIONS AMONG LEARNING STRATEGIES, SELF-EFFICACY BELIEFS, AND EFFORT REGULATION

    Directory of Open Access Journals (Sweden)

    Şenol Şen

    2016-06-01

    Full Text Available This research examined the relations among students' learning strategies (elaboration, organization, critical thinking and metacognitive learning strategies), self-efficacy beliefs, and effort regulation. The Motivated Strategies for Learning Questionnaire (MSLQ) was used to measure students' learning strategies, self-efficacy beliefs, and effort regulation. A total of 227 high school students participated in the research. Confirmatory factor analysis and path analysis were performed to examine the relations among the variables of the research. Results revealed that students' metacognitive learning strategies and self-efficacy beliefs statistically significantly predicted their effort regulation. In addition, the students' self-efficacy beliefs directly affected deep cognitive learning strategies and effort regulation but indirectly affected metacognitive learning strategies. Furthermore, 88.6% of the variance in effort regulation was explained by metacognitive learning strategies and self-efficacy beliefs.

  2. Economic effort management in multispecies fisheries: the FcubEcon model

    DEFF Research Database (Denmark)

    Hoff, Ayoe; Frost, Hans; Ulrich, Clara

    2010-01-01

    ... in the development of management tools based on fleets, fisheries, and areas, rather than on unit fish stocks. A natural consequence of this has been to consider effort rather than quota management, a final effort decision being based on fleet-harvest potential and fish-stock-preservation considerations. Effort allocation between fleets should not be based on biological considerations alone, but also on the economic behaviour of fishers, because fisheries management has a significant impact on human behaviour as well as on ecosystem development. The FcubEcon management framework for effort allocation between fleets...

  3. Upending the social ecological model to guide health promotion efforts toward policy and environmental change.

    Science.gov (United States)

    Golden, Shelley D; McLeroy, Kenneth R; Green, Lawrence W; Earp, Jo Anne L; Lieberman, Lisa D

    2015-04-01

    Efforts to change policies and the environments in which people live, work, and play have gained increasing attention over the past several decades. Yet health promotion frameworks that illustrate the complex processes that produce health-enhancing structural changes are limited. Building on the experiences of health educators, community activists, and community-based researchers described in this supplement and elsewhere, as well as several political, social, and behavioral science theories, we propose a new framework to organize our thinking about producing policy, environmental, and other structural changes. We build on the social ecological model, a framework widely employed in public health research and practice, by turning it inside out, placing health-related and other social policies and environments at the center, and conceptualizing the ways in which individuals, their social networks, and organized groups produce a community context that fosters healthy policy and environmental development. We conclude by describing how health promotion practitioners and researchers can foster structural change by (1) conveying the health and social relevance of policy and environmental change initiatives, (2) building partnerships to support them, and (3) promoting more equitable distributions of the resources necessary for people to meet their daily needs, control their lives, and freely participate in the public sphere. © 2015 Society for Public Health Education.

  4. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows answers to be obtained by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (γ-ray beams, 1-10 MeV) and HIMAC and ISIS-800 (high-energy neutrons, 20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.

  5. Shell-model Monte Carlo simulations of the BCS-BEC crossover in few-fermion systems

    DEFF Research Database (Denmark)

    Zinner, Nikolaj Thomas; Mølmer, Klaus; Özen, C.

    2009-01-01

    We study a trapped system of fermions with a zero-range two-body interaction using the shell-model Monte Carlo method, providing ab initio results for the low particle number limit where mean-field theory is not applicable. We present results for the N-body energies as a function of interaction strength, particle number, and temperature. The subtle question of renormalization in a finite model space is addressed, and the convergence of our method and its applicability across the BCS-BEC crossover are discussed. Our findings indicate that very good quantitative results can be obtained on the BCS...

  6. An assessment of the feasibility of using Monte Carlo calculations to model a combined neutron/gamma electronic personal dosemeter

    International Nuclear Information System (INIS)

    Tanner, J.E.; Witts, D.; Tanner, R.J.; Bartlett, D.T.; Burgess, P.H.; Edwards, A.A.; More, B.R.

    1995-01-01

    A Monte Carlo facility has been developed for modelling the response of semiconductor devices to mixed neutron-photon fields. This utilises the code MCNP for neutron and photon transport and a new code, STRUGGLE, which has been developed to model the secondary charged particle transport. It is thus possible to predict the pulse height distribution expected from prototype electronic personal detectors, given the detector efficiency factor. Initial calculations have been performed on a simple passivated implanted planar silicon detector. This device has also been irradiated in neutron, gamma and X ray fields to verify the accuracy of the predictions. Good agreement was found between experiment and calculation. (author)

  7. Assessment of Transport Infrastructure Projects by the use of Monte Carlo Simulation: The CBA-DK Model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2006-01-01

    This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based on both a deterministic calculation following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport and on a stochastic calculation, where risk analysis (RA) is carried out using Monte Carlo Simulation (MCS). After a description of the deterministic and stochastic calculations, emphasis is paid to the RA part of CBA-DK with considerations about which probability distributions to make use of. Furthermore, a comprehensive...

  8. Monte Carlo modeling and optimization of contrast-enhanced radiotherapy of brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lopez, C E; Garnica-Garza, H M, E-mail: hgarnica@cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, Via del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL CP 66600 (Mexico)

    2011-07-07

    Contrast-enhanced radiotherapy involves the use of a kilovoltage x-ray beam to impart a tumoricidal dose to a target into which a radiological contrast agent has previously been loaded in order to increase the x-ray absorption efficiency. In this treatment modality the selection of the proper x-ray spectrum is important since at the energy range of interest the penetration ability of the x-ray beam is limited. For the treatment of brain tumors, the situation is further complicated by the presence of the skull, which also absorbs kilovoltage x-ray in a very efficient manner. In this work, using Monte Carlo simulation, a realistic patient model and the Cimmino algorithm, several irradiation techniques and x-ray spectra are evaluated for two possible clinical scenarios with respect to the location of the target, these being a tumor located at the center of the head and at a position close to the surface of the head. It will be shown that x-ray spectra, such as those produced by a conventional x-ray generator, are capable of producing absorbed dose distributions with excellent uniformity in the target as well as dose differential of at least 20% of the prescribed tumor dose between this and the surrounding brain tissue, when the tumor is located at the center of the head. However, for tumors with a lateral displacement from the center and close to the skull, while the absorbed dose distribution in the target is also quite uniform and the dose to the surrounding brain tissue is within an acceptable range, hot spots in the skull arise which are above what is considered a safe limit. A comparison with previously reported results using mono-energetic x-ray beams such as those produced by a radiation synchrotron is also presented and it is shown that the absorbed dose distributions rendered by this type of beam are very similar to those obtained with a conventional x-ray beam.

  9. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes

    International Nuclear Information System (INIS)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-01-01

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX’s MCTAL for simulation results have been added. The framework is presented and discussed in this paper and example workflows for body counter calibration and calculation of dose conversion coefficients is given to illustrate its application. (paper)

  10. Characterization of an Ar/O2 magnetron plasma by a multi-species Monte Carlo model

    International Nuclear Information System (INIS)

    Bultinck, E; Bogaerts, A

    2011-01-01

    A combined Monte Carlo (MC)/analytical surface model is developed to study the plasma processes occurring during the reactive sputter deposition of TiO x thin films. This model describes the important plasma species with a MC approach (i.e. electrons, Ar + ions, O 2 + ions, fast Ar atoms and sputtered Ti atoms). The deposition of the TiO x film is treated by an analytical surface model. The implementation of our so-called multi-species MC model is presented, and some typical calculation results are shown, such as densities, fluxes, energies and collision rates. The advantages and disadvantages of the multi-species MC model are illustrated by a comparison with a particle-in-cell/Monte Carlo collisions (PIC/MCC) model. Disadvantages include the fact that certain input values and assumptions are needed. However, when these are accounted for, the results are in good agreement with the PIC/MCC simulations, and the calculation time has drastically decreased, which enables us to simulate large and complicated reactor geometries. To illustrate this, the effect of larger target-substrate distances on the film properties is investigated. It is shown that a stoichiometric film is deposited at all investigated target-substrate distances (24, 40, 60 and 80 mm). Moreover, a larger target-substrate distance promotes film uniformity, but the deposition rate is much lower.

  11. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human well-being in financial, environmental, and security terms. The annual maximum river flow data for Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that estimates parameters using the posterior distribution based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space encountered in Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, which then yields better predictions of maximum river flow in Sabah.
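
    A minimal random-walk Metropolis-Hastings sampler for GEV parameters, in the spirit of the study but with synthetic data and priors chosen only for illustration, could look like this:

    import numpy as np
    from scipy.stats import genextreme, norm

    rng = np.random.default_rng(11)

    # Synthetic annual-maximum flows standing in for the real records (units arbitrary).
    data = genextreme.rvs(-0.1, loc=100.0, scale=25.0, size=40, random_state=42)

    def log_posterior(theta):
        c, loc, log_scale = theta              # scipy's shape c corresponds to -xi
        ll = genextreme.logpdf(data, c, loc=loc, scale=np.exp(log_scale)).sum()
        # weakly informative priors (an assumption made for this sketch only)
        lp = (norm.logpdf(c, 0.0, 0.5) + norm.logpdf(loc, 100.0, 100.0)
              + norm.logpdf(log_scale, 3.0, 2.0))
        return ll + lp if np.isfinite(ll) else -np.inf

    theta = np.array([0.0, np.median(data), np.log(data.std())])
    step = np.array([0.05, 2.0, 0.05])          # random-walk proposal widths
    lp_cur, samples = log_posterior(theta), []
    for it in range(20000):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp_cur:    # Metropolis-Hastings acceptance
            theta, lp_cur = prop, lp_prop
        if it >= 5000:                                 # discard burn-in
            samples.append(theta.copy())

    samples = np.array(samples)
    print("posterior means: shape c =", samples[:, 0].mean(),
          "location =", samples[:, 1].mean(),
          "scale =", np.exp(samples[:, 2]).mean())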

  12. Direct Simulation Monte Carlo Application of the Three Dimensional Forced Harmonic Oscillator Model

    Science.gov (United States)

    2017-12-07

    challenges of implementing the full array of VV transitions mentioned earlier. Note here that an account of VV processes is not expected to influence the...method. It takes into account the microscopic reversibility between the excitation and deexcitation processes, and it satisfies the detailed balance...probabilities and is suitable for the direct simulation Monte Carlo method.

  13. Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport

    International Nuclear Information System (INIS)

    Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.

    2002-01-01

    After a nuclear accident releasing radioactive material into the environment, the external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations the external gamma exposure from radionuclides deposited on the surfaces of the urban-industrial environment yields the dominant contribution to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculations of the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods a complex environment can be set up in which the photon transport can be solved in a reliable way. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; a factory or a supermarket is a good example. An industrial environment can differ considerably from urban ones in the types, structures and dimensions of its buildings. These variations will affect the radiation field of such an environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments.

  14. McSCIA: application of the Equivalence Theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    Directory of Open Access Journals (Sweden)

    F. Spada

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAMACHY) is presented. The backward technique is used to efficiently simulate narrow field-of-view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres. The latter geometry is essential for the interpretation of limb satellite measurements, as performed by SCIAMACHY on board ESA's Envisat. The model can simulate UV-vis-NIR radiation. First the ray-tracing algorithm is presented in detail, and then successfully validated against literature references, both in plane-parallel and in spherical geometry. A simple 1-D model is used to explain two different ways of treating absorption. One method uses the single scattering albedo while the other uses the equivalence theorem. The equivalence theorem is based on a separation of absorption and scattering. It is shown that both methods give, in a statistical way, identical results for a wide variety of scenarios. Both absorption methods are included in McSCIA, and it is shown that also for a 3-D case both formulations give identical results. McSCIA limb profiles for atmospheres with and without absorption compare well with those of the state-of-the-art Monte Carlo radiative transfer model MCC++. A simplification of the photon statistics may lead to very fast calculations of absorption features in the atmosphere. However, these simplifications potentially introduce biases in the results. McSCIA does not use simplifications and is therefore a relatively slow implementation of the equivalence theorem.
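
    The two absorption treatments mentioned in the record can be illustrated with a hedged sketch of how a Monte Carlo photon weight might be formed: per-scattering-event weighting by the single scattering albedo, versus scattering-only propagation with Beer-Lambert absorption applied once over the accumulated path (the equivalence-theorem view). The function names and inputs are illustrative assumptions, not McSCIA's actual interfaces.

        import numpy as np

        def weight_single_scattering_albedo(albedos):
            """Photon weight when absorption is applied at every scattering event:
            the weight is multiplied by the local single scattering albedo each time."""
            return np.prod(albedos)

        def weight_equivalence_theorem(absorption_optical_depth_of_path):
            """Photon weight when the trajectory is traced with scattering only and
            absorption is applied afterwards over the whole accumulated path."""
            return np.exp(-absorption_optical_depth_of_path)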

  15. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Abstract Background The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood in two widely studied Hydrophobic Polar (HP) lattice models. Results We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion We demonstrate that REMC utilizing the pull move
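
    As a hedged illustration of the replica-exchange step underlying REMC (a generic parallel-tempering swap with Boltzmann weights, not the authors' implementation), neighbouring temperature replicas exchange conformations with the standard Metropolis criterion:

        import math
        import random

        def attempt_swaps(energies, betas, order):
            """One sweep of replica-exchange swap attempts between neighbouring temperatures.
            energies[c]: HP-model energy of conformation c; betas[i]: inverse temperature of
            slot i; order[i]: which conformation currently sits in temperature slot i."""
            for i in range(len(betas) - 1):
                a, b = order[i], order[i + 1]
                # Acceptance exponent for exchanging the two conformations between slots
                delta = (betas[i] - betas[i + 1]) * (energies[a] - energies[b])
                if delta >= 0 or random.random() < math.exp(delta):
                    order[i], order[i + 1] = b, a
            return order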

  16. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  17. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
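
    For orientation, a minimal, generic sketch of the plug-in Monte Carlo GOF test that the article starts from (not the authors' code): a parameter estimate from the single observed dataset is used to simulate reference datasets, and the observed test statistic is ranked against the simulated ones. The fit, simulate and statistic arguments are hypothetical placeholders for the model at hand; the article's nested procedure adds an inner simulation layer to remove the bias this plug-in step introduces.

        import numpy as np

        def mc_gof_pvalue(data, fit, simulate, statistic, n_sim=999, rng=None):
            """Plug-in Monte Carlo GOF p-value: fit once, simulate reference datasets from
            the fitted model, and rank the observed statistic among the simulated ones."""
            rng = rng or np.random.default_rng()
            theta_hat = fit(data)                    # plug-in estimate from the single dataset
            t_obs = statistic(data, theta_hat)
            t_sim = np.array([statistic(simulate(theta_hat, rng), theta_hat)
                              for _ in range(n_sim)])
            return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)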

  18. Monte Carlo modelling of photodynamic therapy treatments comparing clustered three dimensional tumour structures with homogeneous tissue structures

    Science.gov (United States)

    Campbell, C. L.; Wood, K.; Brown, C. T. A.; Moseley, H.

    2016-07-01

    We explore the effects of three dimensional (3D) tumour structures on depth dependent fluence rates, photodynamic doses (PDD) and fluorescence images through Monte Carlo radiation transfer modelling of photodynamic therapy. The aim of this work was to compare the commonly used uniform tumour densities with non-uniform densities to determine the importance of including 3D models in theoretical investigations. It was found that fractal 3D models resulted in deeper penetration on average of therapeutic radiation and higher PDD. An increase in effective treatment depth of 1 mm was observed for one of the investigated fractal structures, when comparing to the equivalent smooth model. Wide field fluorescence images were simulated, revealing information about the relationship between tumour structure and the appearance of the fluorescence intensity. Our models indicate that the 3D tumour structure strongly affects the spatial distribution of therapeutic light, the PDD and the wide field appearance of surface fluorescence images.

  19. Transmission calculation by empirical numerical model and Monte Carlo simulation in high energy proton radiography of thick objects

    Science.gov (United States)

    Zheng, Na; Xu, Hai-Bo

    2015-10-01

    An empirical numerical model that includes nuclear absorption, multiple Coulomb scattering and energy loss is presented for the calculation of transmission through thick objects in high energy proton radiography. In this numerical model the angular distributions are treated as Gaussians in the laboratory frame. A Monte Carlo program based on the Geant4 toolkit was developed and used for high energy proton radiography experiment simulations and verification of the empirical numerical model. The two models are used to calculate the transmission fraction of carbon and lead step-wedges in proton radiography at 24 GeV/c, and to calculate radial transmission of the French Test Object in proton radiography at 24 GeV/c with different angular cuts. It is shown that the results of the two models agree with each other, and an analysis of the slight differences is given. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)

  20. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    Science.gov (United States)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including VLF signal subionospheric propagation. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profiles to be computed in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is written to provide an end-user interface to the model.

  1. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors

    International Nuclear Information System (INIS)

    Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy

    2015-01-01

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect-of-diatomic-differential-overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.

  2. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    Science.gov (United States)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach for the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. The variations of both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results of simulated human skin reflectance spectra, corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.

  4. FLUKA Monte Carlo Modelling of the CHARM Facility’s Test Area: Update of the Radiation Field Assessment

    CERN Document Server

    Infantino, Angelo

    2017-01-01

    The present Accelerator Note is a follow-up of the previous report CERN-ACC-NOTE-2016-12345. In the present work, the FLUKA Monte Carlo model of CERN’s CHARM facility has been improved to the most up-to-date configuration of the facility, including: new test positions, a global refinement of the FLUKA geometry, a careful review of the transport and physics parameters. Several configurations of the facility, in terms of target material and movable shielding configuration, have been simulated. The full set of results is reported in the following and can act as a reference guide to any potential user of the facility.

  5. An analysis of the OI 1304 A dayglow using a Monte Carlo resonant scattering model with partial frequency redistribution

    Science.gov (United States)

    Meier, R. R.; Lee, J.-S.

    1982-01-01

    The transport of resonance radiation under optically thick conditions is shown to be accurately described by a Monte Carlo model of the atomic oxygen 1304 A airglow triplet in which partial frequency redistribution, temperature gradients, pure absorption and multilevel scattering are accounted for. All features of the data can be explained by photoelectron impact excitation and the resonant scattering of sunlight, where the latter source dominates below 100 and above 500 km and is stronger at intermediate altitudes than previously thought. It is concluded that the OI 1304 A emission can be used in studies of excitation processes and atomic oxygen densities in planetary atmospheres.

  6. Monte Carlo semi-empirical model for Si(Li) x-ray detector: Differences between nominal and fitted parameters

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D' Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)

    2013-05-06

    A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of the efficiency in the photon energy range of 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete Computerized Tomography (CT) scan of the detector allowed the correct crystal dimensions and position inside the capsule to be found. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.

  7. DISPLACE: a dynamic, individual-based model for spatial fishing planning and effort displacement: Integrating underlying fish population models

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Miethe, Tanja

    We previously developed a spatially explicit, individual-based model (IBM) evaluating the bio-economic efficiency of fishing vessel movements between regions according to the catching and targeting of different species based on the most recent high resolution spatial fishery data. The main purpose was to test the effects of alternative fishing effort allocation scenarios related to fuel consumption, energy efficiency (value per litre of fuel), sustainable fish stock harvesting, and profitability of the fisheries. The assumption here was constant underlying resource availability. Now, an advanced... or to the alteration of individual fishing patterns. We demonstrate that integrating the spatial activity of vessels and local fish stock abundance dynamics allows for interactions and more realistic predictions of fishermen behaviour, revenues and stock abundance...

  8. One State's Systems Change Efforts to Reduce Child Care Expulsion: Taking the Pyramid Model to Scale

    Science.gov (United States)

    Vinh, Megan; Strain, Phil; Davidon, Sarah; Smith, Barbara J.

    2016-01-01

    This article describes the efforts funded by the state of Colorado to address unacceptably high rates of expulsion from child care. Based on the results of a 2006 survey, the state of Colorado launched two complementary policy initiatives in 2009 to impact expulsion rates and to improve the use of evidence-based practices related to challenging…

  9. Bodily Effort Enhances Learning and Metacognition: Investigating the Relation Between Physical Effort and Cognition Using Dual-Process Models of Embodiment.

    Science.gov (United States)

    Skulmowski, Alexander; Rey, Günter Daniel

    2017-01-01

    Recent embodiment research revealed that cognitive processes can be influenced by bodily cues. Some of these cues were found to elicit disparate effects on cognition. For instance, weight sensations can inhibit problem-solving performance, but were shown to increase judgments regarding recall probability (judgments of learning; JOLs) in memory tasks. We investigated the effects of physical effort on learning and metacognition by conducting two studies in which we varied whether a backpack was worn or not while 20 nouns were to be learned. Participants entered a JOL for each word and completed a recall test. Experiment 1 (N = 18) revealed that exerting physical effort by wearing a backpack led to higher JOLs for easy nouns, without a notable effect on difficult nouns. Participants who wore a backpack reached higher recall scores. Therefore, physical effort may act as a form of desirable difficulty during learning. In Experiment 2 (N = 30), the influence of physical effort on JOLs and learning disappeared when more difficult nouns were to be learned, implying that a high cognitive load may diminish bodily effects. These findings suggest that physical effort mainly influences superficial modes of thought and raise doubts concerning the explanatory power of metaphor-centered accounts of embodiment for higher-level cognition.

  10. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    Science.gov (United States)

    Shang, Yu; Li, Ting; Chen, Lei; Lin, Yu; Toborek, Michal; Yu, Guoqiang

    2014-05-01

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αDB) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αDB. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αDB; the values of errors in extracting αDB were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αDB using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  11. Monte Carlo study of four-spinon dynamic structure function in antiferromagnetic Heisenberg model

    International Nuclear Information System (INIS)

    Si-Lakhal, B.; Abada, A.

    2003-11-01

    Using Monte Carlo integration methods, we describe the behavior of the exact four-spinon dynamic structure function S4 in the antiferromagnetic spin-1/2 Heisenberg quantum spin chain as a function of the neutron energy ω and momentum transfer k. We also determine the four-spinon continuum, the extent of the region in the (k, ω) plane outside which S4 is identically zero. In each case, the behavior of S4 is shown to be consistent with the four-spinon continuum and compared to that of the exact two-spinon dynamic structure function S2. Overall shape similarity is noted. (author)

  12. Evaluation and Monte Carlo modelling of the response function of the Leake neutron area survey instrument

    International Nuclear Information System (INIS)

    Tagziria, H.; Tanner, R.J.; Bartlett, D.T.; Thomas, D.J.

    2004-01-01

    All available measured data for the response characteristics of the Leake counter have been gathered together. These data, augmented by previously unpublished work, have been compared to Monte Carlo simulations of the instrument's response characteristics in the energy range from thermal to 20 MeV. A response function has been derived, which is recommended as the best currently available for the instrument. Folding this function with workplace energy distributions has enabled an assessment of the impact of this new response function to be made. Similar work, which will be published separately, has been carried out for the NM2 and the Studsvik 2202D neutron area survey instruments

  13. 2D and 3D Modeling Efforts in Fuel Film Cooling of Liquid Rocket Engines (Conference Paper with Briefing Charts)

    Science.gov (United States)

    2017-01-12

    2D and 3D Modeling Efforts in Fuel Film Cooling of Liquid Rocket Engines. Kevin C. Brown, Edward B. Coy, and... wide. As a consequence, the 3D simulations may better model the experimental setup used, but are perhaps not representative of the long circumferential

  14. Nonlinear joint models for individual dynamic prediction of risk of death using Hamiltonian Monte Carlo: application to metastatic prostate cancer

    Directory of Open Access Journals (Sweden)

    Solène Desmée

    2017-07-01

    Abstract Background Joint models of longitudinal and time-to-event data are increasingly used to perform individual dynamic prediction of the risk of an event. However, the difficulty of performing inference in nonlinear models and of calculating the distribution of individual parameters has long limited this approach to linear mixed-effect models for the longitudinal part. Here we use a Bayesian algorithm and a nonlinear joint model to calculate individual dynamic predictions. We apply this approach to predict the risk of death in metastatic castration-resistant prostate cancer (mCRPC) patients with frequent Prostate-Specific Antigen (PSA) measurements. Methods A joint model is built using a large population of 400 mCRPC patients where PSA kinetics is described by a biexponential function and the hazard function is a PSA-dependent function. Using the Hamiltonian Monte Carlo algorithm implemented in the Stan software, and the population parameters estimated in this population as priors, the a posteriori distribution of the hazard function is computed for a new patient knowing his PSA measurements until a given landmark time. The time-dependent area under the ROC curve (AUC) and Brier score are derived to assess discrimination and calibration of the model predictions, first on 200 simulated patients and then on 196 real patients that were not included in building the model. Results Satisfying coverage probabilities of Monte Carlo prediction intervals are obtained for the longitudinal and hazard functions. Individual dynamic predictions provide good predictive performance for landmark times larger than 12 months and horizon times of up to 18 months for both simulated and real data. Conclusions As nonlinear joint models can characterize the kinetics of biomarkers and their link with a time-to-event, this approach could be useful to improve patients' follow-up and the early detection of the patients most at risk.

  15. A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models

    Science.gov (United States)

    Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele

    2017-11-01

    Extreme and rare events are a challenging topic in the field of turbulence. Trying to investigate those instances through the use of traditional numerical tools turns out to be a notoriously difficult task, as they fail to systematically sample the fluctuations around them. On the other hand, we propose that an importance sampling Monte Carlo method can selectively highlight extreme events in remote areas of the phase space and induce their occurrence. We present a new computational approach, based on the path integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers' equation, subjected to a random noise that is white-in-time and power-law correlated in Fourier space, we will prove our concept and benchmark our results with standard CFD methods. Furthermore, we will present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.

  16. Accounting for inhomogeneous broadening in nano-optics by electromagnetic modeling based on Monte Carlo methods

    Science.gov (United States)

    Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico

    2014-01-01

    Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
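
    As a loose, hedged illustration of the Monte Carlo sampling step described above, the inhomogeneously broadened ensemble spectrum is the average of per-element spectra computed at sampled parameter values. Here the per-element calculation (e.g. an electromagnetic simulation such as an FDTD run) is abstracted into a hypothetical single_spectrum function, and a log-normal size polydispersity is assumed; neither comes from the record itself.

        import numpy as np

        def broadened_spectrum(wavelengths, single_spectrum, mean_size=50.0, sigma=0.1,
                               n_samples=500, rng=None):
            """Monte Carlo estimate of the ensemble (inhomogeneously broadened) spectrum:
            sample a size for each element and average the per-element spectra."""
            rng = rng or np.random.default_rng()
            sizes = mean_size * rng.lognormal(mean=0.0, sigma=sigma, size=n_samples)
            spectra = np.array([single_spectrum(wavelengths, s) for s in sizes])
            return spectra.mean(axis=0)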

  17. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
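
    The CSAF derivation described in the record reduces to a ratio of percentiles of the Monte Carlo population distribution. A minimal sketch, where formation stands in for a hypothetical array of per-individual 1′-sulfooxymethyleugenol formation values from the PBK simulations:

        import numpy as np

        def csaf(formation, upper=90):
            """Chemical-specific adjustment factor: an upper percentile of the simulated
            population distribution divided by its median (50th percentile)."""
            return np.percentile(formation, upper) / np.percentile(formation, 50)

        # With the study's simulated population, csaf(samples, 90) would correspond to the
        # reported ~3.2 and csaf(samples, 99) to the reported ~6.4.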

  18. An object kinetic Monte Carlo model for the microstructure evolution of neutron-irradiated reactor pressure vessel steels

    Energy Technology Data Exchange (ETDEWEB)

    Messina, Luca; Olsson, Paer [KTH Royal Institute of Technology, Stockholm (Sweden); Chiapetto, Monica [SCK - CEN, Nuclear Materials Science Institute, Mol (Belgium); Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Becquart, Charlotte S. [Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Malerba, Lorenzo [SCK - CEN, Nuclear Materials Science Institute, Mol (Belgium)

    2016-11-15

    This work presents a full object kinetic Monte Carlo framework for the simulation of the microstructure evolution of reactor pressure vessel (RPV) steels. The model pursues a "gray-alloy" approach, where the effect of solute atoms is seen exclusively as a reduction of the mobility of defect clusters. The same set of parameters yields a satisfactory evolution for two different types of alloys, in very different irradiation conditions: an Fe-C-MnNi model alloy (high flux) and a high-Mn, high-Ni RPV steel (low flux). A satisfactory match with the experimental characterizations is obtained only if assuming a substantial immobilization of vacancy clusters due to solute atoms, which is here verified by means of independent atomistic kinetic Monte Carlo simulations. The microstructure evolution of the two alloys is strongly affected by the dose rate; a predominance of single defects and small defect clusters is observed at low dose rates, whereas larger defect clusters appear at high dose rates. In both cases, the predicted density of interstitial loops matches the experimental solute-cluster density, suggesting that the MnNi-rich nanofeatures might form as a consequence of solute enrichment on immobilized small interstitial loops, which are invisible to the electron microscope. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  19. Initial validation of 4D-model for a clinical PET scanner using the Monte Carlo code gate

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Igor F.; Lima, Fernando R.A.; Gomes, Marcelo S., E-mail: falima@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Vieira, Jose W.; Pacheco, Ludimila M. [Instituto Federal de Educacao, Ciencia e Tecnologia (IFPE), Recife, PE (Brazil); Chaves, Rosa M. [Instituto de Radium e Supervoltagem Ivo Roesler, Recife, PE (Brazil)

    2011-07-01

    Several dedicated computing tools based on Monte Carlo techniques (SimSET, SORTEO, SIMIND, GATE) are currently available for building exposure computational models (ECM) of emission tomography (PET and SPECT). This paper is divided into two steps: (1) using the dedicated code GATE (Geant4 Application for Tomographic Emission) to build a 4D model (where the fourth dimension is time) of a clinical PET scanner from General Electric, the GE ADVANCE, simulating the geometric and electronic structures of this scanner as well as some 4D phenomena, for example the rotating gantry; (2) the next step is to evaluate the performance of the model built here in reproducing the noise equivalent count rate (NEC) test based on the NEMA Standards Publication NU 2-2007 protocols for this tomograph. The results of steps (1) and (2) will be compared with experimental and theoretical values from the literature, showing the actual state of the art of validation. (author)

  20. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    Directory of Open Access Journals (Sweden)

    M. Pecchia

    2011-01-01

    The geometrical complexity and the peculiarities of the Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of the Atucha-2 PHWR were developed using both the MCNP5 and KENO-VI codes. The developed models were applied to calculating reactor criticality states at beginning of life, reactor cell constants, and control rod volumes. The last two applications were relevant for performing subsequent three-dimensional neutron kinetic analyses, since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code HELIOS. These results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.

  1. Development and experimental validation of a Monte Carlo modeling of the neutron emission from a D-T generator

    Science.gov (United States)

    Remetti, Romolo; Lepore, Luigi; Cherubini, Nadia

    2017-01-01

    An extensive use of Monte Carlo simulations led to the identification of an MCNPX input deck for the Thermo Scientific MP320 neutron generator. This input deck is currently utilized at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular for explosives and drugs detection by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components, and to the potentialities offered by the MCNPX code. Validation of the model was obtained by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to those results, suggesting a procedure that might be extended to different models of neutron generators.

  2. A mathematical model for the kidney and estimative of the specific absorbed fractions by Monte Carlo method

    International Nuclear Information System (INIS)

    Todo, A.S.

    1980-01-01

    Presently, the estimates of specific absorbed fractions in various organs of a heterogeneous phantom are based on Monte Carlo calculations for monoenergetic photons uniformly distributed in the organs of an adult phantom. However, it is known that the kidney and some other organs (for example the skeleton) do not retain radionuclides uniformly throughout their internal regions. We therefore developed a model of the kidney including the cortex, medulla and collecting region. This model was used to estimate the specific absorbed fractions, for monoenergetic photons or electrons, in various organs of a heterogeneous phantom, when sources were uniformly distributed in each region of the kidney. All results obtained in this work were compared with those using a homogeneous model for the kidney as presented in ORNL-5000. (Author)

  3. Health Promotion Efforts as Predictors of Physical Activity in Schools: An Application of the Diffusion of Innovations Model

    Science.gov (United States)

    Glowacki, Elizabeth M.; Centeio, Erin E.; Van Dongen, Daniel J.; Carson, Russell L.; Castelli, Darla M.

    2016-01-01

    Background: Implementing a comprehensive school physical activity program (CSPAP) effectively addresses public health issues by providing opportunities for physical activity (PA). Grounded in the Diffusion of Innovations model, the purpose of this study was to identify how health promotion efforts facilitate opportunities for PA. Methods: Physical…

  4. Inverse problem for a physiologically structured population model with variable-effort harvesting

    Directory of Open Access Journals (Sweden)

    Andrusyak Ruslan V.

    2017-04-01

    We consider the inverse problem of determining how the physiological structure of a harvested population evolves in time, and of finding the time-dependent effort to be expended in harvesting, so that the weighted integral of the density, which may be, for example, the total number of individuals or the total biomass, has prescribed dynamics. We give conditions for the existence of a unique, global, weak solution to the problem. Our investigation is carried out using the method of characteristics and a generalization of the Banach fixed-point theorem.

  5. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    International Nuclear Information System (INIS)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-01-01

    Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and radiotherapy with hadrons or ions are widely diffused and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for calculation of both shielding and materials activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and compared with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA

  6. Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter

    CERN Document Server

    Adloff, C; Blaising, J J; Drancourt, C; Espargiliere, A; Gaglione, R; Geffroy, N; Karyotakis, Y; Prast, J; Vouters, G; Francis, K; Repond, J; Schlereth, J; Smith, J; Xia, L; Baldolemar, E; Li, J; Park, S T; Sosebee, M; White, A P; Yu, J; Buanes, T; Eigen, G; Mikami, Y; Watson, N K; Mavromanolakis, G; Thomson, M A; Ward, D R; Yan, W; Benchekroun, D; Hoummada, A; Khoulaki, Y; Apostolakis, J; Dotti, A; Folger, G; Ivantchenko, V; Uzhinskiy, V; Benyamna, M; Cârloganu, C; Fehr, F; Gay, P; Manen, S; Royer, L; Blazey, G C; Dyshkant, A; Lima, J G R; Zutshi, V; Hostachy, J Y; Morin, L; Cornett, U; David, D; Falley, G; Gadow, K; Gottlicher, P; Gunter, C; Hermberg, B; Karstensen, S; Krivan, F; Lucaci-Timoce, A I; Lu, S; Lutz, B; Morozov, S; Morgunov, V; Reinecke, M; Sefkow, F; Smirnov, P; Terwort, M; Vargas-Trevino, A; Feege, N; Garutti, E; Marchesini, I; Ramilli, M; Eckert, P; Harion, T; Kaplan, A; Schultz-Coulon, H Ch; Shen, W; Stamen, R; Bilki, B; Norbeck, E; Onel, Y; Wilson, G W; Kawagoe, K; Dauncey, P D; Magnan, A M; Bartsch, V; Wing, M; Salvatore, F; Alamillo, E Calvo; Fouz, M C; Puerta-Pelayo, J; Bobchenko, B; Chadeeva, M; Danilov, M; Epifantsev, A; Markin, O; Mizuk, R; Novikov, E; Popov, V; Rusinov, V; Tarkovsky, E; Kirikova, N; Kozlov, V; Smirnov, P; Soloviev, Y; Buzhan, P; Ilyin, A; Kantserov, V; Kaplin, V; Karakash, A; Popova, E; Tikhomirov, V; Kiesling, C; Seidel, K; Simon, F; Soldner, C; Szalay, M; Tesar, M; Weuste, L; Amjad, M S; Bonis, J; Callier, S; Conforti di Lorenzo, S; Cornebise, P; Doublet, Ph; Dulucq, F; Fleury, J; Frisson, T; van der Kolk, N; Li, H; Martin-Chassard, G; Richard, F; de la Taille, Ch; Poschl, R; Raux, L; Rouene, J; Seguin-Moreau, N; Anduze, M; Boudry, V; Brient, J-C; Jeans, D; Mora de Freitas, P; Musat, G; Reinhard, M; Ruan, M; Videau, H; Bulanek, B; Zacek, J; Cvach, J; Gallus, P; Havranek, M; Janata, M; Kvasnicka, J; Lednicky, D; Marcisovsky, M; Polak, I; Popule, J; Tomasek, L; Tomasek, M; Ruzicka, P; Sicho, P; Smolik, J; Vrba, V; Zalesak, J; Belhorma, B; Ghazlane, H; Takeshita, T; Uozumi, S; Gotze, M; Hartbrich, O; Sauer, J; Weber, S; Zeitnitz, C

    2013-01-01

    Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.

  7. Simulation on Mechanical Properties of Tungsten Carbide Thin Films Using Monte Carlo Model

    Directory of Open Access Journals (Sweden)

    Liliam C. Agudelo-Morimitsu

    2012-12-01

    The aim of this paper is to study the mechanical behavior of a substrate-coating system using simulation methods. The contact stresses and the elastic deformation were analyzed by applying a normal load to the surface of a system consisting of a tungsten carbide (WC) thin film, which is used as a wear-resistant material, and a stainless steel substrate. The analysis is based on Monte Carlo simulations using the Metropolis algorithm. The phenomenon was simulated from a face-centered cubic (fcc) crystalline structure for both the coating and the substrate, assuming that the uniaxial strain is taken along the z-axis. Results were obtained for different values of the normal load applied to the surface of the coating, yielding the stress-strain curves. From these curves, Young's modulus was obtained with a value of 600 GPa, similar to reported values.

  8. Analytical model to describe fluorescence spectra of normal and preneoplastic epithelial tissue: comparison with Monte Carlo simulations and clinical measurements.

    Science.gov (United States)

    Chang, Sung K; Arifler, Dizem; Drezek, Rebekah; Follen, Michele; Richards-Kortum, Rebecca

    2004-01-01

    Fluorescence spectroscopy has shown promise for the detection of precancerous changes in vivo. The epithelial and stromal layers of tissue have very different optical properties; the albedo is relatively low in the epithelium and approaches one in the stroma. As precancer develops, the optical properties of the epithelium and stroma are altered in markedly different ways: epithelial scattering and fluorescence increase, and stromal scattering and fluorescence decrease. We present an analytical model of the fluorescence spectrum of a two-layer medium such as epithelial tissue. Our hypothesis is that accounting for the two different tissue layers will provide increased diagnostic information when used to analyze tissue fluorescence spectra measured in vivo. The Beer-Lambert law is used to describe light propagation in the epithelial layer, while light propagation in the highly scattering stromal layer is described with diffusion theory. Predictions of the analytical model are compared to results from Monte Carlo simulations of light propagation under a range of optical properties reported for normal and precancerous epithelial tissue. In all cases, the mean square error between the Monte Carlo simulations and the analytical model is within 15%. Finally, model predictions are compared to fluorescence spectra of normal and precancerous cervical tissue measured in vivo; the lineshape of fluorescence agrees well in both cases, and the decrease in fluorescence intensity from normal to precancerous tissue is correctly predicted to within 5%. Future work will explore the use of this model to extract information about changes in epithelial and stromal optical properties from clinical measurements and the diagnostic value of these parameters. (c) 2004 Society of Photo-Optical Instrumentation Engineers.

  9. Qualification of a Monte Carlo model of photon beams of an Elekta Precise linac

    International Nuclear Information System (INIS)

    Linares R, H. M.; Laguardia, R. A.; Lara M, E.

    2014-08-01

    For the simulation of the accelerator head, determining the parameters that characterize the primary electron beam hitting the target plays a fundamental role in the precision of the Monte Carlo calculations. Applying the methodology proposed by Pena et al. [2007], the qualification of the photon beams (6 MV and 15 MV) of an Elekta Precise accelerator was carried out in this work using the Monte Carlo code EGSnrc. The influence of the characteristics of the primary electron beam on the absorbed dose distribution was studied for the two energies of this equipment. Using different combinations of mean energy and FWHM of the primary electron beam, the dose deposited in a segmented water phantom with its surface at 100 cm from the source was calculated. From the dose deposited in the phantom, depth-dose curves and dose profiles at different depths were built. These curves were compared with values measured in an experimental arrangement similar to the simulated one, applying acceptability criteria based on confidence intervals [Venselaar et al. 2001]. As expected, the dose profiles for small fields were strongly influenced by the radial distribution (FWHM). The energy/FWHM combinations that best reproduce the experimental curves of each photon beam were determined. Once determined, the best combinations (5.75 MeV/2 mm and 11.25 MeV/2 mm, respectively) were used for the generation of the phase spaces and the calculation of field factors. A good correspondence was obtained between the simulations and the measurements for a wide range of field sizes, as well as for different types of detectors, with all results within the tolerance margins. (author)

  10. Molecule-based kinetic Monte Carlo modeling of hydrotreating processes applied to Light Cycle Oil gas oils

    Science.gov (United States)

    Kolb, Max; Pereira de Oliveira, Luis; Verstraete, Jan

    2013-03-01

    A novel kinetic modeling strategy for refining processes for heavy petroleum fractions is proposed. The approach overcomes the notorious lack of molecular detail in describing petroleum fractions. The simulation of the reaction process consists of a two-step procedure. In the first step, a mixture of molecules representing the feedstock of the process is generated via two successive molecular reconstruction algorithms. The first algorithm, termed stochastic reconstruction, generates an equimolar set of molecules with the appropriate analytical properties via a Monte Carlo method. The second algorithm, called reconstruction by entropy maximization, adjusts the molar fractions of the generated molecules in order to further improve the properties of the mixture. In the second step, a kinetic Monte Carlo method is used to simulate the effect of the refining reactions on the previously generated set of molecules. The full two-step methodology has been applied to the hydrotreating of LCO gas oils and to the hydrocracking of vacuum residues from different origins (e.g. Athabasca).
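
    As a hedged, generic illustration of the second step (not the authors' implementation), a rejection-free kinetic Monte Carlo move of the Gillespie type selects which reaction fires next in proportion to its rate and advances a stochastic clock; rates is a hypothetical array of current event rates for the molecule mixture.

        import numpy as np

        def kmc_step(rates, t, rng):
            """One rejection-free kinetic Monte Carlo step: pick a reaction event with
            probability proportional to its rate and advance time by an exponential waiting time."""
            total = rates.sum()
            event = rng.choice(len(rates), p=rates / total)   # which reaction fires
            dt = rng.exponential(1.0 / total)                 # stochastic waiting time
            return event, t + dt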

  11. Adaptive Kernel-density Independence Sampling based Monte Carlo Sampling (A-KISMCS) for inverse hydrological modelling

    Science.gov (United States)

    Pande, S.; Shafiei, M.

    2016-12-01

    Markov chain Monte Carlo (MCMC) methods have been applied in many hydrologic studies to explore posterior parameter distributions within a Bayesian framework. Accurate estimation of posterior parameter distributions is key to reliably estimating marginal likelihood functions and hence measures of Bayesian complexity. This paper introduces an alternative to the well-known random-walk based MCMC samplers. An Adaptive Kernel-density Independence Sampling based Monte Carlo Sampling (A-KISMCS) is proposed. A-KISMCS uses an independence sampler with Metropolis-Hastings (M-H) updates, which ensures that candidate observations are drawn independently of the current state of the chain. This ensures efficient exploration of the target distribution. The bandwidth of the kernel density estimator is also adapted online in order to increase its accuracy and ensure fast convergence to the target distribution. The performance of A-KISMCS is tested on several case studies, including synthetic and real-world case studies of hydrological modelling, and compared with Differential Evolution Adaptive Metropolis (DREAM-zs), which is fundamentally based on random-walk sampling with differential evolution. Results show that while DREAM-zs converges to slightly sharper posterior densities, A-KISMCS is slightly more efficient in tracking the mode of the posteriors.
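
    A minimal sketch of an M-H independence sampler with a kernel-density proposal, assuming a pilot set of samples and a fixed KDE; the actual A-KISMCS adapts the bandwidth online, which is not reproduced here.

        import numpy as np
        from scipy.stats import gaussian_kde

        def independence_sampler(log_post, pilot_samples, n_iter=5000, rng=None):
            """Metropolis-Hastings independence sampler: proposals come from a fixed proposal
            density q (a Gaussian KDE of pilot samples), independent of the current state."""
            rng = rng or np.random.default_rng()
            pilot = np.asarray(pilot_samples)          # shape (n_pilot, n_params)
            kde = gaussian_kde(pilot.T)                # proposal density q
            theta = pilot[-1].copy()
            chain = []
            for _ in range(n_iter):
                prop = kde.resample(1)[:, 0]
                # M-H ratio for independence proposals: pi(prop) q(theta) / (pi(theta) q(prop))
                log_alpha = (log_post(prop) - log_post(theta)
                             + kde.logpdf(theta)[0] - kde.logpdf(prop)[0])
                if np.log(rng.uniform()) < log_alpha:
                    theta = prop
                chain.append(theta.copy())
            return np.array(chain)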

  12. A Monte Carlo Simulation approach for the modeling of free-molecule squeeze-film damping of flexible microresonators

    KAUST Repository

    Leung, Roger

    2010-03-31

    Squeeze-film damping on microresonators is a significant damping source even when the surrounding gas is highly rarefied. This article presents a general modeling approach based on Monte Carlo (MC) simulations for the prediction of squeeze-film damping on resonators in the free-molecule regime. The generality of the approach is demonstrated in its capability of simulating resonators of any shape and with any accommodation coefficient. The approach is validated using both the analytical results of the free-space damping and the experimental data of the squeeze-film damping on a clamped-clamped plate resonator oscillating at its first flexure mode. The effect of oscillation modes on the quality factor of the resonator has also been studied and semi-analytical approximate models for the squeeze-film damping with diffuse collisions have been developed.

  13. Advancements in reactor physics modelling methodology of Monte Carlo Burnup Code MCB dedicated to higher simulation fidelity of HTR cores

    International Nuclear Information System (INIS)

    Cetnar, Jerzy

    2014-01-01

    The recent development of MCB - the Monte Carlo Continuous Energy Burn-up code - is directed towards advanced description of modern reactors, including the double heterogeneity structures that exist in HTRs. In this, we exploit the advantages of the MCB methodology in an integrated approach, where physics, neutronics, burnup, reprocessing, non-stationary process modeling (control rod operation) and refined spatial modeling are carried out in a single flow. This approach allows for implementation of advanced statistical options like analysis of error propagation, perturbation in the time domain, sensitivity and source convergence analyses. It includes statistical analysis of the burnup process, emitted particle collection, thermal-hydraulic coupling, automatic power profile calculations, advanced procedures of burnup step normalization and enhanced post-processing capabilities. (author)

  14. Modeling of neutron and photon transport in iron and concrete radiation shields by using Monte Carlo method

    CERN Document Server

    Žukauskaitė, A; Plukienė, R; Ridikas, D

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows one to obtain answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of the Research Center of Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV), HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intensive spallation neutron source facility of the Rutherford Appleton Laboratory, UK) – high energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.

  15. Monte Carlo thermodynamic and structural properties of the TIP4P water model: dependence on the computational conditions

    Directory of Open Access Journals (Sweden)

    João Manuel Marques Cordeiro

    1998-11-01

    Full Text Available Classical Monte Carlo simulations were carried out on the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed. The findings were compared with the original ones from Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. It is noted that the thermodynamic results are dependent on the boundary conditions used, whereas the usual radial distribution functions g(O-O)(r) and g(O-H)(r) do not depend on them.
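    The NPT-ensemble simulations mentioned above hinge on the Metropolis acceptance rule for trial volume changes. A minimal sketch of that rule is given below, assuming mutually consistent units and volume moves sampled uniformly in V; it is a generic textbook expression, not code from the cited study.

```python
import math
import random

def accept_volume_move(delta_U, pressure, V_old, V_new, N, kT):
    """Metropolis acceptance test for an NPT-ensemble volume move
    (volume sampled uniformly in V; consistent units assumed)."""
    # exp(-[dU + P*dV - N*kT*ln(V_new/V_old)] / kT)
    arg = -(delta_U + pressure * (V_new - V_old)
            - N * kT * math.log(V_new / V_old)) / kT
    if arg >= 0.0:
        return True
    return random.random() < math.exp(arg)

# Example: a small expansion of a 216-molecule box (arbitrary numbers).
accepted = accept_volume_move(delta_U=1.5, pressure=0.1,
                              V_old=1000.0, V_new=1010.0, N=216, kT=2.5)
```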

  16. Criticality of the random-site Ising model: Metropolis, Swendsen-Wang and Wolff Monte Carlo algorithms

    Directory of Open Access Journals (Sweden)

    D.Ivaneyko

    2005-01-01

    Full Text Available We apply numerical simulations to study the criticality of the 3D Ising model with random site quenched dilution. The emphasis is given to issues not discussed in detail before. In particular, we attempt a comparison of different Monte Carlo techniques, discussing regions of their applicability and advantages/disadvantages depending on the aim of a particular simulation set. Moreover, besides evaluation of the critical indices we estimate the universal ratio Γ+/Γ- for the magnetic susceptibility critical amplitudes. Our estimate Γ+/Γ- = 1.67 ± 0.15 is in good agreement with the recent MC analysis of the random-bond Ising model, giving further support that both random-site and random-bond dilutions lead to the same universality class.
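    As a point of reference for the single-spin-flip algorithm compared in this record, the sketch below performs one Metropolis sweep of a site-diluted 3D Ising lattice with periodic boundaries. The lattice size, dilution level and temperature are arbitrary illustrative choices, and the cluster (Swendsen-Wang/Wolff) variants are not shown.

```python
import numpy as np

def metropolis_sweep(spins, occupied, beta, J=1.0):
    """One Metropolis sweep of a site-diluted 3D Ising model with
    periodic boundary conditions. Vacant sites carry no spin."""
    L = spins.shape[0]
    for _ in range(spins.size):
        i, j, k = np.random.randint(0, L, size=3)
        if not occupied[i, j, k]:
            continue                              # diluted site: nothing to flip
        nn_sum = 0
        for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            ii, jj, kk = (i + di) % L, (j + dj) % L, (k + dk) % L
            if occupied[ii, jj, kk]:
                nn_sum += spins[ii, jj, kk]
        dE = 2.0 * J * spins[i, j, k] * nn_sum    # energy cost of flipping
        if dE <= 0.0 or np.random.rand() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

# Example: a 16^3 lattice with 15% random site dilution.
L = 16
occupied = np.random.rand(L, L, L) > 0.15
spins = np.where(np.random.rand(L, L, L) < 0.5, 1, -1)
metropolis_sweep(spins, occupied, beta=0.45)
```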

  17. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States); Li, Ting [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States); State Key Laboratory for Electronic Thin Film and Integrated Device, University of Electronic Science and Technology of China, Chengdu 610054 (China); Chen, Lei; Toborek, Michal [Department of Neurosurgery, University of Kentucky, Lexington, Kentucky 40536 (United States)

    2014-05-12

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean values of errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  18. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability using High-Resolution Cloud Observations

    Science.gov (United States)

    Norris, P. M.; da Silva, A. M., Jr.

    2016-12-01

    Norris and da Silva recently published a method to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation (CDA). The gridcolumn model includes assumed-PDF intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used are MODIS cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. The new approach not only significantly reduces mean and standard deviation biases with respect to the assimilated observables, but also improves the simulated rotational-Raman scattering cloud optical centroid pressure against independent (non-assimilated) retrievals from the OMI instrument. One obvious difficulty for the method, and other CDA methods, is the lack of information content in passive cloud observables on cloud vertical structure, beyond cloud-top and thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard is helpful, better honoring inversion structures in the background state.

  19. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors.

    Directory of Open Access Journals (Sweden)

    Zhuo Zhang

    Full Text Available In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation and methods to correct these effects have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors, and associated correction methods is much more sparse. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose, a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show our proposed methods provide much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters.

  20. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors

    Science.gov (United States)

    Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce A.; Degteva, Marina; Moroz, Brian; Vostrotin, Vadim; Shiskina, Elena; Birchall, Alan; Stram, Daniel O.

    2017-01-01

    In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation and methods to correct these effects have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors, and associated correction methods is much more sparse. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR) models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose), a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS) and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show our proposed methods provide much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters. PMID:28369141

  1. Using Monte Carlo modelling in optimising treatment outcome in advanced head and neck cancers treated with chemoradiotherapy

    International Nuclear Information System (INIS)

    Marcu, L.G.; Bezak, E.

    2011-01-01

    Full text: Advanced head and neck cancers are highly aggressive and therefore require corresponding treatment. Alterations in radiotherapy fractionation are possible ways to improve tumour control. However, while radiotherapy alone has an impact on the short-term prognosis, long-term benefits are only moderate. Chemotherapy, whether concurrent or neoadjuvant, offers a possible way to improve treatment outcome. A Monte Carlo model was developed to demonstrate the role of cisplatin in altered fractionation radiotherapy and to develop a less customary schedule for neoadjuvant chemotherapy. Cisplatin combined with conventional radiotherapy was shown to improve patient survival. A Monte Carlo model was developed to assess the effect of a combined cisplatin-altered fractionation schedule on a virtual head and neck tumour. Radiotherapy was simulated using the linear-quadratic formalism, whereas the cisplatin model was based on experimental data. Various neoadjuvant schedules were designed for cisplatin to optimize the therapeutic ratio without increasing normal tissue toxicity. Concurrent daily cisplatin + altered fractionated radiotherapy has a notable effect on repopulation: while a TCP of 97.9% is reached with 60 Gy accelerated radiotherapy, the same TCP is achieved with daily cisplatin + 26 Gy accelerated radiotherapy (43% of the dose). Neoadjuvant cisplatin administered every third day leads to the same cell kill as daily cisplatin but with lower tissue toxicity. Conclusions: Cisplatin both concurrently and prior to radiation offers superior tumour control as compared to radiation alone. Whilst the model outcome is valid for a small tumour (5 × 10^6 cells), the results can be extrapolated to obtain equivalent data for a clinically detectable tumour.
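    The tumour control figures quoted above come from the linear-quadratic (LQ) cell-kill formalism combined with Poisson TCP statistics. The fragment below is a minimal, generic illustration of that combination; the alpha/beta values and the fractionation scheme are arbitrary assumptions, and no repopulation, hypoxia or chemotherapy effects are included.

```python
import math

def surviving_fraction(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03):
    """Linear-quadratic surviving fraction after fractionated irradiation.
    alpha (Gy^-1) and beta (Gy^-2) are illustrative values only."""
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

def poisson_tcp(initial_cells, sf):
    """Poisson tumour control probability for a given surviving fraction."""
    return math.exp(-initial_cells * sf)

# Example: 5 x 10^6 cells treated with 30 fractions of 2 Gy.
sf = surviving_fraction(2.0, 30)
tcp = poisson_tcp(5e6, sf)
```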

  2. Nonlinear calibration transfer based on hierarchical Bayesian models and Lagrange Multipliers: Error bounds of estimates via Monte Carlo - Markov Chain sampling.

    Science.gov (United States)

    Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris

    2017-01-25

    The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus, cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that coefficients of an actual calibration are a sample of this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange Multipliers technique and Monte-Carlo Markov-Chain sampling. The latter provides realistic estimates for coefficients and prediction together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Markov Modeling of Component Fault Growth Over A Derived Domain of Feasible Output Control Effort Modifications

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of...

  4. The effects of electron binding energy corrections on Monte Carlo models in the diagnostic x-ray energy range

    International Nuclear Information System (INIS)

    Sim, L.H.; Van Doorn, T.; Michael, G.J.

    1996-01-01

    Full text: The effects of incorporating electron binding energy corrections for incoherent scatter (BEC) into Monte Carlo models of X-ray transport in the diagnostic energy range have been examined. The inclusion of BEC can significantly increase computing overhead both in terms of data storage and execution time. In a modern PC application, data storage is unlikely to be a significant problem. However, execution time is a major consideration when assessing the relative usefulness of Monte Carlo systems. If the effectiveness of including BEC is barely more than equivocal, as is the case in some of the studies reported here, then a decision to include them requires consideration of the photon energy being modelled and the data being sought. This work seeks to clarify the real significance of inclusion of BEC by examining their effects without the confounding influence of coherent scattering effects. A Monte Carlo computer code has been developed to study a variety of X-ray transport phenomena. Models of radiation dose deposition in a semi-infinite medium, a similar model in tissue using a realistic source spectrum and diverging beam geometry, a simulation of pencil beam bone densitometry measurements, models of barrier penetration by X-rays and models of the angular distribution of scattered radiation have been undertaken. Results of previous studies have been confirmed. Models of radiation dose deposition for 10 keV, 30 keV and 100 keV photons have shown that inclusion of BEC has only a small effect upon values of total depth dose. Differences are of the same order of magnitude as the standard deviation of the results. A larger effect was noted for the values of dose due to scattered photons. This effect reached a maximum of 7% at 30 keV. Similar results were obtained from a model using a realistic source spectrum and diverging beam geometry. In the simulation of bone densitometry measurements the effects are significant (i.e. of the order of 10%). The angular

  5. The European Integrated Tokamak Modelling (ITM) effort: achievements and first physics results

    NARCIS (Netherlands)

    Falchetto, G.L.; Coster, D.; Coelho, R.; Scott, B.D.; Figini, L.; Kalupin, D.; Nardon, E.; Nowak, S.; Alves, L.L.; Artaud, J.F.; Basiuk, V.; Bizarro, João P.S.; Boulbe, C.; Dinklage, A.; Farina, D.; Faugeras, B.; Ferreira, J.; Figueiredo, A.; Huynh, P.; Imbeaux, F.; Ivanova-Stanik, I.; Jonsson, T.; Klingshirn, H.-J.; Konz, C.; Kus, A.; Marushchenko, N.B.; Pereverzev, G.; Owsiak, M.; Poli, E.; Peysson, Y.; Reimer, R.; Signoret, J.; Sauter, O.; Stankiewicz, R.; Strand, P.; Voitsekhovitch, I.; Westerhof, E.; Zok, T.; Zwingmann, W.; ITM-TF contributors; ASDEX Upgrade team; JET-EFDA Contributors

    2014-01-01

    A selection of achievements and first physics results are presented of the European Integrated Tokamak Modelling Task Force (EFDA ITM-TF) simulation framework, which aims to provide a standardized platform and an integrated modelling suite of validated numerical codes for the simulation and

  6. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    is by active truncated models. In these models only the very top part of the system is represented by a physical model whereas the behavior of the part below the truncation is calculated by numerical models and accounted for in the physical model by active actuators applying relevant forces to the physical...... orders of magnitude faster than conventional numerical methods. The ANN ability to learn and predict the nonlinear relation between a given input and the corresponding output makes the hybrid method tailor-made for the active actuators used in the truncated experiments. All the ANN training can be done...... prior to the experiment and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time, without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...

  7. A method to generate equivalent energy spectra and filtration models based on measurement for multidetector CT Monte Carlo dosimetry simulations

    International Nuclear Information System (INIS)

    Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

    2009-01-01

    The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types
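    A central ingredient of the method is matching the half value layers of the constructed spectrum to the measured ones. The snippet below is a simplified, hypothetical illustration of how HVL1 can be computed numerically for a candidate spectrum; the array names, the air-kerma weighting and the 5 cm bisection bracket are assumptions, not values from the record.

```python
import numpy as np

def hvl1_aluminium(energies_keV, fluence, mu_al_cm, kerma_coeff):
    """Numerically find the first half-value layer (cm of Al) of a photon
    spectrum. mu_al_cm: linear attenuation coefficients of Al at each
    energy; kerma_coeff: air-kerma conversion coefficients (assumed given)."""
    def air_kerma(t_cm):
        attenuated = fluence * np.exp(-mu_al_cm * t_cm)
        return float(np.sum(attenuated * energies_keV * kerma_coeff))

    target = 0.5 * air_kerma(0.0)
    lo, hi = 0.0, 5.0                      # bracket the HVL between 0 and 5 cm
    for _ in range(60):                    # simple bisection
        mid = 0.5 * (lo + hi)
        if air_kerma(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```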

  8. A method to generate equivalent energy spectra and filtration models based on measurement for multidetector CT Monte Carlo dosimetry simulations

    Science.gov (United States)

    Turner, Adam C.; Zhang, Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

    2009-01-01

    The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called “equivalent” source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer’s data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner’s manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types

  9. A method to generate equivalent energy spectra and filtration models based on measurement for multidetector CT Monte Carlo dosimetry simulations.

    Science.gov (United States)

    Turner, Adam C; Zhang, Di; Kim, Hyun J; DeMarco, John J; Cagnon, Chris H; Angel, Erin; Cody, Dianna D; Stevens, Donna M; Primak, Andrew N; McCollough, Cynthia H; McNitt-Gray, Michael F

    2009-06-01

    The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types result in

  10. Kinetic Monte Carlo simulations compared with continuum models and experimental properties of pattern formation during ion beam sputtering

    International Nuclear Information System (INIS)

    Chason, E; Chan, W L

    2009-01-01

    Kinetic Monte Carlo simulations model the evolution of surfaces during low energy ion bombardment using atomic level mechanisms of defect formation, recombination and surface diffusion. Because the individual kinetic processes are completely determined, the resulting morphological evolution can be directly compared with continuum models based on the same mechanisms. We present results of simulations based on a curvature-dependent sputtering mechanism and diffusion of mobile surface defects. The results are compared with a continuum linear instability model based on the same physical processes. The model predictions are found to be in good agreement with the simulations for predicting the early-stage morphological evolution and the dependence on processing parameters such as the flux and temperature. This confirms that the continuum model provides a reasonable approximation of the surface evolution from multiple interacting surface defects using this model of sputtering. However, comparison with experiments indicates that there are many features of the surface evolution that do not agree with the continuum model or simulations, suggesting that additional mechanisms are required to explain the observed behavior.

  11. LMDzT-INCA dust forecast model developments and associated validation efforts

    International Nuclear Information System (INIS)

    Schulz, M; Cozic, A; Szopa, S

    2009-01-01

    The nudged atmosphere global climate model LMDzT-INCA is used to forecast global dust fields. Evaluation is undertaken in retrospective for the forecast results of the year 2006. For this purpose AERONET/Photons sites in Northern Africa and on the Arabian Peninsula are chosen where aerosol optical depth is dominated by dust. Despite its coarse resolution, the model captures 48% of the day to day dust variability near Dakar on the initial day of the forecast. On weekly and monthly scale the model captures respectively 62% and 68% of the variability. Correlation coefficients between daily AOD values observed and modelled at Dakar decrease from 0.69 for the initial forecast day to 0.59 and 0.41 respectively for two days ahead and five days ahead. If one requests that the model should be able to issue a warning for an exceedance of aerosol optical depth of 0.5 and issue no warning in the other cases, then the model was wrong in 29% of the cases for day 0, 32% for day 2 and 35% for day 5. A reanalysis run with archived ECMWF winds is only slightly better (r=0.71) but was in error in 25% of the cases. Both the improved simulation of the monthly versus daily variability and the deterioration of the forecast with time can be explained by model failure to simulate the exact timing of a dust event.

  12. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of various methods of generating them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the statistical distribution law. Further, some applications of the Monte Carlo methods in physics are given. The choice of physical problems is such that the models have available solutions, either exact or approximate, with which the calculations using the Monte Carlo method can be compared. Comparisons show that, for the models considered, good agreement has been obtained
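    As a concrete illustration of the plain Monte Carlo integration reviewed here (a generic textbook example, not code from the record), the snippet below estimates a one-dimensional integral together with its statistical uncertainty.

```python
import random

def mc_integrate(f, a, b, n=100_000):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    with its one-sigma statistical uncertainty."""
    total = 0.0
    total_sq = 0.0
    for _ in range(n):
        x = a + (b - a) * random.random()
        fx = f(x)
        total += fx
        total_sq += fx * fx
    mean = total / n
    variance = total_sq / n - mean * mean
    return (b - a) * mean, (b - a) * (variance / n) ** 0.5

# Example: integral of x^2 on [0, 1]; the exact value is 1/3.
estimate, sigma = mc_integrate(lambda x: x * x, 0.0, 1.0)
```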

  13. Evaluation of Thin Plate Hydrodynamic Stability through a Combined Numerical Modeling and Experimental Effort

    Energy Technology Data Exchange (ETDEWEB)

    Tentner, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Bojanowski, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Feldman, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Wilson, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Solbrekken, G [Univ. of Missouri, Columbia, MO (United States); Jesse, C. [Univ. of Missouri, Columbia, MO (United States); Kennedy, J. [Univ. of Missouri, Columbia, MO (United States); Rivers, J. [Univ. of Missouri, Columbia, MO (United States); Schnieders, G. [Univ. of Missouri, Columbia, MO (United States)

    2017-05-01

    An experimental and computational effort was undertaken in order to evaluate the capability of fluid-structure interaction (FSI) simulation tools to describe the deflection, due to hydrodynamic forces, of a Missouri University Research Reactor (MURR) fuel element plate redesigned for conversion to low-enriched uranium (LEU) fuel. Experiments involving both flat plates and curved plates were conducted in a water flow test loop located at the University of Missouri (MU), at conditions and geometries that can be related to the MURR LEU fuel element. A wider channel gap on one side of the test plate, and a narrower one on the other, represent the differences that could be encountered in a MURR element due to allowed fabrication variability. The difference in the channel gaps leads to a pressure differential across the plate, and hence to plate deflection. The plate deflection induced by the pressure difference was measured at specified locations using a laser measurement technique. High fidelity 3-D simulations of the experiments were performed at MU using the computational fluid dynamics code STAR-CCM+ coupled with the structural mechanics code ABAQUS. Independent simulations of the experiments were performed at Argonne National Laboratory (ANL) using the STAR-CCM+ code and its built-in structural mechanics solver. The simulation results obtained at MU and ANL were compared with the corresponding measured plate deflections.

  14. Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics

    DEFF Research Database (Denmark)

    Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.

    2014-01-01

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, bacteria colonies of cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained an increasing interest. The present article features an extension of a popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to study self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system.

  15. Analytical model of the binary multileaf collimator of tomotherapy for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sterpin, E; Vynckier, S; Salvat, F; Olivera, G H

    2008-01-01

    Helical Tomotherapy (HT) delivers intensity-modulated radiotherapy by means of many configurations of the binary multi-leaf collimator (MLC). The aim of the present study was to devise a method, which we call the 'transfer function' (TF) method, to perform the transport of particles through the MLC much faster than time-consuming Monte Carlo (MC) simulation and with no significant loss of accuracy. The TF method consists of calculating, for each photon in the phase-space file, the attenuation factor for each leaf (up to three) that the photon passes, assuming straight propagation through closed leaves, and storing these factors in a modified phase-space file. To account for the transport through the MLC in a given configuration, the weight of a photon is simply multiplied by the attenuation factors of the leaves that are intersected by the photon ray and are closed. The TF method was combined with the PENELOPE MC code, and validated with measurements for the three static field sizes available (40x5, 40x2.5 and 40x1 cm^2) and for some MLC patterns. The TF method allows a large reduction in computation time, without introducing appreciable deviations from the results of full MC simulations
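    The reweighting step described above is simple enough to summarize in a few lines. The sketch below is a schematic rendering of that idea only (the data layout and function names are hypothetical, not those of the PENELOPE implementation): each phase-space photon carries the indices and pre-computed attenuation factors of the leaves its ray crosses, and its weight is reduced for every such leaf that is closed in the current MLC pattern.

```python
def transfer_function_weight(photon_weight, leaf_indices, attenuation_factors, leaf_open):
    """Reweight a phase-space photon for a given binary MLC configuration.
    leaf_indices/attenuation_factors: the (up to three) leaves crossed by the
    photon ray and their pre-computed attenuation factors; leaf_open: mapping
    from leaf index to True (open) or False (closed)."""
    w = photon_weight
    for idx, attenuation in zip(leaf_indices, attenuation_factors):
        if not leaf_open[idx]:
            w *= attenuation          # attenuate only through closed leaves
    return w

# Example: a photon crossing leaves 12 and 13, with leaf 13 closed.
w = transfer_function_weight(1.0, [12, 13], [0.31, 0.29],
                             leaf_open={12: True, 13: False})
```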

  16. Monte-Carlo Modeling of Parameters of a Subcritical Cascade Reactor Based on MSBR and LMFBR Technologies

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H

    2001-01-01

    Parameters are studied for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of effectively transmuting radioactive nuclear waste and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}·s^{-1} and in the fast booster zone is 5.12·10^{15} cm^{-2}·s^{-1} at k_{eff}=0.98 and proton beam current I=2.1 mA.

  17. Validation of Monte Carlo simulation of mammography with TLD measurement and depth dose calculation with a detailed breast model

    Science.gov (United States)

    Wang, Wenjing; Qiu, Rui; Ren, Li; Liu, Huan; Wu, Zhen; Li, Chunyan; Li, Junli

    2017-09-01

    Mean glandular dose (MGD) is determined not only by the compressed breast thickness (CBT) and the glandular content, but also by the distribution of glandular tissues in the breast. Depth dose inside the breast in mammography has been of wide concern, as glandular dose decreases rapidly with increasing depth. In this study, an experiment using thermoluminescent dosimeters (TLDs) was carried out to validate Monte Carlo simulations of mammography. Percent depth doses (PDDs) at different depth values were measured inside simple breast phantoms of different thicknesses. The experimental values were well consistent with the values calculated by Geant4. Then a detailed breast model with a CBT of 4 cm and a glandular content of 50%, which had been constructed in previous work, was used to study the effects of the distribution of glandular tissues in the breast with Geant4. The breast model was reversed in the direction of compression to get a reverse model with a different distribution of glandular tissues. Depth dose distributions and glandular tissue dose conversion coefficients were calculated. It revealed that the conversion coefficients were about 10% larger when the breast model was reversed, because glandular tissues in the reverse model are concentrated in the upper part of the model.

  18. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal sources of photons with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result it was found that there were no significant differences between AFEs for objects described by mesh models and objects described using solid volumes of GEANT4. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies less than 100 keV

  19. A trans-dimensional Bayesian Markov chain Monte Carlo algorithm for model assessment using frequency-domain electromagnetic data

    Science.gov (United States)

    Minsley, Burke J.

    2011-01-01

    A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, ‘best’ model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favorably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment.

  20. Algorithm development and simulation outcomes for hypoxic head and neck cancer radiotherapy using a Monte Carlo cell division model

    International Nuclear Information System (INIS)

    Harriss, W.M.; Bezak, E.; Yeoh, E.

    2010-01-01

    Full text: A temporal Monte Carlo tumour model, 'Hyp-RT', simulating hypoxic head and neck cancer has been updated and extended to model radiotherapy. The aim is to provide a convenient radiobiological tool for clinicians to evaluate radiotherapy treatment schedules based on many individual tumour properties including oxygenation. FORTRAN95 and JAVA have been utilised to develop the efficient algorithm, which can propagate 10^8 cells. Epithelial cell kill is affected by dose, oxygenation and proliferative status. Accelerated repopulation (AR) has been modelled by increasing the symmetrical stem cell division probability, and reoxygenation (ROx) has been modelled using random incremental boosts of oxygen to the cell population throughout therapy. Results: The stem cell percentage and the degree of hypoxia dominate tumour growth rate. For conventional radiotherapy, 15-25% more dose was required for hypoxic versus oxic tumours, depending on the time of AR onset (0-3 weeks after the start of treatment). ROx of hypoxic tumours resulted in tumour sensitisation and therefore a dose reduction, of up to 35%, varying with the time of onset. Fig. 1 shows results for all combinations of AR and ROx onset times for the moderate hypoxia case. Conclusions: In hypoxic tumours, accelerated repopulation and reoxygenation affect cell kill in the same manner as when the effects are modelled individually; however, the degree of the effect is altered and therefore the combined result is difficult to predict, providing evidence for the usefulness of computer models. Simulations have quantitatively
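    Hypoxic cell kill in models of this kind is commonly handled by modifying the linear-quadratic dose response with an oxygen enhancement ratio (OER). The fragment below is a generic, hypothetical sketch of that modification using an Alper-Howard-Flanders style OER curve; the parameter values are illustrative and it is not the Hyp-RT implementation.

```python
import math

def hypoxic_surviving_fraction(dose_Gy, pO2_mmHg, alpha=0.3, beta=0.03,
                               oer_max=3.0, K=3.0):
    """Linear-quadratic survival with an oxygen-enhancement-ratio (OER)
    dose modification; all parameter values are illustrative assumptions."""
    # OER rises from 1 (anoxic) towards oer_max (well oxygenated).
    oer = (oer_max * pO2_mmHg + K) / (pO2_mmHg + K)
    d_eff = dose_Gy * oer / oer_max     # hypoxic cells see a reduced effective dose
    return math.exp(-(alpha * d_eff + beta * d_eff * d_eff))

# Example: 2 Gy delivered to a cell at 1 mmHg oxygen tension.
sf = hypoxic_surviving_fraction(2.0, 1.0)
```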

  1. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    Science.gov (United States)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and radiotherapy with hadrons or ions are widely diffused and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of present-day computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclides production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and by comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended

  2. Reference interaction site model investigation of homonuclear hard dumbbells under simple fluid theory closures: comparison with Monte Carlo simulations.

    Science.gov (United States)

    Munaò, G; Costa, D; Caccamo, C

    2009-04-14

    We revisit the thermodynamic and structural properties of fluids of homonuclear hard dumbbells in the framework provided by the reference interaction site model (RISM) theory of molecular fluids. Besides the previously investigated Percus-Yevick (PY) approximation, we test the accuracy of other closures to the RISM equations, imported from the theory of simple fluids; specifically, we study the hypernetted chain (HNC), the modified HNC (MHNC) and, less extensively, the Verlet approximations. We implement our approach for models characterized by several different elongations, up to the case of tangent diatomics, and investigate the whole fluid density range. The theoretical predictions are assessed against Monte Carlo simulations, either available from literature or newly generated by us. The HNC and PY equations of state, calculated via different routes, share on the whole the same level of accuracy. The MHNC is applied by enforcing an internal thermodynamic consistency constraint, leading to good predictions for the equation of state as the elongation of the dumbbell increases. As for the radial distribution function, the MHNC appears superior to other theories, especially for tangent diatomics in the high density limit; the PY approximation is better than the HNC and Verlet closures in the high density or elongation regime. Our structural analysis is supplemented by an accurate inversion procedure to reconstruct from Monte Carlo data and RISM the "exact" direct correlation function. In agreement with such calculations and consistent with the forecast of rigorous diagrammatic analysis, all theories predict the occurrence in the direct correlation function of a first cusp inside the dumbbell core and (with the obvious exception of the PY) of a second cusp outside; the cusps' heights are also qualitatively well reproduced by the theories, except at high densities.

  3. RECONSTRUCTION OF PENSION FUND PERFORMANCE MODEL AS AN EFFORT TO WORTHY PENSION FUND GOVERNANCE

    Directory of Open Access Journals (Sweden)

    Apriyanto Gaguk

    2017-08-01

    Full Text Available This study aims to reconstruct the performance assessment model of a Pension Fund by modifying the Baldrige Assessment method, adjusted to the conditions in Dana Pensiun A (Pension Fund A), in order to realize Good Pension Fund Governance. This study design uses case study analysis. The research was conducted at Dana Pensiun A. The informants in the study included the employer, the supervisory board, pension fund management, active and passive pension fund participants, as well as financial services authority elements as the regulator. The result of this research is the construction of a comprehensive and profound retirement performance assessment model with attention to aspects of growth and fair distribution. The model includes the parameters of leadership, strategic planning, stakeholder focus, measurement, analysis, and knowledge management, workforce focus, standard operational procedure focus, results, and just and fair distribution of wealth and power.

  4. A New Software Reliability Growth Model: Multigeneration Faults and a Power-Law Testing-Effort Function

    Directory of Open Access Journals (Sweden)

    Fan Li

    2016-01-01

    Full Text Available Software reliability growth models (SRGMs based on a nonhomogeneous Poisson process (NHPP are widely used to describe the stochastic failure behavior and assess the reliability of software systems. For these models, the testing-effort effect and the fault interdependency play significant roles. Considering a power-law function of testing effort and the interdependency of multigeneration faults, we propose a modified SRGM to reconsider the reliability of open source software (OSS systems and then to validate the model’s performance using several real-world data. Our empirical experiments show that the model well fits the failure data and presents a high-level prediction capability. We also formally examine the optimal policy of software release, considering both the testing cost and the reliability requirement. By conducting sensitivity analysis, we find that if the testing-effort effect or the fault interdependency was ignored, the best time to release software would be seriously delayed and more resources would be misplaced in testing the software.
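    As a rough illustration of the kind of model discussed above (a generic NHPP mean value function with a power-law testing-effort term, not the authors' multigeneration formulation), the sketch below computes the expected cumulative number of detected faults; all parameter values are placeholders.

```python
import math

def expected_failures(t, a, b, alpha, beta):
    """Mean value function of an NHPP software-reliability growth model with
    a power-law cumulative testing-effort function W(t) = alpha * t**beta.
    a: total expected faults; b: fault-detection rate per unit testing effort."""
    W = alpha * t ** beta                 # cumulative testing effort at time t
    return a * (1.0 - math.exp(-b * W))

# Example: expected cumulative failures after 100 testing hours (toy numbers).
m_100 = expected_failures(100.0, a=150.0, b=0.02, alpha=2.5, beta=0.8)
```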

  5. Phase-coexistence simulations of fluid mixtures by the Markov Chain Monte Carlo method using single-particle models

    KAUST Repository

    Li, Jun

    2013-09-01

    We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and results consistency. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data is not available. © 2013 Elsevier Inc.
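    The single-particle models above represent each molecule by one Lennard-Jones site. For orientation, a 12-6 Lennard-Jones pair energy in reduced units is sketched below; the epsilon and sigma values are placeholders, not the fitted CO2/N2 parameters from the record.

```python
def lj_pair_energy(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair energy; epsilon and sigma are placeholders,
    not fitted single-particle CO2/N2 parameters."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Example: two particles separated by 1.2*sigma (reduced units).
u = lj_pair_energy(1.2)
```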

  6. MCNP6 and DRiFT modeling efforts for the NEUANCE/DANCE detector array

    Energy Technology Data Exchange (ETDEWEB)

    Pinilla, Maria Isabel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-30

    This report seeks to study and benchmark code predictions against experimental data; determine parameters to match MCNP-simulated detector response functions to experimental stilbene measurements; add stilbene processing capabilities to DRiFT; and improve NEUANCE detector array modeling and analysis using new MCNP6 and DRiFT features.

  7. Dynamic material flow modeling: an effort to calibrate and validate aluminum stocks and flows in Austria.

    Science.gov (United States)

    Buchner, Hanno; Laner, David; Rechberger, Helmut; Fellner, Johann

    2015-05-05

    A calibrated and validated dynamic material flow model of Austrian aluminum (Al) stocks and flows between 1964 and 2012 was developed. Calibration and extensive plausibility testing was performed to illustrate how the quality of dynamic material flow analysis can be improved on the basis of the consideration of independent bottom-up estimates. According to the model, total Austrian in-use Al stocks reached a level of 360 kg/capita in 2012, with buildings (45%) and transport applications (32%) being the major in-use stocks. Old scrap generation (including export of end-of-life vehicles) amounted to 12.5 kg/capita in 2012, still being on the increase, while Al final demand has remained rather constant at around 25 kg/capita in the past few years. The application of global sensitivity analysis showed that only small parts of the total variance of old scrap generation could be explained by the variation of single parameters, emphasizing the need for comprehensive sensitivity analysis tools accounting for interaction between parameters and time-delay effects in dynamic material flow models. Overall, it was possible to generate a detailed understanding of the evolution of Al stocks and flows in Austria, including plausibility evaluations of the results. Such models constitute a reliable basis for evaluating future recycling potentials, in particular with respect to application-specific qualities of current and future national Al scrap generation and utilization.

  8. Ideals, activities, dissonance, and processing: a conceptual model to guide educators' efforts to stimulate student reflection.

    Science.gov (United States)

    Thompson, Britta M; Teal, Cayla R; Rogers, John C; Paterniti, Debora A; Haidet, Paul

    2010-05-01

    Medical schools are increasingly incorporating opportunities for reflection into their curricula. However, little is known about the cognitive and/or emotional processes that occur when learners participate in activities designed to promote reflection. The purpose of this study was to identify and elucidate those processes. In 2008, the authors analyzed qualitative data from focus groups that were originally conducted to evaluate an educational activity designed to promote reflection. These data afforded the opportunity to explore the processes of reflection in detail. Transcripts (94 pages, single-spaced) from four focus groups were analyzed using a narrative framework. The authors spent approximately 40 hours in group and 240 hours in individual coding activities. The authors developed a conceptual model of five major elements in students' reflective processes: the educational activity, the presence or absence of cognitive or emotional dissonance, and two methods of processing dissonance (preservation or reconciliation). The model also incorporates the relationship between the student's internal ideal of what a doctor is or does and the student's perception of the teacher's ideal of what a doctor is or does. The model further identifies points at which educators may be able to influence the processes of reflection and the development of professional ideals. Students' cognitive and emotional processes have important effects on the success of educational activities intended to stimulate reflection. Although additional research is needed, this model-which incorporates ideals, activities, dissonance, and processing-can guide educators as they plan and implement such activities.

  9. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry.

    Science.gov (United States)

    Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-12-01

    The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110
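
    The "simulate each compartment once, then weight by pharmacokinetics" step can be illustrated with a short sketch; the compartment names, time-activity parameters and per-decay projections below are hypothetical stand-ins, not TestDose data or code.

        import numpy as np

        # Hypothetical per-decay planar projections, one per compartment (simulated once).
        projections = {
            "liver": np.random.rand(64, 64),
            "kidneys": np.random.rand(64, 64),
            "remainder": np.random.rand(64, 64),
        }

        # Hypothetical mono-exponential time-activity parameters (A0, lambda_eff).
        pk = {"liver": (100.0, 0.05), "kidneys": (40.0, 0.10), "remainder": (200.0, 0.03)}

        def activity(compartment, t):
            a0, lam = pk[compartment]
            return a0 * np.exp(-lam * t)

        def image_at(t):
            """Weight each pre-computed compartment projection by its activity and sum."""
            return sum(activity(c, t) * proj for c, proj in projections.items())

        frame_4h = image_at(4.0)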

  10. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    International Nuclear Information System (INIS)

    Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-01-01

    Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores user’s imaging requirements and generates automatically command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry

  11. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)

    2015-12-15

    Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores user’s imaging requirements and generates automatically command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry

  12. New Monte Carlo model of cylindrical diffusing fibers illustrates axially heterogeneous fluorescence detection: simulation and experimental validation.

    Science.gov (United States)

    Baran, Timothy M; Foster, Thomas H

    2011-08-01

    We present a new Monte Carlo model of cylindrical diffusing fibers that is implemented with a graphics processing unit. Unlike previously published models that approximate the diffuser as a linear array of point sources, this model is based on the construction of these fibers. This allows for accurate determination of fluence distributions and modeling of fluorescence generation and collection. We demonstrate that our model generates fluence profiles similar to a linear array of point sources, but reveals axially heterogeneous fluorescence detection. With axially homogeneous excitation fluence, approximately 90% of detected fluorescence is collected by the proximal third of the diffuser for μs'/μa = 8 in the tissue and 70 to 88% is collected in this region for μs'/μa = 80. Increased fluorescence detection by the distal end of the diffuser relative to the center section is also demonstrated. Validation of these results was performed by creating phantoms consisting of layered fluorescent regions. Diffusers were inserted into these layered phantoms and fluorescence spectra were collected. Fits to these spectra show quantitative agreement between simulated fluorescence collection sensitivities and experimental results. These results will be applicable to the use of diffusers as detectors for dosimetry in interstitial photodynamic therapy.

  13. An ab initio chemical reaction model for the direct simulation Monte Carlo study of non-equilibrium nitrogen flows.

    Science.gov (United States)

    Mankodi, T K; Bhandarkar, U V; Puranik, B P

    2017-08-28

    A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 → N2 + 2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N4 potential energy surface and a quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.

  14. Ab initio and Atomic kinetic Monte Carlo modelling of segregation in concentrated FeCrNi alloys

    Science.gov (United States)

    Piochaud, J. B.; Becquart, C. S.; Domain, C.

    2014-06-01

    Internal structures of pressurised water reactors are made of austenitic materials. Under irradiation, the microstructure of these concentrated alloys evolves, and solute segregation at grain boundaries or at irradiation defects such as dislocation loops is observed to take place. In order to model and predict the microstructure evolution, a multiscale modelling approach needs to be developed, which starts at the atomic scale. Atomic Kinetic Monte Carlo (AKMC) modelling is the method we chose to provide insight into defect-mediated diffusion under irradiation. In that approach, we model the concentrated commercial steel as a FeCrNi alloy (γ-Fe70Cr20Ni10). As no reliable empirical potential exists at the moment to reproduce faithfully the phase diagram and the interactions of the elements and point defects, we have adjusted a pair interaction model on a large set of DFT calculations. The point defect properties in Fe70Cr20Ni10, and more precisely how their formation energy depends on the local environment, will be presented, together with some AKMC results on thermal non-equilibrium segregation and radiation-induced segregation. The effect of Si on the segregation will also be discussed.

  15. Ab initio and atomic kinetic Monte Carlo modelling of segregation in concentrated FeCrNi alloys

    International Nuclear Information System (INIS)

    Piochaud, J.B.; Becquart, C.S.; Domain, C.

    2013-01-01

    Internal structures of pressurised water reactors are made of austenitic materials. Under irradiation, the microstructure of these concentrated alloys evolves, and solute segregation at grain boundaries or at irradiation defects such as dislocation loops is observed to take place. In order to model and predict the microstructure evolution, a multi-scale modelling approach needs to be developed, which starts at the atomic scale. Atomic Kinetic Monte Carlo (AKMC) modelling is the method we chose to provide insight into defect-mediated diffusion under irradiation. In that approach, we model the concentrated commercial steel as a FeCrNi alloy (γ-Fe70Cr20Ni10). As no reliable empirical potential exists at the moment to reproduce faithfully the phase diagram and the interactions of the elements and point defects, we have adjusted a pair interaction model on a large set of DFT (Density Functional Theory) calculations. The point defect properties in Fe70Cr20Ni10, and more precisely how their formation energy depends on the local environment, will be presented, together with some AKMC results on thermal non-equilibrium segregation (TNES) and radiation-induced segregation. The effect of Si on the segregation will also be discussed. Preliminary results show that it is the solute-grain boundary interactions which drive TNES

  16. Monte Carlo Modeling of Dual and Triple Photon Energy Absorptiometry Technique

    Directory of Open Access Journals (Sweden)

    Alireza Kamali-Asl

    2007-12-01

    Full Text Available Introduction: Osteoporosis is a bone disease in which there is a reduction in the amount of bone mineral content, leading to an increase in the risk of bone fractures. The affected individuals not only have to go through a great deal of pain and suffering, but the disease also results in high economic costs to society due to the large number of fractures. A timely and accurate diagnosis of this disease makes it possible to start treatment and thus prevent bone fractures resulting from osteoporosis. Radiographic methods are particularly well suited for in vivo determination of bone mineral density (BMD) due to the relatively high x-ray absorption properties of bone mineral compared to other tissues. Materials and Methods: Monte Carlo simulation has been conducted to explore the possibilities of triple photon energy absorptiometry (TPA) in the measurement of bone mineral content. The purpose of this technique is to correctly measure the bone mineral density in the presence of fatty and soft tissues. The same simulations have been done for a dual photon energy absorptiometry (DPA) system and an extended DPA system. Results: Using DPA with three components improves the accuracy of the obtained result, while the simulation results show that the TPA system is not accurate enough to be considered an adequate method for the measurement of bone mineral density. Discussion: The reason for the improvement in accuracy is the consideration of fatty tissue in the TPA method, while the energy dependence of the attenuation coefficient makes TPA an inadequate method. Conclusion: Using the TPA method is not a perfect solution to overcome the problem of non-uniformity in the distribution of fatty tissue.
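
    The two-material decomposition underlying dual photon absorptiometry amounts to solving a small linear attenuation system; a minimal sketch with purely illustrative attenuation coefficients and transmission values, not those of the cited simulations. Adding a third energy and a fat column to the same system is the idea behind TPA.

        import numpy as np

        # Rows: photon energies; columns: bone mineral, soft tissue.
        # Mass attenuation coefficients (cm^2/g) are illustrative placeholders.
        mu = np.array([[0.35, 0.25],   # low energy
                       [0.15, 0.18]])  # high energy

        # Measured log-transmissions -ln(I/I0) at the two energies (also illustrative).
        log_atten = np.array([0.60, 0.33])

        # Solve mu @ [areal density of bone, areal density of soft tissue] = log_atten.
        bone, soft = np.linalg.solve(mu, log_atten)
        print("bone mineral (g/cm^2):", bone, "soft tissue (g/cm^2):", soft)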

  17. A coupled modelling effort to study the fate of contaminated sediments downstream of the Coles Hill deposit, Virginia, USA

    Directory of Open Access Journals (Sweden)

    C. F. Castro-Bolinaga

    2015-03-01

    Full Text Available This paper presents the preliminary results of a coupled modelling effort to study the fate of tailings (a radioactive waste by-product) downstream of the Coles Hill uranium deposit located in Virginia, USA. The implementation of the overall modelling process includes a one-dimensional hydraulic model to qualitatively characterize the sediment transport process under severe flooding conditions downstream of the potential mining site, a two-dimensional ANSYS Fluent model to simulate the release of tailings from a containment cell located partially above the local ground surface into the nearby streams, and a one-dimensional finite-volume sediment transport model to examine the propagation of a tailings sediment pulse in the river network located downstream. The findings of this investigation aim to assist in estimating the potential impacts that tailings would have if they were transported into rivers and reservoirs located downstream of the Coles Hill deposit that serve as municipal drinking water supplies.

  18. Modeling kinetics of a large-scale fed-batch CHO cell culture by Markov chain Monte Carlo method.

    Science.gov (United States)

    Xing, Zizhuo; Bishop, Nikki; Leister, Kirk; Li, Zheng Jian

    2010-01-01

    The Markov chain Monte Carlo (MCMC) method was applied to model the kinetics of a fed-batch Chinese hamster ovary cell culture process in 5,000-L bioreactors. The kinetic model consists of six differential equations, which describe the dynamics of viable cell density and the concentrations of glucose, glutamine, ammonia, lactate, and the antibody fusion protein B1 (B1). The kinetic model has 18 parameters, six of which were calculated from the cell culture data, whereas the other 12 were estimated with an MCMC method from a training data set comprising seven cell culture runs. The model was confirmed in two validation data sets that represented a perturbation of the cell culture condition. The agreement between the predicted and measured values of both validation data sets may indicate high reliability of the model estimates. The kinetic model uniquely incorporated the ammonia removal and the exponential function of B1 protein concentration. The model indicated that ammonia and lactate play critical roles in cell growth and that low concentrations of glucose (0.17 mM) and glutamine (0.09 mM) in the cell culture medium may help reduce ammonia and lactate production. The model demonstrated that 83% of the glucose consumed was used for cell maintenance during the late phase of the cell cultures, whereas the maintenance coefficient for glutamine was negligible. Finally, the kinetic model suggests that it is critical for B1 production to sustain a high number of viable cells. The MCMC methodology may be a useful tool for modeling the kinetics of a fed-batch mammalian cell culture process.
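
    The parameter-estimation step can be sketched with a random-walk Metropolis sampler; the toy one-equation growth model below stands in for the six-ODE system, and every value is an illustrative assumption rather than data or an estimate from the cited study.

        import numpy as np

        rng = np.random.default_rng(0)

        def viable_cells(mu_max, k_d, t):
            """Toy model dX/dt = (mu_max - k_d) * X with X(0) = 1."""
            return np.exp((mu_max - k_d) * t)

        t_obs = np.linspace(0.0, 10.0, 11)
        x_obs = viable_cells(0.5, 0.1, t_obs) * rng.normal(1.0, 0.05, t_obs.size)  # synthetic data

        def log_post(theta, sigma=0.05):
            mu_max, k_d = theta
            if mu_max <= 0.0 or k_d <= 0.0:               # flat priors on positive parameters
                return -np.inf
            resid = (x_obs - viable_cells(mu_max, k_d, t_obs)) / (sigma * x_obs)
            return -0.5 * np.sum(resid ** 2)

        theta = np.array([0.3, 0.05])
        lp = log_post(theta)
        samples = []
        for _ in range(20000):                            # random-walk Metropolis
            prop = theta + rng.normal(0.0, 0.01, 2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta.copy())

        print(np.mean(samples[5000:], axis=0))            # posterior means after burn-in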

  19. Controls over Ocean Mesopelagic Interior Carbon Storage (COMICS): fieldwork, synthesis and modelling efforts

    Directory of Open Access Journals (Sweden)

    Richard John Sanders

    2016-08-01

    Full Text Available The ocean’s biological carbon pump plays a central role in regulating atmospheric CO2 levels. In particular, the depth at which sinking organic carbon is broken down and respired in the mesopelagic zone is critical, with deeper remineralisation resulting in greater carbon storage. Until recently, however, a balanced budget of the supply and consumption of organic carbon in the mesopelagic had not been constructed in any region of the ocean, and the processes controlling organic carbon turnover are still poorly understood. Large-scale data syntheses suggest that a wide range of factors can influence remineralisation depth, including upper-ocean ecological interactions, and interior dissolved oxygen concentration and temperature. However, these analyses do not provide a mechanistic understanding of remineralisation, which increases the challenge of appropriately modelling the mesopelagic carbon dynamics. In light of this, the UK Natural Environment Research Council has funded a programme with this mechanistic understanding as its aim, spanning targeted fieldwork right through to the implementation of a new parameterisation for mesopelagic remineralisation within an IPCC-class global biogeochemical model. The Controls over Ocean Mesopelagic Interior Carbon Storage (COMICS) programme will deliver new insights into the processes of carbon cycling in the mesopelagic zone and how these influence ocean carbon storage. Here we outline the programme’s rationale, its goals, planned fieldwork and modelling activities, with the aim of stimulating international collaboration.

  20. A Monte Carlo model to produce baryons in e+e- annihilation

    International Nuclear Information System (INIS)

    Meyer, T.

    1981-08-01

    A simple model is described extending the Field-Feynman model to baryon production in quark fragmentation. The model predicts baryon-baryon correlations within jets and in opposite jets produced in electron-positron annihilation. Existing data are well described by the model. (orig.)

  1. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
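
    A minimal Python sketch of the demon (microcanonical) update for the 2D Ising model, the discrete system mentioned above; the lattice size, coupling and initial demon energy are arbitrary illustrative choices.

        import random

        L = 32
        J = 1.0
        spins = [[1] * L for _ in range(L)]   # start in the ground state
        demon = 40.0                          # initial demon energy fixes the total energy

        def delta_e(i, j):
            """Energy change for flipping spin (i, j) on a periodic square lattice."""
            s = spins[i][j]
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            return 2.0 * J * s * nn

        demon_sum, n_meas = 0.0, 0
        for sweep in range(2000):
            for _ in range(L * L):
                i, j = random.randrange(L), random.randrange(L)
                dE = delta_e(i, j)
                if dE <= demon:               # microcanonical move: the demon pays or absorbs dE
                    spins[i][j] *= -1
                    demon -= dE
            demon_sum += demon
            n_meas += 1

        # For demon energies in multiples of 4J, <E_demon> = 4J / (exp(4J/kT) - 1),
        # so the temperature can be read off from the average demon energy.
        print("mean demon energy:", demon_sum / n_meas)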

  2. Combined observational and modeling efforts of aerosol-cloud-precipitation interactions over Southeast Asia

    Science.gov (United States)

    Loftus, Adrian; Tsay, Si-Chee; Nguyen, Xuan Anh

    2016-04-01

    Low-level stratocumulus (Sc) clouds cover more of the Earth's surface than any other cloud type rendering them critical for Earth's energy balance, primarily via reflection of solar radiation, as well as their role in the global hydrological cycle. Stratocumuli are particularly sensitive to changes in aerosol loading on both microphysical and macrophysical scales, yet the complex feedbacks involved in aerosol-cloud-precipitation interactions remain poorly understood. Moreover, research on these clouds has largely been confined to marine environments, with far fewer studies over land where major sources of anthropogenic aerosols exist. The aerosol burden over Southeast Asia (SEA) in boreal spring, attributed to biomass burning (BB), exhibits highly consistent spatiotemporal distribution patterns, with major variability due to changes in aerosol loading mediated by processes ranging from large-scale climate factors to diurnal meteorological events. Downwind from source regions, the transported BB aerosols often overlap with low-level Sc cloud decks associated with the development of the region's pre-monsoon system, providing a unique, natural laboratory for further exploring their complex micro- and macro-scale relationships. Compared to other locations worldwide, studies of springtime biomass-burning aerosols and the predominately Sc cloud systems over SEA and their ensuing interactions are underrepresented in scientific literature. Measurements of aerosol and cloud properties, whether ground-based or from satellites, generally lack information on microphysical processes; thus cloud-resolving models are often employed to simulate the underlying physical processes in aerosol-cloud-precipitation interactions. The Goddard Cumulus Ensemble (GCE) cloud model has recently been enhanced with a triple-moment (3M) bulk microphysics scheme as well as the Regional Atmospheric Modeling System (RAMS) version 6 aerosol module. Because the aerosol burden not only affects cloud

  3. Exploring Spatiotemporal Trends in Commercial Fishing Effort of an Abalone Fishing Zone: A GIS-Based Hotspot Model.

    Directory of Open Access Journals (Sweden)

    M Ali Jalali

    Full Text Available Assessing patterns of fisheries activity at a scale related to resource exploitation has received particular attention in recent times. However, acquiring data about the distribution and spatiotemporal allocation of catch and fishing effort in small scale benthic fisheries remains challenging. Here, we used GIS-based spatio-statistical models to investigate the footprint of commercial diving events on blacklip abalone (Haliotis rubra) stocks along the south-west coast of Victoria, Australia from 2008 to 2011. Using abalone catch data matched with GPS location we found catch per unit of fishing effort (CPUE) was not uniformly distributed across the study area in space and time. Spatial autocorrelation and hotspot analysis revealed significant spatiotemporal clusters of CPUE (with distance thresholds of hundreds of meters) among years, indicating the presence of CPUE hotspots focused on specific reefs. Cumulative hotspot maps indicated that certain reef complexes were consistently targeted across years but with varying intensity, although often a relatively small proportion of the full reef extent was targeted. Integrating CPUE with remotely sensed light detection and ranging (LiDAR)-derived bathymetry data using a generalized additive mixed model corroborated that fishing pressure primarily coincided with shallow, rugose and complex components of reef structures. This study demonstrates that a geospatial approach is efficient in detecting patterns and trends in commercial fishing effort and its association with seafloor characteristics.

  4. Monte Carlo Modeling for in vivo MRS: Generating and quantifying simulations via the Windows, Linux and Android platform

    NARCIS (Netherlands)

    De Beer, R.; Van Ormondt, D.

    2014-01-01

    Work in the context of the European Union TRANSACT project. We have developed a Java/JNI/C/Fortran based software application, called MonteCarlo, with which users can carry out Monte Carlo studies in the field of in vivo MRS. The application is supposed to be used as a tool for supporting the

  5. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    Science.gov (United States)

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
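
    The underlying recipe (simulate many data sets under an assumed population model, analyse each, and count rejections) can be shown with a deliberately simple two-group example; a t-test stands in for the confirmatory factor model, and the sample sizes and effect size are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def power(n_per_group, effect_size, n_sim=5000, alpha=0.05):
            """Monte Carlo power estimate: fraction of simulated studies rejecting H0."""
            rejections = 0
            for _ in range(n_sim):
                a = rng.normal(0.0, 1.0, n_per_group)
                b = rng.normal(effect_size, 1.0, n_per_group)
                if stats.ttest_ind(a, b).pvalue < alpha:
                    rejections += 1
            return rejections / n_sim

        for n in (20, 50, 100):
            print(n, power(n, effect_size=0.5))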

  6. A coupled kinetic Monte Carlo–finite element mesoscale model for thermoelastic martensitic phase transformations in shape memory alloys

    International Nuclear Information System (INIS)

    Chen, Ying; Schuh, Christopher A.

    2015-01-01

    A mesoscale modeling framework integrating thermodynamics, kinetic Monte Carlo (KMC) and finite element mechanics (FEM) is developed to simulate displacive thermoelastic transformations between austenite and martensite in shape memory alloys (SMAs). The model is based on a transition state approximation for the energy landscape of the two phases under loading or cooling, which leads to the activation energy and rate for transformation domains incorporating local stress states. The evolved stress state after each domain transformation event is calculated by FEM, and is subsequently used in the stochastic KMC algorithm to determine the next domain to transform. The model captures transformation stochasticity, and predicts internal phase and stress distributions and evolution throughout the entire incubation, nucleation and growth process. It also relates the critical transformation stresses or temperatures to internal activation energies. It therefore enables quantitative exploration of transformation dynamics and transformation–microstructure interactions. The model is used to simulate superelasticity (mechanically induced transformation) under both load control and strain control in single-crystal SMAs under uniaxial tension
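
    The event-selection core of such a KMC scheme can be sketched as follows; the transition-state rate form and the activation energies are illustrative assumptions rather than the model's calibrated values, and the coupling to FEM stress updates is omitted.

        import math
        import random

        def arrhenius_rate(delta_g, attempt_freq=1.0e13, kT=0.025):
            """Transition-state rate nu * exp(-DeltaG / kT) for a candidate transformation domain."""
            return attempt_freq * math.exp(-delta_g / kT)

        def kmc_step(rates):
            """One rejection-free KMC step: choose event i with probability rate_i / total and
            advance time by an exponentially distributed waiting time with mean 1 / total."""
            total = sum(rates)
            r = random.random() * total
            cumulative, chosen = 0.0, len(rates) - 1
            for i, rate in enumerate(rates):
                cumulative += rate
                if r < cumulative:
                    chosen = i
                    break
            dt = -math.log(1.0 - random.random()) / total
            return chosen, dt

        # Hypothetical activation energies (eV) for three candidate domains under the current stress state.
        rates = [arrhenius_rate(g) for g in (0.55, 0.60, 0.52)]
        event, dt = kmc_step(rates)
        print("transform domain", event, "after", dt, "s")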

  7. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Song Hyun Kim

    2015-08-01

    Full Text Available The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
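
    As a rough illustration of the idea behind chord length sampling, the free-flight distance to the next sphere surface can be drawn from an exponential distribution; the dilute-limit rate Sigma_s = 3*phi/(4*R) used below is a textbook simplification, not the corrected sampling or boundary treatment proposed in the cited work.

        import math
        import random

        def distance_to_next_sphere(packing_fraction, radius):
            """Sample a free-flight distance using the dilute-limit sphere-entry rate
            Sigma_s = 3*phi / (4*R), i.e. an exponential with mean 4R / (3*phi)."""
            sigma_s = 3.0 * packing_fraction / (4.0 * radius)
            return -math.log(1.0 - random.random()) / sigma_s

        # Example: spheres of radius 0.05 cm at a 10% packing fraction.
        print(distance_to_next_sphere(0.10, 0.05))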

  8. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    Science.gov (United States)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of the warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combined free replacement and pro rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in a Monte Carlo simulation for the prediction of the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
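
    The Monte Carlo warranty-cost step can be sketched as follows; the Weibull parameters stand in for the ANN-predicted reliability characteristics, the policy limits and price are hypothetical, and renewals after a free replacement are ignored to keep the sketch short.

        import random

        def warranty_cost_per_unit(shape, scale, w_free, w_total, price, n_sim=100_000):
            """Monte Carlo estimate of the expected warranty cost per unit sold.

            Failure times follow a Weibull(shape, scale) distribution. Policy: free
            replacement up to w_free, pro-rata refund between w_free and w_total."""
            total = 0.0
            for _ in range(n_sim):
                t = random.weibullvariate(scale, shape)
                if t < w_free:
                    total += price
                elif t < w_total:
                    total += price * (w_total - t) / (w_total - w_free)
            return total / n_sim

        # Illustrative numbers only (operating hours and monetary units are hypothetical).
        print(warranty_cost_per_unit(shape=1.8, scale=1200.0, w_free=500.0, w_total=1000.0, price=5.0))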

  9. Monte Carlo modeling of the Yttrium-90 nanospheres application in the liver radionuclide therapy and organs doses calculation

    Directory of Open Access Journals (Sweden)

    Ghavami Seyed Mostafa

    2016-01-01

    Full Text Available Using nano-scaled radionuclides in radionuclide therapy significantly reduces particle trapping in organ vessels and avoids thrombosis formation. Additionally, uniform distribution in the target organ may be another benefit of nanoradionuclides in radionuclide therapy. A Monte Carlo simulation was conducted to model a mathematical humanoid phantom, and the liver cells of the simulated phantom were filled with the 90Y nanospheres. Healthy organ doses and the fatal and non-fatal risks of the surrounding organs were estimated. The estimations and calculations were made for four different distribution patterns of the radionuclide seeds. The maximum doses and risks estimated for the surrounding organs were obtained for the distribution model in which the nanoradionuclides were concentrated at the edge of the liver. For the dose equivalent, effective dose, and fatal and non-fatal risks, the values obtained were 7.51E-03 Sv/Bq, 3.01E-01 Sv/Bq, and 9.16E-01 cases per 10^4 persons for the bladder, colon, and kidney of the modeled phantom, respectively. The mentioned values were the maximum values among the studied distribution models. Maximum values of the Normal Tissue Complication Probability for the healthy organs were calculated as 5.9-8.9 %. The use of 90Y nanoparticles shows promising dosimetric properties in the MC simulation results, considering the non-toxicity reports for this radionuclide.

  10. ANN modeling of kerf taper in CO2 laser cutting and optimization of cutting parameters using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-01-01

    Full Text Available In this paper, an attempt has been made to develop a mathematical model in order to study the relationship between laser cutting parameters such as laser power, cutting speed, assist gas pressure and focus position, and the kerf taper angle obtained in CO2 laser cutting of AISI 304 stainless steel. To this aim, a single hidden layer artificial neural network (ANN) trained with the gradient descent with momentum algorithm was used. To obtain an experimental database for the ANN training, the laser cutting experiment was planned as per Taguchi’s L27 orthogonal array with three levels for each of the cutting parameters. Statistically assessed as adequate, the ANN model was then used to investigate the effect of the laser cutting parameters on the kerf taper angle by generating 2D and 3D plots. It was observed that the kerf taper angle was highly sensitive to the selected laser cutting parameters, as well as their interactions. In addition to modeling, by applying the Monte Carlo method to the developed kerf taper angle ANN model, the near-optimal laser cutting parameter settings, which minimize the kerf taper angle, were determined.
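
    The Monte Carlo optimization step amounts to random search over the parameter space using the trained surrogate; in the sketch below a hypothetical analytic function stands in for the ANN, and the parameter bounds are illustrative, not the Taguchi design levels of the cited experiment.

        import random

        def predicted_kerf_taper(power, speed, pressure, focus):
            """Hypothetical stand-in for the trained ANN surrogate; returns a kerf taper angle (deg).
            The model in the paper is a trained multilayer perceptron, not this formula."""
            return (0.8 + 0.002 * (power - 1000) ** 2 / 1000
                    - 0.01 * speed + 0.05 * abs(pressure - 9) + 0.3 * abs(focus + 1))

        # Parameter bounds (illustrative only).
        bounds = {"power": (800, 1600), "speed": (20, 60), "pressure": (6, 12), "focus": (-2.5, 0.0)}

        best, best_angle = None, float("inf")
        for _ in range(100_000):                   # plain Monte Carlo random search
            x = {k: random.uniform(*b) for k, b in bounds.items()}
            angle = predicted_kerf_taper(**x)
            if angle < best_angle:
                best, best_angle = x, angle

        print(best_angle, best)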

  11. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters.

    Science.gov (United States)

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S

    2015-12-07

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different female volunteers. The mechanical properties were modeled by using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.

  12. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    Science.gov (United States)

    Shypailo, R. J.; Ellis, K. J.

    2011-05-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  13. Community effort endorsing multiscale modelling, multiscale data science and multiscale computing for systems medicine.

    Science.gov (United States)

    Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W

    2017-12-05

    Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this road block, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science and computing, to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine. © The Author 2017. Published by Oxford University Press.

  14. A retrospective analysis of the infectious bovine rhinotracheitis (bovine herpes virus-1) surveillance program in Norway using Monte Carlo simulation models

    DEFF Research Database (Denmark)

    Paisley, Larry; Tharaldsen, J.; Jarp, J.

    2001-01-01

    -serum samples have been negative for BHV-1 antibodies. This paper describes the use of Monte Carlo simulation models for the analysis and interpretation of the results of the surveillance and provides support for the contention that the Norwegian cattle population is not infected by BHV-1....

  15. X-ray spectra from magnetar candidates - III. Fitting SGR/AXP soft X-ray emission with non-relativistic Monte Carlo models

    NARCIS (Netherlands)

    Zane, S.; Rea, N.; Turolla, R.; Nobili, L.

    2009-01-01

    Within the magnetar scenario, the ‘twisted magnetosphere’ model appears very promising in explaining the persistent X-ray emission from soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs). In the first two papers of the series, we have presented a 3D Monte Carlo code for solving radiation

  16. Inverse modeling of cloud-aerosol interactions — Part 2: Sensitivity tests on liquid phase clouds using a Markov Chain Monte Carlo based simulation approach

    NARCIS (Netherlands)

    Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Struthers, H.; Sorooshian, A.

    2012-01-01

    This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov Chain Monte Carlo (MCMC) algorithm to a pseudo-adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted few have used statistical analysis

  17. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context; that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
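
    A compact illustration of the telescoping MLMC estimator E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}] on a toy problem; an Euler-discretised geometric Brownian motion stands in for the PDE solve, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def euler_gbm(n_steps, z):
            """Euler-Maruyama terminal value of dS = mu*S dt + sig*S dW on [0, 1],
            driven by standard-normal increments z (one row per sample)."""
            mu, sig, dt = 0.05, 0.2, 1.0 / n_steps
            s = np.ones(z.shape[0])
            for k in range(n_steps):
                s = s * (1.0 + mu * dt + sig * np.sqrt(dt) * z[:, k])
            return s

        def mlmc_estimate(L, n_samples):
            """Telescoping estimator; levels l and l-1 are coupled through shared increments."""
            est = 0.0
            for level in range(L + 1):
                n_fine = 2 ** level
                z = rng.standard_normal((n_samples, n_fine))
                p_fine = euler_gbm(n_fine, z)
                if level == 0:
                    est += p_fine.mean()
                else:
                    # Coarse path reuses the same randomness via pairwise-summed increments.
                    z_coarse = (z[:, 0::2] + z[:, 1::2]) / np.sqrt(2.0)
                    p_coarse = euler_gbm(n_fine // 2, z_coarse)
                    est += (p_fine - p_coarse).mean()
            return est

        print(mlmc_estimate(L=5, n_samples=20_000))   # should be close to exp(0.05) ~ 1.051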

  18. Modelling Infectious Disease Spreading Dynamic via Magnetic Spin Distribution: The Stochastic Monte Carlo and Neural Network Analysis

    Science.gov (United States)

    Laosiritaworn, Yongjua; Laosiritaworn, Yongyut; Laosiritaworn, Wimalin S.

    2017-09-01

    In this work, disease spreading under the SIR (susceptible-infected-recovered) framework was investigated in an agent-based model via a magnetic spin model, stochastic Monte Carlo simulation, and neural network analysis. The defined systems were two-dimensional lattices, where the spins (representing susceptible, infected, and recovered agents) were allocated on lattice cells. The lattice size, spin density, and infectious period were varied to observe their influence on the disease spreading period. In the simulation, each spin was randomly allocated on the lattice and interacted with its first-neighbouring spins for disease spreading. The subgroup magnetization profiles were recorded. From the results, the number of agents in each subgroup as a function of time was found to depend on all considered parameters. Specifically, the disease spreading period slightly increases with increasing system size, decreases with increasing spin density, and decays exponentially with increasing infectious period. Due to the many degrees of freedom involved, a neural network was used to establish the complex relationship among parameters. A multi-layer perceptron was considered, where an optimized network architecture of 3-19-15-1 was found. Good agreement between predicted and actual outputs was evident. This confirms the validity of using neural networks as a supplement in modelling SIR disease spreading and provides a profound database for future deployment.
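
    A stripped-down lattice SIR Monte Carlo of the kind described above; this sketch fills every cell (spin density 1) and uses nearest-neighbour infection with a fixed per-contact probability, with all numbers chosen for illustration only.

        import random

        L = 50                       # lattice size
        P_INFECT = 0.25              # per-contact infection probability per step
        INFECTIOUS_PERIOD = 5        # steps an agent stays infectious

        # 0 = susceptible, 1 = infected, 2 = recovered; remaining infectious time kept separately.
        state = [[0] * L for _ in range(L)]
        timer = [[0] * L for _ in range(L)]
        state[L // 2][L // 2], timer[L // 2][L // 2] = 1, INFECTIOUS_PERIOD

        def neighbours(i, j):
            return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

        step = 0
        while any(1 in row for row in state):
            new_infections = []
            for i in range(L):
                for j in range(L):
                    if state[i][j] == 1:
                        for ni, nj in neighbours(i, j):
                            if state[ni][nj] == 0 and random.random() < P_INFECT:
                                new_infections.append((ni, nj))
                        timer[i][j] -= 1
                        if timer[i][j] == 0:
                            state[i][j] = 2
            for ni, nj in new_infections:
                if state[ni][nj] == 0:
                    state[ni][nj], timer[ni][nj] = 1, INFECTIOUS_PERIOD
            step += 1

        print("spreading period (steps):", step)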

  19. Kinetic Monte Carlo modeling of the efficiency roll-off in a multilayer white organic light-emitting device

    Energy Technology Data Exchange (ETDEWEB)

    Mesta, M.; Coehoorn, R.; Bobbert, P. A. [Department of Applied Physics, Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven (Netherlands); Eersel, H. van [Simbeyond B.V., P.O. Box 513, NL-5600 MB Eindhoven (Netherlands)

    2016-03-28

    Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance as a function of J in a multilayer hybrid white OLED that combines fluorescent blue with phosphorescent green and red emission. We investigate two models for TTA and TPQ involving the phosphorescent green and red emitters: short-range nearest-neighbor quenching and long-range Förster-type quenching. Short-range quenching predicts roll-off to occur at much higher J than measured. Taking long-range quenching with Förster radii for TTA and TPQ equal to twice the Förster radii for exciton transfer leads to a fair description of the measured IQE-J curve, with the major contribution to the roll-off coming from TPQ. The measured decrease of the ratio of phosphorescent to fluorescent component of the emitted light with increasing J is correctly predicted. A proper description of the J-dependence of the ratio of red and green phosphorescent emission needs further model refinements.

  20. Abstract ID: 240 A probabilistic-based nuclear reaction model for Monte Carlo ion transport in particle therapy.

    Science.gov (United States)

    Maria Jose, Gonzalez Torres; Jürgen, Henniger

    2018-01-01

    In order to expand the Monte Carlo transport program AMOS to particle therapy applications, the ion module is being developed in the radiation physics group (ASP) at the TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of the elastic scattering is based on the Binary Collision Approximation and the inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic-based model developed in the group. The developed model uses probability density functions to sample the occurrence of a nuclear reaction given the initial energy of the projectile particle as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented. Copyright © 2017.

  1. Internal dosimetry with the Monte Carlo code GATE: validation using the ICRP/ICRU female reference computational model

    Science.gov (United States)

    Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel

    2017-03-01

    The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.

  2. Atomic structure of Mg-based metallic glass investigated with neutron diffraction, reverse Monte Carlo modeling and electron microscopy.

    Science.gov (United States)

    Babilas, Rafał; Łukowiec, Dariusz; Temleitner, Laszlo

    2017-01-01

    The structure of a multicomponent metallic glass, Mg65Cu20Y10Ni5, was investigated by the combined methods of neutron diffraction (ND), reverse Monte Carlo modeling (RMC) and high-resolution transmission electron microscopy (HRTEM). The RMC method, based on the results of ND measurements, was used to develop a realistic structure model of a quaternary alloy in a glassy state. The calculated model consists of a random packing structure of atoms in which some ordered regions can be indicated. The amorphous structure was also described by peak values of partial pair correlation functions and coordination numbers, which illustrated some types of cluster packing. The N = 9 clusters correspond to the tri-capped trigonal prisms, which are one of Bernal's canonical clusters, and atomic clusters with N = 6 and N = 12 are suitable for octahedral and icosahedral atomic configurations. The nanocrystalline character of the alloy after annealing was also studied by HRTEM. The selected HRTEM images of the nanocrystalline regions were also processed by inverse Fourier transform analysis. The high-angle annular dark-field (HAADF) technique was used to determine phase separation in the studied glass after heat treatment. The HAADF mode allows for the observation of randomly distributed, dark contrast regions of about 4-6 nm. The interplanar spacing identified for the orthorhombic Mg2Cu crystalline phase is similar to the value of the first coordination shell radius from the short-range order.

  3. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
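
    One computationally tractable compromise mentioned above is a bootstrap particle filter with an augmented state-parameter vector and a small parameter jitter; the toy one-dimensional head model below is an illustrative assumption, not the Fehraltorf model.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "truth": groundwater head following a damped random walk with unknown recharge r.
        T, true_r, obs_sigma = 50, 0.8, 0.2
        head = np.zeros(T)
        for t in range(1, T):
            head[t] = 0.9 * head[t - 1] + true_r + rng.normal(0.0, 0.1)
        obs = head + rng.normal(0.0, obs_sigma, T)

        # Bootstrap particle filter over the augmented (state, parameter) vector.
        N = 2000
        h = rng.normal(0.0, 1.0, N)          # head particles
        r = rng.uniform(0.0, 2.0, N)         # recharge-parameter particles

        for t in range(1, T):
            r = r + rng.normal(0.0, 0.02, N)                 # parameter jitter (crude kernel)
            h = 0.9 * h + r + rng.normal(0.0, 0.1, N)        # propagate states
            w = np.exp(-0.5 * ((obs[t] - h) / obs_sigma) ** 2) + 1e-300
            w /= w.sum()
            idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
            h, r = h[idx], r[idx]

        print("posterior mean recharge:", r.mean(), "(truth:", true_r, ")")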

  4. Monte Carlo modeling of a clinical PET scanner by using the GATE dedicated computer code

    International Nuclear Information System (INIS)

    Vieira, Igor Fagner; Lima, Fernando Roberto de Andrade

    2011-01-01

    This paper describes in detail the GATE-simulated architecture involved in the 4D modeling of a General Electric PET scanner, the Advance. Data available in the literature on the configuration of the modelled GE PET scanner were used. The results obtained, namely the creation of the 3D components of the PET scanner and the simulation of 4D phenomena such as source decay and gantry rotation, demonstrate the potential of the tool for emission tomograph modelling

  5. Monte Carlo study of the magnetic properties of the 3D Hubbard model

    OpenAIRE

    Campos, Isabel; Davenport, James W.

    2001-01-01

    We investigate numerically the magnetic properties of the 3D Isotropic and Anisotropic Hubbard model at half-filling. The behavior of the transition temperature as a function of the anisotropic hopping parameter is qualitatively described. In the Isotropic model we measure the scaling properties of the susceptibility finding agreement with the magnetic critical exponents of the 3D Heisenberg model. We also describe several particularities concerning the implementation of our simulation in a c...

  6. Nanostructure evolution under irradiation in FeMnNi alloys: A "grey alloy" object kinetic Monte Carlo model

    Science.gov (United States)

    Chiapetto, M.; Malerba, L.; Becquart, C. S.

    2015-07-01

    This work extends the object kinetic Monte Carlo model for neutron irradiation-induced nanostructure evolution in Fe-C binary alloys developed in [1], introducing the effects of substitutional solutes like Mn and Ni. The objective is to develop a model able to describe the nanostructural evolution of both vacancy and self-interstitial atom (SIA) defect cluster populations in Fe(C)MnNi neutron-irradiated model alloys at the operational temperature of light water reactors (∼300 °C), by simulating specific reference irradiation experiments. To do this, the effects of the substitutional solutes of interest are introduced, under simplifying assumptions, using a "grey alloy" scheme. Mn and Ni solute atoms are not explicitly introduced in the model, which therefore cannot describe their redistribution under irradiation, but their effect is introduced by modifying the parameters that govern the mobility of both SIA and vacancy clusters. In particular, the reduction of the mobility of point-defect clusters as a consequence of the presence of solutes proved to be key to explain the experimentally observed disappearance of detectable defect clusters with increasing solute content. Solute concentration is explicitly taken into account in the model as a variable determining the slowing down of self-interstitial clusters; small vacancy clusters, on the other hand, are assumed to be significantly slowed down by the presence of solutes, while for clusters bigger than 10 vacancies their complete immobility is postulated. The model, which is fully based on physical considerations and only uses a few parameters for calibration, is found to be capable of reproducing the experimental trends in terms of density and size distribution of the irradiation-induced defect populations with dose, as compared to the reference experiment, thereby providing insight into the physical mechanisms that influence the nanostructural evolution undergone by this material during irradiation.

  7. Kernel Density Independence Sampling based Monte Carlo Scheme (KISMCS) for inverse hydrological modeling

    NARCIS (Netherlands)

    Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.

    2014-01-01

    Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of

  8. Monte-Carlo model development for evaluation of current clinical target volume definition for heterogeneous and hypoxic glioblastoma.

    Science.gov (United States)

    Moghaddasi, L; Bezak, E; Harriss-Phillips, W

    2016-05-07

    Clinical target volume (CTV) determination may be complex and subjective. In this work a microscopic-scale tumour model was developed to evaluate current CTV practices in glioblastoma multiforme (GBM) external radiotherapy. Previously, a Geant4 cell-based dosimetry model was developed to calculate the dose deposited in individual GBM cells. Microscopic extension probability (MEP) models were then developed using Matlab-2012a. The results of the cell-based dosimetry model and MEP models were combined to calculate survival fractions (SF) for CTV margins of 2.0 and 2.5 cm. In the current work, oxygenation and heterogeneous radiosensitivity profiles were incorporated into the GBM model. The genetic heterogeneity was modelled using a range of α/β values (linear-quadratic model parameters) associated with different GBM cell lines. These values were distributed among the cells randomly, taken from a Gaussian-weighted sample of α/β values. Cellular oxygen pressure was distributed randomly, taken from a sample weighted to profiles obtained from the literature. Three types of GBM models were analysed: homogeneous-normoxic, heterogeneous-normoxic, and heterogeneous-hypoxic. The SF in different regions of the tumour model and the effect of the CTV margin extension from 2.0 to 2.5 cm on SFs were investigated for three MEP models. The SF within the beam was increased by up to three and two orders of magnitude following incorporation of heterogeneous radiosensitivities and hypoxia, respectively, in the GBM model. However, the total SF was shown to be overdominated by the presence of tumour cells in the penumbra region and to a lesser extent by genetic heterogeneity and hypoxia. CTV extension by 0.5 cm reduced the SF by a maximum of 78.6 ± 3.3%, 78.5 ± 3.3%, and 77.7 ± 3.1% for homogeneous-normoxic, heterogeneous-normoxic, and heterogeneous-hypoxic GBMs, respectively. A Monte-Carlo model was thus developed to quantitatively evaluate SF for genetically
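
    To make the survival-fraction bookkeeping concrete, here is a hedged sketch of a cell-by-cell linear-quadratic calculation with randomly sampled α/β values and oxygen tensions. The distributions, the oxygen-modification function and all numerical values are illustrative assumptions, not the parameters of the published GBM model.

```python
import numpy as np

rng = np.random.default_rng(2)

def oxygen_modification(p_o2, m=3.0, k=3.0):
    """Illustrative dose-modifying factor rising from 1 (anoxic) to m (well
    oxygenated); p_o2 in mmHg.  Not the exact profile used in the paper."""
    return (m * p_o2 + k) / (p_o2 + k)

n_cells = 100_000
dose = 2.0                                         # Gy per fraction

# heterogeneous radiosensitivity: alpha and alpha/beta drawn from Gaussian samples
alpha = np.clip(rng.normal(0.3, 0.1, n_cells), 0.05, None)       # Gy^-1
alpha_beta = np.clip(rng.normal(10.0, 3.0, n_cells), 2.0, None)  # Gy
beta = alpha / alpha_beta                          # Gy^-2

# random cellular oxygen tensions (illustrative hypoxic/normoxic mixture)
p_o2 = np.where(rng.random(n_cells) < 0.2,
                rng.uniform(0.5, 5.0, n_cells),    # hypoxic fraction
                rng.uniform(20.0, 60.0, n_cells))  # normoxic fraction

d_eff = dose * oxygen_modification(p_o2) / 3.0     # hypoxia reduces the effective dose
sf = np.exp(-(alpha * d_eff + beta * d_eff ** 2))  # linear-quadratic survival per cell

print("mean survival fraction per 2 Gy fraction:", sf.mean())
```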

  9. Particle-gamma and particle-particle correlations in nuclear reactions using Monte Carlo Hauser-Feshback model

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Watanabe, Takehito [Los Alamos National Laboratory; Chadwick, Mark [Los Alamos National Laboratory

    2010-01-01

    Monte Carlo simulations for particle and {gamma}-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and {gamma}-rays. We calculate neutron-induced reactions on {sup 51}V to demonstrate unique advantages of the Monte Carlo method: the correlated {gamma}-rays in the neutron radiative capture reaction, the neutron-{gamma}-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties of nuclear reactions that are difficult to study with a deterministic method can be obtained with the Monte Carlo simulations.

  10. Size dependent thermal hysteresis in spin crossover nanoparticles reflected within a Monte Carlo based Ising-like model

    International Nuclear Information System (INIS)

    Atitoaie, Alexandru; Tanasa, Radu; Enachescu, Cristian

    2012-01-01

    Spin crossover compounds are photo-magnetic bistable molecular magnets with two states in thermodynamic competition: the diamagnetic low-spin state and the paramagnetic high-spin state. The thermal transition between the two states is often accompanied by a wide hysteresis, a premise for the possible application of these materials as recording media. In this paper we study the influence of the system's size on the thermal hysteresis loops using Monte Carlo simulations based on Arrhenius dynamics applied to an Ising-like model with long- and short-range interactions. We show that using appropriate boundary conditions it is possible to reproduce the drop of the hysteresis width with decreasing particle size, the shift of the hysteresis towards lower temperatures and the incomplete transition, as in the available experimental data. The case of larger systems composed of several sublattices is also treated, reproducing the experimentally observed shrinkage of the hysteresis loop's width. - Highlights: ► A study concerning size effects in spin crossover nanoparticle hysteresis is presented. ► An Ising-like model with short- and long-range interactions and Arrhenius dynamics is employed. ► In open-boundary systems the hysteresis width decreases with particle size. ► With an appropriate environment, the hysteresis loop is shifted towards lower temperatures and the transition is incomplete.
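
    A minimal, illustrative version of such an Ising-like spin-crossover simulation is sketched below. It uses plain Metropolis acceptance on a small open-boundary lattice rather than the full Arrhenius dynamics with long-range interactions described in the abstract, and all parameter values (expressed in kelvin units) are invented for the example; sweeping the temperature up and then down traces a thermal hysteresis loop in the high-spin fraction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ising-like spin-crossover model: sigma = +1 (high spin), sigma = -1 (low spin)
# H = (Delta - k_B*T*ln g)/2 * sum_i sigma_i - J * sum_<ij> sigma_i sigma_j
L_size, J, delta, ln_g = 20, 100.0, 1300.0, np.log(150.0)   # illustrative, kelvin units

def sweep(spins, T):
    """One Monte Carlo sweep with Metropolis acceptance on an open-boundary
    lattice (a simplification of the Arrhenius dynamics used in the paper)."""
    field = 0.5 * (delta - T * ln_g)
    for _ in range(spins.size):
        i, j = rng.integers(L_size), rng.integers(L_size)
        nb = sum(spins[x, y] for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                 if 0 <= x < L_size and 0 <= y < L_size)
        dE = 2.0 * spins[i, j] * (J * nb - field)   # energy change for flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

spins = -np.ones((L_size, L_size), dtype=int)        # start fully in the low-spin state
for T in list(range(150, 351, 10)) + list(range(350, 149, -10)):   # heating, then cooling
    for _ in range(50):
        sweep(spins, float(T))
    n_hs = 0.5 * (1.0 + spins.mean())                # high-spin fraction
    print(T, round(n_hs, 3))
```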

  11. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    Science.gov (United States)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery remaining useful life. Main challenges for the state estimation for LiFePO4 batteries are the flat characteristic of open-circuit-voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations to handle nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method against the estimation with an Extended Kalman filter.

  12. Monte Carlo model to describe depth selective fluorescence spectra of epithelial tissue: applications for diagnosis of oral precancer.

    Science.gov (United States)

    Pavlova, Ina; Weber, Crystal Redden; Schwarz, Richard A; Williams, Michelle; El-Naggar, Adel; Gillenwater, Ann; Richards-Kortum, Rebecca

    2008-01-01

    We present a Monte Carlo model to predict fluorescence spectra of the oral mucosa obtained with a depth-selective fiber optic probe as a function of tissue optical properties. A model sensitivity analysis determines how variations in optical parameters associated with neoplastic development influence the intensity and shape of spectra, and elucidates the biological basis for differences in spectra from normal and premalignant oral sites. Predictions indicate that spectra of oral mucosa collected with a depth-selective probe are affected by variations in epithelial optical properties, and to a lesser extent, by changes in superficial stromal parameters, but not by changes in the optical properties of deeper stroma. The depth selective probe offers enhanced detection of epithelial fluorescence, with 90% of the detected signal originating from the epithelium and superficial stroma. Predicted depth-selective spectra are in good agreement with measured average spectra from normal and dysplastic oral sites. Changes in parameters associated with dysplastic progression lead to a decreased fluorescence intensity and a shift of the spectra to longer emission wavelengths. Decreased fluorescence is due to a drop in detected stromal photons, whereas the shift of spectral shape is attributed to an increased fraction of detected photons arising in the epithelium.

  13. Monte Carlo modeling of 60 Co HDR brachytherapy source in water and in different solid water phantom materials

    Directory of Open Access Journals (Sweden)

    Sahoo S

    2010-01-01

    Full Text Available The reference medium for brachytherapy dose measurements is water. Accuracy of dose measurements of brachytherapy sources is critically dependent on precise measurement of the source-detector distance. A solid phantom can be precisely machined and hence source-detector distances can be accurately determined. In the present study, four different solid phantom materials such as polymethylmethacrylate (PMMA, polystyrene, Solid Water, and RW1 are modeled using the Monte Carlo methods to investigate the influence of phantom material on dose rate distributions of the new model of BEBIG 60 Co brachytherapy source. The calculated dose rate constant is 1.086 ± 0.06% cGy h−1 U−1 for water, PMMA, polystyrene, Solid Water, and RW1. The investigation suggests that the phantom materials RW1 and Solid Water represent water-equivalent up to 20 cm from the source. PMMA and polystyrene are water-equivalent up to 10 cm and 15 cm from the source, respectively, as the differences in the dose data obtained in these phantom materials are not significantly different from the corresponding data obtained in liquid water phantom. At a radial distance of 20 cm from the source, polystyrene overestimates the dose by 3% and PMMA underestimates it by about 8% when compared to the corresponding data obtained in water phantom.

  14. Bayesian multi-dipole modelling of a single topography in MEG by adaptive sequential Monte Carlo samplers

    International Nuclear Information System (INIS)

    Sorrentino, Alberto; Luria, Gianvittorio; Aramini, Riccardo

    2014-01-01

    In this paper, we develop a novel Bayesian approach to the problem of estimating neural currents in the brain from a fixed distribution of magnetic field (called topography), measured by magnetoencephalography. Differently from recent studies that describe inversion techniques, such as spatio-temporal regularization/filtering, in which neural dynamics always plays a role, we face here a purely static inverse problem. Neural currents are modelled as an unknown number of current dipoles, whose state space is described in terms of a variable-dimension model. Within the resulting Bayesian framework, we set up a sequential Monte Carlo sampler to explore the posterior distribution. An adaptation technique is employed in order to effectively balance the computational cost and the quality of the sample approximation. Then, both the number and the parameters of the unknown current dipoles are simultaneously estimated. The performance of the method is assessed by means of synthetic data, generated by source configurations containing up to four dipoles. Eventually, we describe the results obtained by analysing data from a real experiment, involving somatosensory evoked fields, and compare them to those provided by three other methods. (paper)

  15. Modelling of the electronic and ferroelectric properties of trichloroacetamide using Monte Carlo and first-principles calculations

    Directory of Open Access Journals (Sweden)

    Yaxuan Cai

    2017-06-01

    Full Text Available The electronic structure and ferroelectric mechanism of trichloroacetamide were studied using first principles calculations and density functional theory within the generalized gradient approximation. Using both Bader charge and electron deformation density, the large molecular spontaneous polarization is found to originate from charge transfer caused by the strong “push-pull” effect of electron-releasing groups interacting with electron-withdrawing groups. The intermolecular hydrogen bonds, NH⋯O, cause the dipole moments of adjacent molecules to align with each other. They also reduce the potential energy of the molecular chain threaded by hydrogen bonds. Due to the symmetric crystalline properties, however, the polarization of trichloroacetamide is mostly compensated and therefore small. Using the Berry Phase method, the spontaneous polarization of trichloroacetamide was simulated, and good agreement with the experimental values was found. Considering the polarization characteristics of trichloroacetamide, we constructed a one-dimensional ferroelectric Hamiltonian model to calculate the ferroelectric properties of TCAA. Using the Hamiltonian model, the thermal properties and ferroelectricity of trichloroacetamide were studied using the Monte Carlo method, and the Tc value was calculated.

  16. FIFRELIN - TRIPOLI-4® coupling for Monte Carlo simulations with a fission model. Application to shielding calculations

    Science.gov (United States)

    Petit, Odile; Jouanne, Cédric; Litaize, Olivier; Serot, Olivier; Chebboubi, Abdelhazize; Pénéliau, Yannick

    2017-09-01

    TRIPOLI-4® Monte Carlo transport code and FIFRELIN fission model have been coupled by means of external files so that neutron transport can take into account fission distributions (multiplicities and spectra) that are not averaged, as is the case when using evaluated nuclear data libraries. Spectral effects on responses in shielding configurations with fission sampling are then expected. In the present paper, the principle of this coupling is detailed and a comparison between TRIPOLI-4® fission distributions at the emission of fission neutrons is presented when using JEFF-3.1.1 evaluated data or FIFRELIN data generated either through a n/g-uncoupled mode or through a n/g-coupled mode. Finally, an application to a modified version of the ASPIS benchmark is performed and the impact of using FIFRELIN data on neutron transport is analyzed. Differences noticed on average reaction rates on the surfaces closest to the fission source are mainly due to the average prompt fission spectrum. Moreover, when working with the same average spectrum, a complementary analysis based on non-average reaction rates still shows significant differences that point out the real impact of using a fission model in neutron transport simulations.

  17. A Simple Model to Access Equilibrium Constants of Reactions Type A ⇋ B Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    R. R. Farias, L. A. M. Cardoso, N. M. Oliveira Neto

    2011-01-01

    Full Text Available A simple theoretical model to describe equilibrium properties of homogeneous reversible chemical reactions is proposed and applied to an A ⇋ B type reaction. For this purpose the equilibrium properties are analyzed by usual Monte Carlo simulation. It is shown that the equilibrium constant (Ke) for this kind of reaction exhibits distinct characteristics for Eba < 1 and Eba > 1, where Eba is the ratio between the reverse and forward activation energies. For Eba < 1 (Eba > 1), increasing (decreasing) the temperature our results recover the principle of Le Châtelier applied to temperature effects. The special and interesting case is obtained for Eba = 1, since Ke = 1 over the whole range of temperature. Another important parameter in our analysis is θA, defined as the temperature measured relative to the activation energy of the forward reaction. For fixed values of Eba and for θA ≫ 1 the equilibrium constant approaches 1, showing that all transitions are equally likely, no matter the difference in the energy barriers. The data obtained in our simulations show the well known relationship between Ke, Eb, Ea and kB T. Finally we argue that this theoretical model can be applied to a family of homogeneous chemical reactions characterized by the same Eba and θA, showing the broad application of this stochastic model to the study of chemical reactions. Some of these results will be discussed in terms of collision theory.
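
    The stochastic model described above is simple enough to reproduce in a few lines. The sketch below, with invented step counts and parameter values, estimates Ke for an A ⇋ B reaction from attempted forward and reverse transitions with Arrhenius-type probabilities, and recovers the limiting behaviours mentioned in the abstract (Ke = 1 for Eba = 1 and Ke → 1 for θA ≫ 1).

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ke(e_ba, theta_a, n=10_000, steps=2_000):
    """Estimate the equilibrium constant of A <=> B by stochastic simulation.

    theta_a : temperature in units of the forward activation energy (kB*T/Ea)
    e_ba    : ratio of reverse to forward activation energies (Eb/Ea)
    """
    p_fwd = np.exp(-1.0 / theta_a)        # A -> B attempt success probability
    p_rev = np.exp(-e_ba / theta_a)       # B -> A attempt success probability
    is_b = np.zeros(n, dtype=bool)        # start with pure A
    for _ in range(steps):
        r = rng.random(n)
        to_b = ~is_b & (r < p_fwd)
        to_a = is_b & (r < p_rev)
        is_b = (is_b | to_b) & ~to_a
    n_b = is_b.sum()
    return n_b / (n - n_b)

# Ke -> 1 when Eba = 1 (at any temperature) and when theta_a >> 1
for e_ba, theta_a in [(0.5, 0.5), (1.0, 0.5), (2.0, 0.5), (2.0, 50.0)]:
    print(e_ba, theta_a, simulate_ke(e_ba, theta_a),
          "expected:", np.exp((e_ba - 1.0) / theta_a))
```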

  18. A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation

    CSIR Research Space (South Africa)

    Bidgood, Peter M

    2013-09-01

    Full Text Available ... in (1). The additional modulated terms are included to give a degree of functionality to balances whose characteristics are dependent on the sign of the applied load(s). In this paper, only the linear and quadratic terms are considered. Equation (1) ... MDOE methods are directed at obtaining an optimum balance loading scheme that can be applied to effectively determine the coefficients of a pre-defined calibration model: this model may be linear, quadratic or cubic. The most general of these models...

  19. A Monte Carlo pencil beam scanning model for proton treatment plan simulation using GATE/GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Grevillot, L; Freud, N; Sarrut, D [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, Lyon (France); Bertrand, D; Dessy, F, E-mail: loic.grevillot@creatis.insa-lyon.fr [IBA, B-1348, Louvain-la Neuve (Belgium)

    2011-08-21

    This work proposes a generic method for modeling scanned ion beam delivery systems, without simulation of the treatment nozzle and based exclusively on beam data library (BDL) measurements required for treatment planning systems (TPS). To this aim, new tools dedicated to treatment plan simulation were implemented in the Gate Monte Carlo platform. The method was applied to a dedicated nozzle from IBA for proton pencil beam scanning delivery. Optical and energy parameters of the system were modeled using a set of proton depth-dose profiles and spot sizes measured at 27 therapeutic energies. For further validation of the beam model, specific 2D and 3D plans were produced and then measured with appropriate dosimetric tools. Dose contributions from secondary particles produced by nuclear interactions were also investigated using field size factor experiments. Pristine Bragg peaks were reproduced with 0.7 mm range and 0.2 mm spot size accuracy. A 32 cm range spread-out Bragg peak with 10 cm modulation was reproduced with 0.8 mm range accuracy and a maximum point-to-point dose difference of less than 2%. A 2D test pattern consisting of a combination of homogeneous and high-gradient dose regions passed a 2%/2 mm gamma index comparison for 97% of the points. In conclusion, the generic modeling method proposed for scanned ion beam delivery systems was applicable to an IBA proton therapy system. The key advantage of the method is that it only requires BDL measurements of the system. The validation tests performed so far demonstrated that the beam model achieves clinical performance, paving the way for further studies toward TPS benchmarking. The method involves new sources that are available in the new Gate release V6.1 and could be further applied to other particle therapy systems delivering protons or other types of ions like carbon.

  20. A Monte Carlo pencil beam scanning model for proton treatment plan simulation using GATE/GEANT4.

    Science.gov (United States)

    Grevillot, L; Bertrand, D; Dessy, F; Freud, N; Sarrut, D

    2011-08-21

    This work proposes a generic method for modeling scanned ion beam delivery systems, without simulation of the treatment nozzle and based exclusively on beam data library (BDL) measurements required for treatment planning systems (TPS). To this aim, new tools dedicated to treatment plan simulation were implemented in the Gate Monte Carlo platform. The method was applied to a dedicated nozzle from IBA for proton pencil beam scanning delivery. Optical and energy parameters of the system were modeled using a set of proton depth-dose profiles and spot sizes measured at 27 therapeutic energies. For further validation of the beam model, specific 2D and 3D plans were produced and then measured with appropriate dosimetric tools. Dose contributions from secondary particles produced by nuclear interactions were also investigated using field size factor experiments. Pristine Bragg peaks were reproduced with 0.7 mm range and 0.2 mm spot size accuracy. A 32 cm range spread-out Bragg peak with 10 cm modulation was reproduced with 0.8 mm range accuracy and a maximum point-to-point dose difference of less than 2%. A 2D test pattern consisting of a combination of homogeneous and high-gradient dose regions passed a 2%/2 mm gamma index comparison for 97% of the points. In conclusion, the generic modeling method proposed for scanned ion beam delivery systems was applicable to an IBA proton therapy system. The key advantage of the method is that it only requires BDL measurements of the system. The validation tests performed so far demonstrated that the beam model achieves clinical performance, paving the way for further studies toward TPS benchmarking. The method involves new sources that are available in the new Gate release V6.1 and could be further applied to other particle therapy systems delivering protons or other types of ions like carbon.

  1. Digitized Onondaga Lake Dissolved Oxygen Concentrations and Model Simulated Values using Bayesian Monte Carlo Methods

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset is lake dissolved oxygen concentrations obtained from plots published by Gelda et al. (1996) and lake reaeration model simulated values using Bayesian...

  2. A MATLAB Package for Markov Chain Monte Carlo with a Multi-Unidimensional IRT Model

    Directory of Open Access Journals (Sweden)

    Yanyan Sheng

    2008-11-01

    Full Text Available Unidimensional item response theory (IRT) models are useful when each item is designed to measure some facet of a unified latent trait. In practical applications, items are not necessarily measuring the same underlying trait, and hence the more general multi-unidimensional model should be considered. This paper provides the requisite information and description of software that implements the Gibbs sampler for such models with two item parameters and a normal ogive form. The software developed is written in MATLAB as the package IRTmu2no. The package is flexible enough to allow a user the choice to simulate binary response data with multiple dimensions, set the number of total or burn-in iterations, specify starting values or prior distributions for model parameters, check convergence of the Markov chain, as well as obtain Bayesian fit statistics. Illustrative examples are provided to demonstrate and validate the use of the software package.

  3. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model

    DEFF Research Database (Denmark)

    Jensen, Anders Christian; Ditlevsen, Susanne; Kessler, Mathieu

    2012-01-01

    Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate m...... handle multidimensional nonlinear diffusions with large time scale separation. The estimation method is illustrated on simulated data....
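
    As an illustration of parameter estimation in a stochastic FitzHugh-Nagumo model, the sketch below simulates one common parametrisation by Euler-Maruyama and runs a plain random-walk Metropolis sampler on the time-scale parameter using an Euler pseudo-likelihood with both coordinates observed. This is a simplification for illustration only; it is not the estimation method of the cited work, and all parameter values and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_fhn(eps, a=0.7, b=0.8, current=0.5, sig_v=0.1, sig_w=0.05,
                 dt=0.01, n=5000):
    """Euler-Maruyama simulation of one common stochastic FitzHugh-Nagumo
    parametrisation (not necessarily the one used in the cited study)."""
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for k in range(n - 1):
        v[k + 1] = v[k] + (v[k] - v[k] ** 3 / 3 - w[k] + current) * dt \
            + sig_v * np.sqrt(dt) * rng.normal()
        w[k + 1] = w[k] + eps * (v[k] + a - b * w[k]) * dt \
            + sig_w * np.sqrt(dt) * rng.normal()
    return v, w

def log_lik(eps, v, w, a=0.7, b=0.8, current=0.5, sig_v=0.1, sig_w=0.05, dt=0.01):
    """Euler pseudo-likelihood, assuming both coordinates are observed."""
    mv = v[:-1] + (v[:-1] - v[:-1] ** 3 / 3 - w[:-1] + current) * dt
    mw = w[:-1] + eps * (v[:-1] + a - b * w[:-1]) * dt
    return float((-0.5 * ((v[1:] - mv) / (sig_v * np.sqrt(dt))) ** 2
                  - 0.5 * ((w[1:] - mw) / (sig_w * np.sqrt(dt))) ** 2).sum())

v_obs, w_obs = simulate_fhn(eps=0.08)

# random-walk Metropolis for the time-scale parameter epsilon (flat prior on eps > 0)
eps, ll, chain = 0.2, log_lik(0.2, v_obs, w_obs), []
for _ in range(5000):
    prop = eps + rng.normal(0.0, 0.01)
    if prop > 0:
        ll_prop = log_lik(prop, v_obs, w_obs)
        if np.log(rng.random()) < ll_prop - ll:
            eps, ll = prop, ll_prop
    chain.append(eps)
print("posterior mean of eps:", np.mean(chain[1000:]))
```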

  4. Monte Carlo dosimetry for {sup 125}I and {sup 103}Pd eye plaque brachytherapy with various seed models

    Energy Technology Data Exchange (ETDEWEB)

    Thomson, R. M.; Rogers, D. W. O. [Ottawa Carleton Institute for Physics, Carleton University Campus, Ottawa, Ontario K1S 5B6 (Canada)

    2010-01-15

    Purpose: Dose distributions are calculated for various models of {sup 125}I and {sup 103}Pd seeds in the standardized plaques of the Collaborative Ocular Melanoma Study (COMS). The sensitivity to seed model of dose distributions and dose distributions relative to TG-43 are investigated. Methods: Monte Carlo simulations are carried out with the EGSnrc user-code BrachyDose. Brachytherapy seeds and eye plaques are fully modeled. Simulations of one seed in the central slot of a 20 mm Modulay (gold alloy) plaque backing with and without the Silastic (silicone polymer) insert and of a 16 mm fully loaded Modulay/Silastic plaque are performed. Dose distributions are compared to those calculated under TG-43 assumptions, i.e., ignoring the effects of the plaque backing and insert and interseed attenuation. Three-dimensional dose distributions for different {sup 125}I and {sup 103}Pd seed models are compared via depth-dose curves, isodose contours, and tabulation of doses at points of interest in the eye. Results are compared to those of our recent BrachyDose study for COMS plaques containing model 6711 ({sup 125}I) or 200 ({sup 103}Pd) seeds [R. M. Thomson et al., Med. Phys. 35, 5530-5543 (2008)]. Results: Along the central axis of a plaque containing one seed, variations of less than 1% are seen in the effect of the Modulay backing alone for different seed models; for the Modulay/Silastic combination, variations are 2%. For a 16 mm plaque fully loaded with {sup 125}I ({sup 103}Pd) seeds, dose decreases relative to TG-43 doses are 11%-12% (19%-20%) and 14%-15% (20%) at distances of 0.5 and 1 cm from the inner sclera along the plaque's central axis, respectively. For the same prescription dose, doses at points of interest vary by up to 8% with seed model. Doses to critical normal structures are lower for all {sup 103}Pd seed models than for {sup 125}I with the possible exception of the sclera adjacent to the plaque; scleral doses vary with seed model and are not always

  5. Verification of a Monte Carlo model of the Missouri S&T reactor

    Science.gov (United States)

    Richardson, Brad Paul

    The purpose of this research is to ensure that an MCNP model of the Missouri S&T reactor produces accurate results so that it may be used to predict the effects of some desired upgrades to the reactor. The desired upgrades are an increase in licensed power from 200 kW to 400 kW, and the installation of a secondary cooling system to prevent heating of the pool. This was performed by comparing simulations performed using the model with experiments performed using the reactor. The experiments performed were: the approach-to-criticality method of predicting the critical control rod height, measurement of the axial flux profile, and measurement of the moderator temperature and void coefficients of reactivity. The results of these experiments and results from the simulation show that the model produces a similar axial flux profile, and that it models the void and temperature coefficients of reactivity well. The model does, however, over-predict the criticality of the core, such that it predicts a lower critical rod height and a keff greater than one when simulating conditions in which the reactor was at a stable power. It is assumed that this is due to the model using fuel compositions from when the fuel was new, while in reality the reactor has been operating with this fuel for nearly 20 years. It has therefore been concluded that the fuel composition should be updated by performing a burnup analysis, and that an accurate heat transfer and fluid flow analysis be performed to better represent the temperature profile before the model is used to simulate the effects of the desired upgrades.

  6. Applying the effort-reward imbalance model to household and family work: a population-based study of German mothers.

    Science.gov (United States)

    Sperlich, Stefanie; Peter, Richard; Geyer, Siegfried

    2012-01-06

    This paper reports on results of a newly developed questionnaire for the assessment of effort-reward imbalance (ERI) in unpaid household and family work. Using a cross-sectional population-based survey of German mothers (n = 3129) the dimensional structure of the theoretical ERI model was validated by means of Confirmatory Factor Analysis (CFA). Analyses of Variance were computed to examine relationships between ERI and social factors and health outcomes. CFA revealed good psychometric properties indicating that the subscale 'effort' is based on one latent factor and the subscale 'reward' is composed of four dimensions: 'intrinsic value of family and household work', 'societal esteem', 'recognition from the partner', and 'affection from the child(ren)'. About 19.3% of mothers perceived lack of reciprocity and 23.8% showed high rates of overcommitment in terms of inability to withdraw from household and family obligations. Socially disadvantaged mothers were at higher risk of ERI, in particular with respect to the perception of low societal esteem. Gender inequality in the division of household and family work and work-family conflict accounted most for ERI in household and family work. Analogous to ERI in paid work we could demonstrate that ERI affects self-rated health, somatic complaints, mental health and, to some extent, hypertension. The newly developed questionnaire demonstrates satisfactory validity and promising results for extending the ERI model to household and family work.

  7. Applying the effort-reward imbalance model to household and family work: a population-based study of German mothers

    Directory of Open Access Journals (Sweden)

    Sperlich Stefanie

    2012-01-01

    Full Text Available Abstract Background This paper reports on results of a newly developed questionnaire for the assessment of effort-reward imbalance (ERI) in unpaid household and family work. Methods: Using a cross-sectional population-based survey of German mothers (n = 3129) the dimensional structure of the theoretical ERI model was validated by means of Confirmatory Factor Analysis (CFA). Analyses of Variance were computed to examine relationships between ERI and social factors and health outcomes. Results CFA revealed good psychometric properties indicating that the subscale 'effort' is based on one latent factor and the subscale 'reward' is composed of four dimensions: 'intrinsic value of family and household work', 'societal esteem', 'recognition from the partner', and 'affection from the child(ren)'. About 19.3% of mothers perceived lack of reciprocity and 23.8% showed high rates of overcommitment in terms of inability to withdraw from household and family obligations. Socially disadvantaged mothers were at higher risk of ERI, in particular with respect to the perception of low societal esteem. Gender inequality in the division of household and family work and work-family conflict accounted most for ERI in household and family work. Analogous to ERI in paid work we could demonstrate that ERI affects self-rated health, somatic complaints, mental health and, to some extent, hypertension. Conclusions The newly developed questionnaire demonstrates satisfactory validity and promising results for extending the ERI model to household and family work.

  8. Monte Carlo modeling of a clinical PET scanner by using the GATE dedicated computer code; Modelagem Monte Carlo de um PET Scanner clinico utilizando o codigo dedicado GATE

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Igor Fagner; Lima, Fernando Roberto de Andrade, E-mail: falima@cnen.gov.b [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear; Universidade de Pernambuco (UPE), Recife, PE (Brazil). Escola Politecnica; Vieira, Jose Wilson [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear; Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2011-10-26

    This paper describes, in as much detail as possible, the GATE simulation architecture involved in the 4D modeling of a General Electric PET scanner, the Advance. Data available in the literature on the configuration of the GE PET scanner being modelled were used. The results obtained, namely the creation of the 3D components of the scanner and the simulation of 4D phenomena such as source decay and gantry rotation, demonstrate the potential of the tool for modelling emission tomographs.

  9. Model recommendations meet management reality: implementation and evaluation of a network-informed vaccination effort for endangered Hawaiian monk seals

    Science.gov (United States)

    Barbieri, Michelle M.; Murphy, Samantha; Baker, Jason D.; Harting, Albert L.; Craft, Meggan E.; Littnan, Charles L.

    2018-01-01

    Where disease threatens endangered wildlife populations, substantial resources are required for management actions such as vaccination. While network models provide a promising tool for identifying key spreaders and prioritizing efforts to maximize efficiency, population-scale vaccination remains rare, providing few opportunities to evaluate performance of model-informed strategies under realistic scenarios. Because the endangered Hawaiian monk seal could be heavily impacted by disease threats such as morbillivirus, we implemented a prophylactic vaccination programme. We used contact networks to prioritize vaccinating animals with high contact rates. We used dynamic network models to simulate morbillivirus outbreaks under real and idealized vaccination scenarios. We then evaluated the efficacy of model recommendations in this real-world vaccination project. We found that deviating from the model recommendations decreased efficiency, requiring 44% more vaccinations to achieve a given decrease in outbreak size. However, we gained protection more quickly by vaccinating available animals rather than waiting to encounter priority seals. This work demonstrates the value of network models, but also makes trade-offs clear. If vaccines were limited but time was ample, vaccinating only priority animals would maximize herd protection. However, where time is the limiting factor, vaccinating additional lower-priority animals could more quickly protect the population. PMID:29321294
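
    The trade-off described above can be illustrated with a generic network simulation. The sketch below (assuming the networkx package is available) compares mean outbreak sizes on a synthetic contact network when vaccinating high-degree nodes versus randomly chosen ones; the network, the SIR parameters and the vaccination budget are invented and unrelated to the monk seal data or the published model.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)

def outbreak_size(g, vaccinated, beta=0.1, gamma=0.2, n_runs=200):
    """Mean final size of a discrete-time SIR epidemic seeded at a random
    unvaccinated node; vaccinated nodes can neither be infected nor transmit."""
    pool = [node for node in g if node not in vaccinated]
    sizes = []
    for _ in range(n_runs):
        seed = int(rng.choice(pool))
        infected, ever_infected, immune = {seed}, {seed}, set(vaccinated)
        while infected:
            new_inf = set()
            for i in infected:
                for j in g.neighbors(i):
                    if j not in ever_infected and j not in immune and rng.random() < beta:
                        new_inf.add(j)
            ever_infected |= new_inf
            infected = {i for i in infected if rng.random() > gamma} | new_inf
        sizes.append(len(ever_infected))
    return float(np.mean(sizes))

# heterogeneous contact network as a stand-in for observed animal associations
g = nx.barabasi_albert_graph(300, 2, seed=0)
n_vax = 60
priority = set(sorted(g, key=g.degree, reverse=True)[:n_vax])     # high-contact targets
random_vax = set(int(x) for x in rng.choice(list(g.nodes), n_vax, replace=False))

print("no vaccination    :", outbreak_size(g, set()))
print("random vaccination:", outbreak_size(g, random_vax))
print("degree-targeted   :", outbreak_size(g, priority))
```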

  10. Diagrammatic Monte Carlo for the weak-coupling expansion of non-Abelian lattice field theories: Large-N U (N ) ×U (N ) principal chiral model

    Science.gov (United States)

    Buividovich, P. V.; Davody, A.

    2017-12-01

    We develop numerical tools for diagrammatic Monte Carlo simulations of non-Abelian lattice field theories in the t'Hooft large-N limit based on the weak-coupling expansion. First, we note that the path integral measure of such theories contributes a bare mass term in the effective action which is proportional to the bare coupling constant. This mass term renders the perturbative expansion infrared-finite and allows us to study it directly in the large-N and infinite-volume limits using the diagrammatic Monte Carlo approach. On the exactly solvable example of a large-N O (N ) sigma model in D =2 dimensions we show that this infrared-finite weak-coupling expansion contains, in addition to powers of bare coupling, also powers of its logarithm, reminiscent of resummed perturbation theory in thermal field theory and resurgent trans-series without exponential terms. We numerically demonstrate the convergence of these double series to the manifestly nonperturbative dynamical mass gap. We then develop a diagrammatic Monte Carlo algorithm for sampling planar diagrams in the large-N matrix field theory, and apply it to study this infrared-finite weak-coupling expansion for large-N U (N ) ×U (N ) nonlinear sigma model (principal chiral model) in D =2 . We sample up to 12 leading orders of the weak-coupling expansion, which is the practical limit set by the increasingly strong sign problem at high orders. Comparing diagrammatic Monte Carlo with conventional Monte Carlo simulations extrapolated to infinite N , we find a good agreement for the energy density as well as for the critical temperature of the "deconfinement" transition. Finally, we comment on the applicability of our approach to planar QCD at zero and finite density.

  11. Critical dynamics of the Potts model: short-time Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, Roberto da; Drugowich de Felicio, J.R.

    2004-01-01

    We calculate the new dynamic exponent θ of the 4-state Potts model, using short-time simulations. Our estimates θ1=-0.0471(33) and θ2=-0.0429(11) obtained by following the behavior of the magnetization or measuring the evolution of the time correlation function of the magnetization corroborate the conjecture by Okano et al. [Nucl. Phys. B 485 (1997) 727]. In addition, these values agree with previous estimate of the same dynamic exponent for the two-dimensional Ising model with three-spin interactions in one direction, that is known to belong to the same universality class as the 4-state Potts model. The anomalous dimension of initial magnetization x0=zθ+β/ν is calculated by an alternative way that mixes two different initial conditions. We have also estimated the values of the static exponents β and ν. They are in complete agreement with the pertinent results of the literature

  12. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values β < 1, β ≈ 1 and β > 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic life-time η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β>1, characteristic of wear-out behaviour, to β<1, characteristic of early-life failure, depending on the degree of
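
    The workflow described above (sample uncertain inputs, compute times-to-failure, fit a Weibull distribution) is easy to sketch. The example below uses an invented power-law corrosion-growth model with assumed parameter distributions, so the resulting β and η are purely illustrative and not those of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Power-law corrosion growth: depth(t) = A * t**n; failure occurs when the depth
# reaches the critical wall thickness x_c.  A and n are uncertain inputs
# (illustrative values, not those of the paper).
n_sim = 1000
x_c = 5.0                                                     # mm
A = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=n_sim)    # mm / year**n
n_exp = rng.normal(0.6, 0.08, n_sim)

t_fail = (x_c / A) ** (1.0 / n_exp)                           # years to reach x_c

# fit a two-parameter Weibull (location fixed at zero) to the simulated lifetimes
beta, _, eta = stats.weibull_min.fit(t_fail, floc=0)
print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.1f} years")
```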

  13. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    Science.gov (United States)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
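
    A toy version of the numerical effect discussed above can be shown with a single nonlinear reservoir: a daily fixed-step explicit Euler scheme distorts the storage response relative to an adaptive, error-controlled integrator. The model, forcing and parameters below are invented for illustration and are not those of the case study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# single nonlinear reservoir: dS/dt = P(t) - k * S**a   (illustrative model)
k, a = 0.8, 1.6
precip = lambda t: 20.0 * np.exp(-((t - 2.0) ** 2))      # storm pulse, mm/day

def rhs(t, s):
    return precip(t) - k * max(s[0], 0.0) ** a

# fixed-step explicit Euler with a daily step (cheap but inaccurate)
dt, t_end = 1.0, 10.0
s_euler, series = 1.0, []
for t in np.arange(0.0, t_end, dt):
    s_euler += dt * (precip(t) - k * max(s_euler, 0.0) ** a)
    series.append(s_euler)                               # state at t + dt

# adaptive, error-controlled reference solution
ref = solve_ivp(rhs, (0.0, t_end), [1.0], rtol=1e-8, atol=1e-10,
                t_eval=np.arange(dt, t_end + dt, dt))

for t, se, sr in zip(ref.t, series, ref.y[0]):
    print(f"t={t:4.1f}  Euler={se:8.3f}  adaptive={sr:8.3f}")
```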

  14. Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides

    Science.gov (United States)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    In addition to being a powerful clinical tool, Positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of disease. However, dosimetric characteristics in small animal PET imaging are usually overlooked, though the radiation dose may not be negligible. In this work, we constructed 17 mouse models of different body mass and size based on the realistic four-dimensional MOBY mouse model. Particle (photons, electrons and positrons) transport using the Monte Carlo method was performed to calculate the absorbed fractions and S-values for eight positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Y-86 and I-124). Among these radionuclides, O-15 emits positrons with high energy and frequency and produces the highest self-absorbed S-values in each organ, while Y-86 emits γ-rays with high energy and frequency which results in the highest cross-absorbed S-values for non-neighbouring organs. Differences between S-values for self-irradiated organs were between 2% and 3%/g difference in body weight for most organs. For organs irradiating other organs outside the splanchnocoele (i.e. brain, testis and bladder), differences between S-values were lower than 1%/g. These appealing results can be used to assess variations in small animal dosimetry as a function of total-body mass. The generated database of S-values for various radionuclides can be used in the assessment of radiation dose to mice from different radiotracers in small animal PET experiments, thus offering quantitative figures for comparative dosimetry research in small animal models.

  15. Copula Gaussian graphical models with penalized ascent Monte Carlo EM algorithm

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    2015-01-01

    Typical data that arise from surveys, experiments, and observational studies include continuous and discrete variables. In this article, we study the interdependence among a mixed (continuous, count, ordered categorical, and binary) set of variables via graphical models. We propose an ℓ1-penalized

  16. A Monte Carlo Examination of an MTMM Model with Planned Incomplete Data Structures.

    Science.gov (United States)

    Bunting, Brendan P.; Adamson, Gary; Mulhall, Peter K.

    2002-01-01

    Studied planned incomplete data designs for the purpose of substantially reducing the amount of data required for multitrait-multimethod models. Simulations studied the effectiveness of Listwise Deletion, Pairwise Deletion, and the expectation maximization (EM) algorithm. Results indicate that EM is generally precise and efficient. (SLD)

  17. Validation of GEANT4 Monte Carlo models with a highly granular scintillator-steel hadron calorimeter

    Czech Academy of Sciences Publication Activity Database

    Adloff, C.; Blaha, J.; Blaising, J.J.; Cvach, Jaroslav; Gallus, Petr; Havránek, Miroslav; Janata, Milan; Kvasnička, Jiří; Lednický, Denis; Marčišovský, Michal; Polák, Ivo; Popule, Jiří; Tomášek, Lukáš; Tomášek, Michal; Růžička, Pavel; Šícho, Petr; Smolík, Jan; Vrba, Václav; Zálešák, Jaroslav

    2013-01-01

    Roč. 8, Jul (2013), s. 1-33 ISSN 1748-0221 Institutional support: RVO:68378271 Keywords : interaction of radiation with matter * calorimeter methods * detector modelling and simulations Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.526, year: 2013

  18. Monte Carlo estimation for nonlinear non-Gaussian state space models

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2007-01-01

    We develop a proposal or importance density for state space models with a nonlinear non-Gaussian observation vector y ∼ p(y|θ) and an unobserved linear Gaussian signal vector θ ∼ p(θ). The proposal density is obtained from the Laplace approximation of the smoothing density p(θ|y). We present efficient

  19. Markov models for digraph panel data : Monte Carlo-based derivative estimation

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A. B.

    2007-01-01

    A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix-that is, the

  20. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    Science.gov (United States)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
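
    The core ingredient of such a fit is the Lyman-Kutcher-Burman (LKB) NTCP model combined with a binomial likelihood over patients. The sketch below fits TD50, m and n to a synthetic cohort by maximum likelihood; it omits the multiple-endpoint handling and the Monte Carlo confidence-interval machinery described in the abstract, and the cohort, DVHs and starting values are invented assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)

def geud(dose_bins, vol_fracs, n):
    """Generalised EUD from a differential DVH (dose bins, fractional volumes)."""
    return (vol_fracs * dose_bins ** (1.0 / n)).sum() ** n

def ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit response in the generalised EUD."""
    return norm.cdf((eud - td50) / (m * td50))

# synthetic cohort (illustrative, not clinical data)
n_pat = 200
true_td50, true_m, true_n = 50.0, 0.15, 0.8
dose_bins = np.linspace(5, 80, 16)
dvhs = rng.dirichlet(np.ones(16), size=n_pat)                 # random fractional volumes
euds = np.array([geud(dose_bins, v, true_n) for v in dvhs])
tox = rng.random(n_pat) < ntcp(euds, true_td50, true_m)       # simulated toxicity outcomes

# maximum-likelihood fit of (TD50, m, n) with a binomial likelihood
def neg_log_lik(params):
    td50, m, n = params
    if td50 <= 0 or m <= 0 or n <= 0:
        return np.inf
    e = np.array([geud(dose_bins, v, n) for v in dvhs])
    p = np.clip(ntcp(e, td50, m), 1e-9, 1 - 1e-9)
    return -(tox * np.log(p) + (~tox) * np.log(1 - p)).sum()

fit = minimize(neg_log_lik, x0=[55.0, 0.2, 1.0], method="Nelder-Mead")
print("fitted TD50, m, n:", fit.x)
```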

  1. The FLUKA Monte Carlo code coupled with the local effect model for biological calculations in carbon ion therapy

    CERN Document Server

    Mairani, A; Kraemer, M; Sommerer, F; Parodi, K; Scholz, M; Cerutti, F; Ferrari, A; Fasso, A

    2010-01-01

    Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum fur Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed C-12 ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-d...

  2. Calculation of effective dose for nuclear medicine applications using an image-base body model and the Monte Carlo Method

    International Nuclear Information System (INIS)

    Yorulmaz, N.; Bozkurt, A.

    2009-01-01

    In nuclear medicine applications, the aim is to obtain diagnostic information about the organs and tissues of the patient with the help of some radiopharmaceuticals administered to him/her. Because some organs of the patient other than those under investigation will also be exposed to the radiation, it is important for radiation risk assessment to know how much radiation is received by the vital or radio-sensitive organs or tissues. In this study, an image-based body model created from the realistic images of a human is used together with the Monte Carlo code MCNP to compute the radiation doses absorbed by organs and tissues for some nuclear medicine procedures at gamma energies of 0.01, 0.015, 0.02, 0.03, 0.05, 0.1, 0.2, 0.5 and 1 MeV. Later, these values are used in conjunction with radiation weighting factors and organ weighting factors to estimate the effective dose for each diagnostic application.

  3. Structure of AgxNa1-xPO3 glasses by neutron diffraction and reverse Monte Carlo modelling

    International Nuclear Information System (INIS)

    Hall, Andreas; Swenson, Jan; Karlsson, Christian; Adams, Stefan; Bowron, Daniel T

    2007-01-01

    We have performed structural studies of mixed mobile ion phosphate glasses AgxNa1-xPO3 using diffraction experiments and reverse Monte Carlo simulations. This glass system is particularly interesting as a model system for investigations of the mixed mobile ion effect, due to its anomalously low magnitude in the system. As for previously studied mixed alkali phosphate glasses, with a much more pronounced mixed mobile ion effect, we find no substantial structural alterations of the phosphorus-oxygen network and the local coordination of the mobile cations. Furthermore, the mobile Ag+ and Na+ ions are randomly mixed with no detectable preference for either similar or dissimilar pairs of cations. However, in contrast to mixed mobile ion systems with a very pronounced mixed mobile ion effect, the two types of mobile ions have, in this case, very similar local environments. For all the studied glass compositions the average Ag-O and Na-O distances in the first coordination shell are determined to be 2.5 ± 0.1 and 2.5 ± 0.1 Å, and the corresponding average coordination numbers are approximately 3.2 and 3.7, respectively. The similar local coordinations of the two types of mobile ions suggest that the energy mismatch for a Na+ ion to occupy a site that previously has been occupied by an Ag+ ion (and vice versa) is low, and that this low energy mismatch is responsible for the anomalously weak mixed mobile ion effect.

  4. Construction of a computational exposure model for dosimetric calculations using the EGS4 Monte Carlo code and voxel phantoms

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson

    2004-07-01

    The MAX phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiations, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental purposes of the radiation protection. This study presents a methodology used to build a new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of the algorithm of one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin, and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some results of radiation protection, in the form of conversion coefficients between equivalent dose (or effective dose) and free air-kerma for external photon irradiation are presented and discussed. Comparing the results presented with similar data from other human phantoms it is possible to conclude that the coupling MAX/EGS4 is satisfactory for the calculation of the equivalent dose in radiation protection. (author)

  5. Monte Carlo methods

    CERN Document Server

    Kalos, Melvin H

    2008-01-01

    This introduction to Monte Carlo methods seeks to identify and study the unifying elements that underlie their effective application. Initial chapters provide a short treatment of the probability and statistics needed as background, enabling those without experience in Monte Carlo techniques to apply these ideas to their research.The book focuses on two basic themes: The first is the importance of random walks as they occur both in natural stochastic systems and in their relationship to integral and differential equations. The second theme is that of variance reduction in general and importance sampling in particular as a technique for efficient use of the methods. Random walks are introduced with an elementary example in which the modeling of radiation transport arises directly from a schematic probabilistic description of the interaction of radiation with matter. Building on this example, the relationship between random walks and integral equations is outlined
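
    To illustrate the variance-reduction theme mentioned above, here is a small importance-sampling example: estimating a Gaussian tail probability by drawing from a shifted proposal and reweighting by the density ratio. The numbers are arbitrary and serve only to show why the reweighted estimator is far more efficient than plain sampling in the tail.

```python
import numpy as np

rng = np.random.default_rng(9)

# estimate p = P(X > 4) for X ~ N(0, 1); the true value is about 3.2e-5
n = 100_000

# plain Monte Carlo: almost no samples land in the tail, so the variance is huge
x = rng.normal(size=n)
plain = (x > 4).mean()

# importance sampling: draw from N(4, 1) and reweight by the density ratio phi(y)/q(y)
y = rng.normal(4.0, 1.0, size=n)
weights = np.exp(-0.5 * y ** 2) / np.exp(-0.5 * (y - 4.0) ** 2)
is_est = ((y > 4) * weights).mean()

print("plain MC estimate   :", plain)
print("importance sampling :", is_est)
```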

  6. The FLUKA Monte Carlo code coupled with the local effect model for biological calculations in carbon ion therapy

    International Nuclear Information System (INIS)

    Mairani, A; Brons, S; Parodi, K; Cerutti, F; Ferrari, A; Sommerer, F; Fasso, A; Kraemer, M; Scholz, M

    2010-01-01

    Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum fuer Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed 12 C ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-dose distributions in water used as input basic data in TRiP98 and the FLUKA recalculated ones. On the other hand, taking into account the differences in the physical beam modeling, the FLUKA-based biological calculations of the CHO cell survival profiles are found in good agreement with the experimental data as well with the TRiP98 predictions. The developed approach that combines the MC transport/interaction capability with the same biological model as in the treatment planning system (TPS) will be used at HIT to support validation/improvement of both dose and RBE-weighted dose calculations performed by the analytical TPS.

  7. Neutron shielding calculations in a proton therapy facility based on Monte Carlo simulations and analytical models: Criterion for selecting the method of choice

    International Nuclear Information System (INIS)

    Titt, U.; Newhauser, W. D.

    2005-01-01

    Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded and deterministic models are used at those locations instead. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
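    The hybrid selection rule described above can be sketched as a simple filter: Monte Carlo estimates are kept where their relative statistical uncertainty falls below the rejection criterion, and the analytical model is used elsewhere. The example below illustrates this logic; the 10% threshold follows the study, but all dose and uncertainty values are invented placeholders.

```python
# Sketch of the hybrid Monte Carlo / analytical selection rule described above.
# All numerical values are illustrative, not from the cited study.

REJECTION_CRITERION = 0.10  # maximum accepted relative (1-sigma) uncertainty

def select_dose(mc_dose, mc_rel_uncertainty, analytical_dose):
    """Return (value, source) for each receptor location."""
    results = []
    for d_mc, u, d_an in zip(mc_dose, mc_rel_uncertainty, analytical_dose):
        if u <= REJECTION_CRITERION:
            results.append((d_mc, "Monte Carlo"))      # converged: keep MC estimate
        else:
            results.append((d_an, "analytical"))       # not converged: fall back
    return results

mc_dose            = [1.2e-3, 4.0e-5, 7.5e-6]   # Sv (illustrative)
mc_rel_uncertainty = [0.03,   0.08,   0.45]     # last location: too few histories
analytical_dose    = [1.5e-3, 9.0e-5, 2.0e-5]   # Sv (illustrative)

for value, source in select_dose(mc_dose, mc_rel_uncertainty, analytical_dose):
    print(f"{value:.2e} Sv  ({source})")
```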

  8. Porewater chemistry experiment at Mont Terri rock laboratory. Reactive transport modelling including bacterial activity

    International Nuclear Information System (INIS)

    Tournassat, Christophe; Gaucher, Eric C.; Leupin, Olivier X.; Wersin, Paul

    2010-01-01

    Document available in extended abstract form only. An in-situ test in the Opalinus Clay formation, termed the Porewater Chemistry (PC) experiment, was run for a period of five years. It was based on the concept of diffusive equilibration, whereby traced water with a composition close to that expected in the formation was continuously circulated and monitored in a packed-off borehole. The main original focus was to obtain reliable data on the pH/pCO2 of the pore water, but because of unexpected microbially-induced redox reactions, the objective was changed to elucidating the biogeochemical processes occurring in the borehole and understanding their impact on pH and pCO2 in the low-permeability clay formation. The biologically perturbed chemical evolution of the PC experiment was simulated with reactive transport models. The aim of this modelling exercise was to develop a 'minimal' model able to reproduce the chemical evolution of the PC experiment, i.e. the evolution of dissolved inorganic and organic compounds (organic carbon, dissolved inorganic carbon, etc.) that are coupled with each other through the simultaneous occurrence of biological transformation of dissolved or solid compounds, in-diffusion and out-diffusion of dissolved species, and precipitation/dissolution of minerals (in the borehole and in the formation). An accurate description of the initial chemical conditions in the surrounding formation, together with simplified kinetic rules mimicking the different phases of bacterial activity, made it possible to reproduce the evolution of all main measured parameters (e.g. pH, TOC). Analyses from the overcoring, together with these simulations, demonstrate the high buffering capacity of the Opalinus Clay with respect to chemical perturbations caused by bacterial activity. This pH buffering capacity is mainly attributed to the carbonate system as well as to the reactivity of the clay surfaces. Glycerol leaching from the pH electrode might be the primary organic source responsible for

  9. Overview of past, ongoing and future efforts of the integrated modeling of global change for Northern Eurasia

    Science.gov (United States)

    Monier, Erwan; Kicklighter, David; Sokolov, Andrei; Zhuang, Qianlai; Melillo, Jerry; Reilly, John

    2016-04-01

    Northern Eurasia is both a major player in the global carbon budget (it includes roughly 70% of the Earth's boreal forest and more than two-thirds of the Earth's permafrost) and a region that has experienced dramatic climate change (increases in temperature, growing season length, floods and droughts) over the past century. Northern Eurasia has also undergone significant land-use change, driven both by human activity (including deforestation, expansion of agricultural lands and urbanization) and by natural disturbances (such as wildfires and insect outbreaks). These large environmental and socioeconomic impacts have major implications for the carbon cycle in the region. Northern Eurasia is made up of a diverse set of ecosystems that range from tundra to forests, with significant areas of croplands and pastures as well as deserts, along with major urban areas. As such, it represents a complex system that poses substantial challenges for the modeling community. In this presentation, we provide an overview of past, ongoing and possible future efforts in the integrated modeling of global change for Northern Eurasia. We review the variety of existing modeling approaches for investigating specific components of Earth system dynamics in the region. While a limited number of studies try to integrate various aspects of the Earth system (through scale, teleconnections or processes), we point out that there are few systematic analyses of the various feedbacks within the Earth system (between components, regions or scales). As a result, the relative importance of such feedbacks is poorly known, and it is unclear how relevant to policy current studies are when they fail to account for these feedbacks. We review the role of Earth system models and their advantages and limitations compared with detailed single-component models. We further introduce the human activity system (global trade, economic models, demographic models and so on) and the need for coupled human/Earth system models

  10. A Monte Carlo-tuned model of the flow in the NorthGRIP area

    DEFF Research Database (Denmark)

    Grinsted, Aslak; Dahl-Jensen, Dorthe

    2002-01-01

    The North Greenland Icecore Project (NorthGRIP) drill site was chosen in order to obtain a good Eemian record. At the present depth, 3001 m, the Eemian interstadial has yet to be seen. Clearly the flow in this area is poorly understood and needs further investigation. After a review of specific...... no Eemian is observed is a high basal melt rate (2.7 mm/a). The melting is a consequence of a higher geothermal heat flux than the expected 51 mW/m^2 of the Precambrian shield. From our analyses it is concluded that the geothermal heat flux at NorthGRIP is 98 mW/m^2. The high basal melt rate also gives rise...... to sliding at the bed. In addition to these results, an accumulation model has been established specifically for NorthGRIP. These results are essential for further modelling of the NorthGRIP flow and depth-age relationship...
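    The phrase "Monte Carlo-tuned" refers to sampling uncertain flow-model parameters at random and retaining the values that best reproduce the observations. A generic sketch of that idea is given below; the forward model is a stand-in stub, and the observation, parameter bounds and best-fit output are invented placeholders, not NorthGRIP results.

```python
import numpy as np

# Generic sketch of Monte Carlo tuning of a single flow-model parameter (here a
# basal melt rate) against an observation. `forward_model` is a placeholder for
# an actual ice-flow / depth-age calculation; all numbers are illustrative.

rng = np.random.default_rng(1)

def forward_model(melt_rate_mm_per_a):
    # Stand-in for a real layer-thickness or depth-age prediction.
    return 10.0 / (1.0 + melt_rate_mm_per_a)

observed = 4.0              # illustrative observed quantity
n_samples = 10_000
best_param, best_misfit = None, np.inf

for _ in range(n_samples):
    melt_rate = rng.uniform(0.0, 5.0)                       # sample the unknown parameter
    misfit = (forward_model(melt_rate) - observed) ** 2     # squared mismatch to the data
    if misfit < best_misfit:
        best_param, best_misfit = melt_rate, misfit

print(f"best-fitting melt rate: {best_param:.2f} mm/a (misfit {best_misfit:.3g})")
```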

  11. FOODCHAIN: a Monte Carlo model to estimate individual exposure to airborne pollutants via the foodchain pathway

    International Nuclear Information System (INIS)

    Dixon, E.; Holton, G.A.

    1984-01-01

    Ingestion of contaminated food due to the airborne release of radionuclides or chemical pollutants is a particularly difficult human exposure pathway to quantify. There are a number of important physical and biological processes, such as atmospheric deposition and plant uptake, to consider. These processes are approximated by techniques encoded in the computer program TEREX. Once estimates of pollutant concentrations are made, the problem can be reduced to computing exposure from ingestion of the food. Some assessments do not account for where the contaminated food is eaten, while others limit consumption to meat and vegetables produced within the affected area. While those approaches lead to an upper bound of exposure, a more realistic assumption is that if locally produced food is not sufficient to meet the dietary needs of the local populace, then uncontaminated food will be imported. This is the approach taken by the computer model FOODCHAIN. Exposures via ingestion of six basic types of food are modeled: beef, milk, grains, leafy vegetables, exposed produce (edible parts are exposed to atmospheric deposition), and protected produce (edible parts are protected from atmospheric deposition). Intake requirements for these six foods are based on a standard diet. Using TEREX-produced site-specific crop production values and food contamination values, FOODCHAIN randomly samples pollutant concentrations in each of the six foodstuffs in an iterative manner. Consumption of a particular food is weighted by a factor proportional to the total production of that food within the area studied. The exposures due to consumption of each of the six foodstuffs are summed to produce the total exposure for each randomly calculated diet.
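    The sampling logic described above can be sketched as a short Monte Carlo loop: for each randomly generated diet, sample a pollutant concentration in each of the six foodstuffs, weight consumption of locally produced (contaminated) food by the share of dietary need that local production can supply, and sum the resulting exposures. The sketch below assumes invented intake, local-production and concentration values; it is not the TEREX/FOODCHAIN code.

```python
import random

# Toy sketch of the sampling logic described above. All numbers are invented
# placeholders, not TEREX/FOODCHAIN data.

FOODS = ["beef", "milk", "grains", "leafy_vegetables",
         "exposed_produce", "protected_produce"]

annual_intake_kg = {"beef": 40, "milk": 110, "grains": 80,
                    "leafy_vegetables": 20, "exposed_produce": 40,
                    "protected_produce": 50}

# Fraction of dietary need met by local (potentially contaminated) production;
# the remainder is assumed to be imported, uncontaminated food.
local_fraction = {"beef": 0.3, "milk": 0.8, "grains": 0.1,
                  "leafy_vegetables": 0.9, "exposed_produce": 0.6,
                  "protected_produce": 0.6}

# Mean pollutant concentration in each food, Bq/kg (illustrative)
mean_concentration = {"beef": 5.0, "milk": 2.0, "grains": 1.0,
                      "leafy_vegetables": 8.0, "exposed_produce": 6.0,
                      "protected_produce": 0.5}

def sample_diet_exposure(rng):
    exposure = 0.0
    for food in FOODS:
        # Randomly sampled concentration for this diet realization
        conc = rng.lognormvariate(0.0, 0.5) * mean_concentration[food]
        exposure += annual_intake_kg[food] * local_fraction[food] * conc
    return exposure  # Bq ingested per year

rng = random.Random(42)
samples = [sample_diet_exposure(rng) for _ in range(10_000)]
print(f"mean annual intake: {sum(samples) / len(samples):.1f} Bq")
```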

  12. Monte Carlo radiation transport: A revolution in science

    International Nuclear Information System (INIS)