WorldWideScience

Sample records for modeling effort monte

  1. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; estimation is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction and, as the term indicates, a prediction never becomes an actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study the empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  2. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
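
    As an illustration of the transformation the abstract refers to, the following Python sketch numerically checks the scalar Hubbard-Stratonovich identity e^(a^2/2) = E[e^(sigma*a)] for sigma ~ N(0,1), which is the one-variable analogue of linearizing a quadratic (two-body) term with an auxiliary field; the coupling value and sample size are arbitrary choices, not taken from the paper.

        import numpy as np

        # Scalar Hubbard-Stratonovich identity: exp(a^2/2) equals the Gaussian
        # average of exp(sigma * a) over an auxiliary field sigma ~ N(0, 1).
        rng = np.random.default_rng(0)
        a = 1.3                                 # arbitrary coupling strength
        sigma = rng.standard_normal(1_000_000)  # auxiliary-field samples
        lhs = np.exp(0.5 * a**2)
        rhs = np.exp(sigma * a).mean()
        print(f"exact {lhs:.4f}  Monte Carlo {rhs:.4f}")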

  3. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
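
    A minimal sketch of the Monte Carlo-Euler idea in one dimension (the paper works with infinite-dimensional HJM dynamics): an Euler-Maruyama discretization of geometric Brownian motion, whose exact mean is known, together with the statistical error term the abstract mentions. All parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        mu, sig, x0, T = 0.05, 0.2, 1.0, 1.0     # assumed toy parameters
        n_paths, n_steps = 100_000, 50
        dt = T / n_steps

        x = np.full(n_paths, x0)
        for _ in range(n_steps):
            dw = rng.standard_normal(n_paths) * np.sqrt(dt)
            x += mu * x * dt + sig * x * dw      # Euler-Maruyama step

        est = x.mean()
        stat = 1.96 * x.std(ddof=1) / np.sqrt(n_paths)  # 95% statistical error
        print(f"MC {est:.4f} +/- {stat:.4f}, exact {x0 * np.exp(mu * T):.4f}")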

  4. Statistical Modeling Efforts for Headspace Gas

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Brian Phillip [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-17

    The purpose of this document is to describe the statistical modeling effort for gas concentrations in WIPP storage containers. The concentration (in ppm) of CO2 in the headspace volume of standard waste box (SWB) 68685 is shown. A Bayesian approach and an adaptive Metropolis-Hastings algorithm were used.
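
    The record names a Bayesian approach with an adaptive Metropolis-Hastings algorithm; the sketch below shows a plain (non-adaptive) random-walk Metropolis-Hastings sampler for the posterior mean of a headspace concentration. The data values, the flat prior, and the assumed 10 ppm measurement noise are all hypothetical, not LANL data.

        import numpy as np

        rng = np.random.default_rng(2)
        data = np.array([412.0, 398.5, 405.2, 420.1, 409.8])  # hypothetical ppm readings

        def log_post(mu):
            # flat prior on mu; Gaussian likelihood with assumed sd = 10 ppm
            return -0.5 * np.sum((data - mu)**2) / 10.0**2

        chain = np.empty(20_000)
        mu, lp = 400.0, log_post(400.0)
        for i in range(chain.size):
            prop = mu + rng.normal(scale=5.0)          # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
                mu, lp = prop, lp_prop
            chain[i] = mu
        print(chain[5000:].mean(), chain[5000:].std())  # posterior mean and sd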

  5. Monte Carlo exploration of warped Higgsless models

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu

    2004-10-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2)_L x SU(2)_R x U(1)_{B-L} gauge group in an AdS_5 bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, ≈ 10 TeV, in W_L^+ W_L^- elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)

  6. Monte Carlo Exploration of Warped Higgsless Models

    CERN Document Server

    Hewett, J L; Rizzo, T G

    2004-01-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.

  7. Validation of Compton Scattering Monte Carlo Simulation Models

    CERN Document Server

    Weidenspointner, Georg; Hauf, Steffen; Hoff, Gabriela; Kuster, Markus; Pia, Maria Grazia; Saracco, Paolo

    2014-01-01

    Several models for the Monte Carlo simulation of Compton scattering on electrons are quantitatively evaluated with respect to a large collection of experimental data retrieved from the literature. Some of these models are currently implemented in general purpose Monte Carlo systems; some have been implemented and evaluated for possible use in Monte Carlo particle transport for the first time in this study. Here we present first and preliminary results concerning total and differential Compton scattering cross sections.

  8. Monte Carlo Simulation of River Meander Modelling

    Science.gov (United States)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The approach combines the quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.

  9. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated with its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  10. Monte Carlo models of dust coagulation

    CERN Document Server

    Zsom, Andras

    2010-01-01

    The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier, which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth have two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...

  11. Reporting Monte Carlo Studies in Structural Equation Modeling

    NARCIS (Netherlands)

    Boomsma, Anne

    2013-01-01

    In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.

  12. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    Science.gov (United States)

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  13. Event-chain Monte Carlo for classical continuous spin models

    Science.gov (United States)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.
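
    For reference, the local Monte Carlo algorithm that the event-chain method is benchmarked against can be sketched for the two-dimensional XY model as below; lattice size, temperature, and sweep count are arbitrary illustrative choices (this is the baseline, not the event-chain algorithm itself).

        import numpy as np

        rng = np.random.default_rng(3)
        L, beta, sweeps = 32, 1.0, 200
        theta = rng.uniform(0, 2 * np.pi, size=(L, L))  # XY spin angles

        def local_energy(th, i, j):
            # coupling of site (i, j) to its four neighbours, periodic boundaries
            nb = (th[(i + 1) % L, j], th[(i - 1) % L, j],
                  th[i, (j + 1) % L], th[i, (j - 1) % L])
            return -sum(np.cos(th[i, j] - t) for t in nb)

        for _ in range(sweeps * L * L):
            i, j = rng.integers(L), rng.integers(L)
            old, e_old = theta[i, j], local_energy(theta, i, j)
            theta[i, j] = rng.uniform(0, 2 * np.pi)      # propose a new angle
            d_e = local_energy(theta, i, j) - e_old
            if rng.uniform() > np.exp(-beta * d_e):      # Metropolis rule
                theta[i, j] = old                        # reject the move

        m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
        print("magnetisation per spin:", m)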

  14. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  15. Efforts and models of education for parents

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal

    2010-01-01

    The article reviews models of parental education that are primarily used in Denmark, and situates these models within broader perspectives on the education system and the current discourse on holding parents responsible. Publication date: March 2010...

  16. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John

    2017-01-01

    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  17. Monte Carlo studies of model Langmuir monolayers.

    Science.gov (United States)

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma_hh and sigma_tt, respectively. The tails consist of n_t ≈ 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma_hh = sigma_tt, we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L_2'/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T_c with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T_c due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard

  18. Gas discharges modeling by Monte Carlo technique

    Directory of Open Access Journals (Sweden)

    Savić Marija

    2010-01-01

    The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas-phase ionizations by fast neutrals. In this paper we tried to build a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sourc. Sci. Technol. 8 (1999) R1].

  19. Modeling neutron guides using Monte Carlo simulations

    CERN Document Server

    Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R

    2002-01-01

    Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guide geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increase while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
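
    A toy version of such a ray-tracing estimate for a straight guide is sketched below: neutrons are launched with a random divergence, rays steeper than the mirror critical angle are lost, and each wall bounce costs a fixed reflectivity. Guide dimensions, critical angle, and reflectivity are assumed values, and the entry-position offset is ignored for brevity.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000
        length, width = 30.0, 0.05   # guide length and width in metres (assumed)
        theta_c = 0.002              # mirror critical angle in rad (assumed)
        refl = 0.99                  # reflectivity per bounce (assumed)

        th = rng.uniform(-0.004, 0.004, n)               # entry divergence
        ok = np.abs(th) < theta_c                        # steeper rays are absorbed
        bounces = np.floor(np.abs(th) * length / width)  # small-angle bounce count
        weight = np.where(ok, refl ** bounces, 0.0)
        print("transmission:", weight.mean())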

  20. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    2014-01-01

    Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate...

  1. Quasi-Monte Carlo methods for the Heston model

    OpenAIRE

    Jan Baldeaux; Dale Roberts

    2012-01-01

    In this paper, we discuss the application of quasi-Monte Carlo methods to the Heston model. We base our algorithms on the Broadie-Kaya algorithm, an exact simulation scheme for the Heston model. As the joint transition densities are not available in closed-form, the Linear Transformation method due to Imai and Tan, a popular and widely applicable method to improve the effectiveness of quasi-Monte Carlo methods, cannot be employed in the context of path-dependent options when the underlying pr...
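
    The Broadie-Kaya scheme itself is involved, but the payoff of quasi-Monte Carlo points over pseudo-random ones can be sketched on a simpler one-dimensional pricing problem (a Black-Scholes call rather than the Heston model); strike, volatility, and rate are assumed toy values.

        import numpy as np
        from scipy.special import ndtri   # inverse standard normal CDF
        from scipy.stats import qmc

        s0, k, r, sig, T = 100.0, 100.0, 0.01, 0.2, 1.0
        n = 2**14                         # power of two suits Sobol points

        def disc_payoff(u):
            z = ndtri(u)                  # map uniforms to standard normals
            st = s0 * np.exp((r - 0.5 * sig**2) * T + sig * np.sqrt(T) * z)
            return np.exp(-r * T) * np.maximum(st - k, 0.0)

        rng = np.random.default_rng(5)
        plain = disc_payoff(rng.uniform(size=n)).mean()
        sobol = qmc.Sobol(d=1, scramble=True, seed=5).random(n).ravel()
        quasi = disc_payoff(sobol).mean()
        print(f"pseudo-random {plain:.4f}  quasi-random {quasi:.4f}")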

  2. Modelling hadronic interactions in cosmic ray Monte Carlo generators

    Directory of Open Access Journals (Sweden)

    Pierog Tanguy

    2015-01-01

    Currently the uncertainty in the prediction of shower observables for different primary particles and energies is dominated by differences between hadronic interaction models. The LHC data on minimum bias measurements can be used to test Monte Carlo generators and these new constraints will help to reduce the uncertainties in air shower predictions. In this article, after a short introduction on air showers and Monte Carlo generators, we will show the results of the comparison between the updated version of high energy hadronic interaction models EPOS LHC and QGSJETII-04 with LHC data. Results for air shower simulations and their consequences on comparisons with air shower data will be discussed.

  3. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf

    2010-01-01

    Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applications.

  4. Electrophysiological correlates of listening effort: neurodynamical modeling and measurement.

    Science.gov (United States)

    Strauss, Daniel J; Corona-Strauss, Farah I; Trenado, Carlos; Bernarding, Corinna; Reith, Wolfgang; Latzel, Matthias; Froehlich, Matthias

    2010-06-01

    Increased listening effort represents a major problem in humans with hearing impairment. Neurodiagnostic methods for an objective listening effort estimation might support hearing instrument fitting procedures. However, the cognitive neurodynamics of listening effort is far from being understood and its neural correlates have not been identified yet. In this paper we analyze the cognitive neurodynamics of listening effort by using methods of forward neurophysical modeling and time-scale electroencephalographic neurodiagnostics. In particular, we present a forward neurophysical model for auditory late responses (ALRs) as large-scale listening effort correlates. Here endogenously driven top-down projections related to listening effort are mapped to corticothalamic feedback pathways which were analyzed for the selective attention neurodynamics before. We show that this model represents well the time-scale phase stability analysis of experimental electroencephalographic data from auditory discrimination paradigms. It is concluded that the proposed neurophysical and neuropsychological framework is appropriate for the analysis of listening effort and might help to develop objective electroencephalographic methods for its estimation in future.

  5. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  6. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  7. City Logistics Modeling Efforts: Trends and Gaps - A Review

    NARCIS (Netherlands)

    Anand, N.R.; Quak, H.J.; Van Duin, J.H.R.; Tavasszy, L.A.

    2012-01-01

    In this paper, we present a review of city logistics modeling efforts reported in the literature for urban freight analysis. The review framework takes into account the diversity and complexity found in the present-day city logistics practice. Next, it covers the different aspects in the modeling se

  8. A generalized hard-sphere model for Monte Carlo simulation

    Science.gov (United States)

    Hassan, H. A.; Hash, David B.

    1993-01-01

    A new molecular model, called the generalized hard-sphere, or GHS model, is introduced. This model contains, as a special case, the variable hard-sphere model of Bird (1981) and is capable of reproducing all of the analytic viscosity coefficients available in the literature that are derived for a variety of interaction potentials incorporating attraction and repulsion. In addition, a new procedure for determining interaction potentials in a gas mixture is outlined. Expressions needed for implementing the new model in the direct simulation Monte Carlo methods are derived. This development makes it possible to employ interaction models that have the same level of complexity as used in Navier-Stokes calculations.

  9. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out at DTU in Task 4, Physical modelling - validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, entitled Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...

  10. JEWEL - a Monte Carlo Model for Jet Quenching

    CERN Document Server

    Zapp, Korinna; Wiedemann, Urs Achim

    2009-01-01

    The Monte Carlo model JEWEL 1.0 (Jet Evolution With Energy Loss) simulates parton shower evolution in the presence of a dense QCD medium. In its current form medium interactions are modelled as elastic scattering based on perturbative matrix elements and a simple prescription for medium-induced gluon radiation. The parton shower is interfaced with a hadronisation model. In the absence of medium effects JEWEL is shown to reproduce jet measurements at LEP. The collisional energy loss is consistent with analytic calculations, but with JEWEL we can go a step further and characterise also jet-induced modifications of the medium. Elastic and inelastic medium interactions are shown to lead to distinctive modifications of the jet fragmentation pattern, which should allow one to distinguish experimentally between collisional and radiative energy loss mechanisms. In these proceedings the main JEWEL results are summarised and a Monte Carlo algorithm is outlined that allows one to include the Landau-Pomerantschuk-Migdal effect i...

  11. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements appears to be a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, which enable the user to drastically reduce the variance of the calculation.
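
    Two of the devices listed, expected-value (semianalytic) scoring and Russian roulette, can be illustrated with a toy photon walk: at each scattering the expected contribution to the detector is tallied analytically, and low-weight photons are killed or boosted without bias. The albedo, detection and escape probabilities below are invented for the sketch, not taken from the ISAC-CNR code.

        import numpy as np

        rng = np.random.default_rng(6)
        n_photons = 20_000
        albedo = 0.9          # single-scattering albedo (assumed)
        p_det = 0.02          # analytic chance of reaching the detector (toy)
        p_escape = 0.02       # chance of leaving the medium per step (toy)
        w_min, kill_p = 1e-3, 0.5

        signal = 0.0
        for _ in range(n_photons):
            w = 1.0
            while True:
                w *= albedo
                signal += w * p_det          # semianalytic expected-value score
                if w < w_min:                # Russian roulette on faint photons
                    if rng.uniform() < kill_p:
                        break                # photon terminated
                    w /= (1.0 - kill_p)      # survivor carries boosted weight
                if rng.uniform() < p_escape:
                    break                    # photon leaves the medium
        print("mean detected weight per photon:", signal / n_photons)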

  12. Monte Carlo modelling of positron transport in real world applications

    Science.gov (United States)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  13. A Monte Carlo Model of Light Propagation in Nontransparent Tissue

    Institute of Scientific and Technical Information of China (English)

    姚建铨; 朱水泉; 胡海峰; 王瑞康

    2004-01-01

    To sharpen the imaging of structures, it is vital to develop a convenient and efficient quantitative algorithm for optical coherence tomography (OCT) sampling. In this paper a new Monte Carlo model is set up and the way light propagates in bio-tissue is analyzed by means of mathematical and physical equations. We study how the intensities of Class 1 and Class 2 light at different wavelengths change with permeation depth, how the Class 1 light intensity (signal light intensity) changes with probing depth, and how the angularly resolved diffuse reflectance and diffuse transmittance change with the exit angle. The results show that the Monte Carlo simulation results are consistent with the theoretical data.

  14. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials, nuclear sources, etc., are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.

  15. Monte Carlo simulation of classical spin models with chaotic billiards.

    Science.gov (United States)

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.

  16. A stochastic model updating strategy-based improved response surface model and advanced Monte Carlo simulation

    Science.gov (United States)

    Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun

    2017-01-01

    To improve the accuracy and efficiency of computation models for complex structures, the stochastic model updating (SMU) strategy was proposed by combining the improved response surface model (IRSM) and the advanced Monte Carlo (MC) method based on experimental static tests, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with the emphasis on the moving least-squares method, and the advanced MC simulation method was studied based on the Latin hypercube sampling method as well. Then the SMU procedure was presented with an experimental static test for a complex structure. The SMUs of a simply-supported beam and an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMU of complex structural systems; (2) the IRSM is demonstrated to be an effective model, as its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; (3) the advanced MC method markedly decreases the number of samples drawn from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
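
    The advanced MC sampling mentioned here rests on Latin hypercube sampling; a minimal self-contained implementation is sketched below (one sample per stratum in each dimension, with the strata randomly permuted). The sample count, dimension, and parameter range are placeholders.

        import numpy as np

        def latin_hypercube(n, d, rng):
            # one point per stratum in each dimension, strata order randomised
            u = rng.uniform(size=(n, d))
            perms = np.array([rng.permutation(n) for _ in range(d)]).T
            return (perms + u) / n

        rng = np.random.default_rng(7)
        pts = latin_hypercube(100, 3, rng)   # 100 samples of 3 uncertain inputs
        # scale unit-cube samples to a parameter range, e.g. a modulus in GPa
        e_mod = 180.0 + 40.0 * pts[:, 0]     # hypothetical range [180, 220]
        print(pts.min(), pts.max(), e_mod.mean())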

  17. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species, i.e., so-called mixed fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches—and thus effort—between fishing fleets, while ensuring that the quotas

  18. Dynamical Monte Carlo method for stochastic epidemic models

    CERN Document Server

    Aiello, O E

    2002-01-01

    A new approach to dynamical Monte Carlo methods is introduced to simulate Markovian processes. We apply this approach to formulate and study a generalized SIRS epidemic model. The results are in excellent agreement with the fourth-order Runge-Kutta method in the region of deterministic solution. When local stochastic interactions are introduced, the Runge-Kutta method is not applicable, and we solve and check the model self-consistently with a stochastic version of the Euler method. The results are also analyzed under the herd-immunity concept.
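
    A dynamical Monte Carlo treatment of an SIRS-type model in the well-mixed limit can be sketched with Gillespie-style event sampling, as below; the rates and population are illustrative assumptions, and the paper's local stochastic interactions are not included.

        import numpy as np

        rng = np.random.default_rng(8)
        b, g, xi = 0.3, 0.1, 0.05     # infection, recovery, immunity-loss rates
        s, i, r = 990, 10, 0          # assumed initial populations
        n = s + i + r
        t, t_end = 0.0, 200.0
        while t < t_end and i > 0:
            rates = np.array([b * s * i / n,   # S -> I
                              g * i,           # I -> R
                              xi * r])         # R -> S
            total = rates.sum()
            t += rng.exponential(1.0 / total)        # waiting time to next event
            event = rng.choice(3, p=rates / total)   # which event fires
            if event == 0:
                s, i = s - 1, i + 1
            elif event == 1:
                i, r = i - 1, r + 1
            else:
                r, s = r - 1, s + 1
        print(f"t = {t:.1f}  S = {s}  I = {i}  R = {r}")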

  19. Monte Carlo Shell Model for ab initio nuclear structure

    Directory of Open Access Journals (Sweden)

    Abe T.

    2014-03-01

    We report on our recent application of the Monte Carlo Shell Model to no-core calculations. At the initial stage of the application, we have performed benchmark calculations in the p-shell region. Results are compared with those of the Full Configuration Interaction and No-Core Full Configuration methods. These are found to be consistent with each other within quoted uncertainties when they could be quantified. The preliminary results at N_shell = 5 reveal the onset of a systematic convergence pattern.

  20. Novel Extrapolation Method in the Monte Carlo Shell Model

    CERN Document Server

    Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.

  1. Monte Carlo Simulation of Kinesin Movement with a Lattice Model

    Institute of Scientific and Technical Information of China (English)

    WANG Hong; DOU Shuo-Xing; WANG Peng-Ye

    2005-01-01

    Kinesin is a processive double-headed molecular motor that moves along a microtubule by taking about 8 nm steps. It generally hydrolyzes one ATP molecule for each forward step. The processive movement of the kinesin molecular motors is numerically simulated with a lattice model. The motors are considered as Brownian particles and the ATPase processes of both heads are taken into account. The Monte Carlo simulation results agree well with recent experimental observations, especially on the relation of velocity versus ATP and ADP concentrations.
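
    In the spirit of the lattice description, a stripped-down sketch: each 8 nm step is taken after an exponentially distributed ATP-turnover waiting time, with a fixed forward-step bias. The turnover rate and bias are assumed numbers; the two-head ATPase detail of the paper is not modelled.

        import numpy as np

        rng = np.random.default_rng(9)
        step = 8.0          # nm per step (from the record)
        k_cat = 100.0       # assumed ATP turnover rate, 1/s
        p_fwd = 0.98        # assumed probability of a forward step

        n_motors, n_steps = 1000, 500
        # waiting times between steps are exponential in this toy picture
        times = rng.exponential(1.0 / k_cat, size=(n_motors, n_steps)).sum(axis=1)
        moves = np.where(rng.uniform(size=(n_motors, n_steps)) < p_fwd, step, -step)
        velocity = moves.sum(axis=1) / times
        print("mean velocity (nm/s):", velocity.mean())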

  2. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    Science.gov (United States)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the usage of daylight as a potential light source for PDT where effective treatment depths of about 2 mm can be achieved.

  3. Gauge Potts model with generalized action: A Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fanchiotti, H.; Canal, C.A.G.; Sciutto, S.J.

    1985-08-15

    Results of a Monte Carlo calculation on the q-state gauge Potts model in d dimensions with a generalized action involving planar 1 x 1, plaquette, and 2 x 1, fenetre, loop interactions are reported. For d = 3 and q = 2, first- and second-order phase transitions are detected. The phase diagram for q = 3 presents only first-order phase transitions. For d = 2, a comparison with analytical results is made. Here also, the behavior of the numerical simulation in the vicinity of a second-order transition is analyzed.

  4. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays

    2016-03-01

    Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but additionally they provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines off-line tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well-suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.

  5. Monte Carlo model for electron degradation in methane

    CERN Document Server

    Bhardwaj, Anil

    2015-01-01

    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra, obtained from the Monte Carlo simulations, are represented analytically, thus generating the Analytical Yield Spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. Efficiency calculations showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...

  6. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  7. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of a significantly lower user effort and therefore an improved work efficiency compared to the prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages. The four steps consist in the acquisition in the laboratory of measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.
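
    Steps three and four can be pictured as a small linear-algebra problem: tentative simulations give the sensitivity of each measured quantity to each model parameter, and solving the resulting system yields the parameter update. The numbers below are entirely hypothetical sensitivities and efficiencies, just to show the shape of the computation.

        import numpy as np

        # hypothetical sensitivities of three simulated efficiencies to two
        # detector-model parameters (e.g. crystal length, dead-layer thickness)
        A = np.array([[-0.010, -0.0005],
                      [-0.008, -0.0020],
                      [-0.005, -0.0060]])
        measured = np.array([0.112, 0.083, 0.045])   # reference measurements
        simulated = np.array([0.118, 0.089, 0.052])  # current model output

        # least-squares parameter update; re-simulate and repeat (steps 3-4)
        delta, *_ = np.linalg.lstsq(A, measured - simulated, rcond=None)
        print("parameter updates:", delta)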

  8. Hierarchical Acceleration of Multilevel Monte Carlo Methods for Computationally Expensive Simulations in Reservoir Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Webster, C.

    2014-12-01

    The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of the subsurface uncertainties on predictions of oil and gas production. As the subsurface properties are typically heterogeneous, giving rise to a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed, as a variance reduction technique, to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method to further reduce the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which helps improve efficiency by, e.g., reducing the number of iterations in linear system solving or the number of needed time steps. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC with a significantly reduced cost.
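
    A two-level version of the MLMC identity E[P_fine] = E[P_coarse] + E[P_fine - P_coarse] is sketched below for a scalar SDE with Euler time stepping, the coarse and fine paths sharing Brownian increments; a reservoir simulator would take the place of this toy model, and all parameters are assumed.

        import numpy as np

        rng = np.random.default_rng(10)
        mu, sig, x0, T = 0.05, 0.2, 1.0, 1.0   # assumed toy SDE parameters

        def coupled_pair(n, n_fine):
            # terminal Euler values on a fine grid and on a grid twice as
            # coarse, driven by the same Brownian increments (the coupling)
            dt = T / n_fine
            xf = np.full(n, x0)
            xc = np.full(n, x0)
            for _ in range(n_fine // 2):
                dw1 = rng.standard_normal(n) * np.sqrt(dt)
                dw2 = rng.standard_normal(n) * np.sqrt(dt)
                xf += mu * xf * dt + sig * xf * dw1
                xf += mu * xf * dt + sig * xf * dw2
                xc += mu * xc * (2 * dt) + sig * xc * (dw1 + dw2)
            return xf, xc

        p0, _ = coupled_pair(200_000, 2)   # many cheap coarse-level samples
        pf, pc = coupled_pair(5_000, 4)    # few samples of the correction term
        est = p0.mean() + (pf - pc).mean()
        print("two-level estimate:", est, " exact mean:", x0 * np.exp(mu * T))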

  9. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  10. Monte Carlo modelling of Schottky diode for rectenna simulation

    Science.gov (United States)

    Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.

    2017-09-01

    Before designing a detector circuit, the extraction of the electrical parameters of the Schottky diode is a critical step. This article is based on a Monte Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image force effect and tunneling. The weights of the tunneling and thermionic currents are quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic Balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current in the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted in the forward regime of the I-V curve are sufficient. However, the non-linear series resistance extracted from a small-signal analysis (SSA) strongly influences the conversion efficiency at low input powers.

  11. Monte Carlo autofluorescence modeling of cervical intraepithelial neoplasm progression

    Science.gov (United States)

    Chu, S. C.; Chiang, H. K.; Wu, C. E.; He, S. Y.; Wang, D. Y.

    2006-02-01

    A Monte Carlo fluorescence model has been developed to estimate the autofluorescence spectra associated with the progression of the Exo-Cervical Intraepithelial Neoplasm (CIN). We used a double integrating spheres system and a tunable light source system, 380 to 600 nm, to measure the reflection and transmission spectra of a 50 μm thick tissue, and used the Inverse Adding-Doubling (IAD) method to estimate the absorption (μa) and scattering (μs) coefficients. Human cervical tissue samples were sliced vertically (longitudinally) by the frozen section method. The results show that the absorption and scattering coefficients of cervical neoplasia are 2~3 times higher than those of normal tissues. We applied the Monte Carlo method to estimate the photon distribution and fluorescence emission in the tissue. By combining the intrinsic fluorescence information (collagen, NADH, and FAD), the anatomical information of the epithelium, CIN, and stroma layers, and the fluorescence escape function, the autofluorescence spectra of CIN at different development stages were obtained. We have observed that the progression of CIN results in a gradual decrease of the collagen peak intensity in the autofluorescence. In addition, the CIN layer formed a barrier that blocks the autofluorescence escaping from the stroma layer due to the strong extinction (scattering and absorption) of the CIN layer. To our knowledge, this is the first study measuring the CIN optical properties in the visible range; it also successfully demonstrates the fluorescence model for estimating autofluorescence spectra of cervical tissue associated with the progression of CIN; this model is very important in assisting CIN diagnosis and treatment in clinical medicine.

  12. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
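
    The parameter-sampling step can be sketched as below: each household draws its end-use parameters from assumed probability distributions and the draws are aggregated into a demand distribution. The distributions and units here are invented placeholders, not the calibrated San Ramon values.

        import numpy as np

        rng = np.random.default_rng(12)
        n_homes = 10_000
        # hypothetical end-use parameter distributions (litres per day)
        shower = rng.lognormal(mean=4.0, sigma=0.4, size=n_homes)
        toilet = rng.normal(35.0, 8.0, n_homes) * rng.poisson(4.0, n_homes) / 4.0
        washer = rng.normal(110.0, 25.0, n_homes) * rng.uniform(0.2, 0.6, n_homes)
        leaks = rng.exponential(20.0, n_homes)

        total = shower + toilet + washer + leaks
        print("median use:", np.median(total),
              "90th percentile:", np.percentile(total, 90))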

  13. Recent efforts to model human diseases in vivo in Drosophila.

    Science.gov (United States)

    Pfleger, Cathie M; Reiter, Lawrence T

    2008-01-01

    Upon completion of sequencing the Drosophila genome, it was estimated that 61% of human disease-associated genes had sequence homologs in flies, and in some diseases such as cancer, the number was as high as 68%. We now know that as many as 75% of the genes associated with genetic disease have counterparts in Drosophila. Using better tools for mutation detection, association studies and whole genome analysis, the number of human genes associated with genetic disease is steadily increasing. These detection efforts are outpacing the ability to assign function and understand the underlying cause of the disease at the molecular level. Drosophila models can therefore advance human disease research in a number of ways: establishing the normal role of these gene products during development, elucidating the mechanism underlying disease pathology, and even identifying candidate therapeutic agents for the treatment of human disease. At the 49th Annual Drosophila Research Conference in San Diego this year, a number of labs presented their exciting findings on Drosophila models of human disease in both platform presentations and poster sessions. Here we can only briefly review some of these developments, and we apologize that we do not have the time or space to review all of the findings presented which use Drosophila to understand human disease etiology.

  14. Introduction to the Monte Carlo project and the approach to the validation of probabilistic models of dietary exposure to selected food chemicals

    NARCIS (Netherlands)

    Gibney, M.J.; Voet, van der H.

    2003-01-01

    The Monte Carlo project was established to allow an international collaborative effort to define conceptual models for food chemical and nutrient exposure, to define and validate the software code to govern these models, to provide new or reconstructed databases for validation studies, and to use th

  15. Monte Carlo grain growth modeling with local temperature gradients

    Science.gov (United States)

    Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.

    2017-09-01

    This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to account for the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally handle time-varying temperature fields as well. We verify the model and compare the predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as in the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involve a time-dependent local temperature gradient.
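
    A two-dimensional toy of the idea: a Potts-model grain structure evolves by Metropolis-style grain-id swaps, with sites selected according to a probability that increases with local temperature, so hotter regions coarsen faster. The Arrhenius-like selection function and every parameter are assumptions for illustration, not the paper's calibrated SSP.

        import numpy as np

        rng = np.random.default_rng(11)
        L, q, sweeps = 64, 32, 30
        spins = rng.integers(q, size=(L, L))       # grain identities
        # assumed local temperature field: hotter towards the right edge
        temp = np.tile(np.linspace(0.2, 1.0, L), (L, 1))

        def unlike(i, j, s):
            # boundary energy: unlike neighbours cost 1 (periodic boundaries)
            nb = (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                  spins[i, (j + 1) % L], spins[i, (j - 1) % L])
            return sum(s != t for t in nb)

        # site selection probability: assumed Arrhenius-like form in T(x)
        ssp = np.exp(-1.0 / temp).ravel()
        ssp /= ssp.sum()
        sites = rng.choice(L * L, size=sweeps * L * L, p=ssp)
        for idx in sites:
            i, j = divmod(idx, L)
            nbrs = ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)
            ni, nj = nbrs[rng.integers(4)]
            new = spins[ni, nj]                    # adopt a neighbour's grain id
            d_e = unlike(i, j, new) - unlike(i, j, spins[i, j])
            if d_e <= 0 or rng.uniform() < np.exp(-d_e / temp[i, j]):
                spins[i, j] = new
        print("grains remaining:", np.unique(spins).size)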

  16. Modelling a gamma irradiation process using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    In gamma irradiation services, the evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, applying mathematical models may be a solution. These models make it possible to predict the dose delivered to a specific product, irradiated in a specific position during a certain period of time, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)

  17. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.

  18. Monte Carlo modeling of recrystallization processes in α-uranium

    Science.gov (United States)

    Steiner, M. A.; McCabe, R. J.; Garlea, E.; Agnew, S. R.

    2017-08-01

    Starting with electron backscattered diffraction (EBSD) data obtained from a warm clock-rolled α-uranium deformation microstructure, a Potts Monte Carlo model was used to simulate static site-saturated recrystallization and test which recrystallization nucleation conditions within the microstructure are best validated by experimental observations. The simulations support prior observations that recrystallized nuclei within α-uranium form preferentially on non-twin high-angle grain boundary sites at 450 °C. They also demonstrate, in a new finding, that nucleation along these boundaries occurs only at a highly constrained subset of sites possessing the largest degrees of local deformation. Deformation in the EBSD data can be identified by the Kernel Average Misorientation (KAM), which may be considered as a proxy for the local geometrically necessary dislocation (GND) density.

  19. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating if they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of continuous potential taking into account thermal motion of the lattice atoms and using Moliere screening function. The energy of the particle transverse motion determines whether or n...

  20. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
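
    The two-stage idea above can be sketched in a few lines, assuming a toy one-dimensional posterior in place of the flow simulator: the cheap proxy-plus-error-model density screens each proposal, and the expensive exact density is evaluated only for proposals that survive stage one. Both densities below are invented stand-ins, not the Imperial College Fault model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_post_exact(x):      # stand-in for the expensive "exact" flow model
        return -0.5 * (x - 2.0) ** 2

    def log_post_proxy(x):      # cheap proxy plus a crude trained correction term
        return -0.5 * (x - 2.3) ** 2 + 0.3 * (x - 2.3)

    x, chain = 0.0, []
    for _ in range(20_000):
        y = x + 0.5 * rng.standard_normal()
        # Stage 1: screen with the proxy; cheap rejection avoids the exact solver.
        if rng.random() >= min(1.0, np.exp(log_post_proxy(y) - log_post_proxy(x))):
            chain.append(x)
            continue
        # Stage 2: delayed-acceptance correction, so the chain targets the exact posterior.
        a2 = np.exp((log_post_exact(y) - log_post_exact(x))
                    - (log_post_proxy(y) - log_post_proxy(x)))
        if rng.random() < min(1.0, a2):
            x = y
        chain.append(x)

    print("posterior mean ~", np.mean(chain[5_000:]))   # should approach 2.0
    ```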

  1. Performance Analysis of Software Effort Estimation Models Using Neural Networks

    Directory of Open Access Journals (Sweden)

    P.Latha

    2013-08-01

    Software effort estimation involves estimating the effort required to develop software. Cost overruns and schedule overruns occur in software development due to wrong estimates made during the initial stages, so proper estimation is essential for the successful completion of a development project. Many estimation techniques are available, among which neural-network-based techniques play a prominent role. The back-propagation network is the most widely used architecture, while the Elman neural network, a recurrent network, can be used on a par with it. For a good predictor system, the difference between estimated effort and actual effort should be as low as possible. Data from historical NASA projects are used for training and testing. The experimental results confirm that the back-propagation algorithm is more efficient than the Elman neural network.
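
    As a concrete illustration of the back-propagation approach (not the record's own experiment), the sketch below trains a tiny two-layer network on synthetic effort data; the COCOMO-like generating formula stands in for the NASA dataset, and the network size and learning rate are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic projects: size in KLOC and a complexity rating drive effort.
    X = rng.uniform([10.0, 1.0], [100.0, 5.0], size=(60, 2))
    y = 2.4 * X[:, :1] ** 1.05 * (0.8 + 0.1 * X[:, 1:]) / 100.0  # COCOMO-like effort

    X = (X - X.mean(0)) / X.std(0)               # normalise the inputs
    W1, b1 = 0.3 * rng.standard_normal((2, 8)), np.zeros(8)
    W2, b2 = 0.3 * rng.standard_normal((8, 1)), np.zeros(1)

    lr = 0.01
    for _ in range(5_000):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        pred = h @ W2 + b2
        err = pred - y                           # mean-squared-error gradients
        gW2, gb2 = h.T @ err / len(X), err.mean(0)
        dh = (err @ W2.T) * (1.0 - h ** 2)       # back-propagate through tanh
        gW1, gb1 = X.T @ dh / len(X), dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("MMRE:", float(np.mean(np.abs(pred - y) / y)))
    ```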

  2. Monte-Carlo Simulation of the Ising Model

    Institute of Scientific and Technical Information of China (English)

    吴国军; 胡经国

    2000-01-01

    On a planar square lattice, taking the Ising model as the framework, the phase diagram of a ferromagnetic system under helical, semi-free, and free boundary conditions was simulated on an IBM PC using the Monte Carlo method, and the results were compared with those obtained under periodic boundary conditions.
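
    The boundary-condition comparison is easy to reproduce in miniature; a minimal Metropolis sketch with free versus periodic boundaries follows (helical and semi-free boundaries only change the neighbour rule; the temperature and sizes are illustrative).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    L, J, T = 16, 1.0, 2.0          # below the bulk critical temperature ~2.269

    def neighbours(i, j, periodic):
        cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        if periodic:
            return [(a % L, b % L) for a, b in cand]
        return [(a, b) for a, b in cand if 0 <= a < L and 0 <= b < L]  # free edges

    def sweep(s, periodic):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            dE = 2.0 * J * s[i, j] * sum(s[a, b] for a, b in neighbours(i, j, periodic))
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]

    for periodic in (True, False):
        s = rng.choice([-1, 1], size=(L, L))
        for _ in range(300):                       # equilibration sweeps
            sweep(s, periodic)
        m = []
        for _ in range(200):                       # measurement sweeps
            sweep(s, periodic)
            m.append(abs(s.mean()))
        print(f"periodic={periodic}: <|m|> = {np.mean(m):.3f}")
    ```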

  3. A Monte Carlo-based model of gold nanoparticle radiosensitization

    Science.gov (United States)

    Lechtman, Eli Solomon

    The goal of radiotherapy is to operate within the therapeutic window - delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNP) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization. A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10^3-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer ranged photoelectric products travelled orders of magnitude farther. A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm Au

  4. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    Science.gov (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  5. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    Science.gov (United States)

    Reims, N.; Sukowski, F.; Uhlmann, N.

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort, a simulation model can be developed which matches the real detector's signal transfer. The second model allows a more detailed insight into the system. It is based on the well established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that a relatively small number of system manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.

  6. Monte Carlo model for electron degradation in xenon gas

    CERN Document Server

    Mukundan, Vrinda

    2016-01-01

    We have developed a Monte Carlo model for studying the local degradation of electrons in the energy range 9-10000 eV in xenon gas. Analytically fitted forms of electron impact cross sections for elastic and various inelastic processes are fed as input data to the model. A two dimensional numerical yield spectrum, which gives information on the number of energy loss events occurring in a particular energy interval, is obtained as output of the model. The numerical yield spectrum is fitted analytically, thus obtaining an analytical yield spectrum. The analytical yield spectrum can be used to calculate electron fluxes, which can be further employed for the calculation of volume production rates. Using the yield spectrum, the mean energy per ion pair and the efficiencies of inelastic processes are calculated. The value of the mean energy per ion pair for Xe is 22 eV at 10 keV. Ionization dominates for incident energies greater than 50 eV and is found to have an efficiency of 65% at 10 keV. The efficiency for the excitation process is 30%...

  7. Hopping electron model with geometrical frustration: kinetic Monte Carlo simulations

    Science.gov (United States)

    Terao, Takamichi

    2016-09-01

    The hopping electron model on the Kagome lattice was investigated by kinetic Monte Carlo simulations, and the non-equilibrium nature of the system was studied. We have numerically confirmed that aging phenomena are present in the autocorrelation function C(t, t_W) of the electron system on the Kagome lattice, which is a geometrically frustrated lattice without any disorder. The waiting-time distributions p(τ) of hopping electrons in the system on the Kagome lattice have also been studied. It is confirmed that the profile of p(τ) obtained at lower temperatures obeys a power law, which is a characteristic feature of a continuous-time random walk of electrons. These features were also compared with the characteristics of the Coulomb glass model, used as a model of disordered thin films and doped semiconductors. This work represents an advance in the understanding of the dynamics of geometrically frustrated systems and will serve as a basis for further studies of these physical systems.
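
    The waiting-time statistics mentioned above come directly out of a rejection-free kinetic Monte Carlo loop. The sketch below uses a single particle hopping on a disordered one-dimensional ring rather than the Kagome lattice, purely to show how exponential waiting times are drawn from the total escape rate; all barriers and rates are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_sites, beta = 100, 2.0
    barriers = rng.uniform(0.5, 1.5, size=(n_sites, 2))   # (left, right) hop barriers

    site, waits = 0, []
    for _ in range(10_000):
        rates = np.exp(-beta * barriers[site])            # Arrhenius hop rates
        total = rates.sum()
        waits.append(rng.exponential(1.0 / total))        # waiting time before the hop
        if rng.random() < rates[0] / total:               # pick the hop direction
            site = (site - 1) % n_sites
        else:
            site = (site + 1) % n_sites

    print("mean waiting time:", np.mean(waits))
    ```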

  8. Monte Carlo modeling and optimization of buffer gas positron traps

    Science.gov (United States)

    Marjanović, Srđan; Petrović, Zoran Lj

    2017-02-01

    Buffer gas positron traps have been used for over two decades as the prime source of slow positrons enabling a wide range of experiments. While their performance has been well understood through empirical studies, no theoretical attempt has been made to quantitatively describe their operation. In this paper we apply standard models as developed for physics of low temperature collision dominated plasmas, or physics of swarms to model basic performance and principles of operation of gas filled positron traps. The Monte Carlo model is equipped with the best available set of cross sections that were mostly derived experimentally by using the same type of traps that are being studied. Our model represents in realistic geometry and fields the development of the positron ensemble from the initial beam provided by the solid neon moderator through voltage drops between the stages of the trap and through different pressures of the buffer gas. The first two stages employ excitation of N2 with acceleration of the order of 10 eV so that the trap operates under conditions when excitation of the nitrogen reduces the energy of the initial beam to trap the positrons without giving them a chance to become annihilated following positronium formation. The energy distribution function develops from the assumed distribution leaving the moderator, it is accelerated by the voltage drops and forms beams at several distinct energies. In final stages the low energy loss collisions (vibrational excitation of CF4 and rotational excitation of N2) control the approach of the distribution function to a Maxwellian at room temperature but multiple non-Maxwellian groups persist throughout most of the thermalization. Optimization of the efficiency of the trap may be achieved by changing the pressure and voltage drops and also by selecting to operate in a two stage mode. The model allows quantitative comparisons and test of optimization as well as development of other properties.

  9. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    Energy Technology Data Exchange (ETDEWEB)

    Procassini, R.J. [Lawrence Livermore National lab., CA (United States)

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  10. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species—i.e., so-called mixed...

  11. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    CERN Document Server

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...

  12. Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G

    2000-01-01

    An accelerator-driven subcritical cascade reactor composed of a main thermal-neutron reactor, constructed analogously to the core of the VVER-1000 reactor, and a booster reactor, constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting the radioactive waste produced (the neutron flux density in the thermal zone is PHI^{max}(r,z)=10^{14} n cm^{-2} s^{-1}, while the neutron flux in the fast zone is PHI^{max}(r,z)=2.25·10^{15} n cm^{-2} s^{-1} for k_{eff}=0.98 and a proton accelerator beam current of I=5.3 mA). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.

  13. Monte Carlo Glauber wounded nucleon model with meson cloud

    CERN Document Server

    Zakharov, B G

    2016-01-01

    We study the effect of the nucleon meson cloud on predictions of the Monte Carlo Glauber wounded nucleon model for $AA$, $pA$, and $pp$ collisions. From the analysis of the data on the charged multiplicity density in $AA$ collisions we find that the meson-baryon Fock component reduces the required fraction of binary collisions by a factor of $\sim 2$ for Au+Au collisions at $\sqrt{s}=0.2$ TeV and $\sim 1.5$ for Pb+Pb collisions at $\sqrt{s}=2.76$ TeV. For central $AA$ collisions the meson cloud can increase the multiplicity density by $\sim 16-18$%. We give predictions for the midrapidity charged multiplicity density in Pb+Pb collisions at $\sqrt{s}=5.02$ TeV for the future LHC run 2. We find that the meson cloud has a weak effect on the centrality dependence of the ellipticity $\epsilon_2$ in $AA$ collisions. For collisions of the deformed uranium nuclei at $\sqrt{s}=0.2$ TeV we find that the meson cloud may improve somewhat agreement with the data on the dependence of the elliptic flow on the charged multi...

  14. A Monte Carlo reflectance model for soil surfaces with three-dimensional structure

    Science.gov (United States)

    Cooper, K. D.; Smith, J. A.

    1985-01-01

    A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. The resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding bidirectional canopy reflectance model are also given.

  15. Suggestion Program and Model Installation Program - Duplication of Effort.

    Science.gov (United States)

    1988-04-01

    The study centers on program processes for submitting and evaluating proposals. The Suggestion Program and MIP processes are similar in that they both

  16. Monte Carlo simulations of the HP model (the "Ising model" of protein folding)

    Science.gov (United States)

    Li, Ying Wai; Wüst, Thomas; Landau, David P.

    2011-09-01

    Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
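
    For readers unfamiliar with Wang-Landau sampling, the sketch below estimates the density of states of a small periodic Ising lattice; the HP model itself would additionally need the pull and bond-rebridging moves named above, so the spin system here is only a stand-in for the sampling scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    L = 4
    s = rng.choice([-1, 1], size=(L, L))

    def energy(s):
        return -int((s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum())

    log_g, hist = {}, {}            # running log density of states, visit histogram
    E, ln_f = energy(s), 1.0
    while ln_f > 1e-3:              # shrink the modification factor until converged
        for _ in range(20_000):
            i, j = rng.integers(L), rng.integers(L)
            nb = s[(i-1) % L, j] + s[(i+1) % L, j] + s[i, (j-1) % L] + s[i, (j+1) % L]
            E_new = E + 2 * s[i, j] * nb
            dlg = log_g.get(E, 0.0) - log_g.get(E_new, 0.0)
            if dlg >= 0 or rng.random() < np.exp(dlg):    # accept with g(E)/g(E')
                s[i, j] = -s[i, j]
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        counts = np.array(list(hist.values()))
        if counts.min() > 0.8 * counts.mean():            # flatness over visited bins
            hist = {k: 0 for k in hist}                   # reset, refine ln_f
            ln_f /= 2.0

    print({k: round(v - min(log_g.values()), 2) for k, v in sorted(log_g.items())})
    ```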

  17. A polynomial model of patient-specific breathing effort during controlled mechanical ventilation.

    Science.gov (United States)

    Redmond, Daniel P; Docherty, Paul D; Yeong Shiong Chiew; Chase, J Geoffrey

    2015-08-01

    Patient breathing efforts occurring during controlled ventilation cause perturbations in pressure data, which lead to erroneous parameter estimation in conventional models of respiratory mechanics. A polynomial model of patient effort can be used to capture breath-specific effort and the underlying lung condition. An iterative multiple linear regression is used to identify the model in clinical volume-controlled data. The polynomial model has lower fitting error and more stable estimates of respiratory elastance and resistance in the presence of patient effort than the conventional single compartment model. However, the polynomial model can converge to poor parameter estimates when patient efforts occur very early in the breath, or last for a long duration. The model of patient effort can provide clinical benefits by providing accurate respiratory mechanics estimation and breath-to-breath monitoring of patient effort, which can be used by clinicians to guide treatment.
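
    A single-pass sketch of the idea, assuming a synthetic volume-controlled breath: the single-compartment equation P = E·V + R·Q + P0 is augmented with a polynomial-in-time term that absorbs the effort. The record's method iterates this regression with constraints; the breath, effort shape, and polynomial degree below are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic breath obeying P = E*V + R*Q + P0 plus an early-inspiration effort dip.
    t = np.linspace(0.0, 2.0, 400)
    Q = np.where(t < 1.0, 0.5, -0.5)                  # square-wave flow (L/s)
    V = np.cumsum(Q) * (t[1] - t[0])                  # volume (L)
    E_true, R_true, P0 = 25.0, 8.0, 5.0
    effort = -3.0 * np.exp(-((t - 0.3) / 0.15) ** 2)  # breath-specific effort (cmH2O)
    P = E_true * V + R_true * Q + P0 + effort + 0.1 * rng.standard_normal(t.size)

    # Multiple linear regression with a polynomial basis in t capturing the effort.
    deg = 4
    X = np.column_stack([V, Q, np.ones_like(t)] + [t ** k for k in range(1, deg + 1)])
    coef, *_ = np.linalg.lstsq(X, P, rcond=None)
    print(f"E ~ {coef[0]:.1f} (true 25.0), R ~ {coef[1]:.1f} (true 8.0)")
    # As the record notes, effort spanning most of the breath makes the polynomial
    # collinear with V and Q, and the estimates degrade.
    ```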

  18. Effective quantum Monte Carlo algorithm for modeling strongly correlated systems

    NARCIS (Netherlands)

    Kashurnikov, V. A.; Krasavin, A. V.

    2007-01-01

    A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description

  19. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1985-01-01

    In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used t

  20. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, Wies M.W.

    1994-01-01

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning constants have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Markov chain Monte Carlo. As an example, the method is applied to

  1. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, W.

    1998-01-01

    In order to obtain conditional maximum likelihood estimates, the conditioning constants are needed. Geyer and Thompson (1992) proposed a Markov chain Monte Carlo method that can be used to approximate these constants when they are difficult to calculate exactly. In the present paper, their method is

  2. Improved Monte Carlo model for multiple scattering calculations

    Institute of Scientific and Technical Information of China (English)

    Weiwei Cai; Lin Ma

    2012-01-01

    The coupling between the Monte Carlo (MC) method and geometrical optics to improve accuracy is investigated. The results obtained show improved agreement with previous experimental data, demonstrating that the MC method, when coupled with simple geometrical optics, can simulate multiple scattering with enhanced fidelity.

  3. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exists in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
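
    A toy version of the Monte Carlo step described above, assuming two quantities of interest with correlated systematic and independent random uncertainties (every number invented): sample the combined error, estimate its covariance, and test whether the observed comparison error lies inside the approximate 95% constant-probability contour.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    n_mc = 20_000
    sys_sd = np.array([0.8, 0.5])            # systematic standard uncertainties
    ran_sd = np.array([0.3, 0.4])            # random standard uncertainties
    rho = 0.9                                # systematic errors shared across channels
    cov_sys = np.array([[sys_sd[0] ** 2, rho * sys_sd[0] * sys_sd[1]],
                        [rho * sys_sd[0] * sys_sd[1], sys_sd[1] ** 2]])

    draws = (rng.multivariate_normal([0.0, 0.0], cov_sys, size=n_mc)
             + ran_sd * rng.standard_normal((n_mc, 2)))

    cov = np.cov(draws, rowvar=False)        # MC estimate of the error covariance
    err = np.array([1.1, 0.9])               # observed (experiment - model) error
    m2 = err @ np.linalg.solve(cov, err)     # squared Mahalanobis distance
    print("inside the 95% contour:", bool(m2 <= 5.991))   # chi-square(2), p = 0.95
    ```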

  4. Monte Carlo modeling of ultrasound probes for image guided radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Bazalova-Carter, Magdalena, E-mail: bazalova@uvic.ca [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 2Y2 (Canada); Schlosser, Jeffrey [SoniTrack Systems, Inc., Palo Alto, California 94304 (United States); Chen, Josephine [Department of Radiation Oncology, UCSF, San Francisco, California 94143 (United States); Hristov, Dimitre [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2015-10-15

    Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm³. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm² beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm² beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R² > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm³, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest 6% dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The

  5. A comparison between the effort-reward imbalance and demand control models.

    Science.gov (United States)

    Ostry, Aleck S; Kelly, Shona; Demers, Paul A; Mustard, Cameron; Hertzman, Clyde

    2003-02-27

    To compare the predictive validity of the demand/control and effort/reward imbalance models, alone and in combination with each other, for self-reported health status and the self-reported presence of any chronic disease condition. Self-reports of psychosocial work conditions were obtained in a sample of sawmill workers using the demand/control and effort/reward imbalance models. The relative predictive validity of task-level control was compared with effort/reward imbalance. As well, the predictive validity of a model developed by combining task-level control with effort/reward imbalance was determined. Logistic regression was utilized for all models. The demand/control and effort/reward imbalance models independently predicted poor self-reported health status. The effort/reward imbalance model predicted the presence of a chronic disease while the demand/control model did not. A model combining effort/reward imbalance and task-level control was a better predictor of self-reported health status and any chronic condition than either model alone. Effort/reward imbalance modeled with intrinsic effort had marginally better predictive validity than when modeled with extrinsic effort only. Future work should explore the combined effects of these two models of psychosocial stress at work on health more thoroughly.

  6. Monte Carlo simulation of magnetization switching in a Heisenberg model for small ferromagnetic particles

    OpenAIRE

    Hinzke, Denise; Nowak, Ulrich

    1999-01-01

    Using Monte Carlo methods we investigate the thermally activated magnetization switching of small ferromagnetic particles driven by an external magnetic field. For low uniaxial anisotropy one expects that the spins rotate coherently while for sufficiently large anisotropy the reversal should be due to nucleation. The latter case has been investigated extensively by Monte Carlo simulation of corresponding Ising models. In order to study the crossover from coherent rotation to nucleation we use...

  7. Colloids and Radionuclide Transport: A Field, Experimental and Modeling Effort

    Science.gov (United States)

    Zhao, P.; Zavarin, M.; Sylwester, E. E.; Allen, P. G.; Williams, R. W.; Kersting, A. B.

    2002-05-01

    Batch sorption/desorption experiments with natural inorganic colloids (clinoptilolite; colloid particle size 171 ± 25 nm) were conducted in synthetic groundwater (similar to J-13, the Yucca Mountain standard) over a pH range from 4 to 10 and with an initial plutonium concentration of 10^-9 M. The results show that Pu(IV) sorption takes place within an hour, while the rate of Pu(V) sorption onto the colloids is much slower and mineral dependent. The kinetic results from the batch sorption/desorption experiments, coupled with the redox kinetics of plutonium in solution, will be used in geochemical modeling of Pu surface complexation to colloids and reactive transport. (This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.)

  8. Efforts and Models of Education for Parents: the Danish Approach

    Directory of Open Access Journals (Sweden)

    Rosendal Jensen, Niels

    2009-12-01

    to underline that Danish welfare policy has been changing rather radically. The classic model was an understanding of welfare as social assurance and/or social distribution, based on social solidarity. The modern model treats welfare as social service and/or social investment. This means that citizens are changing role, from user and/or citizen to consumer and/or investor. In line with decisions taken by the government, the Danish state is investing in a national future shaped by global competition. The new models of welfare, “service” and “investment,” imply severe changes in hitherto familiar concepts of family life, the relationship between parents and children, etc. As an example, the investment model points to a new implementation of the relationship between social rights and the rights of freedom. The service model has demonstrated the weakness that access to qualified services in the fields of health or education is becoming more and more dependent on private purchasing power. The weakness of the investment model is that it represents a sort of “the winner takes it all,” since a political majority is enabled to set agendas in societal fields formerly protected by the tripartite power and the rights of freedom of the citizens. The outcome of the Danish development seems to be the establishment of a politically governed public service industry which, on the one hand, is capable of competing under market conditions and, on the other, can be governed by contracts. This represents a new form of close linking of politics, economy and professional work. Attempts at controlling education, pedagogy and thereby the population are not a recent invention; in European history we could easily point to several such experiments. The real news is the linking of political priorities to the exercise of public activities through economic incentives. By defining visible goals for the public servants, by introducing measurement of achievements and

  9. Nuclear Hybrid Energy Systems FY16 Modeling Efforts at ORNL

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Guler Yigitoglu, Askin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-01

    A nuclear hybrid system uses a nuclear reactor as the basic power generation unit. The power generated by the nuclear reactor is utilized by one or more power customers as either thermal power, electrical power, or both. In general, a nuclear hybrid system will couple the nuclear reactor to at least one thermal power user in addition to the power conversion system. The definition and architecture of a particular nuclear hybrid system is flexible depending on local markets needs and opportunities. For example, locations in need of potable water may be best served by coupling a desalination plant to the nuclear system. Similarly, an area near oil refineries may have a need for emission free hydrogen production. A nuclear hybrid system expands the nuclear power plant from its more familiar central power station role by diversifying its immediately and directly connected customer base. The definition, design, analysis, and optimization work currently performed with respect to the nuclear hybrid systems represents the work of three national laboratories. Idaho National Laboratory (INL) is the lead lab working with Argonne National Laboratory (ANL) and Oak Ridge National Laboratory. Each laboratory is providing modeling and simulation expertise for the integration of the hybrid system.

  10. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Science.gov (United States)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  12. Optical Monte Carlo modeling of a true portwine stain anatomy

    Science.gov (United States)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  13. Parametric links among Monte Carlo, phase-field, and sharp-interface models of interfacial motion.

    Science.gov (United States)

    Liu, Pu; Lusk, Mark T

    2002-12-01

    Parametric links are made among three mesoscale simulation paradigms: phase-field, sharp-interface, and Monte Carlo. A two-dimensional, square-lattice, spin-1/2 Ising model is considered for the Monte Carlo method, where an exact solution for the interfacial free energy is known. The Monte Carlo mobility is calibrated as a function of temperature using Glauber kinetics. A standard asymptotic analysis relates the phase-field and sharp-interface parameters, and this allows the phase-field and Monte Carlo parameters to be linked. The result is derived without bulk effects but is then applied to a set of simulations with the bulk driving force included. An error analysis identifies the domain over which the parametric relationships are accurate.

  14. A new Monte Carlo simulation model for laser transmission in smokescreen based on MATLAB

    Science.gov (United States)

    Lee, Heming; Wang, Qianqian; Shan, Bin; Li, Xiaoyang; Gong, Yong; Zhao, Jing; Peng, Zhong

    2016-11-01

    A new Monte Carlo simulation model of laser transmission in smokescreen is proposed in this paper. In the traditional Monte Carlo simulation model, the radius of the particles is set to a single value and the initial direction cosine of the photons is fixed, which can only yield approximate results. The new model is implemented in MATLAB and can simulate laser transmittance in smokescreens with different particle sizes, and its output is closer to real scenarios. In order to account for laser divergence during propagation in air, we changed the initial direction cosine of the photons relative to the traditional Monte Carlo model. The simulation results for mixed-radius particle smoke agree with the transmittance measured under the same experimental conditions, with a 5.42% error rate.
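
    In the same spirit (Python rather than the record's MATLAB, with every parameter invented), here is a compact photon random walk through a smoke slab: each photon draws its mean free path from a mixed particle-size distribution and scatters by Henyey-Greenstein until it is absorbed or leaves the slab.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n_photons, d, albedo, g = 20_000, 1.0, 0.9, 0.7
    mfp = np.array([0.2, 0.35, 0.6])          # mean free paths for three size classes
    weight = np.array([0.5, 0.3, 0.2])        # mixing fractions of the size classes

    transmitted = 0
    for _ in range(n_photons):
        lam = rng.choice(mfp, p=weight)       # per-photon optical size class
        z, mu = 0.0, 1.0                      # depth and direction cosine
        while True:
            z += mu * rng.exponential(lam)    # free flight to the next collision
            if z >= d:
                transmitted += 1
                break
            if z < 0.0 or rng.random() > albedo:      # back-escape or absorption
                break
            u = rng.random()                  # Henyey-Greenstein polar angle
            cos_t = (1 + g * g - ((1 - g * g) / (1 - g + 2 * g * u)) ** 2) / (2 * g)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
            phi = 2.0 * np.pi * rng.random()
            mu = mu * cos_t + np.sqrt(max(0.0, 1.0 - mu * mu)) * sin_t * np.cos(phi)

    print("transmittance ~", transmitted / n_photons)
    ```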

  15. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    NARCIS (Netherlands)

    Machguth, H.; Purves, R.S.; Oerlemans, J.; Hoelzle, M.; Paul, F.

    2008-01-01

    By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tun
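
    The record's uncertainty-assessment pattern can be sketched with a degree-day toy standing in for the energy-balance model (all forcing and parameter distributions invented): sample the uncertain parameters, rerun the mass-balance model, and report the spread of the 400-day cumulative balance.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    days = 400
    temp = 5.0 + 8.0 * np.sin(2.0 * np.pi * np.arange(days) / 365.0)  # daily temp (°C)
    precip = rng.gamma(1.5, 2.0, size=days)                           # daily precip (mm w.e.)

    def cumulative_mb(ddf, t_snow, temp_bias):
        t = temp + temp_bias
        melt = np.where(t > 0.0, ddf * t, 0.0)       # degree-day melt
        accum = np.where(t < t_snow, precip, 0.0)    # snowfall below the threshold
        return float((accum - melt).sum())

    samples = [cumulative_mb(ddf=rng.normal(4.0, 0.8),       # melt factor mm/(d °C)
                             t_snow=rng.normal(1.5, 0.5),    # rain/snow threshold °C
                             temp_bias=rng.normal(0.0, 1.0)) # temperature uncertainty
               for _ in range(5_000)]
    print(f"cumulative MB: {np.mean(samples):.0f} +/- {np.std(samples):.0f} mm w.e.")
    ```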

  17. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-01

    A new Reverse Monte Carlo (RMC) package "fullrmc" for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, complex-molecule enabled, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group.

  18. Z_3 Polyakov Loop Models and Inverse Monte-Carlo Methods

    CERN Document Server

    Wozar, Christian; Uhlmann, Sebastian; Wipf, Andreas; Heinzl, Thomas

    2007-01-01

    We study effective Polyakov loop models for SU(3) Yang-Mills theory at finite temperature. A comprehensive mean field analysis of the phase diagram is carried out and compared to the results obtained from Monte-Carlo simulations. We find a rich phase structure including ferromagnetic and antiferromagnetic phases. Due to the presence of a tricritical point the mean field approximation agrees very well with the numerical data. Critical exponents associated with second-order transitions coincide with those of the Z_3 Potts model. Finally, we employ inverse Monte-Carlo methods to determine the effective couplings in order to match the effective models to Yang-Mills theory.

  19. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan

    2015-01-01

    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte-Carlo simulation. It describes various gating mechanisms based on the physical differences between the unscattered and scattered photons and method for microscopic image reconstruction, using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy of tissue turbid media.

  20. Kinetic Monte Carlo modelling of neutron irradiation damage in iron

    Energy Technology Data Exchange (ETDEWEB)

    Gamez, L. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Departamento de Fisica Aplicada, ETSII, UPM, Madrid (Spain)], E-mail: linarejos.gamez@upm.es; Martinez, E. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Lawrence Livermore National Laboratory, LLNL, CA 94550 (United States); Perlado, J.M.; Cepas, P. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Caturla, M.J. [Departamento de Fisica Aplicada, Universidad de Alicante, Alicante (Spain); Victoria, M. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Marian, J. [Lawrence Livermore National Laboratory, LLNL, CA 94550 (United States); Arevalo, C. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Hernandez, M.; Gomez, D. [CIEMAT, Madrid (Spain)

    2007-10-15

    Ferritic steels (FeCr-based alloys) are key materials needed to fulfill the requirements expected in future nuclear fusion facilities, both for magnetic and inertial confinement, and in advanced fission (Generation IV) reactors and transmutation systems. Research in this field is a critical aspect of the European research program and abroad. Experimental and multiscale simulation methodologies are going hand in hand in increasing the knowledge of materials performance. At DENIM, progress is being made on specific parts of this well-linked simulation methodology, both for defect energetics and diffusion and for dislocation dynamics. In this study, results obtained from kinetic Monte Carlo simulations of neutron-irradiated Fe under different conditions are presented, using modified ad hoc parameters. Significant agreement with experimental measurements has been found for some of the parameterizations and mechanisms considered. The results of these simulations are discussed and compared with previous calculations.

  1. Modeling to Mars: a NASA Model Based Systems Engineering Pathfinder Effort

    Science.gov (United States)

    Phojanamongkolkij, Nipa; Lee, Kristopher A.; Miller, Scott T.; Vorndran, Kenneth A.; Vaden, Karl R.; Ross, Eric P.; Powell, Bobby C.; Moses, Robert W.

    2017-01-01

    The NASA Engineering Safety Center (NESC) Systems Engineering (SE) Technical Discipline Team (TDT) initiated the Model Based Systems Engineering (MBSE) Pathfinder effort in FY16. The goals and objectives of the MBSE Pathfinder include developing and advancing MBSE capability across NASA, applying MBSE to real NASA issues, and capturing issues and opportunities surrounding MBSE. The Pathfinder effort consisted of four teams, with each team addressing a particular focus area. This paper focuses on Pathfinder team 1 with the focus area of architectures and mission campaigns. These efforts covered the timeframe of February 2016 through September 2016. The team comprised eight members from seven NASA Centers (Glenn Research Center, Langley Research Center, Ames Research Center, Goddard Space Flight Center IV&V Facility, Johnson Space Center, Marshall Space Flight Center, and Stennis Space Center). Collectively, the team had varying levels of knowledge, skills and expertise in systems engineering and MBSE. The team applied their existing and newly acquired system modeling knowledge and expertise to develop modeling products for a campaign (Program) of crew and cargo missions (Projects) to establish a human presence on Mars utilizing In-Situ Resource Utilization (ISRU). Pathfinder team 1 developed a subset of modeling products that are required for a Program System Requirement Review (SRR)/System Design Review (SDR) and Project Mission Concept Review (MCR)/SRR as defined in NASA Procedural Requirements. Additionally, Team 1 was able to perform and demonstrate some trades and constraint analyses. At the end of these efforts, over twenty lessons learned and recommended next steps have been identified.

  2. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
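
    Since nearly all of the cost sits in re-evaluating the structure factor for each trial configuration, the kernel worth parallelising looks, in spirit, like the NumPy sketch below (a Debye-formula evaluation for an isotropic toy configuration; sizes and coordinates are invented, and the GPU version distributes the pair sum).

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    n_atoms = 200
    pos = rng.uniform(0.0, 20.0, size=(n_atoms, 3))   # toy confined fluid, angstroms
    q = np.linspace(0.5, 10.0, 120)                   # scattering-vector grid (1/angstrom)

    def structure_factor(pos, q):
        diff = pos[:, None, :] - pos[None, :, :]
        r = np.sqrt((diff ** 2).sum(-1))
        iu = np.triu_indices(len(pos), k=1)           # unique pair distances only
        qr = np.outer(q, r[iu])
        return 1.0 + (2.0 / len(pos)) * (np.sin(qr) / qr).sum(axis=1)

    S = structure_factor(pos, q)
    # A Monte Carlo move displaces one atom, so only that atom's n-1 pair distances
    # change; updating their contribution to S(q) is the part a GPU parallelises.
    ```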

  3. Benchmark calculation of no-core Monte Carlo shell model in light nuclei

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062

    2011-01-01

    The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction; the agreement between them is within a few percent at most.

  4. A tutorial introduction to Bayesian inference for stochastic epidemic models using Markov chain Monte Carlo methods.

    Science.gov (United States)

    O'Neill, Philip D

    2002-01-01

    Recent Bayesian methods for the analysis of infectious disease outbreak data using stochastic epidemic models are reviewed. These methods rely on Markov chain Monte Carlo methods. Both temporal and non-temporal data are considered. The methods are illustrated with a number of examples featuring different models and datasets.
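
    For readers new to the machinery these analyses rely on, a minimal random-walk Metropolis sampler is sketched below. The Gaussian stand-in posterior for an infection rate is purely illustrative and not taken from the review.

        import numpy as np

        def metropolis(log_post, x0, n_steps, step, rng):
            # Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
            # with probability min(1, post(x') / post(x)).
            x, lp = x0, log_post(x0)
            chain = np.empty(n_steps)
            for t in range(n_steps):
                prop = x + step * rng.standard_normal()
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:
                    x, lp = prop, lp_prop
                chain[t] = x
            return chain

        # Toy posterior for an infection rate (an illustrative stand-in).
        rng = np.random.default_rng(0)
        draws = metropolis(lambda b: -0.5 * ((b - 1.8) / 0.2) ** 2, 1.0, 5000, 0.1, rng)
        print(draws[1000:].mean())   # approaches 1.8 after burn-in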

  5. Universality of the Ising and the S=1 model on Archimedean lattices: A Monte Carlo determination

    Science.gov (United States)

    Malakis, A.; Gulpinar, G.; Karaaslan, Y.; Papakonstantinou, T.; Aslan, G.

    2012-03-01

    The Ising models S=1/2 and S=1 are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provide, with high accuracy, all critical exponents which, as expected, are the same with the well-known 2D Ising model exact values. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model obeys, also very well, the 2D Ising model critical exponents. As a result, we find that recent Monte Carlo simulations and attempts to define effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.
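
    The hybrid scheme mentioned here combines local Metropolis updates with Wolff cluster flips. A minimal Wolff update for the standard square-lattice Ising model is sketched below; the Archimedean lattices of the paper would only change the neighbour lists.

        import numpy as np

        def wolff_update(spins, beta, rng):
            # Grow a cluster of aligned spins with bond activation probability
            # p = 1 - exp(-2*beta*J), J = 1, then flip the cluster as a whole.
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-2.0 * beta)
            i, j = int(rng.integers(L)), int(rng.integers(L))
            seed = spins[i, j]
            stack, cluster = [(i, j)], {(i, j)}
            while stack:
                x, y = stack.pop()
                for nx, ny in [((x + 1) % L, y), ((x - 1) % L, y),
                               (x, (y + 1) % L), (x, (y - 1) % L)]:
                    if (nx, ny) not in cluster and spins[nx, ny] == seed \
                            and rng.random() < p_add:
                        cluster.add((nx, ny))
                        stack.append((nx, ny))
            for x, y in cluster:
                spins[x, y] *= -1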

  6. Markov chain Monte Carlo methods for state-space models with point process observations.

    Science.gov (United States)

    Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

    2012-06-01

    This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.

  7. FREYA-a new Monte Carlo code for improved modeling of fission chains

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  8. The Mental Effort-Reward Imbalances Model and Its Implications for Behaviour Management

    Science.gov (United States)

    Poulton, Alison; Whale, Samina; Robinson, Joanne

    2016-01-01

    Attention deficit hyperactivity disorder (ADHD) is frequently associated with oppositional defiant disorder (ODD). The Mental Effort Reward Imbalances Model (MERIM) explains this observational association as follows: in ADHD a disproportionate level of mental effort is required for sustaining concentration for achievement; in ODD the subjective…

  9. Monte Carlo path sampling approach to modeling aeolian sediment transport

    Science.gov (United States)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power whereas cellular automata-based approaches are computationally efficient

  10. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...

  11. Single-cluster-update Monte Carlo method for the random anisotropy model

    Science.gov (United States)

    Rößler, U. K.

    1999-06-01

    A Wolff-type cluster Monte Carlo algorithm for random magnetic models is presented. The algorithm is demonstrated to reduce significantly the critical slowing down for planar random anisotropy models with weak anisotropy strength. Dynamic exponents z for the cluster algorithm are estimated for models with a ratio of anisotropy to exchange constant D/J=1.0 on cubic lattices in three dimensions. For these models, critical exponents are derived from a finite-size scaling analysis.

  12. Converting Boundary Representation Solid Models to Half-Space Representation Models for Monte Carlo Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Davis JE, Eddy MJ, Sutton TM, Altomari TJ

    2007-03-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
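
    The target representation has a simple computational payoff: point-membership tests reduce to sign checks against the half-spaces. A toy sketch of this idea follows; the function names and the unit-cube example are mine, not the paper's algorithm.

        import numpy as np

        def inside_cell(halfspaces, x):
            # A geometry cell is the intersection of half-spaces {x : a.x - b <= 0},
            # the form many Monte Carlo transport codes use.
            return all(np.dot(a, x) - b <= 0.0 for a, b in halfspaces)

        # Unit cube written as six half-spaces.
        cube = [(np.array([-1.0, 0, 0]), 0.0), (np.array([1.0, 0, 0]), 1.0),
                (np.array([0, -1.0, 0]), 0.0), (np.array([0, 1.0, 0]), 1.0),
                (np.array([0, 0, -1.0]), 0.0), (np.array([0, 0, 1.0]), 1.0)]
        print(inside_cell(cube, np.array([0.5, 0.5, 0.5])))   # True
        print(inside_cell(cube, np.array([1.5, 0.5, 0.5])))   # False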

  13. [Psychometric properties of the French version of the Effort-Reward Imbalance model].

    Science.gov (United States)

    Niedhammer, I; Siegrist, J; Landre, M F; Goldberg, M; Leclerc, A

    2000-10-01

    Two main models are currently used to evaluate psychosocial factors at work: the Job Strain model developed by Karasek and the Effort-Reward Imbalance model. A French version of the first model has been validated for the dimensions of psychological demands and decision latitude. As regards the second, which evaluates three dimensions (extrinsic effort, reward, and intrinsic effort), several versions exist in different languages, but until recently there was no validated French version. The objective of this study was to explore the psychometric properties of the French version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. The present study was based on the GAZEL cohort and included the 10 174 subjects who were working at the French national electric and gas company (EDF-GDF) and answered the questionnaire in 1998. A French version of Effort-Reward Imbalance was included in this questionnaire. This version was obtained by a standard forward/backward translation procedure. Internal consistency was satisfactory for the three scales of extrinsic effort, reward, and intrinsic effort: Cronbach's alpha coefficients higher than 0.7 were observed. A one-factor solution was retained for the factor analysis of the extrinsic effort scale. A three-factor solution was retained for the factor analysis of reward, and these dimensions were interpreted. The factor analysis of intrinsic effort did not support the expected four-dimension structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the variables of sex, age, education level, and occupational grade. This study is the first to support satisfactory psychometric properties of the French version of the Effort-Reward Imbalance model. However, the factorial validity of intrinsic effort could be questioned. Furthermore, as most previous studies were based on male samples
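
    The internal-consistency statistic reported here is Cronbach's alpha; for reference, a standard computation is sketched below (the data layout is an assumption of mine, not taken from the study).

        import numpy as np

        def cronbach_alpha(scores):
            # scores: (n_subjects, n_items) matrix of item responses;
            # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)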

  14. Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center of mass energy of √(s)=13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach of these data which is complementary to dedicated analyses: By taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.

  15. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    Science.gov (United States)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is a lack of advice on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.

  16. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  17. Preliminary Monte Carlo Results for the Three-Dimensional Holstein Model

    Institute of Scientific and Technical Information of China (English)

    吴焰立; 刘川; 罗强

    2003-01-01

    Monte Carlo simulations are used to study the three-dimensional Holstein model. The relationship between the band filling and the chemical potential is obtained for various phonon frequencies and temperatures. The energy of a single electron or a hole is also calculated as a function of the lattice momenta.

  18. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

  20. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior distributi

  2. A study of the XY model by the Monte Carlo method

    Science.gov (United States)

    Suranyi, Peter; Harten, Paul

    1987-01-01

    The massively parallel processor is used to perform Monte Carlo simulations for the two dimensional XY model on lattices of sizes up to 128 x 128. A parallel random number generator was constructed, finite size effects were studied, and run times were compared with those on a CRAY X-MP supercomputer.

  3. Generic Form of Bayesian Monte Carlo For Models With Partial Monotonicity

    NARCIS (Netherlands)

    Rajabalinejad, M.

    2012-01-01

    This paper presents a generic method for the safety assessment of models with partial monotonicity. For this purpose, a Bayesian interpolation method is developed and implemented in the Monte Carlo process. The integrated approach is a generalization of the recently developed techniques used in safety

  4. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  5. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  6. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    Science.gov (United States)

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  7. Generic form of Bayesian Monte Carlo for models with partial monotonicity

    NARCIS (Netherlands)

    Rajabalinejad, M.; Spitas, C.

    2012-01-01

    This paper presents a generic method for the safety assessment of models with partial monotonicity. For this purpose, a Bayesian interpolation method is developed and implemented in the Monte Carlo process. The integrated approach is a generalization of the recently developed techniques used in safety

  8. LASER-DOPPLER VELOCIMETRY AND MONTE-CARLO SIMULATIONS ON MODELS FOR BLOOD PERFUSION IN TISSUE

    NARCIS (Netherlands)

    DEMUL, FFM; KOELINK, MH; KOK, ML; HARMSMA, PJ; GREVE, J; GRAAFF, R; AARNOUDSE, JG

    1995-01-01

    Laser Doppler flow measurements and Monte Carlo simulations on small blood perfusion flow models at 780 nm are presented and compared. The dimensions of the optical sample volume are investigated as functions of the distance of the laser to the detector and as functions of the angle of penetration o

  9. Surprising convergence of the Monte Carlo renormalization group for the three-dimensional Ising model.

    Science.gov (United States)

    Ron, Dorit; Brandt, Achi; Swendsen, Robert H

    2017-05-01

    We present a surprisingly simple approach to high-accuracy calculations of the critical properties of the three-dimensional Ising model. The method uses a modified block-spin transformation with a tunable parameter to improve convergence in the Monte Carlo renormalization group. The block-spin parameter must be tuned differently for different exponents to produce optimal convergence.
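
    A hedged sketch of the block-spin step underlying such MCRG calculations is given below. The logistic weight w stands in for the paper's tunable parameter; this is an assumption of mine rather than the authors' exact transformation.

        import numpy as np

        def block_spin(spins, b, w, rng):
            # Coarse-grain +-1 spins into blocks of linear size b; the block spin
            # follows the block magnetization with a sharpness set by w
            # (w -> infinity recovers the plain majority rule).
            L = spins.shape[0]
            Lb = L // b
            block_sum = spins[:Lb * b, :Lb * b].reshape(Lb, b, Lb, b).sum(axis=(1, 3))
            p_up = 1.0 / (1.0 + np.exp(-2.0 * w * block_sum))
            return np.where(rng.random((Lb, Lb)) < p_up, 1, -1)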

  10. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  11. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
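
    One way to read the spectrum-derivation step is as a constrained least-squares fit of spectral weights to the measured PDD. The sketch below uses SciPy's least_squares (a trust-region relative of Levenberg-Marquardt once bounds are imposed) with a hypothetical mono-energetic depth-dose basis; it is an interpretation under stated assumptions, not the authors' code.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_spectrum(pdd_basis, measured_pdd):
            # pdd_basis[k] is the depth-dose curve a mono-energetic beam E_k would
            # give (hypothetical input); find non-negative weights w so that
            # sum_k w_k * pdd_basis[k] matches the measured curve.
            def residual(w):
                return pdd_basis.T @ w - measured_pdd
            w0 = np.full(pdd_basis.shape[0], 1.0 / pdd_basis.shape[0])
            # With bounds, least_squares switches from classic LM to a
            # trust-region reflective solver.
            sol = least_squares(residual, w0, bounds=(0.0, np.inf))
            return sol.x / sol.x.sum()   # normalized energy spectrum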

  12. Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling

    NARCIS (Netherlands)

    Vrugt, J.A.; Diks, C.G.H.; Clark, M.

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t

  13. Adaptive Effort Investment in Cognitive and Physical Tasks: A Neurocomputational Model

    Directory of Open Access Journals (Sweden)

    Tom Verguts

    2015-03-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species.

  14. ANALYSIS OF INNOVATIVE ACTIVITY OF METALLURGICAL COMPANIES USING MONTE-CARLO MATHEMATICAL MODELING METHOD

    Directory of Open Access Journals (Sweden)

    Shchekoturova S. D.

    2015-04-01

    The article presents an analysis of the innovative activity of four Russian metallurgical enterprises, "Ruspolimet", JSC "Ural Smithy", JSC "Stupino Metallurgical Company", and JSC "VSMPO", via mathematical modeling using the Monte Carlo method. The innovative activity of the companies was assessed in five-year dynamics by calculating an integral index of innovative activity, based on six indicators analyzed from 2007 to 2011: the proportion of staff employed in R&D; the level of development of new technology; the degree of development of new products; the share of material resources for R&D; the degree of protection of enterprise intellectual property; and the share of investment in innovative projects. On the basis of these data, the integral indicator of the innovative activity of the metallurgical companies was calculated by the well-known method of weighting coefficients. A comparative analysis of the integral indicators made it possible to rank the companies by their level of innovative activity and to characterize the current state of their business. Based on the Monte Carlo method, a variation interval of the integral indicator was obtained, and detailed recommendations for choosing an innovative development strategy for the metallurgical enterprises were given.
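
    The two numerical steps the abstract names, a weighted-sum integral indicator and a Monte Carlo variation interval, fit in a few lines. In the sketch below the weights, indicator values, and perturbation spread are illustrative placeholders, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(1)
        weights = np.array([0.20, 0.15, 0.20, 0.15, 0.15, 0.15])  # assumed weights
        means = np.array([0.35, 0.50, 0.40, 0.30, 0.60, 0.25])    # assumed indicators

        # Integral indicator as a weighted sum, plus a Monte Carlo 95% variation
        # interval obtained by perturbing the six normalized indicators.
        samples = rng.normal(means, 0.05, size=(100_000, 6)).clip(0, 1) @ weights
        low, high = np.percentile(samples, [2.5, 97.5])
        print(f"index = {means @ weights:.3f}, 95% interval = [{low:.3f}, {high:.3f}]")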

  15. Overview 2004 of NASA-Stirling Convertor CFD Model Development and Regenerator R and D Efforts

    Science.gov (United States)

    Tew, Roy C.; Dyson, Rodger W.; Wilson, Scott D.; Demko, Rikako

    2004-01-01

    This paper reports on accomplishments in 2004 in (1) development of Stirling-convertor CFD models at NASA Glenn and via a NASA grant, (2) a Stirling regenerator-research effort being conducted via a NASA grant (a follow-on effort to an earlier DOE contract), and (3) a regenerator-microfabrication contract for development of a "next-generation Stirling regenerator." Cleveland State University is the lead organization for all three grant/contractual efforts, with the University of Minnesota and Gedeon Associates as subcontractors. Also, the Stirling Technology Company and Sunpower, Inc. are both involved in all three efforts, either as funded or unfunded participants. International Mezzo Technologies of Baton Rouge, Louisiana is the regenerator fabricator for the regenerator-microfabrication contract. Results of the efforts in these three areas are summarized.

  16. Monte-Carlo Inversion of Travel-Time Data for the Estimation of Weld Model Parameters

    Science.gov (United States)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2011-06-01

    The quality of ultrasonic array imagery is adversely affected by uncompensated variations in the medium properties. A method for estimating the parameters of a general model of an inhomogeneous anisotropic medium is described. The model is comprised of a number of homogeneous sub-regions with unknown anisotropy. Bayesian estimation of the unknown model parameters is performed via a Monte-Carlo Markov chain using the Metropolis-Hastings algorithm. Results are demonstrated using simulated weld data.

  17. Monte Carlo modeling of a Novalis Tx Varian 6 MV with HD-120 multileaf collimator.

    Science.gov (United States)

    Vazquez-Quino, Luis Alberto; Massingill, Brian; Shi, Chengyu; Gutierrez, Alonso; Esquivel, Carlos; Eng, Tony; Papanikolaou, Nikos; Stathakis, Sotirios

    2012-09-06

    A Monte Carlo model of the Novalis Tx linear accelerator equipped with high-definition multileaf collimator (HD-120 HD-MLC) was commissioned using ionization chamber measurements in water. All measurements in water were performed using a liquid filled ionization chamber. Film measurements were made using EDR2 film in solid water. Open rectangular fields defined by the jaws or the HD-MLC were used for comparison against measurements. Furthermore, inter- and intraleaf leakage calculated by the Monte Carlo model was compared against film measurements. The statistical uncertainty of the Monte Carlo calculations was less than 1% for all simulations. Results for all regular field sizes show an excellent agreement with commissioning data (percent depth-dose curves and profiles), well within 1% of difference in the relative dose and 1 mm distance to agreement. The computed leakage through HD-MLCs shows good agreement with film measurements. The Monte Carlo model developed in this study accurately represents the new Novalis Tx Varian linac with HD-MLC and can be used for reliable patient dose calculations.

  18. Evolving Software Effort Estimation Models Using Multigene Symbolic Regression Genetic Programming

    Directory of Open Access Journals (Sweden)

    Sultan Aljahdali

    2013-12-01

    Software has played an essential role in engineering, economic development, stock market growth and military applications. A mature software industry counts on highly predictive software effort estimation models. Correct estimation of software effort leads to correct estimation of budget and development time, and also allows companies to develop appropriate time plans for marketing campaigns. Nowadays it has become a great challenge to obtain these estimates due to the increasing number of attributes that affect the software development life cycle. Software cost estimation models should be able to provide sufficient confidence in their prediction capabilities. Recently, Computational Intelligence (CI) paradigms were explored to handle the software effort estimation problem, with promising results. In this paper we evolve two new models for software effort estimation using Multigene Symbolic Regression Genetic Programming (GP). One model utilizes the Source Lines Of Code (SLOC) as input variable to estimate the Effort (E), while the second model utilizes the Inputs, Outputs, Files, and User Inquiries to estimate the Function Point (FP). The proposed GP models show better estimation capabilities compared to other models reported in the literature. The validation results are accepted based on the Albrecht data set.

  19. High-resolution and Monte Carlo additions to the SASKTRAN radiative transfer model

    Directory of Open Access Journals (Sweden)

    D. J. Zawada

    2015-06-01

    The Optical Spectrograph and InfraRed Imaging System (OSIRIS) instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2 %. As an example case for both models, Odin–OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4 % in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. The bias is largest when the sun is near the horizon and the solar scattering angle is far from 90°. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin–OSIRIS geometries.

  20. High resolution and Monte Carlo additions to the SASKTRAN radiative transfer model

    Directory of Open Access Journals (Sweden)

    D. J. Zawada

    2015-03-01

    The OSIRIS instrument on board the Odin spacecraft has been measuring limb scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high spatial resolution mode, and a Monte Carlo mode. The high spatial resolution mode is a successive orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2%. As an example case for both models, Odin-OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high resolution model. A systematic bias of up to 4% in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin-OSIRIS geometries.

  1. Improvements of the Analytical Model of Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    HE Qing-Fang; XU Zheng; TENG Feng; LIU De-Ang; XU Xu-Rong

    2006-01-01

    By extending the conduction band structure, we set up a new analytical model in ZnS. Comparing the results with both the old analytical model and the full band model, it is found that the new model is in reasonable agreement with the full band method and improves the calculation precision. Another important part of this work is the reduction of the programme's computation time by fitting the scattering rate curves.

  2. Monte Carlo study of single-barrier structure based on exclusion model full counting statistics

    Institute of Scientific and Technical Information of China (English)

    Chen Hua; Du Lei; Qu Cheng-Li; He Liang; Chen Wen-Hao; Sun Peng

    2011-01-01

    Different from the usual full counting statistics theoretical work, which focuses on computing higher-order cumulants from the cumulant generating function in electrical structures, a Monte Carlo simulation of a single-barrier structure is performed to obtain time series for two types of widely applicable exclusion models: the counter-flows model and the tunnel model. With high-order spectrum analysis in Matlab, the validity of the Monte Carlo methods is shown through the first four cumulants extracted from the time series, which are in agreement with those from the cumulant generating function. A comparison between the counter-flows model and the tunnel model in a single-barrier structure shows that the essential difference between them consists in the strict enforcement of the Pauli principle in the former and its statistical treatment in the latter.
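
    The quantities being compared are the first four cumulants of the simulated time series. A standard way to extract them from samples is sketched below; this is a generic estimator, not the paper's Matlab spectrum analysis.

        import numpy as np

        def first_four_cumulants(x):
            # kappa1 = mean, kappa2 = variance, kappa3 = 3rd central moment,
            # kappa4 = 4th central moment - 3 * kappa2^2.
            x = np.asarray(x, dtype=float)
            c = x - x.mean()
            k2 = (c ** 2).mean()
            return x.mean(), k2, (c ** 3).mean(), (c ** 4).mean() - 3.0 * k2 ** 2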

  3. Development of perturbation Monte Carlo methods for polarized light transport in a discrete particle scattering model.

    Science.gov (United States)

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome

    2016-05-01

    We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides.

  4. A Monte Carlo Solution of the Human Ballistic Mortality Model

    Science.gov (United States)

    1978-08-01

    to obtain a damage D for the total wound. This addition law is averaged over the total soldier. W.B. Beverly, "A Human Ballistic Mortality Model," to be ... January 1970. C.A. Stanley and K. Brown, "A Computer Man Anatomical Model," Ballistic Research Laboratory Report ARBRL TR No. 02080, May 1978.

  5. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia;

    2014-01-01

    … multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from

  6. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4®, on the ITER A-lite model, was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4® is shown; discrepancies are mainly within the statistical error.

  7. Monte Carlo Simulation of the Potts Model on a Dodecagonal Quasiperiodic Structure

    Institute of Scientific and Technical Information of China (English)

    WEN Zhang-Bin; HOU Zhi-Lin; FU Xiu-Jun

    2011-01-01

    By means of a Monte Carlo simulation, we study the three-state Potts model on a two-dimensional quasiperiodic structure based on a dodecagonal cluster covering pattern. The critical temperature and exponents are obtained from finite-size scaling analysis. It is shown that the Potts model on the quasiperiodic lattice belongs to the same universality class as those on periodic ones.
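
    A minimal Metropolis sweep for the three-state Potts model is sketched below. An adjacency list is used so the same routine covers a periodic lattice or a quasiperiodic covering pattern like the paper's; the code itself is mine, not the authors'.

        import numpy as np

        def potts_sweep(state, beta, q, neighbors, rng):
            # H = -sum_<ij> delta(s_i, s_j); propose a different colour at a
            # random site and accept with the Metropolis probability.
            n = len(state)
            for _ in range(n):
                i = int(rng.integers(n))
                new = (state[i] + int(rng.integers(1, q))) % q
                e_old = -sum(state[j] == state[i] for j in neighbors[i])
                e_new = -sum(state[j] == new for j in neighbors[i])
                if e_new <= e_old or rng.random() < np.exp(-beta * (e_new - e_old)):
                    state[i] = new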

  8. Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models

    Science.gov (United States)

    Si, Lisha; Liao, Xiaoyun; Zhou, Nengji

    2016-12-01

    With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5) , ν = 1.32(3) , z = 1.12(1) , and ζ = 0.90(1) , quite different from that of the quenched Edwards-Wilkinson equation.

  9. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    Science.gov (United States)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding of the mechanisms involved. Thus the reliability of predicting in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and predicted Monte Carlo modeling can be achieved by modifying the atomic oxygen interactive assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  10. Simulation model based on Monte Carlo method for traffic assignment in local area road network

    Institute of Scientific and Technical Information of China (English)

    Yuchuan DU; Yuanjing GENG; Lijun SUN

    2009-01-01

    For a local area road network, the available traffic data are the flow volumes at key intersections, not a complete OD matrix. Considering the characteristics and data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability over time, the turning ratio is adopted as the key parameter of this model. The formulation for local area road network assignment problems is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. Comparisons between surveyed traffic flow data and the flows determined by the model verify the applicability and validity of the proposed methodology.
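
    The assignment idea reduces to a random walk of simulated vehicles through intersections according to the turning ratios. A toy three-node example follows; the network and the ratios are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(7)
        # Turning ratios per node; None means the trip leaves the study area.
        turning = {"A": [("B", 0.6), ("C", 0.4)],
                   "B": [("C", 0.3), (None, 0.7)],
                   "C": [(None, 1.0)]}

        link_flow = {}
        for _ in range(10_000):            # vehicles observed entering at node A
            node = "A"
            while node is not None:
                dests, probs = zip(*turning[node])
                node_next = dests[rng.choice(len(dests), p=probs)]
                if node_next is not None:
                    link_flow[(node, node_next)] = link_flow.get((node, node_next), 0) + 1
                node = node_next

        print(link_flow)   # Monte Carlo estimate of link volumes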

  11. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  12. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Science.gov (United States)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  13. An Analytic Linear Accelerator Source Model for Monte Carlo Dose Calculations. I. Model Representation and Construction

    CERN Document Server

    Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation is considered as the most accurate method for radiation dose calculations. Accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we presented an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called phase-space ring (PSR) was proposed. It contained a group of particles that are of the same type and close in energy and radial distance to the center of the phase-space plane. The model parameterized probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. For a primary photon PSR, the particle direction is assumed to be from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distribution...

  14. Monte Carlo Based Toy Model for Fission Process

    CERN Document Server

    Kurniadi, R; Viridi, S

    2014-01-01

    Fission yields have notoriously been calculated by two approaches, the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model. The toy model of fission yield is a preliminary method that uses random numbers as the backbone of the calculation. Because the nucleus is a toy model, the fission process does not completely represent the real fission process in nature. A fission event is modeled by one random number. The number is assumed to be the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. The toy model is formed by a Gaussian distribution of random numbers that randomizes the distances between particles and a central point. The scission process is started by smashing the compound nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters such as the means (μCN, μL, μR) and standard d...

  15. Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach

    OpenAIRE

    Xu, Lixin; Lu, Jianbo

    2010-01-01

    We use the Markov Chain Monte Carlo method to investigate global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are, $\Omega...

  16. Direct Monte Carlo Measurement of the Surface Tension in Ising Models

    CERN Document Server

    Hasenbusch, M

    1992-01-01

    I present a cluster Monte Carlo algorithm that gives direct access to the interface free energy of Ising models. The basic idea is to simulate an ensemble that consists of both configurations with periodic and with antiperiodic boundary conditions. A cluster algorithm is provided that efficiently updates this joint ensemble. The interface tension is obtained from the ratio of configurations with periodic and antiperiodic boundary conditions, respectively. The method is tested for the 3-dimensional Ising model.
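
    In formula form (notation mine, a hedged reconstruction rather than the paper's equations), the estimator is the reduced interface tension obtained from the relative weight of the two boundary conditions in the joint ensemble:

        % N_a, N_p: counts of Monte Carlo configurations visited with
        % antiperiodic and periodic boundaries; L^{d-1} is the interface area.
        \beta \sigma L^{d-1} \;=\; -\ln\frac{Z_a}{Z_p} \;\approx\; -\ln\frac{N_a}{N_p}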

  17. Coupled Simulations of Mechanical Deformation and Microstructural Evolution Using Polycrystal Plasticity and Monte Carlo Potts Models

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, C.C.; Buchheit, T.E.; Holm, E.A.; Neilsen, M.K.; Wellman, G.W.

    1999-01-12

    The microstructural evolution of heavily deformed polycrystalline Cu is simulated by coupling a constitutive model for polycrystal plasticity with the Monte Carlo Potts model for grain growth. The effects of deformation on boundary topology and grain growth kinetics are presented. Heavy deformation leads to dramatic strain-induced boundary migration and subsequent grain fragmentation. Grain growth is accelerated in heavily deformed microstructures. The implications of these results for the thermomechanical fatigue failure of eutectic solder joints are discussed.

  18. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    Science.gov (United States)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 1σ model uncertainty of 46%, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
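
    Latin Hypercube Sampling, used here to build the 419 input parameter sets, stratifies each parameter's range so every stratum is sampled exactly once. A compact implementation on the unit cube (a generic sketch, not the GSFC code) is:

        import numpy as np

        def latin_hypercube(n_samples, n_params, rng):
            # One point per stratum in each dimension; columns are shuffled
            # independently to break the diagonal correlation.
            u = (rng.random((n_samples, n_params))
                 + np.arange(n_samples)[:, None]) / n_samples
            for k in range(n_params):
                u[:, k] = u[rng.permutation(n_samples), k]
            return u   # map to physical parameter ranges/distributions afterwards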

  19. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr

  20. Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging

    Science.gov (United States)

    Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben

    2015-03-01

    In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficient. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, makes this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.
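
    For readers unfamiliar with Monte Carlo light transport, the minimal sketch below propagates photon packets through a homogeneous 2D slab using exponentially distributed step lengths and absorption weighting. It omits the radiance computation, the Fourier angular basis, and the adjoint machinery of the paper, and assumes isotropic scattering; the optical coefficients and geometry are made-up values.

        import numpy as np

        rng = np.random.default_rng(0)
        mu_a, mu_s = 0.2, 2.0                  # absorption/scattering (1/mm), made-up values
        mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)
        h, n_pkt = 0.25, 5000                  # voxel size (mm) and photon packets
        grid = np.zeros((40, 40))              # absorbed-energy map over a 10x10 mm slab

        for _ in range(n_pkt):                 # launch packets downward from the surface
            x, y, ux, uy, w = 5.0, 0.0, 0.0, 1.0, 1.0
            while w > 1e-3:
                step = -np.log(rng.random()) / mu_t   # free path to the next interaction
                x, y = x + ux * step, y + uy * step
                if not (0.0 <= x < 10.0 and 0.0 <= y < 10.0):
                    break                      # packet left the slab
                dep = w * (1.0 - albedo)       # deposit the absorbed fraction of the weight
                grid[int(y / h), int(x / h)] += dep
                w -= dep
                phi = 2.0 * np.pi * rng.random()      # isotropic scattering (g = 0)
                ux, uy = np.cos(phi), np.sin(phi)

        print("absorbed fraction:", grid.sum() / n_pkt)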

  1. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    NARCIS (Netherlands)

    Spada, F.M.; Krol, M.C.; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth’s radius, and can

  3. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners.

    Science.gov (United States)

    Sweeney, Lisa M; Parker, Ann; Haber, Lynne T; Tran, C Lang; Kuempel, Eileen D

    2013-06-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model.
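
    A minimal sketch of the Bayesian recalibration step: a hand-rolled Metropolis sampler on a one-compartment toy clearance model (retention R(t) = exp(-lambda*t)) with synthetic data. The three-compartment miner model, the priors, and the noise levels of the actual study are replaced by illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        t = np.array([30.0, 90.0, 180.0, 365.0, 730.0])     # days post exposure (hypothetical)
        lam_true = 0.004
        y = np.exp(-lam_true * t) * (1 + 0.05 * rng.standard_normal(5))  # synthetic retention

        def log_post(lam):                                   # Gaussian likelihood, lognormal prior
            if lam <= 0:
                return -np.inf
            resid = y - np.exp(-lam * t)
            return -0.5 * np.sum((resid / 0.05) ** 2) - 0.5 * (np.log(lam / 0.003)) ** 2

        chain, lam = [], 0.003
        lp = log_post(lam)
        for _ in range(20000):
            prop = lam * np.exp(0.1 * rng.standard_normal())  # multiplicative random walk
            lp_prop = log_post(prop)
            # Metropolis-Hastings accept; log(prop/lam) is the proposal-asymmetry correction
            if np.log(rng.random()) < lp_prop - lp + np.log(prop / lam):
                lam, lp = prop, lp_prop
            chain.append(lam)

        chain = np.array(chain[5000:])                        # discard burn-in
        print("posterior mean, sd of clearance rate:", chain.mean(), chain.std())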

  4. Numerical Study of Light Transport in Apple Models Based on Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Mohamed Lamine Askoura

    2015-12-01

    This paper reports on the quantification of light transport in apple models using Monte Carlo simulations. To this end, the apple was modeled as a two-layer spherical model including skin and flesh bulk tissues. The optical properties of both tissue types used to generate the Monte Carlo data were collected from the literature and selected to cover a range of values related to three apple varieties. Two different imaging-tissue setups were simulated in order to show the role of the skin in steady-state backscattering images, spatially-resolved reflectance profiles, and the assessment of flesh optical properties using an inverse nonlinear least squares fitting algorithm. Simulation results suggest that apple skin cannot be ignored when a Visible/Near-Infrared (Vis/NIR) steady-state imaging setup is used for investigating quality attributes of apples. They also help to improve optical inspection techniques for horticultural products.

  5. Monte Carlo simulation based toy model for fission process

    Science.gov (United States)

    Kurniadi, Rizal; Waris, Abdul; Viridi, Sparisoma

    2016-09-01

    Nuclear fission has conventionally been modeled using two approaches, macroscopic and microscopic. This work proposes another approach, in which the nucleus is treated as a toy model. The aim is to assess the usefulness of particle distributions in fission yield calculations. Since the nucleus is a toy, the Fission Toy Model (FTM) does not completely represent the real process in nature. A fission event in FTM is represented by one random number. This number is taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. Adopting the nucleon density approximation, a Gaussian distribution is chosen as the particle distribution. This distribution function generates random numbers that randomize the distances between particles and a central point. The scission process starts by smashing the compound nucleus central point into two parts, a left central point and a right central point. The yield is determined from the portion of the nucleon distribution, which is proportional to the portion of mass numbers. Using the modified FTM, the characteristics of the particle distribution in each fission event can be formed before the fission process. These characteristics could be used to make predictions about real nucleon interactions in the fission process. The results of the FTM calculation suggest that the γ value behaves like an energy.
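
    A loose reading of the FTM recipe can be sketched as follows: one random number sets the width of a Gaussian distribution of nucleon positions, a scission point splits the population in two, and fragment masses are tallied over many events. The compound mass, the widths, and the scission-point spread below are all illustrative assumptions, and this naive version produces a symmetric mass distribution.

        import numpy as np

        rng = np.random.default_rng(7)
        A, runs = 236, 50000                     # compound nucleus mass (e.g. 235U + n), assumed
        yields = np.zeros(A + 1)

        for _ in range(runs):
            width = abs(rng.normal(1.0, 0.3))    # one random number sets the distribution width
            z = rng.normal(0.0, width, size=A)   # nucleon positions along the fission axis
            cut = rng.normal(0.0, 0.15)          # scission point near the centre
            a_left = int(np.sum(z < cut))        # fragment masses from the population split
            yields[a_left] += 1
            yields[A - a_left] += 1

        yields /= yields.sum()
        print("most probable fragment mass:", np.argmax(yields))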

  6. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    Science.gov (United States)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of the bacteria coupled with the macroscopic transports of the chemical cues and bacteria population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, which is defined by the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behaviors for small Knudsen numbers is numerically verified.
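
    The coupling described above can be sketched with a toy 1D version: run-and-tumble particles whose tumble rate depends on the perceived attractant gradient, advanced alongside a finite-volume update of an attractant field that the bacteria consume. The speeds, rates, and response function below are illustrative assumptions, not the paper's kinetic model; the field uses periodic boundaries for brevity.

        import numpy as np

        rng = np.random.default_rng(3)
        nx, Lx, dt = 200, 10.0, 0.01                 # 1D channel discretised for the field
        dx = Lx / nx
        c = np.ones(nx)                              # chemoattractant concentration
        x = rng.uniform(0, 1.0, 5000)                # bacteria start near the left end
        v = rng.choice([-1.0, 1.0], 5000) * 0.5      # run velocity +-0.5
        D, k = 0.05, 2.0                             # attractant diffusivity, consumption rate

        for step in range(4000):
            idx = np.clip((x / dx).astype(int), 0, nx - 1)
            # finite-volume field update: diffusion minus consumption by local bacteria
            dens = np.bincount(idx, minlength=nx) / (5000 * dx)
            lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
            c = np.clip(c + dt * (D * lap - k * dens * c), 0, None)
            # run-and-tumble: tumbling is suppressed when moving up the attractant gradient
            grad = (np.roll(c, -1) - np.roll(c, 1))[idx] / (2 * dx)
            lam = 1.0 * np.exp(-2.0 * np.sign(grad * v))      # modulated tumble rate
            tumble = rng.random(5000) < lam * dt
            v[tumble] *= rng.choice([-1.0, 1.0], tumble.sum())
            x = np.clip(x + v * dt, 0, Lx)

        print("mean position of the population:", x.mean())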

  7. Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors

    Science.gov (United States)

    Kalyvas, N.; Liaparinos, P.

    2014-03-01

    Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors, amongst the most important of which is the attenuation of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the region 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.

  9. Modeling of hysteresis loops by Monte Carlo simulation

    Science.gov (United States)

    Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.

    2015-12-01

    Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For an arbitrary assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm, and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant on the coercive field for different temperatures and algorithms. A study of the angular acceptance distribution of the moves allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads to better-quantified coercive field measurements for complex magnetic (nano)architectures with different anisotropy contributions.
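
    A bare-bones version of such a hysteresis simulation is sketched below: a classical Heisenberg model with exchange J, uniaxial anisotropy K, and a swept Zeeman field, updated with Metropolis moves drawn from a Gaussian cone. It uses a small periodic cube rather than the paper's slab, and only the plain cone-like algorithm, not the mixed algorithm with global rotations; all parameter values and sweep counts are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        L, J, K, beta = 6, 1.0, 0.5, 2.0              # illustrative parameters
        S = np.zeros((L, L, L, 3)); S[..., 2] = 1.0   # start saturated along the easy z axis

        def nb_sum(i, j, k):                          # vector sum of the six neighbor spins
            return (S[(i+1) % L, j, k] + S[(i-1) % L, j, k]
                    + S[i, (j+1) % L, k] + S[i, (j-1) % L, k]
                    + S[i, j, (k+1) % L] + S[i, j, (k-1) % L])

        def e_site(s, nb, h):                         # exchange + anisotropy + Zeeman energy
            return -J * np.dot(s, nb) - K * s[2] ** 2 - h * s[2]

        fields = np.concatenate([np.linspace(3, -3, 25), np.linspace(-3, 3, 25)])
        loop_m = []
        for h in fields:
            for _ in range(40 * L ** 3):              # a few Metropolis sweeps per field step
                i, j, k = rng.integers(L, size=3)
                nb = nb_sum(i, j, k)
                new = S[i, j, k] + 0.4 * rng.standard_normal(3)   # cone-like move
                new /= np.linalg.norm(new)
                dE = e_site(new, nb, h) - e_site(S[i, j, k], nb, h)
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    S[i, j, k] = new
            loop_m.append(S[..., 2].mean())

        m_desc = np.abs(np.array(loop_m[:25]))        # descending branch of the loop
        print("coercive field approx.:", fields[int(np.argmin(m_desc))])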

  10. Simple capture-recapture models permitting unequal catchability and variable sampling effort.

    Science.gov (United States)

    Agresti, A

    1994-06-01

    We consider two capture-recapture models that imply that the logit of the probability of capture is an additive function of an animal catchability parameter and a parameter reflecting the sampling effort. The models are special cases of the Rasch model, and satisfy the property of quasi-symmetry. One model is log-linear and the other is a latent class model. For the log-linear model, point and interval estimates of the population size are easily obtained using standard software, such as GLIM.

  11. Monte Carlo modeling of ion beam induced secondary electrons

    Energy Technology Data Exchange (ETDEWEB)

    Huh, U., E-mail: uhuh@vols.utk.edu [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Cho, W. [Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100 (United States); Joy, D.C. [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Center for Nanophase Materials Science, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2016-09-15

    Ion induced secondary electrons (iSE) can produce high-resolution images ranging from a few eV to 100 keV over a wide range of materials. The interpretation of such images requires knowledge of the secondary electron yields (iSE δ) for each of the elements and materials present and as a function of the incident beam energy. Experimental data for helium ions are currently limited to 40 elements and six compounds while other ions are not well represented. To overcome this limitation, we propose a simple procedure based on the comprehensive work of Berger et al. Here we show that between the energy range of 10–100 keV the Berger et al. data for elements and compounds can be accurately represented by a single universal curve. The agreement between the limited experimental data that is available and the predictive model is good, and has been found to provide reliable yield data for a wide range of elements and compounds. - Highlights: • The Universal ASTAR Yield Curve was derived from data recently published by NIST. • IONiSE incorporated with the Curve will predict iSE yield for elements and compounds. • This approach can also handle other ion beams by changing basic scattering profile.

  12. Reviewing the effort-reward imbalance model: drawing up the balance of 45 empirical studies.

    Science.gov (United States)

    van Vegchel, Natasja; de Jonge, Jan; Bosma, Hans; Schaufeli, Wilmar

    2005-03-01

    The present paper provides a review of 45 studies on the Effort-Reward Imbalance (ERI) Model published from 1986 to 2003 (inclusive). In 1986, the ERI Model was introduced by Siegrist et al. (Biological and Psychological Factors in Cardiovascular Disease, Springer, Berlin, 1986, pp. 104-126; Social Science & Medicine 22 (1986) 247). The central tenet of the ERI Model is that an imbalance between (high) efforts and (low) rewards leads to (sustained) strain reactions. Besides efforts and rewards, overcommitment (i.e., a personality characteristic) is a crucial aspect of the model. Essentially, the ERI Model contains three main assumptions, which could be labeled as (1) the extrinsic ERI hypothesis: high efforts in combination with low rewards increase the risk of poor health, (2) the intrinsic overcommitment hypothesis: a high level of overcommitment may increase the risk of poor health, and (3) the interaction hypothesis: employees reporting an extrinsic ERI and a high level of overcommitment have an even higher risk of poor health. The review showed that the extrinsic ERI hypothesis has gained considerable empirical support. Results for overcommitment remain inconsistent and the moderating effect of overcommitment on the relation between ERI and employee health has been scarcely examined. Based on these review results suggestions for future research are proposed.

  13. Effort dynamics in a fisheries bioeconomic model: A vessel level approach through Game Theory

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2007-09-01

    Red shrimp, Aristeus antennatus (Risso, 1816), is one of the most important resources for the bottom-trawl fleets in the northwestern Mediterranean, in terms of both landings and economic value. A simple bioeconomic model introducing Game Theory for the prediction of effort dynamics at the vessel level is proposed. The game is played by the twelve vessels exploiting red shrimp in Blanes. Within the game, two solutions are considered: non-cooperation and cooperation. The first is proposed as a realistic method for the prediction of individual effort strategies and the second is used to illustrate the potential profitability of the analysed fishery. The effort strategy for each vessel is the number of fishing days per year and their objective is profit maximisation: individual profits for the non-cooperative solution and total profits for the cooperative one. In the present analysis, strategic conflicts arise from the differences between vessels in technical efficiency (catchability coefficient) and economic efficiency (defined here). The ten-year and 1000-iteration stochastic simulations performed for the two effort solutions show that the best strategy from both an economic and a conservationist perspective is homogeneous effort cooperation. However, the results under non-cooperation are more similar to the observed data on effort strategies and landings.
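
    A stylized one-period version of such a game can be sketched with best-response dynamics: each vessel grid-searches its profit-maximizing fishing days given the others' effort, with heterogeneous catchability coefficients, while a cooperative benchmark maximizes fleet profit over a common effort. The profit and stock functions and all parameter values below are invented for illustration and are not the Blanes fleet model.

        import numpy as np

        rng = np.random.default_rng(5)
        n, price, cost, X0 = 12, 10.0, 1.5, 1000.0
        q = rng.uniform(0.8, 1.2, n) * 1e-3            # vessel-specific catchability
        grid = np.arange(0, 251)                       # feasible fishing days per year

        def profit_i(i, Ei, E):
            tot = np.sum(q * E) - q[i] * E[i] + q[i] * Ei
            X = X0 * np.exp(-tot)                      # stock falls with total applied effort
            return price * q[i] * Ei * X - cost * Ei

        E = np.full(n, 100.0)
        for _ in range(50):                            # best-response dynamics (non-cooperative)
            for i in range(n):
                E[i] = grid[np.argmax([profit_i(i, e, E) for e in grid])]

        def fleet_profit(e):                           # cooperative, homogeneous effort
            X = X0 * np.exp(-np.sum(q) * e)
            return float(np.sum(price * q * e * X - cost * e))

        e_coop = grid[np.argmax([fleet_profit(e) for e in grid])]
        print("Nash efforts (days):", E.astype(int))
        print("cooperative homogeneous effort (days):", e_coop)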

  14. VRS Model: A Model for Estimation of Efforts and Time Duration in Development of IVR Software System

    Directory of Open Access Journals (Sweden)

    Devesh Kumar Srivastava

    2012-01-01

    Accurate software effort estimates are critical for developers, team leaders, and project managers. Underestimating costs may result in management approving proposed systems that then exceed their budgets, with underdeveloped functions, poor quality, and failure to complete on time. Various models have been derived from large numbers of completed software projects across organizations and applications to explore how project size maps into project effort, but the prediction accuracy of these models still needs improvement. New techniques and models for estimating software size, effort, and cost appear rapidly, yet they still lack the accuracy required by company norms and standards. A BPO company takes over a business process of another company; software through which a company handles incoming customer calls, queries, solutions, and services is known as IVR software. In this paper the author proposes a model named "VRS Model" to estimate the effort and schedule of IVR software applications accurately. This model will be helpful for project managers, developers, and customers in estimating accurate effort and schedule for IVR projects specifically.

  15. Critical behavior of the random-bond Ashkin-Teller model: A Monte Carlo study

    Science.gov (United States)

    Wiseman, Shai; Domany, Eytan

    1995-04-01

    The critical behavior of a bond-disordered Ashkin-Teller model on a square lattice is investigated by intensive Monte Carlo simulations. A duality transformation is used to locate a critical plane of the disordered model. This critical plane corresponds to the line of critical points of the pure model, along which critical exponents vary continuously. Along this line the scaling exponent corresponding to randomness φ=(α/ν) varies continuously and is positive so that the randomness is relevant, and different critical behavior is expected for the disordered model. We use a cluster algorithm for the Monte Carlo simulations based on the Wolff embedding idea, and perform a finite size scaling study of several critical models, extrapolating between the critical bond-disordered Ising and bond-disordered four-state Potts models. The critical behavior of the disordered model is compared with the critical behavior of an anisotropic Ashkin-Teller model, which is used as a reference pure model. We find no essential change in the order parameters' critical exponents with respect to those of the pure model. The divergence of the specific heat C is changed dramatically. Our results favor a logarithmic type divergence at Tc, C~lnL for the random-bond Ashkin-Teller and four-state Potts models and C~ln lnL for the random-bond Ising model.

  16. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a hierarchy of universes, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and here it has been created with the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  17. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    Science.gov (United States)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1-sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2-sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.

  18. Monte Carlo modeling of spatially complex wrist tissue for the optimization of optical pulse oximeters

    Science.gov (United States)

    Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.

    2017-02-01

    Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon established simulation methods through more accurate representations of the simulated tissues in the wrist as well as the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file that was read into the program. The light source was altered to give consideration to both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.

  19. A technique for estimating maximum harvesting effort in a stochastic fishery model

    Indian Academy of Sciences (India)

    Ram Rup Sarkar; J Chattopadhayay

    2003-06-01

    Exploitation of biological resources and the harvest of population species are commonly practiced in fisheries, forestry and wild life management. Estimation of maximum harvesting effort has a great impact on the economics of fisheries and other bio-resources. The present paper deals with the problem of a bioeconomic fishery model under environmental variability. A technique for finding the maximum harvesting effort in fluctuating environment has been developed in a two-species competitive system, which shows that under realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find out the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real life fishery problem has been considered to obtain the inaccessible parameters of the system in a systematic way. Such studies may help resource managers to get an idea for controlling the system.

  20. Automata networks model for alignment and least effort on vocabulary formation

    CERN Document Server

    Vera, Javier; Goles, Eric

    2015-01-01

    Can artificial communities of agents develop language with scaling relations close to the Zipf law? As a preliminary answer to this question, we propose an Automata Networks model of the formation of a vocabulary in a population of individuals, under two in-principle opposite strategies: alignment and the least effort principle. Within previous accounts of the emergence of linguistic conventions (especially the Naming Game), we focus on modeling speaker and hearer efforts as actions over their vocabularies and we study the impact of these actions on the formation of a shared language. The numerical simulations are essentially based on an energy function that measures the amount of local agreement between the vocabularies. The results suggest that on one-dimensional lattices the best strategy for the formation of shared languages is the one that minimizes the efforts of speakers on communicative tasks.

  1. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales in the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
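
    The Binder-cumulant procedure mentioned above is standard and easy to sketch: compute U_L = 1 - <m^4>/(3<m^2>^2) for several lattice sizes and temperatures; the curves for different L cross close to T_c. The sketch below does this for the plain 2D Ising model with deliberately small lattices and short Metropolis runs.

        import numpy as np

        rng = np.random.default_rng(1)

        def binder(L, T, sweeps=2500, therm=500):
            s = rng.choice([-1, 1], (L, L))
            beta = 1.0 / T
            m2, m4, n = 0.0, 0.0, 0
            for sweep in range(sweeps):
                for _ in range(L * L):                 # one Metropolis sweep
                    i, j = rng.integers(L), rng.integers(L)
                    nb = (s[(i+1) % L, j] + s[(i-1) % L, j]
                          + s[i, (j+1) % L] + s[i, (j-1) % L])
                    dE = 2 * s[i, j] * nb
                    if dE <= 0 or rng.random() < np.exp(-beta * dE):
                        s[i, j] *= -1
                if sweep >= therm:                     # accumulate after thermalization
                    m = s.mean()
                    m2 += m * m; m4 += m ** 4; n += 1
            m2, m4 = m2 / n, m4 / n
            return 1.0 - m4 / (3.0 * m2 * m2)          # Binder cumulant U_L

        for T in (2.1, 2.269, 2.4):                    # curves for different L cross near T_c
            print(T, [round(binder(L, T), 3) for L in (8, 12)])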

  2. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example.

  3. Monte Carlo Tests of Nucleation Concepts in the Lattice Gas Model

    OpenAIRE

    Schmitz, Fabian; Virnau, Peter; Binder, Kurt

    2013-01-01

    The conventional theory of homogeneous and heterogeneous nucleation in a supersaturated vapor is tested by Monte Carlo simulations of the lattice gas (Ising) model with nearest-neighbor attractive interactions on the simple cubic lattice. The theory considers the nucleation process as a slow (quasi-static) cluster (droplet) growth over a free energy barrier $\\Delta F^*$, constructed in terms of a balance of surface and bulk term of a "critical droplet" of radius $R^*$, implying that the rates...

  4. Critical Exponents of the Classical 3D Heisenberg Model A Single-Cluster Monte Carlo Study

    CERN Document Server

    Holm, C; Holm, Christian; Janke, Wolfhard

    1993-01-01

    We have simulated the three-dimensional Heisenberg model on simple cubic lattices, using the single-cluster Monte Carlo update algorithm. The expected pronounced reduction of critical slowing down at the phase transition is verified. This allows simulations on significantly larger lattices than in previous studies and consequently a better control over systematic errors. In one set of simulations we employ the usual finite-size scaling methods to compute the critical exponents.

  5. Monte Carlo Study of the XY-Model on Sierpiński Carpet

    Science.gov (United States)

    Mitrović, Božidar; Przedborski, Michelle A.

    2014-09-01

    We have performed a Monte Carlo (MC) study of the classical XY-model on a Sierpiński carpet, which is a planar fractal structure with infinite order of ramification and fractal dimension 1.8928. We employed the Wolff cluster algorithm in our simulations and our results, in particular those for the susceptibility and the helicity modulus, indicate the absence of finite-temperature Berezinskii-Kosterlitz-Thouless (BKT) transition in this system.

  6. Quantum Monte Carlo simulation of a two-dimensional Majorana lattice model

    Science.gov (United States)

    Hayata, Tomoya; Yamamoto, Arata

    2017-07-01

    We study interacting Majorana fermions in two dimensions as a low-energy effective model of a vortex lattice in two-dimensional time-reversal-invariant topological superconductors. For that purpose, we implement ab initio quantum Monte Carlo simulation to the Majorana fermion system in which the path-integral measure is given by a semipositive Pfaffian. We discuss spontaneous breaking of time-reversal symmetry at finite temperatures.

  7. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    The method to calculate neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in its probabilistic treatment of a geometry with a great number of randomly distributed spherical fuels. With future speed-ups from vector or parallel computation, it is expected to be widely used in nuclear reactor core calculations, especially for HTGR cores. (author).

  8. Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES

    Science.gov (United States)

    Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean

    2009-06-01

    Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, for the reason that combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of Monte Carlo method (Forward Method and Emission Reciprocity Method) employed to resolve RTE have been compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (Reciprocity Monte Carlo method and Discrete Ordinate Method) applied on a three-dimensional flame holder set-up with a correlated-k distribution model describing the real gas medium spectral radiative properties are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).

  10. Commonalities in WEPP and WEPS and efforts towards a single erosion process model

    NARCIS (Netherlands)

    Visser, S.M.; Flanagan, D.C.

    2004-01-01

    Since the late 1980's, the Agricultural Research Service (ARS) of the United States Department of Agriculture (USDA) has been developing process-based erosion models to predict water erosion and wind erosion. During much of that time, the development efforts of the Water Erosion Prediction Project

  11. GPU Accelerated Monte Carlo Algorithm of Ising Model on Triangular Lattice

    Institute of Scientific and Technical Information of China (English)

    陆星; 蔡静; 张伟

    2012-01-01

    In statistical models, the efficiency of most Monte Carlo algorithms drops quickly near the critical point. Building on an analysis of traditional local algorithms, a GPU-based parallel simulation algorithm for the triangular-lattice Ising model is proposed, which greatly improves the efficiency of the Monte Carlo simulation. For a model of size 1024 × 1024, a speedup of 69 is achieved. In addition, the critical behavior is analyzed, and a high-precision critical point (βc = 0.27466(1)) and critical exponents (yt = 1.01(2), yh = 1.8756(3)) of the triangular-lattice Ising model are obtained, demonstrating the effectiveness of the GPU algorithm.
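
    The essence of the GPU scheme is a parallel sublattice update: spins whose neighbors all lie in the other sublattice can be updated simultaneously. The numpy sketch below vectorizes a checkerboard Metropolis sweep for a square lattice as a stand-in for the GPU kernel; the triangular lattice of the paper would need a three-color decomposition instead of two, and the beta value here is arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        L, beta = 1024, 0.4
        s = rng.choice([-1, 1], (L, L)).astype(np.int8)
        ii, jj = np.indices((L, L))
        masks = [(ii + jj) % 2 == c for c in (0, 1)]   # two-color (checkerboard) decomposition

        def sweep(s):
            for mask in masks:                          # update one sublattice at a time
                nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                      + np.roll(s, 1, 1) + np.roll(s, -1, 1))
                dE = 2 * s * nb                         # energy change for flipping each spin
                acc = (dE <= 0) | (rng.random((L, L)) < np.exp(-beta * dE))
                s[mask & acc] *= -1                     # flip all accepted spins in parallel
            return s

        for _ in range(10):
            s = sweep(s)
        print("magnetisation per spin:", s.mean())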

  12. Monte Carlo tools for Beyond the Standard Model Physics, April 14-16

    DEFF Research Database (Denmark)

    Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing;

    2011-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools already exist, the goals are: to identify promising models (or processes) for which the tools have not yet been constructed and start filling these gaps; and to propose ways to streamline the process of going from models to events, i.e., to make the process more user-friendly so that more people can get involved and perform serious collider studies.

  13. Development of advanced geometric models and acceleration techniques for Monte Carlo simulation in Medical Physics

    OpenAIRE

    Badal Soler, Andreu

    2008-01-01

    General-purpose Monte Carlo simulation codes are currently used in a wide variety of applications. Even so, the geometric models implemented in most codes impose certain limitations on the shapes of the objects that can be defined. These models are not adequate for describing the arbitrary surfaces found in anatomical structures or in certain medical devices and, consequently, some applications that require the use of highly detailed geometric models...

  14. A Monte Carlo simulation for kinetic chemotaxis models: an application to the traveling population wave

    CERN Document Server

    Yasuda, Shugo

    2015-01-01

    A Monte Carlo simulation for chemotactic bacteria is developed on the basis of kinetic modeling, i.e., the Boltzmann transport equation, and applied to the one-dimensional traveling population wave in a microchannel. In this method, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to solve the macroscopic transport of the chemical cues in the field. The simulation method can successfully reproduce the traveling population wave of bacteria which was observed experimentally. The microscopic dynamics of bacteria, e.g., the velocity autocorrelation function and velocity distribution function of bacteria, are also investigated. It is found that the bacteria which form the traveling population wave create quasi-periodic motions as well as a migratory movement along with the traveling population wave. Simulations are also performed by changing the sensitivity and modulation parameters in the response function of bacteria. It is found th...

  15. A Study on System Availability Vs System Administration Efforts with Mathematical Models

    Institute of Scientific and Technical Information of China (English)

    郑建德

    2003-01-01

    Two mathematical models are developed in this paper to study the effectiveness of system administration efforts on the improvement of system availability. They are based on the assumption that a computer system in operation passes through a transitional state before it is brought down by some hardware or software problem, and that with intensified system administration effort it is possible to discover and fix the problem in time to bring the system back to the normal state before it goes down. A Markov chain is used to simulate the transitions between system states. The conclusion is that increasing system administration effort may be a cost-effective way to achieve moderate improvements in system availability, but more demanding availability requirements still have to be met by advanced technologies.
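
    The Markov-chain construction can be sketched directly: with states normal, transitional, and down, the stationary distribution of the transition matrix gives the steady-state availability as a function of the per-step probability that administration effort catches a latent problem. All transition probabilities below are illustrative assumptions, not the paper's values.

        import numpy as np

        # States: 0 = normal, 1 = transitional (problem latent), 2 = down.
        # p_fix is the per-step probability that system administration effort
        # catches and fixes a latent problem before it brings the system down.
        def availability(p_fix, p_fault=0.01, p_crash=0.2, p_repair=0.1):
            P = np.array([
                [1 - p_fault, p_fault,               0.0],
                [p_fix,       1 - p_fix - p_crash,   p_crash],
                [p_repair,    0.0,                   1 - p_repair],
            ])
            # stationary distribution: left eigenvector of P for eigenvalue 1
            w, v = np.linalg.eig(P.T)
            pi = np.real(v[:, np.argmin(np.abs(w - 1))])
            pi /= pi.sum()
            return pi[0] + pi[1]          # the system is up in states 0 and 1

        for p_fix in (0.0, 0.2, 0.5):
            print(f"admin effort p_fix={p_fix}: availability={availability(p_fix):.4f}")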

  16. Modeling Replenishment of Ultrathin Liquid Perfluoropolyether Z Films on Solid Surfaces Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    M. S. Mayeed

    2014-01-01

    Applying the reptation algorithm to a simplified perfluoropolyether Z off-lattice polymer model, an NVT Monte Carlo simulation has been performed. Bulk conditions were simulated first to compare the average radius of gyration with bulk experimental results. The model was then tested for its ability to describe dynamics. After this, it was applied to observe the replenishment of nanoscale ultrathin liquid films on solid flat carbon surfaces. The replenishment rate for trenches of different widths (8, 12, and 16 nm) and several molecular weights between two films of perfluoropolyether Z from the Monte Carlo simulation is compared to that obtained by solving the diffusion equation using the experimental diffusion coefficients of Ma et al. (1999), at room conditions in both cases. Replenishment per Monte Carlo cycle appears to be a constant multiple of replenishment per second, at least up to a 2 nm replenished film thickness of the trenches over the carbon surface. Reasonably good agreement has been achieved between the experimental results and the dynamics of molecules using reptation moves in ultrathin liquid films on solid surfaces.

  17. Modeling weight variability in a pan coating process using Monte Carlo simulations.

    Science.gov (United States)

    Pandey, Preetanshu; Katakdaunde, Manoj; Turton, Richard

    2006-10-06

    The primary objective of the current study was to investigate process variables affecting weight-gain mass coating variability (CV(m)) in pan coating devices using novel video-imaging techniques and Monte Carlo simulations. Experimental information such as the tablet location, circulation time distribution, velocity distribution, projected surface area, and spray dynamics was the main input to the simulations. The data on the dynamics of tablet movement were obtained using novel video-imaging methods. The effects of pan speed, pan loading, tablet size, coating time, spray flux distribution, and spray area and shape were investigated. CV(m) was found to be inversely proportional to the square root of coating time. The spray shape was not found to affect the CV(m) of the process significantly, but an increase in the spray area led to lower CV(m) values. Coating experiments were conducted to verify the predictions from the Monte Carlo simulations, and the trends predicted from the model were in good agreement. It was observed that the Monte Carlo simulations underpredicted CV(m) in comparison to the experiments. The model developed can provide a basis for adjustments in process parameters required during scale-up operations and can be useful in predicting the process changes that are needed to achieve the same CV(m) when a variable is altered.
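
    The square-root law can be reproduced with a very small Monte Carlo sketch: in each pan cycle every tablet passes the spray zone with some probability and receives a variable coating dose, and CV(m) is tracked over time. The pass probability and dose distribution below are invented; the actual study drives these inputs with measured circulation-time and spray data.

        import numpy as np

        rng = np.random.default_rng(2)
        n_tablets, cycles = 2000, 6000
        p_spray = 0.02                         # chance a tablet passes the spray zone per cycle
        mass = np.zeros(n_tablets)

        cv = []
        for t in range(1, cycles + 1):
            hit = rng.random(n_tablets) < p_spray
            mass[hit] += rng.lognormal(0.0, 0.3, hit.sum())   # variable dose per pass
            if t % 1000 == 0:
                cv.append(mass.std() / mass.mean())           # coating-mass CV(m)

        print("CV(m) at 1000-cycle intervals:", np.round(cv, 4))
        # roughly halves when the coating time quadruples, i.e. CV(m) ~ 1/sqrt(t)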

  18. Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    CERN Document Server

    Navarro, C A; Deng, Youjin

    2015-01-01

    The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present in the low-temperature regime, which makes it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the Spin Glass or Random Field models by exchanging information between replicas at neighboring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
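
    A single-variable sketch of Exchange Monte Carlo conveys the mechanism: replicas at a ladder of temperatures perform local Metropolis moves, and neighboring pairs occasionally swap configurations under the standard exchange acceptance rule. A rugged double-well potential stands in here for the 3D Random Field Model, and the fixed temperature ladder below could in principle be adapted from the trial-run exchange rates, as the abstract describes.

        import numpy as np

        rng = np.random.default_rng(4)

        def U(x):                              # rugged double-well stand-in for the landscape
            return (x ** 2 - 1.0) ** 2 + 0.3 * np.sin(8 * x)

        temps = np.geomspace(0.05, 2.0, 8)     # temperature ladder
        x = rng.uniform(-1, 1, temps.size)
        acc_ex = np.zeros(temps.size - 1)

        for step in range(20000):
            # local Metropolis move in every replica
            prop = x + 0.3 * rng.standard_normal(temps.size)
            dU = U(prop) - U(x)
            move = np.log(rng.random(temps.size)) < -dU / temps
            x[move] = prop[move]
            # attempt an exchange between a random pair of neighbouring temperatures
            i = rng.integers(temps.size - 1)
            d = (1 / temps[i] - 1 / temps[i + 1]) * (U(x[i + 1]) - U(x[i]))
            if np.log(rng.random()) < -d:
                x[i], x[i + 1] = x[i + 1], x[i]
                acc_ex[i] += 1

        print("exchange acceptance per pair:",
              np.round(acc_ex / (20000 / (temps.size - 1)), 2))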

  19. The Effect of the Demand Control and Effort Reward Imbalance Models on the Academic Burnout of Korean Adolescents

    Science.gov (United States)

    Lee, Jayoung; Puig, Ana; Lee, Sang Min

    2012-01-01

    The purpose of this study was to examine the effects of the Demand Control Model (DCM) and the Effort Reward Imbalance Model (ERIM) on academic burnout for Korean students. Specifically, this study identified the effects of the predictor variables based on DCM and ERIM (i.e., demand, control, effort, reward, Demand Control Ratio, Effort Reward…

  20. Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor

    Science.gov (United States)

    Reed, Sasha C.; Yang, Xiaojuan; Thornton, Peter E.

    2015-01-01

    Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate–carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate–carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward.

  1. Bayesian phylogenetic model selection using reversible jump Markov chain Monte Carlo.

    Science.gov (United States)

    Huelsenbeck, John P; Larget, Bret; Alfaro, Michael E

    2004-06-01

    A common problem in molecular phylogenetics is choosing a model of DNA substitution that does a good job of explaining the DNA sequence alignment without introducing superfluous parameters. A number of methods have been used to choose among a small set of candidate substitution models, such as the likelihood ratio test, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Bayes factors. Current implementations of any of these criteria suffer from the limitation that only a small set of models are examined, or that the test does not allow easy comparison of non-nested models. In this article, we expand the pool of candidate substitution models to include all possible time-reversible models. This set includes seven models that have already been described. We show how Bayes factors can be calculated for these models using reversible jump Markov chain Monte Carlo, and apply the method to 16 DNA sequence alignments. For each data set, we compare the model with the best Bayes factor to the best models chosen using AIC and BIC. We find that the best model under any of these criteria is not necessarily the most complicated one; models with an intermediate number of substitution types typically do best. Moreover, almost all of the models that are chosen as best do not constrain a transition rate to be the same as a transversion rate, suggesting that it is the transition/transversion rate bias that plays the largest role in determining which models are selected. Importantly, the reversible jump Markov chain Monte Carlo algorithm described here allows estimation of phylogeny (and other phylogenetic model parameters) to be performed while accounting for uncertainty in the model of DNA substitution.

  2. Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models

    Science.gov (United States)

    Mitchell, S. J.; Landau, D. P.

    2006-03-01

    Using high resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).

  3. Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model

    Science.gov (United States)

    Heller, Urs M.

    1988-01-01

    An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings beta greater than about 2.2 agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it are obtained at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.

  4. Combined constraints on modified Chaplygin gas model from cosmological observed data: Markov Chain Monte Carlo approach

    OpenAIRE

    Lu, Jianbo; Xu, Lixin; Wu, Yabo; Liu, Molin

    2011-01-01

    We use the Markov Chain Monte Carlo method to investigate global constraints on the modified Chaplygin gas (MCG) model as a unification of dark matter and dark energy from the latest observational data: the Union2 dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a flat universe, the constraint results for the MCG model are $\Omega_{b}h^{2}=0...$

  5. Modeling and Monte Carlo simulation of nucleation and growth of UV/low-temperature-induced nanostructures

    Science.gov (United States)

    Flicstein, Jean; Pata, S.; Chun, L. S. H. K.; Palmier, Jean F.; Courant, J. L.

    1998-05-01

    A model for ultraviolet-induced chemical vapor deposition (UV CVD) of a-SiN:H is described. In the simulation of the UV CVD process, the creation of activated charged centers, species incorporation, surface diffusion, and desorption are considered as elementary steps of the photonucleation and photodeposition mechanisms. The process is characterized by two surface sticking coefficients. Surface diffusion of species is modeled with a Gaussian distribution. A real-time Monte Carlo method is used to determine photonucleation and photodeposition rates in nanostructures. Comparison of experimental versus simulation results for a-SiN:H is shown to predict the temporal evolution of the morphology under operating conditions down to atomistic resolution.

  6. Nuclear Level Density of ${}^{161}$Dy in the Shell Model Monte Carlo Method

    CERN Document Server

    Özen, Cem; Nakada, Hitoshi

    2012-01-01

    We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus ${}^{161}$Dy. The projection on an odd number of particles leads to a sign problem at low temperatures making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of ${}^{161}$Dy and find it in very good agreement with the level density extracted from experimental data.

  8. Molecular mobility with respect to accessible volume in Monte Carlo lattice model for polymers

    Science.gov (United States)

    Diani, J.; Gilormini, P.

    2017-02-01

    A three-dimensional cubic Monte Carlo lattice model is considered to test the impact of volume on the molecular mobility of amorphous polymers. Assuming classic polymer chain dynamics, the concept of locked volume limiting the accessible volume around the polymer chains is introduced. The polymer mobility is assessed by its ability to explore the entire lattice thanks to reptation motions. When recording the polymer mobility with respect to the lattice accessible volume, a sharp mobility transition is observed, as witnessed during the glass transition. The model's ability to reproduce known trends in the glass transition with respect to material parameters is also tested.

  9. Open-source direct simulation Monte Carlo chemistry modeling for hypersonic flows

    OpenAIRE

    Scanlon, Thomas J.; White, Craig; Borg, Matthew K.; Palharini, Rodrigo C.; Farbar, Erin; Boyd, Iain D.; Reese, Jason M.; Brown, Richard E

    2015-01-01

    An open source implementation of chemistry modelling for the direct simulation Monte Carlo (DSMC) method is presented. Following the recent work of Bird [1], an approach known as the quantum kinetic (Q-K) method has been adopted to describe chemical reactions in a 5-species air model using DSMC procedures based on microscopic gas information. The Q-K technique has been implemented within the framework of the dsmcFoam code, a derivative of the open source CFD code OpenFOAM. Results for vibration...

  10. Development of a Monte Carlo model for the Brainlab microMLC.

    Science.gov (United States)

    Belec, Jason; Patrocinio, Horacio; Verhaegen, Frank

    2005-03-07

    Stereotactic radiosurgery with several static conformal beams shaped by a micro multileaf collimator (microMLC) is used to treat small irregularly shaped brain lesions. Our goal is to perform Monte Carlo calculations of dose distributions for certain treatment plans as a verification tool. A dedicated microMLC component module for the BEAMnrc code was developed as part of this project and was incorporated in a model of the Varian CL2300 linear accelerator 6 MV photon beam. As an initial validation of the code, the leaf geometry was visualized by tracing particles through the component module and recording their position each time a leaf boundary was crossed. The leaf dimensions were measured and the leaf material density and interleaf air gap were chosen to match the simulated leaf leakage profiles with film measurements in a solid water phantom. A comparison between Monte Carlo calculations and measurements (diode, radiographic film) was performed for square and irregularly shaped fields incident on flat and homogeneous water phantoms. Results show that Monte Carlo calculations agree with measured dose distributions to within 2% and/or 1 mm except for field size smaller than 1.2 cm diameter where agreement is within 5% due to uncertainties in measured output factors.

  11. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  12. Model Calibration Efforts for the International Space Station's Solar Array Mast

    Science.gov (United States)

    Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.

    2012-01-01

    The International Space Station (ISS) relies on sixteen photovoltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that, if unchecked, can exceed the operational stability limits of the mast. This paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of finite element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate adequacy of the models. Models in the context of this paper refer to both FE models and probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of the straight longerons are used to validate experimental results for the tapered longerons.

  13. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-04-01

    Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that considers the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is used for sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via the message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots, and particle diversity. Improved model efficiency and preserved particle diversity are found for the lagged regularized particle filter.

  14. Experimental Validation of Monte Carlo Simulations Based on a Virtual Source Model for TomoTherapy in a RANDO Phantom.

    Science.gov (United States)

    Yuan, Jiankui; Zheng, Yiran; Wessels, Barry; Lo, Simon S; Ellis, Rodney; Machtay, Mitchell; Yao, Min

    2016-12-01

    A virtual source model for Monte Carlo simulations of helical TomoTherapy was developed previously by the authors. The purpose of this work is to perform experiments in an anthropomorphic (RANDO) phantom with the same order of complexity as clinical treatments, to validate the virtual source model for use as a secondary quality-assurance check of TomoTherapy patient planning doses. Helical TomoTherapy involves a complex delivery pattern with irregular beam apertures and couch movement during irradiation. Monte Carlo simulation, as the most accurate dose algorithm, is desirable in radiation dosimetry. Current Monte Carlo simulations for helical TomoTherapy adopt the full Monte Carlo model, which includes detailed modeling of individual machine components, and thus large phase space files are required at different scoring planes. As an alternative, we previously developed a virtual source model that avoids the large phase space files for patient dose calculations. In this work, we apply the simulation system to recompute patient doses, generated by the treatment planning system, in an anthropomorphic phantom to mimic real patient treatments. We performed thermoluminescence dosimeter (TLD) point dose and film measurements for comparison with the Monte Carlo results. TLD measurements show that the relative difference between Monte Carlo and the treatment planning system is within 3%, with the largest difference less than 5%, for both test plans. The film measurements demonstrated 85.7% and 98.4% passing rates using the 3 mm/3% acceptance criterion for the head-and-neck and lung cases, respectively; over 95% passing rates are achieved if a 4 mm/4% criterion is applied. For the dose-volume histograms, very good agreement is obtained between the Monte Carlo and treatment planning system methods for both cases. The experimental results demonstrate that the virtual source model Monte Carlo system can be a viable option for the secondary dose check of TomoTherapy patient plans.

  15. Microscopic nuclear level densities by the shell model Monte Carlo method

    CERN Document Server

    Alhassid, Y; Gilbreth, C N; Nakada, H; Özen, C

    2016-01-01

    The configuration-interaction shell model approach provides an attractive framework for the calculation of nuclear level densities in the presence of correlations, but the large dimensionality of the model space has hindered its application in mid-mass and heavy nuclei. The shell model Monte Carlo (SMMC) method permits calculations in model spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods. We discuss recent progress in the SMMC approach to level densities, and in particular the calculation of level densities in heavy nuclei. We calculate the distribution of the axial quadrupole operator in the laboratory frame at finite temperature and demonstrate that it is a model-independent signature of deformation in the rotationally invariant framework of the shell model. We propose a method to use these distributions for calculating level densities as a function of intrinsic deformation.

  16. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP, and was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  17. New Generation of the Monte Carlo Shell Model for the K Computer Era

    CERN Document Server

    Shimizu, Noritaka; Tsunoda, Yusuke; Utsuno, Yutaka; Yoshida, Tooru; Mizusaki, Takahiro; Honma, Michio; Otsuka, Takaharu

    2012-01-01

    We present a newly enhanced version of the Monte Carlo Shell Model method, incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new generation of the MCSM framework provides a powerful tool to perform the most advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core, and discuss how the magicity of N = 28, 40, 50 remains or is broken.

  18. Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance

    CERN Document Server

    Hochuli, Roman; Arridge, Simon; Cox, Ben

    2016-01-01

    Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficient distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.

  19. Modeling of radiation-induced bystander effect using Monte Carlo methods

    Science.gov (United States)

    Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

    2009-03-01

    Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, and even whole organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. The model, based on our previous experiment in which cells were sparsely distributed in a round dish, focuses mainly on spatial characteristics. The simulation results agree with the experimental data. Moreover, another bystander-effect experiment is also simulated with this model, and the model successfully predicts its results. The comparison of simulations with experimental results indicates the feasibility of the model and the validity of some of the key mechanisms assumed.

  20. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
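
    As a compact illustration of the approach described above, the Python sketch below estimates power for a simple one-mediator model by Monte Carlo: simulate data, bootstrap the indirect effect a·b, and count how often the percentile interval excludes zero. All sample sizes, effect sizes, and replication counts are illustrative assumptions; the paper's bmem package (in R) is not used here.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ab(x, m, y):
    """OLS estimates of a (x -> m) and b (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a, b

def power_simple_mediation(n, a, b, c, n_rep=200, n_boot=500, alpha=0.05):
    """Fraction of replications whose bootstrap CI for a*b excludes zero.
    (Counts kept small for a quick demo run.)"""
    hits = 0
    for _ in range(n_rep):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)       # skewed errors could be used instead
        y = b * m + c * x + rng.normal(size=n)
        boots = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)      # resample cases with replacement
            ah, bh = fit_ab(x[idx], m[idx], y[idx])
            boots[i] = ah * bh
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)
    return hits / n_rep

print("estimated power:", power_simple_mediation(n=100, a=0.3, b=0.3, c=0.1))
```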

  1. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz-Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator compared quite favorably with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs, and the overall economics of open pit mines and rock quarries.
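
    To make the idea concrete, the hedged sketch below propagates uncertainty in the rock factor and charge mass through the standard Kuznetsov and Rosin-Rammler equations underlying the Kuz-Ram model; all input distributions and parameter values are invented for illustration and are not from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def kuznetsov_x50(A, V, Q, E):
    """Kuznetsov mean fragment size (cm): rock factor A, blasted volume per
    hole V (m^3), explosive mass per hole Q (kg), relative weight strength E."""
    return A * (V / Q) ** 0.8 * Q ** (1.0 / 6.0) * (115.0 / E) ** (19.0 / 30.0)

n_sim = 10_000
A = rng.normal(7.0, 1.0, n_sim)        # rock factor, assumed uncertain
Q = rng.normal(120.0, 10.0, n_sim)     # charge per hole (kg), assumed uncertain
V = 4.0 * 5.0 * 10.0                   # burden x spacing x bench height (m^3)
E = 100.0                              # ANFO-relative weight strength
n = 1.8                                # Rosin-Rammler uniformity index (assumed)

x50 = kuznetsov_x50(A, V, Q, E)
# Rosin-Rammler fraction passing a screen of size x (cm):
x = 50.0
passing = 1.0 - np.exp(-0.693 * (x / x50) ** n)
print(f"P(<{x} cm): mean={passing.mean():.2f}, 5-95%: "
      f"{np.percentile(passing, [5, 95]).round(2)}")
```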

  2. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that considers the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is used for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. The control state variables for filtering are soil moisture content and overland flow, and streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. The improvement of LRPF forecasts over SIR is particularly evident for rapidly varying high flows, owing to the preservation of sample diversity by the kernel even when particle impoverishment takes place.
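
    For readers unfamiliar with SIR filtering, the following minimal Python sketch shows the propagate-weight-resample cycle on a toy linear-Gaussian model; the lagging and MCMC regularization steps that distinguish LRPF are omitted, and all model settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(obs, n_part=1000, q=0.5, r=1.0):
    """Bootstrap SIR particle filter for a toy 1D state-space model:
    x_t = 0.9 x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    A hydrologic application would replace the transition with a
    rainfall-runoff model step; this is only a structural sketch."""
    parts = rng.normal(0.0, 1.0, n_part)
    means = []
    for y in obs:
        parts = 0.9 * parts + rng.normal(0.0, q, n_part)   # propagate
        logw = -0.5 * ((y - parts) / r) ** 2               # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * parts))                    # filtered mean
        parts = parts[rng.choice(n_part, n_part, p=w)]     # resample (SIR)
    return np.array(means)

# Synthetic test: track a hidden random-walk signal from noisy observations.
truth = np.cumsum(rng.normal(0, 0.3, 100))
est = sir_filter(truth + rng.normal(0, 1.0, 100))
print("RMSE:", np.sqrt(np.mean((est - truth) ** 2)))
```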

  3. A Monte Carlo model for 3D grain evolution during welding

    Science.gov (United States)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification, and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior; rather, it utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes, from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest-point projection algorithm. The model also allows simulation of pulsed-power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
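
    The core of a Potts grain model is a Metropolis orientation-flip driven by the count of unlike neighbours. The sketch below shows only that kernel (in Python rather than SPPARKS), with no weld pool, temperature gradient, or pulsing; the lattice size, number of orientations, and kT are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

N, Q = 64, 50                 # lattice size and number of grain orientations
spins = rng.integers(0, Q, (N, N))

def site_energy(s, i, j):
    """Number of unlike nearest neighbours (periodic boundaries)."""
    nbrs = [s[(i + 1) % N, j], s[(i - 1) % N, j],
            s[i, (j + 1) % N], s[i, (j - 1) % N]]
    return sum(n != s[i, j] for n in nbrs)

def potts_step(s, kT=0.1):
    """Metropolis trial: propose a new orientation at a random site."""
    i, j = rng.integers(0, N, 2)
    old = s[i, j]
    e_old = site_energy(s, i, j)
    s[i, j] = rng.integers(0, Q)
    d_e = site_energy(s, i, j) - e_old
    if d_e > 0 and rng.random() >= np.exp(-d_e / kT):
        s[i, j] = old                      # reject: restore old orientation

for _ in range(100_000):                   # short demo run; grains coarsen
    potts_step(spins)
print("distinct orientations remaining:", np.unique(spins).size)
```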

  4. Phenomenology of Large Extra Dimensions Models at Hadrons Colliders using Monte Carlo Techniques (Spin-2 Graviton)

    CERN Document Server

    Bakhet, Nady; Hussein, Tarek

    2015-01-01

    Large extra dimensions models have been proposed to resolve the hierarchy problem and to explain why gravity is so much weaker than the other three forces. In this work, we present an analysis of Monte Carlo data events for new-physics signatures of the spin-2 graviton in the context of the ADD model with total dimension $D = 4+\delta$, $\delta = 1,\ldots,6$, where $\delta$ is the number of extra spatial dimensions. This model involves missing momentum $P_{T}^{miss}$ in association with a jet in the final state via the process $pp(\bar{p}) \rightarrow G + jet$. We also present an analysis in the context of the five-dimensional RS model via the process $pp(\bar{p}) \rightarrow G + jet$, $G \rightarrow e^{+}e^{-}$, with final state $e^{+}e^{-} + jet$. We used the Monte Carlo event generator Pythia8 to produce efficient signal selection rules at the Large Hadron Collider with $\sqrt{s} = 14$ TeV and at the Tevatron with $\sqrt{s} = 1.96$ TeV.

  5. Contribution of Monte-Carlo modeling for understanding long-term behavior of nuclear glasses

    Energy Technology Data Exchange (ETDEWEB)

    Minet, Y.; Ledieu, A.; Devreux, F.; Barboux, P.; Frugier, P.; Gin, S

    2004-07-01

    Monte-Carlo methods have been developed at CEA and Ecole Polytechnique to improve our understanding of the basic mechanisms that control glass dissolution kinetics. The models, based on dissolution and recondensation rates of the atoms, can reproduce the observed alteration rates and the evolutions of the alteration layers on simplified borosilicate glasses (based on SiO{sub 2}-B{sub 2}O{sub 3}-Na{sub 2}O) over a large range of compositions and alteration conditions. The basic models are presented, as well as their current evolutions to describe more complex glasses (introduction of Al, Zr, Ca oxides) and to take into account phenomena which may be predominant in the long run (such as diffusion in the alteration layer or secondary phase precipitation). The predictions are compared with the observations performed by techniques giving structural or textural information on the alteration layer (e.g. NMR, Small Angle X-ray Scattering). The paper concludes with proposals for further evolutions of Monte-Carlo models towards integration into a predictive modeling framework. (authors)

  6. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected in this research. The fission process in the toy model is represented by two intersecting Gaussian curves. Five Gaussian parameters are used: the scission point of the two curves (R_c), the means of the left and right curves (μ_L, μ_R), and their standard deviations (σ_L, σ_R). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields; it also changes the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yields with the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
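
    A minimal numerical analogue of the two-Gaussian picture is easy to write down: sample fragment masses from a mixture of two Gaussians and assign the complementary mass to the partner fragment. The parameter values below are placeholders, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)

A_COMP = 236                      # compound-nucleus mass (e.g. n + 235U), assumed
MU_L, MU_R = 96.0, 140.0          # means of the light/heavy humps (assumed)
SIG_L = SIG_R = 6.0               # widths of the two Gaussians (assumed)

def sample_yields(n):
    """Draw one fragment mass from the two-Gaussian mixture; the
    complementary fragment carries the remaining nucleons."""
    pick_left = rng.random(n) < 0.5
    mu = np.where(pick_left, MU_L, MU_R)
    sig = np.where(pick_left, SIG_L, SIG_R)
    frag1 = rng.normal(mu, sig)
    return frag1, A_COMP - frag1

frag1, frag2 = sample_yields(100_000)
light = np.minimum(frag1, frag2)
print("average light fission fragment mass:", light.mean().round(1))
```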

  7. Revised Use Case Point (Re-UCP) Model for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Mudasir Manzoor Kirmani

    2015-03-01

    At present, one of the most challenging issues the software development industry encounters is inefficient management of software development budget projections. This problem has left modern software companies dealing with improper requirements engineering, ambiguous resource elicitation, and uncertain cost and effort estimates. An effective countermeasure is to subject the whole development process to a proper and efficient estimation process, in which all resources are estimated well in advance in order to check whether the conceived project is feasible within the available resources. The basic building blocks of any object-oriented design are use case diagrams, which are prepared in the early design stages once the requirements are clearly understood, and which are considered useful for approximating estimates for a software development project. This research work gives a detailed overview of the Re-UCP (revised use case point) method of effort estimation for software projects. The Re-UCP method is a modified approach based on the UCP method of effort estimation. In this study, effort was estimated for 14 projects using the Re-UCP method, and the results were compared with the UCP and e-UCP models. The comparison of the 14 projects shows that Re-UCP significantly outperformed the existing UCP and e-UCP effort estimation techniques.
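
    For orientation, the arithmetic underlying use-case-point estimation (in its standard UCP form, not the revised Re-UCP weights, which the paper defines) looks like the following; every count and factor here is a made-up example.

```python
# Illustrative use-case-point arithmetic; all values below are invented.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USECASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

actors = {"simple": 2, "average": 3, "complex": 1}      # counted from the design
usecases = {"simple": 4, "average": 6, "complex": 2}

uaw = sum(ACTOR_WEIGHTS[k] * n for k, n in actors.items())      # unadjusted actor weight
uucw = sum(USECASE_WEIGHTS[k] * n for k, n in usecases.items()) # unadjusted use-case weight

tcf, ecf = 0.95, 0.90        # technical / environmental factors (assumed)
ucp = (uaw + uucw) * tcf * ecf
effort_hours = ucp * 20      # 20 person-hours per UCP is a common default
print(f"UCP = {ucp:.1f}, effort ~ {effort_hours:.0f} person-hours")
```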

  8. Evaluation of Arroyo Channel Restoration Efforts using Hydrological Modeling: Rancho San Bernardino, Sonora, MX

    Science.gov (United States)

    Jemison, N. E.; DeLong, S.; Henderson, W. M.; Adams, J.

    2012-12-01

    In the drylands of the southwestern U.S. and northwestern Mexico, historical river channel incision (arroyo cutting) has led to the destruction of riparian ecological systems and ciénega wetlands in many locations. Along Silver Creek on the Arizona-Sonora border, the Cuenca Los Ojos Foundation has been installing rock gabions and concrete and earthen berms with the goal of slowing flash floods, raising groundwater levels, and refilling arroyo channels with sediment in an area that changed from a broad, perennially wet ciénega to a narrow sand- and gravel-dominated arroyo channel with an average depth of ~6 m. The engineering effort aims to restore desert wetlands, regrow riparian vegetation, and promote sediment deposition along the arroyo floor. Hydrological modeling allows us to predict how rare flood events interact with the restoration efforts and may guide future approaches to dryland ecological restoration. This modeling is complemented by detailed topographic surveying and the use of streamflow sensors to monitor hydrological processes in the restoration project. We evaluate the inundation associated with modeled 10-, 50-, 100-, 500-, and 1,000-year floods through the study area using the FLO-2D and HEC-RAS modeling environments, in order to evaluate the possibility of returning surface inundation to the former ciénega surface. According to HEC-RAS model predictions, given the current channel configuration, it would require a 500-year flood to overtop the channel banks and reinundate the ciénega (now terrace) surface, though the 100-year flood may lead to limited terrace surface inundation. Based on our models, 10-year floods were ~2 m from overtopping the arroyo walls, 50-year floods came ~1.5 m from overtopping, 100-year floods were ~1.2 m from overtopping, and 500- and 1,000-year floods at least partially inundated the ciénega surface. The current topography of Silver Creek does not allow for frequent flooding of the former ciénega; model predictions

  9. Software Project Effort Estimation Based on Multiple Parametric Models Generated Through Data Clustering

    Institute of Scientific and Technical Information of China (English)

    Juan J. Cuadrado Gallego; Daniel Rodríguez; Miguel (A)ngel Sicilia; Miguel Garre Rubio; Angel García Crespo

    2007-01-01

    Parametric software effort estimation models usually consist of only a single mathematical relationship. With the advent of software repositories containing data from heterogeneous projects, these types of models suffer from poor adjustment and predictive accuracy. One possible way to alleviate this problem is to use a set of mathematical equations obtained by dividing the historical project datasets according to different parameters into subdatasets called partitions. In turn, partitions are divided into clusters that serve as a basis for more accurate models. In this paper, we describe the process, tool, and results of such an approach through a case study using a publicly available repository, ISBSG. Results suggest the adequacy of the technique as an extension of existing single-expression models, without making the estimation process much more complex than one that uses a single estimation model. A tool to support the process is also presented.

  10. Monte Carlo simulation of Prussian blue analogs described by Heisenberg ternary alloy model

    Science.gov (United States)

    Yüksel, Yusuf

    2015-11-01

    Within the framework of the Monte Carlo simulation technique, we simulate the magnetic behavior of Prussian blue analogs based on the Heisenberg ternary alloy model. We present phase diagrams in various parameter spaces, and we compare some of our results with those based on Ising counterparts. We clarify the variations of the transition temperature and the compensation phenomenon with the mixing ratio of magnetic ions, exchange interactions, and exchange anisotropy in the present ferro-ferrimagnetic Heisenberg system. According to our results, the thermal variation of the total magnetization curves may exhibit N-, L-, P-, Q-, and R-type behaviors in the Néel classification scheme.

  11. Rejection-free Monte Carlo algorithms for models with continuous degrees of freedom.

    Science.gov (United States)

    Muñoz, J D; Novotny, M A; Mitchell, S J

    2003-02-01

    We construct a rejection-free Monte Carlo algorithm for a system with continuous degrees of freedom. We illustrate the algorithm by applying it to the classical three-dimensional Heisenberg model with canonical Metropolis dynamics. We obtain the lifetime of the metastable state following a reversal of the external magnetic field. Our rejection-free algorithm obtains results in agreement with a direct implementation of the Metropolis dynamics and requires orders of magnitude less computational time at low temperatures. The treatment is general and can be extended to other dynamics and other systems with continuous degrees of freedom.

  12. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    In this study, a generalized likelihood uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining ... the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates ...

  13. Simulation of low Schottky barrier MOSFETs using an improved Multi-subband Monte Carlo model

    Science.gov (United States)

    Gudmundsson, Valur; Palestri, Pierpaolo; Hellström, Per-Erik; Selmi, Luca; Östling, Mikael

    2013-01-01

    We present a simple and efficient approach to implement Schottky barrier contacts in a multi-subband Monte Carlo simulator by using the subband smoothening technique to mimic tunneling at the Schottky junction. In the absence of scattering, simulation results for Schottky barrier MOSFETs are in agreement with ballistic non-equilibrium Green's function calculations. We then include the most relevant scattering mechanisms and apply the model to the study of double-gate Schottky barrier MOSFETs representative of the ITRS 2015 high-performance device. Results show that a Schottky barrier height of less than approximately 0.15 eV is required to outperform the doped source/drain structure.

  14. Chemical Potential of Benzene Fluid from Monte Carlo Simulation with Anisotropic United Atom Model

    Directory of Open Access Journals (Sweden)

    Mahfuzh Huda

    2013-07-01

    The chemical potential profile of benzene fluid has been investigated using the Anisotropic United Atom (AUA) model. A Monte Carlo simulation in the canonical ensemble was performed to obtain the isotherm of benzene fluid, from which the excess part of the chemical potential was calculated. A surge of potential energy is observed during the simulation at high temperature, which is related to the gas-liquid phase transition. The isotherm profile indicates the tendency of benzene to condense due to the strong attractive interaction. The results show that the chemical potential of benzene rapidly deviates from its ideal-gas counterpart even at low density.
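
    A standard way to extract the excess chemical potential from a canonical-ensemble simulation is Widom test-particle insertion. The sketch below applies the estimator to plain Lennard-Jones sites rather than the anisotropic united-atom benzene model, so it illustrates only the estimator, not the paper's force field; the random configuration used here is unequilibrated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def widom_mu_excess(coords, box, n_trial=5000, kT=1.0, eps=1.0, sig=1.0):
    """Excess chemical potential by Widom insertion:
    mu_ex = -kT * ln < exp(-dU/kT) >, averaged over random ghost insertions."""
    boltz = np.empty(n_trial)
    for t in range(n_trial):
        ghost = rng.random(3) * box                 # random trial position
        d = coords - ghost
        d -= box * np.round(d / box)                # minimum-image convention
        r2 = (d ** 2).sum(axis=1)
        r6 = (sig ** 2 / r2) ** 3
        du = np.sum(4.0 * eps * (r6 ** 2 - r6))     # LJ energy of the ghost
        boltz[t] = np.exp(-du / kT)
    return -kT * np.log(boltz.mean())

box = 10.0
coords = rng.random((200, 3)) * box                 # toy configuration
print("mu_excess ~", widom_mu_excess(coords, box))
```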

  15. A threaded Java concurrent implementation of the Monte-Carlo Metropolis Ising model.

    Science.gov (United States)

    Castañeda-Marroquín, Carlos; de la Puente, Alfonso Ortega; Alfonseca, Manuel; Glazier, James A; Swat, Maciej

    2009-06-01

    This paper describes a concurrent Java implementation of the Metropolis Monte-Carlo algorithm used in 2D Ising model simulations. The presented method uses threads, monitors, shared variables and high-level concurrent constructs that hide the low-level details. In our algorithm we assign one thread to handle one spin-flip attempt at a time. We use a special lattice-site selection algorithm to prevent two or more threads from working concurrently in regions of the lattice that "belong" to different spins undergoing spin-flip transformations. Our approach does not depend on the particular platform and maximizes concurrent use of the available resources.
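
    The non-interference constraint described above (no two concurrent updates sharing a neighbourhood) can be mimicked without threads by a checkerboard decomposition, since same-colour sites never neighbour each other and may all be updated at once. A NumPy sketch with arbitrary lattice size and temperature:

```python
import numpy as np

rng = np.random.default_rng(5)

N, J, KT = 128, 1.0, 2.0
spins = rng.choice([-1, 1], size=(N, N))

# Checkerboard masks: sites of one colour share no neighbours, so all of
# them may attempt Metropolis flips simultaneously -- the same
# non-interference idea the threaded implementation enforces per spin.
ii, jj = np.indices((N, N))
masks = [(ii + jj) % 2 == c for c in (0, 1)]

def sweep(s):
    for mask in masks:
        nbr = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
               np.roll(s, 1, 1) + np.roll(s, -1, 1))
        d_e = 2.0 * J * s * nbr                          # energy cost of a flip
        flip = mask & (rng.random((N, N)) < np.exp(-d_e / KT))
        s[flip] *= -1                                    # Metropolis acceptance

for _ in range(100):
    sweep(spins)
print("magnetization per site:", spins.mean())
```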

  16. Variational Monte Carlo study of magnetic states in the periodic Anderson model

    Science.gov (United States)

    Kubo, Katsunori

    2015-03-01

    We study the magnetic states of the periodic Anderson model with a finite Coulomb interaction between f electrons on a square lattice by applying the variational Monte Carlo method. We consider Gutzwiller wavefunctions for the paramagnetic, antiferromagnetic, ferromagnetic, and charge density wave states. We find an antiferromagnetic phase around half-filling. Within this antiferromagnetic phase there is a phase transition accompanied by a change in the Fermi-surface topology. We also study a case away from half-filling and find a ferromagnetic ground state there.

  17. Studies on top-quark Monte Carlo modelling for Top2016

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note summarises recent studies of Monte Carlo simulation setups for top-quark pair production used by the ATLAS experiment and presents a new method to deal with interference effects in $Wt$ single-top-quark production, which is compared against previous techniques. The main focus for top-quark pair production is on the improvement of the modelling of the Powheg generator interfaced to the Pythia8 and Herwig7 shower generators. The studies are done using unfolded data at centre-of-mass energies of 7, 8, and 13 TeV.

  18. Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

    Science.gov (United States)

    Dureau, David; Poëtte, Gaël

    2014-06-01

    This paper deals with high performance computing (HPC) applied to neutron transport theory on complex geometries, using both an adaptive mesh refinement (AMR) algorithm and a Monte Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as domain replication and domain decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousand cores on the petaflopic supercomputer Tera100.

  19. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) method to potential energy curves (PECs) for the excited states of C2, N2, and O2, to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs from MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for computing precise PECs over a wide range, obviating problems concerning quasi-degeneracy.

  20. Competition for marine space: modelling the Baltic Sea fisheries and effort displacement under spatial restrictions

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Eigaard, Ole Ritzau

    2015-01-01

    ...to fishery and from vessel to vessel. The impact assessment of new spatial plans involving fisheries should be based on quantitative bioeconomic analyses that take into account individual vessel decisions and trade-offs between cross-sector conflicting interests. We use a vessel-oriented decision-support tool (the DISPLACE model) ... various constraints. Interlinked spatial, technical, and biological dynamics of vessels and stocks in the scenarios result in stable profits, which compensate for the additional costs from effort displacement and release pressure on the fish stocks. The effort is further redirected away from sensitive ... benthic habitats, enhancing the positive ecological effects. The energy efficiency of some of the vessels, however, is strongly reduced with the new zonation, and some of the vessels suffer decreased profits. The DISPLACE model serves as a spatially explicit bioeconomic benchmark tool for management...

  1. Evaluation of angular scattering models for electron-neutral collisions in Monte Carlo simulations

    Science.gov (United States)

    Janssen, J. F. J.; Pitchford, L. C.; Hagelaar, G. J. M.; van Dijk, J.

    2016-10-01

    In Monte Carlo simulations of electron transport through a neutral background gas, simplifying assumptions related to the shape of the angular distribution of electron-neutral scattering cross sections are usually made, mainly because full sets of differential scattering cross sections are rarely available. In this work, simple models for angular scattering are compared to results from the recent quantum calculations of Zatsarinny and Bartschat for differential scattering cross sections (DCSs) from zero to 200 eV in argon. These simple models represent, in various ways, the trend toward forward scattering with increasing electron energy. The simple models are then used in Monte Carlo simulations of the range, straggling, and backscatter of electrons emitted from a surface into a volume filled with a neutral gas. It is shown that the assumptions of isotropic elastic scattering and of forward scattering for the inelastic collision process yield results within a few percent of those calculated using the DCSs of Zatsarinny and Bartschat. The quantities held constant in these comparisons were the elastic momentum-transfer and total inelastic cross sections.
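
    Two of the simple assumptions mentioned above are easy to state as sampling rules for the polar scattering-angle cosine: isotropic scattering, and a forward-peaked screened-Rutherford-like form with a screening parameter. The sketch below contrasts their mean cosines; the screening value is arbitrary, and this is not the paper's benchmark code.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_cos_theta(model, n, eta=0.05):
    """Draw n polar scattering cosines under a simple angular model.
    'screened' inverts the CDF of f(mu) ~ (1 + 2*eta - mu)^(-2)."""
    if model == "isotropic":
        return rng.uniform(-1.0, 1.0, n)
    if model == "screened":
        u = rng.random(n)
        return 1.0 - 2.0 * eta * u / (1.0 + eta - u)
    raise ValueError(model)

for m in ("isotropic", "screened"):
    mu = sample_cos_theta(m, 100_000)
    print(m, "mean cosine:", mu.mean().round(3))   # ~0 vs strongly forward
```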

  2. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge, it is shown analytically that the applicability of an MC approach to this optical geometry is firmly justified, because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated, from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently...

  3. Collectivity in Heavy Nuclei in the Shell Model Monte Carlo Approach

    CERN Document Server

    Özen, C; Nakada, H

    2013-01-01

    The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase transitions.

  4. Monte Carlo evaluation of biological variation: Random generation of correlated non-Gaussian model parameters

    Science.gov (United States)

    Hertog, Maarten L. A. T. M.; Scheerlinck, Nico; Nicolaï, Bart M.

    2009-01-01

    When modelling the behaviour of horticultural products, which exhibit large sources of biological variation, one often encounters non-Gaussian-distributed model parameters. This work presents an algorithm to reproduce such correlated non-Gaussian model parameters for use in Monte Carlo simulations. The algorithm works around the problem of non-Gaussian distributions by transforming the observed non-Gaussian probability distributions using a proposed SKN-distribution function before applying a covariance decomposition algorithm to generate Gaussian random co-varying parameter sets. The proposed SKN-distribution function is based on the standard Gaussian distribution function and can exhibit different degrees of both skewness and kurtosis. The technique is demonstrated using a case study on modelling the ripening of tomato fruit, evaluating the propagation of biological variation with time.
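
    One common way to realize this "transform, decompose, back-transform" recipe is a Gaussian copula: draw correlated Gaussians via a Cholesky factor, map them to uniforms, then push them through non-Gaussian inverse CDFs. The sketch uses SciPy skew-normal marginals as a stand-in for the paper's SKN distribution; the correlation and marginal parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Target: two correlated parameters with skewed marginals (values assumed).
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
chol = np.linalg.cholesky(corr)             # covariance decomposition step

def sample(n):
    z = (chol @ rng.normal(size=(2, n))).T  # correlated standard Gaussians
    u = stats.norm.cdf(z)                   # map to correlated uniforms
    p1 = stats.skewnorm.ppf(u[:, 0], a=4, loc=1.0, scale=0.2)   # skewed marginal 1
    p2 = stats.skewnorm.ppf(u[:, 1], a=-3, loc=5.0, scale=1.0)  # skewed marginal 2
    return np.column_stack([p1, p2])

params = sample(10_000)
print("empirical correlation:", np.corrcoef(params.T)[0, 1].round(2))
```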

  5. Monte Carlo Simulation of a Novel Classical Spin Model with a Tricritical Point

    Science.gov (United States)

    Cary, Tyler; Scalettar, Richard; Singh, Rajiv

    Recent experimental findings, along with motivation from the well-known Blume-Capel model, have led to the development of a novel two-dimensional classical spin model defined on a square lattice. This model consists of two Ising spin species per site, with each species interacting with its own kind along perpendicular one-dimensional Ising chains, together with complex, frustrating interactions between the species. Probing this model with mean field theory, Metropolis Monte Carlo, and Wang-Landau sampling has revealed a rich phase diagram that includes a tricritical point separating a first-order magnetic phase transition from a continuous one, along with three ordered phases. Away from the tricritical point, the expected 2D Ising critical exponents have been recovered. Ongoing work focuses on finding the tricritical exponents and their connection to a supersymmetric critical point.

  6. Treatment plan evaluation for interstitial photodynamic therapy in a mouse model by Monte Carlo simulation with FullMonte

    Science.gov (United States)

    Cassidy, Jeffrey; Betz, Vaughn; Lilge, Lothar

    2015-02-01

    Monte Carlo (MC) simulation is recognized as the "gold standard" for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations are particularly attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have other attractive implementation features, including inherent parallelism, and permit a continuously variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT that generates dose-volume histograms on a tetrahedral mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods, and it is therefore applicable to a very broad cross-section of anatomies and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right, greatly reducing the number of packets, and hence the runtime, required to achieve acceptable confidence in the results. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.

  7. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Artificial Bee Colony (ABC), inspired by the foraging behaviour of honey bees, is a meta-heuristic optimization algorithm from the swarm intelligence community. Nevertheless, it still falls short in convergence speed and solution quality. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-Based Optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. Among the many methods for effort estimation, COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. Experiments were conducted on a NASA software project dataset, and the results indicate that the optimized parameters provide better estimation capabilities than the original COCOMO II model.
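
    The COCOMO II post-architecture equation that such parameter-optimization schemes retune is simple to evaluate; a sketch with the published calibration constants and invented driver values follows.

```python
import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Post-architecture COCOMO II: PM = A * Size^E * prod(EM),
    with E = B + 0.01 * sum(SF). A and B are the published calibration
    constants that optimization schemes (ABC, PSO, ...) attempt to retune."""
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * math.prod(effort_multipliers)

# Illustrative driver values (not taken from the NASA dataset):
sf = [3.72, 3.04, 4.24, 3.29, 4.68]     # the five scale factors
em = [1.0, 1.1, 0.9, 1.0]               # a subset of effort multipliers
print(f"estimated effort: {cocomo2_effort(50, sf, em):.1f} person-months")
```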

  8. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    The SDN controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on security mechanisms, use qualitative analysis to estimate controller security, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model, we formally model the APIs, the protocol interfaces, and the data items of the controller, and further provide our quantitative Threat/Effort calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also that of different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are consistent with traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  9. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    Science.gov (United States)

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common, highly multi-typed sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to jointly estimate the HPV-6 and HPV-11 epidemic model parameters as well as the unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
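
    The adaptive ingredient here is a proposal covariance rebuilt from the chain's own history, as in Haario-style adaptive Metropolis. A generic sketch on a toy target follows; the epidemic-model likelihood, forward ODE projection, and mixing-matrix details from the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(13)

def adaptive_metropolis(logpost, theta0, n_iter=20_000, adapt_start=1000):
    """Haario-style adaptive Metropolis: after a burn-in, the Gaussian
    proposal covariance is rebuilt from the accumulated chain, scaled
    by the classic 2.38^2/d factor plus a small regularizing jitter."""
    d = len(theta0)
    chain = np.empty((n_iter, d))
    theta = np.array(theta0, float)
    lp = logpost(theta)
    cov = np.eye(d) * 0.1                     # initial proposal covariance
    for t in range(n_iter):
        if t > adapt_start:
            cov = np.cov(chain[:t].T) * 2.38**2 / d + 1e-8 * np.eye(d)
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop         # accept
        chain[t] = theta
    return chain

# Toy target: a correlated 2D Gaussian posterior.
icov = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = adaptive_metropolis(lambda x: -0.5 * x @ icov @ x, [3.0, -3.0])
print("posterior mean:", chain[5000:].mean(axis=0).round(2))
```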

  10. Clinical management and burden of prostate cancer: a Markov Monte Carlo model.

    Directory of Open Access Journals (Sweden)

    Chiranjeev Sanyal

    BACKGROUND: Prostate cancer (PCa) is the most common non-skin cancer among men in developed countries. Several novel treatments have been adopted by healthcare systems to manage PCa. Most observational studies and randomized trials on PCa have concurrently evaluated only a few treatments over short follow-up periods. Further, previous decision-analytic models of PCa management have not evaluated the various contemporary management options. A contemporary decision-analytic model was therefore necessary to address these limitations by synthesizing the evidence on novel treatments, thereby forecasting short- and long-term clinical outcomes. OBJECTIVES: To develop and validate a Markov Monte Carlo model for the contemporary clinical management of PCa, and to assess the clinical burden of the disease from diagnosis to end-of-life. METHODS: A Markov Monte Carlo model was developed to simulate the management of PCa in men 65 years and older from diagnosis to end-of-life. The health states modeled were: risk at diagnosis, active surveillance, active treatment, PCa recurrence, PCa recurrence-free, metastatic castration-resistant prostate cancer, and overall and PCa death. Treatment trajectories were based on state transition probabilities derived from the literature. Validation and sensitivity analyses assessed the accuracy and robustness of model-predicted outcomes. RESULTS: Validation indicated that model-predicted rates were comparable to observed rates in the published literature. The simulated distribution of clinical outcomes for the base case was consistent with the sensitivity analyses. Predicted rates of clinical outcomes and mortality varied across risk groups. The life expectancy and health-adjusted life expectancy predicted for the simulated cohort were 20.9 years (95% CI 20.5-21.3) and 18.2 years (95% CI 17.9-18.5), respectively. CONCLUSION: Study findings indicated that contemporary management strategies improved survival and quality of life in patients with PCa. This
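
    Structurally, such a model is a first-order Monte Carlo walk over health states with an annual transition matrix. The sketch below uses invented transition probabilities purely to show the mechanics, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(21)

# Hypothetical yearly transition matrix over simplified PCa health states;
# all numbers are illustrative placeholders (each row sums to 1).
STATES = ["surveillance", "treatment", "recurrence", "metastatic", "dead"]
P = np.array([
    [0.85, 0.10, 0.03, 0.01, 0.01],
    [0.00, 0.88, 0.07, 0.03, 0.02],
    [0.00, 0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def simulate_patient(max_years=40):
    """First-order Monte Carlo walk through the Markov model; returns
    years lived from diagnosis (a 65+ cohort in the paper's setting)."""
    s = 0
    for year in range(max_years):
        s = rng.choice(len(STATES), p=P[s])
        if STATES[s] == "dead":
            return year + 1
    return max_years

life_years = [simulate_patient() for _ in range(20_000)]
print("mean life-years after diagnosis:", round(np.mean(life_years), 1))
```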

  11. Kinetic Monte-Carlo modeling of hydrogen retention and re-emission from Tore Supra deposits

    Energy Technology Data Exchange (ETDEWEB)

    Rai, A. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany)], E-mail: Abha.Rai@ipp.mpg.de; Schneider, R. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany); Warrier, M. [Computational Analysis Division, BARC, Trombay, Mumbai 400085 (India); Roubin, P.; Martin, C.; Richou, M. [PIIM, Universite de Provence, Centre Saint-Jerome, (service 242) F-13397 Marseille cedex 20 (France)

    2009-04-30

    A multi-scale model has been developed to study the reactive-diffusive transport of hydrogen in porous graphite [A. Rai, R. Schneider, M. Warrier, J. Nucl. Mater. (submitted for publication). http://dx.doi.org/10.1016/j.jnucmat.2007.08.013]. The deposits found on the leading edge of the neutralizer of Tore Supra are multi-scale in nature, consisting of micropores with a typical size below 2 nm (~11%), mesopores (~5%), and macropores with a typical size above 50 nm [C. Martin, M. Richou, W. Sakaily, B. Pegourie, C. Brosset, P. Roubin, J. Nucl. Mater. 363-365 (2007) 1251]. Kinetic Monte-Carlo (KMC) has been used to study hydrogen transport at the meso-scale. The recombination rate and diffusion coefficient calculated at the meso-scale were used as inputs to scale up and analyze hydrogen transport at the macro-scale, where a combination of the KMC and MCD (Monte-Carlo diffusion) methods was used. The flux dependence of hydrogen recycling has been studied. The retention and re-emission analysis of the model has been extended to study the chemical erosion process based on the Kueppers-Hopf cycle [M. Wittmann, J. Kueppers, J. Nucl. Mater. 227 (1996) 186].

  12. A Monte-Carlo based model of the AX-PET demonstrator and its experimental validation.

    Science.gov (United States)

    Solevi, P; Oliver, J F; Gillam, J E; Bolle, E; Casella, C; Chesi, E; De Leo, R; Dissertori, G; Fanti, V; Heller, M; Lai, M; Lustermann, W; Nappi, E; Pauss, F; Rudge, A; Ruotsalainen, U; Schinzel, D; Schneider, T; Séguinot, J; Stapnes, S; Weilhammer, P; Tuna, U; Joram, C; Rafecas, M

    2013-08-21

    AX-PET is a novel PET detector based on axially oriented crystals and orthogonal wavelength shifter (WLS) strips, both individually read out by silicon photomultipliers. Its design decouples sensitivity and spatial resolution by reducing the parallax error due to the layered arrangement of the crystals. Additionally, the granularity of AX-PET enhances the capability to track photons within the detector, yielding a large fraction of inter-crystal scatter events. These events, if properly processed, can be included in the reconstruction stage, further increasing the sensitivity. Its unique features require dedicated Monte-Carlo simulations, enabling the development of the device, interpreting data and allowing the development of reconstruction codes. At the same time, the non-conventional design of AX-PET poses several challenges to the simulation and modeling tasks, mostly related to the light transport and distribution within the crystals and WLS strips, as well as the electronics readout. In this work we present a hybrid simulation tool based on an analytical model and a Monte-Carlo based description of the AX-PET demonstrator. It was extensively validated against experimental data, providing excellent agreement.

  13. Optical model for port-wine stain skin and its Monte Carlo simulation

    Science.gov (United States)

    Xu, Lanqing; Xiao, Zhengying; Chen, Rong; Wang, Ying

    2008-12-01

    Laser irradiation is currently the most accepted therapy for port-wine stain (PWS) patients. Its efficacy depends strongly on how energy is deposited in the skin, so a better understanding of light propagation in PWS skin is indispensable for choosing optimal treatment parameters. Traditional Monte Carlo simulations using simple geometries, such as planar layered tissue models, cannot provide the energy deposition in skin with enlarged blood vessels. In this paper, the structure of normal skin and the pathological character of PWS skin are analyzed in detail, and the true structure is simplified into a hybrid layered mathematical model that characterizes the two most important aspects of PWS skin: the layered structure and the overabundant dermal vessels. The basic laser-tissue interaction mechanisms in skin were investigated, and the optical parameters of PWS skin tissue at the therapeutic wavelength were determined. Monte Carlo (MC) based techniques were chosen to calculate the energy deposition in the skin. The results can be used in choosing the optical dosage, and further simulations can be used to predict optimal laser parameters to achieve highly effective laser treatment of PWS.

  14. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since
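
    At its heart, RMC is a Metropolis-like move judged against experimental data rather than an energy: perturb an atom, recompute the model pair distribution function, and accept the move if it lowers (or, with some probability, raises) the chi-squared misfit to the measured curve. A schematic acceptance step is sketched below; the toy data arrays and the sigma parameter are illustrative assumptions, not taken from any of the workshop papers.

    ```python
    import math
    import random

    def chi_squared(g_model, g_exp, sigma):
        """Misfit between model and experimental pair distribution functions."""
        return sum((m - e) ** 2 for m, e in zip(g_model, g_exp)) / sigma ** 2

    def rmc_accept(chi2_old, chi2_new, rng=random):
        """RMC acceptance: keep improvements; accept worsenings with
        probability exp(-(chi2_new - chi2_old)/2), as in Metropolis."""
        if chi2_new <= chi2_old:
            return True
        return rng.random() < math.exp(-(chi2_new - chi2_old) / 2.0)

    # Toy demo with made-up PDF curves (sigma and data are illustrative):
    g_exp = [0.0, 0.5, 1.2, 1.0, 1.0]
    g_old = [0.0, 0.4, 1.0, 1.1, 1.0]
    g_new = [0.0, 0.5, 1.1, 1.0, 1.0]
    old, new = chi_squared(g_old, g_exp, 0.1), chi_squared(g_new, g_exp, 0.1)
    print(old, new, rmc_accept(old, new))
    ```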

  15. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with GARCH-family volatility is supported by the efficient-market hypothesis. The results have shown that the static evaluation is inferior to the dynamic one, evidencing that the dynamic analysis supports the efficient-market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since they are capable of accommodating the market's high dynamics.
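
    The dynamic-VaR procedure the record describes can be sketched in three steps: update the conditional variance with GARCH(1,1), simulate many one-day-ahead returns at that volatility, and read VaR off the loss quantile. The parameters and inputs below are illustrative, not estimates for the Petrobras/Telemar/Vale portfolio.

    ```python
    import random

    # Illustrative GARCH(1,1) parameters: sigma2_t = omega + alpha*r^2 + beta*sigma2
    OMEGA, ALPHA, BETA = 1e-6, 0.08, 0.90

    def simulate_var(r_last, sigma2_last, n_paths=100000, level=0.99):
        """One-day-ahead Monte Carlo VaR under a GARCH(1,1) volatility model."""
        sigma2 = OMEGA + ALPHA * r_last ** 2 + BETA * sigma2_last
        vol = sigma2 ** 0.5
        returns = sorted(random.gauss(0.0, vol) for _ in range(n_paths))
        return -returns[int((1.0 - level) * n_paths)]  # loss at the chosen quantile

    print("99% 1-day VaR:", simulate_var(r_last=-0.02, sigma2_last=4e-4))
    ```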

  16. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    Science.gov (United States)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
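
    The weighting idea is standard importance sampling: emit packets from a biased direction distribution and multiply each packet's energy by the ratio of the true to the biased probability density, so the mean flux is unchanged. A one-dimensional sketch under an assumed biasing density (the `bias` parameter and the two-half split are illustrative, not the paper's scheme):

    ```python
    import random

    def sample_biased_direction(bias=0.7, rng=random):
        """Emit preferentially 'downward' (mu < 0): with probability `bias`
        draw mu uniformly in [-1, 0), else in [0, 1). Return (mu, weight)
        with weight = p_true / p_biased, where p_true = 1/2 per unit mu
        (isotropic), so the average energy flux is conserved."""
        if rng.random() < bias:
            mu = -rng.random()              # biased half, density `bias` per unit mu
            weight = 0.5 / bias
        else:
            mu = rng.random()               # other half, density (1 - bias)
            weight = 0.5 / (1.0 - bias)
        return mu, weight

    # Sanity check: the weighted estimate of P(mu < 0) recovers the
    # isotropic value 0.5 despite the biased emission.
    n = 200000
    est = sum(w * (mu < 0) for mu, w in (sample_biased_direction() for _ in range(n))) / n
    print(est)
    ```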

  17. Monte Carlo simulations of a supersymmetric matrix model of dynamical compactification in non perturbative string theory

    CERN Document Server

    Anagnostopoulos, Konstantinos N; Nishimura, Jun

    2012-01-01

    The IKKT or IIB matrix model has been postulated to be a non-perturbative definition of superstring theory. It has the attractive feature that spacetime is dynamically generated, which makes possible the scenario of dynamical compactification of extra dimensions; in the Euclidean model this manifests itself as spontaneous breaking of the SO(10) rotational invariance (SSB). In this work we use Monte Carlo simulations to study the six-dimensional version of the Euclidean IIB matrix model. The simulations are found to be plagued by a strong complex action problem, and the factorization method is used for effective sampling and for computing expectation values of the extent of spacetime in various dimensions. Our results are consistent with calculations using the Gaussian Expansion method, which predict SSB to SO(3) symmetric vacua, a finite universal extent of the compactified dimensions and a finite spacetime volume.

  18. Corner wetting in the two-dimensional Ising model: Monte Carlo results

    Science.gov (United States)

    Albano, E. V.; DeVirgiliis, A.; Müller, M.; Binder, K.

    2003-01-01

    Square L × L (L = 24-128) Ising lattices with nearest neighbour ferromagnetic exchange are considered using free boundary conditions at which boundary magnetic fields ± h are applied, i.e., at the two boundary rows ending at the lower left corner a field +h acts, while at the two boundary rows ending at the upper right corner a field -h acts. For temperatures T less than the critical temperature Tc of the bulk, this boundary condition leads to the formation of two domains with opposite orientations of the magnetization direction, separated by an interface which for T larger than the filling transition temperature Tf(h) runs from the upper left corner to the lower right corner, while for T < Tf(h) the interface is localized either close to the lower left corner or close to the upper right corner. Numerous theoretical predictions for the critical behaviour of this 'corner wetting' or 'wedge filling' transition are tested by Monte Carlo simulations. In particular, it is shown that for T = Tf(h) the magnetization profile m(z) in the z-direction normal to the interface is simply linear and the interfacial width scales as w ∝ L, while for T > Tf(h) it scales as w ∝ √L. The distribution P(ℓ) of the interface position ℓ (measured along the z-direction from the corners) decays exponentially for T < Tf(h). Furthermore, the Monte Carlo data are compatible with ⟨ℓ⟩ ∝ (Tf(h) - T)^-1 and a finite size scaling of the total magnetization according to M(L, T) = M̃{(1 - T/Tf(h))^ν⊥ L} with ν⊥ = 1. Unlike the findings for critical wetting in the thin film geometry of the Ising model, the Monte Carlo results for corner wetting are in very good agreement with the theoretical predictions.

  19. Measuring Effortful Control Using the Children's Behavior Questionnaire-Very Short Form: Modeling Matters.

    Science.gov (United States)

    Backer-Grøndahl, Agathe; Nærde, Ane; Ulleberg, Pål; Janson, Harald

    2016-01-01

    Effortful control (EC) is an important concept in research on self-regulation in children. We tested 2 alternative factor models of EC as measured by the Children's Behavior Questionnaire-Very Short Form (CBQ-VSF; Putnam & Rothbart, 2006) in a large sample of preschoolers (N = 1,007): 1 lower-order and 1 hierarchical second-order structure. Additionally, the convergent and predictive validity of EC as measured by the CBQ-VSF was investigated. The results supported a hierarchical model. Moderate convergent validity of the second-order latent EC factor was found in that it correlated with compliance and observed EC tasks. Both CBQ-VSF EC measures were also negatively correlated with child physical aggression. The results have implications for the measurement, modeling, and interpretation of EC when applying the CBQ.

  20. The NASA-Langley Wake Vortex Modelling Effort in Support of an Operational Aircraft Spacing System

    Science.gov (United States)

    Proctor, Fred H.

    1998-01-01

    Two numerical modelling efforts, one using a large eddy simulation model and the other a numerical weather prediction model, are underway in support of NASA's Terminal Area Productivity program. The large-eddy simulation model (LES) has a meteorological framework and permits the interaction of wake vortices with environments characterized by crosswind shear, stratification, humidity, and atmospheric turbulence. Results from the numerical simulations are being used to assist in the development of algorithms for an operational wake-vortex aircraft spacing system. A mesoscale weather forecast model is being adapted for providing operational forecasts of winds, temperature, and turbulence parameters to be used in the terminal area. This paper describes the goals and modelling approach, as well as achievements obtained to date. Simulation results will be presented from the LES model for both two and three dimensions. The 2-D model is found to be generally valid for studying wake vortex transport, while the 3-D approach is necessary for realistic treatment of decay via interaction of wake vortices and atmospheric boundary layer turbulence. Meteorology is shown to have an important effect on vortex transport and decay. Presented are results showing that wake vortex transport is unaffected by uniform fog or rain, but can be strongly affected by nonlinear vertical change in the ambient crosswind. Both simulation and observations show that atmospheric vortices decay from the outside with minimal expansion of the core. Vortex decay and the onset of three-dimensional instabilities are found to be enhanced by the presence of ambient turbulence.

  1. Core-scale solute transport model selection using Monte Carlo analysis

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.

    2013-06-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with the conservative tracers tritium (3H) and sodium-22 (22Na), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.

  2. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    Science.gov (United States)

    Merheb, C.; Petegnief, Y.; Talbot, J. N.

    2007-02-01

    Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed

  3. Incorporating S-shaped testing-effort functions into NHPP software reliability model with imperfect debugging

    Institute of Scientific and Technical Information of China (English)

    Qiuying Li; Haifeng Li; Minyan Lu

    2015-01-01

    Testing-effort (TE) and imperfect debugging (ID) in the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). To describe the S-shaped varying trend of the TE increasing rate more accurately, two S-shaped testing-effort functions (TEFs), the delayed S-shaped TEF (DS-TEF) and the inflected S-shaped TEF (IS-TEF), are first proposed. These two TEFs are then incorporated into various types (exponential-type, delayed S-shaped and inflected S-shaped) of non-homogeneous Poisson process (NHPP) SRGMs with two forms of ID, yielding a series of new NHPP SRGMs that consider S-shaped TEFs as well as ID. Finally these new SRGMs and several comparison NHPP SRGMs are applied to four real failure data sets to investigate their fitting and prediction power. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than the previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM may be more effective and appropriate compared with the exponential-type and the delayed S-shaped NHPP SRGMs; (iii) the inflected S-shaped NHPP SRGM considering both the IS-TEF and ID yields the most accurate fitting and prediction results among the compared NHPP SRGMs.
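
    For reference, the delayed S-shaped and inflected (logistic) S-shaped testing-effort functions have standard closed forms in the SRGM literature; whether they match this paper's exact parameterization is an assumption. A sketch with illustrative parameter values (alpha: total TE, beta: consumption rate, psi: inflection parameter):

    ```python
    import math

    def delayed_s_shaped_tef(t, alpha, beta):
        """Cumulative testing effort, delayed S-shaped form:
        W(t) = alpha * (1 - (1 + beta*t) * exp(-beta*t))."""
        return alpha * (1.0 - (1.0 + beta * t) * math.exp(-beta * t))

    def inflected_s_shaped_tef(t, alpha, beta, psi):
        """Cumulative testing effort, inflected (logistic) S-shaped form:
        W(t) = alpha / (1 + psi * exp(-beta*t))."""
        return alpha / (1.0 + psi * math.exp(-beta * t))

    for t in (0, 5, 10, 20):
        print(t,
              delayed_s_shaped_tef(t, alpha=100.0, beta=0.3),
              inflected_s_shaped_tef(t, alpha=100.0, beta=0.3, psi=5.0))
    ```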

  4. Monte Carlo Method Simulation of the Phase Transition of the Two-Dimensional Triangular Ising Lattice Model

    Institute of Scientific and Technical Information of China (English)

    赵新军

    2012-01-01

    In this paper, we investigate the two-dimensional triangular Ising lattice by means of the Monte Carlo method, and calculate the magnetization and specific heat of the two-dimensional triangular Ising lattice model in the absence of a magnetic field as functions of temperature. The critical temperature obtained by the Monte Carlo method, J/kBT = 0.44, agrees well with the theoretical result.
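
    A minimal Metropolis sketch for the triangular-lattice Ising model follows; the triangular lattice is represented as a square array with one extra diagonal bond, giving six neighbours per site. Lattice size, temperature and sweep counts are illustrative, not the settings used in the paper.

    ```python
    import math
    import random

    L, J = 24, 1.0
    # Triangular lattice as a square array plus one diagonal: six neighbours.
    NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

    def sweep(spins, T):
        """One Metropolis sweep at temperature T (units with kB = 1)."""
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            h = sum(spins[(i + di) % L][(j + dj) % L] for di, dj in NEIGHBOURS)
            dE = 2.0 * J * spins[i][j] * h   # energy cost of flipping spin (i, j)
            if dE <= 0.0 or random.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]

    spins = [[1] * L for _ in range(L)]
    T = 3.0
    for _ in range(200):                     # equilibration sweeps
        sweep(spins, T)
    m = abs(sum(sum(row) for row in spins)) / (L * L)
    print("magnetization per spin at T =", T, ":", m)
    ```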

  5. Monte Carlo based verification of a beam model used in a treatment planning system

    Science.gov (United States)

    Wieslander, E.; Knöös, T.

    2008-02-01

    Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through the multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on available information from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leaves which are complemented with a backup diaphragm. The characteristics of the electron beam, i.e., energy and spot size, impinging on the target have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, thus a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful for understanding the limitations of the model in the TPS and provide knowledge for further improvements of the TPS source modelling.

  6. Study of dispersion forces with quantum Monte Carlo: toward a continuum model for solvation.

    Science.gov (United States)

    Amovilli, Claudio; Floris, Franca Maria

    2015-05-28

    We present a general method to compute dispersion interaction energy that, starting from London's interpretation, is based on the measure of the electronic electric field fluctuations, evaluated on electronic sampled configurations generated by quantum Monte Carlo. A damped electric field was considered in order to avoid divergence in the variance. Dispersion atom-atom C6 van der Waals coefficients were computed by coupling electric field fluctuations with static dipole polarizabilities. The dipole polarizability was evaluated at the diffusion Monte Carlo level by studying the response of the system to a constant external electric field. We extended the method to the calculation of the dispersion contribution to the free energy of solvation in the framework of the polarizable continuum model. We performed test calculations on pairs of some atomic systems. We considered He in ground and low lying excited states and Ne in the ground state and obtained a good agreement with literature data. We also made calculations on He, Ne, and F(-) in water as the solvent. Resulting dispersion contribution to the free energy of solvation shows the reliability of the method illustrated here.

  7. Spreaders and sponges define metastasis in lung cancer: a Markov chain Monte Carlo mathematical model.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter

    2013-05-01

    The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, although not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as "spreaders" or "sponges" and orders the timescales of progression from site to site. In light of recent experimental evidence highlighting the potential significance of self-seeding of primary tumors, we use a Markov chain Monte Carlo (MCMC) approach, based on large autopsy data sets, to quantify the stochastic, systemic, and often multidirectional aspects of cancer progression. We quantify three types of multidirectional mechanisms of progression: (i) self-seeding of the primary tumor, (ii) reseeding of the primary tumor from a metastatic site (primary reseeding), and (iii) reseeding of metastatic tumors (metastasis reseeding). The model shows that the combined characteristics of the primary and the first metastatic site to which it spreads largely determine the future pathways and timescales of systemic disease.
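
    The machinery behind such a pathway diagram is a Markov chain over anatomical sites: the next site is repeatedly sampled from a row of a transition matrix, and self-seeding and reseeding appear as diagonal and backward entries. The sketch below uses hypothetical sites and probabilities purely for illustration, not the autopsy-derived values of the study.

    ```python
    import random

    # Hypothetical transition probabilities between sites (rows sum to 1);
    # a "spreader"-like row puts most mass off-diagonal, a "sponge" on itself.
    SITES = ["lung", "lymph", "liver", "bone"]
    P = {
        "lung":  [0.10, 0.40, 0.30, 0.20],   # spreader-like primary
        "lymph": [0.05, 0.60, 0.20, 0.15],
        "liver": [0.05, 0.15, 0.70, 0.10],   # sponge-like
        "bone":  [0.05, 0.15, 0.10, 0.70],
    }

    def simulate_path(start="lung", steps=10, rng=random):
        """Sample one progression pathway through the Markov chain."""
        path, site = [start], start
        for _ in range(steps):
            site = rng.choices(SITES, weights=P[site])[0]
            path.append(site)
        return path

    print(simulate_path())
    ```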

  8. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry of radiation treatment. Monte Carlo simulation is a method that determines the paths and dosimetry of particles using random numbers. Recently, owing to fast computer processing, it has become possible to treat a patient more precisely; however, the simulation time must be increased to reduce the statistical uncertainty. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on the 201 channels and compared the measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than those with the original source model, and there was no statistically significant difference in the simulated results.

  9. Recent Advances in the Microscopic Calculations of Level Densities by the Shell Model Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Alhassid Y.

    2014-04-01

    Full Text Available The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59−64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets.

  10. Recent Advances in the Microscopic Calculations of Level Densities by the Shell Model Monte Carlo Method

    CERN Document Server

    Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H

    2014-01-01

    The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.

  11. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    Science.gov (United States)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences, at the 5% risk rate, between results obtained by quantitative relations and results obtained by the proposed method.

  12. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by the maximum likelihood estimator (MLE). However, the EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. In this paper a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters, where a genetic algorithm is designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and it is found that the results based on the DMF are more accurate than those given by the EKF with respect to mean absolute error.
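
    The density-based filter belongs to the sequential Monte Carlo family; a generic bootstrap particle filter conveys the idea of estimating unobservable states by propagating particles through the state dynamics, weighting them by the measurement likelihood, and resampling. The scalar dynamics and noise levels below are placeholders, not the PK/PD model of the paper.

    ```python
    import math
    import random

    def bootstrap_filter(ys, n=1000, q=0.5, r=0.5):
        """Generic bootstrap particle filter for x_t = 0.9*x_{t-1} + noise,
        y_t = x_t + noise; returns the filtered state means."""
        particles = [random.gauss(0.0, 1.0) for _ in range(n)]
        means = []
        for y in ys:
            # Propagate through (placeholder) state dynamics.
            particles = [0.9 * x + random.gauss(0.0, q) for x in particles]
            # Weight by the Gaussian measurement likelihood.
            weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
            total = sum(weights)
            weights = [w / total for w in weights]
            means.append(sum(w * x for w, x in zip(weights, particles)))
            # Multinomial resampling.
            particles = random.choices(particles, weights=weights, k=n)
        return means

    obs = [0.5, 0.8, 1.1, 0.9, 1.3]
    print(bootstrap_filter(obs))
    ```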

  13. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    Energy Technology Data Exchange (ETDEWEB)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  14. A Monte Carlo method for critical systems in infinite volume: the planar Ising model

    CERN Document Server

    Herdeiro, Victor

    2016-01-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.

  15. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.

  16. Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    Directory of Open Access Journals (Sweden)

    Oettingen Mikołaj

    2017-01-01

    Full Text Available The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands much advanced research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of thorium-lead (Th-Pb) fuel assembly numerical models for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous-energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.

  17. Monte Carlo study of Lefschetz thimble structure in one-dimensional Thirring model at finite density

    CERN Document Server

    Fujii, Hirotsugu; Kikukawa, Yoshio

    2015-01-01

    We consider the one-dimensional massive Thirring model formulated on the lattice with staggered fermions and an auxiliary compact vector (link) field, which is exactly solvable and shows a phase transition with increasing chemical potential of the fermion number: a crossover at finite temperature and a first-order transition at zero temperature. We complexify its path integral on Lefschetz thimbles and examine its phase transition by hybrid Monte Carlo simulations on the single dominant thimble. We observe a discrepancy between the numerical and exact results in the crossover region for small inverse coupling $\beta$ and/or large lattice size $L$, while they are in good agreement in the lower and higher density regions. We also observe that the discrepancy persists in the continuum limit keeping the temperature finite, and that it becomes more significant toward the low-temperature limit. This numerical result is consistent with our analytical study of the model's thimble structure. And these results imply...

  18. Conformal or Walking? Monte Carlo renormalization group studies of SU(3) gauge models with fundamental fermions

    CERN Document Server

    Hasenfratz, Anna

    2010-01-01

    Strongly coupled gauge systems with many fermions are important in many phenomenological models. I use the 2-lattice matching Monte Carlo renormalization group method to study the fixed point structure and critical indexes of SU(3) gauge models with 8 and 12 flavors of fundamental fermions. With an improved renormalization group block transformation I am able to connect the perturbative and confining regimes of the N_f=8 flavor system, thus verifying its QCD-like nature. With N_f=12 flavors the data favor the existence of an infrared fixed point and conformal phase, though the results are also consistent with very slow walking. I measure the anomalous mass dimension in both systems at several gauge couplings and find that they are barely different from the free field value.

  19. Measurement and Monte Carlo modeling of the spatial response of scintillation screens

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)

    2007-11-01

    In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.
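
    The optical-parameter adjustment described here reduces to a localized random search: perturb the current parameter vector inside a slowly shrinking neighbourhood and keep the perturbation only if the objective improves. A generic sketch follows; the quadratic stand-in objective and target values are illustrative, not the simulated-versus-measured spatial-response misfit of the paper.

    ```python
    import random

    def localized_random_search(f, x0, radius=1.0, shrink=0.95, iters=500):
        """Minimize f by random perturbations in a shrinking neighbourhood."""
        x, fx = list(x0), f(x0)
        for _ in range(iters):
            cand = [xi + random.uniform(-radius, radius) for xi in x]
            fc = f(cand)
            if fc < fx:          # keep only improving moves
                x, fx = cand, fc
            radius *= shrink     # localize the search over time
        return x, fx

    # Stand-in objective: recover two optical parameters from a quadratic misfit.
    target = (3.0, 0.2)
    objective = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
    print(localized_random_search(objective, [1.0, 1.0]))
    ```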

  20. The OH distribution in cometary atmospheres - A collisional Monte Carlo model for heavy species

    Science.gov (United States)

    Combi, Michael R.; Bos, Brent J.; Smyth, William H.

    1993-01-01

    The study presents an extension of the cometary atmosphere Monte Carlo particle trajectory model formalism which makes it both physically correct for heavy species and computationally reasonable. The derivation accounts for the collision path and scattering redirection of a heavy radical traveling through a fluid coma with a given radial distribution in outflow speed and temperature. The revised model verifies that the fast-H atom approximations used in earlier work are valid, and it is applied to a case where the heavy-radical formalism is necessary: the OH distribution. It is found that a steeper variation of water production rate with heliocentric distance is required for a water coma consistent with the velocity-resolved observations of Comet P/Halley.

  1. Monte Carlo markovian modeling of modal competition in dual-wavelength semiconductor lasers

    Science.gov (United States)

    Chusseau, Laurent; Philippe, Fabrice; Jean-Marie, Alain

    2014-03-01

    Monte Carlo Markovian models of a dual-mode semiconductor laser with quantum well (QW) or quantum dot (QD) active regions are proposed. Accounting for carriers and photons as particles that may exchange energy in the course of time allows an ab initio description of laser dynamics such as mode competition and intrinsic laser noise. We used these models to evaluate the stability of the dual-mode regime when laser characteristics are varied: mode gains and losses, non-radiative recombination rates, intraband relaxation time, capture time in the QD, and transfer of excitation between QDs via the wetting layer. As a major result, a possible steady-state dual-mode regime is predicted for specially designed QD semiconductor lasers, thereby acting as a CW microwave or terahertz-beating source, whereas this regime does not occur for QW lasers.

  2. Simulation and Modeling Efforts to Support Decision Making in Healthcare Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Eman AbuKhousa

    2014-01-01

    Full Text Available Recently, most healthcare organizations focus their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the associated decision-making processes. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision-making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  3. Simulation and modeling efforts to support decision making in healthcare supply chain management.

    Science.gov (United States)

    AbuKhousa, Eman; Al-Jaroodi, Jameela; Lazarova-Molnar, Sanja; Mohamed, Nader

    2014-01-01

    Recently, most healthcare organizations focus their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the associated decision-making processes. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision-making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  4. An effort for developing a seamless transport modeling and remote sensing system for air pollutants

    Science.gov (United States)

    Nakajima, T.; Goto, D.; Dai, T.; Misawa, S.; Uchida, J.; Schutgens, N.; Hashimoto, M.; Oikawa, E.; Takenaka, H.; Tsuruta, H.; Inoue, T.; Higurashi, A.

    2015-12-01

    Wide areas of the globe, such as the Asian region, still suffer from large emissions of air pollutants, which cause serious impacts on the earth's climate and on the public health of the area. The launch of an international initiative, the Climate and Clean Air Coalition (CCAC), is an example of efforts to ease these difficulties by reducing Short-Lived Climate Pollutants (SLCPs), i.e., black carbon aerosol, methane and other short-lived atmospheric materials that heat the earth's system, along with long-lived greenhouse gas mitigation. Impact evaluation of the air pollutants, however, carries large uncertainties. We introduce a recent effort of the MEXT/SALSA and MOEJ/S-12 projects to develop a seamless transport model for atmospheric constituents, NICAM-Chem, which is flexible enough to cover global to regional scales through the NICAM nonhydrostatic dynamic core, coupled with the SPRINTARS aerosol model and the CHASER atmospheric chemistry model, and with their three computational grid systems, i.e., quasi-homogeneous grids, stretched grids and diamond grids. A local ensemble transform Kalman filter/smoother with this modeling system was successfully applied to data from MODIS, AERONET, and CALIPSO for global assimilation/inversion, and to surface SPM and SO2 air pollution monitoring networks for Japanese-area assimilation. My talk will also discuss the effective use of satellite remote sensing of aerosols using the Cloud and Aerosol Imager (CAI) on board the GOSAT satellite and the Advanced Himawari Imager (AHI) on board the new third-generation geostationary satellite, Himawari-8. The CAI has a near-ultraviolet channel of 380 nm with 500 m spatial resolution, and the AHI has a high-frequency measurement capability of every 10 minutes. These functions are very effective for accurate land aerosol remote sensing, so that a combination with the developed aerosol assimilation system is promising.

  5. Monte Carlo method based QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors.

    Science.gov (United States)

    Živković, Jelena V; Trutić, Nataša V; Veselinović, Jovana B; Nikolić, Goran M; Veselinović, Aleksandar M

    2015-09-01

    The Monte Carlo method was used for QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors. The first QSAR model was developed for a series of 74 3-anilino-4-arylmaleimide derivatives. The second QSAR model was developed for a series of 177 maleimide derivatives. The QSAR models were calculated with the molecular structure represented by the simplified molecular input-line entry system (SMILES). Two splits were examined: one split into training and test sets for the first QSAR model, and one split into training, test and validation sets for the second. The statistical quality of the developed models is very good. The model for 3-anilino-4-arylmaleimide derivatives had the following statistical parameters: r^2 = 0.8617 for the training set; r^2 = 0.8659 and rm^2 = 0.7361 for the test set. The model for maleimide derivatives had the following statistical parameters: r^2 = 0.9435 for the training set; r^2 = 0.9262 and rm^2 = 0.8199 for the test set; and r^2 = 0.8418, average rm^2 = 0.7469 and Δrm^2 = 0.1476 for the validation set. Structural indicators considered as molecular fragments responsible for increases and decreases in the inhibition activity have been defined. The computer-aided design of new potential glycogen synthase kinase-3β inhibitors using the defined structural alerts is presented.

  6. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    Science.gov (United States)

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  7. Behavioral modeling of human choices reveals dissociable effects of physical effort and temporal delay on reward devaluation.

    Science.gov (United States)

    Klein-Flügge, Miriam C; Kennerley, Steven W; Saraiva, Ana C; Penny, Will D; Bestmann, Sven

    2015-03-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort
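
    The two competing devaluation shapes can be written down directly: hyperbolic discounting for delay, and an initially concave, inverse-sigmoid devaluation for effort. The functional forms below are common choices consistent with the qualitative description in the abstract; the parameter values, and the use of a Hill-type sigmoid for effort, are illustrative assumptions rather than the paper's fitted model.

    ```python
    def hyperbolic_delay_value(amount, delay, k=0.1):
        """Delay discounting: V = A / (1 + k*D);
        steepest devaluation at short delays."""
        return amount / (1.0 + k * delay)

    def sigmoidal_effort_value(amount, effort, k=1.0, p=4.0):
        """Effort discounting with an inverse-sigmoid (Hill-type) shape:
        shallow devaluation for low effort, steep for high effort."""
        return amount * (1.0 - effort ** p / (effort ** p + k ** p))

    for x in (0.2, 0.5, 1.0, 2.0):
        print(x, hyperbolic_delay_value(10.0, x), sigmoidal_effort_value(10.0, x))
    ```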

  8. Monte Carlo modeling of proton therapy installations: a global experimental method to validate secondary neutron dose calculations.

    Science.gov (United States)

    Farah, J; Martinetti, F; Sayah, R; Lacoste, V; Donadille, L; Trompier, F; Nauraye, C; De Marzi, L; Vabre, I; Delacroix, S; Hérault, J; Clairand, I

    2014-06-07

    Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Appreciable differences between experimental measurements and simulations were nonetheless observed, especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

  9. Monte Carlo modeling of proton therapy installations: a global experimental method to validate secondary neutron dose calculations

    Science.gov (United States)

    Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.

    2014-06-01

    Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Appreciable differences between experimental measurements and simulations were nonetheless observed, especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

  10. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  11. AN ENHANCED MODEL TO ESTIMATE EFFORT, PERFORMANCE AND COST OF THE SOFTWARE PROJECTS

    Directory of Open Access Journals (Sweden)

    M. Pauline

    2013-04-01

    Full Text Available The authors have proposed a model that first captures the fundamentals of software metrics in phase 1, consisting of three primitive primary software engineering metrics: person-months (PM), function points (FP), and lines of code (LOC). Phase 2 consists of the proposed function point, which is obtained by grouping the adjustment factors to simplify the process of adjustment and to ensure more consistency in the adjustments. In the proposed method, fuzzy logic is used to quantify the quality of requirements, which is added as one of the adjustment factors; thus a fuzzy-based approach for the Enhanced General System Characteristics to estimate the effort of software projects using productivity has been obtained. Phase 3 takes the calculated function point and gives it as input to the static single-variable models (i.e., Intermediate COCOMO and COCOMO II) for cost estimation. The authors have tailored the cost factors in Intermediate COCOMO, and both cost and scale factors in COCOMO II, to suit the individual development environment, which is very important for the accuracy of the cost estimates. The software performance indicators (project duration, schedule predictability, requirements completion ratio and post-release defect density) are also measured for the software projects in this work. A comparative study of effort, performance measurement and cost estimation of software projects is done between the existing model and the authors' proposed work. Thus our work analyzes the interactional process through which the estimation tasks were collectively accomplished.
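
    The backbone of phase 3 is the standard Intermediate-COCOMO relation: effort in person-months is a power law of size, scaled by a product of cost-driver effort multipliers (EAF). A sketch with the nominal semi-detached-mode coefficients follows; the `req_quality` driver is a hypothetical stand-in for the paper's fuzzy requirements-quality adjustment, and all driver values are illustrative.

    ```python
    def intermediate_cocomo(kloc, cost_drivers, a=3.0, b=1.12):
        """Intermediate COCOMO: PM = a * KLOC^b * EAF, where EAF is the
        product of the cost-driver effort multipliers."""
        eaf = 1.0
        for multiplier in cost_drivers.values():
            eaf *= multiplier
        return a * kloc ** b * eaf

    # Illustrative semi-detached project; "req_quality" stands in for the
    # paper's fuzzy requirements-quality adjustment (hypothetical value).
    drivers = {"RELY": 1.15, "CPLX": 1.15, "ACAP": 0.86, "req_quality": 1.05}
    print("effort (person-months):",
          intermediate_cocomo(kloc=32.0, cost_drivers=drivers))
    ```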

  12. Monte Carlo model of the Studsvik BNCT clinical beam: description and validation.

    Science.gov (United States)

    Giusti, Valerio; Munck af Rosenschöld, Per M; Sköld, Kurt; Montagnini, Bruno; Capala, Jacek

    2003-12-01

    The neutron beam at the Studsvik facility for boron neutron capture therapy (BNCT) and the validation of the related computational model developed for the MCNP-4B Monte Carlo code are presented. Several measurements performed at the epithermal neutron port used for clinical trials have been made in order to validate the Monte Carlo computational model. The good general agreement between the MCNP calculations and the experimental results has provided an adequate check of the calculation procedure. In particular, at the nominal reactor power of 1 MW, the calculated in-air epithermal neutron flux in the energy interval between 0.4 eV-10 keV is 3.24 x 10^9 n cm^-2 s^-1 (+/- 1.2% 1 std. dev.) while the measured value is 3.30 x 10^9 n cm^-2 s^-1 (+/- 5.0% 1 std. dev.). Furthermore, the calculated in-phantom thermal neutron flux, equal to 6.43 x 10^9 n cm^-2 s^-1 (+/- 1.0% 1 std. dev.), and the corresponding measured value of 6.33 x 10^9 n cm^-2 s^-1 (+/- 5.3% 1 std. dev.) agree within their respective uncertainties. The only statistically significant disagreement is a discrepancy of 39% between the MCNP calculations of the in-air photon kerma and the corresponding experimental value. Despite this, a quite acceptable overall in-phantom beam performance was obtained, with a maximum value of the therapeutic ratio (the ratio between the local tumor dose and the maximum healthy tissue dose) equal to 6.7. The described MCNP model of the Studsvik facility has been deemed adequate to evaluate further improvements in the beam design as well as to plan experimental work.

  13. A Coarse-Grained DNA Model Parameterized from Atomistic Simulations by Inverse Monte Carlo

    Directory of Open Access Journals (Sweden)

    Nikolay Korolev

    2014-05-01

    Full Text Available Computer modeling of very large biomolecular systems, such as long DNA polyelectrolytes or protein-DNA complexes like chromatin, cannot reach all-atom resolution in the foreseeable future, and this necessitates the development of coarse-grained (CG) approximations. DNA is both a highly charged and a mechanically rigid semi-flexible polymer, and adequate DNA modeling requires a correct description of both its structural stiffness and its salt-dependent electrostatic forces. Here, we present a novel CG model of DNA that approximates the DNA polymer as a chain of 5-bead units. Each unit represents two DNA base pairs, with one central bead for the bases and pentose moieties and four others for the phosphate groups. Charges and intra- and inter-molecular force field potentials for the CG DNA model were calculated using the inverse Monte Carlo method from all-atom molecular dynamics (MD) simulations of 22 bp DNA oligonucleotides. The CG model was tested by performing dielectric continuum Langevin MD simulations of a 200 bp double-helix DNA in solutions of monovalent salt with explicit ions. Excellent agreement with experimental data was obtained for the dependence of the DNA persistence length on salt concentration in the range 0.1-100 mM. The new CG DNA model is suitable for modeling various biomolecular systems with an adequate description of electrostatic and mechanical properties.

  14. A Monte Carlo model of hot electron trapping and detrapping in SiO2

    Science.gov (United States)

    Kamocsai, R. L.; Porod, W.

    1991-02-01

    High-field stressing and oxide degradation of SiO2 are studied using a microscopic model of electron heating and charge trapping and detrapping. Hot electrons lead to a charge buildup in the oxide according to the dynamic trapping-detrapping model by Nissan-Cohen and co-workers [Y. Nissan-Cohen, J. Shappir, D. Frohman-Bentchkowsky, J. Appl. Phys. 58, 2252 (1985)]. Detrapping events are modeled as trap-to-band impact ionization processes initiated by high energy conduction electrons. The detailed electronic distribution function obtained from Monte Carlo transport simulations is utilized for the determination of the detrapping rates. We apply our microscopic model to the calculation of the flat-band voltage shift in silicon dioxide as a function of the electric field, and we show that our model is able to reproduce the experimental results. We also compare these results to the predictions of the empirical trapping-detrapping model which assumes a heuristic detrapping cross section. Our microscopic theory accounts for the nonlocal nature of impact ionization which leads to a dark space close to the injecting cathode, which is unaccounted for in the empirical model.

  15. Backbone exponents of the two-dimensional q-state Potts model: a Monte Carlo investigation.

    Science.gov (United States)

    Deng, Youjin; Blöte, Henk W J; Nienhuis, Bernard

    2004-02-01

    We determine the backbone exponent X(b) of several critical and tricritical q-state Potts models in two dimensions. The critical systems include the bond percolation, the Ising, the q=2-sqrt[3], 3, and 4 state Potts, and the Baxter-Wu model, and the tricritical ones include the q=1 Potts model and the Blume-Capel model. For this purpose, we formulate several efficient Monte Carlo methods and sample the probability P2 of a pair of points connected via at least two independent paths. Finite-size-scaling analysis of P2 yields X(b) as 0.3566(2), 0.2696(3), 0.2105(3), and 0.127(4) for the critical q=1, 2, 3, and 4 state Potts models, respectively. At tricriticality, we obtain X(b)=0.0520(3) and 0.0753(6) for the q=1 and 2 Potts models, respectively. For the critical q-->0 Potts model it is derived that X(b)=3/4. From a scaling argument, we find that, at tricriticality, X(b) reduces to the magnetic exponent, as confirmed by the numerical results.
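
    The finite-size-scaling step can be illustrated with a toy fit. Assuming the two-path connection probability decays as P2 ~ L^(-2 X_b) at criticality (one common scaling ansatz; the paper's actual analysis is more refined), the exponent follows from a log-log slope. The data below are synthetic, generated around the percolation value quoted in the record.

```python
import numpy as np

# Hypothetical P2(L) values at criticality; in the paper these come from
# dedicated Monte Carlo estimates of the two-path connection probability.
L = np.array([8, 16, 32, 64, 128, 256])
Xb_true = 0.3566  # percolation (q=1) value quoted above
noise = 1 + 0.01 * np.random.default_rng(1).standard_normal(L.size)
P2 = 0.9 * L ** (-2 * Xb_true) * noise

# A least-squares slope on a log-log scale recovers the exponent.
slope, intercept = np.polyfit(np.log(L), np.log(P2), 1)
print(f"estimated X_b = {-slope / 2:.4f}")  # ~0.3566 up to noise
```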

  16. The effort-reward imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women.

    Science.gov (United States)

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-09-17

    We examined the influence of work-related effort-reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort-Reward Imbalance (ERI) model, on the hypothalamic-pituitary-adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort-reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort-reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort-reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported.

  17. Teaching Monte Carlo Strategies for Earth System Modelling using a Guided Group-Learning Approach in the Classroom

    Science.gov (United States)

    Wagener, T.; Pianosi, F.; Woods, R. A.

    2016-12-01

    The need for quantifying uncertainty in earth system modelling has now been well established on both scientific and policy-making grounds. There is an urgent need to bring the skills and tools needed for doing so into practice. However, such topics are currently largely constrained to specialist graduate courses or to short courses for PhD students. Teaching the advanced skills needed for implementing and for using uncertainty analysis is difficult because students feel that it is inaccessible and it can be boring if presented using frontal teaching in the classroom. While we have made significant advancement in sharing teaching material, sometimes even including teaching notes (Wagener et al., 2012, Hydrology and Earth System Sciences), there is great need for understanding how we can bring such advanced topics into the undergraduate (and even graduate) curriculum in an effective manner. We present the results of our efforts to teach Matlab-based tools for uncertainty quantification in earth system modelling in a civil engineering undergraduate course. We use the example of teaching Monte Carlo strategies, the basis for the most widely used uncertainty quantification approaches, through the use of guided group-learning activities in the classroom. We utilize a three-step approach: [1] basic introduction to the problem, [2] guided group-learning to develop a possible solution, [3] comparison of possible solutions with state-of-the-art algorithms across groups. Our initial testing in an undergraduate course suggests that (i) overall students find a group-learning approach more engaging, (ii) that different students take charge of advancing the discussion at different stages or for different problems, and (iii) that making appropriate suggestions (facilitator) to guide the discussion keeps the speed of advancement sufficiently high. We present the approach, our initial results and suggest how a wider course on earth system modelling could be formulated in this manner.
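
    A plausible starter exercise for step [1] of such a guided session (an illustration, not taken from the course itself) is hit-or-miss estimation of pi, which exposes the 1/sqrt(n) convergence that motivates more elaborate Monte Carlo strategies:

```python
import numpy as np

def estimate_pi(n_samples, seed=0):
    """Hit-or-miss Monte Carlo: the fraction of random points in the
    unit square that land inside the quarter circle estimates pi/4."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n_samples), rng.random(n_samples)
    return 4.0 * np.mean(x**2 + y**2 <= 1.0)

for n in (10**2, 10**4, 10**6):
    print(n, estimate_pi(n))  # error shrinks roughly as 1/sqrt(n)
```

    Groups can be asked to predict how the error changes with n before running it, which naturally leads into the sampling-strategy discussion of step [2].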

  18. The effect of the number of selected points in Monte Carlo modeling of polymerization reactions: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques for modeling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. Because Monte Carlo calculations are based on random number generation and reaction probability determinations, the number of algorithm repetitions (the selected volume of reactor used for modeling, which determines the number of initial molecules) is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results unless the number of molecules is large enough, because otherwise the selected volume would not be representative of the whole system.
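
    The record's point about sample size can be reproduced in a few lines: a stochastic simulation of first-order initiator decomposition converges to the analytic survival fraction exp(-kt) only when enough molecules are tracked. This is an illustrative sketch, not the authors' code; the rate constant and counts are arbitrary.

```python
import numpy as np

def simulate_initiation(n_molecules, k=1.0, dt=0.01, t_end=2.0, seed=0):
    """Stochastic first-order initiator decomposition I -> 2R*.
    Returns the fraction of initiator remaining at t_end."""
    rng = np.random.default_rng(seed)
    n = n_molecules
    p = 1.0 - np.exp(-k * dt)    # per-step decomposition probability
    for _ in range(int(t_end / dt)):
        n -= rng.binomial(n, p)  # each surviving molecule may react
    return n / n_molecules

exact = np.exp(-1.0 * 2.0)       # analytic survival fraction e^(-kt)
for n0 in (100, 10_000, 1_000_000):
    print(n0, simulate_initiation(n0), "exact:", round(exact, 4))
```

    With 100 molecules the stochastic estimate scatters visibly around the analytic value; with a million it matches to several digits, which is the convergence behavior the abstract describes.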

  19. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    Science.gov (United States)

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
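
    The generic Monte Carlo workflow such a tool automates can be sketched as follows, with a toy exponential-tail model standing in for OTIS and RMSE as the objective function. The model form, parameter names, ranges, and the behavioral threshold are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_tracer_model(t, alpha, a_ratio):
    """Stand-in for a transient-storage simulation: an exponential
    tail whose shape depends on an exchange rate and a storage ratio."""
    return np.exp(-alpha * t) + a_ratio * alpha * t * np.exp(-alpha * t)

t = np.linspace(0, 10, 50)
observed = toy_tracer_model(t, 0.8, 0.3) + 0.01 * rng.standard_normal(t.size)

# Monte Carlo sampling of parameter space, scoring each set by RMSE.
samples = rng.uniform([0.1, 0.0], [2.0, 1.0], size=(5000, 2))
rmse = np.array([np.sqrt(np.mean((toy_tracer_model(t, a, b) - observed) ** 2))
                 for a, b in samples])

# "Behavioral" sets: those nearly as good as the best one found; wide
# behavioral ranges signal poorly identified (uncertain) parameters.
behavioral = samples[rmse < 1.5 * rmse.min()]
print("best RMSE:", rmse.min())
print("behavioral ranges:", behavioral.min(axis=0), behavioral.max(axis=0))
```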

  20. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    H. Machguth

    2008-12-01

    Full Text Available By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tuned to observed mass balance for the investigated time period and its robustness was tested by comparing observed and modelled mass balance over 11 years, yielding very small deviations. Both systematic and random uncertainties are assigned to twelve input parameters and their respective values estimated from the literature or from available meteorological data sets. The calculated overall uncertainty in the model output is dominated by systematic errors and amounts to 0.7 m w.e. or approximately 10% of total melt over the investigated time span. In order to provide a first order estimate on variability in uncertainty depending on the quality of input data, we conducted a further experiment, calculating overall uncertainty for different levels of uncertainty in measured global radiation and air temperature. Our results show that the output of a well calibrated model is subject to considerable uncertainties, in particular when applied for extrapolation in time and space where systematic errors are likely to be an important issue.
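
    A schematic version of this uncertainty propagation, assuming a toy degree-day melt model and collapsing the twelve inputs to one forcing series plus one parameter, might look like the following; all magnitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
days = 400
temp = 2.0 + 8.0 * np.sin(np.linspace(0, 2 * np.pi, days))  # synthetic forcing

def melt(temp_series, ddf):
    """Toy degree-day melt model (m w.e.): melt on positive-degree days only."""
    return ddf * np.sum(np.maximum(temp_series, 0.0))

n_mc, ddf_nominal = 2000, 0.005
totals = np.empty(n_mc)
for i in range(n_mc):
    t_sys = rng.normal(0.0, 0.5)                  # systematic sensor bias
    t_rand = rng.normal(0.0, 1.0, days)           # random daily error
    ddf = ddf_nominal * (1 + rng.normal(0, 0.1))  # parameter uncertainty
    totals[i] = melt(temp + t_sys + t_rand, ddf)
print(f"cumulative melt = {totals.mean():.2f} +/- {totals.std():.2f} m w.e.")
```

    Because the systematic offset is drawn once per realization while the random error is redrawn daily (and largely averages out over 400 days), the spread of the ensemble is dominated by the systematic terms, mirroring the record's finding.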

  2. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.

  3. Monte Carlo modeling of cavity imaging in pure iron using back-scatter electron scanning microscopy

    Science.gov (United States)

    Yan, Qiang; Gigax, Jonathan; Chen, Di; Garner, F. A.; Shao, Lin

    2016-11-01

    Backscattered electrons (BSE) in a scanning electron microscope (SEM) can produce images of subsurface cavity distributions as a nondestructive characterization technique. Monte Carlo simulations were performed to understand the mechanism of void imaging and to identify key parameters in optimizing void resolution. The modeling explores an iron target of different thicknesses, electron beams of different energies, beam sizes, and scan pitch, evaluated for voids of different sizes and depths below the surface. The results show that the void image contrast is primarily caused by discontinuity of energy spectra of backscattered electrons, due to increased outward path lengths for those electrons which penetrate voids and are backscattered at deeper depths. Size resolution of voids at specific depths, and maximum detection depth of specific voids sizes are derived as a function of electron beam energy. The results are important for image optimization and data extraction.

  4. Macroion solutions in the cell model studied by field theory and Monte Carlo simulations.

    Science.gov (United States)

    Lue, Leo; Linse, Per

    2011-12-14

    Aqueous solutions of charged spherical macroions with variable dielectric permittivity and their associated counterions are examined within the cell model using a field theory and Monte Carlo simulations. The field theory is based on separation of fields into short- and long-wavelength terms, which are subjected to different statistical-mechanical treatments. The simulations were performed by using a new, accurate, and fast algorithm for numerical evaluation of the electrostatic polarization interaction. The field theory provides counterion distributions outside a macroion in good agreement with the simulation results over the full range from weak to strong electrostatic coupling. A low-dielectric macroion leads to a displacement of the counterions away from the macroion.

  5. Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene

    CERN Document Server

    Smith, Dominik

    2013-01-01

    In this work, results are presented of Hybrid-Monte-Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\\alpha$ of the system crosses above a certain threshold $\\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made and it is established that $\\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.

  6. Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research

    Science.gov (United States)

    Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.

    2002-01-01

    Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.
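
    For readers new to MCMC, a self-contained random-walk Metropolis sampler for a single survival probability (binomial likelihood, flat prior) conveys the basic idea the authors stress requires no extraordinary mathematics. The data and proposal scale below are hypothetical, and real wildlife analyses would use hierarchical models like the kittiwake example.

```python
import numpy as np

rng = np.random.default_rng(0)
survived, marked = 43, 60  # hypothetical mark-recapture counts

def log_post(phi):
    """Log posterior for survival probability phi: binomial likelihood
    with a flat prior on (0, 1)."""
    if not 0.0 < phi < 1.0:
        return -np.inf
    return survived * np.log(phi) + (marked - survived) * np.log(1 - phi)

chain, phi = [], 0.5
for _ in range(20_000):
    prop = phi + rng.normal(0, 0.05)  # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(phi):
        phi = prop                    # Metropolis accept step
    chain.append(phi)
post = np.array(chain[5000:])         # discard burn-in
print(f"posterior mean {post.mean():.3f}, 95% CI "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```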

  7. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    CERN Document Server

    Peixoto, Tiago P

    2014-01-01

    We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where $N$ is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.

  8. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector.

    Science.gov (United States)

    Cabal, Fatima Padilla; Lopez-Pino, Neivy; Bernal-Castillo, Jose Luis; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar

    2010-12-01

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ((241)Am, (133)Ba, (22)Na, (60)Co, (57)Co, (137)Cs and (152)Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.

  9. FPGA Hardware Acceleration of Monte Carlo Simulations for the Ising Model

    CERN Document Server

    Ortega-Zamorano, Francisco; Cannas, Sergio A; Jerez, José M; Franco, Leonardo

    2016-01-01

    A two-dimensional Ising model with nearest-neighbor ferromagnetic interactions is implemented on a Field Programmable Gate Array (FPGA) board. Extensive Monte Carlo simulations were carried out using an efficient hardware representation of individual spins and a combined global-local LFSR random number generator. Consistent results regarding the descriptive properties of magnetic systems, like energy, magnetization and susceptibility, are obtained, while a speed-up factor of approximately 6 times is achieved in comparison to previous FPGA-based published works and almost $10^4$ times in comparison to a standard CPU simulation. A detailed description of the logic design used is given together with a careful analysis of the quality of the random number generator used. The obtained results confirm the potential of FPGAs for analyzing the statistical mechanics of magnetic systems.
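
    For scale, the software baseline that such hardware speed-ups are measured against is a plain single-spin-flip Metropolis loop like the sketch below (Python rather than the FPGA's logic design; the lattice size and coupling are arbitrary choices, with beta set near the critical value).

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D nearest-neighbor Ising model
    with periodic boundaries (the CPU baseline an FPGA competes with)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(1)
L, beta = 32, 0.44  # beta near the critical coupling of the square lattice
spins = rng.choice([-1, 1], size=(L, L))
for sweep in range(200):
    metropolis_sweep(spins, beta, rng)
print("magnetization per spin:", abs(spins.sum()) / L**2)
```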

  10. A Thermodynamic Model for Square-well Chain Fluid: Theory and Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A thermodynamic model for freely jointed square-well chain fluids was developed based on the thermodynamic perturbation theory of Barker-Henderson, Zhang, and Wertheim. In this derivation, Zhang's expressions for square-well monomers, improved from the Barker-Henderson compressibility approximation, were adopted for the reference fluid, and Wertheim's polymerization method was used to obtain the free-energy term due to bond connectivity. An analytic expression for the Helmholtz free energy of square-well chain fluids was obtained. The expression, without adjustable parameters, leads to thermodynamically consistent predictions of the compressibility factors, residual internal energy, and constant-volume heat capacity for dimer, 4-mer, 8-mer, and 16-mer square-well fluids. The results are in good agreement with Monte Carlo simulation. To obtain the needed MC data for the residual internal energy and the constant-volume heat capacity, NVT MC simulations were performed for these square-well chain fluids.

  11. World-line quantum Monte Carlo algorithm for a one-dimensional Bose model

    Energy Technology Data Exchange (ETDEWEB)

    Batrouni, G.G. (Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142 (United States)); Scalettar, R.T. (Physics Department, University of California, Davis, California 95616 (United States))

    1992-10-01

    In this paper we provide a detailed description of the ground-state phase diagram of interacting, disordered bosons on a lattice. We describe a quantum Monte Carlo algorithm that incorporates in an efficient manner the required bosonic wave-function symmetry. We consider the ordered case, where we evaluate the compressibility gap and show the lowest three Mott insulating lobes. We obtain the critical ratio of interaction strength to hopping at which the onset of superfluidity occurs for the first lobe, and the critical exponents {nu} and {ital z}. For the disordered model we show the effect of randomness on the phase diagram and the superfluid correlations. We also measure the response of the superfluid density, {rho}{sub {ital s}}, to external perturbations. This provides an unambiguous characterization of the recently observed Bose and Anderson glass phases.

  12. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper intends to present an extension of the constrained-path quantum Monte Carlo approach allowing to reconstruct non-yrast states in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  13. Optimization of a Monte Carlo Model of the Transient Reactor Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kristin; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that lack representations which can be easily evaluated, while minimizing the amount of computational resources used. With the advances of large-scale computing centers during the last twenty years, researchers have been able to create a multitude of tools that minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT in order to optimize the calculational effort.

  14. Mental Effort and Perceptions of TV and Books: A Dutch Replication Study Based on Salomon's Model of Learning.

    Science.gov (United States)

    Beentjes, Hans W. J.

    This comparison of students' learning from reading books and from watching television uses Gavriel Salomon's model of learning effects, which is based on the amount of mental effort invested (AIME) in a medium as determining how deeply the information from that medium is processed. Mental effort, in turn, is predicted to depend on two perceptions…

  15. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm(2) field size and dose profiles for a 40 × 40 cm(2) field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm(2) to 30 × 30 cm(2) . The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.

  16. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard‐Jones (L‐J) and Buckingham exponential‐6 (exp‐6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Runs in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data in the literature; both models showed close agreement with the experimental data. In parallel, runs below the critical temperature were performed in the Gibbs ensemble using the L‐J model only. Upon comparing results with experimental ones, a good fit with small deviations was obtained. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were thus successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determining elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
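
    A minimal canonical-ensemble Metropolis loop with a Lennard-Jones pair potential, in the spirit of the runs described (though far smaller and in reduced units); the particle count, box size, temperature, and step size below are arbitrary choices, and a production code would use cell lists and energy differences rather than full recomputation.

```python
import numpy as np

rng = np.random.default_rng(3)

def lj(r2, eps=1.0, sig=1.0):
    """Lennard-Jones pair energy evaluated from a squared distance."""
    s6 = (sig * sig / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

def total_energy(pos, box):
    e = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)  # minimum-image convention
        e += lj((d * d).sum(axis=1)).sum()
    return e

n, box, beta = 64, 6.0, 1.0 / 2.0     # reduced units, T* = 2
pos = rng.random((n, 3)) * box
energy = total_energy(pos, box)
for step in range(5000):              # canonical (NVT) Metropolis moves
    trial = pos.copy()
    k = rng.integers(n)
    trial[k] = (trial[k] + rng.uniform(-0.1, 0.1, 3)) % box
    e_new = total_energy(trial, box)
    if np.log(rng.random()) < -beta * (e_new - energy):
        pos, energy = trial, e_new    # accept the displacement
print("energy per particle:", energy / n)
```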

  17. Worm Monte Carlo study of the honeycomb-lattice loop model

    Energy Technology Data Exchange (ETDEWEB)

    Liu Qingquan, E-mail: liuqq@mail.ustc.edu.c [Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027 (China); Deng Youjin, E-mail: yjdeng@ustc.edu.c [Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027 (China); Garoni, Timothy M., E-mail: t.garoni@ms.unimelb.edu.a [ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia)

    2011-05-11

    We present a Markov-chain Monte Carlo algorithm of worm type that correctly simulates the O(n) loop model on any (finite and connected) bipartite cubic graph, for any real n>0, and any edge weight, including the fully-packed limit of infinite edge weight. Furthermore, we prove rigorously that the algorithm is ergodic and has the correct stationary distribution. We emphasize that by using known exact mappings when n=2, this algorithm can be used to simulate a number of zero-temperature Potts antiferromagnets for which the Wang-Swendsen-Kotecky cluster algorithm is non-ergodic, including the 3-state model on the kagome lattice and the 4-state model on the triangular lattice. We then use this worm algorithm to perform a systematic study of the honeycomb-lattice loop model as a function of n{<=}2, on the critical line and in the densely-packed and fully-packed phases. By comparing our numerical results with Coulomb gas theory, we identify a set of exact expressions for scaling exponents governing some fundamental geometric and dynamic observables. In particular, we show that for all n{<=}2, the scaling of a certain return time in the worm dynamics is governed by the magnetic dimension of the loop model, thus providing a concrete dynamical interpretation of this exponent. The case n>2 is also considered, and we confirm the existence of a phase transition in the 3-state Potts universality class that was recently observed via numerical transfer matrix calculations.

  18. Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.

    Science.gov (United States)

    Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred

    2012-02-01

    Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A range of equipment models is implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency for in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing.

  19. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization, which was proposed by the authors, is a more practical method, and it has become known that it can solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis; therefore, a projection operator is used to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies of the resulting basis states are evaluated, with states adopted selectively. The symmetry is discussed, and a method of decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces was devised. The calculation process is illustrated with the example of {sup 50}Mn nuclei. The level structure of {sup 48}Cr, for which exact energies are known, can be calculated with an accuracy in absolute energy within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell model calculations of the {sup 56}Ni nucleus structure using the interactions of nuclear models are reported. (K.I.)

  20. Modeling of composite latex particle morphology by off-lattice Monte Carlo simulation.

    Science.gov (United States)

    Duda, Yurko; Vázquez, Flavio

    2005-02-01

    Composite latex particles have shown a great range of applications such as paint resins, varnishes, water borne adhesives, impact modifiers, etc. The high-performance properties of this kind of materials may be explained in terms of a synergistical combination of two different polymers (usually a rubber and a thermoplastic). A great variety of composite latex particles with very different morphologies may be obtained by two-step emulsion polymerization processes. The formation of specific particle morphology depends on the chemical and physical nature of the monomers used during the synthesis, the process temperature, the reaction initiator, the surfactants, etc. Only a few models have been proposed to explain the appearance of the composite particle morphologies. These models have been based on the change of the interfacial energies during the synthesis. In this work, we present a new three-component model: Polymer blend (flexible and rigid chain particles) is dispersed in water by forming spherical cavities. Monte Carlo simulations of the model in two dimensions are used to determine the density distribution of chains and water molecules inside the suspended particle. This approach allows us to study the dependence of the morphology of the composite latex particles on the relative hydrophilicity and flexibility of the chain molecules as well as on their density and composition. It has been shown that our simple model is capable of reproducing the main features of the various morphologies observed in synthesis experiments.

  2. Corner wetting in the two-dimensional Ising model: Monte Carlo results

    Energy Technology Data Exchange (ETDEWEB)

    Albano, E V [INIFTA, Universidad Nacional de La Plata, CC 16 Suc. 4, 1900 La Plata (Argentina); Virgiliis, A De [INIFTA, Universidad Nacional de La Plata, CC 16 Suc. 4, 1900 La Plata (Argentina); Mueller, M [Institut fuer Physik, Johannes Gutenberg Universitaet, Staudinger Weg 7, D-55099 Mainz (Germany); Binder, K [Institut fuer Physik, Johannes Gutenberg Universitaet, Staudinger Weg 7, D-55099 Mainz (Germany)

    2003-01-29

    Square LxL (L=24-128) Ising lattices with nearest neighbour ferromagnetic exchange are considered using free boundary conditions at which boundary magnetic fields are applied, i.e., at the two boundary rows ending at the lower left corner a field +h acts, while at the two boundary rows ending at the upper right corner a field -h acts. For temperatures T less than the critical temperature T{sub c} of the bulk, this boundary condition leads to the formation of two domains with opposite orientations of the magnetization direction, separated by an interface which for T larger than the filling transition temperature T{sub f}(h) runs from the upper left corner to the lower right corner, while for T smaller than T{sub f}(h) the interface is bound to one of the two corners. This corner wetting behaviour is studied by Monte Carlo simulations. In particular, it is shown that for T=T{sub f}(h) the magnetization profile m(z) in the z-direction normal to the interface is simply linear and the interfacial width scales as w {proportional_to} L, while for T>T{sub f}(h) it scales as w {proportional_to} {radical}L. The distribution P(l) of the interface position l (measured along the z-direction from the corners) decays exponentially for T below T{sub f}(h) but not for T above T{sub f}(h). Furthermore, the Monte Carlo data are compatible with a mean interface position varying as (T{sub f}(h) - T){sup -1} and a finite size scaling of the total magnetization according to M(L, T) = M-tilde((1 - T/T{sub f}(h)){sup {nu}{sub perp}} L) with {nu}{sub perp} = 1. Unlike the findings for critical wetting in the thin film geometry of the Ising model, the Monte Carlo results for corner wetting are in very good agreement with the theoretical predictions.

  3. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier against the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their rapid alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30-to-50 m long and 3-to-15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections into different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge, and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high-strain deformation, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  4. Monte Carlo modeling in CT-based geometries: dosimetry for biological modeling experiments with particle beam radiation.

    Science.gov (United States)

    Diffenderfer, Eric S; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K; McDonough, James; Cengel, Keith A

    2014-03-01

    The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations.

  5. Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)

    2012-06-15

    Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB code was able to grow semi-realistic cell distributions ({approx}2 x 10{sup 8} cells in 1 cm{sup 3}) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
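
    A stripped-down sketch of the placement stage: rejection sampling of non-overlapping cells. Spheres are used here so the overlap test is a one-liner; the published algorithm handles rotated ellipsoids via an eigenvalue-based test, and the box size, radii, and counts below are hypothetical.

```python
import numpy as np

def grow_cell_volume(n_cells, box=100.0, r_range=(2.0, 4.0), seed=0):
    """Rejection-sampling placement of non-overlapping spherical 'cells'.
    (The published algorithm handles randomly rotated ellipsoids with an
    eigenvalue overlap test; spheres keep this sketch short.)"""
    rng = np.random.default_rng(seed)
    centers, radii = [], []
    while len(centers) < n_cells:
        c = rng.uniform(0, box, 3)          # trial center
        r = rng.uniform(*r_range)           # trial radius
        if all(np.linalg.norm(c - c2) > r + r2
               for c2, r2 in zip(centers, radii)):
            centers.append(c)               # keep only non-overlapping cells
            radii.append(r)
    return np.array(centers), np.array(radii)

centers, radii = grow_cell_volume(500)
print("placed", len(centers), "cells; mean radius", radii.mean().round(2))
```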

  6. Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method

    Science.gov (United States)

    Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.

    2000-07-01

    This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and achieve its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and follow with the application of one of them in the SIRS model. The working method chosen is based on the Poisson process, where a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted under and in accordance with aspects of the herd-immunity concept.
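
    The Poisson-process scheme described (exponential waiting times between events, one event per step) is essentially the Gillespie algorithm; a compact SIRS version under assumed rate constants might read as follows. The rates and population sizes are arbitrary illustrations, not the paper's values.

```python
import numpy as np

def gillespie_sirs(S, I, R, beta=0.3, gamma=0.1, xi=0.05, t_end=200, seed=0):
    """Dynamical Monte Carlo (Gillespie) simulation of the SIRS model:
    exponential waiting times between events, one event per step."""
    rng = np.random.default_rng(seed)
    N, t, history = S + I + R, 0.0, []
    while t < t_end and I > 0:
        rates = np.array([beta * S * I / N,  # infection  S -> I
                          gamma * I,         # removal    I -> R
                          xi * R])           # waning     R -> S
        total = rates.sum()
        t += rng.exponential(1.0 / total)    # Poisson-process waiting time
        event = rng.choice(3, p=rates / total)
        if event == 0:   S, I = S - 1, I + 1
        elif event == 1: I, R = I - 1, R + 1
        else:            R, S = R - 1, S + 1
        history.append((t, S, I, R))
    return history

traj = gillespie_sirs(990, 10, 0)
print("final state (t, S, I, R):", traj[-1])
```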

  7. Critical Casimir force and its fluctuations in lattice spin models: exact and Monte Carlo results.

    Science.gov (United States)

    Dantchev, Daniel; Krech, Michael

    2004-04-01

    We present general arguments and construct a stress tensor operator for finite lattice spin models. The average value of this operator gives the Casimir force of the system close to the bulk critical temperature T(c). We verify our arguments via exact results for the force in the two-dimensional Ising model, the d-dimensional Gaussian model, and the mean spherical model with 2 < d < 4. Via Monte Carlo simulations for three-dimensional Ising, XY, and Heisenberg models we demonstrate that the standard deviation of the Casimir force F(C) in a slab geometry confining a critical substance in-between is k(b) TD(T) (A/ a(d-1) )(1/2), where A is the surface area of the plates, a is the lattice spacing, and D(T) is a slowly varying nonuniversal function of the temperature T. The numerical calculations demonstrate that at the critical temperature T(c) the force possesses a Gaussian distribution centered at the mean value of the force k(b) T(c) (d-1)Delta/ (L/a)(d), where L is the distance between the plates and Delta is the (universal) Casimir amplitude.

  8. Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models

    Science.gov (United States)

    Binder, Kurt; Luijten, Erik

    2001-04-01

    A critical review is given of status and perspectives of Monte Carlo simulations that address bulk and interfacial phase transitions of ferromagnetic Ising models. First, some basic methodological aspects of these simulations are briefly summarized (single-spin flip vs. cluster algorithms, finite-size scaling concepts), and then the application of these techniques to the nearest-neighbor Ising model in d=3 and 5 dimensions is described, and a detailed comparison to theoretical predictions is made. In addition, the case of Ising models with a large but finite range of interaction and the crossover scaling from mean-field behavior to the Ising universality class are treated. If one considers instead a long-range interaction described by a power-law decay, new classes of critical behavior depending on the exponent of this power law become accessible, and a stringent test of the ε-expansion becomes possible. As a final type of crossover from mean-field type behavior to two-dimensional Ising behavior, the interface localization-delocalization transition of Ising films confined between “competing” walls is considered. This problem is still hampered by questions regarding the appropriate coarse-grained model for the fluctuating interface near a wall, which is the starting point for both this problem and the theory of critical wetting.

  9. Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.

    Science.gov (United States)

    Edwards, Stephen; Schick, Daniel

    2016-04-01

    Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model.

  10. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Neal, E-mail: neal.parsons@cd-adapco.com; Levin, Deborah A., E-mail: deblevin@illinois.edu [Department of Aerospace Engineering, The Pennsylvania State University, 233 Hammond Building, University Park, Pennsylvania 16802 (United States); Duin, Adri C. T. van, E-mail: acv13@engr.psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States); Zhu, Tong, E-mail: tvz5037@psu.edu [Department of Aerospace Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States)

    2014-12-21

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N{sub 2}({sup 1}Σ{sub g}{sup +})-N{sub 2}({sup 1}Σ{sub g}{sup +}) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  11. A background error covariance model of significant wave height employing Monte Carlo simulation

    Institute of Scientific and Technical Information of China (English)

    GUO Yanyou; HOU Yijun; ZHANG Chunmei; YANG Jie

    2012-01-01

    The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under an assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropy and inhomogeneous errors.

  12. The intentionality model and language acquisition: engagement, effort, and the essential tension in development.

    Science.gov (United States)

    Bloom, L; Tinker, E

    2001-01-01

    The purpose of the longitudinal research reported in this Monograph was to examine language acquisition in the second year of life in the context of developments in cognition, affect, and social connectedness. The theoretical focus for the research is on the agency of the child and the importance of the child's intentionality for explaining development, rather than on language as an independent object. The model of development for the research is a Model of Intentionality with two components: the engagement in a world of persons and objects that motivates acquiring a language, and the effort that is required to express and articulate increasingly discrepant and elaborate intentional state representations. The fundamental assumption in the model is that the driving force for acquiring language is in the essential tension between engagement and effort for linguistic, emotional, and physical actions of interpretation and expression. Results of lag sequential analyses are reported to show how different behaviors--words, sentences, emotional expressions, conversational interactions, and constructing thematic relations between objects in play--converged, both in the stream of children's actions in everyday events, in real time, and in developmental time between the emergence of words at about 13 months and the transition to simple sentences at about 2 years of age. Patterns of deviation from baseline rates of the different behaviors show that child emotional expression, child speech, and mother speech clearly influence each other, and the mutual influences between them are different at times of either emergence or achievement in both language and object play. The three conclusions that follow from the results of the research are that (a) expression and interpretation are the acts of performance in which language is learned, which means that performance counts for explaining language acquisition; (b) language is not an independent object but is acquired by a child in

  13. Habitat models to assist plant protection efforts in Shenandoah National Park, Virginia, USA

    Science.gov (United States)

    Van Manen, F.T.; Young, J.A.; Thatcher, C.A.; Cass, W.B.; Ulrey, C.

    2005-01-01

During 2002, the National Park Service initiated a demonstration project to develop science-based law enforcement strategies for the protection of at-risk natural resources, including American ginseng (Panax quinquefolius L.), bloodroot (Sanguinaria canadensis L.), and black cohosh (Cimicifuga racemosa (L.) Nutt. [syn. Actaea racemosa L.]). Harvest pressure on these species is increasing because of the growing herbal remedy market. We developed habitat models for Shenandoah National Park and the northern portion of the Blue Ridge Parkway to determine the distribution of favorable habitats of these three plant species and to demonstrate the use of that information to support plant protection activities. We compiled locations for the three plant species to delineate favorable habitats with a geographic information system (GIS). We mapped potential habitat quality for each species by calculating a multivariate statistic, Mahalanobis distance, based on GIS layers that characterized the topography, land cover, and geology of the plant locations (10-m resolution). We tested model performance with an independent dataset of plant locations, which indicated a significant relationship between Mahalanobis distance values and species occurrence. We also generated null models by examining the distribution of the Mahalanobis distance values had plants been distributed randomly. For all species, the habitat models performed markedly better than their respective null models. We used our models to direct field searches to the most favorable habitats, resulting in a sizeable number of new plant locations (82 ginseng, 73 bloodroot, and 139 black cohosh locations). The odds of finding new plant locations based on the habitat models were 4.5 (black cohosh) to 12.3 (American ginseng) times greater than random searches; thus, the habitat models can be used to improve the efficiency of plant protection efforts (e.g., marking of plants, law enforcement activities). The field searches also
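
    The habitat statistic itself is straightforward to reproduce; a hedged numpy sketch (variable names are illustrative, not the authors' code) in which smaller squared Mahalanobis distances indicate more favorable habitat:

      import numpy as np

      # plant_env: (n_locations, n_vars) GIS variables at known plant sites;
      # grid_env: (n_cells, n_vars) the same variables for every 10-m cell.
      def mahalanobis_map(plant_env, grid_env):
          mu = plant_env.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(plant_env, rowvar=False))
          d = grid_env - mu
          return np.einsum('ij,jk,ik->i', d, cov_inv, d)  # squared distances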

  14. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte , H.W.J.

    1996-01-01

The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity transformation.

  15. Quantum Monte Carlo study of the itinerant-localized model of strongly correlated electrons: Spin-spin correlation functions

    OpenAIRE

    Ivantsov, Ilya; Ferraz, Alvaro; Kochetov, Evgenii

    2016-01-01

    We perform quantum Monte Carlo simulations of the itinerant-localized periodic Kondo-Heisenberg model for the underdoped cuprates to calculate the associated spin correlation functions. The strong electron correlations are shown to play a key role in the abrupt destruction of the quasi long-range antiferromagnetic order in the lightly doped regime.

  16. Quantum Monte Carlo study of the itinerant-localized model of strongly correlated electrons: Spin-spin correlation functions

    Science.gov (United States)

    Ivantsov, Ilya; Ferraz, Alvaro; Kochetov, Evgenii

    2016-12-01

    We perform quantum Monte Carlo simulations of the itinerant-localized periodic Kondo-Heisenberg model for the underdoped cuprates to calculate the associated spin correlation functions. The strong electron correlations are shown to play a key role in the abrupt destruction of the quasi-long-range antiferromagnetic order in the lightly doped regime.

  17. Quantum Monte Carlo study of the cooperative binding of NO2 to fragment models of carbon nanotubes

    NARCIS (Netherlands)

    Lawson, John W.; Bauschlicher Jr., Charles W.; Toulouse, Julien; Filippi, Claudia; Umrigar, C.J.

    2008-01-01

Previous calculations on model systems for the cooperative binding of two NO2 molecules to carbon nanotubes using density functional theory and second order Moller–Plesset perturbation theory gave results differing by 30 kcal/mol. Quantum Monte Carlo calculations are performed to study the role of electron correlation.

  18. Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model

    Science.gov (United States)

    Owens, Corina M.

    2011-01-01

    Numerous ways to meta-analyze single-case data have been proposed in the literature, however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…

  19. A Monte-Carlo study for the critical exponents of the three-dimensional O(6) model

    Science.gov (United States)

    Loison, D.

    1999-09-01

    Using Wolff's single-cluster Monte-Carlo update algorithm, the three-dimensional O(6)-Heisenberg model on a simple cubic lattice is simulated. With the help of finite size scaling we compute the critical exponents ν, β, γ and η. Our results agree with the field-theory predictions but not so well with the prediction of the series expansions.
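
    For readers unfamiliar with the method, a compact sketch of Wolff's single-cluster update for an O(n) model on a simple cubic lattice follows (a standard textbook formulation, not the authors' code):

      import numpy as np

      def wolff_update(spins, beta, rng):
          """One single-cluster update; spins is an (L, L, L, n) array of
          unit vectors and beta the inverse temperature."""
          L, n = spins.shape[0], spins.shape[-1]
          r = rng.normal(size=n)
          r /= np.linalg.norm(r)                    # random reflection axis
          a = spins @ r                             # projections r . s_i
          seed = tuple(rng.integers(L, size=3))
          cluster, stack = {seed}, [seed]
          while stack:                              # grow cluster, flip at end
              i = stack.pop()
              for axis in range(3):
                  for step in (-1, 1):
                      j = list(i)
                      j[axis] = (j[axis] + step) % L
                      j = tuple(j)
                      if j in cluster:
                          continue
                      # embedded-Ising bond probability; zero when the
                      # projections of the two spins differ in sign
                      p = 1.0 - np.exp(min(0.0, -2.0 * beta * a[i] * a[j]))
                      if rng.random() < p:
                          cluster.add(j)
                          stack.append(j)
          for i in cluster:                         # reflect cluster spins about r
              spins[i] -= 2.0 * a[i] * r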

  20. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte , H.W.J.

    1996-01-01

The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity transformation.

  1. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  2. Automatic generation of a JET 3D neutronics model from CAD geometry data for Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tsige-Tamirat, H. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany)]. E-mail: tsige@irs.fzk.de; Fischer, U. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany); Carman, P.P. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Loughlin, M. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom)

    2005-11-15

    The paper describes the automatic generation of a JET 3D neutronics model from data of computer aided design (CAD) system for Monte Carlo (MC) calculations. The applied method converts suitable CAD data into a representation appropriate for MC codes. The converted geometry is fully equivalent to the CAD geometry.

  3. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    Science.gov (United States)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2017-01-01

Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
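
    A generic sketch of this sampler class, with a placeholder forward model and uniform priors (Metropolis-within-Gibbs single-parameter updates; the interface is assumed, not taken from the paper):

      import numpy as np

      def gibbs_mcmc(forward, data, sigma, lo, hi, n_iter=50000, rng=None):
          """forward(m) returns a synthetic receiver function for model m;
          lo, hi bound each model parameter (uniform priors)."""
          rng = rng or np.random.default_rng()
          m = rng.uniform(lo, hi)
          loglike = lambda m: -0.5 * np.sum(((data - forward(m)) / sigma) ** 2)
          ll = loglike(m)
          chain = np.empty((n_iter, m.size))
          for t in range(n_iter):
              k = t % m.size                        # cycle through parameters
              prop = m.copy()
              prop[k] = rng.uniform(lo[k], hi[k])   # resample one parameter
              ll_prop = loglike(prop)
              if np.log(rng.random()) < ll_prop - ll:
                  m, ll = prop, ll_prop
              chain[t] = m
          return chain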

  4. Improved hybrid Monte Carlo-fluid model for the electrical characteristics in an analytical radio-frequency glow discharge in argon

    NARCIS (Netherlands)

    Bogaerts, A.; Gijbels, R.; W. Goedheer,

    2001-01-01

    An improved hybrid Monte Carlo-fluid model for electrons, argon ions and fast argon atoms, is presented for the rf Grimm-type glow discharge. In this new approach, all electrons, including the large slow electron group in the bulk plasma, are treated with the Monte Carlo model. The calculation

  5. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Science.gov (United States)

    2016-01-01

Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data

  6. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS

  7. Calibration, characterisation and Monte Carlo modelling of a fast-UNCL

    Energy Technology Data Exchange (ETDEWEB)

    Tagziria, Hamid, E-mail: hamid.tagziria@jrc.ec.europa.eu [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Bagi, Janos; Peerani, Paolo [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Belian, Antony [Department of Safeguards, SGTS/TAU, IAEA Vienna Austria (Austria)

    2012-09-21

    This paper describes the calibration, characterisation and Monte Carlo modelling of a new IAEA Uranium Neutron Collar (UNCL) for LWR fuel, which can be operated in both passive and active modes. It can employ either 35 {sup 3}He tubes (in active configuration) or 44 tubes at 10 atm pressure (in its passive configuration) and thus can be operated in fast mode (with Cd liner) as its efficiency is higher than that of the standard UNCL. Furthermore, it has an adjustable internal cavity which allows the measurement of varying sizes of fuel assemblies such as WWER, PWR and BWR. It is intended to be used with Cd liners in active mode (with an AmLi interrogation source in place) by the inspectorate for the determination of the {sup 235}U content in fresh fuel assemblies, especially in cases where high concentrations of burnable poisons cause problems with accurate assays. A campaign of measurements has been carried out at the JRC Performance Laboratories (PERLA) in Ispra (Italy) using various radionuclide neutron sources ({sup 252}Cf, {sup 241}AmLi and PuGa) and our BWR and PWR reference assemblies, in order to calibrate and characterise the counter as well as assess its performance and determine its optimum operational parameters. Furthermore, the fast-UNCL has been extensively modelled at JRC using the Monte Carlo code, MCNP-PTA, which simulates both the neutron transport and the coincidence electronics. The model has been validated using our measurements which agreed well with calculations. The WWER1000 fuel assembly for which there are no representative reference materials for an adequate calibration of the counter, has also been modelled and the response of the counter to this fuel assembly has been simulated. Subsequently numerical calibrations curves have been obtained for the above fuel assemblies in various modes (fast and thermal). The sensitivity of the counter to fuel rods substitution as well as other important aspects and the parameters of the fast

  8. A Monte Carlo Method for Summing Modeled and Background Pollutant Concentrations.

    Science.gov (United States)

    Dhammapala, Ranil; Bowman, Clint; Schulte, Jill

    2017-02-23

Air quality analyses for permitting new pollution sources often involve modeling dispersion of pollutants using models like AERMOD. Representative background pollutant concentrations must be added to modeled concentrations to determine compliance with air quality standards. Summing 98(th) (or 99(th)) percentiles of two independent distributions that are unpaired in time overestimates air quality impacts and could needlessly burden sources with restrictive permit conditions. This problem is exacerbated when emissions and background concentrations peak during different seasons. Existing methods addressing this matter either require much input data, disregard source and background seasonality, or disregard the variability of the background by utilizing a single concentration for each season, month, hour-of-day, day-of-week or wind direction. The availability of representative background concentrations is another limitation. Here we report on work to improve permitting analyses, with the development of (1) daily gridded background concentrations interpolated from 12km-CMAQ forecasts and monitored data (a two-step interpolation reproduced measured background concentrations to within 6.2%); and (2) a Monte Carlo (MC) method to combine AERMOD output and background concentrations while respecting their seasonality. The MC method randomly combines, with replacement, data from the same months, and calculates 1000 estimates of the 98(th) or 99(th) percentiles. The design concentration of background + new source is the median of these 1000 estimates. We found that the AERMOD design value (DV) + background DV lay at the upper end of the distribution of these thousand 99(th) percentiles, while measured DVs were at the lower end. Our MC method sits between these two metrics and is sufficiently protective of public health in that it overestimates design concentrations somewhat. We also calculated probabilities of exceeding specified thresholds at each receptor, better informing
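
    The month-matched resampling step can be sketched as follows (array names and the seasonal pairing are paraphrased from the abstract; this is not the authors' code):

      import numpy as np

      def combined_design_value(modeled, background, months, percentile=99,
                                n_draws=1000, rng=None):
          """modeled/background: daily concentrations; months: month (1-12)
          of each day. Returns the median of n_draws percentile estimates."""
          rng = rng or np.random.default_rng()
          estimates = np.empty(n_draws)
          for i in range(n_draws):
              total = np.empty(modeled.size)
              for mo in range(1, 13):
                  idx = np.where(months == mo)[0]
                  # pair each modeled day with a random background day of
                  # the same month (sampling with replacement)
                  total[idx] = modeled[idx] + rng.choice(background[idx], idx.size)
              estimates[i] = np.percentile(total, percentile)
          return np.median(estimates)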

  9. The two-phase issue in the O(n) non-linear $\\sigma$-model: A Monte Carlo study

    OpenAIRE

    Alles, B.; Buonanno, A.; Cella, G.

    1996-01-01

We have performed a high statistics Monte Carlo simulation to investigate whether the two-dimensional O(n) non-linear sigma models are asymptotically free or they show a Kosterlitz-Thouless-like phase transition. We have calculated the mass gap and the magnetic susceptibility in the O(8) model with standard action and the O(3) model with Symanzik action. Our results for O(8) support the asymptotic freedom scenario.

  10. Monte Carlo modeling of photon transport in buried bone tissue layer for quantitative Raman spectroscopy

    Science.gov (United States)

    Wilson, Robert H.; Dooley, Kathryn A.; Morris, Michael D.; Mycek, Mary-Ann

    2009-02-01

    Light-scattering spectroscopy has the potential to provide information about bone composition via a fiber-optic probe placed on the skin. In order to design efficient probes, one must understand the effect of all tissue layers on photon transport. To quantitatively understand the effect of overlying tissue layers on the detected bone Raman signal, a layered Monte Carlo model was modified for Raman scattering. The model incorporated the absorption and scattering properties of three overlying tissue layers (dermis, subdermis, muscle), as well as the underlying bone tissue. The attenuation of the collected bone Raman signal, predominantly due to elastic light scattering in the overlying tissue layers, affected the carbonate/phosphate (C/P) ratio by increasing the standard deviation of the computational result. Furthermore, the mean C/P ratio varied when the relative thicknesses of the layers were varied and the elastic scattering coefficient at the Raman scattering wavelength of carbonate was modeled to be different from that at the Raman scattering wavelength of phosphate. These results represent the first portion of a computational study designed to predict optimal probe geometry and help to analyze detected signal for Raman scattering experiments involving bone.

  11. Modeling Monte Carlo of multileaf collimators using the code GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for evaluation of dose distributions in radiotherapy planning, the algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. The MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and then the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part the simulation of the transport of particles (sampled from the phase space) in certain configurations of irradiation field is performed to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the code Geant4. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  12. Tests of the modified Sigmund model of ion sputtering using Monte Carlo simulations

    Science.gov (United States)

    Hofsäss, Hans; Bradley, R. Mark

    2015-05-01

    Monte Carlo simulations are used to evaluate the Modified Sigmund Model of Sputtering. Simulations were carried out for a range of ion incidence angles and surface curvatures for different ion species, ion energies, and target materials. Sputter yields, moments of erosive crater functions, and the fraction of backscattered energy were determined. In accordance with the Modified Sigmund Model of Sputtering, we find that for sufficiently large incidence angles θ the curvature dependence of the erosion crater function tends to destabilize the solid surface along the projected direction of the incident ions. For the perpendicular direction, however, the curvature dependence always leads to a stabilizing contribution. The simulation results also show that, for larger values of θ, a significant fraction of the ions is backscattered, carrying off a substantial amount of the incident ion energy. This provides support for the basic idea behind the Modified Sigmund Model of Sputtering: that the incidence angle θ should be replaced by a larger angle Ψ to account for the reduced energy that is deposited in the solid for larger values of θ.

  13. Two electric field Monte Carlo models of coherent backscattering of polarized light.

    Science.gov (United States)

    Doronin, Alexander; Radosevich, Andrew J; Backman, Vadim; Meglinski, Igor

    2014-11-01

    Modeling of coherent polarized light propagation in turbid scattering medium by the Monte Carlo method provides an ultimate understanding of coherent effects of multiple scattering, such as enhancement of coherent backscattering and peculiarities of laser speckle formation in dynamic light scattering (DLS) and optical coherence tomography (OCT) diagnostic modalities. In this report, we consider two major ways of modeling the coherent polarized light propagation in scattering tissue-like turbid media. The first approach is based on tracking transformations of the electric field along the ray propagation. The second one is developed in analogy to the iterative procedure of the solution of the Bethe-Salpeter equation. To achieve a higher accuracy in the results and to speed up the modeling, both codes utilize the implementation of parallel computing on NVIDIA Graphics Processing Units (GPUs) with Compute Unified Device Architecture (CUDA). We compare these two approaches through simulations of the enhancement of coherent backscattering of polarized light and evaluate the accuracy of each technique with the results of a known analytical solution. The advantages and disadvantages of each computational approach and their further developments are discussed. Both codes are available online and are ready for immediate use or download.

  14. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    Science.gov (United States)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.

  15. Monte Carlo simulation of depth dose distribution in several organic models for boron neutron capture therapy

    Science.gov (United States)

    Matsumoto, T.

    2007-09-01

    Monte Carlo simulations are performed to evaluate depth-dose distributions for possible treatment of cancers by boron neutron capture therapy (BNCT). The ICRU computational model of ADAM & EVA was used as a phantom to simulate tumors at a depth of 5 cm in central regions of the lungs, liver and pancreas. Tumors of the prostate and osteosarcoma were also centered at the depth of 4.5 and 2.5 cm in the phantom models. The epithermal neutron beam from a research reactor was the primary neutron source for the MCNP calculation of the depth-dose distributions in those cancer models. For brain tumor irradiations, the whole-body dose was also evaluated. The MCNP simulations suggested that a lethal dose of 50 Gy to the tumors can be achieved without reaching the tolerance dose of 25 Gy to normal tissue. The whole-body phantom calculations also showed that the BNCT could be applied for brain tumors without significant damage to whole-body organs.

  16. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
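
    In outline, such a Monte Carlo quantification samples every basic event directly instead of manipulating minimal cut sets; a simplified sketch with a hypothetical fault-tree interface, and with inter-unit dependencies (e.g., seismic common cause) omitted for brevity:

      import numpy as np

      def multiunit_cdf(basic_event_probs, top_event, n_units=6,
                        n_trials=100000, rng=None):
          """basic_event_probs: {event name: failure probability}, shared by
          all units; top_event(states) -> True if the sampled states imply
          core damage for one unit. Returns P(at least one unit damaged)."""
          rng = rng or np.random.default_rng()
          names = list(basic_event_probs)
          probs = np.array([basic_event_probs[k] for k in names])
          hits = 0
          for _ in range(n_trials):
              damaged = 0
              for _ in range(n_units):
                  states = dict(zip(names, rng.random(probs.size) < probs))
                  damaged += top_event(states)
              hits += damaged >= 1
          return hits / n_trials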

  17. Monte Carlo method based QSAR modelling of natural lipase inhibitors using hybrid optimal descriptors.

    Science.gov (United States)

    Kumar, A; Chauhan, S

    2017-03-08

Obesity is one of the most provoking health burdens in the developed countries. One of the strategies to prevent obesity is the inhibition of pancreatic lipase enzyme. The aim of this study was to build QSAR models for natural lipase inhibitors by using the Monte Carlo method. The molecular structures were represented by the simplified molecular input line entry system (SMILES) notation and molecular graphs. Three sets (training, calibration, and test) from three splits were examined and validated. Statistical quality of all the described models was very good. The best QSAR model showed the following statistical parameters: r(2) = 0.864 and Q(2) = 0.836 for the test set and r(2) = 0.824 and Q(2) = 0.819 for the validation set. Structural attributes for increasing and decreasing the activity (expressed as pIC50) were also defined. Using the defined structural attributes, the design of new potential lipase inhibitors is also presented. Additionally, a molecular docking study was performed for the determination of binding modes of designed molecules.
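
    The Monte Carlo step in this family of QSAR models optimizes correlation weights of SMILES-derived fragments; a schematic hill-climbing sketch of that idea (the actual optimizer and descriptor definitions differ in detail):

      import numpy as np

      def fit_correlation_weights(fragments, activity, n_steps=10000,
                                  step=0.1, rng=None):
          """fragments: list of fragment-label lists, one per molecule;
          DCW(molecule) = sum of its fragments' correlation weights."""
          rng = rng or np.random.default_rng()
          labels = sorted({f for mol in fragments for f in mol})
          w = dict.fromkeys(labels, 1.0)
          dcw = lambda: np.array([sum(w[f] for f in mol) for mol in fragments])
          r2 = lambda: np.corrcoef(dcw(), activity)[0, 1] ** 2
          best = r2()
          for _ in range(n_steps):
              f = labels[rng.integers(len(labels))]
              old = w[f]
              w[f] += rng.uniform(-step, step)      # random perturbation
              new = r2()
              if new > best:
                  best = new                        # keep improving moves
              else:
                  w[f] = old                        # otherwise revert
          return w, best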

  18. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

Full Text Available Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants high thrust-to-weight ratio, high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems are missing any throttling capability at run-time, since pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performances represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro’s equations. The code is coupled with a Monte Carlo algorithm to evaluate statistics and propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set for the reproduction of a small-scale rocket motor, discussing a set of parametric investigations on uncertainty propagation across the ballistic model.

  19. Monte Carlo renormalization: the triangular Ising model as a test case.

    Science.gov (United States)

    Guo, Wenan; Blöte, Henk W J; Ren, Zhiming

    2005-04-01

    We test the performance of the Monte Carlo renormalization method in the context of the Ising model on a triangular lattice. We apply a block-spin transformation which allows for an adjustable parameter so that the transformation can be optimized. This optimization purportedly brings the fixed point of the transformation to a location where the corrections to scaling vanish. To this purpose we determine corrections to scaling of the triangular Ising model with nearest- and next-nearest-neighbor interactions by means of transfer-matrix calculations and finite-size scaling. We find that the leading correction to scaling just vanishes for the nearest-neighbor model. However, the fixed point of the commonly used majority-rule block-spin transformation appears to lie well away from the nearest-neighbor critical point. This raises the question whether the majority rule is suitable as a renormalization transformation, because the standard assumptions of real-space renormalization imply that corrections to scaling vanish at the fixed point. We avoid this inconsistency by means of the optimized transformation which shifts the fixed point back to the vicinity of the nearest-neighbor critical Hamiltonian. The results of the optimized transformation in terms of the Ising critical exponents are more accurate than those obtained with the majority rule.
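
    As a point of reference, the majority-rule block-spin transformation discussed above can be written in a few lines for +/-1 spins stored in a square array (the paper's adjustable-parameter transformation on the triangular lattice generalizes this):

      import numpy as np

      def majority_rule_blocking(spins, b=2, rng=None):
          """Map each b x b block of +/-1 spins to the sign of its sum,
          breaking ties at random."""
          rng = rng or np.random.default_rng()
          L = spins.shape[0]
          blocks = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
          ties = blocks == 0
          blocks = np.sign(blocks)
          blocks[ties] = rng.choice([-1, 1], size=int(ties.sum()))
          return blocks.astype(int)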

  20. Monte Carlo simulations for a Lotka-type model with reactant surface diffusion and interactions.

    Science.gov (United States)

    Zvejnieks, G; Kuzovkov, V N

    2001-05-01

    The standard Lotka-type model, which was introduced for the first time by Mai et al. [J. Phys. A 30, 4171 (1997)] for a simplified description of autocatalytic surface reactions, is generalized here for a case of mobile and energetically interacting reactants. The mathematical formalism is proposed for determining the dependence of transition rates on the interaction energy (and temperature) for the general mathematical model, and the Lotka-type model, in particular. By means of Monte Carlo computer simulations, we have studied the impact of diffusion (with and without energetic interactions between reactants) on oscillatory properties of the A+B-->2B reaction. The diffusion leads to a desynchronization of oscillations and a subsequent decrease of oscillation amplitude. The energetic interaction between reactants has a dual effect depending on the type of mobile reactants. In the limiting case of mobile reactants B the repulsion results in a decrease of amplitudes. However, these amplitudes increase if reactants A are mobile and repulse each other. A simplified interpretation of the obtained results is given.

  1. Monte carlo method-based QSAR modeling of penicillins binding to human serum proteins.

    Science.gov (United States)

    Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M; Veselinović, Aleksandar M

    2015-01-01

    The binding of penicillins to human serum proteins was modeled with optimal descriptors based on the Simplified Molecular Input-Line Entry System (SMILES). The concentrations of protein-bound drug for 87 penicillins expressed as percentage of the total plasma concentration were used as experimental data. The Monte Carlo method was used as a computational tool to build up the quantitative structure-activity relationship (QSAR) model for penicillins binding to plasma proteins. One random data split into training, test and validation set was examined. The calculated QSAR model had the following statistical parameters: r(2)  = 0.8760, q(2)  = 0.8665, s = 8.94 for the training set and r(2)  = 0.9812, q(2)  = 0.9753, s = 7.31 for the test set. For the validation set, the statistical parameters were r(2)  = 0.727 and s = 12.52, but after removing the three worst outliers, the statistical parameters improved to r(2)  = 0.921 and s = 7.18. SMILES-based molecular fragments (structural indicators) responsible for the increase and decrease of penicillins binding to plasma proteins were identified. The possibility of using these results for the computer-aided design of new penicillins with desired binding properties is presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. QSAR models for HEPT derivates as NNRTI inhibitors based on Monte Carlo method.

    Science.gov (United States)

    Toropova, Alla P; Toropov, Andrey A; Veselinović, Jovana B; Miljković, Filip N; Veselinović, Aleksandar M

    2014-04-22

A series of 107 1-[(2-hydroxyethoxy)-methyl]-6-(phenylthio) thymine (HEPT) derivatives with anti-HIV-1 activity as non-nucleoside reverse transcriptase inhibitors (NNRTIs) has been studied. The Monte Carlo method has been used as a tool to build up the quantitative structure-activity relationships (QSAR) for anti-HIV-1 activity. The QSAR models were calculated with the representation of the molecular structure by the simplified molecular input-line entry system and by the molecular graph. Three various splits into training and test sets were examined. Statistical quality of all built models is very good. The best calculated model had the following statistical parameters: r(2) = 0.8818, q(2) = 0.8774 for the training set and r(2) = 0.9360, q(2) = 0.9243 for the test set. Structural indicators (alerts) for increase and decrease of the IC50 are defined. Using the defined structural alerts, computer-aided design of new potential anti-HIV-1 HEPT derivatives is presented. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  3. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    Science.gov (United States)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

Ebola virus infection is a severe infectious disease with the highest case fatality rate, and it has now become a global public health threat. What makes the disease the worst of all is that no specific effective treatment is available, and its dynamics are not much researched and understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of the Ebola epidemic has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is also verified and the basic reproduction number is calculated. Besides, stability conditions are also checked, and finally simulation is done using both the Euler method and one of the ten most influential algorithms, the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination and quarantine are discussed to predict their effect on the infected individuals over time. The results show that quarantine and vaccination are very effective ways to control the Ebola epidemic. From our study it was also seen that there is less possibility of an individual getting the Ebola virus a second time if they survived their first infection. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of the Ebola epidemic.
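
    The deterministic part of such a study reduces to forward-Euler integration of the compartment equations; an illustrative sketch with vaccination rate v and quarantine rate q (the compartments and rates are simplified relative to the paper's model):

      def euler_ebola(S, I, Q, R, beta, gamma, v, q, dt=0.1, steps=3650):
          """Forward-Euler integration of a minimal Ebola-type model with
          vaccination (v) and quarantine (q); rates are per day."""
          N = S + I + Q + R
          history = []
          for _ in range(steps):
              new_inf = beta * S * I / N      # new infections per day
              dS = -new_inf - v * S
              dI = new_inf - (q + gamma) * I
              dQ = q * I - gamma * Q
              dR = gamma * (I + Q) + v * S
              S += dt * dS; I += dt * dI; Q += dt * dQ; R += dt * dR
              history.append((S, I, Q, R))
          return history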

  4. Hidden zero-temperature bicritical point in the two-dimensional anisotropic Heisenberg model: Monte Carlo simulations and proper finite-size scaling

    OpenAIRE

    Zhou, Chenggang; Landau, D. P.; Schulthess, Thomas C.

    2006-01-01

    By considering the appropriate finite-size effect, we explain the connection between Monte Carlo simulations of two-dimensional anisotropic Heisenberg antiferromagnet in a field and the early renormalization group calculation for the bicritical point in $2+\\epsilon$ dimensions. We found that the long length scale physics of the Monte Carlo simulations is indeed captured by the anisotropic nonlinear $\\sigma$ model. Our Monte Carlo data and analysis confirm that the bicritical point in two dime...

  5. A Monte Carlo model of the Varian IGRT couch top for RapidArc QA.

    Science.gov (United States)

    Teke, T; Gill, B; Duzenli, C; Popescu, I A

    2011-12-21

    The objectives of this study are to evaluate the effect of couch attenuation on quality assurance (QA) results and to present a couch top model for Monte Carlo (MC) dose calculation for RapidArc treatments. The IGRT couch top is modelled in Eclipse as a thin skin of higher density material with a homogeneous fill of foam of lower density and attenuation. The IGRT couch structure consists of two longitudinal sections referred to as thick and thin. The Hounsfield Unit (HU) characterization of the couch structure was determined using a cylindrical phantom by comparing ion chamber measurements with the dose predicted by the treatment planning system (TPS). The optimal set of HU for the inside of the couch and the surface shell was found to be respectively -960 and -700 HU in agreement with Vanetti et al (2009 Phys. Med. Biol. 54 N157-66). For each plan, the final dose calculation was performed with the thin, thick and without the couch top. Dose differences up to 2.6% were observed with TPS calculated doses not including the couch and up to 3.4% with MC not including the couch and were found to be treatment specific. A MC couch top model was created based on the TPS geometrical model. The carbon fibre couch top skin was modelled using carbon graphite; the density was adjusted until good agreement with experimental data was observed, while the density of the foam inside was kept constant. The accuracy of the couch top model was evaluated by comparison with ion chamber measurements and TPS calculated dose combined with a 3D gamma analysis. Similar to the TPS case, a single graphite density can be used for both the thin and thick MC couch top models. Results showed good agreement with ion chamber measurements (within 1.2%) and with TPS (within 1%). For each plan, over 95% of the points passed the 3D gamma test.

  6. The interface free energy: Comparison of accurate Monte Carlo results for the 3D Ising model with effective interface models

    CERN Document Server

    Caselle, Michele; Panero, Marco

    2007-01-01

We provide accurate Monte Carlo results for the free energy of interfaces with periodic boundary conditions in the 3D Ising model. We study a large range of inverse temperatures, allowing us to control corrections to scaling. In addition to square interfaces, we study rectangular interfaces for a large range of aspect ratios u=L_1/L_2. Our numerical results are compared with predictions of effective interface models. This comparison clearly verifies the effective Nambu-Goto model up to two-loop order. Our data also allow us to obtain the estimates T_c sigma^-1/2=1.235(2), m_0++ sigma^-1/2=3.037(16) and R_+=f_+ sigma_0^2 =0.387(2), which are more precise than previous ones.

  7. Two models at work : A study of interactions and specificity in relation to the Demand-Control Model and the Effort-Reward Imbalance Model

    NARCIS (Netherlands)

    Vegchel, N.

    2005-01-01

    To investigate the relation between work and employee health, several work stress models, e.g., the Demand-Control (DC) Model and the Effort-Reward Imbalance (ERI) Model, have been developed. Although these models focus on job demands and job resources, relatively little attention has been devoted

  8. Two models at work : A study of interactions and specificity in relation to the Demand-Control Model and the Effort-Reward Imbalance Model

    NARCIS (Netherlands)

    Vegchel, N.

    2005-01-01

    To investigate the relation between work and employee health, several work stress models, e.g., the Demand-Control (DC) Model and the Effort-Reward Imbalance (ERI) Model, have been developed. Although these models focus on job demands and job resources, relatively little attention has been devoted t

  9. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Graphical abstract: - Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. - Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  10. Economic effort management in multispecies fisheries: the FcubEcon model

    DEFF Research Database (Denmark)

    Hoff, Ayoe; Frost, Hans; Ulrich, Clara

    2010-01-01

in the development of management tools based on fleets, fisheries, and areas, rather than on unit fish stocks. A natural consequence of this has been to consider effort rather than quota management, a final effort decision being based on fleet-harvest potential and fish-stock-preservation considerations. Effort ... allocation between fleets should not be based on biological considerations alone, but also on the economic behaviour of fishers, because fisheries management has a significant impact on human behaviour as well as on ecosystem development. The FcubEcon management framework for effort allocation between fleets ... optimal manner, in both effort-management and single-quota management settings. Applying single-species assessment and quotas in multispecies fisheries can lead to overfishing or quota underutilization, because advice can be conflicting when different stocks are caught within the same fishery. During...

  11. Monte-Carlo modeling of the central carbon metabolism of Lactococcus lactis: insights into metabolic regulation.

    Science.gov (United States)

    Murabito, Ettore; Verma, Malkhey; Bekker, Martijn; Bellomo, Domenico; Westerhoff, Hans V; Teusink, Bas; Steuer, Ralf

    2014-01-01

    Metabolic pathways are complex dynamic systems whose response to perturbations and environmental challenges are governed by multiple interdependencies between enzyme properties, reactions rates, and substrate levels. Understanding the dynamics arising from such a network can be greatly enhanced by the construction of a computational model that embodies the properties of the respective system. Such models aim to incorporate mechanistic details of cellular interactions to mimic the temporal behavior of the biochemical reaction system and usually require substantial knowledge of kinetic parameters to allow meaningful conclusions. Several approaches have been suggested to overcome the severe data requirements of kinetic modeling, including the use of approximative kinetics and Monte-Carlo sampling of reaction parameters. In this work, we employ a probabilistic approach to study the response of a complex metabolic system, the central metabolism of the lactic acid bacterium Lactococcus lactis, subject to perturbations and brief periods of starvation. Supplementing existing methodologies, we show that it is possible to acquire a detailed understanding of the control properties of a corresponding metabolic pathway model that is directly based on experimental observations. In particular, we delineate the role of enzymatic regulation to maintain metabolic stability and metabolic recovery after periods of starvation. It is shown that the feedforward activation of the pyruvate kinase by fructose-1,6-bisphosphate qualitatively alters the bifurcation structure of the corresponding pathway model, indicating a crucial role of enzymatic regulation to prevent metabolic collapse for low external concentrations of glucose. We argue that similar probabilistic methodologies will help our understanding of dynamic properties of small-, medium- and large-scale metabolic networks models.

  12. Upending the social ecological model to guide health promotion efforts toward policy and environmental change.

    Science.gov (United States)

    Golden, Shelley D; McLeroy, Kenneth R; Green, Lawrence W; Earp, Jo Anne L; Lieberman, Lisa D

    2015-04-01

    Efforts to change policies and the environments in which people live, work, and play have gained increasing attention over the past several decades. Yet health promotion frameworks that illustrate the complex processes that produce health-enhancing structural changes are limited. Building on the experiences of health educators, community activists, and community-based researchers described in this supplement and elsewhere, as well as several political, social, and behavioral science theories, we propose a new framework to organize our thinking about producing policy, environmental, and other structural changes. We build on the social ecological model, a framework widely employed in public health research and practice, by turning it inside out, placing health-related and other social policies and environments at the center, and conceptualizing the ways in which individuals, their social networks, and organized groups produce a community context that fosters healthy policy and environmental development. We conclude by describing how health promotion practitioners and researchers can foster structural change by (1) conveying the health and social relevance of policy and environmental change initiatives, (2) building partnerships to support them, and (3) promoting more equitable distributions of the resources necessary for people to meet their daily needs, control their lives, and freely participate in the public sphere.

  13. Dynamic Critical Behavior of Multi-Grid Monte Carlo for Two-Dimensional Nonlinear $\\sigma$-Models

    OpenAIRE

    Mana, Gustavo; Mendes, Tereza; Pelissetto, Andrea; Sokal, Alan D.

    1995-01-01

    We introduce a new and very convenient approach to multi-grid Monte Carlo (MGMC) algorithms for general nonlinear $\\sigma$-models: it is based on embedding an $XY$ model into the given $\\sigma$-model, and then updating the induced $XY$ model using a standard $XY$-model MGMC code. We study the dynamic critical behavior of this algorithm for the two-dimensional $O(N)$ $\\sigma$-models with $N = 3,4,8$ and for the $SU(3)$ principal chiral model. We find that the dynamic critical exponent $z$ vari...

  14. Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry

    Science.gov (United States)

    Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.

    2017-02-01

The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve the image quality and at the same time to reduce the dose to the patient because of the preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computed tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF) which is modified during calculations until the calculated MC CTDIw/CTDI100,a matches ratios CTDIw/CTDI100,a determined by measurements or found in publications for that specific scanner. Once the scanner-specific value for an SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that specific scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a considering 11 different CT scanners and for CTDI100,c, CTDI100,p and their ratio considering 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms
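
    Since the only free scalar is the scanner filter factor, its calibration can be organized as a one-dimensional root search; a hedged sketch assuming the CTDIw/CTDI100,a ratio varies monotonically with the SFF (function names are placeholders, not the authors' code):

      def calibrate_sff(mc_ctdi_ratio, target_ratio, lo=0.0, hi=2.0, tol=1e-3):
          """Bisect on the scanner filter factor until the Monte Carlo
          CTDIw/CTDI100,a ratio matches the measured/published target."""
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if mc_ctdi_ratio(mid) < target_ratio:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)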

  15. A hybrid Monte Carlo model for the energy response functions of X-ray photon counting detectors

    Science.gov (United States)

    Wu, Dufan; Xu, Xiaofei; Zhang, Li; Wang, Sen

    2016-09-01

In photon counting computed tomography (CT), it is vital to know the energy response functions of the detector for noise estimation and system optimization. Empirical methods lack flexibility and Monte Carlo simulations require too much knowledge of the detector. In this paper, we proposed a hybrid Monte Carlo model for the energy response functions of photon counting detectors in X-ray medical applications. GEANT4 was used to model the energy deposition of X-rays in the detector. Then numerical models were used to describe the process of charge sharing, anti-charge sharing and spectral broadening, which were too complicated to be included in the Monte Carlo model. Several free parameters were introduced in the numerical models, and they could be calibrated from experimental measurements such as X-ray fluorescence from metal elements. The method was used to model the energy response function of an XCounter Flite X1 photon counting detector. The parameters of the model were calibrated with fluorescence measurements. The model was further tested against measured spectra of a VJ X-ray source to validate its feasibility and accuracy.

  17. Overview 2004 of NASA Stirling-Convertor CFD-Model Development and Regenerator R&D Efforts

    Science.gov (United States)

    Tew, Roy C.; Dyson, Rodger W.; Wilson, Scott D.; Demko, Rikako

    2005-01-01

    This paper reports on 2004 accomplishments in three areas: development of a Stirling-convertor CFD model at NASA GRC and via a NASA grant; a Stirling regenerator-research effort conducted via a NASA grant (a follow-on to an earlier DOE contract); and a regenerator-microfabrication contract for development of a "next-generation Stirling regenerator." Cleveland State University is the lead organization for all three grant/contractual efforts, with the University of Minnesota and Gedeor Associates as subcontractors. Also, the Stirling Technology Co. and Sunpower, Inc. are both involved in all three efforts, either as funded or unfunded participants. International Mezzo Technologies of Baton Rouge, LA, is the regenerator fabricator for the regenerator-microfabrication contract. Results of the efforts in these three areas are summarized.

  18. Monte Carlo ice flow modeling projects a new stable configuration for Columbia Glacier, Alaska, c. 2020

    Directory of Open Access Journals (Sweden)

    W. Colgan

    2012-11-01

    Due to the abundance of observational datasets collected since the onset of its retreat (c. 1983), Columbia Glacier, Alaska, provides an exciting modeling target. We perform Monte Carlo simulations of the form and flow of Columbia Glacier, using a 1-D (depth-integrated) flowline model, over a wide range of parameter values and forcings. An ensemble filter is imposed following spin-up to ensure that only simulations that accurately reproduce observed pre-retreat glacier geometry are retained; all other simulations are discarded. The selected ensemble of simulations reasonably reproduces numerous highly transient post-retreat observed datasets. The selected ensemble mean projection suggests that Columbia Glacier will achieve a new dynamic equilibrium (i.e. "stable") ice geometry c. 2020, at which time the iceberg calving rate will have returned to approximately pre-retreat values. Comparison of the observed 1957 and 2007 glacier geometries with the projected 2100 glacier geometry suggests that Columbia Glacier had already discharged ~82% of its projected 1957–2100 sea level rise contribution by 2007. This case study therefore highlights the difficulties associated with the future extrapolation of observed glacier mass loss rates that are dominated by iceberg calving.

  19. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    Science.gov (United States)

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
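
    For orientation, the following is a minimal sketch of the standard rejection-free kinetic Monte Carlo selection step that transition-level classification schemes such as the one above accelerate; the event list, barriers, and prefactor are illustrative assumptions, not values from the paper.

    ```python
    # Minimal rejection-free KMC step (BKL/Gillespie style): pick one thermally
    # activated event with probability proportional to its Arrhenius rate, then
    # advance the clock by an exponentially distributed residence time.
    import math, random

    def kmc_step(events, temperature_K, prefactor=1e13, kB=8.617e-5):
        """events: list of (name, barrier_eV); returns the chosen event and dt."""
        rates = [prefactor * math.exp(-Eb / (kB * temperature_K)) for _, Eb in events]
        total = sum(rates)
        r = random.random() * total
        acc = 0.0
        chosen = events[-1][0]                     # fallback guards float round-off
        for (name, _), rate in zip(events, rates):
            acc += rate
            if r < acc:
                chosen = name
                break
        dt = -math.log(random.random()) / total    # exponential residence time
        return chosen, dt

    events = [("adatom hop", 0.45), ("dimer detach", 0.80), ("exchange", 0.60)]
    print(kmc_step(events, temperature_K=300.0))
    ```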

  20. Nanostructure evolution of neutron-irradiated reactor pressure vessel steels: Revised Object kinetic Monte Carlo model

    Science.gov (United States)

    Chiapetto, M.; Messina, L.; Becquart, C. S.; Olsson, P.; Malerba, L.

    2017-02-01

    This work presents a revised set of parameters to be used in an Object kinetic Monte Carlo model to simulate the microstructure evolution under neutron irradiation of reactor pressure vessel steels at the operational temperature of light water reactors (∼300 °C). Within a "grey-alloy" approach, a more physical description than in a previous work is used to translate the effect of Mn and Ni solute atoms on the defect cluster diffusivity reduction. The slowing down of self-interstitial clusters, due to the interaction between solutes and crowdions in Fe is now parameterized using binding energies from the latest DFT calculations and the solute concentration in the matrix from atom-probe experiments. The mobility of vacancy clusters in the presence of Mn and Ni solute atoms was also modified on the basis of recent DFT results, thereby removing some previous approximations. The same set of parameters was seen to predict the correct microstructure evolution for two different types of alloys, under very different irradiation conditions: an Fe-C-MnNi model alloy, neutron irradiated at a relatively high flux, and a high-Mn, high-Ni RPV steel from the Swedish Ringhals reactor surveillance program. In both cases, the predicted self-interstitial loop density matches the experimental solute cluster density, further corroborating the surmise that the MnNi-rich nanofeatures form by solute enrichment of immobilized small interstitial loops, which are invisible to the electron microscope.

  1. Monte Carlo model of neutral-particle transport in diverted plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

    1981-11-01

    The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
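
    The Russian-roulette variance-reduction step mentioned above can be sketched in a few lines; the weight threshold and survival probability below are illustrative assumptions, not the code's actual settings.

    ```python
    # Russian roulette: low-weight particles are either killed or kept with a
    # boosted weight, so the expected weight (and thus the tally mean) is
    # preserved while the number of tracked histories shrinks.
    import random

    def russian_roulette(weight, threshold=0.1, survival=0.5):
        """Return the particle's new weight, or None if it is terminated."""
        if weight >= threshold:
            return weight
        if random.random() < survival:
            return weight / survival   # survivor carries the lost weight
        return None                    # absorbed; expectation is unchanged

    # Many roulette rounds leave the mean weight statistically unchanged:
    ws = [russian_roulette(0.05) for _ in range(100000)]
    print(sum(w for w in ws if w is not None) / len(ws))  # ~0.05
    ```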

  2. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking players.
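
    A minimal Monte Carlo building block of such simulations, assuming iid points with a fixed point-win probability p (the dissertation's non-iid extensions are not modeled here):

    ```python
    # Estimate a server's probability of winning a game from an iid point-win
    # probability p by direct simulation of the game's scoring rules.
    import random

    def play_game(p):
        server, returner = 0, 0
        while True:
            if random.random() < p:
                server += 1
            else:
                returner += 1
            if server >= 4 and server - returner >= 2:
                return True
            if returner >= 4 and returner - server >= 2:
                return False

    trials, p = 100000, 0.62
    wins = sum(play_game(p) for _ in range(trials))
    print(f"P(win game | p={p}) ~ {wins / trials:.3f}")  # analytic value ~0.78
    ```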

  3. Dynamical Models for NGC 6503 using a Markov Chain Monte Carlo Technique

    CERN Document Server

    Puglielli, David; Courteau, Stéphane

    2010-01-01

    We use Bayesian statistics and Markov chain Monte Carlo (MCMC) techniques to construct dynamical models for the spiral galaxy NGC 6503. The constraints include surface brightness profiles which display a Freeman Type II structure; HI and ionized gas rotation curves; the stellar rotation, which is nearly coincident with the ionized gas curve; and the line of sight stellar dispersion, with a sigma-drop at the centre. The galaxy models consist of a Sersic bulge, an exponential disc with an optional inner truncation and a cosmologically motivated dark halo. The Bayesian/MCMC technique yields the joint posterior probability distribution function for the input parameters. We examine several interpretations of the data: the Type II surface brightness profile may be due to dust extinction, to an inner truncated disc or to a ring of bright stars; and we test separate fits to the gas and stellar rotation curves to determine if the gas traces the gravitational potential. We test each of these scenarios for bar stability...
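
    A minimal sketch of the Metropolis step at the core of such MCMC analyses, here on a one-dimensional toy posterior rather than the multi-parameter galaxy model:

    ```python
    # Random-walk Metropolis sampler; the Gaussian log-posterior is a stand-in
    # for ln(prior * likelihood) of a real dynamical model.
    import math, random

    def log_posterior(theta):
        return -0.5 * (theta - 2.0) ** 2   # toy target, mean 2.0

    def metropolis(n_steps=50000, step=0.8, theta0=0.0):
        chain, theta, lp = [], theta0, log_posterior(theta0)
        for _ in range(n_steps):
            prop = theta + random.gauss(0.0, step)
            lp_prop = log_posterior(prop)
            if math.log(random.random()) < lp_prop - lp:   # accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta)
        return chain

    chain = metropolis()
    print(sum(chain[5000:]) / len(chain[5000:]))  # posterior mean, ~2.0
    ```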

  4. Inverse Monte Carlo in a multilayered tissue model: merging diffuse reflectance spectroscopy and laser Doppler flowmetry

    Science.gov (United States)

    Fredriksson, Ingemar; Burdakov, Oleg; Larsson, Marcus; Strömberg, Tomas

    2013-12-01

    The tissue fraction of red blood cells (RBCs) and their oxygenation and speed-resolved perfusion are estimated in absolute units by combining diffuse reflectance spectroscopy (DRS) and laser Doppler flowmetry (LDF). The DRS spectra (450 to 850 nm) are assessed at two source-detector separations (0.4 and 1.2 mm), allowing for a relative calibration routine, whereas LDF spectra are assessed at 1.2 mm in the same fiber-optic probe. Data are analyzed using nonlinear optimization in an inverse Monte Carlo technique by applying an adaptive multilayered tissue model based on geometrical, scattering, and absorbing properties, as well as RBC flow-speed information. Simulations of 250 tissue-like models including up to 2000 individual blood vessels were used to evaluate the method. The absolute root mean square (RMS) deviation between estimated and true oxygenation was 4.1 percentage units, whereas the relative RMS deviations for the RBC tissue fraction and perfusion were 19% and 23%, respectively. Examples of in vivo measurements on forearm and foot during common provocations are presented. The method offers several advantages such as simultaneous quantification of RBC tissue fraction and oxygenation and perfusion from the same, predictable, sampling volume. The perfusion estimate is speed resolved, absolute (% RBC×mm/s), and more accurate due to the combination with DRS.

  5. Momentum transfer Monte Carlo model for the simulation of laser speckle contrast imaging (Conference Presentation)

    Science.gov (United States)

    Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard

    2016-03-01

    Laser speckle imaging (LSI) enables measurement of relative blood flow in microvasculature and perfusion in tissues. To determine the impact of tissue optical properties and perfusion dynamics on speckle contrast, we developed a computational simulation of laser speckle contrast imaging. We used a discrete absorption-weighted Monte Carlo simulation to model the transport of light in tissue. We simulated optical excitation of a uniform flat light source and tracked the momentum transfer of photons as they propagated through a simulated tissue geometry. With knowledge of the probability distribution of momentum transfer occurring in various layers of the tissue, we calculated the expected laser speckle contrast arising with coherent excitation using both reflectance and transmission geometries. We simulated light transport in a single homogeneous tissue while independently varying either absorption (0.001-100 mm^-1), reduced scattering (0.1-10 mm^-1), or anisotropy (0.05-0.99) over a range of values relevant to blood and commonly imaged tissues. We observed that contrast decreased by 49% with increasing optical scattering and increased by 130% with increasing absorption (exposure time = 1 ms). We also explored how speckle contrast was affected by the depth (0-1 mm) and flow speed (0-10 mm/s) of a dynamic vascular inclusion. This model of speckle contrast is important to increase our understanding of how parameters such as perfusion dynamics, vessel depth, and tissue optical properties affect laser speckle imaging.
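
    To connect decorrelation statistics to contrast, models of this kind typically evaluate a speckle-contrast integral over the field correlation function; the sketch below uses a standard single-exponential form for g1(t) with invented parameter values, not the presentation's actual pipeline.

    ```python
    # Speckle contrast K from a field correlation g1(t) = exp(-t/tau_c), using
    # the standard relation K^2 = (2*beta/T) * int_0^T (1 - t/T) g1(t)^2 dt.
    import numpy as np

    def speckle_contrast(tau_c, T=1e-3, beta=1.0, n=20000):
        t = np.linspace(0.0, T, n)
        integrand = (1.0 - t / T) * np.exp(-2.0 * t / tau_c)
        K2 = (2.0 * beta / T) * np.sum(integrand) * (t[1] - t[0])
        return float(np.sqrt(K2))

    for tau_c in (1e-5, 1e-4, 1e-3):   # faster flow -> shorter correlation time
        print(f"tau_c = {tau_c:.0e} s  ->  K = {speckle_contrast(tau_c):.3f}")
    ```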

  6. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous reduction in computational time, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, an excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.

  7. Clustering and heterogeneous dynamics in a kinetic Monte Carlo model of self-propelled hard disks.

    Science.gov (United States)

    Levis, Demian; Berthier, Ludovic

    2014-06-01

    We introduce a kinetic Monte Carlo model for self-propelled hard disks to capture with minimal ingredients the interplay between thermal fluctuations, excluded volume, and self-propulsion in large assemblies of active particles. We analyze in detail the resulting (density, self-propulsion) nonequilibrium phase diagram over a broad range of parameters. We find that purely repulsive hard disks spontaneously aggregate into fractal clusters as self-propulsion is increased and rationalize the evolution of the average cluster size by developing a kinetic model of reversible aggregation. As density is increased, the nonequilibrium clusters percolate to form a ramified structure reminiscent of a physical gel. We show that the addition of a finite amount of noise is needed to trigger a nonequilibrium phase separation, showing that demixing in active Brownian particles results from a delicate balance between noise, interparticle interactions, and self-propulsion. We show that self-propulsion has a profound influence on the dynamics of the active fluid. We find that the diffusion constant has a nonmonotonic behavior as self-propulsion is increased at finite density and that activity produces strong deviations from Fickian diffusion that persist over large time scales and length scales, suggesting that systems of active particles generically behave as dynamically heterogeneous systems.

  8. Properties of Carbon-Oxygen White Dwarfs From Monte Carlo Stellar Models

    CERN Document Server

    Fields, C E; Petermann, I; Iliadis, C; Timmes, F X

    2016-01-01

    We investigate properties of carbon-oxygen white dwarfs with respect to the composite uncertainties in the reaction rates using the stellar evolution toolkit, Modules for Experiments in Stellar Astrophysics (MESA), and the probability density functions in the reaction rate library STARLIB. These are the first Monte Carlo stellar evolution studies that use complete stellar models. Focusing on 3 M$_\odot$ models evolved from the pre-main-sequence to the first thermal pulse, we survey the remnant core mass, composition, and structure properties as a function of 26 STARLIB reaction rates covering hydrogen and helium burning using a Principal Component Analysis and Spearman Rank-Order Correlation. Relative to the arithmetic mean value, we find the width of the 95% confidence interval to be $\Delta M_{\rm 1TP} \approx 0.019$ M$_\odot$ for the core mass at the first thermal pulse, $\Delta t_{\rm 1TP} \approx 12.50$ Myr for the age, $\Delta \log(T_{\rm c}/{\rm K}) \approx 0.013$ for the central temperat...
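
    A toy version of the Monte Carlo rate-variation idea, assuming lognormal rate factors (as STARLIB tabulates rate PDFs) and a synthetic stand-in for the stellar-model output; no real MESA/STARLIB data are used.

    ```python
    # Sample each reaction rate from a lognormal PDF and rank-correlate an
    # output property against the sampled rate factors to find the most
    # influential rate. All numbers here are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_rates = 1000, 26
    # lognormal rate factors, median 1, factor uncertainty f.u. = 1.3
    factors = rng.lognormal(mean=0.0, sigma=np.log(1.3), size=(n_samples, n_rates))

    # Toy stand-in for a stellar-model output (e.g. core mass at first thermal
    # pulse): dominated by "rate 3" plus small noise.
    m_core = 0.55 + 0.02 * np.log(factors[:, 3]) + rng.normal(0.0, 0.002, n_samples)

    def spearman(x, y):
        rx = np.argsort(np.argsort(x))   # ranks (ties negligible here)
        ry = np.argsort(np.argsort(y))
        return np.corrcoef(rx, ry)[0, 1]

    rho = np.array([spearman(factors[:, j], m_core) for j in range(n_rates)])
    print("most influential rate index:", int(np.argmax(np.abs(rho))))  # -> 3
    ```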

  9. Modeling the Thermal Conductivity of Nanocomposites Using Monte-Carlo Methods and Realistic Nanotube Configurations

    Science.gov (United States)

    Bui, Khoa; Papavassiliou, Dimitrios

    2012-02-01

    The effective thermal conductivity (K_eff) of carbon nanotube (CNT) composites is affected by the thermal boundary resistance (TBR) and by the dispersion pattern and geometry of the CNTs. We have previously modeled CNTs as straight cylinders and found that the TBR between CNTs (TBR_CNT-CNT) can suppress K_eff at high volume fractions of CNTs [1]. Effective medium theory results assume that the CNTs are in a perfect dispersion state and exclude the TBR_CNT-CNT [2]. In this work, we report on the development of an algorithm for generating CNTs with worm-like geometry in 3D, and with different persistence lengths. These worm-like CNTs are then randomly placed in a periodic box representing a realistic state, since the persistence length of a CNT can be obtained from microscopic images. The use of these CNT geometries in conjunction with off-lattice Monte Carlo simulations [1] in order to study the effective thermal properties of nanocomposites will be discussed, as well as the effects of the persistence length on K_eff and comparisons to straight cylinder models. References [1] K. Bui, B.P. Grady, D.V. Papavassiliou, Chem. Phys. Lett., 508(4-6), 248-251, 2011 [2] C.W. Nan, G. Liu, Y. Lin, M. Li, Appl. Phys. Lett., 85(16), 3549-3551, 2006

  10. A Monte-Carlo Model of Partially Trapped UV Radiation in a Plasma Display Panel Cell

    Science.gov (United States)

    van der Straaten, Trudy; Kushner, Mark J.

    1999-10-01

    Plasma Display Panels (PDPs) are being developed for large-area high-brightness flat panel displays. Color PDP cells generally use xenon gas mixtures to generate UV photons that are converted to visible light by phosphors. While the UV photons produced by Xe(6s'-5s5p6, 6s-5s5p6) are only in a quasi-optically thick regime due to the small dimensions (hundreds of μm) of PDP cells, current models of PDPs do not explicitly address UV radiation transport other than by using radiation trapping factors. In this paper we report on results from a two-dimensional hybrid simulation of a PDP cell which models radiation transport using Monte Carlo (MC) photon transport and frequency redistribution algorithms. We examine the spectrum of UV photons incident on the phosphor and their escape probability. For typical operating conditions (400 Torr, 1-4% Xe mole fraction) there is significant frequency redistribution of resonance radiation due to absorption and subsequent re-emission at a different frequency within the lineshape. Significant line reversal occurs at Xe mole fractions of a few percent, the degree of which depends on PDP cell dimensions. The escape probability generally decreases during the current pulse due to additional quenching by electron impact processes.

  11. A geometrical model for the Monte Carlo simulation of the TrueBeam linac

    CERN Document Server

    Rodriguez, Miguel; Fogliata, Antonella; Cozzi, Luca; Sauerwein, Wolfgang; Brualla, Lorenzo

    2015-01-01

    Monte Carlo (MC) simulation of linacs depends on an accurate geometrical description of the head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files (PSFs) of the flattening-filter-free (FFF) beams tallied upstream of the jaws. Yet, MC simulations based on third-party tallied PSFs are subject to limitations. We present an experimentally based geometry developed for the simulation of the FFF beams of the TrueBeam linac. The upper part of the TrueBeam linac was modeled by modifying the Clinac 2100 geometry. The most important modification is the replacement of the standard flattening filters by ad hoc thin filters, which were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements for radiation fields ranging from $3\times3$ to $40\times40$ cm$^2$. The same comparisons were done for dose profiles ob...

  12. Non-Local effective SU(2) Polyakov-loop models from inverse Monte-Carlo methods

    CERN Document Server

    Bahrampour, Bardiya; von Smekal, Lorenz

    2016-01-01

    The strong-coupling expansion of the lattice gauge action leads to Polyakov-loop models that effectively describe gluodynamics at low temperatures, and together with the hopping expansion of the fermion determinant provides insight into the QCD phase diagram at finite density and low temperatures, although for rather heavy quarks. At higher temperatures the strong-coupling expansion breaks down and it is expected that the interactions between Polyakov loops become non-local. Here, we therefore test how well pure SU(2) gluodynamics can be mapped onto different non-local Polyakov models with inverse Monte-Carlo methods. We take into account Polyakov loops in higher representations and gradually add interaction terms at larger distances. We are particularly interested in extrapolating the range of non-local terms in sufficiently large volumes and higher representations. We study the characteristic fall-off in strength of the non-local couplings with the interaction distance, and its dependence on the gauge coupl...

  13. Early efforts in modeling the incubation period of infectious diseases with an acute course of illness

    Directory of Open Access Journals (Sweden)

    Nishiura Hiroshi

    2007-05-01

    The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows a lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming a lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of the incubation period, and an understanding of the underlying disease mechanisms.

  14. Developing a primary care research agenda through collaborative efforts - a proposed "6E" model.

    Science.gov (United States)

    Tan, Ngiap Chuan; Ng, Chirk Jenn; Rosemary, Mitchell; Wahid, Khan; Goh, Lee Gan

    2014-01-01

    Primary care research is at a crossroads in the South Pacific. A steering committee comprising a member of the WONCA Asia Pacific Regional (APR) council and the President of the Fiji College of General Practitioners garnered sponsorship from the Fiji Ministry of Health, WONCA APR, and pharmaceutical agencies to organize the event in October 2013. This paper describes the processes needed to set up a national primary care research agenda through the collaborative efforts of local stakeholders and external facilitators, using a test case in the South Pacific. The setting was a 2-day primary care research workshop in Fiji. The steering committee invited a team of three external facilitators from the Asia-Pacific region to organize and operationalize the workshop. The eventual participants were 3 external facilitators, 6 local facilitators, and 29 local primary care physicians, academics, and local medical leaders from Fiji and the South Pacific Islands. Pre-workshop and main workshop programs were drawn up by the external facilitators, using participants' input of research topics relating to their local clinical issues of interest. Course notes were prepared and distributed before the workshop. In the workshop, proposed research topics were shortlisted by group discussion and consensus. Study designs were proposed, scrutinized, and adopted for further research development. The facilitators reviewed the processes in setting the research agenda after the workshop and conceived the proposed 6E model. These processes can be grouped for easy reference into the pre-workshop stages of "entreat", "enlist", and "engage", and the workshop stages of "educe", "empower", and "encapsulate". The 6E model for establishing a research agenda is conceptually logical. Its feasibility can be further tested by applying it in other situations where research agenda setting is the critical step to improve the quality of primary care.

  15. The structure of molten CuCl: Reverse Monte Carlo modeling with high-energy X-ray diffraction data and molecular dynamics of a polarizable ion model

    Science.gov (United States)

    Alcaraz, Olga; Trullàs, Joaquim; Tahara, Shuta; Kawakita, Yukinobu; Takeda, Shin'ichi

    2016-09-01

    The structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, the reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å^-1 related to intermediate-range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate-range copper-copper correlations caused by the polarized chlorides.

  16. Hybrid Monte Carlo and continuum modeling of electrolytes with concentration-induced dielectric variations

    Science.gov (United States)

    Guan, Xiaofei; Ma, Manman; Gan, Zecheng; Xu, Zhenli; Li, Bo

    2016-11-01

    The distribution of ions near a charged surface is an important quantity in many biological and material processes and has therefore been investigated intensively. However, few theoretical and simulation approaches have included the influence of concentration-induced variations in the local dielectric permittivity of an underlying electrolyte solution. Such local variations have long been observed and are known to affect the properties of ionic solutions in the bulk and around charged surfaces. We propose a hybrid computational model that combines Monte Carlo simulations with continuum electrostatic modeling to investigate such properties. A key component in our hybrid model is a semianalytical formula for the ion-ion interaction energy in a dielectrically inhomogeneous environment. This formula is obtained by solving Poisson's equation for the Green's function with ionic-concentration-dependent dielectric permittivity, using a harmonic interpolation method and spherical harmonic series. We also construct a self-consistent continuum model of electrostatics to describe the effect of ionic-concentration-dependent dielectric permittivity and the resulting self-energy contribution. With extensive numerical simulations, we verify the convergence of our hybrid simulation scheme, show the qualitatively different structures of ionic distributions due to the concentration-induced dielectric variations, and compare our simulation results with the self-consistent continuum model. In particular, we study the differences between weakly and strongly charged surfaces and multivalencies of counterions. Our hybrid simulations confirm in particular the depletion of ionic concentrations near a charged surface and also capture charge inversion. We discuss several issues and possible further improvements of our approach for simulations of large charged systems.

  17. Modelling heterotachy in phylogenetic inference by reversible-jump Markov chain Monte Carlo.

    Science.gov (United States)

    Pagel, Mark; Meade, Andrew

    2008-12-27

    The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon--known as heterotachy--can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe it, and serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance' such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information. These include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data.

  18. Modeling of continuous free-radical butadiene-styrene copolymerization process by the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    T. A. Mikhailova

    2016-01-01

    The paper proposes an algorithm, based on the Monte Carlo method, for modeling the continuous low-temperature free-radical emulsion copolymerization of butadiene and styrene. This process is the cornerstone of the industrial production of butadiene-styrene synthetic rubber, the most widespread large-capacity general-purpose rubber. The algorithm is based on imitating the growth of each macromolecule of the forming copolymer and tracking the processes it undergoes. Modeling accounts for the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds in a battery of serially connected polymerization reactors, each represented as a continuous stirred tank reactor. Since the process is continuous, continuous addition of fresh portions of the reaction mixture to the first reactor of the battery is considered. The constructed model allows one to study the molecular-weight and viscous characteristics of the copolymerization product, to predict the mass content of butadiene and styrene in the copolymer, and to calculate the molecular-weight distribution of the product at any moment of the process. Computational experiments were used to analyze how the mode of introducing the regulator during the process affects the characteristics of the forming butadiene-styrene copolymer. Because the process involves monomers of two types, the model also allows one to study the compositional heterogeneity of the product, i.e., to calculate the composition distribution and the distribution of macromolecules by size and structure. A software tool based on the proposed algorithm tracks changes in the characteristics of the product over time.
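
    A toy chain-growth Monte Carlo in the same spirit, with invented addition and termination probabilities rather than the paper's kinetic constants:

    ```python
    # Grow each macromolecule unit by unit: add a butadiene-like ("B") or
    # styrene-like ("S") unit probabilistically, terminate with fixed
    # probability, then collect chain-length and composition statistics.
    import random

    def grow_chain(p_butadiene=0.7, p_stop=0.005):
        chain = []
        while random.random() > p_stop:
            chain.append("B" if random.random() < p_butadiene else "S")
        return chain

    chains = [grow_chain() for _ in range(2000)]
    lengths = [len(c) for c in chains]
    b_frac = sum(c.count("B") for c in chains) / max(1, sum(lengths))
    print(f"number-average chain length ~ {sum(lengths) / len(chains):.0f}")
    print(f"fraction of butadiene units ~ {b_frac:.2f}")
    ```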

  19. Monte Carlo simulation model for economic evaluation of rubble mound breakwater protection in Harbors

    Institute of Scientific and Technical Information of China (English)

    Richard M. Males; Jeffrey A. Melby

    2011-01-01

    The US Army Corps of Engineers has a mission to conduct a wide array of programs in the arenas of water resources, including coastal protection. Coastal projects must be evaluated according to sound economic principles, and considerations of risk assessment and sea level change must be included in the analysis. Breakwaters are typically nearshore structures designed to reduce wave action in the lee of the structure, resulting in calmer waters within the protected area, with attendant benefits in terms of usability by navigation interests, shoreline protection, reduction of wave runup and onshore flooding, and protection of navigation channels from sedimentation and wave action. A common method of breakwater construction is the rubble mound breakwater, constructed in a trapezoidal cross section with gradually increasing stone sizes from the core out. Rubble mound breakwaters are subject to degradation from storms, particularly for antiquated designs with under-sized stones insufficient to protect against intense wave energy. Storm waves dislodge the stones, resulting in lowering of crest height and associated protective capability for wave reduction. This behavior happens over a long period of time, so a lifecycle model (that can analyze the damage progression over a period of years) is appropriate. Because storms are highly variable, a model that can support risk analysis is also needed. Economic impacts are determined by the nature of the wave climate in the protected area, and by the nature of the protected assets. Monte Carlo simulation (MCS) modeling that incorporates engineering and economic impacts is a worthwhile method for handling the many complexities involved in real world problems. The Corps has developed and utilized a number of MCS models to compare project alternatives in terms of their costs and benefits. This paper describes one such model, Coastal Structure simulation (CSsim), that has been developed specifically for planning level analysis of breakwaters.
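
    A toy lifecycle Monte Carlo illustrating the idea (all distributions and thresholds below are invented, and bear no relation to CSsim's engineering relations):

    ```python
    # Each simulated year draws a random storm load and accumulates crest
    # lowering; many lifecycles give a distribution of the first year the
    # breakwater drops below a serviceable crest elevation.
    import random

    def lifecycle(years=50, crest0=6.0, fail_crest=4.5):
        crest = crest0
        for year in range(1, years + 1):
            hs = random.weibullvariate(2.5, 1.4)     # storm-season wave height (m)
            damage = max(0.0, 0.05 * (hs - 3.0))     # lowering only above a threshold
            crest -= damage
            if crest < fail_crest:
                return year                          # first year of functional failure
        return None                                  # survived the planning horizon

    runs = [lifecycle() for _ in range(20000)]
    failed = [y for y in runs if y is not None]
    print(f"P(failure within 50 yr) ~ {len(failed) / len(runs):.2f}")
    ```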

  20. Assessment of advanced step models for steady-state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme, and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high-temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.

  1. Monte Carlo study of half-magnetization plateau and magnetic phase diagram in pyrochlore antiferromagnetic Heisenberg model

    OpenAIRE

    Motome, Yukitoshi; Penc, Karlo; Shannon, Nic

    2005-01-01

    The antiferromagnetic Heisenberg model on a pyrochlore lattice under external magnetic field is studied by classical Monte Carlo simulation. The model includes bilinear and biquadratic interactions; the latter effectively describes the coupling to lattice distortions. The magnetization process shows a half-magnetization plateau at low temperatures, accompanied by strong suppression of the magnetic susceptibility. Temperature dependence of the plateau behavior is clarified. Finite-temperatur...

  2. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
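
    The two-stage strategy can be sketched on a stand-in model (the function, parameter ranges, and surface terms below are invented for illustration):

    ```python
    # Stage 1: local finite-difference screening of normalized linear
    # sensitivities; Stage 2: Monte Carlo sampling of the retained parameters
    # and a least-squares response-surface fit.
    import numpy as np

    def model(p):                     # toy model: output driven mostly by p[0], p[2]
        return 3.0 * p[0] + 0.1 * p[1] + p[2] ** 2

    p0 = np.array([1.0, 1.0, 1.0])
    Y0 = model(p0)
    sens = []
    for i in range(len(p0)):          # normalized sensitivity (dY/dp) * (p/Y)
        dp = p0.copy()
        dp[i] *= 1.01
        sens.append((model(dp) - Y0) / (0.01 * p0[i]) * p0[i] / Y0)
    print("screening:", np.round(sens, 2))   # p[1] is clearly unimportant

    rng = np.random.default_rng(0)
    X = rng.uniform(0.5, 1.5, size=(500, 2))           # vary p[0] and p[2] jointly
    Y = np.array([model([x0, 1.0, x2]) for x0, x2 in X])
    A = np.column_stack([np.ones(500), X[:, 0], X[:, 1], X[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    print("response surface coefficients:", np.round(coef, 2))  # ~[0, 3, 0, 1]
    ```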

  3. A nucleation and growth model of silicon nanoparticles produced by pulsed laser deposition via Monte Carlo simulation

    Science.gov (United States)

    Wang, Yinglong; Qin, Aili; Chu, Lizhi; Deng, Zechao; Ding, Xuecheng; Guan, Li

    2017-02-01

    We simulated the nucleation and growth of Si nanoparticles produced by pulsed laser deposition using the Monte Carlo method at the molecular (microscopic) level. The model describes the mechanism and thermodynamic conditions of nucleation and growth of Si nanoparticles. Using a real physical scale of the target-substrate configuration, the model was applied to analyze the average size distribution of Si nanoparticles in argon ambient gas, and the calculated results are in agreement with the experimental results.

  4. Effects of fishing effort allocation scenarios on energy efficiency and profitability: an individual-based model applied to Danish fisheries

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Andersen, Bo Sølgaard

    2010-01-01

    Energy efficiency (quantity of fish caught per litre of fuel used) and profitability are factors that we simulated in developing a spatially explicit individual-based model (IBM) for fishing vessel movements. The observed spatial and seasonal patterns of fishing effort for each fishing activity are evaluated ... to the harbour, and (C) allocating effort towards optimising the expected area-specific profit per trip. The model is informed by data from each Danish fishing vessel >15 m after coupling its high resolution spatial and temporal effort data (VMS) with data from logbook landing declarations, sales slips, vessel engine specifications, and fish and fuel prices. The outcomes of scenarios A and B indicate a trade-off between fuel savings and energy efficiency improvements when effort is displaced closer to the harbour compared to reductions in total landing amounts and profit. Scenario C indicates that historic...

  5. Direct Monte Carlo and multifluid modeling of the circumnuclear dust coma. Spherical grain dynamics revisited

    Science.gov (United States)

    Crifo, J.-F.; Loukianov, G. A.; Rodionov, A. V.; Zakharov, V. V.

    2005-07-01

    This paper describes the first computations of dust distributions in the vicinity of an active cometary nucleus using a multidimensional Direct Simulation Monte Carlo (DSMC) method. The physical model is simplistic: spherical grains of a broad range of sizes are liberated by H2O sublimation from a selection of nonrotating sunlit spherical nuclei, and subjected to the nucleus gravity, the gas drag, and the solar radiation pressure. The results are compared to those obtained by the previously described Dust Multi-Fluid (DMF) method and demonstrate an excellent agreement in the regions where the DMF is usable. Most importantly, the DSMC allows the discovery of hitherto unsuspected dust coma properties in those cases which cannot be treated by the DMF. This leads to a thorough reconsideration of the properties of the near-nucleus dust dynamics. In particular, the results show that (1) none of the three forces considered here can be neglected a priori, in particular not the radiation pressure; (2) hitherto unsuspected new families of grain trajectories exist, for instance trajectories leading from the nightside surface to the dayside coma; (3) a wealth of ballistic-like trajectories leading from one point of the surface to another exists; on the dayside, such trajectories lead to the formation of "mini-volcanoes." The present model and results are discussed carefully. It is shown that (1) the neglected forces (inertia associated with nucleus rotation, solar tidal force) are, in general, not negligible everywhere, and (2) when allowing for these additional forces, a time-dependent model will, in general, have to be used. The future steps of development of the model are outlined.

  6. Simulating Photon Scattering Effects in Structurally Detailed Ventricular Models Using a Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Martin J Bishop

    2014-09-01

    Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modelling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon 'packets' as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from vessels, at times having a distinct 'humped' morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with 'virtual-electrode' regions of strong de-/hyper-polarised tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarisation. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity.

  7. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    Science.gov (United States)

    Zhang, D.; Liao, Q.

    2016-12-01

    The Bayesian inference framework provides a convenient way to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials using the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the different importance of the parameters, under the condition of high random dimensions in the stochastic space. Furthermore, in the case of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood at very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of

  8. Monte Carlo calculation model for heat radiation of inclined cylindrical flames and its application

    Science.gov (United States)

    Chang, Zhangyu; Ji, Jingwei; Huang, Yuankai; Wang, Zhiyi; Li, Qingjie

    2017-02-01

    Based on the Monte Carlo method, a calculation model and its C++ computing program for radiant heat transfer from an inclined cylindrical flame are proposed. In this model, the total radiation energy of the inclined cylindrical flame is distributed equally among a certain number of energy beams, which are emitted randomly from the flame surface. The incident heat flux on a surface is calculated by counting the number of energy beams which can reach that surface. The paper mainly studies the geometrical criterion for deciding whether an energy beam emitted by the inclined cylindrical flame is validly received by another surface. Compared to Mudan's formula results for a straight cylinder or a cylinder with a 30° tilt angle, the calculated view factors range from 0.0043 to 0.2742, and the predicted view factors agree well with Mudan's results. The changing trend and values of the incident heat fluxes computed by the model are consistent with the experimental data measured by Rangwala et al. As a case study, incident heat fluxes on a gasoline tank, both on the side and on the top surface, are calculated with the model. The heat radiation comes from an inclined cylindrical flame generated by another 1000 m3 gasoline tank 4.6 m away. The cone angle of the flame toward the adjacent oil tank is 45° and the polar angle is 0°. The top surface and the side surface of the tank were divided into 960 and 5760 grids during the calculation, respectively. The maximum incident heat flux is 39.64 kW/m2 on the side surface and 51.31 kW/m2 on the top surface. Distributions of the incident heat flux on the surface of the oil tank and on the ground around the fire tank are obtained as well.
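
    The beam-counting estimate of a view factor can be sketched for a simpler geometry (a horizontal disk radiating to a parallel plate; the inclined-flame geometry and validity criterion of the paper are not reproduced, and Python is used here instead of the paper's C++):

    ```python
    # Emit cosine-weighted (Lambertian) rays from random points on a disk and
    # count the fraction intercepted by a coaxial square plate a distance away.
    import math, random

    def view_factor(n_rays=200000, disk_r=1.0, gap=2.0, half=1.0):
        hits = 0
        for _ in range(n_rays):
            r = disk_r * math.sqrt(random.random())   # uniform point on the disk
            phi = 2.0 * math.pi * random.random()
            x0, y0 = r * math.cos(phi), r * math.sin(phi)
            u1, u2 = random.random(), random.random()
            st = math.sqrt(u1)                        # sin(theta), cosine-weighted
            ct = math.sqrt(1.0 - u1)                  # cos(theta) > 0 (upward)
            psi = 2.0 * math.pi * u2
            dx, dy = st * math.cos(psi), st * math.sin(psi)
            t = gap / ct                              # march to the target plane
            x, y = x0 + t * dx, y0 + t * dy
            hits += (abs(x) <= half and abs(y) <= half)
        return hits / n_rays

    print(f"F(disk -> plate) ~ {view_factor():.3f}")
    ```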

  9. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    Science.gov (United States)

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX makes it possible to build a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4×0.4×2.0 mm^3 per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation.

  10. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. Describing the stability failure risk ratio jointly by probability and possibility falls short in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method can feasibly be applied to the analysis of stability failure risk for gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value.

  11. Monte Carlo Study of Topological Defects in the 3D Heisenberg Model

    CERN Document Server

    Holm, C; Holm, Christian; Janke, Wolfhard

    1994-01-01

    We use single-cluster Monte Carlo simulations to study the role of topological defects in the three-dimensional classical Heisenberg model on simple cubic lattices of size up to $80^3$. By applying reweighting techniques to time series generated in the vicinity of the approximate infinite volume transition point $K_c$, we obtain clear evidence that the temperature derivative of the average defect density $d\langle n \rangle/dT$ behaves qualitatively like the specific heat, i.e., both observables are finite in the infinite volume limit. This is in contrast to results by Lau and Dasgupta [Phys. Rev. B39 (1989) 7212] who extrapolated a divergent behavior of $d\langle n \rangle/dT$ at $K_c$ from simulations on lattices of size up to $16^3$. We obtain weak evidence that $d\langle n \rangle/dT$ scales with the same critical exponent as the specific heat. As a byproduct of our simulations, we obtain a very accurate estimate for the ratio $\alpha/\

  12. Monte Carlo Modeling of Minor Actinide Burning in Fissile Spallation Targets

    Science.gov (United States)

    Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter

    2014-06-01

    Minor actinides (MA) constitute a harmful part of spent nuclear fuel due to their long half-lives and high radiotoxicity. Neutrons produced in spallation targets of Accelerator Driven Systems (ADS) can be used to transmute and burn MA. Non-fissile targets are commonly considered in ADS design; however, additional neutrons from fission reactions can be exploited in targets made of fissile materials. We developed a Geant4-based code, MCADS (Monte Carlo model for Accelerator Driven Systems), for simulating neutron production and transport in different spallation targets. MCADS is suitable for calculating spatial distributions of neutron flux and energy deposition, neutron multiplication factors, and other characteristics of the produced neutrons and residual nuclei. Several modifications of the Geant4 source code described in this work were made in order to simulate targets containing MA. Results of MCADS simulations are reported for several cylindrical targets made of U+Am, Am or Am2O3, including more complicated design options with a neutron booster and a reflector. Estimates of Am burning rates are given for the considered cases.

  13. Modeling uncertainty in risk assessment: an integrated approach with fuzzy set theory and Monte Carlo simulation.

    Science.gov (United States)

    Arunraj, N S; Mandal, Saptarshi; Maiti, J

    2013-06-01

    Modeling uncertainty during risk assessment is a vital component of effective decision making. Unfortunately, most risk assessment studies lack an adequate uncertainty analysis. The development of tools and techniques for capturing uncertainty in risk assessment is ongoing, and there has been substantial growth in this respect in health risk assessment. In this study, the cross-disciplinary approaches to uncertainty analysis are identified, and a modified approach suitable for industrial safety risk assessment is proposed using fuzzy set theory and Monte Carlo simulation. The proposed method is applied to a benzene extraction unit (BEU) of a chemical plant. The case study results show that the proposed method provides a better measure of uncertainty than existing methods: unlike traditional risk analysis, it takes both the variability and the uncertainty of information into account in the risk calculation, and instead of a single risk value it provides an interval of risk values for a given percentile of risk. The implications of these results in terms of risk control and regulatory compliance are also discussed.
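
    A common way to combine the two formalisms, consistent with the interval-valued risk described above, is a two-stage loop: each fuzzy input is reduced to an interval at a given alpha-cut, and an ordinary Monte Carlo run over the random variables is carried out at the interval endpoints (which suffices here only because the toy model below is monotone in the fuzzy input). The exposure model and all numbers are illustrative, not the BEU case study:

      import numpy as np

      rng = np.random.default_rng(2)

      def inner_mc(release_rate, n=20_000):
          """Probabilistic inner loop over the random variables (toy model)."""
          wind = rng.lognormal(mean=1.0, sigma=0.3, size=n)   # m/s
          conc = release_rate / wind                          # illustrative exposure
          return np.quantile(conc, 0.95)                      # P95 risk value

      # Triangular fuzzy release rate (a, b, c); alpha-cuts give nested intervals.
      a, b, c = 0.8, 1.0, 1.3
      for alpha in (0.0, 0.5, 1.0):
          lo, hi = a + alpha * (b - a), c - alpha * (c - b)   # alpha-cut interval
          print(f"alpha={alpha:.1f}: P95 risk in "
                f"[{inner_mc(lo):.3f}, {inner_mc(hi):.3f}]")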

  14. Modeling the Biophysical Effects in a Carbon Beam Delivery Line using Monte Carlo Simulation

    CERN Document Server

    Cho, Ilsung; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-01-01

    Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion beam therapy. In this study the biological effectiveness of a carbon ion beam delivery system was investigated using Monte Carlo simulation. A carbon ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation tool kit was used to simulate carbon beam transport into media. An incident carbon ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model is applied to describe the RBE of 10% survival in human salivary gland (HSG) cells. The RBE weighted dose was estimated as a function of the penetrating depth in the water phantom along the incident beam direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the water phantom depth.

  15. Monte Carlo study of the double and super-exchange model with lattice distortion

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, J R; Vallejo, E; Navarro, O [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico, Apartado Postal 70-360, 04510 Mexico D. F. (Mexico); Avignon, M, E-mail: jrsuarez@iim.unam.m [Institut Neel, Centre National de la Recherche Scientifique (CNRS) and Universite Joseph Fourier, BP 166, 38042 Grenoble Cedex 9 (France)

    2009-05-01

    In this work a magneto-elastic phase transition was obtained in a linear chain due to the interplay between magnetism and lattice distortion in a double and super-exchange model. A linear chain consisting of localized classical spins interacting with itinerant electrons is considered. Due to the double exchange interaction, localized spins tend to align ferromagnetically. This ferromagnetic tendency is expected to be frustrated by anti-ferromagnetic super-exchange interactions between neighboring localized spins. Additionally, the lattice parameter is allowed to undergo small changes, which contribute harmonically to the energy of the system. The phase diagram is obtained as a function of the electron density and the super-exchange interaction using a Monte Carlo minimization. At low super-exchange interaction energy a phase transition occurs between an electron-full ferromagnetic distorted phase and an electron-empty anti-ferromagnetic undistorted phase. In this case all electrons and lattice distortions were found within the ferromagnetic domain. For high super-exchange interaction energy, a phase transition was found between a two-site distorted periodic arrangement of independent magnetic polarons ordered anti-ferromagnetically and the electron-empty anti-ferromagnetic undistorted phase. For this high interaction energy, Wigner crystallization, lattice distortion and charge distribution inside two-site polarons were obtained.

  16. Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations

    Science.gov (United States)

    Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-09-01

    The Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation tool kit was used to simulate carbon-ion beam transport into media. An incident energy carbon-ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE of 10% survival in human salivary-gland (HSG) cells. The RBE weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.

  17. Monte-Carlo event generation for a two-Higgs-doublet model with maximal CP symmetry

    CERN Document Server

    Brehmer, Johann

    2012-01-01

    Recently a two-Higgs-doublet model with maximal symmetry under generalised CP transformations, the MCPM, has been proposed. The theory features a unique fermion mass spectrum which, although not describing nature precisely, provides a good approximation. It also predicts the existence of five Higgs bosons with a particular signature. In this thesis I implemented the MCPM into the Monte-Carlo event generation package MadGraph, allowing the simulation of any MCPM tree-level process. The generated events are in a standardised format and can be used for further analysis with tools such as PYTHIA or GEANT, eventually leading to the comparison with experimental data and the exclusion or discovery of the theory. The implementation was successfully validated in different ways. It was then used for a first comparison of the MCPM signal events with the SM background and previous searches for new physics, hinting that the data expected at the LHC in the next years might provide exclusion limits or show signatures of thi...

  18. Monte Carlo Technique Used to Model the Degradation of Internal Spacecraft Surfaces by Atomic Oxygen

    Science.gov (United States)

    Banks, Bruce A.; Miller, Sharon K.

    2004-01-01

    Atomic oxygen is one of the predominant constituents of Earth's upper atmosphere. It is created by the photodissociation of molecular oxygen (O2) into single O atoms by ultraviolet radiation. It is chemically very reactive because a single O atom readily combines with another O atom or with other atoms or molecules that can form a stable oxide. The effects of atomic oxygen on the external surfaces of spacecraft in low Earth orbit can have dire consequences for spacecraft life, and this is a well-known and much studied problem. Much less is known about the effects of atomic oxygen on the internal surfaces of spacecraft. This degradation can occur when openings in components of the spacecraft exterior allow the entry of atomic oxygen into regions that may not receive direct atomic oxygen attack but rather scattered attack. Openings can exist because of spacecraft venting, microwave cavities, and apertures for Earth viewing, Sun sensors, or star trackers. The effects of atomic oxygen erosion of polymers interior to an aperture on a spacecraft were simulated at the NASA Glenn Research Center by using Monte Carlo computational techniques. A two-dimensional model was used to provide quantitative indications of the attenuation of atomic oxygen flux as a function of the distance into a parallel-walled cavity. The model allows the atomic oxygen arrival direction, the Maxwell-Boltzmann temperature, and the ram energy to be varied, along with the interaction parameters: the degree of recombination upon impact with polymer or nonreactive surfaces, the initial reaction probability, the dependence of the reaction probability upon energy and angle of attack, the degree of specularity of scattering off reactive and nonreactive surfaces, and the degree of thermal accommodation upon impact with reactive and nonreactive surfaces. Varying these parameters allows the model to produce atomic oxygen erosion geometries that replicate actual experimental results from space. The degree of
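
    As a rough illustration of this kind of two-dimensional simulation, the sketch below launches atoms into a parallel-walled slot, applies an assumed reaction probability at each wall impact, and re-emits non-reacting atoms diffusely. The geometry, probabilities, and angular spread are placeholders, not the Glenn model's calibrated parameters.

      import numpy as np

      rng = np.random.default_rng(3)

      W, D = 1.0, 10.0        # slot width and depth (arbitrary units)
      P_REACT = 0.1           # reaction probability per wall impact (assumed)
      N, NBINS = 200_000, 20
      hits = np.zeros(NBINS)  # erosion (reaction) events binned by depth

      for _ in range(N):
          x, y = rng.uniform(0.0, W), 0.0
          theta = rng.normal(0.0, 0.3)            # spread about the ram direction
          dx, dy = np.sin(theta), np.cos(theta)
          while True:
              # Distance to a side wall and to the top/bottom opening.
              t_wall = ((W - x) / dx) if dx > 0 else (-x / dx) if dx < 0 else np.inf
              t_open = ((D - y) / dy) if dy > 0 else (-y / dy) if dy < 0 else np.inf
              if t_open <= t_wall:
                  break                           # escapes out the top or bottom
              x, y = x + t_wall * dx, y + t_wall * dy
              if rng.random() < P_REACT:          # atom reacts: erosion at this depth
                  hits[min(int(y / D * NBINS), NBINS - 1)] += 1
                  break
              # Diffuse (cosine-law) re-emission about the inward wall normal.
              inward = -1.0 if dx > 0 else 1.0
              phi = np.arcsin(2.0 * rng.random() - 1.0)
              dx, dy = inward * np.cos(phi), np.sin(phi)

      print("relative erosion vs. depth:", np.round(hits / hits.max(), 2))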

  19. Determinants for a successful Sémont maneuver: an in-vitro study with a semicircular canal model

    Directory of Open Access Journals (Sweden)

    Dominik Obrist

    2016-09-01

    Full Text Available Objective: To evaluate the effect of the time between the movements/steps, the angle of the body movements, and the angular velocity of the maneuvers in an in-vitro model of a semicircular canal (SCC), in order to improve the efficacy of the Sémont maneuver in benign paroxysmal positional vertigo (BPPV). Methods: Sémont maneuvers were performed on an in-vitro SCC model. Otoconia trajectories were captured by a video camera. The effects of time between the movements, angles of motion (0°, 10°, 20°, 30° below the horizontal line), different angular velocities (90, 135, 180°/s) and otoconia size (36 and 50 µm) on the final position of the otoconia in the SCC were tested. Results: Without extension of the movements beyond the horizontal, the in-vitro experiments (with particles corresponding to 50 µm diameter) did not yield successful canalith repositioning. If the movements were extended by 20° beyond the horizontal position, Sémont maneuvers were successful with resting times of at least 16 s. For larger extension angles the required time decreased. However, for smaller particles (36 µm) the required time doubled. The angular maneuver velocity (tested between 90 and 180°/s) did not have a major impact on the final position of the otoconia. Interpretation: The two primary determinants for success of the Sémont maneuver are the time between the movements and the extension of the movements beyond the horizontal. The time between the movements should be at least 45 s. Angles of 20° or more below the horizontal line (the so-called Sémont++) should increase the success rate of the Sémont maneuver.

  20. Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

    Science.gov (United States)

    Gu, Jianwei

    There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses, of 13.05, 11.41 and 11.56 mGy/100 mAs, from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses, of 10.28, 12.08 and 11.35 mGy/100 mAs, from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scans of the 6-month and 9-month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using the measured CTDI values. These results demonstrated that the

  1. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
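
    The central sampling step named above, inverse transform sampling from a PDF tabulated out of the PSF, can be sketched generically; the energy spectrum below is made up, whereas the real PDFs come from the PSF analysis.

      import numpy as np

      rng = np.random.default_rng(4)

      # Tabulated PDF of, e.g., photon energy derived from a phase space file
      # (the shape used here is purely illustrative).
      energy = np.linspace(0.05, 6.0, 120)               # MeV bin centers
      pdf = energy * np.exp(-energy / 1.2)               # hypothetical spectrum
      pdf /= np.trapz(pdf, energy)

      # Build the CDF and sample by inverting it.
      cdf = np.cumsum(pdf)
      cdf /= cdf[-1]
      u = rng.random(1_000_000)
      samples = np.interp(u, cdf, energy)                # inverse transform sampling

      print("sample mean energy: %.3f MeV" % samples.mean())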

  2. Properties of Carbon-Oxygen White Dwarfs From Monte Carlo Stellar Models

    Science.gov (United States)

    Fields, C. E.; Farmer, R.; Petermann, I.; Iliadis, C.; Timmes, F. X.

    2016-05-01

    We investigate properties of carbon-oxygen white dwarfs with respect to the composite uncertainties in the reaction rates using the stellar evolution toolkit, Modules for Experiments in Stellar Astrophysics (MESA) and the probability density functions in the reaction rate library STARLIB. These are the first Monte Carlo stellar evolution studies that use complete stellar models. Focusing on 3 $M_\odot$ models evolved from the pre-main-sequence to the first thermal pulse, we survey the remnant core mass, composition, and structure properties as a function of 26 STARLIB reaction rates covering hydrogen and helium burning using a Principal Component Analysis and Spearman Rank-Order Correlation. Relative to the arithmetic mean value, we find the width of the 95% confidence interval to be $\Delta M_{\rm 1TP} \approx 0.019\,M_\odot$ for the core mass at the first thermal pulse, $\Delta t_{\rm 1TP} \approx 12.50$ Myr for the age, $\Delta \log(T_{\rm c}/{\rm K}) \approx 0.013$ for the central temperature, $\Delta \log(\rho_{\rm c}/{\rm g\,cm^{-3}}) \approx 0.060$ for the central density, $\Delta Y_{\rm e,c} \approx 2.6 \times 10^{-5}$ for the central electron fraction, $\Delta X_{\rm c}(^{22}{\rm Ne}) \approx 5.8 \times 10^{-4}$, $\Delta X_{\rm c}(^{12}{\rm C}) \approx 0.392$, and $\Delta X_{\rm c}(^{16}{\rm O}) \approx 0.392$. Uncertainties in the experimental $^{12}{\rm C}(\alpha,\gamma)^{16}{\rm O}$, triple-$\alpha$, and $^{14}{\rm N}(p,\gamma)^{15}{\rm O}$ reaction rates dominate these variations. We also consider a grid of 1-6 $M_\odot$ models evolved from the pre-main-sequence to the final white dwarf to probe the sensitivity of the initial-final mass relation to experimental uncertainties in the hydrogen and helium reaction rates.
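
    The sampling scheme named in the abstract can be miniaturized as follows: rate multiplication factors are drawn from log-normal PDFs (the STARLIB convention), fed to the stellar model, and the outputs are rank-correlated with the inputs. The "model" below is a trivial stand-in for a full MESA run, and the uncertainty factors are invented:

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(5)
      n_samples, n_rates = 1000, 3

      # Log-normal rate factors: median 1, assumed factor-of-f_u uncertainties.
      f_u = np.array([1.3, 1.2, 1.1])
      factors = np.exp(rng.normal(0.0, np.log(f_u), size=(n_samples, n_rates)))

      def stellar_model(f):
          """Stand-in for a full stellar evolution run: returns a toy 'core
          mass at the first thermal pulse' depending on the rate factors."""
          return 0.55 + 0.02 * np.log(f[0]) - 0.01 * np.log(f[1]) \
                 + 0.003 * rng.normal()

      m_1tp = np.array([stellar_model(f) for f in factors])
      for k, name in enumerate(["12C(a,g)16O", "triple-alpha", "14N(p,g)15O"]):
          rho, _ = spearmanr(factors[:, k], m_1tp)
          print(f"{name:>12s}: Spearman rho = {rho:+.2f}")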

  3. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    Science.gov (United States)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (Poly-methyl-meth-acrylate). PMMA directly vaporizes when subjected to a high intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market, with different values of the thermophysical properties. Therefore, to apply such analytical models, the values of these properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these properties restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different powers and scanning speeds has been predicted, and the relative impact of each thermophysical property has been determined using sensitivity analysis.
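
    A Monte Carlo uncertainty evaluation of this kind can be sketched compactly. The depth expression below is a generic energy-balance placeholder (absorbed power balancing the enthalpy of the vaporized volume), not the specific analytical model of the paper, and all nominal values and spreads are assumed for illustration:

      import numpy as np

      rng = np.random.default_rng(6)
      n = 50_000

      # Thermophysical properties of PMMA with assumed uncertainties.
      rho = rng.normal(1.19e3, 0.02e3, n)   # density, kg/m^3
      cp  = rng.normal(1.47e3, 0.10e3, n)   # specific heat, J/(kg K)
      Lv  = rng.normal(1.0e6, 0.15e6, n)    # effective vaporization heat, J/kg
      dT  = rng.normal(360.0, 20.0, n)      # rise to vaporization temperature, K

      P, v, w = 20.0, 0.1, 200e-6           # power (W), scan speed (m/s), kerf (m)

      # Placeholder energy-balance model for the channel depth.
      depth = P / (rho * v * w * (cp * dT + Lv))

      lo, hi = np.percentile(depth * 1e6, [2.5, 97.5])
      print(f"depth = {depth.mean()*1e6:.0f} um, "
            f"95% interval [{lo:.0f}, {hi:.0f}] um")

      # Crude sensitivity: correlation of each input with the predicted depth.
      for name, x in [("rho", rho), ("cp", cp), ("Lv", Lv), ("dT", dT)]:
          print(name, np.corrcoef(x, depth)[0, 1].round(2))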

  4. Monte Carlo Simulation Modeling of a Regional Stroke Team’s Use of Telemedicine

    Science.gov (United States)

    Torabi, Elham; Froehle, Craig M.; Lindsell, Chris J.; Moomaw, Charles J.; Kanter, Daniel; Kleindorfer, Dawn; Adeoye, Opeolu

    2015-01-01

    Objectives The objective of this study was to evaluate operational policies that may improve the proportion of eligible stroke patients within a population who would receive intravenous recombinant tissue plasminogen activator (rt-PA), and minimize time to treatment in eligible patients. Methods In the context of a regional stroke team, the authors examined the effects of staff location and telemedicine deployment policies on the timeliness of thrombolytic treatment, and estimated the efficacy and cost-effectiveness of six different policies. A process map comprising the steps from recognition of stroke symptoms to intravenous administration of rt-PA was constructed using data from published literature combined with expert opinion. Six scenarios were investigated: telemedicine deployment (none, all, or outer-ring hospitals only); and, staff location (center of region or anywhere in region). Physician locations were randomly generated based on their zip codes of residence and work. The outcomes of interest were onset-to-treatment (OTT) time, door-to-needle (DTN) time, and the proportion of patients treated within three hours. A Monte Carlo simulation of the stroke team care-delivery system was constructed based on a primary dataset of 121 ischemic stroke patients who were potentially eligible for treatment with rt-PA. Results With the physician located randomly in the region, deploying telemedicine at all hospitals in the region (compared with partial or no telemedicine) would result in the highest rates of treatment within three hours (80% vs. 75% vs. 70%) and the shortest OTT (148 vs. 164 vs. 176 minutes), and DTN (45 vs. 61 vs. 73 minutes) times. However, locating the on-call physician centrally coupled with partial telemedicine deployment (five of the 17 hospitals) would be most cost-effective with comparable eligibility and treatment times. Conclusions Given the potential societal benefits, continued efforts to deploy telemedicine appear warranted. Aligning the

  5. Monte Carlo Simulation Modeling of a Regional Stroke Team's Use of Telemedicine.

    Science.gov (United States)

    Torabi, Elham; Froehle, Craig M; Lindsell, Christopher J; Moomaw, Charles J; Kanter, Daniel; Kleindorfer, Dawn; Adeoye, Opeolu

    2016-01-01

    The objective of this study was to evaluate operational policies that may improve the proportion of eligible stroke patients within a population who would receive intravenous recombinant tissue plasminogen activator (rt-PA) and minimize time to treatment in eligible patients. In the context of a regional stroke team, the authors examined the effects of staff location and telemedicine deployment policies on the timeliness of thrombolytic treatment, and estimated the efficacy and cost-effectiveness of six different policies. A process map comprising the steps from recognition of stroke symptoms to intravenous administration of rt-PA was constructed using data from published literature combined with expert opinion. Six scenarios were investigated: telemedicine deployment (none, all, or outer-ring hospitals only) and staff location (center of region or anywhere in region). Physician locations were randomly generated based on their zip codes of residence and work. The outcomes of interest were onset-to-treatment (OTT) time, door-to-needle (DTN) time, and the proportion of patients treated within 3 hours. A Monte Carlo simulation of the stroke team care-delivery system was constructed based on a primary data set of 121 ischemic stroke patients who were potentially eligible for treatment with rt-PA. With the physician located randomly in the region, deploying telemedicine at all hospitals in the region (compared with partial or no telemedicine) would result in the highest rates of treatment within 3 hours (80% vs. 75% vs. 70%) and the shortest OTT (148 vs. 164 vs. 176 minutes) and DTN (45 vs. 61 vs. 73 minutes) times. However, locating the on-call physician centrally coupled with partial telemedicine deployment (five of the 17 hospitals) would be most cost-effective with comparable eligibility and treatment times. Given the potential societal benefits, continued efforts to deploy telemedicine appear warranted. Aligning the incentives between those who would have to fund
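
    The process-map simulation itself is conceptually simple; a toy version follows, with made-up log-normal step durations and a crude representation of the telemedicine effect on the decision step. None of these parameters come from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      N = 10_000

      def simulate(telemedicine):
          # Step durations in minutes (all parameters assumed).
          onset_to_door = rng.lognormal(np.log(60), 0.5, N)
          imaging       = rng.lognormal(np.log(25), 0.3, N)
          # Telemedicine removes physician travel from the decision step.
          decision      = rng.lognormal(np.log(15 if telemedicine else 35), 0.3, N)
          needle        = rng.lognormal(np.log(10), 0.3, N)
          return onset_to_door + imaging + decision + needle   # OTT, minutes

      for tm in (False, True):
          ott = simulate(tm)
          print(f"telemedicine={tm}: median OTT {np.median(ott):.0f} min, "
                f"treated <3 h: {np.mean(ott < 180):.0%}")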

  6. Application of the limited strength model of self-regulation to understanding exercise effort, planning and adherence.

    Science.gov (United States)

    Martin Ginis, Kathleen A; Bray, Steven R

    2010-12-01

    The limited strength model posits that self-regulatory strength is a finite, renewable resource that is drained when people attempt to regulate their emotions, thoughts or behaviours. The purpose of this study was to determine whether self-regulatory depletion can explain lapses in exercise effort, planning and adherence. In a lab-based experiment, participants exposed to a self-regulatory depletion manipulation generated lower levels of work during a 10 min bicycling task, and planned to exert less effort during an upcoming exercise bout, compared with control participants. The magnitude of reduction in planned exercise effort predicted exercise adherence over a subsequent 8-week period. Together, these results suggest that self-regulatory depletion can influence exercise effort, planning and decision-making and that the depletion of self-regulatory resources can explain episodes of exercise non-adherence both in the lab and in everyday life.

  7. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

    Full Text Available Abstract Background Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte-Carlo simulations, to measure completeness grades of GRNs. Results We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can
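
    The heuristic described above, stripped to its core, is: draw random parameter sets, integrate the ODEs with and without a perturbation, and count how often the qualitative effect is parameter-invariant. A toy two-gene illustration of that loop (the network, parameters, and knockout are invented; the sea urchin model itself has far more nodes):

      import numpy as np
      from scipy.integrate import solve_ivp

      rng = np.random.default_rng(8)

      def grn(t, x, k, knock_out_a):
          a, b = x
          prod_a = 0.0 if knock_out_a else k[0]
          dadt = prod_a - k[1] * a
          dbdt = k[2] * a**2 / (k[3] + a**2) - k[4] * b   # B activated by A (Hill)
          return [dadt, dbdt]

      def steady_b(k, ko):
          sol = solve_ivp(grn, (0, 200), [0.1, 0.1], args=(k, ko), rtol=1e-6)
          return sol.y[1, -1]

      # Randomly sampled parameter sets; record the sign of the knockout effect.
      signs = []
      for _ in range(200):
          k = 10.0 ** rng.uniform(-1, 1, size=5)          # log-uniform parameters
          signs.append(np.sign(steady_b(k, True) - steady_b(k, False)))
      invariant = np.mean(np.array(signs) == -1.0)
      print(f"knockout of A lowers B in {invariant:.0%} of parameter sets")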

  8. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    Science.gov (United States)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and perform the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of intensity, interferometric phase and coherence of each region are explored respectively, and are included as region terms. Roofs are not directly considered as they are mixed with walls into the layover area in most cases. When estimating the similarity between the building hypothesis and the real data, the prior, the region term, together with the edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and get rid of local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.

  9. Comprehensive modeling of solid phase epitaxial growth using Lattice Kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Bragado, Ignacio, E-mail: ignacio.martin@imdea.org [IMDEA Materials Institute, C/ Eric Kandel 2, Parque Científico y Tecnológico de Getafe 28906 Madrid, Getafe (Spain)

    2013-05-15

    Damage evolution of irradiated silicon is, and has been, a topic of interest for the last decades for its applications to the semiconductor industry. In particular, sometimes, the damage is heavy enough to collapse the lattice and to locally amorphize the silicon, while in other cases amorphization is introduced explicitly to improve other implanted profiles. Subsequent annealing of the implanted samples heals the amorphized regions through Solid Phase Epitaxial Regrowth (SPER). SPER is a complicated process. It is anisotropic, it generates defects in the recrystallized silicon, it has a different amorphous/crystalline (A/C) roughness for each orientation, leaving pits in Si(1 1 0), and in Si(1 1 1) it produces two modes of recrystallization with different rates. The recently developed code MMonCa has been used to introduce a physically-based comprehensive model using Lattice Kinetic Monte Carlo that explains all the above singularities of silicon SPER. The model operates by having, as building blocks, the silicon lattice microconfigurations and their four twins. It detects the local configurations, assigns microscopical growth rates, and reconstructs the positions of the lattice locally with one of those building blocks. The overall results reproduce the (a) anisotropy as a result of the different growth rates, (b) localization of SPER induced defects, (c) roughness trends of the A/C interface, (d) pits on Si(1 1 0) regrown surfaces, and (e) bimodal Si(1 1 1) growth. It also provides physical insights of the nature and shape of deposited defects and how they assist in the occurrence of all the above effects.
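
    MMonCa's internals are not reproduced in the record; the generic engine underneath any Lattice Kinetic Monte Carlo code is, however, the rejection-free event selection loop sketched below, with an illustrative orientation-dependent rate catalog standing in for the real microconfiguration growth rates.

      import numpy as np

      rng = np.random.default_rng(9)

      # Hypothetical event catalog: recrystallization attempts at A/C interface
      # sites with orientation-dependent rates (numbers are illustrative only).
      rates = {"(100)-site": 50.0, "(110)-site": 8.0, "(111)-site": 1.5}

      names = list(rates)
      r = np.array([rates[n] for n in names])
      R = r.sum()
      cum = np.cumsum(r)
      t = 0.0
      counts = dict.fromkeys(rates, 0)

      for _ in range(100_000):
          # Rejection-free KMC: pick an event with probability rate/R ...
          i = np.searchsorted(cum, rng.random() * R)
          counts[names[i]] += 1
          # ... and advance the clock by an exponentially distributed waiting time.
          t += -np.log(rng.random()) / R

      print(f"simulated time: {t:.2f} (arbitrary units)")
      print(counts)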

  10. Range verification methods in particle therapy: underlying physics and Monte Carlo modelling

    Directory of Open Access Journals (Sweden)

    Aafke Christine Kraan

    2015-07-01

    Full Text Available Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in-vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including beta+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modelling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modelling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  11. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling.

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  12. Hybrid method for fast Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with tumor-like heterogeneities.

    Science.gov (United States)

    Zhu, Caigang; Liu, Quan

    2012-01-01

    We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
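
    In the perturbation step, stored photon trajectories are reused by re-weighting rather than re-simulating. The standard perturbation Monte Carlo weight update, given here as the generic relation rather than necessarily the exact variant used by the authors, for a trajectory undergoing $j$ collisions and accumulating path length $S$ inside a region whose scattering and total interaction coefficients change from $\mu_s$, $\mu_t$ to $\hat{\mu}_s$, $\hat{\mu}_t$, is

      $$ w' = w \left( \frac{\hat{\mu}_s}{\mu_s} \right)^{j} e^{-(\hat{\mu}_t - \mu_t)\, S}. $$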

  13. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    Full Text Available The need to choose appropriate interaction models is among the major drawbacks of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm, which introduces an energy penalty term into the acceptance criteria. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
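
    The acceptance rule that distinguishes HRMC from plain RMC is easy to state: the usual chi-squared move criterion is augmented with a Boltzmann-weighted energy penalty. A schematic sketch, with the weighting factor and numbers purely illustrative rather than taken from the paper:

      import numpy as np

      rng = np.random.default_rng(10)

      def hrmc_accept(d_chi2, d_energy, kT, w=1.0):
          """Hybrid Reverse Monte Carlo acceptance: the standard RMC chi^2
          term plus an energy penalty weighted by w (schematic form)."""
          log_p = -0.5 * d_chi2 - w * d_energy / kT
          return np.log(rng.random()) < min(0.0, log_p)

      # Example: a move that slightly worsens the fit but lowers the energy.
      print(hrmc_accept(d_chi2=0.4, d_energy=-0.3, kT=1.0))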

  14. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389)

  15. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    Energy Technology Data Exchange (ETDEWEB)

    Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se [Department of Radiation Physics, Göteborg University, Göteborg 41345 (Sweden); Kindblom, Jon [Department of Oncology, Sahlgrenska University Hospital, Göteborg 41345 (Sweden); Bernhardt, Peter [Department of Radiation Physics, Göteborg University, Göteborg 41345, Sweden and Department of Nuclear Medicine, Sahlgrenska University Hospital, Göteborg 41345 (Sweden)

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became

  16. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    Energy Technology Data Exchange (ETDEWEB)

    Hulett, David T. [Hulett and Associates, LLC (United States); Nosbisch, Michael R. [Project Time and Cost, Inc. (United States)

    2012-07-01

    - Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserve of time and cost that are the main results of this analysis apply if that plan is to be followed. Of
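
    The record above is truncated, but the risk-driver mechanics it describes are straightforward to sketch: each identified risk occurs with some probability and, when it occurs, multiplies the schedule; simulating cost and time together yields the cost-time pairs for the 'football chart' and the P-80 values. All probabilities, multipliers, and targets below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(11)
      N = 20_000

      base_duration, base_cost = 24.0, 100.0   # months, $M (illustrative)

      # Risk drivers: (probability of occurring, triangular duration multiplier).
      drivers = [
          (0.5, (1.00, 1.05, 1.20)),
          (0.3, (1.00, 1.10, 1.40)),
          (0.2, (0.95, 1.00, 1.10)),
      ]

      dur = np.full(N, base_duration)
      for p, tri in drivers:
          occurs = rng.random(N) < p
          dur *= np.where(occurs, rng.triangular(*tri, N), 1.0)

      burn = rng.normal(base_cost / base_duration, 0.4, N)  # burn-rate risk, $M/month
      cost = dur * burn                                     # time-dependent cost

      print(f"P80 duration: {np.percentile(dur, 80):.1f} months")
      print(f"P80 cost:     {np.percentile(cost, 80):.1f} $M")
      # Joint probability of meeting BOTH targets ('football chart' corner):
      print(f"P(on time and on budget): {np.mean((dur < 26) & (cost < 110)):.0%}")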

  17. Comparing kinetic Monte Carlo and thin-film modeling of transversal instabilities of ridges on patterned substrates

    Science.gov (United States)

    Tewes, Walter; Buller, Oleg; Heuer, Andreas; Thiele, Uwe; Gurevich, Svetlana V.

    2017-03-01

    We employ kinetic Monte Carlo (KMC) simulations and a thin-film continuum model to comparatively study the transversal (i.e., Plateau-Rayleigh) instability of ridges formed by molecules on pre-patterned substrates. It is demonstrated that the evolution of the occurring instability qualitatively agrees between the two models for a single ridge as well as for two weakly interacting ridges. In particular, it is shown for both models that the instability occurs on well defined length and time scales which are, for the KMC model, significantly larger than the intrinsic scales of thermodynamic fluctuations. This is further evidenced by the similarity of dispersion relations characterizing the linear instability modes.

  18. Mental effort

    NARCIS (Netherlands)

    Kirschner, Paul A.; Kirschner, Femke

    2013-01-01

    Kirschner, P. A., & Kirschner, F. (2012). Mental effort. In N. Seel (Ed.), Encyclopedia of the sciences of learning, Volume 5 (pp. 2182-2184). New York, NY: Springer. doi:10.1007/978-1-4419-1428-6_226

  19. One State's Systems Change Efforts to Reduce Child Care Expulsion: Taking the Pyramid Model to Scale

    Science.gov (United States)

    Vinh, Megan; Strain, Phil; Davidon, Sarah; Smith, Barbara J.

    2016-01-01

    This article describes the efforts funded by the state of Colorado to address unacceptably high rates of expulsion from child care. Based on the results of a 2006 survey, the state of Colorado launched two complementary policy initiatives in 2009 to impact expulsion rates and to improve the use of evidence-based practices related to challenging…

  20. Modeling Psychological Empowerment among Youth Involved in Local Tobacco Control Efforts

    Science.gov (United States)

    Holden, Debra J.; Evans, W. Douglas; Hinnant, Laurie W.; Messeri, Peter

    2005-01-01

    The American Legacy Foundation funded 13 state health departments for their Statewide Youth Movement Against Tobacco Use in September 2000. Its goal was to create statewide tobacco control initiatives implemented with youth leadership. The underlying theory behind these initiatives was that tobacco control efforts can best be accomplished by…

  1. High precision single-cluster Monte Carlo measurement of the critical exponents of the classical 3D Heisenberg model

    CERN Document Server

    Holm, C

    1992-01-01

    We report measurements of the critical exponents of the classical three-dimensional Heisenberg model on simple cubic lattices of size $L^3$ with $L$ = 12, 16, 20, 24, 32, 40, and 48. The data was obtained from a few long single-cluster Monte Carlo simulations near the phase transition. We compute high precision estimates of the critical coupling $K_c$, Binder's parameter $U^*$, and the critical exponents …

  2. Introduction to the Monte Carlo project and the approach to the validation of probabilistic models of dietary exposure to selected food chemicals.

    Science.gov (United States)

    Gibney, M J; van der Voet, H

    2003-10-01

    The Monte Carlo project was established to allow an international collaborative effort to define conceptual models for food chemical and nutrient exposure, to define and validate the software code to govern these models, to provide new or reconstructed databases for validation studies, and to use the new software code to complete validation modelling. Models were considered valid when they provided exposure estimates (e(a)) that could be shown not to underestimate the true exposure (e(b)), but at the same time are more realistic than the currently used conservative estimates (e(c)). Thus, validation required e(b) ≤ e(a) < e(c) for the model parameters considered. In most instances, it was possible to generate probabilistic models that fulfilled the validation criteria.

  3. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions; moreover, these processes can match a broader range of second-order properties. We illustrate the proposed Monte Carlo algorithm and compare features of translation processes and mixtures of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes

  4. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    Full Text Available The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.
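
    As a much-simplified illustration of Monte Carlo capital allocation (plain sampling rather than the paper's Sequential Monte Carlo machinery, and with invented loss distributions in place of the stylized balance sheet), the Euler allocation under expected shortfall reduces to averaging each line's losses over the tail scenarios of the total:

      import numpy as np

      rng = np.random.default_rng(12)
      N = 200_000

      # Correlated log-normal losses for three lines of business (all assumed).
      corr = np.array([[1.0, 0.3, 0.1],
                       [0.3, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
      z = rng.multivariate_normal(np.zeros(3), corr, size=N)
      losses = np.exp(np.array([2.0, 1.5, 1.0]) + np.array([0.5, 0.7, 0.4]) * z)
      total = losses.sum(axis=1)

      # Expected shortfall at 99% and Euler allocation: each line's average
      # loss on the scenarios where the total loss lies in the tail.
      q = np.quantile(total, 0.99)
      tail = total >= q
      alloc = losses[tail].mean(axis=0)
      print("ES99 total:", round(total[tail].mean(), 1))
      print("Euler allocations:", np.round(alloc, 1), "sum:", round(alloc.sum(), 1))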

  5. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    Science.gov (United States)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting the rainfall input requirements of urban hydrology (including the increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics well, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data

  6. Validation of the Monte Carlo model developed to assess the activity generated in control rods of a BWR

    Science.gov (United States)

    Ródenas, José; Abarca, Agustín; Gallardo, Sergio; Sollet, Eduardo

    2010-07-01

    Control rods are activated by neutron reactions inside the reactor. The activation occurs mainly in the stainless steel and its impurities. The dose produced by this activity is not important inside the reactor, but it has to be taken into account when the rod is withdrawn from it. The neutron activation has been modeled with the MCNP5 code, based on the Monte Carlo method. The number of reactions obtained with the code can be converted into activity. In this work, a detailed model of the control rod has been developed considering all its components: handle, tubes, gain, and central core. Furthermore, the rod has been divided into 5 zones in order to account for the different axial exposure to the neutron flux in the reactor. Results of the Monte Carlo simulation for the neutron activation constitute a gamma source in the control rod. With this source, applying the Monte Carlo method again, doses at a certain distance from the rod have been calculated. Comparison of the calculated doses with experimental measurements leads to the validation of the developed model.
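
    The record does not state how the reaction counts are converted; presumably the standard activation-decay relation applies. For a product nuclide with decay constant $\lambda$, created at a constant rate $R$ (the MCNP reaction-rate tally scaled by the source normalization) during an irradiation time $t_{\rm irr}$ and then cooled for a time $t_c$:

      $$ A(t_c) = R \left( 1 - e^{-\lambda t_{\rm irr}} \right) e^{-\lambda t_c} $$

    The saturation factor $(1 - e^{-\lambda t_{\rm irr}})$ accounts for decay of the product during irradiation.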

  7. The 3-Attractor Water Model: Monte-Carlo Simulations with a New, Effective 2-Body Potential (BMW)

    Directory of Open Access Journals (Sweden)

    Francis Muguet

    2003-02-01

    Full Text Available According to the precepts of the 3-attractor (3-A) water model, effective 2-body water potentials should feature as local minima the bifurcated and inverted water dimers in addition to the well-known linear water dimer global minimum. In order to test the 3-A model, a new pairwise effective intermolecular rigid water potential has been designed. The new potential is part of a new class of potentials called BMW (Bushuev-Muguet-Water), which is built by modifying existing empirical potentials. This version (BMW v. 0.1) has been designed by modifying the SPC/E empirical water potential. It is a preliminary version well suited for exploratory Monte-Carlo simulations. The shape of the potential energy surface (PES) around each local minimum has been approximated with the help of Gaussian functions. Classical Monte Carlo simulations have been carried out for liquid water in the NPT ensemble for a very wide range of state parameters up to the supercritical water regime. Thermodynamic properties are reported. The radial distribution functions (RDFs) have been computed and are compared with the RDFs obtained from neutron scattering experimental data. Our preliminary Monte-Carlo simulations show that the seemingly unconventional hypotheses of the 3-A model are most plausible. The simulation has also uncovered a totally new role for 2-fold H-bonds.
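
    For reference, the volume-move acceptance rule in standard NPT Metropolis Monte Carlo (the textbook relation, not a detail specific to the BMW potential) for a random walk in the volume $V \to V'$ with $N$ molecules at pressure $P$ and inverse temperature $\beta$ is

      $$ P_{\rm acc} = \min\left\{ 1,\; \exp\!\left[ -\beta \Delta U - \beta P (V' - V) + N \ln\frac{V'}{V} \right] \right\} $$

    (sampling in $\ln V$ instead replaces $N$ by $N+1$).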

  8. Monte Carlo homogenized limit analysis model for randomly assembled blocks in-plane loaded

    Science.gov (United States)

    Milani, Gabriele; Lourenço, Paulo B.

    2010-11-01

    A simple rigid-plastic homogenization model for the limit analysis of masonry walls in-plane loaded and constituted by the random assemblage of blocks with variable dimensions is proposed. In the model, blocks constituting a masonry wall are supposed infinitely resistant with a Gaussian distribution of height and length, whereas joints are reduced to interfaces with frictional behavior and limited tensile and compressive strength. Block by block, a representative element of volume (REV) is considered, constituted by a central block interconnected with its neighbors by means of rigid-plastic interfaces. The model is characterized by a few material parameters, is numerically inexpensive and very stable. A sub-class of elementary deformation modes is a-priori chosen in the REV, mimicking typical failures due to joint cracking and crushing. Masonry strength domains are obtained by equating the power dissipated in the heterogeneous model with the power dissipated by a fictitious homogeneous macroscopic plate. Due to the inexpensiveness of the proposed approach, Monte Carlo simulations can be repeated on the REV in order to obtain a stochastic estimate of in-plane masonry strength at different orientations of the bed joints with respect to the external loads, accounting for the geometrical statistical variability of block dimensions. Two cases are discussed: the former consists of fully stochastic REV assemblages (obtained considering a random variability of both block height and length), and the latter assumes the presence of a horizontal alignment along the bed joints, i.e. allowing block height variability only row by row. The case of deterministic block height (quasi-periodic texture) can be obtained as a subclass of this latter case. Masonry homogenized failure surfaces are finally implemented in an upper bound FE limit analysis code for the analysis at collapse of entire walls in-plane loaded. Two cases of engineering practice, consisting of the prediction of the failure

  9. Monte-Carlo modelling of nano-material photocatalysis: bridging photocatalytic activity and microscopic charge kinetics.

    Science.gov (United States)

    Liu, Baoshun

    2016-04-28

    In photocatalysis, it is known that light intensity, organic concentration, and temperature affect the photocatalytic activity by changing the microscopic kinetics of holes and electrons. However, how the microscopic kinetics of holes and electrons relate to the photocatalytic activity has not been well understood. In the present research, we developed a Monte-Carlo random-walk model that involved all of the charge kinetics, including the photo-generation, the recombination, the transport, and the interfacial transfer of holes and electrons, to simulate the overall photocatalytic reaction, which we call a "computer experiment" of photocatalysis. Using this model, we simulated the effect of light intensity, temperature, and organic surface coverage on the photocatalytic activity and on the density of the free electrons that accumulate in the simulated system. Increasing the light intensity increases the electron density and mobility, which raises the probability for a hole/electron to find an electron/hole for recombination, and consequently leads to apparent kinetics in which the quantum yield (QY) decreases as the light intensity increases. Increasing the organic surface coverage raises the rate of hole interfacial transfer and thereby reduces the probability for an electron to recombine with a hole. Moreover, higher organic coverage on the nano-material surface also increases the accumulation of electrons, which enhances their mobility toward interfacial transfer and finally increases the photocatalytic activity. The simulation showed that temperature has a more complicated effect, as it simultaneously changes the activation of electrons, the interfacial transfer of holes, and the interfacial transfer of electrons. It was shown that the interfacial transfer of holes might play the main role at low temperature, with the temperature-dependence of QY
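
    A toy version of such a "computer experiment" (not the author's code): carriers are photo-generated at a rate set by the light intensity, random-walk on a ring of surface sites, recombine on encounter, and undergo interfacial transfer with fixed probabilities; the transfer count per step plays the role of the photocatalytic activity. All rates are illustrative, and occupancy is capped at one carrier of each type per site for simplicity.

```python
import random

def photocatalysis_mc(gen_prob, p_transfer_e, p_transfer_h, steps, size=50):
    """Toy random-walk charge kinetics on a 1D ring of surface sites:
    generation, hopping, encounter recombination, interfacial transfer."""
    electrons, holes = set(), set()
    reactions = 0
    for _ in range(steps):
        if random.random() < gen_prob:            # photo-generation of a pair
            s = random.randrange(size)
            electrons.add(s)
            holes.add(s)
        for carriers, p_tr in ((electrons, p_transfer_e), (holes, p_transfer_h)):
            for s in list(carriers):
                carriers.discard(s)
                if random.random() < p_tr:        # interfacial transfer event
                    reactions += 1
                else:                             # random-walk hop
                    carriers.add((s + random.choice((-1, 1))) % size)
        recombined = electrons & holes            # encounter -> recombination
        electrons -= recombined
        holes -= recombined
    return reactions / steps                      # proxy for activity

# activity rises with organic coverage (modeled here via p_transfer_h)
print(photocatalysis_mc(0.5, 0.01, 0.05, 20000))
```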

  10. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Science.gov (United States)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
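
    The core of the NSMC method can be sketched in a few lines: an SVD of the weighted Jacobian splits parameter space into solution and null spaces, and random parameter differences are projected onto the null space and added to the calibrated field, giving realizations that leave the fit to the observations (to first order) unchanged. The Jacobian, dimensions, and calibrated field below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def nsmc_realizations(J, p_cal, n_real, dim_solution):
    """Null-space Monte Carlo sketch: project random parameter differences
    onto the null space of the weighted Jacobian J (n_obs x n_par) and add
    them to the calibrated field p_cal; dim_solution is the assumed
    dimensionality of the solution space."""
    _, _, Vt = np.linalg.svd(J, full_matrices=True)
    V_null = Vt[dim_solution:].T               # basis of the (near-)null space
    reals = []
    for _ in range(n_real):
        dp = rng.standard_normal(J.shape[1])   # stochastic parameter difference
        reals.append(p_cal + V_null @ (V_null.T @ dp))
    return np.array(reals)

# toy usage: 20 observations, 50 parameters, solution space of dimension 15
J = rng.standard_normal((20, 50))
fields = nsmc_realizations(J, np.zeros(50), n_real=1000, dim_solution=15)
print(fields.shape)
```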

  11. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    Science.gov (United States)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-01

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been studied extensively in recent years due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
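
    Particle navigation in such parameterized geometry reduces to finding the nearest positive intersection of a ray with each bounding quadric surface. Below is a hedged sketch of that kernel, assuming a symmetric matrix form f(x) = x^T A x + b.x + c for each surface, which is one common convention, not necessarily the packages' internal representation.

```python
import numpy as np

def distance_to_quadric(origin, direction, A, b, c, eps=1e-9):
    """Smallest positive distance t with f(origin + t*direction) = 0, where
    f(x) = x^T A x + b.x + c defines one bounding surface (A symmetric).
    Returns np.inf if the ray never crosses the surface."""
    qa = direction @ A @ direction
    qb = 2.0 * (origin @ A @ direction) + b @ direction
    qc = origin @ A @ origin + b @ origin + c
    if abs(qa) < eps:                       # effectively linear along this ray
        if abs(qb) < eps:
            return np.inf
        t = -qc / qb
        return t if t > eps else np.inf
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return np.inf
    sq = np.sqrt(disc)
    roots = sorted(((-qb - sq) / (2.0 * qa), (-qb + sq) / (2.0 * qa)))
    return next((t for t in roots if t > eps), np.inf)

# example: unit sphere at the origin (A = I, b = 0, c = -1)
o = np.array([0.0, 0.0, -5.0])
d = np.array([0.0, 0.0, 1.0])
print(distance_to_quadric(o, d, np.eye(3), np.zeros(3), -1.0))   # -> 4.0
```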

  12. Comprehensive modeling of special nuclear materials detection using three-dimensional deterministic and Monte Carlo methods

    Science.gov (United States)

    Ghita, Gabriel M.

    Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Specific focus was placed on establishing the limits of He-3 spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (plutonium and uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from previous studies, the design of a He-3 spectroscopy system neutron detector, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest. This was accomplished by replacing ideal filters with real materials, and comparing reaction

  13. Modeling of multi-band drift in nanowires using a full band Monte Carlo simulation

    Science.gov (United States)

    Hathwar, Raghuraj; Saraniti, Marco; Goodnick, Stephen M.

    2016-07-01

    We report on a new numerical approach for multi-band drift within the context of full band Monte Carlo (FBMC) simulation and apply this to Si and InAs nanowires. The approach is based on the solution of the Krieger and Iafrate (KI) equations [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986)], which gives the probability of carriers undergoing interband transitions subject to an applied electric field. The KI equations are based on the solution of the time-dependent Schrödinger equation, and previous solutions of these equations have used Runge-Kutta (RK) methods to numerically solve the KI equations. This approach made the solution of the KI equations numerically expensive and was therefore only applied to a small part of the Brillouin zone (BZ). Here we discuss an alternate approach to the solution of the KI equations using the Magnus expansion (also known as "exponential perturbation theory"). This method is more accurate than the RK method as the solution lies on the exponential map and shares important qualitative properties with the exact solution such as the preservation of the unitary character of the time evolution operator. The solution of the KI equations is then incorporated through a modified FBMC free-flight drift routine and applied throughout the nanowire BZ. The importance of the multi-band drift model is then demonstrated for the case of Si and InAs nanowires by simulating a uniform field FBMC and analyzing the average carrier energies and carrier populations under high electric fields. Numerical simulations show that the average energy of the carriers under high electric field is significantly higher when multi-band drift is taken into consideration, due to the interband transitions allowing carriers to achieve higher energies.
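
    The numerical point can be illustrated with the simplest (first-order) Magnus truncation: the propagator over a time step is the exponential of the midpoint-sampled Hamiltonian, so it is exactly unitary, which a Runge-Kutta step of comparable order is not. The two-band Hamiltonian below is a toy stand-in, not the nanowire band structure.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0

def H(t):
    """Toy time-dependent two-band Hamiltonian (illustrative only)."""
    return np.array([[0.0, 0.3 * np.sin(t)],
                     [0.3 * np.sin(t), 1.0]])

def magnus_step(psi, t, dt):
    """First-order Magnus (exponential midpoint) step: the propagator
    U = exp(-i/hbar * H(t + dt/2) * dt) lies on the exponential map,
    so the norm of psi is preserved to machine precision."""
    U = expm(-1j * H(t + 0.5 * dt) * dt / hbar)
    return U @ psi

psi = np.array([1.0 + 0j, 0.0])
t, dt = 0.0, 0.01
for _ in range(1000):
    psi = magnus_step(psi, t, dt)
    t += dt
print(abs(np.vdot(psi, psi)))   # stays 1.0 up to rounding
```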

  15. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    Science.gov (United States)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level code is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak-scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the accessible system sizes to L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
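
    A hedged sketch of the two ingredients described above: the standard exchange (parallel tempering) acceptance test between adjacent replicas, and the adaptive mid-point insertion applied to every temperature gap whose measured swap rate falls below a threshold. The threshold and temperatures are illustrative.

```python
import math
import random

def try_swap(beta_i, beta_j, E_i, E_j):
    """Parallel-tempering swap between adjacent replicas, accepted with
    probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    return random.random() < math.exp(min(0.0, (beta_i - beta_j) * (E_i - E_j)))

def refine_temperatures(temps, swap_rates, threshold=0.2):
    """Adaptive step: insert a mid-point temperature in every gap whose
    measured swap acceptance rate falls below the threshold."""
    new_temps = [temps[0]]
    for T_lo, T_hi, rate in zip(temps, temps[1:], swap_rates):
        if rate < threshold:
            new_temps.append(0.5 * (T_lo + T_hi))
        new_temps.append(T_hi)
    return new_temps

print(try_swap(1.0, 0.9, -105.0, -100.0))
print(refine_temperatures([1.0, 1.5, 2.2, 3.0], [0.45, 0.12, 0.38]))
# -> [1.0, 1.5, 1.85, 2.2, 3.0]: one insertion in the bottleneck gap
```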

  16. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
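
    The flavor of the method can be conveyed with a two-level sketch: N outer draws of the model inputs, n simulated patients per draw, and the one-way ANOVA identity that subtracts the within-run (patient-level) noise from the variance of the run means to isolate the variance attributable to input uncertainty. The toy "model" below is a stand-in for a real patient-level simulation, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def patient_level_model(theta, n_patients):
    """Stand-in for a micro-simulation: per-patient net benefit whose
    mean depends on the sampled input theta."""
    return rng.normal(loc=theta, scale=5.0, size=n_patients)

N, n = 200, 100                        # outer input draws, patients per draw
thetas = rng.normal(10.0, 2.0, size=N)
runs = np.array([patient_level_model(t, n) for t in thetas])

run_means = runs.mean(axis=1)
within_var = runs.var(axis=1, ddof=1).mean()           # patient-level noise
# One-way ANOVA identity: Var(run means) = sigma2_between + sigma2_within / n,
# so the input-uncertainty component is estimated by subtraction.
between_var = run_means.var(ddof=1) - within_var / n
print(run_means.mean(), max(between_var, 0.0))
```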

  17. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-15

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion depend sharply on the boron content through a percolation mechanism. For some glass contents and some leaching conditions, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small-angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and reprecipitation of silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as boron percolation, the local reactivity of weakly soluble elements, and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. It has then been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms
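
    The percolation mechanism invoked above can be illustrated independently of the full dissolution model: occupy a cubic lattice with boron at the glass's boron fraction and test whether the boron sub-network spans the sample. The sizes and fractions below are illustrative (the site-percolation threshold of a simple cubic lattice is near 0.31).

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(7)

def boron_spans(x_boron, L=32):
    """True if the boron sub-network percolates between two opposite faces
    of an L^3 lattice (6-connected clusters via scipy.ndimage.label)."""
    lattice = rng.random((L, L, L)) < x_boron
    labels, _ = label(lattice)
    top = np.unique(labels[0])
    bottom = np.unique(labels[-1])
    return bool(set(top[top > 0]) & set(bottom[bottom > 0]))

for x in (0.15, 0.25, 0.35):
    frac = np.mean([boron_spans(x) for _ in range(20)])
    print(f"x_B = {x:.2f}: spanning fraction = {frac:.2f}")
```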

  18. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion depend sharply on the boron content through a percolation mechanism. For some glass contents and some leaching conditions, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small-angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and reprecipitation of silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as boron percolation, the local reactivity of weakly soluble elements, and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. It has then been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  19. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    Science.gov (United States)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early Earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single-stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data, (2) the certainty with which regression coefficients are known, and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and the range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affect model outcomes. We consider the most
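
    The algorithm described (implemented in MATLAB by the authors) can be paraphrased as follows: draw regression coefficients from their estimated covariance, evaluate the partitioning prediction at sampled (P, fO2) points along a liquidus, and count the samples that reproduce the observed core/mantle ratio within its uncertainty. Everything below, including the regression form and all numbers, is a schematic stand-in rather than the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Schematic regression (hypothetical form and numbers):
# log10 D = a + b/T + c*P/T + d*log10(fO2)
beta_hat = np.array([1.2, -3500.0, 40.0, -0.25])   # fitted coefficients
cov_beta = np.diag([0.05, 200.0, 5.0, 0.02]) ** 2  # coefficient uncertainty

def log_D(beta, P, log_fO2, T):
    a, b, c, d = beta
    return a + b / T + c * P / T + d * log_fO2

target, tol = 1.5, 0.2      # hypothetical observed core/mantle ratio, tolerance
hits, n = 0, 100_000
for _ in range(n):
    beta = rng.multivariate_normal(beta_hat, cov_beta)
    P = rng.uniform(20.0, 60.0)            # pressure, GPa
    log_fO2 = rng.uniform(-2.5, -1.0)      # oxygen fugacity (relative units)
    T = 1800.0 + 30.0 * P                  # toy liquidus, K
    if abs(log_D(beta, P, log_fO2, T) - target) < tol:
        hits += 1
print(f"fraction of permissible single-stage solutions: {hits / n:.3%}")
```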

  20. The Benefits of Using Semi-continuous and Continuous Models to Analyze Binge Eating Data: A Monte Carlo Investigation

    Science.gov (United States)

    Grotzinger, Andrew; Hildebrandt, Tom; Yu, Jessica

    2016-01-01

    Objective: Change in binge eating is typically a primary outcome for interventions targeting individuals with eating pathology. A range of statistical models exist to handle these types of frequency distributions, but little empirical evidence exists to guide the appropriate choice of statistical model. Method: Monte Carlo simulations were used to investigate the utility of semi-continuous models relative to continuous models in various situations relevant to binge eating treatment studies. Results: Semi-continuous models yielded more accurate estimates of the population, while continuous models were higher powered when higher levels of missing data were present. Discussion: The present findings generally support the use of semi-continuous models applied to binge eating data, with total sample sizes of roughly 200 being adequately powered to detect moderate treatment effects. However, models with a significant amount of missing data yielded more favorable power estimates for continuous models. PMID:25195793

  1. Co-combustion of peanut hull and coal blends: Artificial neural networks modeling, particle swarm optimization and Monte Carlo simulation.

    Science.gov (United States)

    Buyukada, Musa

    2016-09-01

    Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization (PSO), and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was reached by the ANN61 multi-layer perceptron model, with an R(2) of 0.99994. A blend ratio of 90 to 10 (PH to coal, wt%), a temperature of 305°C, and a heating rate of 49°C min(-1) were determined as the optimum input values, and a yield of 87.4% was obtained under the PSO-optimized conditions. The validation experiments resulted in yields of 87.5%±0.2 after three replications. Monte Carlo simulations were used for the probabilistic assessment of the stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.

  2. Monte Carlo study of phase transitions and magnetic properties of LaMnO3: Heisenberg model

    Science.gov (United States)

    Naji, S.; Benyoussef, A.; El Kenz, A.; Ez-Zahraouy, H.; Loulidi, M.

    2012-08-01

    On the basis of ab initio calculations (FPLO) and Monte Carlo simulations (MCS), the phase diagrams and magnetic properties of the bulk perovskite LaMnO3 have been studied using the Heisenberg model. It is shown, using ab initio calculations in the scalar-relativistic scheme, that the stable phase is the antiferromagnetic A-type, which corresponds to ferromagnetic order of the manganese ions in the basal (a,b) planes and antiferromagnetic order of these ions between the planes along the c axis. Using the full four-component relativistic scheme to calculate the magnetic anisotropy energy and constants, it is found that the favorable magnetic direction is the (010) b axis. The transition temperatures and the critical exponents are obtained in the framework of Monte Carlo simulations. The magnetic anisotropy and the exchange couplings of the Heisenberg model are deduced from the ab initio calculations. They lead, via Monte Carlo simulations, to quantitative agreement with the experimental transition temperatures.

  3. Monte Carlo simulations of phase transitions and lattice dynamics in an atom-phonon model for spin transition compounds

    Energy Technology Data Exchange (ETDEWEB)

    Apetrei, Alin Marian, E-mail: alin.apetrei@uaic.r [Department of Physics, Alexandru Ioan Cuza University of Iasi, 11 Blvd. Carol I, Iasi 700506 (Romania); Enachescu, Cristian; Tanasa, Radu; Stoleriu, Laurentiu; Stancu, Alexandru [Department of Physics, Alexandru Ioan Cuza University of Iasi, 11 Blvd. Carol I, Iasi 700506 (Romania)

    2010-09-01

    We apply the Monte Carlo Metropolis method to a known atom-phonon coupling model for 1D spin transition compounds (STC). These inorganic molecular systems can switch, under thermal or optical excitation, between two states in thermodynamic competition, i.e. high spin (HS) and low spin (LS). In the model, the ST units (molecules) are linked by springs whose elastic constants depend on the spin states of the neighboring atoms and can take only three possible values. Several previous analytical papers considered a unique average value for the elastic constants (mean-field approximation) and obtained phase diagrams and thermal hysteresis loops. Recently, Monte Carlo simulation papers, taking into account all three values of the elastic constants, obtained thermal hysteresis loops but no phase diagrams. Employing Monte Carlo simulation, in this work we obtain the phase diagram at T=0 K, which is fully consistent with earlier analytical work; however, it is more complex. The main difference is the existence of two supplementary critical curves that mark a hysteresis zone in the phase diagram. This explains the pressure hysteresis curves observed experimentally at low temperature and predicts a 'chemical' hysteresis in STC at very low temperatures. The formation and the dynamics of the domains are also discussed.

  4. Effortful echolalia.

    Science.gov (United States)

    Hadano, K; Nakamura, H; Hamanaka, T

    1998-02-01

    We report three cases of effortful echolalia in patients with cerebral infarction. The clinical picture of the speech disturbance is associated with Type 1 Transcortical Motor Aphasia (TCMA; Goldstein, 1915). The patients always spoke nonfluently, with loss of speech initiative, dysarthria, dysprosody, agrammatism, and increased effort, and were unable to repeat sentences longer than those containing four or six words. In conversation, they first repeated a few words spoken to them and then produced self-initiated speech. The initial repetition as well as the subsequent self-initiated speech, which were realized equally laboriously, can be regarded as mitigated echolalia (Pick, 1924). The patients were always aware of their own echolalia and tried to control it, without effect. These cases demonstrate that neither the ability to repeat nor fluent speech is always necessary for echolalia. The possibility that a lesion in the left medial frontal lobe, including the supplementary motor area, plays an important role in effortful echolalia is discussed.

  5. Health Promotion Efforts as Predictors of Physical Activity in Schools: An Application of the Diffusion of Innovations Model

    Science.gov (United States)

    Glowacki, Elizabeth M.; Centeio, Erin E.; Van Dongen, Daniel J.; Carson, Russell L.; Castelli, Darla M.

    2016-01-01

    Background: Implementing a comprehensive school physical activity program (CSPAP) effectively addresses public health issues by providing opportunities for physical activity (PA). Grounded in the Diffusion of Innovations model, the purpose of this study was to identify how health promotion efforts facilitate opportunities for PA. Methods: Physical…

  6. Modelling detectability of kiore (Rattus exulans) on Aguiguan, Mariana Islands, to inform possible eradication and monitoring efforts

    Science.gov (United States)

    Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.

    2011-01-01

    Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall, and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.

  7. Monte-Carlo simulations of methane/carbon dioxide and ethane/carbon dioxide mixture adsorption in zeolites and comparison with matrix treatment of statistical mechanical lattice model

    Science.gov (United States)

    Dunne, Lawrence J.; Furgani, Akrem; Jalili, Sayed; Manos, George

    2009-05-01

    Adsorption isotherms have been computed by Monte-Carlo simulation for methane/carbon dioxide and ethane/carbon dioxide mixtures adsorbed in the zeolite silicalite. These isotherms show remarkable differences, with the ethane/carbon dioxide mixtures displaying a strong adsorption preference reversal at high coverage. To explain the differences in the Monte-Carlo mixture isotherms, an exact matrix calculation of the statistical mechanics of a lattice model of mixture adsorption in zeolites has been made. The lattice model reproduces the essential features of the Monte-Carlo isotherms, enabling us to understand the differing adsorption behaviour of methane/carbon dioxide and ethane/carbon dioxide mixtures in zeolites.

  8. Lattice gas models and kinetic Monte Carlo simulations of epitaxial growth

    NARCIS (Netherlands)

    Biehl, Michael; Voigt, A

    2005-01-01

    A brief introduction is given to Kinetic Monte Carlo (KMC) simulations of epitaxial crystal growth. Molecular Beam Epitaxy (MBE) serves as the prototype example for growth far from equilibrium. However, many of the aspects discussed here would carry over to other techniques as well. A variety of app

  9. Monte Carlo Estimation of the Conditional Rasch Model. Research Report 94-09.

    Science.gov (United States)

    Akkermans, Wies M. W.

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning constants have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Markov chain Monte Carlo. As an example, the method is applied to the conditional estimation of both item and…

  11. Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

    Science.gov (United States)

    Sohn, Ilyoup

    approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. The QSS model is presented to predict the electronic state populations of radiating gas species taking

  12. Quasi-monte carlo simulation and variance reduction techniques substantially reduce computational requirements of patient-level simulation models: An application to a discrete event simulation model

    NARCIS (Netherlands)

    Treur, M.; Postma, M.

    2014-01-01

    Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, the computational requirements of reaching convergence are a notorious barrier. The objective was to assess the impact of using quasi-mont

  13. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows one to obtain answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 – γ-ray beams (1-10 MeV), HIMAC and ISIS-800 – high-energy neutron (20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.

  14. Mean-field and Monte Carlo studies of the magnetization-reversal transition in the Ising model

    Energy Technology Data Exchange (ETDEWEB)

    Misra, Arkajyoti [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India)]. E-mail: arko@cmp.saha.ernet.in; Chakrabarti, Bikas K. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India)]. E-mail: bikas@cmp.saha.ernet.in

    2000-06-16

    Detailed mean-field and Monte Carlo studies of the dynamic magnetization-reversal transition in the Ising model in its ordered phase under a competing external magnetic field of finite duration have been presented here. An approximate analytical treatment of the mean-field equations of motion shows the existence of diverging length and time scales across this dynamic transition phase boundary. These are also supported by numerical solutions of the complete mean-field equations of motion and the Monte Carlo study of the system evolving under Glauber dynamics in both two and three dimensions. Classical nucleation theory predicts different mechanisms of domain growth in two regimes marked by the strength of the external field, and the nature of the Monte Carlo phase boundary can be comprehended satisfactorily using the theory. The order of the transition changes from a continuous to a discontinuous one as one crosses over from coalescence regime (stronger field) to a nucleation regime (weaker field). Finite-size scaling theory can be applied in the coalescence regime, where the best-fit estimates of the critical exponents are obtained for two and three dimensions. (author)

  15. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes.

    Science.gov (United States)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-21

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole- and partial-body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies, and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Extensions have also been added to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.

  16. Monte Carlo modeling and optimization of contrast-enhanced radiotherapy of brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lopez, C E; Garnica-Garza, H M, E-mail: hgarnica@cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, Via del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL CP 66600 (Mexico)

    2011-07-07

    Contrast-enhanced radiotherapy involves the use of a kilovoltage x-ray beam to impart a tumoricidal dose to a target into which a radiological contrast agent has previously been loaded, in order to increase the x-ray absorption efficiency. In this treatment modality the selection of the proper x-ray spectrum is important, since at the energy range of interest the penetration ability of the x-ray beam is limited. For the treatment of brain tumors, the situation is further complicated by the presence of the skull, which also absorbs kilovoltage x-rays very efficiently. In this work, using Monte Carlo simulation, a realistic patient model, and the Cimmino algorithm, several irradiation techniques and x-ray spectra are evaluated for two possible clinical scenarios with respect to the location of the target: a tumor located at the center of the head and one at a position close to the surface of the head. It is shown that x-ray spectra such as those produced by a conventional x-ray generator are capable of producing absorbed dose distributions with excellent uniformity in the target, as well as a dose differential of at least 20% of the prescribed tumor dose between the target and the surrounding brain tissue, when the tumor is located at the center of the head. However, for tumors with a lateral displacement from the center and close to the skull, while the absorbed dose distribution in the target is also quite uniform and the dose to the surrounding brain tissue is within an acceptable range, hot spots arise in the skull that are above what is considered a safe limit. A comparison with previously reported results using mono-energetic x-ray beams, such as those produced by a radiation synchrotron, is also presented, and it is shown that the absorbed dose distributions rendered by this type of beam are very similar to those obtained with a conventional x-ray beam.

  17. McSCIA: application of the Equivalence Theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    Directory of Open Access Journals (Sweden)

    F. Spada

    2006-02-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAMACHY) is presented. The backward technique is used to efficiently simulate narrow-field-of-view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres. The latter geometry is essential for the interpretation of limb satellite measurements, as performed by SCIAMACHY on board ESA's Envisat. The model can simulate UV-vis-NIR radiation.

    First the ray-tracing algorithm is presented in detail, and then successfully validated against literature references, both in plane-parallel and in spherical geometry. A simple 1-D model is used to explain two different ways of treating absorption. One method uses the single-scattering albedo, while the other uses the equivalence theorem. The equivalence theorem is based on a separation of absorption and scattering. It is shown that both methods give, in a statistical sense, identical results for a wide variety of scenarios. Both absorption methods are included in McSCIA, and it is shown that for a 3-D case both formulations also give identical results. McSCIA limb profiles for atmospheres with and without absorption compare well with those of the state-of-the-art Monte Carlo radiative transfer model MCC++.

    A simplification of the photon statistics may lead to very fast calculations of absorption features in the atmosphere. However, these simplifications potentially introduce biases in the results. McSCIA does not use simplifications and is therefore a relatively slow implementation of the equivalence theorem. For the first time, however, the validity of the equivalence theorem is demonstrated in a spherical 3-D radiative transfer model.

  18. Modeling indoor air pollution from cookstove emissions in developing countries using a Monte Carlo single-box model

    Science.gov (United States)

    Johnson, Michael; Lam, Nick; Brant, Simone; Gray, Christen; Pennise, David

    2011-06-01

    A simple Monte Carlo single-box model is presented as a first approach toward examining the relationship between emissions of pollutants from fuel/cookstove combinations and the resulting indoor air pollution (IAP) concentrations. The model combines stove emission rates with expected distributions of kitchen volumes and air exchange rates in the developing-country context to produce a distribution of IAP concentration estimates. The resulting distribution can be used to predict the likelihood that IAP concentrations will meet air quality guidelines, including those recommended by the World Health Organization (WHO) for fine particulate matter (PM2.5) and carbon monoxide (CO). The model can also be used in reverse, to estimate the probability that specific emission factors will result in meeting air quality guidelines. The modeled distributions of indoor PM2.5 concentration estimated that only 4% of homes using fuelwood in a rocket-style cookstove, even under idealized conditions, would meet the WHO Interim-1 annual PM2.5 guideline of 35 μg m-3. According to the model, the PM2.5 emissions that would be required for even 50% of homes to meet this guideline (0.055 g MJ-delivered-1) are lower than those of an advanced gasifier fan stove, while emission levels similar to liquefied petroleum gas (0.018 g MJ-delivered-1) would be required for 90% of homes to meet the guideline. Although the predicted distribution of PM concentrations (median = 1320 μg m-3) from inputs for traditional wood stoves was within the range of reported values for India (108-3522 μg m-3), the model likely overestimates IAP concentrations. Direct comparison with simultaneously measured emission rates and indoor concentrations of CO indicated that the model overestimated IAP concentrations resulting from charcoal and kerosene emissions in Kenyan kitchens by 3 and 8 times, respectively, although it underestimated the CO concentrations resulting from wood-burning cookstoves in India by
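
    The single-box logic is compact: at steady state the indoor concentration is the emission rate divided by the product of kitchen volume and air exchange rate, C = G / (V · AER), and the Monte Carlo step simply propagates assumed distributions of V and AER through this formula. The lognormal parameters and the source strength below are placeholders, not the paper's fitted distributions.

```python
import numpy as np

rng = np.random.default_rng(11)

def iap_distribution(G_ug_per_h, n=100_000):
    """Monte Carlo single-box model: steady-state indoor concentration
    C = G / (V * AER) for emission rate G (ug/h), kitchen volume V (m^3),
    and air exchange rate AER (1/h). V and AER are illustrative lognormals."""
    V = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=n)    # m^3
    AER = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=n)  # 1/h
    return G_ug_per_h / (V * AER)

C = iap_distribution(G_ug_per_h=5.0e5)          # hypothetical PM2.5 source
print(f"median = {np.median(C):.0f} ug/m^3, "
      f"P(C < 35) = {(C < 35).mean():.2%}")
```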

  19. Millimeter wave satellite communication studies. Results of the 1981 propagation modeling effort

    Science.gov (United States)

    Stutzman, W. L.; Tsolakis, A.; Dishman, W. K.

    1982-12-01

    Theoretical modeling associated with rain effects on millimeter wave propagation is detailed. Three areas of work are discussed. A simple model for prediction of rain attenuation is developed and evaluated. A method for computing scattering from single rain drops is presented. A complete multiple scattering model is described which permits accurate calculation of the effects on dual polarized signals passing through rain.

  20. The minimum effort required to eradicate infections in models with backward bifurcation

    NARCIS (Netherlands)

    Safan, M.; Heesterbeek, J.A.P.; Dietz, K.

    2006-01-01

    We study an epidemiological model which assumes that the susceptibility after a primary infection is r times the susceptibility before a primary infection. For r = 0 (r = 1) this is the SIR (SIS) model. For r > 1 + (μ/α) this model shows backward bifurcations, where μ is the death rate and α is the
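
    One plausible formulation consistent with this description (a sketch, not necessarily the authors' exact system) applies the reduced-susceptibility factor r to recovered individuals:

```latex
\begin{aligned}
\dot{S} &= \mu N - \beta\,\frac{S I}{N} - \mu S,\\
\dot{I} &= \beta\,\frac{S I}{N} + r\,\beta\,\frac{R I}{N} - (\alpha + \mu)\,I,\\
\dot{R} &= \alpha I - r\,\beta\,\frac{R I}{N} - \mu R.
\end{aligned}
```

    Here r = 0 recovers the SIR model, r = 1 makes recovered individuals as susceptible as naive ones (SIS-like behavior), and the threshold quoted in the abstract, r > 1 + μ/α, is where reinfection of recovered individuals becomes strong enough to sustain a backward bifurcation, so that pushing the basic reproduction number just below one no longer guarantees eradication.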

  1. Uncertainty propagation in a stratospheric model. I - Development of a concise stratospheric model. II - Monte Carlo analysis of imprecisions due to reaction rates. [for ozone depletion prediction

    Science.gov (United States)

    Rundel, R. D.; Butler, D. M.; Stolarski, R. S.

    1978-01-01

    The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.

  2. Economic effort management in multispecies fisheries: the FcubEcon model

    DEFF Research Database (Denmark)

    Hoff, Ayoe; Frost, Hans; Ulrich, Clara

    2010-01-01

    Applying single-species assessment and quotas in multispecies fisheries can lead to overfishing or quota underutilization, because advice can be conflicting when different stocks are caught within the same fishery. During the past decade, increased focus on this issue has resulted in the development of […] optimal manner, in both effort-management and single-quota management settings.

  3. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    is by active truncated models. In these models only the very top part of the system is represented by a physical model, whereas the behavior of the part below the truncation is calculated by numerical models and accounted for in the physical model by active actuators applying the relevant forces to the physical model. Hence, in principle it is possible to achieve reliable experimental data for much larger water depths than the actual depth of the test basin would suggest. However, since the computations must be faster than real time, as the numerical simulations and the physical experiment run simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic

  4. LPM-Effect in Monte Carlo Models of Radiative Energy Loss

    CERN Document Server

    Zapp, Korinna C; Wiedemann, Urs Achim

    2009-01-01

    Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.

  5. LPM-Effect in Monte Carlo Models of Radiative Energy Loss

    Energy Technology Data Exchange (ETDEWEB)

    Zapp, Korinna C. [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt (Germany); Stachel, Johanna [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); Wiedemann, Urs Achim [Physics Department, Theory Unit, CERN, CH-1211 Geneve 23 (Switzerland)

    2009-11-01

    Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.

  6. Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics

    DEFF Research Database (Denmark)

    Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.

    2014-01-01

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and materials science. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with […] the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it to the study of an exemplary system…
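
    The core of a kinetic Monte Carlo step, common to codes of this kind, fits in a few lines: choose the next event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. The event list and rates below are placeholders.

```python
import math
import random

def kmc_step(events):
    """One kinetic Monte Carlo (Gillespie) step: pick an event with
    probability proportional to its rate, execute it, and return the
    exponentially distributed waiting time."""
    total = sum(rate for rate, _ in events)
    dt = -math.log(1.0 - random.random()) / total
    r = random.random() * total
    acc = 0.0
    for rate, action in events:
        acc += rate
        if r < acc:
            action()
            break
    return dt

# toy usage: two competing processes (diffusion hop vs. binding)
state = {"hops": 0, "bindings": 0}
events = [(1.0e3, lambda: state.update(hops=state["hops"] + 1)),
          (2.0e1, lambda: state.update(bindings=state["bindings"] + 1))]
t = 0.0
for _ in range(10_000):
    t += kmc_step(events)
print(t, state)
```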

  7. Magnetic properties of a ferrimagnetic core/shell nanocube Ising model: A Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Zaim, A. [LPMMS, Faculte des Sciences, B.P. 11201, Zitoune, Meknes (Morocco); LPSMS, FST Errachidia, B.P. 509, Boutalamine, Errachidia (Morocco); Kerouad, M. [LPMMS, Faculte des Sciences, B.P. 11201, Zitoune, Meknes (Morocco)], E-mail: kerouad@fs-umi.ac.ma; EL Amraoui, Y. [LPSMS, FST Errachidia, B.P. 509, Boutalamine, Errachidia (Morocco)

    2009-04-15

    Monte Carlo simulation has been used to study the magnetic properties and hysteresis loops of a single nanocube, consisting of a ferromagnetic core of spin-1/2 surrounded by a ferromagnetic shell of spin-1 with antiferromagnetic interface coupling. We find a number of characteristic phenomena. In particular, the effects of the shell coupling and the interface coupling on both the compensation temperature and the magnetization profiles are investigated. The effects of the interface coupling on the hysteresis loops are also examined.
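
    A hedged sketch of the simulation ingredient: a Metropolis sweep over a cubic core/shell lattice in which core sites carry spin ±1/2, shell sites carry spin in {−1, 0, +1}, and three couplings (ferromagnetic core, ferromagnetic shell, antiferromagnetic interface) enter the local energy. The lattice size, coupling values, and periodic boundaries are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 10                                             # illustrative lattice size
core = np.zeros((L, L, L), dtype=bool)
core[2:-2, 2:-2, 2:-2] = True                      # inner cube = spin-1/2 core

spin = np.where(core,
                rng.choice([-0.5, 0.5], (L, L, L)),
                rng.choice([-1.0, 0.0, 1.0], (L, L, L)))

J_core, J_shell, J_int = 1.0, 0.5, -0.8            # negative = AF interface

def coupling(a, b):
    """Bond constant: core-core, shell-shell, or core-shell (interface)."""
    if core[a] and core[b]:
        return J_core
    if not core[a] and not core[b]:
        return J_shell
    return J_int

def metropolis_sweep(beta):
    """One sweep of single-site Metropolis updates; H = -sum_ij J_ij s_i s_j."""
    for _ in range(L ** 3):
        a = tuple(rng.integers(0, L, 3))
        new = rng.choice([-0.5, 0.5]) if core[a] else rng.choice([-1.0, 0.0, 1.0])
        dE = 0.0
        for axis in range(3):
            for step in (-1, 1):
                b = list(a)
                b[axis] = (b[axis] + step) % L     # periodic toy boundaries
                dE += -coupling(a, tuple(b)) * (new - spin[a]) * spin[tuple(b)]
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spin[a] = new

for _ in range(50):
    metropolis_sweep(beta=2.0)
print(spin[core].mean(), spin[~core].mean())       # core vs. shell magnetization
```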

  8. Development and validation of a measurement-based source model for kilovoltage cone-beam CT Monte Carlo dosimetry simulations

    Science.gov (United States)

    McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan

    2013-01-01

    Purpose: The purpose of this study is to adapt an equivalent source model originally developed for conventional CT Monte Carlo dose quantification to the radiation oncology context and validate its application for evaluating concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator. Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors have adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes into account intrinsic filtration and an equivalent filter module characterizing the added bowtie filtration. An equivalent spectrum was generated for an 80, 100, and 125 kVp beam with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full- and half-bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT mode of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within a body (32 cm in diameter) and head (16 cm in diameter) CTDI phantom. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly

  9. McSCIA: application of the Equivalence Theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    Directory of Open Access Journals (Sweden)

    F. Spada

    2006-01-01

    Full Text Available A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field-of-view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres. The latter geometry is essential for the interpretation of limb satellite measurements, as performed by SCIAMACHY on board ESA's Envisat. The model can simulate UV-vis-NIR radiation. First the ray-tracing algorithm is presented in detail, and then successfully validated against literature references, both in plane-parallel and in spherical geometry. A simple 1-D model is used to explain two different ways of treating absorption. One method uses the single-scattering albedo, while the other uses the equivalence theorem. The equivalence theorem is based on a separation of absorption and scattering. It is shown that both methods give, in a statistical way, identical results for a wide variety of scenarios. Both absorption methods are included in McSCIA, and it is shown that for a 3-D case both formulations also give identical results. McSCIA limb profiles for atmospheres with and without absorption compare well with those of the state-of-the-art Monte Carlo radiative transfer model MCC++. A simplification of the photon statistics may lead to very fast calculations of absorption features in the atmosphere. However, these simplifications potentially introduce biases in the results. McSCIA does not use simplifications and is therefore a relatively slow implementation of the equivalence theorem.
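
    The two absorption treatments compared in the abstract can be reproduced in a few lines for a 1-D slab; this is a sketch with illustrative optical properties, not McSCIA code:

        # Two statistically equivalent absorption treatments in a 1-D slab
        # photon Monte Carlo:
        # (A) weight by the single-scattering albedo at each collision;
        # (B) equivalence theorem: sample scattering-only paths and apply
        #     absorption as exp(-sigma_a * path) at the end.
        import math
        import random

        sigma_s, sigma_a, L = 1.0, 0.3, 2.0   # scattering, absorption, slab depth

        def surviving_weight(method, n=50_000):
            total = 0.0
            for _ in range(n):
                z, mu, w, path = 0.0, 1.0, 1.0, 0.0
                while True:
                    sigma = sigma_s + sigma_a if method == "A" else sigma_s
                    step = -math.log(random.random()) / sigma
                    z += mu * step
                    path += step
                    if z < 0.0 or z > L:                  # photon escapes
                        overshoot = (z - L) / mu if z > L else z / mu
                        path -= overshoot                 # clip to the boundary
                        break
                    if method == "A":
                        w *= sigma_s / (sigma_s + sigma_a)
                    mu = random.uniform(-1.0, 1.0)        # isotropic re-scatter
                if method == "B":
                    w *= math.exp(-sigma_a * path)        # equivalence theorem
                total += w
            return total / n

        print(surviving_weight("A"), surviving_weight("B"))  # agree to MC noise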

  10. Short-Term Variability of X-rays from Accreting Neutron Star Vela X-1: II. Monte-Carlo Modeling

    CERN Document Server

    Odaka, Hirokazu; Tanaka, Yasuyuki T; Watanabe, Shin; Takahashi, Tadayuki; Makishima, Kazuo

    2013-01-01

    We develop a Monte Carlo Comptonization model for the X-ray spectrum of accretion-powered pulsars. Simple, spherical, thermal Comptonization models give harder spectra for higher optical depth, while the observational data from Vela X-1 show that the spectra are harder at higher luminosity. This suggests a physical interpretation where the optical depth of the accreting plasma increases with mass accretion rate. We develop a detailed Monte Carlo model of the accretion flow, including the effects of the strong magnetic field ($\sim 10^{12}$ G) both in geometrically constraining the flow into an accretion column and in reducing the cross section. We treat bulk-motion Comptonization of the infalling material as well as thermal Comptonization. These model spectra can match the observed broad-band Suzaku data from Vela X-1 over a wide range of mass accretion rates. The model can also explain the so-called "low state", in which the luminosity decreases by an order of magnitude. Here, thermal Comptonization sh...
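
    A sketch of the elementary event such a Comptonization Monte Carlo repeats many times, under strong simplifying assumptions noted in the comments:

        # Compton energy shift for a single scattering, with a crude
        # isotropic angle and without the magnetic cross-section reduction,
        # thermal electron motion, or bulk infall the authors include
        # (so only downscattering appears in this toy version).
        import random

        ME_C2 = 511.0                                # electron rest energy, keV

        def compton_scatter(E_keV):
            cos_theta = random.uniform(-1.0, 1.0)    # isotropic (illustrative)
            return E_keV / (1.0 + (E_keV / ME_C2) * (1.0 - cos_theta))

        E = 100.0                                    # seed photon, keV
        for _ in range(5):
            E = compton_scatter(E)
            print(round(E, 2))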

  11. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Full Text Available Abstract Background The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull-move neighbourhood, in two widely studied Hydrophobic Polar (HP) lattice models. Results We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull-move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion We demonstrate that REMC utilizing the pull move [...]
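
    The replica-exchange step itself is compact; a sketch of the standard swap criterion, with dummy energies standing in for HP contact energies and the pull-move kernel elided:

        # Replica-exchange (parallel tempering) swap step: the Metropolis
        # criterion on the (beta, energy) differences of adjacent replicas.
        import math
        import random

        def try_swap(replicas, betas, i):
            """Attempt to exchange the configurations of replicas i and i+1."""
            d_beta = betas[i + 1] - betas[i]
            d_E = replicas[i + 1]["E"] - replicas[i]["E"]
            if random.random() < min(1.0, math.exp(d_beta * d_E)):
                replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]

        betas = [0.2, 0.5, 1.0, 2.0]                  # inverse temperatures
        replicas = [{"E": random.uniform(-10.0, 0.0)} for _ in betas]
        for sweep in range(1000):
            # ... local (pull-move) updates of each replica would go here ...
            try_swap(replicas, betas, random.randrange(len(betas) - 1))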

  12. Markov Modeling of Component Fault Growth Over A Derived Domain of Feasible Output Control Effort Modifications

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of...

  13. Modeling the impact of restoration efforts on phosphorus loading and transport through Everglades National Park, FL, USA.

    Science.gov (United States)

    Long, Stephanie A; Tachiev, Georgio I; Fennema, Robert; Cook, Amy M; Sukop, Michael C; Miralles-Wilhelm, Fernando

    2015-07-01

    Ecosystems of the Florida Everglades are highly sensitive to phosphorus loading. Future restoration efforts, which focus on restoring Everglades water flows, may pose a threat to the health of these ecosystems. To determine the fate and transport of total phosphorus and evaluate proposed Everglades restoration, a water quality model has been developed using the hydrodynamic results from the M3ENP (Mike Marsh Model of Everglades National Park), a physically based hydrological numerical model which uses MIKE SHE/MIKE 11 software. Using an advection-dispersion formulation with reactive transport, model parameters were optimized and phosphorus loading in the overland water column was modeled with good accuracy (60%). The calibrated M3ENP-AD model was then modified to include future bridge construction and canal water-level changes, which have been shown to increase flows into ENP. The bridge additions increased total dissolved phosphorus (TP) load downstream in Shark Slough and decreased TP load downstream in Taylor Slough. However, there was a general decrease in TP concentration and TP mass per area over the entire model domain. The M3ENP-AD model has determined the mechanisms for TP transport and quantified the impacts of ENP restoration efforts on the spatial-temporal distribution of phosphorus transport. This tool can be used to guide future Everglades restoration decisions.
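
    The advection-dispersion-with-reaction formulation can be illustrated by a minimal 1-D finite-difference scheme; all values below are placeholders, not calibrated M3ENP-AD inputs:

        # Explicit 1-D advection-dispersion with first-order reactive decay.
        import numpy as np

        nx, dx, dt = 200, 10.0, 50.0       # cells, cell size (m), time step (s)
        u, D, k = 0.01, 0.5, 1.0e-6        # velocity (m/s), dispersion (m^2/s),
                                           # decay rate (1/s); all illustrative
        c = np.zeros(nx)
        c[0] = 1.0                         # constant-concentration inlet

        for step in range(5000):
            adv = -u * (c[1:-1] - c[:-2]) / dx                    # upwind advection
            disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2   # dispersion
            c[1:-1] += dt * (adv + disp - k * c[1:-1])            # react and update
            c[0], c[-1] = 1.0, c[-2]                              # boundaries

        print(c[::40])                     # concentration profile samples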

  14. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility of the Joint Institute for Power and Nuclear Research of Belarus, with the long-term objective of advancing the utilization of accelerator-driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator-driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility, and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses were extended with the MCB code, an extension of MCNP with burnup capability, because of its additional features for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.
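
    As a worked illustration of the physics such source-driven assemblies exploit, the neutron multiplication of a subcritical system is a geometric series in the source multiplication factor k_s; a sketch with illustrative values:

        # Each external source neutron initiates a chain contributing
        # 1 + k_s + k_s^2 + ... = 1/(1 - k_s) fission generations.
        def multiplication(k_s):
            assert k_s < 1.0, "formula applies to subcritical systems only"
            return 1.0 / (1.0 - k_s)

        for k in (0.90, 0.95, 0.98):          # illustrative k_s values
            print(f"k_s = {k:.2f}  ->  M = {multiplication(k):.1f}")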

  15. The European Integrated Tokamak Modelling (ITM) effort: achievements and first physics results

    NARCIS (Netherlands)

    G.L. Falchetto; Coster, D.; Coelho, R.; Scott, B. D.; Figini, L.; Kalupin, D.; Nardon, E.; Nowak, S.; L.L. Alves; Artaud, J. F.; Basiuk, V.; João P.S. Bizarro; C. Boulbe; Dinklage, A.; Farina, D.; B. Faugeras; Ferreira, J.; Figueiredo, A.; Huynh, P.; Imbeaux, F.; Ivanova-Stanik, I.; Jonsson, T.; H.-J. Klingshirn; Konz, C.; Kus, A.; Marushchenko, N. B.; Pereverzev, G.; M. Owsiak; Poli, E.; Peysson, Y.; R. Reimer; Signoret, J.; Sauter, O.; Stankiewicz, R.; Strand, P.; Voitsekhovitch, I.; Westerhof, E.; T. Zok; Zwingmann, W.; ITM-TF contributors; ASDEX Upgrade team; JET-EFDA Contributors

    2014-01-01

    A selection of achievements and first physics results of the European Integrated Tokamak Modelling Task Force (EFDA ITM-TF) simulation framework are presented; the framework aims to provide a standardized platform and an integrated modelling suite of validated numerical codes for the simulation and [...]

  16. Evaluation of Thin Plate Hydrodynamic Stability through a Combined Numerical Modeling and Experimental Effort

    Energy Technology Data Exchange (ETDEWEB)

    Tentner, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Bojanowski, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Feldman, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Wilson, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Solbrekken, G [Univ. of Missouri, Columbia, MO (United States); Jesse, C. [Univ. of Missouri, Columbia, MO (United States); Kennedy, J. [Univ. of Missouri, Columbia, MO (United States); Rivers, J. [Univ. of Missouri, Columbia, MO (United States); Schnieders, G. [Univ. of Missouri, Columbia, MO (United States)

    2017-05-01

    An experimental and computational effort was undertaken to evaluate the capability of fluid-structure interaction (FSI) simulation tools to describe the hydrodynamically induced deflection of a Missouri University Research Reactor (MURR) fuel element plate redesigned for conversion to low-enriched uranium (LEU) fuel. Experiments involving both flat plates and curved plates were conducted in a water flow test loop located at the University of Missouri (MU), at conditions and geometries that can be related to the MURR LEU fuel element. A wider channel gap on one side of the test plate and a narrower one on the other represent the differences that could be encountered in a MURR element due to allowed fabrication variability. The difference in the channel gaps leads to a pressure differential across the plate, and hence to plate deflection. The plate deflection induced by this pressure difference was measured at specified locations using a laser measurement technique. High-fidelity 3-D simulations of the experiments were performed at MU using the computational fluid dynamics code STAR-CCM+ coupled with the structural mechanics code ABAQUS. Independent simulations of the experiments were performed at Argonne National Laboratory (ANL) using the STAR-CCM+ code and its built-in structural mechanics solver. The simulation results obtained at MU and ANL were compared with the corresponding measured plate deflections.
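
    A crude back-of-the-envelope version of the driving mechanism, assuming laminar channel flow with a shared axial pressure gradient; the real analysis is the coupled STAR-CCM+/ABAQUS FSI model described above:

        # For laminar flow between parallel plates the mean velocity scales
        # as gap^2 at a common pressure gradient, so unequal gaps give
        # unequal velocities and, via Bernoulli, a net load on the plate.
        # All numbers are illustrative placeholders.
        rho = 998.0                  # water density, kg/m^3
        mu = 1.0e-3                  # dynamic viscosity, Pa*s
        G = 5.0e3                    # shared axial pressure gradient, Pa/m
        g1, g2 = 2.0e-3, 2.5e-3      # unequal channel gaps, m

        v1 = g1 ** 2 * G / (12.0 * mu)         # mean channel velocities
        v2 = g2 ** 2 * G / (12.0 * mu)
        dp = 0.5 * rho * (v2 ** 2 - v1 ** 2)   # Bernoulli pressure differential
        print(f"v1={v1:.2f} m/s  v2={v2:.2f} m/s  dp={dp:.0f} Pa")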

  17. Optimisation of the Population Monte Carlo algorithm: Application to constraining isocurvature models with cosmic microwave background data

    CERN Document Server

    Moodley, Darell

    2015-01-01

    We optimise the parameters of the Population Monte Carlo algorithm using numerical simulations. The optimisation is based on an efficiency statistic related to the number of samples evaluated prior to convergence, and is applied to a D-dimensional Gaussian distribution to derive optimal scaling laws for the algorithm parameters. More complex distributions such as the banana and bimodal distributions are also studied. We apply these results to a cosmological parameter estimation problem that uses CMB anisotropy data from the WMAP nine-year release to constrain a six parameter adiabatic model and a fifteen parameter admixture model, consisting of correlated adiabatic and isocurvature perturbations. In the case of the adiabatic model and the admixture model we find respective degradation factors of three and twenty, relative to the optimal Gaussian case, due to degeneracies in the underlying parameter space. The WMAP nine-year data constrain the admixture model to have an isocurvature fraction of at most $36.3 \\...
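
    One adaptation cycle of Population Monte Carlo, importance sampling with the proposal refitted from the weighted sample, can be sketched as follows; a 1-D Gaussian target stands in for the CMB posterior and all settings are illustrative:

        # Adaptive importance sampling: sample the Gaussian proposal,
        # weight against the target, then refit the proposal moments.
        import numpy as np

        rng = np.random.default_rng(0)
        log_target = lambda x: -0.5 * (x - 3.0) ** 2       # unnormalized target

        mu, sigma, N = 0.0, 5.0, 5000                      # initial proposal
        for it in range(10):
            x = rng.normal(mu, sigma, N)                   # sample the proposal
            log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
            w = np.exp(log_target(x) - log_q)
            w /= w.sum()                                   # importance weights
            mu = np.sum(w * x)                             # refit the proposal
            sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
            ess = 1.0 / np.sum(w ** 2)                     # effective sample size
            print(f"iter {it}: mu={mu:.3f} sigma={sigma:.3f} ESS={ess:.0f}")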

  18. Monte Carlo modelling of photodynamic therapy treatments comparing clustered three dimensional tumour structures with homogeneous tissue structures

    Science.gov (United States)

    Campbell, C. L.; Wood, K.; Brown, C. T. A.; Moseley, H.

    2016-07-01

    We explore the effects of three-dimensional (3D) tumour structures on depth-dependent fluence rates, photodynamic doses (PDD) and fluorescence images through Monte Carlo radiation transfer modelling of photodynamic therapy. The aim of this work was to compare the commonly used uniform tumour densities with non-uniform densities to determine the importance of including 3D models in theoretical investigations. It was found that fractal 3D models resulted in deeper penetration on average of therapeutic radiation and higher PDD. An increase in effective treatment depth of 1 mm was observed for one of the investigated fractal structures, when compared with the equivalent smooth model. Wide-field fluorescence images were simulated, revealing information about the relationship between tumour structure and the appearance of the fluorescence intensity. Our models indicate that the 3D tumour structure strongly affects the spatial distribution of therapeutic light, the PDD and the wide-field appearance of surface fluorescence images.

  19. Monte Carlo diffusion hybrid model for photon migration in a two-layer turbid medium in the frequency domain.

    Science.gov (United States)

    Alexandrakis, G; Farrell, T J; Patterson, M S

    2000-05-01

    We propose a hybrid Monte Carlo (MC) diffusion model for calculating the spatially resolved reflectance amplitude and phase delay resulting from an intensity-modulated pencil beam vertically incident on a two-layer turbid medium. The model combines the accuracy of MC at radial distances near the incident beam with the computational efficiency afforded by a diffusion calculation at further distances. This results in a single forward calculation several hundred times faster than pure MC, depending primarily on model parameters. Model predictions are compared with MC data for two cases that span the extremes of physiologically relevant optical properties: skin overlying fat and skin overlying muscle, both in the presence of an exogenous absorber. It is shown that good agreement can be achieved for radial distances from 0.5 to 20 mm in both cases. However, in the skin-on-muscle case the choice of model parameters and the definition of the diffusion coefficient can lead to some interesting discrepancies.
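
    The stitching idea, tabulated MC values near the source and a diffusion expression farther out, can be sketched as below; the table entries are placeholders and the diffusion formula is a simplified single-source semi-infinite steady-state form, not the authors' two-layer frequency-domain model:

        # Hybrid reflectance: look up Monte Carlo values inside a crossover
        # radius, switch to a diffusion-theory expression beyond it.
        import math

        mu_a, mu_s_p = 0.01, 1.0                   # 1/mm, illustrative
        D = 1.0 / (3.0 * (mu_a + mu_s_p))          # diffusion coefficient
        mu_eff = math.sqrt(mu_a / D)
        z0 = 1.0 / (mu_a + mu_s_p)                 # effective source depth

        def diffusion_R(rho):
            r = math.sqrt(rho ** 2 + z0 ** 2)
            return (z0 * (mu_eff + 1.0 / r) * math.exp(-mu_eff * r)
                    / (4.0 * math.pi * r ** 2))

        mc_table = {0.5: 2.1e-2, 1.0: 7.9e-3, 1.5: 4.0e-3}   # placeholder MC values
        crossover = 1.5                                      # switch radius, mm

        def reflectance(rho):
            if rho <= crossover:                             # MC regime
                return mc_table[min(mc_table, key=lambda k: abs(k - rho))]
            return diffusion_R(rho)                          # diffusion regime

        for rho in (0.5, 1.0, 5.0, 20.0):
            print(rho, reflectance(rho))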

  20. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate into the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
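
    The nested structure of the proposed test reduces to a double Monte Carlo loop; a sketch with a simple exponential model and a Kolmogorov-Smirnov-type statistic standing in for the spatial point-process setting:

        # Outer loop: simulate datasets from the fitted model; inner loop:
        # run the plug-in Monte Carlo test on each, so the final p-value
        # is calibrated and the plug-in bias is removed.
        import numpy as np

        rng = np.random.default_rng(1)

        def fit(data):
            return 1.0 / data.mean()          # MLE of the exponential rate

        def statistic(data):
            rate = fit(data)                  # plug-in estimate
            x = np.sort(data)
            ecdf = np.arange(1, x.size + 1) / x.size
            return np.max(np.abs(ecdf - (1.0 - np.exp(-rate * x))))

        def mc_pvalue(data, n_inner=99):
            rate = fit(data)
            t_obs = statistic(data)
            t_sim = [statistic(rng.exponential(1.0 / rate, data.size))
                     for _ in range(n_inner)]
            return (1 + sum(t >= t_obs for t in t_sim)) / (n_inner + 1)

        def nested_mc_pvalue(data, n_outer=99):
            rate = fit(data)
            p_obs = mc_pvalue(data)
            p_sim = [mc_pvalue(rng.exponential(1.0 / rate, data.size))
                     for _ in range(n_outer)]
            return (1 + sum(p <= p_obs for p in p_sim)) / (n_outer + 1)

        data = rng.exponential(2.0, 50)
        print(nested_mc_pvalue(data))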