Energy Technology Data Exchange (ETDEWEB)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
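The VHS/VSS calibration described above rests on two standard closed-form relations of Bird's collision models: the total cross section is a power law in relative speed, and the implied viscosity is a power law in temperature. A minimal sketch of those two relations, using illustrative N2-like parameter values rather than the recommended fits of the paper:

```python
import math

def vhs_viscosity(T, T_ref, mu_ref, omega):
    """Viscosity implied by the VHS model: a power law in temperature,
    mu(T) = mu_ref * (T / T_ref)**omega."""
    return mu_ref * (T / T_ref) ** omega

def vhs_total_cross_section(c_r, d_ref, c_ref, omega):
    """VHS total cross section as a function of relative speed c_r:
    sigma_T = pi * d_ref**2 * (c_ref / c_r)**(2*omega - 1)."""
    return math.pi * d_ref**2 * (c_ref / c_r) ** (2.0 * omega - 1.0)

# Illustrative (not recommended-fit) parameters for N2:
T_ref = 273.0       # reference temperature, K
mu_ref = 1.656e-5   # N2 viscosity near T_ref, Pa s
omega = 0.74        # VHS temperature exponent

for T in (1000.0, 10000.0, 20000.0):
    print(f"T = {T:7.0f} K  ->  mu = {vhs_viscosity(T, T_ref, mu_ref, omega):.3e} Pa s")
```

Matching the DSMC parameters to ab initio collision integrals amounts to choosing d_ref, omega (and, for VSS, the scattering exponent alpha) so that these power laws reproduce the CFD transport fits over the target temperature range.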
Energy Technology Data Exchange (ETDEWEB)
Parsons, Neal, E-mail: neal.parsons@cd-adapco.com; Levin, Deborah A., E-mail: deblevin@illinois.edu [Department of Aerospace Engineering, The Pennsylvania State University, 233 Hammond Building, University Park, Pennsylvania 16802 (United States); Duin, Adri C. T. van, E-mail: acv13@engr.psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States); Zhu, Tong, E-mail: tvz5037@psu.edu [Department of Aerospace Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States)
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N{sub 2}({sup 1}Σ{sub g}{sup +})-N{sub 2}({sup 1}Σ{sub g}{sup +}) collision pair for conditions expected in hypersonic shocks, using a new potential energy surface developed from a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
Evaluation of angular scattering models for electron-neutral collisions in Monte Carlo simulations
Janssen, J. F. J.; Pitchford, L. C.; Hagelaar, G. J. M.; van Dijk, J.
2016-10-01
In Monte Carlo simulations of electron transport through a neutral background gas, simplifying assumptions about the shape of the angular distribution of electron-neutral scattering cross sections are usually made. This is mainly because full sets of differential scattering cross sections are rarely available. In this work simple models for angular scattering are compared to results from the recent quantum calculations of Zatsarinny and Bartschat for differential scattering cross sections (DCSs) from zero to 200 eV in argon. These simple models represent, in various ways, the trend toward forward scattering with increasing electron energy. The simple models are then used in Monte Carlo simulations of range, straggling, and backscatter of electrons emitted from a surface into a volume filled with a neutral gas. It is shown that the assumptions of isotropic elastic scattering and of forward scattering for the inelastic collision process yield results within a few percent of those calculated using the DCSs of Zatsarinny and Bartschat. The quantities held constant in these comparisons are the elastic momentum transfer and total inelastic cross sections.
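The two limiting assumptions compared above are easy to state as sampling rules. A hedged sketch: isotropic scattering draws cos(chi) uniformly, while one commonly used parameter-free forward-peaked form is the screened-Coulomb expression of Okhrimovskyy and Bogaerts, cos(chi) = 1 - 2R/(1 + 8*eps*(1 - R)); the energy scale used to form eps below is an illustrative assumption, not a value from this paper.

```python
import random

def sample_cos_chi_isotropic(rng):
    # Isotropic scattering: cos(chi) uniform on [-1, 1].
    return 1.0 - 2.0 * rng.random()

def sample_cos_chi_forward(energy_ev, rng, scale_ev=1.0):
    # Screened-Coulomb-like forward model (Okhrimovskyy-Bogaerts form):
    # cos(chi) = 1 - 2R / (1 + 8*eps*(1 - R)), eps = E / scale_ev.
    # scale_ev is an illustrative screening-energy scale, not a fitted value.
    r = rng.random()
    eps = energy_ev / scale_ev
    return 1.0 - 2.0 * r / (1.0 + 8.0 * eps * (1.0 - r))

rng = random.Random(1)
n = 100_000
iso = sum(sample_cos_chi_isotropic(rng) for _ in range(n)) / n
fwd = sum(sample_cos_chi_forward(100.0, rng) for _ in range(n)) / n
print(f"<cos chi> isotropic ~ {iso:.3f}, forward model at 100 eV ~ {fwd:.3f}")
```

The mean scattering cosine approaching one at high energy is what makes the forward models progressively indistinguishable from no angular deflection in the inelastic channels.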
Assessment of high-fidelity collision models in the direct simulation Monte Carlo method
Weaver, Andrew B.
Advances in computer technology over the decades have allowed for more complex physics to be modeled in the DSMC method. In the first paper on DSMC in 1963, 30,000 collision events per hour were simulated using a simple hard sphere model. Today, more than 10 billion collision events can be simulated per hour for the same problem. Many new and more physically realistic collision models, such as the Lennard-Jones potential and the forced harmonic oscillator model, have been introduced into DSMC. However, the fact that computer resources are more readily available and higher-fidelity models have been developed does not necessitate their usage. It is important to understand how such high-fidelity models affect the output quantities of interest in engineering applications. The effect of elastic and inelastic collision models on compressible Couette flow, ground-state atomic oxygen transport properties, and normal shock waves has therefore been investigated. Recommendations for variable soft sphere and Lennard-Jones model parameters are made based on a critical review of recent ab-initio calculations and experimental measurements of transport properties.
Shi, Feng; Wang, Dezhen; Ren, Chunsheng
2008-06-01
Atmospheric-pressure nonequilibrium discharge plasmas are widely applied in modern plasma processing. Simulations of discharges in pure Ar and pure He at atmospheric pressure, driven by a high-voltage trapezoidal nanosecond pulse, have been performed using a one-dimensional particle-in-cell Monte Carlo collision (PIC-MCC) model coupled with a renormalization and weighting procedure (mapping algorithm). Numerical results show that the discharge characteristics in the two inert gases are very similar: both exhibit a local field-reversal effect and double-peaked charged-particle density distributions. The electron and ion energy distribution functions are also examined, and the discharge is interpreted in terms of ionization avalanches. Furthermore, the total current density is found to be a function of time but independent of position.
Energy Technology Data Exchange (ETDEWEB)
Krause, Claudius
2012-04-15
High energy proton-proton collisions lead to a large amount of secondary particles to be measured in a detector. A final state containing top quarks is of particular interest. But top quarks are only produced in a small fraction of the collisions. Hence, criteria must be defined to separate events containing top quarks from the background. From detectors, we record signals, for example hits in the tracker system or deposits in the calorimeters. In order to obtain the momentum of the particles, we apply algorithms to reconstruct tracks in space. More sophisticated algorithms are needed to identify the flavour of quarks, such as b-tagging. Several steps are needed to test these algorithms. Collision products of proton-proton events are generated using Monte Carlo techniques and their passage through the detector is simulated. After that, the algorithms are applied and the signal efficiency and the mistagging rate can be obtained. There are, however, many different approaches and algorithms realized in programs, so the question arises if the choice of the Monte Carlo generator influences the measured quantities. In this thesis, two commonly used Monte Carlo generators, SHERPA and MadGraph/MadEvent, are compared and the differences in the selection efficiency of semimuonic tt events are estimated. In addition, the distributions of kinematic variables are shown. A special chapter about the matching of matrix elements with parton showers is included. The main algorithms, CKKW for SHERPA and MLM for MadGraph/MadEvent, are introduced.
Energy Technology Data Exchange (ETDEWEB)
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2015-10-15
Large-angle Coulomb collisions allow for the exchange of a significant proportion of the energy of a particle in a single collision, but are not included in models of plasmas based on fluids, the Vlasov–Fokker–Planck equation, or currently available plasma Monte Carlo techniques. Their unique effects include the creation of fast ‘knock-on’ ions, which may be more likely to undergo certain reactions, and distortions to ion distribution functions relative to what is predicted by small-angle collision only theories. We present a computational method which uses Monte Carlo techniques to include the effects of large-angle Coulomb collisions in plasmas and which self-consistently evolves distribution functions according to the creation of knock-on ions of any generation. The method is used to demonstrate ion distribution function distortions in an inertial confinement fusion (ICF) relevant scenario of the slowing of fusion products.
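The distinction between small- and large-angle Coulomb collisions can be made concrete with the Rutherford angular distribution, which becomes analytically invertible once a screening cutoff is imposed. The sketch below is an illustration, not the paper's scheme: the cutoff value is arbitrary, and the equal-mass relation (fractional energy transfer equals sin^2 of half the center-of-mass angle) is assumed to flag "knock-on"-like events.

```python
import math
import random

def sample_rutherford_s2(theta_min, rng):
    """Sample s^2 = sin^2(theta/2) for the CM scattering angle theta from
    the Rutherford distribution, cut off below theta_min (screening).
    The distribution is analytically invertible: 1/s^2 is uniform
    between 1 and 1/sin^2(theta_min/2)."""
    smin2 = math.sin(theta_min / 2.0) ** 2
    r = rng.random()
    inv = (1.0 / smin2) - r * (1.0 / smin2 - 1.0)
    return 1.0 / inv

rng = random.Random(7)
theta_min = 1e-3  # illustrative screening cutoff, radians
n = 200_000
samples = [sample_rutherford_s2(theta_min, rng) for _ in range(n)]
# For equal-mass collisions the fractional energy transfer is s^2;
# events transferring a large energy fraction are rare but present:
frac_knock_on = sum(s2 > 0.1 for s2 in samples) / n
print(f"fraction of collisions transferring >10% energy: {frac_knock_on:.2e}")
```

This rarity is exactly why fluid and Fokker-Planck treatments, which expand in small momentum transfers, miss the knock-on ion population that a Monte Carlo treatment of the full distribution can retain.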
Bleibel, J.; Bravina, L. V.; Zabrodin, E. E.
2016-06-01
Multiplicity, rapidity and transverse momentum distributions of hadrons produced both in inelastic and nondiffractive p p collisions at energies from √{s }=200 GeV to 14 TeV are studied within the Monte Carlo quark-gluon string model. Good agreement with the available experimental data up to √{s }=13 TeV is obtained, and predictions are made for the collisions at top LHC energy √{s }=14 TeV . The model indicates that Feynman scaling and extended longitudinal scaling remain valid in the fragmentation regions, whereas strong violation of Feynman scaling is observed at midrapidity. The Koba-Nielsen-Olesen (KNO) scaling in multiplicity distributions is violated at LHC also. The origin of both maintenance and violation of the scaling trends is traced to short range correlations of particles in the strings and interplay between the multistring processes at ultrarelativistic energies.
DEFF Research Database (Denmark)
Granados, Alba; Brunskog, Jonas; Misztal, M. K.
2015-01-01
When vocal folds vibrate at normal speaking frequencies, collisions occur. The numerics and formulations behind position-based continuum models of contact are an active field of research in the contact mechanics community. In this paper, a frictionless three-dimensional finite element model of the vocal fold collision is proposed, which incorporates different procedures used in contact mechanics and mathematical optimization theories. The penalty approach and the Lagrange multiplier method are investigated. The contact force solution obtained by the penalty formulation is highly dependent...
Energy Technology Data Exchange (ETDEWEB)
Fubiani, G.; Boeuf, J. P. [Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d' Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France); CNRS, LAPLACE, F-31062 Toulouse (France)
2013-11-15
Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently, and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small (less than 10%) with respect to the production by atomic hydrogen or deuterium bombardment.
The ultraperipheral collisions using Monte Carlo calculation method
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, I. [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica; Instituto Superior de Tecnologias e Ciencias Aplicadas (InSTEC), Havana (Cuba); Deppman, A. [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica
2012-07-01
Ultraperipheral collisions (UPCs) are heavy-ion reactions in which one ion interacts only through the electromagnetic field of the other. The interaction of a nucleus with the electromagnetic field can be treated as the interaction of virtual photons with the nucleus. The intensity of the electromagnetic field, and therefore the number of photons in the cloud around the nucleus, is proportional to Z{sup 2}. This type of collision is thus strongly favored when heavy ions collide, so heavy-ion colliders such as the Large Hadron Collider (LHC) at CERN allow the study of UPC physics. This work studies nuclear processes in ultraperipheral collisions using the intranuclear cascade model implemented in the CRISP code. The main objective is to study processes induced by the absorption of high-energy photons, or of multiple photons, by a single nucleus. Here we focus on the production of vector mesons by virtual photons with energies around 1.0-1.5 GeV, and on the propagation of these mesons in the nucleus. The CRISP code, which simulates intranuclear cascade and evaporation/fission/fragmentation nuclear reactions, allows a complete study of nuclear reactions mediated by vector mesons. (author)
Tomasik, Boris
2010-11-01
A Monte Carlo generator of the final state of hadrons emitted from an ultrarelativistic nuclear collision is introduced. An important feature of the generator is a possible fragmentation of the fireball and emission of the hadrons from fragments. The phase-space distribution of the fragments is based on the blast-wave model extended to azimuthally non-symmetric fireballs. The model parameters can be tuned, which allows one to generate final states from various kinds of fireballs. An optional output in the OSCAR1999A format allows for a comprehensive analysis of phase-space distributions and/or use as input for an afterburner. DRAGON's purpose is to produce artificial data sets that resemble those coming from real nuclear collisions, provided fragmentation occurs at hadronisation and hadrons are emitted from fragments without any further scattering. Its name, DRAGON, stands for DRoplet and hAdron GeneratOr for Nuclear collisions. In a way, the model is similar to THERMINATOR, with the crucial difference that emission from fragments is included.
Monte Carlo Glauber wounded nucleon model with meson cloud
Zakharov, B G
2016-01-01
We study the effect of the nucleon meson cloud on predictions of the Monte Carlo Glauber wounded nucleon model for $AA$, $pA$, and $pp$ collisions. From the analysis of the data on the charged multiplicity density in $AA$ collisions we find that the meson-baryon Fock component reduces the required fraction of binary collisions by a factor of $\sim 2$ for Au+Au collisions at $\sqrt{s}=0.2$ TeV and $\sim 1.5$ for Pb+Pb collisions at $\sqrt{s}=2.76$ TeV. For central $AA$ collisions the meson cloud can increase the multiplicity density by $\sim 16$-$18\%$. We give predictions for the midrapidity charged multiplicity density in Pb+Pb collisions at $\sqrt{s}=5.02$ TeV for the future LHC run 2. We find that the meson cloud has a weak effect on the centrality dependence of the ellipticity $\epsilon_2$ in $AA$ collisions. For collisions of the deformed uranium nuclei at $\sqrt{s}=0.2$ TeV we find that the meson cloud may somewhat improve agreement with the data on the dependence of the elliptic flow on the charged multiplicity.
STARlight: A Monte Carlo simulation program for ultra-peripheral collisions of relativistic ions
Klein, Spencer R.; Nystrand, Joakim; Seger, Janet; Gorbunov, Yuri; Butterworth, Joey
2017-03-01
Ultra-peripheral collisions (UPCs) have been a significant source of study at RHIC and the LHC. In these collisions, the two colliding nuclei interact electromagnetically, via two-photon or photonuclear interactions, but not hadronically; they effectively miss each other. Photonuclear interactions produce vector meson states or more general photonuclear final states, while two-photon interactions can produce lepton or meson pairs, or single mesons. In these interactions, the collision geometry plays a major role. We present a program, STARlight, that calculates the cross-sections for a variety of UPC final states and also creates, via Monte Carlo simulation, events for use in determining detector efficiency.
DROPLET COLLISION AND COALESCENCE MODEL
Institute of Scientific and Technical Information of China (English)
LI Qiang; CAI Ti-min; HE Guo-qiang; HU Chun-bo
2006-01-01
A new droplet collision and coalescence model is presented, together with a quick-sort method for locating collision partners; based on theoretical and experimental results, the treatment of the droplet collision outcome is further refined. The advantages of the smoothed particle hydrodynamics (SPH) method are exploited in two ways: to limit the collisions of a droplet to a given number of nearest droplets, and to define the probability of coalescence. Numerical simulations were carried out for model validation. Results show that the presented model is mesh-independent and less time-consuming: it not only maintains momentum conservation of the system, but is also insensitive to the initial droplet size distribution.
SPHINX v 1.1 Monte Carlo Program for Polarized Nucleon-Nucleon Collisions (update)
Güllenstern, S; Górnicki, P; Mankiewicz, L; Schäfer, A; Güllenstern, Stefan; Martin, Oliver; Gornicki, Pawel; Mankiewicz, Lech; Schäfer, Andreas
1996-01-01
We present the updated long write-up for version 1.1 of the SPHINX Monte Carlo. The program can be used to simulate polarized nucleon - nucleon collisions at high energies. Spins of colliding particles are taken into account. The program allows the calculation of cross sections for various processes.
Monte Carlo models of dust coagulation
Zsom, Andras
2010-01-01
The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth has two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...
Modelling seabird collision risk with off-shore wind farms
Energy Technology Data Exchange (ETDEWEB)
Mateos, Maria; Arroyo, Gonzalo Munoz; Rosario, Jose Juan Alonso del
2011-07-01
Recent concern about the adverse effects of collision mortality of avian migrants at wind farms has highlighted the need to understand bird-wind turbine interactions. Here, a stochastic collision model based on data of seabird behaviour collected on-site is presented as a flexible, easy-to-apply tool for assessing the collision probabilities of off-shore wind farms in a pre-construction phase. The collision prediction model, which treats the wind farm area as a risk window, is constructed as a stochastic model for avian migrants based on Monte Carlo simulation; it calculates the probable number of birds collided per unit time. Migration volume, wind farm dimensions, vertical and horizontal distribution of the migratory passage, flight direction, and avoidance rates, among other variables, are taken into account as input variables in different steps of the model. In order to assess the relative importance of these factors for collision probability predictions, the collision probabilities obtained from the set of scenarios resulting from different combinations of the input variables were modelled using Generalised Additive Models. The application of this model to a hypothetical project for erecting a wind farm at the Strait of Gibraltar showed that collision probability, and consequently mortality rates, depend strongly on the avoidance rates assumed and on the distribution of birds across altitude layers. These parameters should be considered priorities to be addressed in post-construction studies. (Author)
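The chain of conditional events in such a model (passage through the risk window, flight at rotor height, behavioural avoidance, collision) can be sketched as a toy Monte Carlo. All rates below are invented placeholders for illustration, not values from the study:

```python
import random

def simulate_collisions(n_passes, p_risk_height, p_collide_no_avoid,
                        avoidance_rate, rng):
    """Toy Monte Carlo for bird-turbine collisions.

    Each migrating bird passes the wind-farm window once; it is at risk
    only if it flies at rotor height, and collides only if it neither
    avoids the turbines nor passes through the rotors safely."""
    collisions = 0
    for _ in range(n_passes):
        if rng.random() >= p_risk_height:
            continue                      # bird above/below the rotors
        if rng.random() < avoidance_rate:
            continue                      # behavioural avoidance
        if rng.random() < p_collide_no_avoid:
            collisions += 1
    return collisions

rng = random.Random(42)
n = simulate_collisions(100_000, 0.3, 0.05, 0.98, rng)
print(f"collisions per 100,000 passes: {n}")
```

Because the avoidance rate multiplies everything downstream, small changes in it move the predicted mortality by large factors, which is consistent with the study's conclusion that avoidance rates and altitude distributions dominate the prediction.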
Public repository with Monte Carlo simulations for high-energy particle collision experiments
Chekanov, S V
2016-01-01
Planning high-energy collision experiments for the next few decades requires extensive Monte Carlo simulations in order to accomplish the physics goals of these experiments. Such simulations are essential for understanding fundamental physics processes, as well as for setting the detector parameters that underpin the associated R&D projects. This paper describes a public repository with Monte Carlo event samples before and after detector-response simulation. The goal of this repository is to facilitate the many tasks involved in planning the next generation of particle experiments.
Energy Technology Data Exchange (ETDEWEB)
Huthmacher, Klaus [Department of Physics and OPTIMAS Research Center, University of Kaiserslautern (Germany); Molberg, Andreas K. [Department of Chemistry and OPTIMAS Research Center, University of Kaiserslautern (Germany); Rethfeld, Bärbel [Department of Physics and OPTIMAS Research Center, University of Kaiserslautern (Germany); Gulley, Jeremy R., E-mail: jgulley@kennesaw.edu [Department of Physics, Kennesaw State University, Kennesaw, GA 30144 (United States)
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
Barghouthi, I. A.; Barakat, A. R.; Schunk, R. W.
1994-01-01
Non-Maxwellian ion velocity distribution functions have been theoretically predicted, and confirmed by observations, to occur at high latitudes. These distributions deviate from Maxwellian due to the combined effect of the E x B drift and ion-neutral collisions. At high altitude and/or for solar maximum conditions, the ion-to-neutral density ratio increases and, hence, the role of ion self-collisions becomes appreciable. A Monte Carlo simulation was used to investigate the behavior of O(+) ions that are E x B-drifting through a background of neutral O, with the effect of O(+) (Coulomb) self-collisions included. Wide ranges of the ion-to-neutral density ratio n(sub i)/n(sub n) and the electrostatic field E were considered in order to investigate the change of ion behavior with solar cycle and with altitude. For low altitudes and/or solar minimum (n(sub i)/n(sub n) less than or equal to 10(exp -5)), the effect of self-collisions is negligible. For higher values of n(sub i)/n(sub n), the effect of self-collisions becomes significant and, hence, the non-Maxwellian features of the O(+) distribution are reduced. The Monte Carlo results were compared to those obtained with simplified collision models in order to assess the validity of the latter. In general, the simple collision models tend to be more accurate for low E and for high n(sub i)/n(sub n).
A semianalytic Monte Carlo code for modelling LIDAR measurements
Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio
2007-10-01
LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics, and a Monte Carlo simulation model can provide a reliable answer to these requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code by taking into account the contributions of the main atmospheric molecular constituents and of aerosol particles through single and multiple scattering; the contributions of molecular absorption and of reflection from the ground and clouds are evaluated too. The code can simulate both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. The code also provides variance-reduction devices (forced collision, local forced collision, splitting, and Russian roulette) that enable the user to drastically reduce the variance of the calculation.
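Of the variance-reduction devices mentioned, Russian roulette is the simplest to illustrate: it terminates low-weight photons probabilistically while rescaling the survivors so the estimator stays unbiased. A minimal sketch, with arbitrary illustrative weights and thresholds:

```python
import random

def russian_roulette(weight, threshold, survival_prob, rng):
    """Russian roulette: kill low-weight Monte Carlo photons without bias.

    A photon whose statistical weight falls below `threshold` survives
    with probability `survival_prob`, and its weight is divided by that
    probability so the expected weight is unchanged."""
    if weight >= threshold:
        return weight              # heavy enough, leave untouched
    if rng.random() < survival_prob:
        return weight / survival_prob
    return 0.0                     # photon terminated

rng = random.Random(3)
n = 200_000
w0 = 1e-4
mean_after = sum(russian_roulette(w0, 1e-3, 0.1, rng) for _ in range(n)) / n
print(f"mean weight before {w0:.1e}, after roulette {mean_after:.2e}")
```

The payoff is computational: most negligible-weight histories are dropped early, while the few survivors carry proportionally larger weights, preserving the mean of the tallied signal.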
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
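The core of the MC collision-probability estimate is simple to sketch: sample relative positions from the two objects' uncertainty distributions and count how often the separation falls below the combined hard-body radius. The toy below is a 2D encounter-plane illustration with invented numbers, not the GPU tool's propagation-based pipeline:

```python
import math
import random

def mc_collision_probability(n_samples, sigma_a, sigma_b, miss_distance,
                             combined_radius, rng):
    """Monte Carlo collision probability at closest approach (toy 2D model).

    Positions of the two objects in the encounter plane are sampled from
    independent isotropic Gaussians; a collision is counted when the
    separation is below the combined hard-body radius."""
    hits = 0
    for _ in range(n_samples):
        ax = rng.gauss(0.0, sigma_a)
        ay = rng.gauss(0.0, sigma_a)
        bx = rng.gauss(miss_distance, sigma_b)
        by = rng.gauss(0.0, sigma_b)
        if math.hypot(ax - bx, ay - by) < combined_radius:
            hits += 1
    return hits / n_samples

rng = random.Random(0)
p = mc_collision_probability(200_000, 100.0, 100.0, 50.0, 20.0, rng)
print(f"estimated collision probability: {p:.4f}")
```

Because each sample is independent, this loop parallelizes trivially, which is exactly what makes the GPU implementation attractive for the low probabilities that matter operationally.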
Rybczyński, Maciej
2011-01-01
We investigate the influence of the nucleon-nucleon collision profile (the probability of interaction as a function of the nucleon-nucleon impact parameter) in the wounded nucleon model and its extensions on several observables measured in relativistic heavy-ion collisions. We find that the participant eccentricity coefficient, $\epsilon^\ast$, as well as the higher harmonic coefficients, $\epsilon_n^\ast$, are reduced by 10-20% for mid-peripheral collisions when the realistic (Gaussian) profile is used, as compared to the case with the commonly used hard-sphere profile. Similarly, the multiplicity fluctuations, treated as a function of the number of wounded nucleons in one of the colliding nuclei, are reduced by 10-20%. This demonstrates that Glauber Monte Carlo codes should use the realistic nucleon-nucleon collision profile in precision studies of these observables. The Gaussian collision profile is built into GLISSANDO.
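The hard-sphere and Gaussian profiles can be compared concretely: both can be normalized to the same inelastic cross section, yet they differ in shape, and it is the Gaussian tail that softens eccentricities and fluctuations. A sketch of the two normalized profiles (the functional forms and the sigma value are illustrative, not GLISSANDO's internals):

```python
import math

def hard_sphere_profile(b, sigma_inel):
    # P(b) = 1 inside the black-disc radius sqrt(sigma/pi), 0 outside.
    return 1.0 if b * b < sigma_inel / math.pi else 0.0

def gaussian_profile(b, sigma_inel):
    # P(b) = exp(-pi b^2 / sigma), normalized so that
    # integral of P(b) * 2*pi*b db equals sigma_inel, like the hard sphere.
    return math.exp(-math.pi * b * b / sigma_inel)

def integrated_sigma(profile, sigma_inel, b_max=10.0, n=200_000):
    # Midpoint-rule check that the profile reproduces sigma_inel.
    db = b_max / n
    return sum(profile((i + 0.5) * db, sigma_inel) * 2.0 * math.pi
               * (i + 0.5) * db * db for i in range(n))

sigma = 4.0  # fm^2, illustrative inelastic NN cross section
print(integrated_sigma(hard_sphere_profile, sigma),
      integrated_sigma(gaussian_profile, sigma))
```

At the same total cross section, the Gaussian profile allows wounding at impact parameters beyond the black-disc radius and misses some head-on pairs, smearing the participant geometry relative to the hard-sphere case.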
A model for collisions in granular gases
Brilliantov, Nikolai V.; Spahn, Frank; Hertzsch, Jan-Martin; Poeschel, Thorsten
2002-01-01
We propose a model for collisions between particles of a granular material and calculate the restitution coefficients for the normal and tangential motion as functions of the impact velocity from considerations of dissipative viscoelastic collisions. Existing models of impact with dissipation as well as the classical Hertz impact theory are included in the present model as special cases. We find that the type of collision (smooth, reflecting or sticky) is determined by the impact velocity and...
Gas discharges modeling by Monte Carlo technique
Directory of Open Access Journals (Sweden)
Savić Marija
2010-01-01
The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas phase ionizations by fast neutrals. In this paper we develop a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sources Sci. Technol. 8 (1999) R1].
Shell model the Monte Carlo way
Energy Technology Data Exchange (ETDEWEB)
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
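The linearization of the two-body term rests on the Gaussian (Hubbard-Stratonovich) identity; schematically, for a one-body operator $\hat O$ with coupling $\lambda$ over a time slice $\Delta\tau$ (a sketch of the standard identity, not the paper's specific decomposition),

\[
e^{-\frac{1}{2}\Delta\tau\,\lambda\,\hat O^{2}}
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} d\sigma\;
e^{-\frac{1}{2}\sigma^{2}}\;e^{\sigma\sqrt{-\Delta\tau\,\lambda}\,\hat O},
\]

so the two-body propagator becomes a Gaussian-weighted average of one-body propagators over the auxiliary field $\sigma$. When $\lambda > 0$ the coupling $\sqrt{-\Delta\tau\,\lambda}$ is imaginary, which is the origin of the Monte Carlo sign problem referred to in the abstract.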
Directory of Open Access Journals (Sweden)
A. R. Barakat
Monte Carlo results were compared to those that used simplified collision models in order to assess their validity. In general, the simple collision models tend to be more accurate for low E and for high n_{i}/n_{n}.
Energy Technology Data Exchange (ETDEWEB)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram; Baker, Benjamin; Zabriskie, Adam; Ortensi, Javier; Wang, Yaqi; Gleicher, Frederick; DeHart, Mark; Martineau, Richard
2017-04-01
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ˜ 20µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the Finite Element based code MOOSE for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Monte Carlo simulation of the γγ → τ+τ− process in e+e− collisions
Filipovic, Jelena
2017-01-01
Monte Carlo events for γγ → τ+τ− process in e+e− collisions were generated with SuperChic and the cross-sections at different centre-of-mass energies of FCC-ee were obtained. Further tau decays were performed using Pythia8 generator. Events were stored in Root trees and prepared for future analysis.
Satake, Shinsuke; Pianpanit, Theerasarn; Sugama, Hideo; Nunami, Masanori; Matsuoka, Seikichi; Ishiguro, Seiji; Kanno, Ryutaro
2016-01-01
A numerical method to implement a linearized Coulomb collision operator for multi-ion-species neoclassical transport simulation using the two-weight $\\delta f$ Monte Carlo method is developed. The conservation properties and the adjointness of the operator in collisions between two particle species with different temperatures are verified. The linearized operator in a $\\delta f$ Monte Carlo code is benchmarked against two other kinetic simulation codes, i.e., a $\\delta f$ continuum gyrokinetic code with the same linearized collision operator and a full-f PIC code with the Nanbu collision operator. The benchmark simulations of the equilibration process of plasma flow and temperature fluctuation among several particle species show very good agreement between the $\\delta f$ Monte Carlo code and the other two codes. An error in the H-theorem in the two-weight $\\delta f$ Monte Carlo method is found, which is caused by the weight spreading phenomenon inherent in the two-weight $\\delta f$ method. It is demonstrated that the w...
NLO Monte Carlo predictions for heavy-quark production at the LHC. pp collisions in ALICE
Energy Technology Data Exchange (ETDEWEB)
Klasen, M.; Kovarik, K.; Topp, M. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Klein-Boesing, C. [Muenster Univ. (Germany). Inst. fuer Kernphysik; GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany). ExtreMe Matter Institute EMMI; Kramer, G. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Wessels, J.P. [Muenster Univ. (Germany). Inst. fuer Kernphysik
2014-05-15
Next-to-leading order (NLO) QCD predictions for the production of heavy quarks in proton-proton collisions are presented within three different approaches to quark mass, resummation and fragmentation effects. In particular, new NLO and parton shower simulations with POWHEG are performed in the ALICE kinematic regime at three different centre-of-mass energies, including scale and parton density variations, in order to establish a reliable baseline for future detailed studies of heavy-quark suppression in heavy-ion collisions. Very good agreement of POWHEG is found with FONLL, in particular for centrally produced D{sup 0}, D{sup +} and D{sup *+} mesons and electrons from charm and bottom quark decays, but also with the generally somewhat higher GM-VFNS predictions within the theoretical uncertainties. The latter are dominated by scale rather than quark mass variations. Parton density uncertainties for charm and bottom quark production are computed here with POWHEG for the first time and shown to be dominant in the forward regime, e.g. for muons coming from heavy-flavour decays. The fragmentation into D{sub s}{sup +} mesons seems to require further tuning within the NLO Monte Carlo approach.
NLO Monte Carlo predictions for heavy-quark production at the LHC: pp collisions in ALICE
Klasen, M.; Klein-Bösing, C.; Kovarik, K.; Kramer, G.; Topp, M.; Wessels, J. P.
2014-08-01
Next-to-leading order (NLO) QCD predictions for the production of heavy quarks in proton-proton collisions are presented within three different approaches to quark mass, resummation and fragmentation effects. In particular, new NLO and parton shower simulations with POWHEG are performed in the ALICE kinematic regime at three different centre-of-mass energies, including scale and parton density variations, in order to establish a reliable baseline for future detailed studies of heavy-quark suppression in heavy-ion collisions. Very good agreement of POWHEG is found with FONLL, in particular for centrally produced D 0, D + and D *+ mesons and electrons from charm and bottom quark decays, but also with the generally somewhat higher GM-VFNS predictions within the theoretical uncertainties. The latter are dominated by scale rather than quark mass variations. Parton density uncertainties for charm and bottom quark production are computed here with POWHEG for the first time and shown to be dominant in the forward regime, e.g. for muons coming from heavy-flavour decays. The fragmentation into D s mesons seems to require further tuning within the NLO Monte Carlo approach.
Energy Technology Data Exchange (ETDEWEB)
Altsybeev, Igor [St. Petersburg State University (Russian Federation)
2016-01-22
In the present work, a Monte-Carlo toy model with repulsing quark-gluon strings in hadron-hadron collisions is described. String repulsion creates transverse boosts for the string decay products, giving modifications of observables. As an example, long-range correlations between mean transverse momenta of particles in two observation windows are studied in MC toy simulations of heavy-ion collisions.
An efficient collision limiter Monte Carlo simulation for hypersonic near-continuum flows
Liang, Jie; Li, Zhihui; Li, Xuguo; Fang, Boqiang; Du, Ming
2016-11-01
The implementation of a collision limiter DSMC-based hybrid approach to simulating hypersonic near-continuum flow is presented. Continuum breakdown parameters based on the gradient-length local Knudsen number are used to characterize different regions of the flowfield. The collision limiter is used in continuum inviscid regions with large time step and cell size. Local density-gradient-based dynamic adaptation and refinement of collision and sampling cells is employed in high-gradient regions, including strong shocks and the boundary layer near the surface. A variable time step scheme is adopted to ensure a more uniform distribution of model particles per collision cell throughout the computational domain, with a constant ratio of local time step to particle weight so that particles are not cloned or destroyed when crossing from cell to cell. The surface pressure and friction coefficients of hypersonic reentry flow over a blunt capsule are computed under different conditions and compared with a benchmark case in the transitional regime to examine efficiency and accuracy. The aerodynamic characteristics of a wave-rider shape with a sharp leading edge are simulated in the test state for hypersonic near-continuum flow. The computed aerodynamic coefficients are in good agreement with experimental data from the low-density wind tunnel of CARDC at reduced computational expense.
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
Energy Technology Data Exchange (ETDEWEB)
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE-formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (k_{eff} and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.
A collision model for safety evaluation of autonomous intelligent cruise control.
Touran, A; Brackstone, M A; McDonald, M
1999-09-01
This paper describes a general framework for safety evaluation of autonomous intelligent cruise control in rear-end collisions. Using data and specifications from prototype devices, two collision models are developed. One model considers a train of four cars, one of which is equipped with autonomous intelligent cruise control. This model considers the car in front and two cars following the equipped car. In the second model, none of the cars is equipped with the device. Each model can predict the possibility of rear-end collision between cars under various conditions by calculating the remaining distance between cars after the front car brakes. Comparing the two collision models allows one to evaluate the effectiveness of autonomous intelligent cruise control in preventing collisions. The models are then subjected to Monte Carlo simulation to calculate the probability of collision. Based on crash probabilities, an expected value is calculated for the number of cars involved in any collision. It is found that given the model assumptions, while equipping a car with autonomous intelligent cruise control can significantly reduce the probability of the collision with the car ahead, it may adversely affect the situation for the following cars.
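A minimal version of the braking-distance comparison driven by Monte Carlo sampling might look as follows. The reaction-time and deceleration distributions, speeds, and gap below are illustrative assumptions, not the calibrated values from the prototype devices, and only the simplest two-car case is sketched:

```python
import numpy as np

rng = np.random.default_rng(1)

def collision_probability(n=100_000, v0=25.0, gap=30.0):
    """Monte Carlo rear-end collision check for a two-car scenario.
    The lead car brakes to a stop; the follower reacts after a random
    delay. A collision occurs if the remaining distance goes negative.
    All distributions and parameters are illustrative."""
    t_react = rng.normal(1.2, 0.3, n).clip(min=0.1)  # s, follower reaction time
    a_lead = rng.normal(7.0, 0.5, n).clip(min=4.0)   # m/s^2, lead deceleration
    a_foll = rng.normal(6.0, 0.5, n).clip(min=4.0)   # m/s^2, follower deceleration
    d_lead = v0**2 / (2.0 * a_lead)                  # lead car stopping distance
    d_foll = v0 * t_react + v0**2 / (2.0 * a_foll)   # follower travel until stop
    remaining = gap + d_lead - d_foll                # distance left between cars
    return float(np.mean(remaining < 0.0))

p = collision_probability()
```

Extending this to the paper's four-car train amounts to chaining the same remaining-distance calculation down the queue, with only one car's parameters switched to the cruise-control device's specifications.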
AROMA 2.2 a Monte Carlo generator for heavy flavour events in ep collisions
Ingelman, G; Schuler, G A
1996-01-01
A program to simulate the production of heavy quarks through the boson-gluon fusion process in e^{\\pm}p collisions is presented. The full electroweak structure of the electron-gluon interaction is taken into account as well as the masses of the produced heavy quarks. Higher order QCD radiation is treated using initial and final state parton showers, and hadronization is performed using the Lund string model. Physics and programming aspects are described in this manual.
Monte Carlo exploration of warped Higgsless models
Energy Technology Data Exchange (ETDEWEB)
Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu
2004-10-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2){sub L} x SU(2){sub R} x U(1){sub B-L} gauge group in an AdS{sub 5} bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, {approx_equal} 10 TeV, in W{sub L}{sup +}W{sub L}{sup -} elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)
Monte Carlo Exploration of Warped Higgsless Models
Hewett, J L; Rizzo, T G
2004-01-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\\times SU(2)_R\\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.
Monte Carlo modeling and optimization of buffer gas positron traps
Marjanović, Srđan; Petrović, Zoran Lj
2017-02-01
Buffer gas positron traps have been used for over two decades as the prime source of slow positrons, enabling a wide range of experiments. While their performance has been well understood through empirical studies, no theoretical attempt has been made to quantitatively describe their operation. In this paper we apply standard models, as developed for the physics of low-temperature collision-dominated plasmas or the physics of swarms, to model the basic performance and principles of operation of gas-filled positron traps. The Monte Carlo model is equipped with the best available set of cross sections, mostly derived experimentally using the same type of traps that are being studied. Our model represents, in realistic geometry and fields, the development of the positron ensemble from the initial beam provided by the solid neon moderator, through the voltage drops between the stages of the trap and through different pressures of the buffer gas. The first two stages employ excitation of N2 with acceleration of the order of 10 eV, so that the trap operates under conditions where excitation of the nitrogen reduces the energy of the initial beam enough to trap the positrons without giving them a chance to annihilate following positronium formation. The energy distribution function develops from the assumed distribution leaving the moderator; it is accelerated by the voltage drops and forms beams at several distinct energies. In the final stages, the low energy loss collisions (vibrational excitation of CF4 and rotational excitation of N2) control the approach of the distribution function to a Maxwellian at room temperature, but multiple non-Maxwellian groups persist throughout most of the thermalization. Optimization of the efficiency of the trap may be achieved by changing the pressure and voltage drops, and also by choosing to operate in a two-stage mode. The model allows quantitative comparisons and tests of optimization, as well as development of other properties.
Institute of Scientific and Technical Information of China (English)
Shi Feng; Zhang Li-Li; Wang De-Zhen
2009-01-01
This paper reports a simulation of glow discharge in pure helium gas at a pressure of 1.333×10³ Pa under a high-voltage nanosecond pulse, performed using a one-dimensional particle-in-cell Monte Carlo collisions (PIC-MCC) model. Numerical modelling results show that the cathode sheath is much thicker than that of the anode during the pulse discharge, and that field reversal occurs at relatively high pressures near the end of the pulse, resulting from the positive charges that accumulate, due to their finite mobility, during the cathode sheath expansion. Moreover, the electron energy distribution function (EEDF) and ion energy distribution function (IEDF) have also been examined. In the early stage of the pulse, a large number of electrons can be accelerated above the ionization threshold energy. However, in the second half of the pulse, as the field in the bulk plasma decreases and the reverse field then forms due to the excess charges in the cathode sheath, the high-energy part of the EEDF decreases even though the plasma density grows. It is concluded that large-volume non-equilibrium plasmas can be obtained with high-voltage nanosecond pulse discharges.
Monte Carlo Simulation of River Meander Modelling
Posner, A. J.; Duan, J. G.
2010-12-01
This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The quasi-2D Ikeda (1989) flow solution is combined with Monte Carlo simulation of the bank erosion coefficient.
Validation of Compton Scattering Monte Carlo Simulation Models
Weidenspointner, Georg; Hauf, Steffen; Hoff, Gabriela; Kuster, Markus; Pia, Maria Grazia; Saracco, Paolo
2014-01-01
Several models for the Monte Carlo simulation of Compton scattering on electrons are quantitatively evaluated with respect to a large collection of experimental data retrieved from the literature. Some of these models are currently implemented in general purpose Monte Carlo systems; some have been implemented and evaluated for possible use in Monte Carlo particle transport for the first time in this study. Here we present first and preliminary results concerning total and differential Compton scattering cross sections.
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at the Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated with its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics
Energy Technology Data Exchange (ETDEWEB)
Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)
2016-07-01
In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center of mass energy of √(s)=13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach of these data which is complementary to dedicated analyses: By taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.
Heavy Ions Collision evolution modeling with ECHO-QGP
Rolando, Valentina; Beraudo, Andrea; Del Zanna, Luca; Becattini, Francesco; Chandra, Vinod; De Pace, Arturo; Nardi, Marzia
2014-01-01
We present a numerical code modeling the evolution of the medium formed in relativistic heavy ion collisions, ECHO-QGP. The code solves relativistic hydrodynamics in $(3+1)-$D, with dissipative terms included within the framework of Israel-Stewart theory; it can work both in Minkowskian and in Bjorken coordinates. Initial conditions are provided through an implementation of the Glauber model (both Optical and Monte Carlo), while freezeout and particle generation are based on the Cooper-Frye prescription. The code is validated against several test problems and shows remarkable stability and accuracy with the combination of a conservative (shock-capturing) approach and the high-order methods employed. In particular it beautifully agrees with the semi-analytic solution known as Gubser flow, both in the ideal and in the viscous Israel-Stewart case, up to very large times and without any ad hoc tuning of the algorithm.
Fan Affinity Laws from a Collision Model
Bhattacharjee, Shayak
2012-01-01
The performance of a fan is usually estimated using hydrodynamical considerations. The calculations are long and involved and the results are expressed in terms of three affinity laws. In this paper we use kinetic theory to attack this problem. A hard sphere collision model is used, and subsequently a correction to account for the flow behaviour…
Reporting Monte Carlo Studies in Structural Equation Modeling
Boomsma, Anne
2013-01-01
In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.
A numerical 4D Collision Risk Model
Schmitt, Pal; Culloch, Ross; Lieber, Lilian; Kregting, Louise
2017-04-01
With the growing number of marine renewable energy (MRE) devices being installed across the world, some concern has been raised about the possibility of harming mobile marine fauna by collision. Although physical contact between an MRE device and an organism has not been reported to date, these novel sub-sea structures pose a challenge for accurately estimating collision risks as part of environmental impact assessments. Even if the animal motion is simplified to linear translation, ignoring likely evasive behaviour, the mathematical problem of establishing an impact probability is not trivial. We present a numerical algorithm to obtain such probability distributions using transient, four-dimensional simulations of a novel marine renewable device concept, Deep Green, Minesto's power plant, hereafter referred to as the 'kite', which flies in a figure-of-eight configuration. Simulations were carried out altering several configurations, including kite depth, kite speed and kite trajectory, while keeping the speed of the moving object constant. Since the kite assembly is defined as two parts in the model, a tether (attached to the seabed) and the kite, the collision risk of each part is reported independently. By comparing the number of collisions with the number of collision-free simulations, a probability of impact for each simulated position in the cross-section of the area is obtained. Results suggest that close to the bottom, where the tether amplitude is small, the path is always blocked and the impact probability is 100% as expected. However, higher up in the water column, the collision probability is twice as high in the mid line, where the tether passes twice per period, than at the extremes of its trajectory. The collision probability distribution is much more complex in the upper end of the water column, where the kite and tether can simultaneously collide with the object. Results demonstrate the viability of such models, which can also incorporate empirical
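The counting scheme (collisions versus collision-free runs, per position in the cross-section) can be sketched with a drastically simplified stand-in for the kite geometry. The sinusoidal sweep, grid, and radius below are all illustrative assumptions, not the Deep Green simulation setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def impact_probability_profile(n_runs=5000, n_cells=20, radius=0.05):
    """Per-position impact probability estimated by counting collisions
    versus collision-free runs. The moving device is reduced to a point
    sweeping a sinusoidal path across the cross-section (a crude stand-in
    for the kite's figure-of-eight); the transiting object crosses each
    position at a random phase of the sweep."""
    cells = np.linspace(-1.0, 1.0, n_cells)          # positions across the sweep
    prob = np.empty(n_cells)
    for i, x in enumerate(cells):
        phase = rng.uniform(0.0, 2.0 * np.pi, n_runs)  # random crossing time
        device = np.sin(phase)                          # device position at crossing
        hits = np.abs(device - x) < radius
        prob[i] = hits.mean()                           # collisions / total runs
    return cells, prob

cells, prob = impact_probability_profile()
```

Even this toy reproduces the qualitative point of the abstract: the impact probability depends on where in the cross-section the object crosses, because the sweeping body spends different fractions of its period near different positions.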
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
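One widely used building block of such swarm codes is null-collision free-flight sampling against a constant upper-bound collision frequency. The sketch below uses a toy energy-dependent collision frequency, not cross sections from an LXCat file, and a fixed electron energy rather than a full swarm update:

```python
import numpy as np

rng = np.random.default_rng(3)

NU_MAX = 1e12  # s^-1, constant upper bound on the collision frequency (toy value)

def nu_real(energy_eV):
    """Toy energy-dependent total collision frequency, bounded by NU_MAX."""
    return NU_MAX * energy_eV / (energy_eV + 10.0)

def sample_flights(n, energy_eV=5.0):
    """Null-collision sampling: draw candidate flight times at the constant
    rate NU_MAX, then accept each event as a real collision with probability
    nu_real/NU_MAX; rejected events are 'null' collisions that leave the
    electron's state unchanged."""
    t = -np.log(1.0 - rng.random(n)) / NU_MAX          # candidate flight times
    real = rng.random(n) < nu_real(energy_eV) / NU_MAX  # accept/reject each event
    return t, real

t, real = sample_flights(100_000)
# elapsed time per *real* collision should approach 1 / nu_real
mean_real_interval = t.sum() / real.sum()
```

The appeal of the method is that flight times can be sampled from a simple exponential even when the true collision frequency varies with energy along the trajectory; the rejection step restores the correct statistics.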
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
Fan affinity laws from a collision model
Bhattacharjee, Shayak
2012-01-01
The performance of a fan is usually estimated from hydrodynamical considerations. The calculations are long and involved and the results are expressed in terms of three affinity laws. In this work we use kinetic theory to attack this problem. A hard sphere collision model is used, and subsequently a correction to account for the flow behaviour of air is incorporated. Our calculations prove the affinity laws and provide numerical estimates of the air delivery, thrust and drag on a rotating fan.
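The three affinity laws the abstract refers to are simple scalings at fixed impeller diameter: flow varies with fan speed, pressure rise with its square, and shaft power with its cube. A minimal helper, with illustrative baseline numbers:

```python
def affinity_scaled(flow, pressure, power, speed_ratio):
    """Fan affinity laws at fixed impeller diameter:
    flow ~ N, pressure rise ~ N^2, shaft power ~ N^3,
    where N is the rotational speed."""
    r = speed_ratio
    return flow * r, pressure * r**2, power * r**3

# Doubling fan speed: flow x2, pressure x4, power x8
# (baseline values are illustrative, in arbitrary consistent units).
q2, dp2, p2 = affinity_scaled(1000.0, 250.0, 1.5, 2.0)
# q2 = 2000.0, dp2 = 1000.0, p2 = 12.0
```

The cubic power law is why small speed reductions yield large energy savings, and it is exactly this set of exponents that the kinetic-theory collision argument of the paper sets out to recover.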
Event-chain Monte Carlo for classical continuous spin models
Michel, Manon; Mayer, Johannes; Krauth, Werner
2015-10-01
We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Monte-Carlo simulation-based statistical modeling
Chen, John
2017-01-01
This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.
Monte Carlo studies of model Langmuir monolayers.
Opps, S B; Yang, B; Gray, C G; Sullivan, D E
2001-04-01
This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard
Monte Carlo Analysis of the Lévy Stability and Multi-fractal Spectrum in e+e- Collisions
Institute of Scientific and Technical Information of China (English)
陈刚; 刘连寿
2002-01-01
The Lévy stability analysis is carried out for e+e- collisions at the Z0 mass using the Monte Carlo method. The Lévy index μ is found to be μ = 1.701 ± 0.043. The self-similar generalized dimensions D(q) and multi-fractal spectrum f(α) are presented. The Rényi dimension D(q) decreases with increasing q. The self-similar multifractal spectrum is a convex curve with a maximum at q = 0, α = 1.169 ± 0.011. The right-hand side of the spectrum, corresponding to negative values of q, is obtained through analytical continuation.
Sheikin, E. G.
2017-08-01
The analytical differential cross section (DCS) of elastic scattering of atoms that reproduces the stopping power and the straggling of energy loss is proposed. Analytical expressions derived from the DCS for the diffusion σd and viscosity σv cross sections of elastic collisions of atoms are in good agreement with known cross sections of 38Ar-40Ar and H-Li collisions obtained from quantum mechanical simulations. Monte Carlo modeling of the transport of sputtered Cu atoms in Ar and of the implantation of Bi ions in B and C materials, performed using the proposed DCS, demonstrates its accuracy in the modeling of elastic collisions.
Modeling neutron guides using Monte Carlo simulations
Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R
2002-01-01
Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guides' geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increases while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
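The ray-tracing approach for the straight-guide case can be sketched in two dimensions; the guide width, length, source divergence, and critical angle below are illustrative assumptions, not the parameters of the study:

```python
import math
import random

def trace_guide(n_rays=20_000, w=0.03, length=10.0, theta_c=0.01, seed=1):
    """Toy 2D ray trace of a straight mirrored guide: a ray bounces
    specularly between the walls and is absorbed at a wall if its
    grazing angle exceeds the critical angle theta_c (radians)."""
    random.seed(seed)
    transmitted = 0
    for _ in range(n_rays):
        y = random.uniform(0.0, w)           # entry position across the guide
        th = random.uniform(-0.03, 0.03)     # entry angle, toy uniform source
        x, ok = 0.0, True
        while x < length:
            if abs(th) < 1e-12:              # parallel ray goes straight through
                break
            wall = w if th > 0.0 else 0.0
            dx = (wall - y) / math.tan(th)   # distance to the next wall hit
            if x + dx >= length:             # reaches the exit before the wall
                break
            if abs(th) > theta_c:            # too steep: absorbed, not reflected
                ok = False
                break
            x, y, th = x + dx, wall, -th     # specular reflection
        if ok:
            transmitted += 1
    return transmitted / n_rays

T_guide = trace_guide()
```

With these toy numbers the transmission is set almost entirely by the fraction of source angles below the critical angle, about one third.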
Monte Carlo event generation of photon-photon collisions at colliders
Helenius, Ilkka
2015-01-01
In addition to being interesting in themselves, photon-photon interactions will be an inevitable background at future electron-positron colliders. Thus, to quantify the potential of these colliders, it is important to have an accurate description of such collisions. Here we present our ongoing work to implement photon-photon collisions in the Pythia 8 event generator. First we introduce photon PDFs in general and then discuss in more detail one particular set we have used in our studies. We then discuss how the parton-shower algorithm in Pythia 8 is modified in the case of photon beams and how the beam remnants are constructed. Finally, a brief outlook on future developments is given.
Strain in the mesoscale kinetic Monte Carlo model for sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.
2014-01-01
Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate...
Quasi-Monte Carlo methods for the Heston model
Jan Baldeaux; Dale Roberts
2012-01-01
In this paper, we discuss the application of quasi-Monte Carlo methods to the Heston model. We base our algorithms on the Broadie-Kaya algorithm, an exact simulation scheme for the Heston model. As the joint transition densities are not available in closed-form, the Linear Transformation method due to Imai and Tan, a popular and widely applicable method to improve the effectiveness of quasi-Monte Carlo methods, cannot be employed in the context of path-dependent options when the underlying pr...
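The quasi-Monte Carlo ingredient can be illustrated on a simpler exact-sampling problem than the Heston/Broadie-Kaya scheme (which requires noncentral chi-square draws): pricing a European call under geometric Brownian motion by mapping scrambled Sobol points through the inverse normal CDF. All parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm, qmc

def qmc_call_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, m=12):
    """Quasi-Monte Carlo price of a European call under GBM:
    2^m scrambled Sobol points -> inverse-CDF normals -> exact S_T samples.
    A 1-D stand-in for the exact-simulation idea behind Broadie-Kaya."""
    u = qmc.Sobol(d=1, scramble=True, seed=42).random_base2(m).ravel()
    z = norm.ppf(u)                      # low-discrepancy standard normals
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return float(np.exp(-r * T) * np.maximum(ST - K, 0.0).mean())

price = qmc_call_price()
```

With 4096 scrambled Sobol points the estimate lands very close to the Black-Scholes value of about 10.45, with far less noise than plain Monte Carlo at the same sample count.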
Modelling hadronic interactions in cosmic ray Monte Carlo generators
Directory of Open Access Journals (Sweden)
Pierog Tanguy
2015-01-01
Currently the uncertainty in the prediction of shower observables for different primary particles and energies is dominated by differences between hadronic interaction models. The LHC data on minimum bias measurements can be used to test Monte Carlo generators, and these new constraints will help to reduce the uncertainties in air shower predictions. In this article, after a short introduction to air showers and Monte Carlo generators, we show the results of a comparison of the updated high energy hadronic interaction models EPOS LHC and QGSJETII-04 with LHC data. Results for air shower simulations and their consequences for comparisons with air shower data are also discussed.
Monte Carlo methods and models in finance and insurance
Korn, Ralf
2010-01-01
Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applications.
A viscous blast-wave model for high energy heavy-ion collisions
Jaiswal, Amaresh; Koch, Volker
2016-07-01
Employing a viscosity-based survival scale for initial geometrical perturbations formed in relativistic heavy-ion collisions, we model the radial flow velocity at freeze-out. Subsequently, we use the Cooper-Frye freeze-out prescription, with viscous corrections to the distribution function, to extract the transverse momentum dependence of particle yields and flow harmonics. We fix the model parameters for central collisions by fitting the spectra of identified particles at the Large Hadron Collider (LHC), and estimate them for other centralities using simple hydrodynamic relations. We use the results of a Monte Carlo Glauber model for the initial eccentricities. We demonstrate that this improved viscous blast-wave model leads to good agreement with the transverse momentum distributions of elliptic and triangular flow for all centralities, and estimate the shear viscosity to entropy density ratio η/s ≃ 0.24 at the LHC.
Donahue, C. M.; Hrenya, C. M.; Zelinskaya, A. P.; Nakagawa, K. J.
2008-11-01
Using an apparatus inspired by Newton's cradle, the simultaneous, normal collision between three solid spheres is examined. Namely, an initially touching, motionless pair of "target" particles (doublet) is impacted on one end by a third "striker" particle. Measurements of postcollisional velocities and collision durations are obtained via high-speed photography and an electrical circuit, respectively. Contrary to intuition, the expected Newton's cradle outcome of a motionless, touching particle pair at the bottom of the pendulum arc is not observed in either case. Instead, the striker particle reverses its direction and separates from the middle particle after collision. This reversal is not observed, however, if the target particles are separated by a small distance (not in contact) initially, although a separation still occurs between the striker and middle particle after the collision, with both particles traveling in the same direction. For the case of initially touching target particles, contact duration measurements indicate that the striker separates from the three particles before the two target particles separate. However, when the targets are slightly separated, a three-particle collision is never observed, and the collision is, in fact, a series of two-body collisions. A subsequent implementation of a variety of hard-sphere and soft-sphere collision models indicates that a three-body (soft-sphere) treatment is essential for predicting the velocity reversal, consistent with the experimental findings. Finally, a direct comparison between model predictions and measurements of postcollisional velocities and contact durations provides a gauge of the relative merits of existing collision models for three-body interactions.
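The separated-target case described above, a series of two-body collisions, can be sketched with the standard central-impact restitution formulas; equal unit masses and a perfectly elastic collision are illustrative assumptions, not values from the experiment:

```python
def binary_collision(v1, v2, m1=1.0, m2=1.0, e=1.0):
    """Post-collision velocities for a central two-body impact with
    restitution coefficient e (e=1 is perfectly elastic).
    Conserves momentum; separation speed equals e times approach speed."""
    M = m1 + m2
    v1p = (m1 * v1 + m2 * v2 - m2 * e * (v1 - v2)) / M
    v2p = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / M
    return v1p, v2p

# Striker hits a slightly separated pair: two independent binary collisions.
v_striker, v_mid, v_far = 1.0, 0.0, 0.0
v_striker, v_mid = binary_collision(v_striker, v_mid)  # first impact
v_mid, v_far = binary_collision(v_mid, v_far)          # second impact
print(v_striker, v_mid, v_far)  # 0.0 0.0 1.0
```

For equal elastic spheres the velocity is handed down the chain completely, which is why the hard-sphere sequential picture cannot produce the striker reversal seen when the targets are initially touching; that case, per the abstract, needs a three-body soft-sphere treatment.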
Collisions of Small Nuclei in the Thermal Model
Cleymans, J; Oeschler, H.; Redlich, K.; Sharma, N.
2016-01-01
An analysis is presented of the expectations of the thermal model for particle production in collisions of small nuclei. The maxima observed in the ratios of strange particles to pions as a function of beam energy in heavy-ion collisions are reduced when considering smaller nuclei. Of particular interest is the $\Lambda/\pi^+$ ratio, which shows the strongest maximum and survives even in collisions of small nuclei.
Next-to-Leading-Order Monte Carlo Simulation of Diphoton Production in Hadronic Collisions
D'Errico, Luca
2011-01-01
We present a method, based on the positive weight next-to-leading-order matching formalism (POWHEG), to simulate photon production processes at next-to-leading-order (NLO). This technique is applied to the simulation of diphoton production in hadron-hadron collisions. The algorithm consistently combines the parton shower and NLO calculation, producing only positive weight events. The simulation includes both the photon fragmentation contribution and a full implementation of the truncated shower required to correctly describe soft emissions in an angular-ordered parton shower.
NLO Monte Carlo predictions for heavy-quark production at the LHC: pp collisions in ALICE
Klasen, M; Kovarik, K; Kramer, G; Topp, M; Wessels, J
2014-01-01
Next-to-leading order (NLO) QCD predictions for the production of heavy quarks in proton-proton collisions are presented within three different approaches to quark mass, resummation and fragmentation effects. In particular, new NLO and parton shower simulations with POWHEG are performed in the ALICE kinematic regime at three different centre-of-mass energies, including scale and parton density variations, in order to establish a reliable baseline for future detailed studies of heavy-quark suppression in heavy-ion collisions. Very good agreement of POWHEG is found with FONLL, in particular for centrally produced D^0, D^+ and D^*+ mesons and electrons from charm and bottom quark decays, but also with the generally somewhat higher GM-VFNS predictions within the theoretical uncertainties. The latter are dominated by scale rather than quark mass variations. Parton density uncertainties for charm and bottom quark production are computed here with POWHEG for the first time and shown to be dominant in the forward reg...
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
Modeling the Collision with Friction of Rigid Bodies
Zabuga, A. G.
2016-09-01
Different models of a perfectly inelastic collision of rigid bodies in plane motion are compared. Formulas for the impact impulses are derived for the Kane-Levinson-Whittaker model based on the kinematic restitution factor, the Routh model based on the kinetic restitution factor, and the Stronge model based on the energy restitution factor. It is shown that these formulas coincide if the collision of rough rigid bodies in plane motion is perfectly inelastic.
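For the frictionless central-impact special case the kinematic (Newton), kinetic (Poisson), and energetic (Stronge) restitution definitions all give the same normal impulse; a minimal sketch under that assumption, with illustrative masses and speeds:

```python
def normal_impulse(m1, m2, v1, v2, e):
    """Normal impact impulse P = (1+e) * m_eff * (approach speed) for a
    frictionless central impact, where m_eff is the reduced mass.
    Assumes v1 > v2 so the bodies are approaching."""
    m_eff = m1 * m2 / (m1 + m2)
    return (1.0 + e) * m_eff * (v1 - v2)

P = normal_impulse(m1=2.0, m2=1.0, v1=3.0, v2=0.0, e=0.5)
# Apply the impulse to each body; separation speed should be e * approach.
v1p = 3.0 - P / 2.0
v2p = 0.0 + P / 1.0
print(P, v2p - v1p)  # 3.0 1.5
```

With friction and non-central geometry the three definitions generally diverge, which is the regime the compared models address.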
Kong, Linghan; Wang, Weizong; Murphy, Anthony B.; Xia, Guangqing
2017-04-01
Microdischarges are an important type of plasma discharge that possess several unique characteristics, such as the presence of a stable glow discharge, high plasma density and intense excimer radiation, leading to several potential applications. The intense and controllable gas heating within the extremely small dimensions of microdischarges has been exploited in micro-thruster technologies by incorporating a micro-nozzle to generate the thrust. This kind of micro-thruster has a significantly improved specific impulse performance compared to conventional cold gas thrusters, and can meet the requirements arising from the emerging development and application of micro-spacecraft. In this paper, we performed a self-consistent 2D particle-in-cell simulation, with a Monte Carlo collision model, of a microdischarge operating in a prototype micro-plasma thruster with a hollow cylinder geometry and a divergent micro-nozzle. The model takes into account the thermionic electron emission including the Schottky effect, the secondary electron emission due to cathode bombardment by the plasma ions, several different collision processes, and a non-uniform argon background gas density in the cathode–anode gap. Results in the high-pressure (several hundreds of Torr), high-current (mA) operating regime showing the behavior of the plasma density, potential distribution, and energy flux towards the hollow cathode and anode are presented and discussed. In addition, the results of simulations showing the effect of different argon gas pressures, cathode material work function and discharge voltage on the operation of the microdischarge thruster are presented. Our calculated properties are compared with experimental data under similar conditions and qualitative and quantitative agreements are reached.
A generalized hard-sphere model for Monte Carlo simulation
Hassan, H. A.; Hash, David B.
1993-01-01
A new molecular model, called the generalized hard-sphere, or GHS model, is introduced. This model contains, as a special case, the variable hard-sphere model of Bird (1981) and is capable of reproducing all of the analytic viscosity coefficients available in the literature that are derived for a variety of interaction potentials incorporating attraction and repulsion. In addition, a new procedure for determining interaction potentials in a gas mixture is outlined. Expressions needed for implementing the new model in the direct simulation Monte Carlo methods are derived. This development makes it possible to employ interaction models that have the same level of complexity as used in Navier-Stokes calculations.
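The variable-hard-sphere special case mentioned above can be sketched as a speed-dependent cross section; the exponent convention (σ ~ g^(1-2ω), giving viscosity μ ~ T^ω) is the standard DSMC one, and all numerical values are illustrative:

```python
def vhs_cross_section(g, sigma_ref, g_ref, omega):
    """Variable-hard-sphere total cross section as a function of relative
    speed g: sigma = sigma_ref * (g_ref / g)^(2*omega - 1).
    omega = 0.5 recovers the constant hard-sphere cross section."""
    return sigma_ref * (g_ref / g) ** (2.0 * omega - 1.0)

s_hs = vhs_cross_section(g=1000.0, sigma_ref=1e-19, g_ref=500.0, omega=0.5)
s_vhs = vhs_cross_section(g=1000.0, sigma_ref=1e-19, g_ref=500.0, omega=0.75)
```

The generalized hard-sphere model of the abstract extends this single power law so that attractive as well as repulsive parts of the potential can be represented.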
The OH distribution in cometary atmospheres - A collisional Monte Carlo model for heavy species
Combi, Michael R.; Bos, Brent J.; Smyth, William H.
1993-01-01
The study presents an extension of the cometary atmosphere Monte Carlo particle trajectory model formalism which makes it physically correct for heavy species yet computationally tractable. The derivation accounts for the collision path and scattering redirection of a heavy radical traveling through a fluid coma with a given radial distribution in outflow speed and temperature. The revised model verifies that the fast-H atom approximations used in earlier work are valid, and it is applied to a case where the heavy-radical formalism is necessary: the OH distribution. It is found that a steeper variation of water production rate with heliocentric distance is required for a water coma consistent with the velocity-resolved observations of Comet P/Halley.
Hydrodynamical Models of Gas Cloud - Galaxy Collisions
Franklin, M.; Dinge, D.; Jones, T.; Benjamin, B.
1999-05-01
Clouds of neutral hydrogen falling toward the Galactic plane with a speed of about 100 km/s or more are among those considered to be "high velocity clouds" (HVCs). As HVCs are often observed approaching the midplane, the collision of such clouds with the gaseous disk of the Galaxy has been proposed as a precursor event to the phenomena known as "supershells" and as a catalyst to star formation. While many previous analytic calculations have assumed that ram pressure of the resisting medium was negligible, and a ballistic approximation was valid, observations showing a correlation between speed and increased height above the plane, the opposite of what is expected for free fall, suggest otherwise. Benjamin & Danly suggested in 1997 that clouds falling at terminal velocity provide a simple explanation for the observed velocity distribution. In this work, numerical models are used to test the above hypotheses with clouds falling through a more modern model of the interstellar medium than that used in the seminal work by Tenorio-Tagle et al. (TT) in 1987. With the addition of more dense material to the model background, clouds were still able to form supershell-like remnants, though star formation does not appear to be triggered. Further, though agreement was not perfect, the terminal velocity model was found to be a better approximation for these clouds' fall than the ballistic case. Cooling was a physical process included in TT's work which was not included here, but was found to be non-negligible. Simulations which include a cooling algorithm must be done to confirm these results. This work was supported in part by NSF grant AST96-19438.
JEWEL - a Monte Carlo Model for Jet Quenching
Zapp, Korinna; Wiedemann, Urs Achim
2009-01-01
The Monte Carlo model JEWEL 1.0 (Jet Evolution With Energy Loss) simulates parton shower evolution in the presence of a dense QCD medium. In its current form, medium interactions are modelled as elastic scattering based on perturbative matrix elements, together with a simple prescription for medium-induced gluon radiation. The parton shower is interfaced with a hadronisation model. In the absence of medium effects, JEWEL is shown to reproduce jet measurements at LEP. The collisional energy loss is consistent with analytic calculations, but with JEWEL we can go a step further and also characterise jet-induced modifications of the medium. Elastic and inelastic medium interactions are shown to lead to distinctive modifications of the jet fragmentation pattern, which should allow one to distinguish experimentally between collisional and radiative energy loss mechanisms. In these proceedings the main JEWEL results are summarised and a Monte Carlo algorithm is outlined that allows one to include the Landau-Pomeranchuk-Migdal effect i...
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas
2012-11-22
We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
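The Monte Carlo-Euler idea can be illustrated on a scalar stand-in for the infinite-dimensional HJM dynamics: an Euler-Maruyama weak approximation of E[S_T] for geometric Brownian motion, whose exact value S0*exp(rT) is known. Parameters are illustrative, not from the paper:

```python
import numpy as np

def euler_weak_mean(S0=100.0, r=0.05, sigma=0.2, T=1.0,
                    steps=50, paths=100_000, seed=7):
    """Euler-Maruyama weak approximation of E[S_T] for GBM
    dS = r*S dt + sigma*S dW; the weak error is O(dt) plus the
    O(1/sqrt(paths)) statistical error discussed in the abstract."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    S = np.full(paths, S0)
    for _ in range(steps):
        S = S + S * (r * dt + sigma * rng.normal(0.0, np.sqrt(dt), paths))
    return float(S.mean())

mean_ST = euler_weak_mean()  # exact answer: 100 * exp(0.05) ~ 105.13
```

Separating the time-discretization bias from the finite-sampling error, as done here by comparing against the known mean, is exactly the decomposition the paper's error estimates formalize for HJM.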
Model investigation of non-thermal phase transition in high energy collisions
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
The non-thermal phase transition in high energy collisions is studied in detail in the framework of the random cascade model. The relation between the characteristic parameter λq of the phase transition and the rank q of the moment is obtained using Monte Carlo simulation, and the existence of two phases in self-similar cascading multiparticle systems is shown. The dependence of the critical point qc of the phase transition on the fluctuation parameter α is obtained and compared with the experimental results from NA22. The same study is also carried out by analytical calculation under the central limit approximation, and the range of validity of this approximation is discussed.
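A random cascade of the kind the abstract refers to can be sketched with the binary p-model, one common realization: each cell splits in two and the density is multiplied by 2p on a randomly chosen side and 2(1-p) on the other, so the mean is preserved while the normalized moments grow with rank q. The parameter p and depth are illustrative:

```python
import numpy as np

def random_cascade(levels=10, p=0.7, seed=3):
    """Binary multiplicative random cascade (p-model)."""
    rng = np.random.default_rng(seed)
    eps = np.array([1.0])
    for _ in range(levels):
        left = rng.random(eps.size) < 0.5          # which side gets the big weight
        w_left = np.where(left, 2.0 * p, 2.0 * (1.0 - p))
        eps = np.column_stack((eps * w_left, eps * (2.0 - w_left))).ravel()
    return eps

eps = random_cascade()
# Mean is preserved exactly, so normalized moments equal raw moments here.
c2, c3, c4 = (float(np.mean(eps**q)) for q in (2, 3, 4))
```

The growth of these moments with q (intermittency) is the raw material from which the λq phase-transition analysis of the abstract is built.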
Coulomb Collision for Plasma Simulations: Modelling and Numerical Methods
Geiser, Juergen
2016-09-01
We are motivated by the modelling of weakly ionized plasma applications. The model incorporates explicit velocity-dependent small-angle Coulomb collision terms into a Fokker-Planck equation. The collisions involve so-called test and field particles, which are scattered stochastically according to a Langevin equation. Since the transport part is described by kinetic equations while the collision part is described by Langevin equations, we present a splitting of these models. Such a splitting allows us to combine the different modelling parts: for the transport part we can apply particle models and solve them with particle methods, e.g. PIC, while for the collision part we can apply the explicit Coulomb collision model, e.g. with fast stochastic differential equation solvers. In addition, we apply multiscale approaches to the different components of the transport part, e.g. the different time scales of an explicit electric field, as well as model-order reduction approaches. We present first numerical results for particle simulations with the deterministic-stochastic splitting schemes. These ideas can be applied to sputtering problems or plasma applications with dominant Coulomb collisions.
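The transport-collision splitting can be sketched with a free-streaming step followed by a Langevin velocity kick of Ornstein-Uhlenbeck form; the drag and diffusion coefficients below are arbitrary illustrative values, not coefficients from the paper:

```python
import numpy as np

def langevin_collisions(n=20_000, gamma=2.0, D=1.0, dt=0.01,
                        steps=1000, seed=11):
    """Deterministic-stochastic splitting: stream positions, then apply a
    Langevin collision kick dv = -gamma*v*dt + sqrt(2*D*dt)*xi, a toy
    stand-in for small-angle Coulomb scattering off field particles."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    v = rng.normal(0.0, 1.0, n)
    for _ in range(steps):
        x += v * dt                                            # transport step
        v += -gamma * v * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n)  # collision step
    return x, v

x, v = langevin_collisions()
```

The stationary velocity variance relaxes to D/gamma, the fluctuation-dissipation balance, which gives a quick correctness check on the collision operator.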
Tseung, H Wan Chan; Beltran, C
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...
Modelling of a collision between two smartphones
de Jesus, V. L. B.; Sasaki, D. G. G.
2016-09-01
In the predominant approach in physics textbooks, the collision between particles is treated as a black box, where no physical quantity can be measured. This approach becomes even more evident in experimental classes where collisions are the simplest and most common way of applying the theorem of conservation of linear momentum in the asymptotic behavior. In this paper we develop and analyse an experiment on collisions using only two smartphones. The experimental setup is amazingly simple; the two devices are aligned on a horizontal table of lacquered wood, in order to slide more easily. At the edge of one of them a piece of common sponge is glued using double-sided tape. By using a free smartphone application, the values generated by the accelerometer of the two devices in full motion are measured and tabulated. Through numerical iteration, the speed graphs of the smartphones before, during, and after the collision are obtained. The main conclusions were: (i) the demonstration of the feasibility of using smartphones as an alternative to air tracks and electronic sensors employed in a teaching lab, (ii) the possibility of investigating the collision itself, its characteristics and effects; this is the great advantage of the use of smartphones over traditional experiments, (iii) the compatibility of the results with the impulse-momentum theorem, within the margin of uncertainty.
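The "numerical iteration" from accelerometer samples to speed curves can be sketched as trapezoidal integration; the paper does not specify its exact scheme, so this is an assumed implementation with a synthetic signal:

```python
def integrate_velocity(a, dt, v0=0.0):
    """Trapezoidal integration of evenly sampled accelerometer
    readings a (m/s^2) at interval dt (s) into a speed curve."""
    v = [v0]
    for k in range(1, len(a)):
        v.append(v[-1] + 0.5 * (a[k - 1] + a[k]) * dt)
    return v

# Synthetic check: constant 2 m/s^2 for 1 s sampled at 100 Hz.
a = [2.0] * 101
v = integrate_velocity(a, dt=0.01)
print(v[-1])  # 2.0
```

In practice the raw phone data would also need bias removal and axis alignment before integration, which is where the experimental care described in the abstract comes in.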
Model of Centauro and strangelet production in heavy ion collisions
Angelis, Aris L S; Kharlov, Yu V; Korotkikh, V L; Mavromanolakis, G; Panagiotou, A D; Sadovsky, S A; Kharlov, Yu.V.
2004-01-01
We discuss the phenomenological model of Centauro event production in relativistic nucleus-nucleus collisions. This model makes quantitative predictions for kinematic observables, baryon number and mass of the Centauro fireball and its decay products. Centauros decay mainly to nucleons, strange hyperons and possibly strangelets. Simulations of Centauro events for the CASTOR detector in Pb-Pb collisions at LHC energies are performed. The signatures of these events are discussed in detail.
Monte Carlo modelling of positron transport in real world applications
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
Zhou, Wen; Guo, Heng; Jiang, Wei; Li, He-Ping; Li, Zeng-Yao; Lapenta, Giovanni
2016-10-01
A sheath is the transition region from plasma to a solid surface, which also plays a critical role in determining the behaviors of many lab and industrial plasmas. However, the cathode sheath properties in arc discharges are not well understood yet due to its multi-scale and kinetic features. In this letter, we have adopted an implicit particle-in-cell Monte Carlo collision (PIC-MCC) method to study the cathode sheath in an atmospheric arc discharge plasma. The cathode sheath thickness, number densities and averaged energies of electrons and ions, the electric field distribution, as well as the spatially averaged electron energy probability function (EEPF), are predicted self-consistently by using this newly developed kinetic model. It is also shown that the thermionic emission at the hot cathode surface is the dominant electron emission process to sustain the arc discharges, while the effects from secondary and field electron emissions are negligible. The present results verify the previous conjectures and experimental observations.
A Monte Carlo Model of Light Propagation in Nontransparent Tissue
Institute of Scientific and Technical Information of China (English)
姚建铨; 朱水泉; 胡海峰; 王瑞康
2004-01-01
To sharpen the imaging of structures, it is vital to develop a convenient and efficient quantitative algorithm for optical coherence tomography (OCT) sampling. In this paper a new Monte Carlo model is set up, and how light propagates in bio-tissue is analysed by means of mathematical and physical equations. We study how the intensities of Class 1 and Class 2 light at different wavelengths change with permeation depth, how the Class 1 (signal) light intensity changes with probing depth, and how the angularly resolved diffuse reflectance and diffuse transmittance change with the exit angle. The results show that the Monte Carlo simulation results are consistent with the theoretical data.
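The core of such a photon-transport Monte Carlo can be sketched as a weighted random walk in a slab: exponential free paths, absorption handled through the single-scattering albedo, and isotropic redirection. This is a generic sketch, not the paper's Class 1/Class 2 bookkeeping, and the optical properties are illustrative:

```python
import math
import random

def photon_slab(n=20_000, mu_a=0.1, mu_s=10.0, d=0.1, seed=5):
    """Minimal Monte Carlo photon transport in a slab of thickness d:
    returns (diffuse reflectance, transmittance, absorbed fraction)."""
    random.seed(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    R = T = A = 0.0
    for _ in range(n):
        z, uz, w = 0.0, 1.0, 1.0              # depth, direction cosine, weight
        while True:
            step = -math.log(1.0 - random.random()) / mu_t  # exponential free path
            z += uz * step
            if z < 0.0:
                R += w
                break
            if z > d:
                T += w
                break
            A += w * (1.0 - albedo)           # deposit the absorbed fraction
            w *= albedo
            uz = 2.0 * random.random() - 1.0  # isotropic scattering (polar cosine)
            if w < 1e-4:                      # crude cutoff instead of Russian roulette
                A += w
                break
    return R / n, T / n, A / n

R, T, A = photon_slab()
```

Energy conservation (R + T + A = 1) is the standard sanity check on the weight bookkeeping.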
Monte Carlo Numerical Models for Nuclear Logging Applications
Directory of Open Access Journals (Sweden)
Fusheng Li
2012-06-01
Nuclear logging is one of the most important logging services provided by oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation; other quantities, such as formation lithology/mineralogy, are obtained with more complex nuclear logging tools. Some parameters can be measured with neutron logging tools, while others can only be measured with a gamma ray tool. Understanding the response of nuclear logging tools requires neutron transport/diffusion theory and photon diffusion theory; unfortunately, for most cases there are no analytical solutions when complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and in the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems in forward simulation problems. In this case, the system parameters (geometry, materials, nuclear sources, etc.) are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons, and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and nuclear cross-section data (interaction probabilities). The energies deposited by particles entering the detectors are then recorded and tallied, and the tool responses for that scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments in Monte Carlo models are also reviewed, and a variety of examples are presented to illustrate their uses.
Avian collision risk models for wind energy impact assessments
Energy Technology Data Exchange (ETDEWEB)
Masden, E.A., E-mail: elizabeth.masden@uhi.ac.uk [Environmental Research Institute, North Highland College-UHI, University of the Highlands and Islands, Ormlie Road, Thurso, Caithness KW14 7EE (United Kingdom); Cook, A.S.C.P. [British Trust for Ornithology, The Nunnery, Thetford IP24 2PU (United Kingdom)
2016-01-15
With the increasing global development of wind energy, collision risk models (CRMs) are routinely used to assess the potential impacts of wind turbines on birds. We reviewed and compared the avian collision risk models currently available in the scientific literature, exploring aspects such as the calculation of a collision probability, the inclusion of stationary components (e.g. the tower), the angle of approach, and uncertainty. Ten models were cited in the literature; all included the probability of a single bird colliding with a wind turbine during passage through the rotor-swept area, and the majority included a measure of the number of birds at risk. Seven of the ten models calculated the probability of collision, whilst the remainder used a constant. We identified four approaches to calculating the probability of collision, which were reused across models. Six of the ten models were deterministic, including those most frequently used in the UK; only four incorporated variation or uncertainty in some way, the most recent using Bayesian methods. Despite their appeal, CRMs have limitations: they can be ‘data hungry’ and they assume much about bird movement and behaviour. As data become available, these assumptions should be tested to ensure that CRMs adequately answer the questions posed by the wind energy sector. - Highlights: • We highlight ten models available to assess avian collision risk. • Only four of the models include variability or uncertainty. • Collision risk models have limitations and can be ‘data hungry’. • It is vital that the most appropriate model is used for a given task.
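The arithmetic shared by most of the reviewed CRMs is a simple product: birds passing through the rotor-swept area, times the single-transit collision probability, times the fraction not avoiding the turbine. A minimal sketch of that structure (illustrative only; this is not the Band model or any specific reviewed model, and the test values are hypothetical):

```python
def expected_collisions(n_transits, p_collision, avoidance_rate):
    """Generic CRM structure: expected collisions per period =
    (number of rotor transits) x (single-transit collision probability)
    x (1 - avoidance rate). All inputs are per the same time period."""
    return n_transits * p_collision * (1.0 - avoidance_rate)
```

The strong sensitivity to the avoidance rate, which is typically the least well measured input, is one reason the review emphasizes uncertainty handling.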
Collision-free speed model for pedestrian dynamics
Tordeux, Antoine; Seyfried, Armin
2015-01-01
We propose in this paper a minimal speed-based pedestrian model for which the particle dynamics are intrinsically collision-free. The speed model is an optimal velocity function depending on the agent length (i.e., particle diameter), maximum speed, and time gap parameters. The direction model is a weighted sum of exponential repulsions from the neighbours, calibrated by the repulsion rate and distance. The model's main features, such as the reproduction of empirical phenomena, are analysed by simulation. We point out that phenomena of self-organisation observable in force-based models and in field studies can be reproduced by the collision-free model with low computational effort.
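A common form of such an optimal-velocity function (assumed here for illustration; the paper's exact expression may differ) caps the speed so the gap to the predecessor, minus one agent length, is never closed in less than one time gap, which is what makes the dynamics collision-free:

```python
def optimal_speed(spacing, length, v_max, time_gap):
    """Collision-free optimal-velocity function (sketch): speed toward the
    predecessor is limited by the free gap (spacing - length) divided by
    the time gap, and never exceeds v_max."""
    return min(v_max, max(0.0, (spacing - length) / time_gap))
```

When the gap is smaller than one agent length the speed is zero, so overlaps cannot occur regardless of the time step.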
Monte Carlo model of neutral-particle transport in diverted plasmas
Energy Technology Data Exchange (ETDEWEB)
Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.
1981-11-01
The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
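Russian roulette, one of the variance-reduction techniques mentioned above, terminates low-weight particles probabilistically while boosting survivors so that the expected weight is conserved. A generic sketch (the threshold and survival probability are arbitrary illustrative values, not those of the code described):

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random.random):
    """Russian roulette step: particles whose statistical weight falls
    below `threshold` are killed with probability 1 - survival; survivors
    have their weight divided by `survival` to keep the estimator unbiased."""
    if weight >= threshold:
        return weight              # heavy enough: leave untouched
    if rng() < survival:
        return weight / survival   # survivor carries the killed weight
    return 0.0                     # terminated
```

The key property is unbiasedness: averaged over many particles, the post-roulette weight equals the input weight, while the number of tracked low-weight histories drops sharply.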
Molecular dynamics and binary collision modeling of the primary damage state of collision cascades
DEFF Research Database (Denmark)
Heinisch, H.L.; Singh, B.N.
1992-01-01
Quantitative information on defect production in cascades in copper obtained from recent molecular dynamics simulations is compared to defect production information determined earlier with a model based on the binary collision approximation (BCA). The total numbers of residual defects, the fracti...
Two models with rescattering for high energy heavy ion collisions
Bøggild, H.; Hansen, Ole; Humanic, T. J.
2006-12-01
The effects of hadronic rescattering in high energy relativistic Au+Au collisions are studied using two very different models to describe the early stages of the collision. One model is based on a hadronic thermal picture and the other on a superposition of parton-parton collisions. Operationally, the output hadrons from each of these models are used as input to a hadronic rescattering calculation. The results of the rescattering calculations from each model are then compared with rapidity and transverse momentum distributions from the BNL Relativistic Heavy Ion Collider BRAHMS experiment. In spite of the different points of view of the two models of the initial stage, after rescattering, the observed differences between the models are mostly “washed out” and both models give observables that agree roughly with each other and with experimental data.
Monte Carlo simulation of classical spin models with chaotic billiards.
Suzuki, Hideyuki
2013-11-01
It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
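The random-number baseline that the billiard dynamics is compared against is the standard Metropolis sampler for the Ising model. A minimal sketch (lattice size, temperature, and sweep counts below are arbitrary illustrative choices, not the paper's):

```python
import math, random

def metropolis_ising(L, beta, sweeps, seed=0):
    """Standard random Metropolis sampling of the 2D Ising model on an
    L x L periodic lattice at inverse temperature beta. Returns the mean
    |magnetization| per spin over the second half of the sweeps."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]          # start fully ordered
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):                  # one sweep = L*L attempts
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nn            # energy cost of the flip
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]        # accept the flip
        mags.append(abs(sum(map(sum, spin))) / (L * L))
    return sum(mags[sweeps // 2:]) / (sweeps - sweeps // 2)
```

Below the critical temperature (large beta) the sampler stays ordered with |m| near 1; well above it (small beta) the magnetization averages close to zero, which is the qualitative behavior the deterministic billiard dynamics must reproduce.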
Weibull model of Multiplicity Distribution in hadron-hadron collisions
Dash, Sadhana
2014-01-01
We introduce the Weibull distribution as a simple parametrization of charged particle multiplicities in hadron-hadron collisions at all available energies, ranging from ISR energies to the most recent LHC energies. In statistics, the Weibull distribution has wide applicability to natural processes involving fragmentation. This gives a natural connection to the available state-of-the-art models for multi-particle production in hadron-hadron collisions, which involve QCD parton fragmentation and hadronization.
Binary collisions in Popovici’s photogravitational model
Directory of Open Access Journals (Sweden)
Mioc V.
2002-01-01
The dynamics of bodies under the combined action of gravitational attraction and a radiative repelling force has large and deep implications in astronomy. In the 1920s, the Romanian astronomer Constantin Popovici proposed a modified photogravitational law (also considered by other scientists). This paper deals with the collisions of the two-body problem associated with Popovici’s model. Resorting to McGehee-type transformations of the second kind, we obtain regular equations of motion and define the collision manifold. The flow on this boundary manifold is wholly described. This allows us to point out some important qualitative features of the collisional motion: the existence of the black-hole effect, the gradient-likeness of the flow on the collision manifold, and the regularizability of collisions under certain conditions. Some questions, arising from the comparison of Levi-Civita’s regularizing transformations with McGehee’s, are formulated.
Vaporization wave model for ion-ion central collisions
Energy Technology Data Exchange (ETDEWEB)
Baldo, M.; Giansiracusa, G.; Piccitto, G. (Catania Univ. (Italy). Ist. di Fisica; Istituto Nazionale di Fisica Nucleare, Catania (Italy))
1983-09-24
We propose a simple model for central or nearly central ion-ion collisions at intermediate energies. It is based on the "vaporization wave model" developed by Bennett for macroscopic objects. The model offers a simple explanation of the observed deuteron/proton abundance ratio as a function of the beam energy.
Vaporization wave model for ion-ion central collisions
Energy Technology Data Exchange (ETDEWEB)
Baldo, M.; Giansiracusa, G.; Piccitto, G. (Catania Univ. (Italy). Ist. di Fisica)
1983-09-24
A simple model for central or nearly central ion-ion collisions at intermediate energies is proposed. It is based on the "vaporization wave model" developed by Bennett for macroscopic objects. The model offers a simple explanation of the observed deuteron/proton abundance ratio as a function of the beam energy.
Spectra of produced particles at CERN SPS heavy-ion collisions from a parton-cascade model
Srivastava, D K; Srivastava, Dinesh Kumar; Geiger, Klaus
1998-01-01
We evaluate the spectra of produced particles (pions, kaons, antiprotons) from partonic cascades which may develop in the wake of heavy-ion collisions at CERN SPS energies and which may hadronize by the formation of clusters that decay into hadrons. Using the experimental data obtained by the NA35 and NA44 collaborations for S+S and Pb+Pb collisions, we conclude that the Monte Carlo implementation of the recently developed parton-cascade/cluster-hadronization model provides a reasonable description of the distributions of the particles produced in such collisions. While the rapidity distribution of the mid-rapidity protons is described reasonably well, their transverse momentum distribution falls too rapidly compared to the experimental values, implying a significant effect of final-state scattering among the produced hadrons, which has been neglected so far.
Dynamical Monte Carlo method for stochastic epidemic models
Aiello, O E
2002-01-01
A new approach to dynamical Monte Carlo methods is introduced to simulate Markovian processes. We apply this approach to formulate and study an epidemic generalized SIRS model. The results are in excellent agreement with the fourth-order Runge-Kutta method in the region of deterministic solutions. When local stochastic interactions are introduced, the Runge-Kutta method is no longer applicable, and we solve and check the model self-consistently with a stochastic version of the Euler method. The results are also analyzed in terms of the herd-immunity concept.
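One standard dynamical Monte Carlo scheme for a well-mixed SIRS model is Gillespie's algorithm: draw an exponential waiting time from the total event rate, then pick which event fired in proportion to its rate. A sketch under those assumptions (rates and population sizes in the test are illustrative, not from the paper):

```python
import random

def sirs_gillespie(S, I, R, beta, gamma, xi, t_max, seed=0):
    """Gillespie-style stochastic simulation of a well-mixed SIRS model:
    S + I -> 2I at rate beta*S*I/N, I -> R at rate gamma*I,
    R -> S at rate xi*R. Returns (S, I, R) at time t_max or extinction."""
    rng = random.Random(seed)
    N, t = S + I + R, 0.0
    while t < t_max and I > 0:
        r_inf = beta * S * I / N       # infection rate
        r_rec = gamma * I              # recovery rate
        r_loss = xi * R                # waning-immunity rate
        total = r_inf + r_rec + r_loss
        t += rng.expovariate(total)    # exponential waiting time
        u = rng.random() * total       # choose which event fired
        if u < r_inf:
            S, I = S - 1, I + 1
        elif u < r_inf + r_rec:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
    return S, I, R
```

Averaged over many runs, the trajectories approach the deterministic (e.g. Runge-Kutta) solution for large populations, while single runs exhibit the stochastic fluctuations the deterministic method cannot capture.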
Monte Carlo Shell Model for ab initio nuclear structure
Directory of Open Access Journals (Sweden)
Abe T.
2014-03-01
We report on our recent application of the Monte Carlo Shell Model to no-core calculations. At the initial stage of the application, we have performed benchmark calculations in the p-shell region. The results are compared with those of the Full Configuration Interaction and No-Core Full Configuration methods and are found to be consistent with each other within the quoted uncertainties, when these could be quantified. The preliminary results at Nshell = 5 reveal the onset of a systematic convergence pattern.
Novel Extrapolation Method in the Monte Carlo Shell Model
Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio
2010-01-01
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.
Monte Carlo Simulation of Kinesin Movement with a Lattice Model
Institute of Scientific and Technical Information of China (English)
WANG Hong; DOU Shuo-Xing; WANG Peng-Ye
2005-01-01
Kinesin is a processive double-headed molecular motor that moves along a microtubule by taking steps of about 8 nm. It generally hydrolyzes one ATP molecule for each forward step. The processive movement of kinesin molecular motors is numerically simulated with a lattice model. The motors are treated as Brownian particles, and the ATPase processes of both heads are taken into account. The Monte Carlo simulation results agree well with recent experimental observations, especially on the dependence of velocity on ATP and ADP concentrations.
3D Monte Carlo radiation transfer modelling of photodynamic therapy
Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry
2015-06-01
The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the usage of daylight as a potential light source for PDT where effective treatment depths of about 2 mm can be achieved.
Gauge Potts model with generalized action: A Monte Carlo analysis
Energy Technology Data Exchange (ETDEWEB)
Fanchiotti, H.; Canal, C.A.G.; Sciutto, S.J.
1985-08-15
Results of a Monte Carlo calculation on the q-state gauge Potts model in d dimensions with a generalized action involving planar 1×1 (plaquette) and 2×1 (fenêtre) loop interactions are reported. For d = 3 and q = 2, first- and second-order phase transitions are detected. The phase diagram for q = 3 presents only first-order phase transitions. For d = 2, a comparison with analytical results is made. Here also, the behavior of the numerical simulation in the vicinity of a second-order transition is analyzed.
A viscous blast-wave model for relativistic heavy-ion collisions
Jaiswal, Amaresh
2015-01-01
Using a viscosity-based survival scale for geometrical perturbations formed in the early stages of relativistic heavy-ion collisions, we model the radial flow velocity during freeze-out. Subsequently, we employ the Cooper-Frye freeze-out prescription, with first-order viscous corrections to the distribution function, to obtain the transverse momentum distribution of particle yields and flow harmonics. For the initial eccentricities, we use the results of the Monte Carlo Glauber model. We fix the blast-wave model parameters by fitting the transverse momentum spectra of identified particles at the Large Hadron Collider (LHC) and demonstrate that this leads to a fairly good agreement with the transverse momentum distributions of elliptic and triangular flow for various centralities. Within this viscous blast-wave model, we estimate the shear viscosity to entropy density ratio $\eta/s\simeq 0.24$ at the LHC.
Evolutionary Sequential Monte Carlo Samplers for Change-Point Models
Directory of Open Access Journals (Sweden)
Arnaud Dufays
2016-03-01
Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. SMC algorithms not only draw posterior distributions of static or dynamic parameters but also provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines off-line tempered SMC inference with on-line SMC inference to draw realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated, and well suited for multi-modal distributions. As this update relies on the wide heuristic-optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
Monte Carlo model for electron degradation in methane
Bhardwaj, Anil
2015-01-01
We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of the various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating the analytical yield spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of the various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. The efficiency calculation showed that ionization is the dominant process at energies >50 eV, where it consumes more than 50% of the incident electron energy. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...
Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models
Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti
2016-10-01
A hidden Markov model (HMM) is a mixture model whose mixing distribution is a finite-state Markov chain. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. HMMs are useful but have some limitations; by using the Mixture of Dirichlet Processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome them. We conduct a simulation study using MCMC methods to investigate the performance of this model.
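Whatever MCMC scheme is used, each proposed parameter value must be scored against the data, and for an HMM the standard way to compute that likelihood is the forward algorithm. A generic sketch (this is the plain HMM recursion, not the MDPHMM of the abstract; the test values are made up):

```python
def hmm_forward(pi, A, B, obs):
    """Forward-algorithm likelihood p(obs | pi, A, B) for a discrete HMM.

    pi: initial state probabilities, pi[s]
    A:  transition matrix, A[r][s] = p(state s | state r)
    B:  emission matrix, B[s][o] = p(symbol o | state s)
    obs: sequence of observed symbol indices."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]   # initialization
    for o in obs[1:]:                                   # recursion
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)                                   # termination
```

This sums over all hidden-state paths in O(T n^2) time instead of the exponential cost of explicit enumeration, which is what makes likelihood-based MCMC over HMM parameters practical.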
Preon Model and a Possible New Physics in ep Collisions
Senju, H.
1993-03-01
The properties of predicted new particles in a preon-subpreon model are discussed. The model contains several new particles which could be detected in the near future. It is shown that ep colliders are especially adequate to study properties of a few of them. Production cross sections and signatures in ep collisions are discussed.
Preon model and a possible new physics in ep collisions
Energy Technology Data Exchange (ETDEWEB)
Senju, Hirofumi (Nagoya Municipal Women's Coll. (Japan))
1993-03-01
The properties of predicted new particles in a preon-subpreon model are discussed. The model contains several new particles which could be detected in the near future. It is shown that ep colliders are especially adequate to study properties of a few of them. Production cross sections and signatures in ep collisions are discussed. (author).
Popov, Dmitry; Hofmann, Werner
One of the general problems of modern high energy physics is the comparison of experimental data (measurements of observables in high energy collisions) with theory, which is represented by Monte Carlo simulations. This work is dedicated to the further development of the tuning methodology and to the implementation of software tools for tuning the PYTHIA Monte Carlo event generator for the LHCb experiment. The aim of this thesis is to create a fast analytical model of the Monte Carlo event generator and then to fit the model to the experimental data recorded by the LHCb detector, accounting for statistical and computational uncertainties and estimating the best values of the tuned parameters by simultaneously tuning a group of phenomenological parameters in a many-dimensional parameter space. The fitting algorithm is interfaced to the LHCb software framework, which models the response of the LHCb detector. Typically, the tunings are done against measurements which are corrected for detector effects. These correctio...
Burt, Jonathan M.; Josyula, Eswar
2016-11-01
A modification to DSMC collision routines is proposed to eliminate or reduce collision separation error in numerical transport coefficients. This modification follows from earlier DSMC error analysis based on Green-Kubo theory, and is currently limited to the case of a hard sphere monatomic simple gas simulation with approximately isotropic collision separation statistics. Further adjustments to the DSMC collision algorithm are proposed to reduce collision separation error associated with a finite time step interval. It is shown analytically that, for random collision partner selection at the small time step limit with a cell size equal to the mean free path, collision separation error in viscosity is reduced by approximately 37% while thermal conductivity error is completely removed. In a demonstration case involving hypersonic flow over a cylinder, the proposed modification is found to allow for large error reductions in both the total force and heat transfer rate. Although this modification is not intended as a general solution to the problem of DSMC collision separation error, it is hoped that the concept demonstrated here of utilizing Green-Kubo analysis for DSMC error reduction will in the future find more widespread applicability.
Zhu, Caigang; Liu, Quan
2012-01-01
We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
Monte Carlo modelling of Schottky diode for rectenna simulation
Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.
2017-09-01
Before designing a detector circuit, the electrical parameters extraction of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact such as image force effect or tunneling. The weight of tunneling and thermionic current is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the current saturation with bias. Harmonic Balance (HB) simulation on a rectifier circuit within Advanced Design System (ADS) software shows that considering non-linear ideality factor and saturation current for the electrical model of the Schottky diode does not seem essential. Indeed, bias independent values extracted in forward regime on I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.
Examining of the Collision Breakup Model between Geostationary Orbit Objects
Hata, Hidehiro; Hanada, Toshiya; Akahoshi, Yasuhiro; Yasaka, Tetsuo; Harada, Shoji
This paper examines the applicability of the hypervelocity collision model included in the NASA standard breakup model (2000 revision) to the low-velocity collisions possible in space, especially in the geosynchronous regime. The analytic method used in the standard breakup model is applied to experimental data accumulated through low-velocity impact experiments performed at Kyushu Institute of Technology at velocities of about 300 m/s and 800 m/s. The projectiles and target specimens were aluminum solid balls and aluminum honeycomb sandwich panels with carbon-fiber-reinforced-plastic face sheets, respectively. We have found that a kind of lower boundary exists in the fragment area-to-mass distribution at smaller characteristic lengths. This paper describes the theoretical derivation of this lower boundary, proposes a further modification of the fragment area-to-mass distribution, and concludes that the hypervelocity collision model in the standard breakup model can be applied to possible low-velocity collisions with some modifications.
Weibull model of multiplicity distribution in hadron-hadron collisions
Dash, Sadhana; Nandi, Basanta K.; Sett, Priyanka
2016-06-01
We introduce the use of the Weibull distribution as a simple parametrization of charged particle multiplicities in hadron-hadron collisions at all available energies, ranging from ISR energies to the most recent LHC energies. In statistics, the Weibull distribution has wide applicability to natural processes that involve fragmentation. This provides a natural connection to the available state-of-the-art models for multiparticle production in hadron-hadron collisions, which involve QCD parton fragmentation and hadronization. The Weibull distribution describes the multiplicity data at the most recent LHC energies better than the single negative binomial distribution.
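The parametrization itself is just the two-parameter Weibull density with shape k and scale λ, evaluated at the multiplicity n (treated as continuous for the fit). A minimal sketch, with hypothetical parameter values in the check:

```python
import math

def weibull_pn(n, k, lam):
    """Two-parameter Weibull density, P(n) = (k/lam) (n/lam)^(k-1)
    exp(-(n/lam)^k), used as a parametrization of the charged-particle
    multiplicity distribution; k (shape) and lam (scale) are fitted to data."""
    return (k / lam) * (n / lam) ** (k - 1) * math.exp(-((n / lam) ** k))
```

In a fit, k and λ would be adjusted (e.g. by maximum likelihood or chi-square minimization) at each collision energy; the density integrates to one by construction.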
Monte Carlo autofluorescence modeling of cervical intraepithelial neoplasm progression
Chu, S. C.; Chiang, H. K.; Wu, C. E.; He, S. Y.; Wang, D. Y.
2006-02-01
A Monte Carlo fluorescence model has been developed to estimate the autofluorescence spectra associated with the progression of cervical intraepithelial neoplasia (CIN). We used a double integrating sphere system and a tunable light source (380 to 600 nm) to measure the reflection and transmission spectra of 50 μm thick tissue sections, and used the inverse adding-doubling (IAD) method to estimate the absorption (μa) and scattering (μs) coefficients. Human cervical tissue samples were sliced vertically (longitudinally) by the frozen section method. The results show that the absorption and scattering coefficients of cervical neoplasia are 2-3 times higher than those of normal tissue. We applied the Monte Carlo method to estimate the photon distribution and fluorescence emission in the tissue. By combining the intrinsic fluorescence information (collagen, NADH, and FAD), the anatomical information of the epithelium, CIN, and stroma layers, and the fluorescence escape function, the autofluorescence spectra of CIN at different development stages were obtained. We observed that the progression of CIN results in a gradual decrease of the collagen peak intensity. In addition, the CIN layer forms a barrier that blocks autofluorescence escaping from the stroma layer, owing to the strong extinction (scattering and absorption) of the CIN layer. To our knowledge, this is the first study measuring the CIN optical properties in the visible range; it also successfully demonstrates a fluorescence model for estimating the autofluorescence spectra of cervical tissue associated with CIN progression. This model is very important in assisting CIN diagnosis and treatment in clinical medicine.
Household water use and conservation models using Monte Carlo techniques
Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.
2013-10-01
The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. The model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests that clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofitting faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
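The Monte Carlo end-use sampling step can be sketched as follows; the distributions, their parameters, and the end-use names are placeholders, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_households = 5000

# Illustrative per-capita end-use distributions (gal/person/day). A real
# model would draw these from end-use measurement data, not these numbers.
end_uses = {
    "toilet":         lambda n: rng.lognormal(mean=2.6, sigma=0.4, size=n),
    "shower":         lambda n: rng.lognormal(mean=2.4, sigma=0.5, size=n),
    "faucet":         lambda n: rng.lognormal(mean=2.3, sigma=0.3, size=n),
    "clothes_washer": lambda n: rng.lognormal(mean=2.2, sigma=0.4, size=n),
}
occupancy = rng.integers(1, 6, size=n_households)  # persons per household

# Monte Carlo: sample each end use per household and sum to indoor demand.
per_capita = sum(draw(n_households) for draw in end_uses.values())
household_demand = per_capita * occupancy

print(f"mean indoor demand: {household_demand.mean():.1f} gal/household/day")
```

Summing sampled end uses per household yields a demand distribution for the neighborhood rather than a single point estimate, which is the main advantage of the Monte Carlo formulation.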
Behaviour of ion velocity distributions for a simple collision model
St-Maurice, J.-P.; Schunk, R. W.
1974-01-01
The ion velocity distributions are calculated for a weakly ionized plasma subjected to crossed electric and magnetic fields. An exact solution to Boltzmann's equation is obtained by replacing the Boltzmann collision integral with a simple relaxation model. At altitudes above about 150 km, where the ion collision frequency is much less than the ion cyclotron frequency, the ion distribution takes the shape of a torus in velocity space for electric fields greater than 40 mV/m. This shape persists for one to two hours after application of the electric field. At altitudes where the ion collision and cyclotron frequencies are approximately equal (about 120 km), the ion velocity distribution is shaped like a bean for large electric field strengths. This bean-shaped distribution persists throughout the lifetime of ionospheric electric fields. These highly non-Maxwellian ion velocity distributions may have an appreciable effect on the interpretation of ion temperature measurements.
Monte Carlo grain growth modeling with local temperature gradients
Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.
2017-09-01
This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to account for the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally handle time-varying temperature fields as well. We verify the model and compare the predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as in the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involve time-dependent local temperature gradients.
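A minimal Potts-style grain-growth step with a spatially varying temperature can be sketched as below; the Metropolis acceptance rule and the temperature values are illustrative stand-ins for the paper's site-selection-probability formulation, not its actual scaling.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
grid = rng.integers(0, 50, size=(N, N))  # initial grain labels

def mc_step(grid, T_field, rng):
    """One MC attempt: a random site tries to adopt a random neighbour's
    grain label; the flip is accepted with a Metropolis probability
    evaluated at the site's *local* temperature."""
    n = grid.shape[0]
    i, j = rng.integers(0, n, size=2)
    nbrs = [grid[(i + 1) % n, j], grid[(i - 1) % n, j],
            grid[i, (j + 1) % n], grid[i, (j - 1) % n]]
    new = nbrs[rng.integers(0, 4)]
    # Boundary energy = number of unlike neighbours, before and after.
    dE = sum(v != new for v in nbrs) - sum(v != grid[i, j] for v in nbrs)
    if dE <= 0 or rng.random() < np.exp(-dE / T_field[i, j]):
        grid[i, j] = new

# Linear temperature gradient across the domain (illustrative values).
T_field = np.linspace(0.2, 1.0, N)[None, :].repeat(N, axis=0)

n0 = len(np.unique(grid))
for _ in range(100000):
    mc_step(grid, T_field, rng)
print(f"grains: {n0} -> {len(np.unique(grid))}")
```

Because a site can only adopt an existing neighbour label, the set of grain labels shrinks as coarsening proceeds, faster where the local temperature makes flips more likely.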
Modelling a gamma irradiation process using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2011-07-01
In gamma irradiation services, evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources required to perform dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. These make it possible to predict the dose delivered to a specific product, irradiated in a specific position and during a certain period of time, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
A Monte Carlo Simulation Framework for Testing Cosmological Models
Directory of Open Access Journals (Sweden)
Heymann Y.
2014-10-01
We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.
Monte Carlo modeling of recrystallization processes in α-uranium
Steiner, M. A.; McCabe, R. J.; Garlea, E.; Agnew, S. R.
2017-08-01
Starting with electron backscattered diffraction (EBSD) data obtained from a warm clock-rolled α-uranium deformation microstructure, a Potts Monte Carlo model was used to simulate static site-saturated recrystallization and test which recrystallization nucleation conditions within the microstructure are best validated by experimental observations. The simulations support prior observations that recrystallized nuclei within α-uranium form preferentially on non-twin high-angle grain boundary sites at 450 °C. They also demonstrate, in a new finding, that nucleation along these boundaries occurs only at a highly constrained subset of sites possessing the largest degrees of local deformation. Deformation in the EBSD data can be identified by the Kernel Average Misorientation (KAM), which may be considered as a proxy for the local geometrically necessary dislocation (GND) density.
Monte Carlo Modeling of Crystal Channeling at High Energies
Schoofs, Philippe; Cerutti, Francesco
Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating whether they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of a continuous potential, taking into account the thermal motion of the lattice atoms and using the Molière screening function. The energy of the particle transverse motion determines whether or n...
Accelerating Monte Carlo Markov chains with proxy and error models
Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan
2015-12-01
In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
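The two-stage set-up described here is essentially delayed-acceptance Metropolis-Hastings: the proxy screens each proposal, and the exact model is evaluated only for proposals that pass the first stage. A minimal 1D sketch, with toy proxy and "exact" log-densities standing in for flow simulations:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_exact(x):   # "exact" model: standard normal log-density
    return -0.5 * x * x

def log_proxy(x):   # cheap proxy with a deliberate systematic error
    return -0.5 * x * x + 0.3 * np.sin(x)

def delayed_acceptance(n, step=1.0):
    x, lp_px, lp_ex = 0.0, log_proxy(0.0), log_exact(0.0)
    chain, n_exact = [], 0
    for _ in range(n):
        y = x + step * rng.standard_normal()
        lp_py = log_proxy(y)
        # Stage 1: screen with the proxy only (symmetric proposal).
        if np.log(rng.random()) < lp_py - lp_px:
            # Stage 2: correct with the exact model, so the chain still
            # targets the exact posterior despite the proxy's error.
            lp_ey = log_exact(y)
            n_exact += 1
            if np.log(rng.random()) < (lp_ey - lp_ex) - (lp_py - lp_px):
                x, lp_px, lp_ex = y, lp_py, lp_ey
        chain.append(x)
    return np.array(chain), n_exact

chain, n_exact = delayed_acceptance(50000)
print(f"mean={chain.mean():.3f} var={chain.var():.3f} "
      f"exact evaluations={n_exact}/50000")
```

Every proxy-stage rejection saves one exact evaluation, which is where the longer chains at comparable cost come from; the stage-2 correction factor guarantees the stationary distribution is still that of the exact model.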
Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.
2016-06-01
The work deals with the Particle-in-Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of a gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, a 1d3v PIC/MCC code for numerical simulation of a glow discharge with a nonlocal electron energy distribution function is developed. Elastic, excitation, and ionization collisions between electron-neutral pairs, isotropic scattering and charge exchange collisions between ion-neutral pairs, and Penning ionizations are taken into account. The applicability of the numerical code is verified under radio-frequency capacitively coupled discharge conditions. The efficiency of the code is increased by parallelizing it using the Open Message Passing Interface. As a demonstration of the PLES method, the parallel PIC/MCC code is applied to a direct-current glow discharge in helium doped with a small amount of argon. The numerical results are consistent with the theoretical analysis of the formation of the nonlocal EEDF and with existing experimental data.
LPM-Effect in Monte Carlo Models of Radiative Energy Loss
Zapp, Korinna C; Wiedemann, Urs Achim
2009-01-01
Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.
LPM-Effect in Monte Carlo Models of Radiative Energy Loss
Energy Technology Data Exchange (ETDEWEB)
Zapp, Korinna C. [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt (Germany); Stachel, Johanna [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); Wiedemann, Urs Achim [Physics Department, Theory Unit, CERN, CH-1211 Geneve 23 (Switzerland)
2009-11-01
Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.
A Monte Carlo-based model of gold nanoparticle radiosensitization
Lechtman, Eli Solomon
The goal of radiotherapy is to operate within the therapeutic window - delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNP) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization. A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10^3-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer-ranged photoelectric products travelled orders of magnitude farther. A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm Au
Gaussian Process Model for Collision Dynamics of Complex Molecules.
Cui, Jie; Krems, Roman V
2015-08-14
We show that a Gaussian process model can be combined with a small number (of order 100) of scattering calculations to provide a multidimensional dependence of scattering observables on the experimentally controllable parameters (such as the collision energy or temperature) as well as the potential energy surface (PES) parameters. For the case of Ar-C_{6}H_{6} collisions, we show that 200 classical trajectory calculations are sufficient to provide a ten-dimensional hypersurface, giving the dependence of the collision lifetimes on the collision energy, internal temperature, and eight PES parameters. This can be used for solving the inverse scattering problem, for the efficient calculation of thermally averaged observables, for reducing the error of the molecular dynamics calculations by averaging over the PES variations, and for the analysis of the sensitivity of the observables to individual parameters determining the PES. Trained by a combination of classical and quantum calculations, the model provides an accurate description of the quantum scattering cross sections, even near scattering resonances.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Model investigation of non-thermal phase transition in high energy collisions
Institute of Scientific and Technical Information of China (English)
王琴; 李治明; 刘连寿
2000-01-01
The non-thermal phase transition in high energy collisions is studied in detail in the framework of the random cascade model. The relation between the characteristic parameter γq of the phase transition and the rank q of the moment is obtained using Monte Carlo simulation, and the existence of two phases in self-similar cascading multiparticle systems is shown. The dependence of the critical point qc of the phase transition on the fluctuation parameter α is obtained and compared with the experimental results from NA22. The same study is also carried out by analytical calculation under the central limit approximation. The range of validity of the central limit approximation is discussed.
Monte Carlo model for electron degradation in xenon gas
Mukundan, Vrinda
2016-01-01
We have developed a Monte Carlo model for studying the local degradation of electrons in the energy range 9-10000 eV in xenon gas. Analytically fitted forms of the electron impact cross sections for elastic and various inelastic processes are fed as input data to the model. A two-dimensional numerical yield spectrum, which gives information on the number of energy loss events occurring in a particular energy interval, is obtained as the output of the model. The numerical yield spectrum is fitted analytically, thus obtaining an analytical yield spectrum. The analytical yield spectrum can be used to calculate electron fluxes, which can be further employed for the calculation of volume production rates. Using the yield spectrum, the mean energy per ion pair and the efficiencies of inelastic processes are calculated. The mean energy per ion pair for Xe is 22 eV at 10 keV. Ionization dominates for incident energies greater than 50 eV and is found to have an efficiency of 65% at 10 keV. The efficiency for the excitation process is 30%...
Hopping electron model with geometrical frustration: kinetic Monte Carlo simulations
Terao, Takamichi
2016-09-01
The hopping electron model on the Kagome lattice was investigated by kinetic Monte Carlo simulations, and the non-equilibrium nature of the system was studied. We have numerically confirmed that aging phenomena are present in the autocorrelation function C(t, tW) of the electron system on the Kagome lattice, which is a geometrically frustrated lattice without any disorder. The waiting-time distribution p(τ) of hopping electrons in the system has also been studied. It is confirmed that the profile of p(τ) obtained at lower temperatures obeys a power law, which is a characteristic feature of a continuous-time random walk of electrons. These features were also compared with the characteristics of the Coulomb glass model, used as a model of disordered thin films and doped semiconductors. This work represents an advance in the understanding of the dynamics of geometrically frustrated systems and will serve as a basis for further studies of these physical systems.
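The power-law waiting-time statistics can be illustrated with inverse-transform sampling and a Hill-type estimate of the tail exponent; the exponent value is illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Draw waiting times from a power law p(tau) ~ tau^-(1+mu), tau >= 1,
# via inverse-transform sampling (mu = 0.8 is an illustrative exponent).
mu_true = 0.8
tau = rng.random(200000) ** (-1.0 / mu_true)

# Hill estimator of the tail exponent from the sample.
mu_hat = 1.0 / np.mean(np.log(tau))

print(f"true mu = {mu_true}, Hill estimate = {mu_hat:.3f}")
```

For mu < 1 the mean waiting time diverges, which is precisely the regime in which continuous-time random walks show aging: ever longer trapping times dominate as the waiting time tW grows.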
Modelling droplet collision outcomes for different substances and viscosities
Sommerfeld, Martin; Kuschel, Matthias
2016-12-01
The main objective of the present study is the derivation of models describing the outcome of binary droplet collisions for a wide range of dynamic viscosities in the well-known collision maps (i.e. normalised lateral droplet displacement at collision, called the impact parameter, versus collision Weber number). Previous studies by Kuschel and Sommerfeld (Exp Fluids 54:1440, 2013) for different solution droplets having a range of solids contents and hence dynamic viscosities (here between 1 and 60 mPa s) revealed that the locations of the triple point (i.e. coincidence of bouncing, stretching separation and coalescence) and the critical Weber number (i.e. condition for the transition from coalescence to separation for head-on collisions) show a clear dependence on dynamic viscosity. In order to extend these findings also to pure liquids and to provide a broader data basis for modelling the viscosity effect, additional binary collision experiments were conducted for different alcohols (viscosity range 1.2-15.9 mPa s) and the FVA1 reference oil at different temperatures (viscosity range 3.0-28.2 mPa s). The droplet size was around 365 µm for the series of alcohols and 385 µm for the FVA1 reference oil, in each case with the diameter ratio fixed at Δ = 1. The relative velocity between the droplets was varied in the range 0.5-3.5 m/s, yielding maximum Weber numbers of around 180. Individual binary droplet collisions with defined conditions were generated by two droplet chains, each produced by a vibrating orifice droplet generator. For recording droplet motion and the binary collision process with good spatial and temporal resolution, high-speed shadow imaging was employed. The results for varied relative velocity and impact angle were assembled in impact parameter-Weber number maps. With increasing dynamic viscosity, a characteristic displacement of the regimes for the different collision scenarios was observed for pure liquids as well, similar to that observed for solutions. This
A new collision avoidance model for pedestrian dynamics
Wang, Qian-Ling; Chen, Yao; Dong, Hai-Rong; Zhou, Min; Ning, Bin
2015-03-01
In simulations using the social force model, pedestrians can only avoid collisions passively under the action of forces, which may lead to unnatural behaviors. This paper proposes an optimization-based model for the avoidance of collisions, where the social repulsive force is removed in favor of a search for the quickest path to the destination in the pedestrian's vision field. In this way, the behaviors of pedestrians are governed by changing their desired walking direction and desired speed. By combining the critical factors of pedestrian movement, such as the positions of the exit and obstacles and the velocities of the neighbors, the choice of desired velocity is reduced to a discrete optimization problem. Therefore, it is the self-driven force that leads pedestrians to a free path rather than the repulsive force, which means the pedestrians can actively avoid collisions. The new model is verified by comparison with the fundamental diagram and actual data. The simulation results of individual avoidance trajectories and crowd avoidance behaviors demonstrate the reasonability of the proposed model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61233001 and 61322307) and the Fundamental Research Funds for Central Universities of China (Grant No. 2013JBZ007).
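The discrete choice of desired velocity can be sketched as a search over candidate walking directions; the cost function, the obstacle test, and all parameter values below are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np

def choose_desired_velocity(pos, exit_pos, obstacles, v_max=1.5,
                            n_dirs=16, r_agent=0.3, horizon=1.0):
    """Pick, from a discrete set of candidate directions, the velocity that
    minimises estimated time to the exit while keeping the next step clear
    of obstacles (a toy stand-in for the vision-field search)."""
    best, best_cost = np.zeros(2), np.inf
    for k in range(n_dirs):
        ang = 2.0 * np.pi * k / n_dirs
        v = v_max * np.array([np.cos(ang), np.sin(ang)])
        nxt = pos + horizon * v
        # Reject candidates whose next position collides with an obstacle.
        if any(np.linalg.norm(nxt - ob) < r_agent + r_ob
               for ob, r_ob in obstacles):
            continue
        # Cost: time already spent plus straight-line time-to-go.
        cost = horizon + np.linalg.norm(exit_pos - nxt) / v_max
        if cost < best_cost:
            best, best_cost = v, cost
    return best

pos = np.array([0.0, 0.0])
exit_pos = np.array([10.0, 0.0])
obstacles = [(np.array([1.5, 0.0]), 0.5)]  # one obstacle on the direct path

v = choose_desired_velocity(pos, exit_pos, obstacles)
print(f"chosen velocity: {v}")
```

With the obstacle blocking the straight line to the exit, the selected direction deflects around it rather than waiting for a repulsive force to push the agent aside, which is the qualitative behaviour the model aims for.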
Zhou, Qi-Dong; Menjo, Hiroaki; Sako, Takashi
2016-01-01
Very forward (VF) detectors in hadron colliders, having unique sensitivity to diffractive processes, can be a powerful tool for studying diffractive dissociation by combining them with central detectors. Several Monte Carlo simulation samples of p-p collisions at √s = 13 TeV were analyzed, and different nondiffractive and diffractive contributions were clarified through differential cross sections of forward neutral particles. Diffraction selection criteria in the VF-triggered-event samples were determined by using the central track information. The corresponding selection applicable in real experiments has ≈100% purity and 30%-70% efficiency. Consequently, the central information enables classification of the forward productions into diffraction and nondiffraction categories; in particular, most of the surviving events from the selection belong to low-mass diffraction events at log10(ξx) < -5.5. Therefore, the combined method can uniquely access the low-mass diffraction regim...
Leung, Roger
2010-03-31
Squeeze-film damping on microresonators is a significant damping source even when the surrounding gas is highly rarefied. This article presents a general modeling approach based on Monte Carlo (MC) simulations for the prediction of squeeze-film damping on resonators in the free-molecule regime. The generality of the approach is demonstrated in its capability of simulating resonators of any shape and with any accommodation coefficient. The approach is validated using both the analytical results of the free-space damping and the experimental data of the squeeze-film damping on a clamped-clamped plate resonator oscillating at its first flexure mode. The effect of oscillation modes on the quality factor of the resonator has also been studied, and semi-analytical approximate models for the squeeze-film damping with diffuse collisions have been developed.
Kinetic models with randomly perturbed binary collisions
Bassetti, Federico; Toscani, Giuseppe
2010-01-01
We introduce a class of Kac-like kinetic equations on the real line, with general random collisional rules, which include as particular cases models for wealth redistribution in an agent-based market or models for granular gases with a background heat bath. Conditions on these collisional rules which guarantee both the existence and uniqueness of equilibrium profiles and their main properties are found. We show that the characterization of these stationary solutions is of independent interest, since the same profiles are shown to be solutions of different evolution problems, both in the econophysics context and in the kinetic theory of rarefied gases.
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Directory of Open Access Journals (Sweden)
Marwah Almasri
2015-12-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them are equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach where three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs which are the eight distance sensors and the camera, two outputs which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles in different shapes and sizes.
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-12-26
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them are equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach where three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs which are the eight distance sensors and the camera, two outputs which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles in different shapes and sizes.
Modelling the brightness increase signature due to asteroid collisions
McLoughlin, Ev; McLoughlin, Alan
2015-01-01
We have developed a model to predict the post-collision brightness increase of sub-catastrophic collisions between asteroids and to evaluate the likelihood of a survey detecting these events. It is based on the cratering scaling laws of Holsapple and Housen (2007) and models the ejecta expansion following an impact as occurring in discrete shells, each with its own velocity. We estimate the magnitude change between a series of target/impactor pairs, assuming it is given by the increase in reflecting surface area within a photometric aperture due to the resulting ejecta. As expected, the photometric signal increases with impactor size, but we also find that the photometric signature decreases rapidly as the target asteroid diameter increases, due to gravitational fallback. We have used the model results to estimate an impactor diameter of D = 49-65 m for the (596) Scheila collision, depending on the impactor taxonomy, which is broadly consistent with previous estimates. We varied both the strength regi...
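The area-to-magnitude conversion implied by the model can be sketched in one line; fixed albedo is assumed here, and this is a reading of the abstract, not the paper's full photometric treatment.

```python
import numpy as np

def magnitude_change(area_asteroid, area_ejecta):
    """Brightness change (in magnitudes) if the reflecting area grows from
    the asteroid's cross-section by the ejecta cross-section within the
    photometric aperture, at fixed albedo."""
    return -2.5 * np.log10(1.0 + area_ejecta / area_asteroid)

# Example: ejecta matching the asteroid's own reflecting area brightens
# the object by about 0.75 mag (negative delta m = brighter).
dm = magnitude_change(1.0, 1.0)
print(f"delta m = {dm:.3f}")
```

Because the asteroid's own area grows as the square of its diameter while the ejecta area is set by the impactor, the same impactor produces a much weaker signature on a larger target, consistent with the trend reported in the abstract.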
Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor
Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G
2000-01-01
An accelerator-driven subcritical cascade reactor, composed of a main thermal-neutron reactor constructed analogously to the core of the VVER-1000 reactor and a booster reactor constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_eff = 0.94-0.98) capable of transmuting the radioactive waste it produces (the neutron flux density is PHI^max(r,z) = 10^14 n cm^-2 s^-1 in the thermal zone and PHI^max(r,z) = 2.25 x 10^15 n cm^-2 s^-1 in the fast zone, for k_eff = 0.98 and a proton accelerator beam current of I = 5.3 mA). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.
SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations
Baes, Maarten
2015-01-01
The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms that efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
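The key step named in this record, drawing random positions from an arbitrary 3D density, can be sketched with generic rejection sampling. This is an illustration of the technique only, not code from SKIRT; the function names and the Plummer-like test profile are our own:

```python
import numpy as np

def sample_positions(density, bounds, n, rho_max, rng=None):
    """Draw n random positions from an (unnormalised) 3D density via
    rejection sampling. `density` maps an (m, 3) array of positions
    to density values; `rho_max` must bound the density on the box."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    out = []
    while len(out) < n:
        pts = lo + (hi - lo) * rng.random((n, 3))       # uniform proposals
        keep = rng.random(n) * rho_max < density(pts)   # accept w.p. rho/rho_max
        out.extend(pts[keep])
    return np.array(out[:n])

# Example: a Plummer-like profile truncated to the unit box
plummer = lambda p: (1.0 + (p ** 2).sum(axis=1)) ** -2.5
pts = sample_positions(plummer, [(-1.0, 1.0)] * 3, 1000, rho_max=1.0)
```

Rejection sampling is simple and exact but wasteful for strongly peaked densities, which is why dedicated codes favour component-specific inversion schemes.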
Extended hard-sphere model and collisions of cohesive particles.
Kosinski, Pawel; Hoffmann, Alex C
2011-09-01
In two earlier papers the present authors modified a standard hard-sphere particle-wall and particle-particle collision model to account for the presence of adhesive or cohesive interactions between the colliding particles; the problem is of importance for modeling particle-fluid flow using the Lagrangian approach. This technique, which involves a direct numerical simulation of such flows, is gaining increasing popularity for simulating, e.g., dust transport, flows of nanofluids, and grains in planetary rings. The main objective of the previous papers was to formally extend the impulse-based hard-sphere model, while suggestions for quantifications of the adhesive or cohesive interaction were made. The present paper gives an improved quantification of the adhesive and cohesive interactions for use in the extended hard-sphere model for cases where the surfaces of the colliding bodies are "dry," i.e., there is no liquid-bridge formation between the colliding bodies. This quantification is based on the Johnson-Kendall-Roberts (JKR) analysis of collision dynamics but includes, in addition, dissipative forces using a soft-sphere modeling technique. In this way the cohesive impulse, required for the hard-sphere model, is calculated together with other parameters, namely the collision duration and the restitution coefficient. Finally, a dimensional analysis technique is applied to fit an analytical expression to the results for the cohesive impulse that can be used in the extended hard-sphere model. At the end of the paper we show some simulation results in order to illustrate the model.
Quark model and high energy collisions
Anisovich, V V; Nyíri, J; Shabelski, Yu M
2004-01-01
This is an updated version of the book published in 1985. QCD-motivated, it gives a detailed description of hadron structure and soft interactions in the additive quark model, where hadrons are regarded as composite systems of dressed quarks. In the past decade it has become clear that nonperturbative QCD, responsible for soft hadronic processes, may differ rather drastically from perturbative QCD. The understanding of nonperturbative QCD requires a detailed investigation of the experiments and the theoretical approaches. Bearing this in mind, the book has been rewritten paying special attenti
A Monte Carlo study of jet fragmentation functions in PbPb and pp collisions at sqrt{s}=2.76 TeV
Pérez-Ramos, Redamy
2014-01-01
The parton-to-hadron fragmentation functions (FFs) obtained from the YaJEM and PYTHIA6 Monte Carlo event generators are studied for jets produced in a strongly interacting medium and in the QCD "vacuum", respectively. The medium modifications are studied with the YaJEM code in two different scenarios: (i) by accounting for the medium-induced virtuality ΔQ^2 transferred to the leading parton from the medium, and (ii) by altering the infrared sector in the Borghini-Wiedemann approach. The results of our simulations are compared to experimental jet data measured by the CMS experiment in PbPb and pp collisions at a center-of-mass energy of 2.76 TeV. Though both scenarios qualitatively describe the shape and main physical features of the FFs, the ratios are in much better agreement with the first scenario. Results are presented for the Monte Carlo FFs obtained for different parton flavours (quark and gluon), accounting exactly, or not, for the experimental jet reconstruction biases.
A Monte Carlo reflectance model for soil surfaces with three-dimensional structure
Cooper, K. D.; Smith, J. A.
1985-01-01
A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of the incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. The resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding-method bidirectional canopy reflectance model are also given.
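Cosine-weighted hemisphere sampling is the standard way a Monte Carlo ray tracer scatters a ray off a Lambertian facet like those in this model; a minimal sketch of the generic technique (not the authors' code):

```python
import numpy as np

def lambertian_directions(n, rng):
    """Cosine-weighted hemisphere sampling: outgoing directions after
    a ray hits a Lambertian facet, with z the local facet normal.
    Taking r = sqrt(u1) makes the pdf proportional to cos(theta), as
    a Lambertian surface requires."""
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

rng = np.random.default_rng(4)
d = lambertian_directions(100000, rng)   # mean cos(theta) tends to 2/3
```

Sampling proportionally to cos(theta) removes the cosine factor from the estimator, so every scattered ray carries equal weight.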
Markov Modelling of Fingerprinting Systems for Collision Analysis
Directory of Open Access Journals (Sweden)
Guénolé C. M. Silvestre
2008-03-01
Multimedia fingerprinting, also known as robust or perceptual hashing, aims at representing multimedia signals through compact and perceptually significant descriptors (hash values). In this paper, we examine the probability of collision of a certain general class of robust hashing systems that, in its binary alphabet version, encompasses a number of existing robust audio hashing algorithms. Our analysis relies on modelling the fingerprint (hash) symbols by means of Markov chains, which is generally realistic due to the hash synchronization properties usually required in multimedia identification. We provide theoretical expressions of performance, and show that the use of M-ary alphabets is advantageous with respect to binary alphabets. We show how these general expressions explain the performance of Philips fingerprinting, whose probability of collision had previously been estimated only through heuristics.
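The Markov-chain fingerprint model can be probed numerically. Below is a toy sketch that estimates the collision probability of two independent Markov fingerprints by simulation; it is our own illustration (with an i.i.d. binary chain as a sanity check), not the paper's analytical expressions:

```python
import numpy as np

def collision_prob(P, pi0, length, trials=20000, rng=None):
    """Monte Carlo estimate of the probability that two independent
    fingerprints collide, each modelled as a Markov chain over an
    M-ary alphabet with transition matrix P and initial law pi0."""
    rng = np.random.default_rng(0) if rng is None else rng
    P, pi0 = np.asarray(P), np.asarray(pi0)
    M = len(pi0)
    def draw():
        s = np.empty(length, dtype=int)
        s[0] = rng.choice(M, p=pi0)
        for t in range(1, length):
            s[t] = rng.choice(M, p=P[s[t - 1]])  # next symbol from row s[t-1]
        return s
    hits = sum(np.array_equal(draw(), draw()) for _ in range(trials))
    return hits / trials

# Sanity check: i.i.d. uniform binary symbols collide with prob 2**-length
P = np.full((2, 2), 0.5)
p = collision_prob(P, [0.5, 0.5], length=4)   # close to 0.0625
```

With correlated transition rows the estimate deviates from 2**-length, which is exactly the effect the paper quantifies analytically.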
Numerical models of trench migration in continental collision zones
Directory of Open Access Journals (Sweden)
V. Magni
2012-03-01
Continental collision is an intrinsic feature of plate tectonics. The closure of an oceanic basin leads to the onset of subduction of buoyant continental material, which slows down and eventually stops the subduction process. We perform a parametric study of the geometrical and rheological influences on subduction dynamics during the subduction of continental lithosphere. In 2-D numerical models of a free subduction system with temperature- and stress-dependent rheology, the trench and the overriding plate move self-consistently as a function of the dynamics of the system (i.e. no external forces are imposed). This setup enables us to study how continental subduction influences trench migration. We found that in all models the trench starts to advance once the continent enters the subduction zone and continues to migrate until a few million years after the ultimate slab detachment. Our results support the idea that trench advance is favoured by, and in part provided by, the intrinsic force balance of continental collision. We suggest that the trench advance is first induced by the locking of the subduction zone and the subsequent steepening of the slab, and next by the sinking of the deepest oceanic part of the slab, during stretching and break-off of the slab. The amount of trench advance ranges from 40 to 220 km and depends on the dip angle of the slab before the onset of collision.
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
Li, Ying Wai; Wüst, Thomas; Landau, David P.
2011-09-01
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
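Wang-Landau sampling itself is easy to illustrate on a small lattice model. The sketch below applies it to a 4 x 4 Ising model rather than the HP model (whose pull and bond-rebridging moves are more involved and not reproduced here):

```python
import numpy as np

def wang_landau_ising(L=4, lnf_final=1e-3, flat=0.7, seed=1):
    """Wang-Landau estimate of the density of states ln g(E) for an
    L x L Ising model with periodic boundaries. The modification
    factor lnf is halved whenever the energy histogram is flat."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    E = -int((spins * np.roll(spins, 1, 0)).sum()
             + (spins * np.roll(spins, 1, 1)).sum())
    lng, hist, lnf = {}, {}, 1.0
    while lnf > lnf_final:
        for _ in range(10000):
            i, j = rng.integers(L, size=2)
            dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                    + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            # accept with min(1, g(E)/g(E')) so rare energies get visited
            if np.log(rng.random()) < lng.get(E, 0.0) - lng.get(E + dE, 0.0):
                spins[i, j] *= -1
                E += dE
            lng[E] = lng.get(E, 0.0) + lnf
            hist[E] = hist.get(E, 0) + 1
        h = np.array(list(hist.values()))
        if h.min() > flat * h.mean():        # flat enough: refine lnf
            hist, lnf = {}, lnf / 2.0
    return lng

lng = wang_landau_ising()   # ln g(E), known only up to an additive constant
```

Once ln g(E) is known, thermodynamic averages at any temperature follow by direct summation, which is what makes the method attractive for folding-type problems with rugged landscapes.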
Numerical models of slab migration in continental collision zones
Directory of Open Access Journals (Sweden)
V. Magni
2012-09-01
Continental collision is an intrinsic feature of plate tectonics. The closure of an oceanic basin leads to the onset of subduction of buoyant continental material, which slows down and eventually stops the subduction process. In natural cases, evidence of advancing margins has been recognized in continental collision zones such as India-Eurasia and Arabia-Eurasia. We perform a parametric study of the geometrical and rheological influences on subduction dynamics during the subduction of continental lithosphere. In our 2-D numerical models of a free subduction system with temperature- and stress-dependent rheology, the trench and the overriding plate move self-consistently as a function of the dynamics of the system (i.e. no external forces are imposed). This setup enables us to study how continental subduction influences trench migration. We found that in all models the slab starts to advance once the continent enters the subduction zone and continues to migrate until a few million years after the ultimate slab detachment. Our results support the idea that the advancing mode is favoured by, and in part provided by, the intrinsic force balance of continental collision. We suggest that the advance is first induced by the locking of the subduction zone and the subsequent steepening of the slab, and next by the sinking of the deepest oceanic part of the slab, during stretching and break-off of the slab. These processes are responsible for the migration of the subduction zone by triggering small-scale convection cells in the mantle that, in turn, drag the plates. The amount of advance ranges from 40 to 220 km and depends on the dip angle of the slab before the onset of collision.
Predictions from a Simple Hadron Rescattering Model for pp Collisions at the LHC
Truesdale, David C.
With studies of heavy ion and pp physics already under way at the LHC, it is necessary to consider how hadron rescattering will affect the observed results from experiments such as ALICE, ATLAS and CMS. Through the use of a simple, relativistic-kinematics-based hadron rescattering model, this dissertation shows that the hadron rescattering phase can obscure some signals for radial flow in pp collisions at LHC energies. This dissertation presents an in-depth description of the hardware-based alignment monitoring system developed for the ALICE Inner Tracking System. It details the development of the ITSAMS, which uses geometric optics and a CMOS array to measure micron-scale motion between two points. By monitoring three strategic points on the ITS in relation to the TPC endplate, the ITSAMS can determine translational shifts between the two detectors to a resolution of 9.4 μm in the transverse plane and 78 μm along the longitudinal axis. The ITSAMS can measure rotational shifts to 10 μrad or better about all three axes. After a brief discussion of the ALICE experiment and the theory and practice of two-particle intensity interferometry, this dissertation details a simple hadron rescattering computer model developed by Dr. T. J. Humanic. The process of porting the model to the C++ computer language is presented here, along with the improvements made. The model has been updated with a new space-time distribution scheme that is more appropriate for pp collision studies. The model is then compared with final-state PYTHIA-generated Monte Carlo data. It is shown that the hadron rescattering model accurately reproduces pseudorapidity distributions for pp collisions at √s = 0.9, 7, 10, and 14 TeV. Moreover, except for a slight overprediction of kaons and a slight underprediction of protons, the rescattering model accurately reproduces PYTHIA pT spectra. This dissertation then endeavours to compare results to the HBT radii present in the ALICE collaboration's analysis of
Atomic collision processes for modelling cool star spectra
Barklem, Paul
2015-05-01
The abundances of chemical elements in cool stars are very important in many problems in modern astrophysics. They provide unique insight into the chemical and dynamical evolution of the Galaxy, stellar processes such as mixing and gravitational settling, the Sun and its place in the Galaxy, and planet formation, to name just a few examples. Modern telescopes and spectrographs measure stellar spectral lines with a precision of order 1 per cent, and planned surveys will provide such spectra for millions of stars. However, systematic errors in the interpretation of observed spectral lines lead to abundances with uncertainties greater than 20 per cent. Greater precision in the interpreted abundances should reasonably be expected to lead to significant discoveries, and improvements in the atomic data used in stellar atmosphere models play a key role in achieving such advances in precision. In particular, departures from the classical assumption of local thermodynamic equilibrium (LTE) represent a significant uncertainty in the modelling of stellar spectra and thus in derived chemical abundances. Non-LTE modelling requires large amounts of radiative and collisional data for the atomic species of interest. I will focus on inelastic collision processes due to electron and hydrogen atom impacts, the important perturbers in cool stars, and the progress that has been made. I will discuss the impact on non-LTE modelling, and what the modelling tells us about the types of collision processes that are important and the accuracy required. More specifically, processes of a fundamentally quantum mechanical nature, such as spin-changing collisions and charge transfer, have been found to be very important in the non-LTE modelling of spectral lines of lithium, oxygen, sodium and magnesium.
Development of topography in 3-D continental-collision models
Pusok, A. E.; Kaus, Boris J. P.
2015-05-01
Understanding the formation and evolution of high mountain belts, such as the Himalayas and the adjacent Tibetan Plateau, has been the focus of many tectonic and numerical models. Here we employ 3-D numerical simulations to investigate the role that subduction, collision, and indentation play on lithosphere dynamics at convergent margins, and to analyze the conditions under which large topographic plateaus can form in an integrated lithospheric and upper mantle-scale model. Distinct dynamics are obtained for the oceanic subduction side (trench retreat, slab rollback) and the continental-collision side (trench advance, slab detachment, topographic uplift, lateral extrusion). We show that slab pull alone is insufficient to generate high topography in the upper plate, and that external forcing and the presence of strong blocks such as the Tarim Basin are necessary to create and shape anomalously high topographic fronts and plateaus. Moreover, scaling is used to predict four different modes of surface expression in continental-collision models: (I) low-amplitude homogeneous shortening, (II) high-amplitude homogeneous shortening, (III) Alpine-type topography with topographic front and low plateau, and (IV) Tibet-Himalaya-type topography with topographic front and high plateau. Results of semianalytical models suggest that the Argand number governs the formation of high topographic fronts, while the amplitude of plateaus is controlled by the initial buoyancy ratio of the upper plate. Applying these results to natural examples, we show that the Alps belong to regime (III), the Himalaya-Tibet to regime (IV), whereas the Andes-Altiplano fall at the boundary between regimes (III) and (IV).
Effective quantum Monte Carlo algorithm for modeling strongly correlated systems
Kashurnikov, V. A.; Krasavin, A. V.
2007-01-01
A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description
Monte Carlo simulation of quantum statistical lattice models
Raedt, Hans De; Lagendijk, Ad
1985-01-01
In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used t
Monte Carlo estimation of the conditional Rasch model
Akkermans, Wies M.W.
1994-01-01
In order to obtain conditional maximum likelihood estimates, the so-called conditioning constants have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Markov chain Monte Carlo. As an example, the method is applied to
Monte Carlo estimation of the conditional Rasch model
Akkermans, W.
1998-01-01
In order to obtain conditional maximum likelihood estimates, the conditioning constants are needed. Geyer and Thompson (1992) proposed a Markov chain Monte Carlo method that can be used to approximate these constants when they are difficult to calculate exactly. In the present paper, their method is
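The idea behind such Monte Carlo approximation of hard-to-compute normalising (conditioning) constants can be illustrated with simple importance sampling. This is a generic sketch with Gaussian toy densities of known normalisers, not the Geyer-Thompson implementation:

```python
import numpy as np

def log_const_ratio(samples, logq0, logq1):
    """Estimate log(Z1/Z0), where Zk is the normaliser of exp(logqk),
    from exact draws of p0 = exp(logq0)/Z0, using the identity
    Z1/Z0 = E_p0[exp(logq1 - logq0)] (computed in log space)."""
    w = logq1(samples) - logq0(samples)
    m = w.max()
    return m + np.log(np.mean(np.exp(w - m)))

# Toy check with a known answer: unnormalised N(0, 2) and N(0, 1) shapes,
# so Z0 = sqrt(2*pi*4), Z1 = sqrt(2*pi) and log(Z1/Z0) = log(1/2)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, 100000)            # exact draws from p0
logq0 = lambda t: -0.5 * (t / 2.0) ** 2
logq1 = lambda t: -0.5 * t ** 2
est = log_const_ratio(x, logq0, logq1)
```

Sampling from the wider density keeps the importance weights bounded; in the conditional Rasch setting the exact draws are replaced by a Markov chain targeting p0, which is the step Geyer and Thompson's method supplies.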
Improved Monte Carlo model for multiple scattering calculations
Institute of Scientific and Technical Information of China (English)
Weiwei Cai; Lin Ma
2012-01-01
The coupling between the Monte Carlo (MC) method and geometrical optics to improve accuracy is investigated. The results obtained show improved agreement with previous experimental data, demonstrating that the MC method, when coupled with simple geometrical optics, can simulate multiple scattering with enhanced fidelity.
De Backer, A.; Adjanor, G.; Domain, C.; Lescoat, M. L.; Jublot-Leclerc, S.; Fortuna, F.; Gentils, A.; Ortiz, C. J.; Souidi, A.; Becquart, C. S.
2015-06-01
Implantation of 10 keV helium in 316L steel thin foils was performed in the JANNuS-Orsay facility and modeled using a multiscale approach. Density Functional Theory (DFT) atomistic calculations [1] were used to obtain the properties of He and He-vacancy clusters, and the Binary Collision Approximation based code MARLOWE was applied to determine the damage and He-ion depth profiles as in [2,3]. The processes involved in homogeneous He bubble nucleation and growth were defined and implemented in the Object Kinetic Monte Carlo code LAKIMOCA [4]. In particular, as the He to dpa ratio was high, self-trapping of He clusters and the trap mutation of He-vacancy clusters had to be taken into account. With this multiscale approach, the formation of bubbles was modeled up to nanometer-scale sizes, where bubbles can be observed by Transmission Electron Microscopy. Their densities and sizes were studied as functions of fluence (up to 5 × 10^19 He/m^2) at two temperatures (473 and 723 K) and for different sample thicknesses (25-250 nm). It appears that the damage is not only due to the collision cascades but is also strongly controlled by He accumulation in pressurized bubbles. Comparison with experimental data is discussed and reasonable agreement is achieved.
Monte Carlo modeling of ultrasound probes for image guided radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Bazalova-Carter, Magdalena, E-mail: bazalova@uvic.ca [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 2Y2 (Canada); Schlosser, Jeffrey [SoniTrack Systems, Inc., Palo Alto, California 94304 (United States); Chen, Josephine [Department of Radiation Oncology, UCSF, San Francisco, California 94143 (United States); Hristov, Dimitre [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)
2015-10-15
Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm{sup 3}. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm{sup 2} beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm{sup 2} beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R{sup 2} > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm{sup 3}, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest 6% dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The
Double pendulum model for tennis stroke including a collision process
Youn, Sun-Hyun
2015-01-01
By adding a collision process between the ball and racket to the double pendulum model, we analyzed the tennis stroke. The speed of the rebound ball does not simply depend on the angular velocity of the racket; a higher angular velocity sometimes gives a lower ball speed. We numerically showed that a properly time-lagged racket rotation increases the speed of the rebound ball by 20%. We also showed that the elbow should move in order to add to the angular velocity of the racket.
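The collision step can be illustrated with a one-dimensional restitution law applied at the racket head. The numbers and the heavy-racket assumption below are our own toy choices, not the paper's full double-pendulum dynamics:

```python
def rebound_speed(omega1, omega2, l1, l2, v_ball, e=0.85):
    """Outgoing ball speed when a two-link arm (upper arm + racket)
    strikes an incoming ball head-on. The contact point at the racket
    tip moves at v_c = omega1*l1 + omega2*l2; a 1-D restitution law,
    assuming the racket is much heavier than the ball, then gives
    v_out = (1 + e)*v_c - e*v_ball (v_ball < 0 for an incoming ball)."""
    v_c = omega1 * l1 + omega2 * l2
    return (1.0 + e) * v_c - e * v_ball

# arm 0.7 m at 20 rad/s, racket 0.5 m at 30 rad/s, ball incoming at 10 m/s
v_out = rebound_speed(20.0, 30.0, 0.7, 0.5, v_ball=-10.0)  # 62.15 m/s
```

The timing effect reported in the abstract enters through omega1 and omega2 at the instant of contact, which is why a lagged racket rotation can beat a faster but mistimed swing.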
Model for hypernucleus production in heavy ion collisions
Pop, V Topor
2010-01-01
We estimate the production cross sections of hypernuclei in projectile-like fragments (PLF) in heavy ion collisions. The scenario discussed for the hypernucleus formation cross section is: (a) Lambda particles are produced in the participant region but have a considerable rapidity spread, and (b) a Lambda with rapidity close to that of the PLF and total momentum (in the rest system of the PLF) up to the Fermi motion can then be trapped and produce hypernuclei. Process (a) is considered here within the Heavy Ion Jet Interacting Generator HIJING-BBbar model and process (b) in the canonical thermodynamic model (CTM). We estimate the production cross sections for light hypernuclei for C + C at 3.7 GeV total nucleon-nucleon center-of-mass energy and for Ne+Ne and Ar+Ar collisions at 5.0 GeV. By taking into account explicitly the impact parameter dependence of the colliding systems, it is found that the cross section differs from that predicted by the coalescence model, and a large discrepancy is obtained for 6_He and...
Eikonal model analysis of elastic hadron collisions at high energies
Prochazka, Jiri
2016-01-01
Elastic collisions of protons at different energies represent the main background in studying the structure of fundamental particles at present. On the basis of the standardly used model proposed by West and Yennie, protons have been interpreted as transparent objects, with elastic events interpreted as more central than inelastic ones. It will be shown that, using the eikonal model, protons may be interpreted in agreement with the usual ontological conception, elastic processes being more peripheral than inelastic ones. The corresponding results (differing fundamentally from those of the WY model) will be presented by analyzing the most ample elastic data set, measured at the ISR energy of 53 GeV. A detailed analysis of the measured differential cross section will be performed and different alternatives of peripheral behavior on the basis of the eikonal model will be presented. The impact of recently established electromagnetic form factors on the determination of quantities specifying hadron interaction determined from the fit...
A Simple Model of Wings in Heavy-Ion Collisions
Parikh, Aditya
2015-01-01
We create a simple model of heavy ion collisions independent of any generators as a way of investigating a possible source of the wings seen in data. As a first test, we reproduce a standard correlations plot to verify the integrity of the model. We then proceed to test whether an η dependent v2 could be a source of the wings and take projections along multiple Δφ intervals and compare with data. Other variations of the model are tested by having dN/dφ and v2 depend on η as well as including pions and protons into the model to make it more realistic. Comparisons with data seem to indicate that an η dependent v2 is not the main source of the wings.
Directory of Open Access Journals (Sweden)
Kovalenko Vladimir
2017-01-01
Long-range multiplicity correlations in intervals separated in pseudorapidity and azimuth are studied in the framework of the string fusion approach. We applied a Monte Carlo model in which the string configurations in the transverse plane and rapidity are simulated event-by-event. The string interaction is realized in the lattice string fusion approach, with the introduction of a grid in the transverse plane. We assumed that the azimuthal anisotropy of particle production is caused by the energy loss of partons traveling through the medium formed by clusters of fused strings: Δpt/Δx = −α(pt√η)^{2/3}, where η is the string density. In the cellular approach, Bresenham's line algorithm has been applied. It is found that in AA collisions the parton energy loss seems to play a considerable role, in particular by providing a large contribution to the correlation of mean transverse momentum with multiplicity. The developed approach provides non-zero flow values in p-Pb collisions at LHC energies and produces a pattern similar to that of the experimental di-hadron analysis.
Kovalenko, Vladimir
2017-03-01
Long-range multiplicity correlations in intervals separated in pseudorapidity and azimuth are studied in the framework of the string fusion approach. We applied a Monte Carlo model in which the string configurations in the transverse plane and rapidity are simulated event-by-event. The string interaction is realized in the lattice string fusion approach, with the introduction of a grid in the transverse plane. We assumed that the azimuthal anisotropy of particle production is caused by the energy loss of partons traveling through the medium formed by clusters of fused strings: Δpt/Δx = −α(pt√η)^{2/3}, where η is the string density. In the cellular approach, Bresenham's line algorithm has been applied. It is found that in AA collisions the parton energy loss seems to play a considerable role, in particular by providing a large contribution to the correlation of mean transverse momentum with multiplicity. The developed approach provides non-zero flow values in p-Pb collisions at LHC energies and produces a pattern similar to that of the experimental di-hadron analysis.
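Bresenham's line algorithm, used in this record to tally the grid cells a parton traverses, can be sketched as follows (a textbook integer version; the tie-in to string densities is our gloss):

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line: the ordered grid cells a straight path
    visits between (x0, y0) and (x1, y1). In an energy-loss model one
    would sum the string density over exactly these cells."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:          # step in x
            err += dy
            x0 += sx
        if e2 <= dx:          # step in y
            err += dx
            y0 += sy
    return cells
```

The algorithm uses only integer arithmetic, which is why it is the natural choice for walking a parton path across a transverse-plane lattice.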
A Covariant OBE Model for $\\eta$ Production in NN Collisions
Gedalin, E; Razdolskaya, L A
1998-01-01
A relativistic covariant one-boson-exchange model, previously applied to describe elastic nucleon-nucleon scattering, is extended to study $\eta$ production in NN collisions. The transition amplitude for the elementary BN->$\eta$N process, with B being the exchanged meson (B=$\pi$, $\sigma$, $\eta$), includes s- and u-channels with a nucleon or a nucleon isobar N*(1535 MeV) in the intermediate states. Taking the relative phases of the various exchange amplitudes to be +1, the model reproduces the cross sections for the $NN\to X\eta$ reactions in a consistent manner. In the limit where the overall contributions from the exchange of pseudoscalar and scalar mesons cancel against those of vector mesons, much of the ambiguity in the model predictions due to the unknown relative phases of the different vector and pseudoscalar exchanges is strongly reduced.
Mitchell, J T; Tannenbaum, M J; Stankus, P W
2016-01-01
Several methods of generating three constituent quarks in a nucleon are evaluated which explicitly maintain the nucleon's center of mass and desired radial distribution, and which can be used within Monte Carlo Glauber frameworks. The geometric models provided by each method are used to generate distributions over the Number of Constituent Quark Participants ($N_{qp}$) in $p+p$, $d+$Au and Au$+$Au collisions. The results are compared with each other and with a previous result of $N_{qp}$ calculations, without this explicit constraint, used in measurements of $\sqrt{s_{_{NN}}}$=200 GeV $p+p$, $d+$Au and Au$+$Au collisions at RHIC.
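The simplest (naive) recentring scheme can be sketched as follows. This is our own illustration with an assumed exponential radial profile; note that recentring slightly distorts the single-quark distribution, which is precisely the issue the constrained methods evaluated above are built to avoid:

```python
import numpy as np

def three_quarks(rng, a=0.3):
    """Place three constituent quarks in a nucleon: draw each radius
    from p(r) ~ r^2 exp(-r/a) (a Gamma(3, a) variate), pick isotropic
    directions, then shift the triplet so its centre of mass sits at
    the origin."""
    r = rng.gamma(3.0, a, size=3)
    v = rng.normal(size=(3, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # random unit vectors
    q = v * r[:, None]
    return q - q.mean(axis=0)                       # enforce CM at origin

rng = np.random.default_rng(2)
q = three_quarks(rng)   # (3, 3) array of quark positions, CM at zero
```

In a Glauber framework these quark positions would then be tested pairwise against quarks of the other nucleus with an inelastic profile function to count $N_{qp}$.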
Modeling of electron-electron collisions for particle-in-cell simulations
Energy Technology Data Exchange (ETDEWEB)
Andrea, D. d'
2006-09-15
The modeling of the physics of pulsed plasma thrusters requires the numerical solution of the Boltzmann equation for rarefied plasma flows where continuum assumptions fail. To tackle this challenging task, a cooperation between several institutes has been formed with the goal of developing a hybrid code based on Particle-In-Cell and Direct Simulation Monte Carlo techniques. These development activities are bundled in the project ''Numerische Simulation und Auslegung eines instationaeren gepulsten magnetoplasmadynamischen Triebwerks fuer eine Mondsonde'' (''Numerical simulation and design of an unsteady pulsed magnetoplasmadynamic thruster for a lunar probe''), which is funded by the Landesstiftung Baden-Wuerttemberg within the subject area ''Modellierung und Simulation auf Hochleistungscomputern'' (''Modelling and simulation on high-performance computers''). Within this project, the IHM is in charge of developing suitable physical-mathematical and numerical models to include charged-particle collisions in the simulation, which can significantly affect the parameters of such plasma devices. The intention of the present report is to introduce the Fokker-Planck approach for electron-electron interaction in standard charged-particle simulations, where the impact parameter is usually large, resulting in a small deflection angle. The theoretical and applicative framework is discussed in detail, paying particular attention to the Particle-In-Cell approach in velocity space, a new technique which allows the self-consistent computation of the friction and diffusion coefficients arising from the Fokker-Planck treatment of collisions. These velocity-dependent coefficients are themselves responsible for the change in velocity of the simulation particles, which is determined by the numerical solution of a Langevin-type equation. Simulation results for typical numerical experiments computed with the newly developed Fokker-Planck solver are presented, demonstrating the quality, properties and reliability of the applied numerical methods. (orig.)
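A Langevin-type particle update of the kind described can be sketched with an Euler-Maruyama step. The drag and diffusion functions below are generic placeholders, not the solver's Rosenbluth-potential-based coefficients; the constant-coefficient (Ornstein-Uhlenbeck) case relaxes to a Maxwellian, which serves as a sanity check:

```python
import numpy as np

def langevin_step(v, drag, diff, dt, rng):
    """One Euler-Maruyama step of a Langevin-type equation
    dv = -drag(v) dt + sqrt(2 diff(v) dt) dW: the kind of stochastic
    update a Fokker-Planck collision module applies to each
    simulation particle per time step."""
    dW = rng.normal(size=v.shape)
    return v - drag(v) * dt + np.sqrt(2.0 * diff(v) * dt) * dW

# Constant-coefficient sanity check: any initial velocity distribution
# relaxes to a Maxwellian of mean 0 and variance D/gamma
rng = np.random.default_rng(3)
gamma, D, dt = 1.0, 0.5, 0.01
v = np.full(50000, 3.0)                     # all particles start at v = 3
for _ in range(2000):                       # total time t = 20 >> 1/gamma
    v = langevin_step(v, lambda u: gamma * u,
                      lambda u: D * np.ones_like(u), dt, rng)
```

In the self-consistent scheme, drag and diff are recomputed each step from the evolving velocity distribution itself, which is what the velocity-space Particle-In-Cell approach provides.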
Hinzke, Denise; Nowak, Ulrich
1999-01-01
Using Monte Carlo methods we investigate the thermally activated magnetization switching of small ferromagnetic particles driven by an external magnetic field. For low uniaxial anisotropy one expects that the spins rotate coherently while for sufficiently large anisotropy the reversal should be due to nucleation. The latter case has been investigated extensively by Monte Carlo simulation of corresponding Ising models. In order to study the crossover from coherent rotation to nucleation we use...
Monte Carlo path sampling approach to modeling aeolian sediment transport
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the saltation already in progress, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power whereas cellular automata-based approaches are computationally efficient
Optical Monte Carlo modeling of a true portwine stain anatomy
Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.
1998-04-01
A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.
Thermal Model Description of Collisions of Small Nuclei
Cleymans, J.; Oeschler, H.; Redlich, K.; Sharma, N.
2016-01-01
The dependence of particle production on the size of the colliding nuclei is analyzed in terms of the thermal model using the canonical ensemble. The concept of strangeness correlation in clusters of sub-volume $V_c$ is used to account for the suppression of strangeness. A systematic analysis is presented of the predictions of the thermal model for particle production in collisions of small nuclei. The pattern of the maxima in particle ratios of strange particles to pions as a function of beam energy is quite special, as they do not occur at the same beam energy and are sensitive to system size. In particular, the $\Lambda/\pi^+$ ratio shows a clear maximum even for the smallest systems, while the maximum in the $K^+/\pi^+$ ratio disappears in small systems.
Gao, Liang; Sun, Jizhong; Feng, Chunlei; Bai, Jing; Ding, Hongbin
2012-01-01
A particle-in-cell plus Monte Carlo collisions method has been employed to investigate the nitrogen discharge driven by a nanosecond pulse power source. To assess whether the production of the metastable state N2(A^3Σ_u^+) can be efficiently enhanced in a nanosecond pulsed discharge, the evolutions of the metastable state N2(A^3Σ_u^+) density and the electron energy distribution function have been examined in detail. The simulation results indicate that the ultrashort pulse can modulate the electron energy effectively: during the early pulse-on time, high-energy electrons give rise to a quick electron avalanche and rapid growth of the metastable state N2(A^3Σ_u^+) density. It is estimated that for a single pulse with an amplitude of -9 kV and a pulse width of 30 ns, the metastable state N2(A^3Σ_u^+) density can reach a value on the order of 10^9 cm^-3. A density at such a value could be easily detected by laser-based experimental methods.
Casalderrey-Solana, Jorge; Milhano, Jose Guilherme; Pablos, Daniel; Rajagopal, Krishna
2015-01-01
We confront a hybrid strong/weak coupling model for jet quenching with data from LHC heavy ion collisions. The model combines the perturbative QCD physics at high momentum transfer and the strongly coupled dynamics of non-abelian gauge theory plasmas in a phenomenological way. By performing a full Monte Carlo simulation, and after fitting one single parameter, we successfully describe several jet observables at the LHC, including dijet and photon-jet measurements. Within current theoretical and experimental uncertainties, we find that such observables show little sensitivity to the specifics of the microscopic energy loss mechanism. We also present a new observable, the ratio of the fragmentation function of inclusive jets to that of the associated jets in dijet pairs, which can discriminate among different medium models. Finally, we discuss the importance of plasma response to jet passage in jet shapes.
Parametric links among Monte Carlo, phase-field, and sharp-interface models of interfacial motion.
Liu, Pu; Lusk, Mark T
2002-12-01
Parametric links are made among three mesoscale simulation paradigms: phase-field, sharp-interface, and Monte Carlo. A two-dimensional, square-lattice, spin-1/2 Ising model is considered for the Monte Carlo method, where an exact solution for the interfacial free energy is known. The Monte Carlo mobility is calibrated as a function of temperature using Glauber kinetics. A standard asymptotic analysis relates the phase-field and sharp-interface parameters, and this allows the phase-field and Monte Carlo parameters to be linked. The result is derived without bulk effects but is then applied to a set of simulations with the bulk driving force included. An error analysis identifies the domain over which the parametric relationships are accurate.
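The Glauber kinetics used for the Monte Carlo leg of this comparison can be sketched as a single-spin-flip sweep of a 2D Ising lattice. This is a generic illustration (J = kB = 1, periodic boundaries; all names and parameter values are illustrative, not taken from the paper):

```python
import math
import random

def glauber_sweep(spins, L, T, rng):
    """One Monte Carlo sweep of Glauber single-spin-flip dynamics on an
    L x L periodic square lattice; flip probability 1/(1 + exp(dE/T))."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn        # energy change if this spin flips
        if rng.random() < 1.0 / (1.0 + math.exp(dE / T)):
            spins[i][j] = -spins[i][j]

def magnetization(spins, L):
    """Mean magnetization per site."""
    return sum(sum(row) for row in spins) / (L * L)
```

Below the critical temperature (Tc ≈ 2.269 in these units) an ordered lattice stays magnetized under this dynamics, which is the standard sanity check before calibrating mobilities.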
A new Monte Carlo simulation model for laser transmission in smokescreen based on MATLAB
Lee, Heming; Wang, Qianqian; Shan, Bin; Li, Xiaoyang; Gong, Yong; Zhao, Jing; Peng, Zhong
2016-11-01
A new Monte Carlo simulation model of laser transmission in smokescreen is proposed in this paper. In the traditional Monte Carlo simulation model, the radius of all particles is set to the same value and the initial direction cosine of the photons is also fixed, which yields only approximate results. The new model is implemented in MATLAB and can simulate laser transmittance in smokescreens with different particle sizes, so the output of the model is closer to real scenarios. In order to account for the divergence of the laser while traveling in the air, we changed the initial direction cosine of the photons relative to the traditional Monte Carlo model. The simulation results for mixed-radius particle smoke agree with the transmittance measured under the same experimental conditions to within a 5.42% error rate.
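The photon-tracking idea behind such transmittance models can be illustrated with a toy random walk through a purely scattering slab (exponential free paths, isotropic rescattering, no absorption). This is a simplified sketch, not the paper's MATLAB model; all names and values are illustrative:

```python
import math
import random

def slab_transmittance(n_photons, thickness, mfp, rng):
    """Estimate the fraction of photons that cross a purely scattering slab.
    Free paths are exponential with mean `mfp`; each collision redraws the
    direction cosine isotropically. No absorption is modelled."""
    transmitted = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                      # depth and direction cosine
        while 0.0 <= z < thickness:
            # Exponential free path (1 - U avoids log(0)).
            z += mu * (-mfp * math.log(1.0 - rng.random()))
            mu = 2.0 * rng.random() - 1.0     # isotropic rescatter
        if z >= thickness:
            transmitted += 1
    return transmitted / n_photons
```

Because scattered photons can still exit the far face, the estimated transmittance exceeds the ballistic (unscattered) value exp(-thickness/mfp), which gives a quick consistency check.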
Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation
Machguth, H.; Purves, R.S.; Oerlemans, J.; Hoelzle, M.; Paul, F.
2008-01-01
By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tun
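The Monte Carlo uncertainty assessment described here amounts to repeatedly perturbing uncertain model inputs and collecting the spread of the output. A minimal sketch using a toy degree-day melt model (the degree-day factor, its assumed Gaussian uncertainty, and the degree-day total are all hypothetical values, not from the study):

```python
import random
import statistics

def melt_model(ddf, pdd_sum):
    """Toy degree-day model: melt (mm w.e.) = degree-day factor * degree-day total."""
    return ddf * pdd_sum

def monte_carlo_uncertainty(n_runs, rng):
    """Propagate a Gaussian uncertainty in the degree-day factor through the
    model and return the mean and standard deviation of the modelled melt."""
    pdd_sum = 400.0                       # assumed positive-degree-day total (deg C day)
    runs = [melt_model(rng.gauss(6.0, 1.0), pdd_sum) for _ in range(n_runs)]
    return statistics.mean(runs), statistics.stdev(runs)
```

For this linear toy model the output spread is just the input spread scaled by the degree-day total; for a real energy-balance model the same sampling loop maps non-Gaussian, nonlinear input uncertainty onto the cumulative mass balance.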
Li, BC; Liu, F; Wen, XJ
2016-01-01
In an improved multisource thermal model, we systematically investigate the transverse momentum spectra in pp collisions at high energies ranging from 62.4 GeV to 7 TeV. The results are compared with the experimental data from RHIC and the LHC. Based on the collision energy dependence of the source-excitation factors, we estimate the transverse momentum spectra in pp collisions at the higher energies of potential future pp colliders operating at 33 and 100 TeV.
Modelling early stages of relativistic heavy-ion collisions
Directory of Open Access Journals (Sweden)
Ruggieri M.
2016-01-01
In this study we model the early-time dynamics of relativistic heavy-ion collisions by an initial color-electric field which then decays to a plasma by the Schwinger mechanism. The dynamics of the many-particle system produced by the decay is described by relativistic kinetic theory, taking into account the backreaction on the color field by solving the kinetic and field equations self-consistently. Our main results concern isotropization and thermalization for a 1+1D expanding geometry. In the case of small η/s (η/s ≲ 0.3) we find τ_isotropization ≈ 0.8 fm/c and τ_thermalization ≈ 1 fm/c, in agreement with the common lore of hydrodynamics.
Large-scale model-based assessment of deer-vehicle collision risk.
Hothorn, Torsten; Brandl, Roland; Müller, Jörg
2012-01-01
Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota.
Microscopic imaging through turbid media Monte Carlo modeling and applications
Gu, Min; Deng, Xiaoyuan
2015-01-01
This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte-Carlo simulation. It describes various gating mechanisms based on the physical differences between the unscattered and scattered photons and method for microscopic image reconstruction, using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy of tissue turbid media.
Kinetic Monte Carlo modelling of neutron irradiation damage in iron
Energy Technology Data Exchange (ETDEWEB)
Gamez, L. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Departamento de Fisica Aplicada, ETSII, UPM, Madrid (Spain)], E-mail: linarejos.gamez@upm.es; Martinez, E. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Lawrence Livermore National Laboratory, LLNL, CA 94550 (United States); Perlado, J.M.; Cepas, P. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Caturla, M.J. [Departamento de Fisica Aplicada, Universidad de Alicante, Alicante (Spain); Victoria, M. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Marian, J. [Lawrence Livermore National Laboratory, LLNL, CA 94550 (United States); Arevalo, C. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Hernandez, M.; Gomez, D. [CIEMAT, Madrid (Spain)
2007-10-15
Ferritic steels (FeCr-based alloys) are key materials needed to fulfill the requirements expected in future nuclear fusion facilities, both for magnetic and inertial confinement, as well as advanced fission reactors (GIV) and transmutation systems. Research in this field is a critical aspect of the European research program and abroad. Experimental and multiscale simulation methodologies are going hand in hand in increasing the knowledge of materials performance. At DENIM, progress is being made on specific parts of this linked simulation methodology, both for defect energetics and diffusion and for dislocation dynamics. In this study, results obtained from kinetic Monte Carlo simulations of neutron-irradiated Fe under different conditions are presented, using modified ad hoc parameters. A significant agreement with experimental measurements has been found for some of the parameterizations and mechanisms considered. The results of these simulations are discussed and compared with previous calculations.
Z_3 Polyakov Loop Models and Inverse Monte-Carlo Methods
Wozar, Christian; Uhlmann, Sebastian; Wipf, Andreas; Heinzl, Thomas
2007-01-01
We study effective Polyakov loop models for SU(3) Yang-Mills theory at finite temperature. A comprehensive mean field analysis of the phase diagram is carried out and compared to the results obtained from Monte-Carlo simulations. We find a rich phase structure including ferromagnetic and antiferromagnetic phases. Due to the presence of a tricritical point the mean field approximation agrees very well with the numerical data. Critical exponents associated with second-order transitions coincide with those of the Z_3 Potts model. Finally, we employ inverse Monte-Carlo methods to determine the effective couplings in order to match the effective models to Yang-Mills theory.
Modeling Vehicle Collision Angle in Traffic Crashes Based on Three-Dimensional Laser Scanning Data
Directory of Open Access Journals (Sweden)
Nengchao Lyu
2017-02-01
Full Text Available In road traffic accidents, the analysis of a vehicle’s collision angle plays a key role in identifying a traffic accident’s form and cause. However, because accurate estimation of vehicle collision angle involves many factors, it is difficult to accurately determine it in cases in which less physical evidence is available and there is a lack of monitoring. This paper establishes the mathematical relation model between collision angle, deformation, and normal vector in the collision region according to the equations of particle deformation and force in Hooke’s law of classical mechanics. At the same time, the surface reconstruction method suitable for a normal vector solution is studied. Finally, the estimation model of vehicle collision angle is presented. In order to verify the correctness of the model, verification of multi-angle collision experiments and sensitivity analysis of laser scanning precision for the angle have been carried out using three-dimensional (3D data obtained by a 3D laser scanner in the collision deformation zone. Under the conditions with which the model has been defined, validation results show that the collision angle is a result of the weighted synthesis of the normal vector of the collision point and the weight value is the deformation of the collision point corresponding to normal vectors. These conclusions prove the applicability of the model. The collision angle model proposed in this paper can be used as the theoretical basis for traffic accident identification and cause analysis. It can also be used as a theoretical reference for the study of the impact deformation of elastic materials.
Ruzic, David N.; Juliano, Daniel R.; Hayden, Douglas B.; Allain, Monica M. C.
1998-10-01
A code has been developed to model the transport of sputtered material in a modified industrial-scale magnetron. The device has a target diameter of 355 mm and was designed for 200 mm substrates. The chamber has been retrofitted with an auxiliary RF inductive plasma source located between the target and substrate. The source consists of a water-cooled copper coil immersed in the plasma, but with a diameter large enough to prevent shadowing of the substrate. The RF plasma, target sputter flux distribution, background gas conditions, and geometry are all inputs to the code. The plasma is characterized via a combination of a Langmuir probe apparatus and the results of a simple analytic model of the ICP system. The source of sputtered atoms from the target is found through measurements of the depth of the sputter track in an eroded target, and the distribution of the sputter flux is calculated via VFTRIM. A Monte Carlo routine tracks high energy atoms emerging from the target as they move through the chamber and undergo collisions with the electrons and background gas. The sputtered atoms are tracked by this routine whatever their electronic state (neutral, excited, or ion). If the energy of a sputtered atom decreases to near-thermal levels, then it exits the Monte Carlo routine and is tracked with a simple diffusion model. In this way, all sputtered atoms are followed until they hit and stick to a surface, and the velocity distribution of the sputtered atom population (including electronic state information) at each surface, especially the substrate, is calculated. Through the use of this simulation the coil parameters and geometry can be tailored to maximize deposition rate and sputter flux uniformity.
Sarkadi, L.
2016-09-01
The ionization of the uracil molecule induced by heavy-ion impact has been investigated using the classical trajectory Monte Carlo (CTMC) method. Assuming the validity of the independent-particle model approximation, the collision problem is solved by considering the three-body dynamics of the projectile, an active electron and the molecule core. The interaction of the molecule core with the other two particles is described by a multi-center potential built from screened atomic potentials. The cross section differential with respect to the energy and angle of the electrons ejected in the ionization process has been calculated for an impact of 3.5 MeV/u C^6+ ions. Total electron emission cross sections (TCS) are presented for C^q+ (q = 0-6) and O^6+ projectiles as a function of the impact energy in the range from 10 keV/u to 10 MeV/u. The dependence of the TCS on the charge state of the projectile has been investigated for 2.5 MeV/u O^q+ (q = 4-8) and F^q+ (q = 5-9) ions. The results of the calculations are compared with available experimental data and the predictions of other theoretical models: the first Born approximation with correct boundary conditions (CB1), the continuum-distorted-wave eikonal-initial-state approach (CDW-EIS), and the combined classical-trajectory Monte Carlo-classical over-the-barrier model (CTMC-COB).
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
Benchmark calculation of no-core Monte Carlo shell model in light nuclei
Abe, T.; Otsuka, T.; Shimizu, N.; Utsuno, Y.; Vary, J.P.; doi: 10.1063/1.3584062
2011-01-01
The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.
O'Neill, Philip D
2002-01-01
Recent Bayesian methods for the analysis of infectious disease outbreak data using stochastic epidemic models are reviewed. These methods rely on Markov chain Monte Carlo methods. Both temporal and non-temporal data are considered. The methods are illustrated with a number of examples featuring different models and datasets.
Universality of the Ising and the S=1 model on Archimedean lattices: A Monte Carlo determination
Malakis, A.; Gulpinar, G.; Karaaslan, Y.; Papakonstantinou, T.; Aslan, G.
2012-03-01
The S=1/2 and S=1 Ising models are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provides, with high accuracy, all critical exponents which, as expected, are the same as the well-known exact values of the 2D Ising model. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model also obeys the 2D Ising model critical exponents very well. As a result, we find that recent Monte Carlo simulations and attempts to define an effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.
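The Wolff component of the hybrid Metropolis-Wolff scheme grows and flips a whole cluster per move, which suppresses critical slowing down. A generic sketch for a square lattice (the paper works on Archimedean lattices; everything here, including J = kB = 1 and the lattice shape, is illustrative):

```python
import math
import random

def wolff_update(spins, L, T, rng):
    """One Wolff cluster flip on an L x L periodic Ising lattice (J = 1).
    Bonds between aligned neighbours are activated with probability
    1 - exp(-2/T); the grown cluster is flipped as a whole."""
    p_add = 1.0 - math.exp(-2.0 / T)
    seed = (rng.randrange(L), rng.randrange(L))
    cluster_spin = spins[seed]
    stack, cluster = [seed], {seed}
    while stack:
        i, j = stack.pop()
        for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                   (i, (j + 1) % L), (i, (j - 1) % L)):
            if nb not in cluster and spins[nb] == cluster_spin \
                    and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:          # flip the whole cluster at once
        spins[site] *= -1
    return len(cluster)
```

At low temperature the bond probability is close to one, so nearly the entire ordered lattice joins each cluster and the magnetization magnitude is preserved under repeated flips.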
Markov chain Monte Carlo methods for state-space models with point process observations.
Yuan, Ke; Girolami, Mark; Niranjan, Mahesan
2012-06-01
This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.
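The baseline against which such samplers are compared is usually random-walk Metropolis. A minimal sketch of that baseline for a one-dimensional target log-density (the target, step size, and names are illustrative, not from the letter):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step, rng):
    """Random-walk Metropolis sampler for a 1-D target log-density."""
    samples, x, lp = [], x0, log_post(x0)
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)   # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

Run against a standard normal target, the chain's sample mean and variance should converge to 0 and 1; gradient-based samplers such as Hamiltonian Monte Carlo aim to reach the same answer with far less autocorrelation.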
FREYA-a new Monte Carlo code for improved modeling of fission chains
Energy Technology Data Exchange (ETDEWEB)
Hagmann, C A; Randrup, J; Vogt, R L
2012-06-12
A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.
Modeling of Ship Collision Risk Index Based on Complex Plane and Its Realization
Directory of Open Access Journals (Sweden)
Xiaoqin Xu
2016-07-01
Ship collision risk index is a basic and important concept in the domain of ship collision avoidance. In this paper, the advantages and deficiencies of the various calculation methods of ship collision risk index are pointed out. Then a ship collision risk model based on the complex plane, which can well make up for the deficiencies of the widely used evaluation model proposed by Kearon, J. and Liu Ruru, is proposed. On this basis, the calculation method of the collision risk index under the encountering situation of multiple ships is constructed, and the three-dimensional image and spatial curve of the risk index are figured out. Finally, a single-chip microcomputer is used to realize the model. Attaching this single-chip microcomputer to ARPA is helpful to the decision-making of marine navigators.
Statistical model predictions for p+p and Pb+Pb collisions at LHC
Kraus, I.; Cleymans, J.; Oeschler, H.; Redlich, K.; Wheaton, S.
2009-01-01
Particle production in p+p and central collisions at LHC is discussed in the context of the statistical thermal model. For heavy-ion collisions, predictions of various particle ratios are presented. The sensitivity of several ratios on the temperature and the baryon chemical potential is studied in
Midrapidity inclusive densities in high energy pp collisions in additive quark model
Shabelski, Yu. M.; Shuvaev, A. G.
2016-08-01
High energy (CERN SPS and LHC) inelastic pp (p̄p) scattering is treated in the framework of the additive quark model together with Pomeron exchange theory. We extract the midrapidity inclusive density of the charged secondaries produced in a single quark-quark collision and investigate its energy dependence. Predictions for πp collisions are presented.
A semi-holographic model for heavy-ion collisions
Iancu, Edmond
2014-01-01
We develop a semi-holographic model for the out-of-equilibrium dynamics during the partonic stages of an ultrarelativistic heavy-ion collision. The model combines a weakly-coupled hard sector, involving gluon modes with energy and momenta of the order of the saturation momentum and relatively large occupation numbers, with a strongly-coupled soft sector, which physically represents the soft gluons radiated by the hard partons. The hard sector is described by perturbative QCD, more precisely, by its semi-classical approximation (the classical Yang-Mills equations) which becomes appropriate when the occupation numbers are large. The soft sector is described by a marginally deformed conformal field theory, which in turn admits a holographic description in terms of classical Einstein's equations in $AdS_5$ with a minimally coupled massless `dilaton'. The model involves two free parameters which characterize the gauge-invariant couplings between the hard and soft sectors. Via these couplings, the hard modes provide...
Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...
Single-cluster-update Monte Carlo method for the random anisotropy model
Rößler, U. K.
1999-06-01
A Wolff-type cluster Monte Carlo algorithm for random magnetic models is presented. The algorithm is demonstrated to significantly reduce the critical slowing down for planar random anisotropy models with weak anisotropy strength. Dynamic exponents z of the cluster algorithm are estimated for models with a ratio of anisotropy to exchange constant of D/J=1.0 on cubic lattices in three dimensions. For these models, critical exponents are derived from a finite-size scaling analysis.
Energy Technology Data Exchange (ETDEWEB)
Davis, J.E.; Eddy, M.J.; Sutton, T.M.; Altomari, T.J.
2007-03-01
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
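The half-space representation targeted by such a conversion defines a geometry cell implicitly: a point belongs to the cell when it satisfies every half-space condition g(x, y, z) <= 0. A minimal sketch of point membership testing (the unit-cube example is illustrative, not from the paper):

```python
def inside_cell(point, half_spaces):
    """A CSG cell defined as the intersection of half-spaces g(x,y,z) <= 0;
    the point is inside iff every inequality holds."""
    return all(g(*point) <= 0.0 for g in half_spaces)

# A unit cube expressed as six planar half-spaces.
unit_cube = [
    lambda x, y, z: -x,        # x >= 0
    lambda x, y, z: x - 1.0,   # x <= 1
    lambda x, y, z: -y,        # y >= 0
    lambda x, y, z: y - 1.0,   # y <= 1
    lambda x, y, z: -z,        # z >= 0
    lambda x, y, z: z - 1.0,   # z <= 1
]
```

Unions of cells are handled analogously with `any(...)` over sub-cells. The conversion problem the paper addresses is producing such implicit inequalities from the B-spline boundary surfaces of a CAD model.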
Efficient modelling of particle collisions using a non-linear viscoelastic contact force
Ray, Shouryya; Fröhlich, Jochen
2015-01-01
In this paper the normal collision of spherical particles is investigated. The particle interaction is modelled in a macroscopic way using the Hertzian contact force with additional linear damping. The goal of the work is to develop an efficient approximate solution of sufficient accuracy for this problem which can be used in soft-sphere collision models for Discrete Element Methods and for particle transport in viscous fluids. First, by the choice of appropriate units, the number of governing parameters of the collision process is reduced to one, thus providing a dimensionless parameter that characterizes all such collisions up to dynamic similitude. It is a simple combination of known material parameters as well as initial conditions. A rigorous calculation of the collision time and restitution coefficient from the governing equations, in the form of a series expansion in this parameter, is provided. Such a first-principles calculation is particularly interesting from a theoretical perspective. Since the gov...
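The governing equation for the contact phase, m x'' = -k x^(3/2) - c x' for overlap x > 0, can also be integrated numerically to obtain the restitution coefficient that the paper expands in series form. A semi-implicit Euler sketch (parameter values and the function name are illustrative; this is the reference computation, not the paper's approximate solution):

```python
def restitution(v0, k, c, m, dt=1e-6):
    """Integrate m*x'' = -k*x**1.5 - c*x' through one contact event
    (overlap x > 0) with semi-implicit Euler; return -v_out/v_in."""
    x, v = v0 * dt, v0            # tiny initial overlap to enter contact
    while x > 0.0:
        a = (-k * x ** 1.5 - c * v) / m   # Hertz spring + linear damper
        v += a * dt                        # update velocity first
        x += v * dt                        # then position (symplectic)
    return -v / v0
```

With zero damping the integration should return a restitution coefficient of one (elastic Hertz contact); any positive damping reduces it, in line with the single dimensionless parameter controlling the collision.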
Energy Technology Data Exchange (ETDEWEB)
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
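A minimal sketch (not taken from the lecture notes themselves) of two of the fundamentals the course covers, random sampling and a Monte Carlo tally with its statistical error:

```python
# Sketch: Monte Carlo estimate of an integral with a one-sigma statistical
# error bar, illustrating random sampling and tallying. The integrand and
# sample size are arbitrary choices for demonstration.
import math
import random

def mc_integral(f, n, seed=42):
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]   # uniform sampling on [0, 1]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)                 # estimate, one-sigma error

# Exact answer for comparison: the integral of exp(x) over [0, 1] is e - 1
est, err = mc_integral(math.exp, 100_000)
```

The error estimate shrinks as 1/√N, the characteristic convergence rate of Monte Carlo tallies.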
Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.
2008-01-01
There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled
Preliminary Monte Carlo Results for the Three-Dimensional Holstein Model
Institute of Scientific and Technical Information of China (English)
吴焰立; 刘川; 罗强
2003-01-01
Monte Carlo simulations are used to study the three-dimensional Holstein model. The relationship between the band filling and the chemical potential is obtained for various phonon frequencies and temperatures. The energy of a single electron or a hole is also calculated as a function of the lattice momenta.
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation
Minasny, B.; Vrugt, J.A.; McBratney, A.B.
2011-01-01
This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior distributi
A study of the XY model by the Monte Carlo method
Suranyi, Peter; Harten, Paul
1987-01-01
The massively parallel processor is used to perform Monte Carlo simulations for the two dimensional XY model on lattices of sizes up to 128 x 128. A parallel random number generator was constructed, finite size effects were studied, and run times were compared with those on a CRAY X-MP supercomputer.
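A serial Metropolis sketch of the 2D XY model with H = −J Σ cos(θᵢ − θⱼ) (J = 1), purely illustrative; it does not reproduce the paper's massively parallel implementation, lattice sizes, or analysis.

```python
# Sketch: Metropolis sweeps for the 2D XY model on a small periodic lattice.
# Lattice size, temperature, and sweep count are illustrative assumptions.
import math
import random

def energy(theta, L):
    """H = -sum over nearest-neighbour bonds of cos(theta_i - theta_j)."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            E -= math.cos(theta[i][j] - theta[(i + 1) % L][j])
            E -= math.cos(theta[i][j] - theta[i][(j + 1) % L])
    return E

def sweep(theta, L, T, rng):
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old = theta[i][j]
        new = old + rng.uniform(-1.0, 1.0)
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = theta[(i + di) % L][(j + dj) % L]
            dE += math.cos(old - nb) - math.cos(new - nb)
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            theta[i][j] = new          # Metropolis acceptance

rng = random.Random(1)
L = 8
theta = [[rng.uniform(0.0, 2.0 * math.pi) for _ in range(L)] for _ in range(L)]
E_hot = energy(theta, L)               # random start: energy near zero
for _ in range(200):
    sweep(theta, L, T=0.5, rng=rng)
E_cold = energy(theta, L)              # low T: energy strongly negative
```

Finite-size studies like the paper's would repeat this at several L and temperatures and compare observables across lattice sizes.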
Generic Form of Bayesian Monte Carlo For Models With Partial Monotonicity
Rajabalinejad, M.
2012-01-01
This paper presents a generic method for the safety assessment of models with partial monotonicity. For this purpose, a Bayesian interpolation method is developed and implemented in the Monte Carlo process. The integrated approach is a generalization of the recently developed techniques used in safet
An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…
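A toy sketch of the MCMC idea for the one-parameter logistic (Rasch) model: Metropolis sampling of a single item difficulty b, with person abilities treated as known and a standard-normal prior on b. The data are simulated; nothing here reproduces the study's Gibbs sampler or its four empirical datasets.

```python
# Toy Metropolis sampler for one Rasch item difficulty b, given known person
# abilities (an illustrative simplification; the cited study estimates item
# and person parameters jointly via Gibbs sampling).
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(0)
thetas = [rng.gauss(0.0, 1.0) for _ in range(500)]   # known person abilities
b_true = 0.7
data = [1 if rng.random() < sigmoid(t - b_true) else 0 for t in thetas]

def log_post(b):
    lp = -0.5 * b * b                                # N(0, 1) prior on b
    for t, y in zip(thetas, data):
        p = sigmoid(t - b)
        lp += math.log(p if y else 1.0 - p)
    return lp

b, lp = 0.0, log_post(0.0)
chain = []
for step in range(3000):
    prop = b + rng.gauss(0.0, 0.2)                   # random-walk proposal
    lp_prop = log_post(prop)
    if rng.random() < math.exp(min(0.0, lp_prop - lp)):
        b, lp = prop, lp_prop                        # Metropolis accept
    if step >= 1000:                                 # discard burn-in
        chain.append(b)

b_hat = sum(chain) / len(chain)                      # posterior mean of b
```

The posterior mean recovers the generating difficulty to within sampling error, the same kind of parameter-recovery check the paper performs against maximum likelihood methods.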
Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox
DEFF Research Database (Denmark)
Nonejad, Nima
This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Generic form of Bayesian Monte Carlo for models with partial monotonicity
Rajabalinejad, M.; Spitas, C.
2012-01-01
This paper presents a generic method for the safety assessment of models with partial monotonicity. For this purpose, a Bayesian interpolation method is developed and implemented in the Monte Carlo process. The integrated approach is a generalization of the recently developed techniques used in safet
LASER-DOPPLER VELOCIMETRY AND MONTE-CARLO SIMULATIONS ON MODELS FOR BLOOD PERFUSION IN TISSUE
DEMUL, FFM; KOELINK, MH; KOK, ML; HARMSMA, PJ; GREVE, J; GRAAFF, R; AARNOUDSE, JG
1995-01-01
Laser Doppler flow measurements and Monte Carlo simulations on small blood perfusion flow models at 780 nm are presented and compared. The dimensions of the optical sample volume are investigated as functions of the distance of the laser to the detector and as functions of the angle of penetration o
Ron, Dorit; Brandt, Achi; Swendsen, Robert H
2017-05-01
We present a surprisingly simple approach to high-accuracy calculations of the critical properties of the three-dimensional Ising model. The method uses a modified block-spin transformation with a tunable parameter to improve convergence in the Monte Carlo renormalization group. The block-spin parameter must be tuned differently for different exponents to produce optimal convergence.
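For orientation, the plain majority-rule block-spin step underlying Monte Carlo renormalization-group calculations can be sketched as below; the paper's tunable modification of the transformation is not reproduced here, and the small 2D lattice is an illustrative stand-in for the 3D Ising model.

```python
# Sketch: majority-rule block-spin coarse-graining of an Ising configuration,
# the basic move of Monte Carlo renormalization-group studies. Lattice size
# and dimensionality are illustrative assumptions.
import random

def block_spin(spins, L, b=2, rng=None):
    """Coarse-grain an L x L Ising configuration over b x b blocks."""
    rng = rng or random.Random(0)
    Lb = L // b
    out = [[0] * Lb for _ in range(Lb)]
    for I in range(Lb):
        for J in range(Lb):
            s = sum(spins[I * b + i][J * b + j]
                    for i in range(b) for j in range(b))
            # majority rule; ties (possible when b*b is even) broken at random
            out[I][J] = 1 if s > 0 else (-1 if s < 0 else rng.choice((1, -1)))
    return out

rng = random.Random(3)
L = 8
spins = [[rng.choice((1, -1)) for _ in range(L)] for _ in range(L)]
blocked = block_spin(spins, L)        # 4 x 4 coarse-grained configuration
```

Iterating this map on equilibrium configurations and matching correlation functions across blocking levels is what yields the critical exponents.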
Hanford, Amanda D; O'Connor, Patrick D; Anderson, James B; Long, Lyle N
2008-06-01
In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of frequencies. This particle method allows for treatment of acoustic phenomena at high Knudsen numbers, corresponding to low densities and a high ratio of the molecular mean free path to wavelength. Different methods to model the internal degrees of freedom of diatomic molecules and the exchange of translational, rotational and vibrational energies in collisions are employed in the current simulations of a diatomic gas. One of these methods is the fully classical rigid-rotor/harmonic-oscillator model for rotation and vibration. A second method takes into account the discrete quantum energy levels for vibration with the closely spaced rotational levels classically treated. This method gives a more realistic representation of the internal structure of diatomic and polyatomic molecules. Applications of these methods are investigated in diatomic nitrogen gas in order to study the propagation of sound and its attenuation and dispersion along with their dependence on temperature. With the direct simulation method, significant deviations from continuum predictions are also observed for high Knudsen number flows.
Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling
Vrugt, J.A.; Diks, C.G.H.; Clark, M.
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t
Collision Geometry and Flow in Uranium+Uranium Collisions
Goldschmidt, Andy; Shen, Chun; Heinz, Ulrich
2015-01-01
Using event-by-event viscous fluid dynamics to evolve fluctuating initial density profiles from the Monte-Carlo Glauber model for U+U collisions, we report a "knee"-like structure in the elliptic flow as a function of collision centrality, located around the 0.5% most central collisions as measured by the final charged multiplicity. This knee is due to the preferential selection of tip-on-tip collision geometries by a high-multiplicity trigger. Such a knee structure is not seen in the STAR data. This rules out the two-component MC-Glauber model for initial energy and entropy production. Hence an enrichment of tip-tip configurations by triggering solely on high-multiplicity in the U+U collisions does not work. On the other hand, by using the Zero Degree Calorimeters (ZDCs) coupled with event-shape engineering such a selection is possible. We identify the selection purity of body-body and tip-tip events in full-overlap U+U collisions. By additionally constraining the asymmetry of the ZDC signals we can further ...
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
Directory of Open Access Journals (Sweden)
Shchekoturova S. D.
2015-04-01
Full Text Available The article presents an analysis of the innovative activity of four Russian metallurgical enterprises: "Ruspolimet", JSC "Ural Smithy", JSC "Stupino Metallurgical Company" and JSC "VSMPO", via mathematical modeling using the Monte Carlo method. The innovative activity of the Russian metallurgical companies was assessed over a five-year dynamics, from 2007 to 2011. The current innovative activity was assessed by calculating an integral index based on six indicators: the proportion of staff employed in R & D; the level of development of new technology; the degree of development of new products; the share of material resources for R & D; the degree of security of enterprise intellectual property; and the share of investment in innovative projects. On the basis of these data, the integral indicator of the innovative activity of the metallurgical companies was calculated by the well-known method of weighting coefficients. A comparative analysis of the integral indicators of the innovative activity of the considered companies made it possible to rank their levels of innovative activity and to characterize the current state of their business. Based on the Monte Carlo method, a variation interval of the integral indicator was obtained, and detailed recommendations for choosing a strategy of innovative development of metallurgical enterprises were given as well
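A hedged reconstruction of the scheme the article describes: an integral index as a weighted sum of six normalized indicators, with a Monte Carlo variation interval obtained by perturbing the indicators. Every weight, indicator value, and perturbation scale below is invented for illustration; none comes from the article's data.

```python
# Sketch: integral innovation index as a weighted sum of six indicators, with
# a Monte Carlo variation interval. All numbers are illustrative assumptions.
import random

weights = [0.20, 0.15, 0.20, 0.15, 0.15, 0.15]     # assumed to sum to 1
indicators = [0.6, 0.4, 0.7, 0.3, 0.5, 0.2]         # six indicators in [0, 1]

def integral_index(ind):
    return sum(w * x for w, x in zip(weights, ind))

rng = random.Random(0)
draws = []
for _ in range(10_000):
    perturbed = [min(1.0, max(0.0, x + rng.gauss(0.0, 0.05)))
                 for x in indicators]
    draws.append(integral_index(perturbed))
draws.sort()
lo, hi = draws[250], draws[-251]        # central 95% variation interval
base = integral_index(indicators)
```

Ranking companies then amounts to comparing their `base` values, with `[lo, hi]` quantifying the robustness of that ranking.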
Monte-Carlo Inversion of Travel-Time Data for the Estimation of Weld Model Parameters
Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.
2011-06-01
The quality of ultrasonic array imagery is adversely affected by uncompensated variations in the medium properties. A method for estimating the parameters of a general model of an inhomogeneous anisotropic medium is described. The model is comprised of a number of homogeneous sub-regions with unknown anisotropy. Bayesian estimation of the unknown model parameters is performed via a Monte-Carlo Markov chain using the Metropolis-Hastings algorithm. Results are demonstrated using simulated weld data.
Improvements of the Analytical Model of Monte Carlo
Institute of Scientific and Technical Information of China (English)
HE Qing-Fang; XU Zheng; TENG Feng; LIU De-Ang; XU Xu-Rong
2006-01-01
By extending the conduction band structure, we set up a new analytical model in ZnS. Comparing the results with both the old analytical model and the full band model, we find that the new model is in reasonable agreement with the full band method while improving the calculation precision. Another important contribution is a reduction of the program computation time, achieved by fitting the scattering rate curves to data.
Monte Carlo modeling of a Novalis Tx Varian 6 MV with HD-120 multileaf collimator.
Vazquez-Quino, Luis Alberto; Massingill, Brian; Shi, Chengyu; Gutierrez, Alonso; Esquivel, Carlos; Eng, Tony; Papanikolaou, Nikos; Stathakis, Sotirios
2012-09-06
A Monte Carlo model of the Novalis Tx linear accelerator equipped with a high-definition multileaf collimator (HD-120 MLC) was commissioned using ionization chamber measurements in water. All measurements in water were performed using a liquid-filled ionization chamber. Film measurements were made using EDR2 film in solid water. Open rectangular fields defined by the jaws or the HD-MLC were used for comparison against measurements. Furthermore, inter- and intraleaf leakage calculated by the Monte Carlo model was compared against film measurements. The statistical uncertainty of the Monte Carlo calculations was less than 1% for all simulations. Results for all regular field sizes show excellent agreement with commissioning data (percent depth-dose curves and profiles), well within 1% difference in relative dose and 1 mm distance to agreement. The computed leakage through the HD-MLC shows good agreement with film measurements. The Monte Carlo model developed in this study accurately represents the new Novalis Tx Varian linac with HD-MLC and can be used for reliable patient dose calculations.
High-resolution and Monte Carlo additions to the SASKTRAN radiative transfer model
Directory of Open Access Journals (Sweden)
D. J. Zawada
2015-06-01
Full Text Available The Optical Spectrograph and InfraRed Imaging System (OSIRIS instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2 %. As an example case for both models, Odin–OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4 % in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. The bias is largest when the sun is near the horizon and the solar scattering angle is far from 90°. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin–OSIRIS geometries.
High resolution and Monte Carlo additions to the SASKTRAN radiative transfer model
Directory of Open Access Journals (Sweden)
D. J. Zawada
2015-03-01
Full Text Available The OSIRIS instrument on board the Odin spacecraft has been measuring limb scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high spatial resolution mode, and a Monte Carlo mode. The high spatial resolution mode is a successive orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2%. As an example case for both models, Odin-OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high resolution model. A systematic bias of up to 4% in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin-OSIRIS geometries.
Monte Carlo study of single-barrier structure based on exclusion model full counting statistics
Institute of Scientific and Technical Information of China (English)
Chen Hua; Du Lei; Qu Cheng-Li; He Liang; Chen Wen-Hao; Sun Peng
2011-01-01
Different from the usual full counting statistics work, which focuses on computing higher-order cumulants from a cumulant generating function in electrical structures, a Monte Carlo simulation of a single-barrier structure is performed to obtain time series for two types of widely applicable exclusion models: the counter-flows model and the tunnel model. Using high-order spectrum analysis in MATLAB, the Monte Carlo method is validated: the first four cumulants extracted from the time series agree with those from the cumulant generating function. A comparison of the counter-flows model and the tunnel model in a single-barrier structure shows that the essential difference between them is that the Pauli principle holds strictly in the former and only statistically in the latter.
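The cumulant check described above can be sketched in miniature: extract the first four cumulants from a simulated count time series. The Poisson process used here is a convenient stand-in (all of its cumulants equal the rate), not the paper's exclusion models, and MATLAB's spectral tools are not involved.

```python
# Sketch: first four cumulants of a count time series, checked against a
# Poisson process whose cumulants are all equal to the rate lam.
import math
import random

def cumulants4(xs):
    """First four cumulants from the mean and central moments of the series."""
    n = len(xs)
    m = sum(xs) / n
    c2, c3, c4 = (sum((x - m) ** k for x in xs) / n for k in (2, 3, 4))
    return m, c2, c3, c4 - 3.0 * c2 ** 2   # k4 = m4 - 3*m2^2

def poisson(lam, rng):
    """Knuth's sampling method, adequate for small rates."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(0)
lam = 4.0
series = [poisson(lam, rng) for _ in range(200_000)]
k1, k2, k3, k4 = cumulants4(series)        # all four should be near lam
```

Agreement of the extracted cumulants with the known values is the same style of validation the paper performs against its cumulant generating function.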
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome
2016-05-01
We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides.
A Monte Carlo Solution of the Human Ballistic Mortality Model
1978-08-01
to obtain a damage D for the total wound. This addition law is averaged over the total soldier. W.B. Beverly, "A Human Ballistic Mortality Model," to be...January 1970. C.A. Stanley and K. Brown, "A Computer Man Anatomical Model," Ballistic Research Laboratory Report No. 02080, May 1978
Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion
DEFF Research Database (Denmark)
Zunino, Andrea; Lange, Katrine; Melnikova, Yulia;
2014-01-01
, multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from...
A Cross-domain Survey of Metrics for Modelling and Evaluating Collisions
Directory of Open Access Journals (Sweden)
Jeremy A. Marvel
2014-09-01
Full Text Available This paper provides a brief survey of the metrics for measuring probability, degree, and severity of collisions as applied to autonomous and intelligent systems. Though not exhaustive, this survey evaluates the state-of-the-art of collision metrics, and assesses which are likely to aid in the establishment and support of autonomous system collision modelling. The survey includes metrics for 1) robot arms; 2) mobile robot platforms; 3) nonholonomic physical systems such as ground vehicles, aircraft, and naval vessels; and 4) virtual and mathematical models.
Anomalous transport model study of chiral magnetic effects in heavy ion collisions
Sun, Yifeng; Li, Feng
2016-01-01
Using an anomalous transport model for massless quarks, we study the effect of magnetic field on the elliptic flows of quarks and antiquarks in relativistic heavy ion collisions. With initial conditions from a blast wave model and assuming that the strong magnetic field produced in non-central heavy ion collisions can last for a sufficiently long time, we obtain an appreciable electric quadrupole moment in the transverse plane of a heavy ion collision, which subsequently leads to a splitting between the elliptic flows of quarks and antiquarks as expected from the chiral magnetic wave formed in the produced QGP and observed in experiments at the Relativistic Heavy Ion Collider (RHIC).
From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions
Energy Technology Data Exchange (ETDEWEB)
Venugopalan, R.
2010-07-22
We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electro-Ion Collider.
TRIPOLI-4{sup ®} Monte Carlo code ITER A-lite neutronic model validation
Energy Technology Data Exchange (ETDEWEB)
Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)
2014-10-15
3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4{sup ®} is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4{sup ®} on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4{sup ®} A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4{sup ®} is shown; discrepancies mainly lie within the statistical error.
Monte Carlo Simulation of the Potts Model on a Dodecagonal Quasiperiodic Structure
Institute of Scientific and Technical Information of China (English)
WEN Zhang-Bin; HOU Zhi-Lin; FU Xiu-Jun
2011-01-01
By means of a Monte Carlo simulation, we study the three-state Potts model on a two-dimensional quasiperiodic structure based on a dodecagonal cluster covering pattern. The critical temperature and exponents are obtained from finite-size scaling analysis. It is shown that the Potts model on the quasiperiodic lattice belongs to the same universal class as those on periodic ones.
Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models
Si, Lisha; Liao, Xiaoyun; Zhou, Nengji
2016-12-01
With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5) , ν = 1.32(3) , z = 1.12(1) , and ζ = 0.90(1) , quite different from that of the quenched Edwards-Wilkinson equation.
Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF
Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.
1993-01-01
Characterizing the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved, and should thus improve the reliability of predicting the in-space durability of materials based on ground laboratory testing. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between the observed LDEF results and the Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.
Simulation model based on Monte Carlo method for traffic assignment in local area road network
Institute of Scientific and Technical Information of China (English)
Yuchuan DU; Yuanjing GENG; Lijun SUN
2009-01-01
For a local area road network, the available traffic data are the flow volumes at the key intersections, not a complete OD matrix. Considering the circumstances and data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good temporal stability, the turning ratio is adopted as the key parameter of this model. The formulation for local area road network assignment problems is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. Comparing surveyed traffic flow data with the flows determined by the model verifies the applicability and validity of the proposed methodology.
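The core idea above can be sketched as follows: assign trips by simulating each vehicle's path as a sequence of random turning movements governed by turning ratios. The tiny four-node network and the ratios below are invented for illustration; they are not the paper's network or data.

```python
# Sketch: Monte Carlo traffic assignment driven by turning ratios on a toy
# network. Nodes, links, and ratios are illustrative assumptions.
import random

# successors[node] = list of (downstream node, turning ratio); ratios sum to 1
successors = {
    "A": [("B", 0.7), ("C", 0.3)],
    "B": [("D", 1.0)],
    "C": [("D", 1.0)],
    "D": [],                              # destination: the trip ends here
}

def assign(n_vehicles, origin="A", seed=0):
    rng = random.Random(seed)
    link_flow = {}
    for _ in range(n_vehicles):
        node = origin
        while successors[node]:
            r, acc = rng.random(), 0.0
            for nxt, p in successors[node]:   # sample one turning movement
                acc += p
                if r < acc:
                    break
            link_flow[(node, nxt)] = link_flow.get((node, nxt), 0) + 1
            node = nxt
    return link_flow

flows = assign(10_000)    # link ("A","B") should carry about 70% of the trips
```

In the paper's setting, the sampled link flows would then be compared against surveyed intersection volumes to validate the assignment.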
Monte Carlo Based Toy Model for Fission Process
Kurniadi, R; Viridi, S
2014-01-01
Fission yields have traditionally been calculated by two approaches: the macroscopic approach and the microscopic approach. This work proposes another approach, in which the nucleus is treated as a toy model. The toy model of fission yield is a preliminary method that uses random numbers as the backbone of the calculation. Because the nucleus is a toy model, the fission process does not completely represent the real fission process in nature. A fission event is modeled by one random number. The number is taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. The toy model is formed by a Gaussian distribution of random numbers that randomizes a distance-like quantity between a particle and a central point. The scission process is started by splitting the compound nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters such as the means ({\mu}CN, {\mu}L, {\mu}R), and standard d...
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy is proposed by combining an improved response surface model (IRSM) with an advanced Monte Carlo (MC) method, based on experimental static tests, prior information and uncertainties. First, the IRSM and its mathematical model are developed with emphasis on the moving least-squares method, and the advanced MC simulation method is developed based on Latin hypercube sampling. The SMU procedure is then presented with an experimental static test for a complex structure. SMUs of a simply-supported beam and an aeroengine stator system (casings) were performed to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy achieves high computational precision and efficiency for complex structural systems; (2) the IRSM is an effective model, as its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; and (3) the advanced MC method markedly decreases the number of samples drawn from finite element simulations and the elapsed time of SMU. These efforts provide a promising SMU strategy for complex structures and enrich the theory of model updating.
Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
Monte Carlo (MC) simulation is considered the most accurate method for radiation dose calculations. The accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we present an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called the phase-space ring (PSR) was proposed. It contains a group of particles that are of the same type and close in energy and radial distance to the center of the phase-space plane. The model parameterizes the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. For a primary photon PSR, the particle direction is assumed to originate from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distributi...
Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach
Xu, Lixin; Lu, Jianbo
2010-01-01
We use the Markov Chain Monte Carlo method to investigate global constraints on the generalized Chaplygin gas (GCG) model as a unification of dark matter and dark energy, using the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are, $\\Ome...
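The core of such an analysis is a random-walk Metropolis sampler over the model parameters. A minimal sketch in Python, using a toy one-dimensional Gaussian log-posterior in place of the actual GCG likelihood (the target function, step size and starting point here are illustrative assumptions, not the authors' pipeline):

```python
import math
import random

def metropolis(log_post, theta0, step, n_samples, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if lp_prop - lp >= 0 or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy posterior: a unit-width Gaussian centred at 0.3 (illustrative only)
chain = metropolis(lambda t: -0.5 * (t - 0.3) ** 2, 0.0, 0.5, 20000)
mean = sum(chain) / len(chain)
```

A real cosmological fit would replace the toy log-posterior with the sum of the SNIa, OHD, BAO and CMB chi-square terms and sample a parameter vector rather than a scalar.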
Direct Monte Carlo Measurement of the Surface Tension in Ising Models
Hasenbusch, M
1992-01-01
I present a cluster Monte Carlo algorithm that gives direct access to the interface free energy of Ising models. The basic idea is to simulate an ensemble that consists of both configurations with periodic and with antiperiodic boundary conditions. A cluster algorithm is provided that efficiently updates this joint ensemble. The interface tension is obtained from the ratio of the numbers of configurations with periodic and antiperiodic boundary conditions, respectively. The method is tested for the 3-dimensional Ising model.
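The ratio estimator can be illustrated with a minimal single-spin-flip sketch: a 2D Ising lattice whose x-direction boundary condition (periodic or antiperiodic) is itself a dynamical variable, updated by a Metropolis move on the seam bonds. This is a toy stand-in for the paper's 3D cluster algorithm; lattice size, coupling and sweep count are illustrative assumptions:

```python
import math
import random

def ising_boundary_ensemble(L=8, beta=0.1, sweeps=2000, seed=1):
    """2D Ising model in the joint ensemble of periodic (eps = +1) and
    antiperiodic (eps = -1) x-boundary conditions; returns how often each
    sector was visited."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    eps, n_p, n_a = 1, 0, 0

    def bond(x):
        # bonds crossing the seam (column L-1 to column 0) carry the sign eps
        return eps if x == L - 1 else 1

    for _ in range(sweeps):
        for _ in range(L * L):                     # one Metropolis sweep
            x, y = rng.randrange(L), rng.randrange(L)
            h = (bond(x) * s[(x + 1) % L][y]
                 + bond((x - 1) % L) * s[(x - 1) % L][y]
                 + s[x][(y + 1) % L] + s[x][(y - 1) % L])
            dE = 2 * s[x][y] * h
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[x][y] = -s[x][y]
        # Metropolis move on the boundary condition itself
        seam = sum(s[L - 1][y] * s[0][y] for y in range(L))
        if rng.random() < math.exp(-2.0 * beta * eps * seam):
            eps = -eps
        if eps == 1:
            n_p += 1
        else:
            n_a += 1
    return n_p, n_a

n_p, n_a = ising_boundary_ensemble()
# interface free energy per unit length: sigma ~ -ln(n_a / n_p) / L
```

Once both sectors are well sampled, the interface tension follows from the visit-count ratio as indicated in the final comment.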
Energy Technology Data Exchange (ETDEWEB)
Battaile, C.C.; Buchheit, T.E.; Holm, E.A.; Neilsen, M.K.; Wellman, G.W.
1999-01-12
The microstructural evolution of heavily deformed polycrystalline Cu is simulated by coupling a constitutive model for polycrystal plasticity with the Monte Carlo Potts model for grain growth. The effects of deformation on boundary topology and grain growth kinetics are presented. Heavy deformation leads to dramatic strain-induced boundary migration and subsequent grain fragmentation. Grain growth is accelerated in heavily deformed microstructures. The implications of these results for the thermomechanical fatigue failure of eutectic solder joints are discussed.
Essays on Quantitative Marketing Models and Monte Carlo Integration Methods
R.D. van Oest (Rutger)
2005-01-01
textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin hypercube sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
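Latin hypercube sampling, as used above to generate the 419 input parameter sets, stratifies each input dimension so that every stratum is sampled exactly once. A minimal standard-library sketch of the technique (an illustration, not the GSFC code):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on the unit cube [0, 1)^n_dims: each axis is
    divided into n_samples equal strata and every stratum is used exactly
    once per axis, spreading a small sample evenly over the space."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)                 # shuffled stratum order for axis d
        for i, stratum in enumerate(perm):
            # one uniform point inside the assigned stratum
            samples[i][d] = (stratum + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 3)
```

Each row would correspond to one Monte Carlo case, with every unit-interval coordinate mapped through the uncertainty distribution of the corresponding input parameter.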
Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging
Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben
2015-03-01
In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficient. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, makes this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.
Beyond the thermal model in relativistic heavy-ion collisions
Wolschin, Georg
2016-01-01
Deviations from thermal distribution functions of produced particles in relativistic heavy-ion collisions are discussed as indicators for nonequilibrium processes. The focus is on rapidity distributions of produced charged hadrons as functions of collision energy and centrality, which are used to infer the fraction of produced particles from a central fireball as compared to the one from the fragmentation sources that are out of equilibrium with the rest of the system. Overall thermal equilibrium would only be reached for large times t → ∞.
Monte Carlo simulation based toy model for fission process
Kurniadi, Rizal; Waris, Abdul; Viridi, Sparisoma
2016-09-01
Nuclear fission has conventionally been modeled using two approaches, macroscopic and microscopic. This work proposes another approach, in which the nucleus is treated as a toy model. The aim is to see the usefulness of the particle distribution in fission yield calculations. Inasmuch as the nucleus is a toy, the Fission Toy Model (FTM) does not completely represent the real process in nature. A fission event in the FTM is represented by one random number, taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. By adopting the nucleon density approximation, a Gaussian distribution is chosen as the particle distribution. This distribution function generates random numbers that randomize the distance between particles and a central point. The scission process starts by smashing the compound-nucleus central point into two parts, a left central point and a right central point. The yield is determined from the portion of the nuclear distribution, which is proportional to the portion of mass numbers. By using the modified FTM, the characteristics of the particle distribution in each fission event can be formed before the fission process. These characteristics could be used to make predictions about real nucleon interactions in the fission process. The results of the FTM calculation indicate that the γ value behaves like an energy.
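The sampling step described above can be sketched as follows. This is an illustration of the stated idea (Gaussian-distributed nucleon positions around left and right central points, with the yield read off from the mass split), not the authors' code; all parameter values are assumptions:

```python
import random

def fission_toy_event(A=236, mu_left=-6.0, mu_right=6.0, sigma=2.0, seed=None):
    """One toy fission event: the compound-nucleus central point is split
    into a left and a right central point, each nucleon is placed at a
    Gaussian-distributed position around one of them, and the fragment
    mass numbers are read off from the split about the scission plane."""
    rng = random.Random(seed)
    left_count = 0
    for _ in range(A):
        centre = mu_left if rng.random() < 0.5 else mu_right
        x = rng.gauss(centre, sigma)
        if x < 0.0:                  # scission plane at the original centre
            left_count += 1
    return left_count, A - left_count    # fragment mass numbers (A_L, A_R)

a_l, a_r = fission_toy_event(seed=0)
```

Repeating this over many events and histogramming (A_L, A_R) would give a toy mass-yield curve; asymmetric choices of the means or widths shift the split accordingly.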
Energy Technology Data Exchange (ETDEWEB)
Abe, M.; Morisawa, M. [Musashi Institute of Technology, Tokyo (Japan); Sato, T. [Keio University, Tokyo (Japan); Kobayashi, K. [Molex-Japan Co. Ltd., Tokyo (Japan)
1997-10-01
Past studies of safety in vehicle collisions pay attention to phenomena within a short time from the start of the collision, and rollover behavior is studied separately from the collision itself. Most traffic-accident simulations are two-dimensional. Therefore, it is indispensable for vehicle design to analyze the three-dimensional and continuous behavior from crash to stop. Accordingly, in this study, the three-dimensional behavior of two vehicles in collision was simulated by computer using dynamic models. Then, by comparing the calculated results with real vehicle collision test data, it was confirmed that the dynamic model of this study was reliable. 10 refs., 6 figs., 3 tabs.
Spada, F.M.; Krol, M.C.; Stammes, P.
2006-01-01
A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth’s radius, and can
Spada, F.; Krol, M.C.; Stammes, P.
2006-01-01
A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can
Numerical Study of Light Transport in Apple Models Based on Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
Mohamed Lamine Askoura
2015-12-01
This paper reports on the quantification of light transport in apple models using Monte Carlo simulations. To this end, the apple was modeled as a two-layer spherical model including skin and flesh bulk tissues. The optical properties of both tissue types used to generate the Monte Carlo data were collected from the literature, and selected to cover a range of values related to three apple varieties. Two different imaging-tissue setups were simulated in order to show the role of the skin on steady-state backscattering images, spatially-resolved reflectance profiles, and the assessment of flesh optical properties using an inverse nonlinear least squares fitting algorithm. Simulation results suggest that apple skin cannot be ignored when a Visible/Near-Infrared (Vis/NIR) steady-state imaging setup is used for investigating quality attributes of apples. They also help to improve optical inspection techniques for horticultural products.
Updates to the dust-agglomerate collision model and implications for planetesimal formation
Blum, Jürgen; Brisset, Julie; Bukhari, Mohtashim; Kothe, Stefan; Landeck, Alexander; Schräpler, Rainer; Weidling, René
2016-10-01
Since the publication of our first dust-agglomerate collision model in 2010, several new laboratory experiments have been performed, which have led to a refinement of the model. Substantial improvement of the model has been achieved in the low-velocity regime (where we investigated the abrasion in bouncing collisions), in the high-velocity regime (where we have studied the fragmentation behavior of colliding dust aggregates), in the erosion regime (in which we extended the experiments to impacts of small projectile agglomerates into large target agglomerates), and in the very-low velocity collision regime (where we studied further sticking collisions). We also have applied the new dust-agglomerate collision model to the solar nebula conditions and can constrain the potential growth of planetesimals by mass transfer to a very small parameter space, which makes this growth path very unlikely. Experimental examples, an outline of the new collision model, and applications to dust agglomerate growth in the solar nebula will be presented.
Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave
Yasuda, Shugo
2017-02-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to calculate the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behavior for small Knudsen numbers is numerically verified.
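The microscopic ingredient of such a simulation is the run-and-tumble process itself. A minimal one-dimensional sketch without chemotactic bias (constant tumble rate, no coupling to a chemical field; all parameters are illustrative, not the paper's coupled finite-volume scheme):

```python
import random

def run_and_tumble_1d(n_bact=200, n_steps=500, dt=0.05, speed=1.0,
                      tumble_rate=1.0, seed=3):
    """1-D run-and-tumble walk: each bacterium runs at a fixed speed and
    reverses direction as a Poisson process with rate tumble_rate."""
    rng = random.Random(seed)
    x = [0.0] * n_bact
    v = [rng.choice((-1.0, 1.0)) for _ in range(n_bact)]
    for _ in range(n_steps):
        for i in range(n_bact):
            x[i] += speed * v[i] * dt
            # tumble (here: reverse) with probability rate * dt
            if rng.random() < tumble_rate * dt:
                v[i] = -v[i]
    return x

positions = run_and_tumble_1d()
mean_x = sum(positions) / len(positions)
```

Chemotaxis enters by making the tumble rate depend on the chemical gradient experienced along the run, which is what biases the population into a traveling wave.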
Large-scale model-based assessment of deer-vehicle collision risk.
Directory of Open Access Journals (Sweden)
Torsten Hothorn
Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining
Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors
Kalyvas, N.; Liaparinos, P.
2014-03-01
Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors, amongst the most important of which is the interaction of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model optical parameters as a function of particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with respect to particle size (in the region 1–12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.
Modeling of hysteresis loops by Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Z. Nehme
2015-12-01
Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For any assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant parameter on the coercive field for different temperatures and algorithms. A study of the angular acceptance move distribution allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads to better-quantified coercive field measurements resulting from the physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
Modeling of hysteresis loops by Monte Carlo simulation
Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.
2015-12-01
Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For any assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant parameter on the coercive field for different temperatures and algorithms. A study of the angular acceptance move distribution allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads to better-quantified coercive field measurements resulting from the physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
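The cone algorithm mentioned above restricts each proposed spin to a cone around its current direction, which keeps the Metropolis acceptance rate under control at low temperature. A sketch of such a proposal move for a classical Heisenberg spin (an illustration of the general technique, not the authors' implementation; the half-angle value is an assumption):

```python
import math
import random

def cone_move(spin, delta, rng):
    """Propose a new unit spin uniformly within a cone of half-angle delta
    around the current spin direction."""
    sx, sy, sz = spin
    # sample a direction inside the cone, in a frame where the spin is +z
    cos_t = 1.0 - rng.random() * (1.0 - math.cos(delta))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * rng.random()
    nx, ny, nz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
    if abs(sz) > 1.0 - 1e-12:            # spin already (anti)parallel to z
        return (nx, ny, math.copysign(1.0, sz) * nz)
    # orthonormal basis (u, w, spin) rotating the cone onto the spin
    n = math.sqrt(sx * sx + sy * sy)
    u = (-sy / n, sx / n, 0.0)
    w = (-sz * sx / n, -sz * sy / n, n)
    return (nx * u[0] + ny * w[0] + nz * sx,
            nx * u[1] + ny * w[1] + nz * sy,
            nx * u[2] + ny * w[2] + nz * sz)

rng = random.Random(0)
spin = (0.6, -0.64, 0.48)                # a unit vector
new_spin = cone_move(spin, 0.3, rng)
```

In a full simulation the proposal is followed by the usual Metropolis accept/reject step on the exchange-plus-anisotropy energy difference; tuning delta trades acceptance rate against decorrelation speed.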
Model-Based Optimization of Airborne Collision Avoidance Logic
2010-01-26
According to Kuchar and Drumm [4], the mid-air collision of a Russian Tu-154 and a DHL B-757 over Überlingen in 2002 may have been averted if TCAS ... had properly reversed the RA it had issued to the DHL aircraft. The current version of TCAS incorporates reversal logic. According to TCAS monitoring
Wee, Loo Kang
2012-01-01
We develop an Easy Java Simulation (EJS) model for students to experience the physics of idealized one-dimensional collision carts. The physics model is described and simulated by both continuous dynamics and discrete transitions during collision. In designing the computer simulations, we briefly discuss three pedagogical considerations: (1) a consistent simulation world view with a pen-and-paper representation, (2) a data table, scientific graphs and symbolic mathematical representations for ease of data collection and multiple representational visualizations, and (3) a game for simple concept testing that can further support learning. We also suggest augmenting the simulation with a physical-world setup, highlighting three advantages of real collision cart equipment: a tacit 3D experience, random errors in measurement and the conceptual significance of conservation of momentum applied just before and after the collision. General feedback from the students has been relatively positive,...
Krasnitz, A; Venugopalan, R; Krasnitz, Alex; Nara, Yasushi; Venugopalan, Raju
2003-01-01
We extend previous work on high energy nuclear collisions in the Color Glass Condensate model to study collisions of finite ultrarelativistic nuclei. The changes implemented include a) imposition of color neutrality at the nucleon level and b) realistic nuclear matter distributions of finite nuclei. The saturation scale characterizing the fields of color charge is explicitly position dependent, $\\Lambda_s=\\Lambda_s(x_T)$. We compute gluon distributions both before and after the collisions. The gluon distribution in the nuclear wavefunction before the collision is significantly suppressed below the saturation scale when compared to the simple McLerran-Venugopalan model prediction, while the behavior at large momentum $p_T\\gg \\Lambda_s$ remains unchanged. We study the centrality dependence of produced gluons and compare it to the centrality dependence of charged hadrons exhibited by the RHIC data. We demonstrate the geometrical scaling property of the initial gluon transverse momentum distributions for differen...
Characteristics of particle production in high energy nuclear collisions a model-based analysis
Guptaroy, P; Bhattacharya, S; Bhattacharya, D P
2002-01-01
The present work pertains to the production of some very important negatively charged secondaries in lead-lead and gold-gold collisions at AGS, SPS and RHIC energies. We would like to examine here the role of the particular version of sequential chain model (SCM), which was applied widely in the past in analysing data on various high-energy hadronic collisions, in explaining now the latest findings on the features of particle production in the relativistic nucleus-nucleus collisions. The agreement between the model of our choice and the measured data is found to be modestly satisfactory in cases of the most prominent and abundantly produced varieties of the secondaries in the above-stated two nuclear collisions. (25 refs).
Wee, Loo Kang
2012-05-01
We develop an Easy Java Simulation (EJS) model for students to experience the physics of idealized one-dimensional collision carts. The physics model is described and simulated by both continuous dynamics and discrete transition during collision. In designing the simulations, we discuss briefly three pedagogical considerations namely (1) a consistent simulation world view with a pen and paper representation, (2) a data table, scientific graphs and symbolic mathematical representations for ease of data collection and multiple representational visualizations and (3) a game for simple concept testing that can further support learning. We also suggest using a physical world setup augmented by simulation by highlighting three advantages of real collision carts equipment such as a tacit 3D experience, random errors in measurement and the conceptual significance of conservation of momentum applied to just before and after collision. General feedback from the students has been relatively positive, and we hope teachers will find the simulation useful in their own classes.
On the multiplicity distribution in statistical model: (II) most central collisions
Xu, Hao-jie
2016-01-01
This work is a continuation of our effort [arXiv:1602.06378] to investigate the statistical expectations for cumulants of (net-)conserved charge distributions in relativistic heavy-ion collisions, using a simple but quantitatively more realistic geometric model, the optical Glauber model. We suggest a new approach to centrality definition in the study of multiplicity fluctuations, which aims at eliminating the uncertainties between experimental measurements and theoretical calculations, as well as redoubling the statistics. We find that the statistical expectations of the multiplicity distribution mimic the negative binomial distribution in non-central collisions, but tend to approach the Poisson one in the most central collisions due to the "boundary effect" from the distribution of volume. We conclude that the collision geometry (the distribution of volume and its fluctuations) plays a crucial role in the study of event-by-event multiplicity fluctuations in relativistic heavy-ion collisions.
Monte Carlo modeling of ion beam induced secondary electrons
Energy Technology Data Exchange (ETDEWEB)
Huh, U., E-mail: uhuh@vols.utk.edu [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Cho, W. [Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100 (United States); Joy, D.C. [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Center for Nanophase Materials Science, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)
2016-09-15
Ion induced secondary electrons (iSE) can produce high-resolution images ranging from a few eV to 100 keV over a wide range of materials. The interpretation of such images requires knowledge of the secondary electron yields (iSE δ) for each of the elements and materials present and as a function of the incident beam energy. Experimental data for helium ions are currently limited to 40 elements and six compounds while other ions are not well represented. To overcome this limitation, we propose a simple procedure based on the comprehensive work of Berger et al. Here we show that between the energy range of 10–100 keV the Berger et al. data for elements and compounds can be accurately represented by a single universal curve. The agreement between the limited experimental data that is available and the predictive model is good, and has been found to provide reliable yield data for a wide range of elements and compounds. - Highlights: • The Universal ASTAR Yield Curve was derived from data recently published by NIST. • IONiSE incorporated with the Curve will predict iSE yield for elements and compounds. • This approach can also handle other ion beams by changing basic scattering profile.
D-meson observables in heavy-ion collisions at LHC with EPOSHQ model
Ozvenchuk, Vitalii; Aichelin, Joerg; Gossiaux, Pol-Bernard; Guiot, Benjamin; Nahrgang, Marlene; Werner, Klaus
2016-11-01
We study the propagation of charm quarks in the quark-gluon plasma (QGP) created in ultrarelativistic heavy-ion collisions at the LHC within the EPOSHQ model. The interactions of heavy quarks with the light partons in ultrarelativistic heavy-ion collisions through collisional and radiative processes lead to a large suppression of the final D-meson spectra at high transverse momentum and a finite D-meson elliptic flow. Our results are in good agreement with the available experimental data.
Towards a construction of inclusive collision cross-sections in the massless Nelson model
2011-01-01
The conventional approach to the infrared problem in perturbative quantum electrodynamics relies on the concept of inclusive collision cross-sections. A non-perturbative variant of this notion was introduced in algebraic quantum field theory. Relying on these insights, we take first steps towards a non-perturbative construction of inclusive collision cross-sections in the massless Nelson model. We show that our proposal is consistent with the standard scattering theory in the absence of the i...
Critical behavior of the random-bond Ashkin-Teller model: A Monte Carlo study
Wiseman, Shai; Domany, Eytan
1995-04-01
The critical behavior of a bond-disordered Ashkin-Teller model on a square lattice is investigated by intensive Monte Carlo simulations. A duality transformation is used to locate a critical plane of the disordered model. This critical plane corresponds to the line of critical points of the pure model, along which critical exponents vary continuously. Along this line the scaling exponent corresponding to randomness φ=(α/ν) varies continuously and is positive so that the randomness is relevant, and different critical behavior is expected for the disordered model. We use a cluster algorithm for the Monte Carlo simulations based on the Wolff embedding idea, and perform a finite size scaling study of several critical models, extrapolating between the critical bond-disordered Ising and bond-disordered four-state Potts models. The critical behavior of the disordered model is compared with the critical behavior of an anisotropic Ashkin-Teller model, which is used as a reference pure model. We find no essential change in the order parameters' critical exponents with respect to those of the pure model. The divergence of the specific heat C is changed dramatically. Our results favor a logarithmic type divergence at Tc, C~lnL for the random-bond Ashkin-Teller and four-state Potts models and C~ln lnL for the random-bond Ising model.
MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL
Energy Technology Data Exchange (ETDEWEB)
Talamo, A.; Gohar, Y.
2015-01-01
This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers and has been created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between the SERPENT and MCNP results is within a few tens of pcm.
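For reference, a difference between two multiplication factors is conventionally quoted in pcm (per cent mille, 1e-5) by comparing reactivities ρ = 1 − 1/k. A small helper illustrating the conversion (the convention shown is one common choice, not necessarily the exact one used in the paper):

```python
def keff_diff_pcm(k_ref, k_test):
    """Reactivity difference between two multiplication factors in pcm
    (per cent mille, 1e-5), comparing reactivities rho = 1 - 1/k."""
    return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

# e.g. a ~30 pcm discrepancy between two k-eff estimates:
d = keff_diff_pcm(1.00000, 1.00030)
```

For k near unity this is close to the simpler (k_test − k_ref) × 1e5, which is why small code-to-code discrepancies are reported directly in pcm.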
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte-Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte-Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1-sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2-sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
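The kind of rate-uncertainty propagation described above can be sketched generically: sample each rate from a lognormal distribution given its 1-sigma uncertainty factor, run the model, and read off high/low-side factors from the output percentiles. The surrogate model and rate names below are invented for illustration; the actual stratospheric model and its 55 reactions are not reproduced here.

```python
import math
import random

def mc_uncertainty(rate_factors, model, n_cases=2000, seed=1):
    """Propagate lognormal rate-constant uncertainties through `model`.
    rate_factors maps a rate name to (nominal value, 1-sigma factor)."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_cases):
        rates = {name: nom * math.exp(rng.gauss(0.0, math.log(f)))
                 for name, (nom, f) in rate_factors.items()}
        outputs.append(model(rates))
    outputs.sort()
    median = outputs[n_cases // 2]
    hi = outputs[int(0.841 * n_cases)] / median   # factor on the high side
    lo = median / outputs[int(0.159 * n_cases)]   # factor on the low side
    return median, hi, lo

# Toy surrogate (NOT the paper's model): the perturbation is taken to
# scale as the ratio of a destruction rate to a null-cycle rate.
factors = {"k_dest": (1.0, 1.3), "k_null": (1.0, 1.2)}
med, hi, lo = mc_uncertainty(factors, lambda r: r["k_dest"] / r["k_null"])
```

Reporting separate high-side and low-side factors, as the abstract does, reflects the asymmetry of the lognormal output distribution.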
Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.
2017-02-01
Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon the established simulation methods through more accurate representations of the simulated tissues in the wrist as well as of the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file that was read into the program. The light source was altered to account for both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.
Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations
Dias Astros, Maria Isabel
2017-01-01
In the context of Lorentz invariance as an emergent phenomenon at low energy scales, relevant to the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
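A Binder cumulant of the kind used here to estimate the critical temperature can be computed directly from magnetization samples; the sample values below are illustrative.

```python
import random

def binder_cumulant(mags):
    """Fourth-order Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2).
    Curves of U4 vs. temperature for different lattice sizes cross
    near the critical temperature."""
    n = len(mags)
    m2 = sum(m * m for m in mags) / n
    m4 = sum(m ** 4 for m in mags) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# Deep in the ordered phase |m| -> 1, so U4 -> 2/3 ...
u_ordered = binder_cumulant([1.0, -1.0, 1.0, 1.0])
# ... while for Gaussian (disordered) m, <m^4> = 3<m^2>^2 and U4 -> 0.
rng = random.Random(2)
u_disordered = binder_cumulant([rng.gauss(0.0, 1.0) for _ in range(200000)])
```

The near size-independence of the crossing point is what makes U4 a convenient Tc estimator.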
Monte Carlo tests of the Rasch model based on scalability coefficients
DEFF Research Database (Denmark)
Christensen, Karl Bang; Kreiner, Svend
2010-01-01
that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model, with p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...... and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example....
Monte Carlo Tests of Nucleation Concepts in the Lattice Gas Model
Schmitz, Fabian; Virnau, Peter; Binder, Kurt
2013-01-01
The conventional theory of homogeneous and heterogeneous nucleation in a supersaturated vapor is tested by Monte Carlo simulations of the lattice gas (Ising) model with nearest-neighbor attractive interactions on the simple cubic lattice. The theory considers the nucleation process as a slow (quasi-static) cluster (droplet) growth over a free energy barrier $\Delta F^*$, constructed in terms of a balance of surface and bulk terms for a "critical droplet" of radius $R^*$, implying that the rates...
Critical Exponents of the Classical 3D Heisenberg Model: A Single-Cluster Monte Carlo Study
Holm, Christian; Janke, Wolfhard
1993-01-01
We have simulated the three-dimensional Heisenberg model on simple cubic lattices, using the single-cluster Monte Carlo update algorithm. The expected pronounced reduction of critical slowing down at the phase transition is verified. This allows simulations on significantly larger lattices than in previous studies and consequently a better control over systematic errors. In one set of simulations we employ the usual finite-size scaling methods to compute the critical exponents ...
Monte Carlo Study of the XY-Model on a Sierpiński Carpet
Mitrović, Božidar; Przedborski, Michelle A.
2014-09-01
We have performed a Monte Carlo (MC) study of the classical XY-model on a Sierpiński carpet, which is a planar fractal structure with infinite order of ramification and fractal dimension 1.8928. We employed the Wolff cluster algorithm in our simulations and our results, in particular those for the susceptibility and the helicity modulus, indicate the absence of finite-temperature Berezinskii-Kosterlitz-Thouless (BKT) transition in this system.
Quantum Monte Carlo simulation of a two-dimensional Majorana lattice model
Hayata, Tomoya; Yamamoto, Arata
2017-07-01
We study interacting Majorana fermions in two dimensions as a low-energy effective model of a vortex lattice in two-dimensional time-reversal-invariant topological superconductors. For that purpose, we implement ab initio quantum Monte Carlo simulation to the Majorana fermion system in which the path-integral measure is given by a semipositive Pfaffian. We discuss spontaneous breaking of time-reversal symmetry at finite temperatures.
Collision detection and modeling of rigid and deformable objects in laparoscopic simulator
Dy, Mary-Clare; Tagawa, Kazuyoshi; Tanaka, Hiromi T.; Komori, Masaru
2015-03-01
Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can also be incorporated with virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at a rate of at least 30 fps. Our current laparoscopic simulator detects collisions between a point on the tool tip and the organ surfaces, with haptic devices attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model or finite element method-based models. In this paper, we investigated multi-point-based collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation response. We discuss our proposal for an efficient method to compute simultaneous multiple collisions between rigid (laparoscopic tools) and deformable (organs) objects, and to perform the subsequent collision response, with haptic feedback, in real time.
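A minimal sketch of a multi-point rod-versus-surface test of the kind discussed: the rigid rod is treated as a capsule (a segment with a radius) and each candidate surface point is checked against it. The function names and values are assumptions for illustration, not the simulator's actual API.

```python
def point_segment_distance(p, a, b):
    """Distance from surface point p to the rod axis segment a-b (3D tuples)."""
    abv = tuple(bb - aa for aa, bb in zip(a, b))
    ap = tuple(pp - aa for aa, pp in zip(a, p))
    denom = sum(c * c for c in abv)
    # Clamp the projection parameter to [0, 1] to stay on the segment.
    t = 0.0 if denom == 0.0 else max(
        0.0, min(1.0, sum(u * v for u, v in zip(ap, abv)) / denom))
    closest = tuple(aa + t * c for aa, c in zip(a, abv))
    return sum((pp - cc) ** 2 for pp, cc in zip(p, closest)) ** 0.5

def rod_collides(surface_points, a, b, radius):
    """Multi-point test: does any sampled surface point penetrate the
    capsule of the given radius around the rigid tool rod a-b?"""
    return any(point_segment_distance(p, a, b) < radius for p in surface_points)
```

Because each point test is independent, the per-point checks parallelize naturally, which matters when the haptic loop must run at kilohertz rates.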
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous-energy Monte Carlo method. This method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, was developed. This paper describes the model, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and a direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in its use of a probabilistic model of a geometry containing a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in the calculation of nuclear reactor cores, especially HTGR cores. (author).
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (the Forward Method and the Emission Reciprocity Method) employed to solve the RTE have been compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (the Reciprocity Monte Carlo method and the Discrete Ordinate Method), applied on a three-dimensional flame holder set-up with a correlated-k distribution model describing the real-gas spectral radiative properties, are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
Collision Geometry and Flow in Uranium+Uranium Collisions
Goldschmidt, Andy; Shen, Chun; Heinz, Ulrich
2015-01-01
Using event-by-event viscous fluid dynamics to evolve fluctuating initial density profiles from the Monte-Carlo Glauber model for U+U collisions, we report a "knee"-like structure in the elliptic flow as a function of collision centrality, located near 0.5% centrality as measured by the final charged multiplicity. This knee is due to the preferential selection of tip-on-tip collision geometries by a high-multiplicity trigger. Such a knee structure is not seen in the STAR data. This rules out the two-component MC-Glauber model for initial energy and entropy production. An enrichment of tip-tip configurations by triggering solely on high multiplicity in the U+U collisions thus does not work. On the other hand, using the Zero Degree Calorimeters (ZDCs) coupled with event-shape engineering, we identify the selection purity of body-body and tip-tip events in the full-overlap U+U collisions. With additional constraints on the asymmetry of the ZDC signals one can further increase the probability of selecting tip-ti...
Collision Energy Evolution of Elliptic and Triangular Flow in a Hybrid Model
Auvinen, Jussi
2013-01-01
While the existence of a strongly interacting state of matter, known as 'quark-gluon plasma' (QGP), has been established in heavy ion collision experiments in the past decade, the task remains to map out the transition from hadronic matter to the QGP. This is done by measuring the dependence of key observables (such as particle suppression and elliptic flow) on the collision energy of the heavy ions. This procedure, known as a 'beam energy scan', has been most recently performed at the Relativistic Heavy Ion Collider (RHIC). Utilizing a Boltzmann+hydrodynamics hybrid model, we study the collision energy dependence of initial state eccentricities and the final state elliptic and triangular flow. This approach is well suited to investigate the relative importance of hydrodynamics and hadron transport at different collision energies.
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Marwah Almasri; Khaled Elleithy; Abrar Alajlan
2015-01-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, cameras, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, integrating sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot...
Monte Carlo tools for Beyond the Standard Model Physics, April 14-16
DEFF Research Database (Denmark)
Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing;
2011-01-01
This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools....... To identify promising models (or processes) for which the tools have not yet been constructed and start filling up these gaps. To propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider...
Badal Soler, Andreu
2008-01-01
General-purpose Monte Carlo simulation programs are currently used in a wide variety of applications. Even so, the geometric models implemented in most programs impose certain limitations on the shapes of the objects that can be defined. These models are not suitable for describing the arbitrary surfaces found in anatomical structures or in certain medical devices and, consequently, some applications that require the use of highly detailed geometric models...
Li, Xiaomeng; Yan, Xuedong; Wu, Jiawei; Radwan, Essam; Zhang, Yuting
2016-12-01
Driver's collision avoidance performance has a direct link to collision risk and crash severity. Previous studies demonstrated that distracted driving, such as using a cell phone while driving, disrupts the driver's performance on the road. This study aimed to investigate the manner and extent to which cell phone use and driver's gender affect driving performance and collision risk in a rear-end collision avoidance process. Forty-two licensed drivers completed the driving simulation experiment in three phone use conditions: no phone use, hands-free, and hand-held, in which the drivers drove in a car-following situation with potential rear-end collision risks caused by the leading vehicle's sudden deceleration. Based on the experiment data, a rear-end collision risk assessment model was developed to assess the influence of cell phone use and driver's gender. Cell phone use and driver's gender were found to be significant factors affecting braking performance in the rear-end collision avoidance process, including the brake reaction time, the deceleration adjusting time, and the maximum deceleration rate. The minimum headway distance between the leading vehicle and the simulator during the rear-end collision avoidance process was the final output variable, which could be used to measure the rear-end collision risk and judge whether a collision occurred. The results showed that although drivers using cell phones adopted some compensatory behaviors in the collision avoidance process to reduce mental workload, the collision risk in cell phone use conditions was still higher than without phone use. More importantly, the results proved that the hands-free condition did not eliminate the safety problem associated with distracted driving because it impaired driving performance in the same way as the use of hand-held phones. In addition, the gender effect indicated that although female drivers had longer reaction time than male drivers in
Yasuda, Shugo
2015-01-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of kinetic modeling, i.e., the Boltzmann transport equation, and applied to a one-dimensional traveling population wave in a microchannel. In this method, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to solve the macroscopic transport of the chemical cues in the field. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally. The microscopic dynamics of the bacteria, e.g., the velocity autocorrelation function and velocity distribution function, are also investigated. It is found that the bacteria which form the traveling population wave exhibit quasi-periodic motions as well as a migratory movement along with the traveling population wave. Simulations are also performed with varying sensitivity and modulation parameters in the response function of the bacteria. It is found th...
Directory of Open Access Journals (Sweden)
M. S. Mayeed
2014-01-01
Full Text Available Applying the reptation algorithm to a simplified perfluoropolyether Z off-lattice polymer model, an NVT Monte Carlo simulation has been performed. The bulk condition was simulated first to compare the average radius of gyration with bulk experimental results. The model was then tested for its ability to describe dynamics. After this, it was applied to observe the replenishment of nanoscale ultrathin liquid films on flat solid carbon surfaces. The replenishment rate for trenches of different widths (8, 12, and 16 nm) for several molecular weights between two films of perfluoropolyether Z from the Monte Carlo simulation is compared to that obtained by solving the diffusion equation using the experimental diffusion coefficients of Ma et al. (1999), with room conditions in both cases. Replenishment per Monte Carlo cycle appears to be a constant multiple of replenishment per second, at least up to a 2 nm replenished film thickness of the trenches over the carbon surface. Good agreement is achieved between the experimental results and the dynamics of molecules using reptation moves in ultrathin liquid films on solid surfaces.
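A reptation (slithering-snake) trial move of the general kind used here can be sketched on a cubic lattice with excluded volume; note the paper's model is off-lattice, so this is only a schematic analogue with invented parameters.

```python
import random

STEPS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))

def reptation_move(chain, rng):
    """One slithering-snake trial on a cubic lattice: append a random step
    at one end and retract the other; reject if the new site is occupied."""
    if rng.random() < 0.5:           # pick the growing end at random
        chain = chain[::-1]
    hx, hy, hz = chain[-1]
    sx, sy, sz = rng.choice(STEPS)
    new_bead = (hx + sx, hy + sy, hz + sz)
    if new_bead in chain[1:]:        # excluded volume (the tail site vacates)
        return chain                 # move rejected, chain unchanged
    return chain[1:] + [new_bead]    # move accepted: the chain slides forward

rng = random.Random(4)
chain = [(i, 0, 0) for i in range(10)]
for _ in range(500):
    chain = reptation_move(chain, rng)
```

Reptation moves preserve chain length and self-avoidance while letting long chains relax far faster than single-bead moves.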
Modeling weight variability in a pan coating process using Monte Carlo simulations.
Pandey, Preetanshu; Katakdaunde, Manoj; Turton, Richard
2006-10-06
The primary objective of the current study was to investigate process variables affecting weight gain mass coating variability (CV(m)) in pan coating devices using novel video-imaging techniques and Monte Carlo simulations. Experimental information such as the tablet location, circulation time distribution, velocity distribution, projected surface area, and spray dynamics was the main input to the simulations. The data on the dynamics of tablet movement were obtained using novel video-imaging methods. The effects of pan speed, pan loading, tablet size, coating time, spray flux distribution, and spray area and shape were investigated. CV(m) was found to be inversely proportional to the square root of coating time. The spray shape was not found to affect the CV(m) of the process significantly, but an increase in the spray area led to lower CV(m). Coating experiments were conducted to verify the predictions from the Monte Carlo simulations, and the trends predicted by the model were in good agreement. It was observed that the Monte Carlo simulations underpredicted CV(m) in comparison to the experiments. The model developed can provide a basis for adjustments in process parameters required during scale-up operations and can be useful in predicting the process changes needed to achieve the same CV(m) when a variable is altered.
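The reported CV(m) ∝ 1/sqrt(coating time) scaling can be illustrated with a toy Monte Carlo in which each tablet is sprayed with some probability per pan circulation; this is a deliberately simplified stand-in for the video-imaging-driven simulation, and all parameter values are invented.

```python
import math
import random

def coating_cv(n_tablets, n_passes, spray_fraction, seed=0):
    """Toy pan-coating Monte Carlo: on each pan circulation a tablet
    passes through the spray zone with probability spray_fraction and
    picks up one unit of coat mass; CV(m) is the relative spread."""
    rng = random.Random(seed)
    mass = [0] * n_tablets
    for _ in range(n_passes):
        for i in range(n_tablets):
            if rng.random() < spray_fraction:
                mass[i] += 1
    mean = sum(mass) / n_tablets
    var = sum((m - mean) ** 2 for m in mass) / n_tablets
    return math.sqrt(var) / mean

cv_short = coating_cv(2000, 100, 0.2)   # a shorter coating time
cv_long = coating_cv(2000, 400, 0.2)    # 4x the coating time
```

Because the coat mass here is binomial, CV(m) = sqrt((1-p)/(n p)), so quadrupling the number of passes roughly halves the variability, matching the inverse-square-root trend in the abstract.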
Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
Navarro, C A; Deng, Youjin
2015-01-01
The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present in the low temperature regime, making it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as the Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the Spin Glass or Random Field models by exchanging information between replicas at neighboring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance with faster GPUs as well as with multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
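The replica-swap step at the heart of Exchange Monte Carlo can be sketched as follows; the energies and inverse temperatures below are illustrative, and the ladder adaptation from trial-run swap rates described in the abstract is not reproduced.

```python
import math
import random

def swap_accepted(e_i, e_j, beta_i, beta_j, rng):
    """Metropolis criterion for exchanging two neighboring replicas:
    accept with probability min(1, exp((beta_j - beta_i) * (e_j - e_i)))."""
    d = (beta_j - beta_i) * (e_j - e_i)
    return d >= 0.0 or rng.random() < math.exp(d)

# A colder replica (larger beta) holding the higher energy always swaps ...
rng = random.Random(7)
always = swap_accepted(1.0, 2.0, 0.5, 1.0, rng)
# ... otherwise the swap succeeds with probability exp(delta_beta * delta_E).
rate = sum(swap_accepted(2.0, 1.0, 0.5, 1.0, random.Random(k))
           for k in range(10000)) / 10000.0
```

Tuning the temperature set so that this acceptance rate is roughly uniform across the ladder is exactly why the authors adapt temperatures from short trial runs.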
Model of the humanoid body for self collision detection based on elliptical capsules
CSIR Research Space (South Africa)
Dube, C
2011-12-01
The humanoid body is modeled using elliptical capsules, while the moving segments, i.e. the arms and legs, of the humanoid are modeled using circular capsules. This collision detection model provides a good fit to the humanoid body shape while being simple...
Bayesian phylogenetic model selection using reversible jump Markov chain Monte Carlo.
Huelsenbeck, John P; Larget, Bret; Alfaro, Michael E
2004-06-01
A common problem in molecular phylogenetics is choosing a model of DNA substitution that does a good job of explaining the DNA sequence alignment without introducing superfluous parameters. A number of methods have been used to choose among a small set of candidate substitution models, such as the likelihood ratio test, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Bayes factors. Current implementations of any of these criteria suffer from the limitation that only a small set of models are examined, or that the test does not allow easy comparison of non-nested models. In this article, we expand the pool of candidate substitution models to include all possible time-reversible models. This set includes seven models that have already been described. We show how Bayes factors can be calculated for these models using reversible jump Markov chain Monte Carlo, and apply the method to 16 DNA sequence alignments. For each data set, we compare the model with the best Bayes factor to the best models chosen using AIC and BIC. We find that the best model under any of these criteria is not necessarily the most complicated one; models with an intermediate number of substitution types typically do best. Moreover, almost all of the models that are chosen as best do not constrain a transition rate to be the same as a transversion rate, suggesting that it is the transition/transversion rate bias that plays the largest role in determining which models are selected. Importantly, the reversible jump Markov chain Monte Carlo algorithm described here allows estimation of phylogeny (and other phylogenetic model parameters) to be performed while accounting for uncertainty in the model of DNA substitution.
Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models
Mitchell, S. J.; Landau, D. P.
2006-03-01
Using high-resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).
A model for energy transfer in collisions of atoms with highly excited molecules.
Houston, Paul L; Conte, Riccardo; Bowman, Joel M
2015-05-21
A model for energy transfer in the collision between an atom and a highly excited target molecule has been developed on the basis of classical mechanics and turning point analysis. The predictions of the model have been tested against the results of trajectory calculations for collisions of five different target molecules with argon or helium under a variety of temperatures, collision energies, and initial rotational levels. The model predicts selected moments of the joint probability distribution P(J_f, ΔE) with R^2 ≈ 0.90. The calculation is efficient, in most cases taking less than one CPU-hour. The model provides several insights into the energy transfer process. The joint probability distribution is strongly dependent on rotational energy transfer and conservation laws and less dependent on vibrational energy transfer. There are two mechanisms for rotational excitation, one due to motion normal to the intermolecular potential and one due to motion tangential to it and perpendicular to the line of centers. Energy transfer is found to depend strongly on the intermolecular potential and only weakly on the intramolecular potential. Highly efficient collisions are a natural consequence of the energy transfer and arise due to collisions at "sweet spots" in the space of impact parameter and molecular orientation.
Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model
Heller, Urs M.
1988-01-01
An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings β ≳ 2.2 agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it are obtained at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.
Lu, Jianbo; Xu, Lixin; Wu, Yabo; Liu, Molin
2011-01-01
We use the Markov chain Monte Carlo method to obtain global constraints on the modified Chaplygin gas (MCG) model as a unification of dark matter and dark energy from the latest observational data: the Union2 dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a flat universe, the constraint results for the MCG model are $\Omega_{b}h^{2}=0...
Flicstein, Jean; Pata, S.; Chun, L. S. H. K.; Palmier, Jean F.; Courant, J. L.
1998-05-01
A model for ultraviolet-induced chemical vapor deposition (UV CVD) of a-SiN:H is described. In the simulation of the UV CVD process, the creation of activated charged centers, species incorporation, surface diffusion, and desorption are considered as the elementary steps of the photonucleation and photodeposition mechanisms. The process is characterized by two surface sticking coefficients. Surface diffusion of species is modeled with a Gaussian distribution. A real-time Monte Carlo method is used to determine photonucleation and photodeposition rates in nanostructures. A comparison of experimental and simulation results for a-SiN:H shows that the model predicts the temporal evolution of the morphology under operating conditions down to atomistic resolution.
Nuclear Level Density of ${}^{161}$Dy in the Shell Model Monte Carlo Method
Özen, Cem; Nakada, Hitoshi
2012-01-01
We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus ${}^{161}$Dy. The projection on an odd number of particles leads to a sign problem at low temperatures making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of ${}^{161}$Dy and find it in very good agreement with the level density extracted from experimental data.
Molecular mobility with respect to accessible volume in Monte Carlo lattice model for polymers
Diani, J.; Gilormini, P.
2017-02-01
A three-dimensional cubic Monte Carlo lattice model is considered to test the impact of volume on the molecular mobility of amorphous polymers. Assuming classic polymer chain dynamics, the concept of locked volume limiting the accessible volume around the polymer chains is introduced. The polymer mobility is assessed by the chains' ability to explore the entire lattice through reptation motions. When the polymer mobility is recorded with respect to the lattice accessible volume, a sharp mobility transition is observed, as witnessed during the glass transition. The model's ability to reproduce known trends in glass transition with respect to material parameters is also tested.
Open-source direct simulation Monte Carlo chemistry modeling for hypersonic flows
Scanlon, Thomas J.; White, Craig; Borg, Matthew K.; Palharini, Rodrigo C.; Farbar, Erin; Boyd, Iain D.; Reese, Jason M.; Brown, Richard E
2015-01-01
An open-source implementation of chemistry modelling for the direct simulation Monte Carlo (DSMC) method is presented. Following the recent work of Bird [1], an approach known as the quantum kinetic (Q-K) method has been adopted to describe chemical reactions in a 5-species air model using DSMC procedures based on microscopic gas information. The Q-K technique has been implemented within the framework of the dsmcFoam code, a derivative of the open source CFD code OpenFOAM. Results for vibration...
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; James, Scott C
2013-01-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...
Development of a Monte Carlo model for the Brainlab microMLC.
Belec, Jason; Patrocinio, Horacio; Verhaegen, Frank
2005-03-07
Stereotactic radiosurgery with several static conformal beams shaped by a micro multileaf collimator (microMLC) is used to treat small irregularly shaped brain lesions. Our goal is to perform Monte Carlo calculations of dose distributions for certain treatment plans as a verification tool. A dedicated microMLC component module for the BEAMnrc code was developed as part of this project and was incorporated in a model of the Varian CL2300 linear accelerator 6 MV photon beam. As an initial validation of the code, the leaf geometry was visualized by tracing particles through the component module and recording their position each time a leaf boundary was crossed. The leaf dimensions were measured, and the leaf material density and interleaf air gap were chosen to match the simulated leaf leakage profiles with film measurements in a solid water phantom. A comparison between Monte Carlo calculations and measurements (diode, radiographic film) was performed for square and irregularly shaped fields incident on flat and homogeneous water phantoms. Results show that Monte Carlo calculations agree with measured dose distributions to within 2% and/or 1 mm, except for field sizes smaller than 1.2 cm in diameter, where agreement is within 5% due to uncertainties in measured output factors.
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-04-01
Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in the multi-core computing environment via the open message passing interface (MPI). We compare performance results of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. The improvement of model efficiency and the preservation of particle diversity are found in the lagged regularized particle filter.
A combined model for pseudorapidity distributions in Cu-Cu collisions at BNL-RHIC energies
Jiang, Zhjin; Huang, Yan
2016-01-01
The charged particles produced in nucleus-nucleus collisions come from leading particles and from those frozen out from the hot and dense matter created in the collisions. The leading particles are conventionally assumed to have Gaussian rapidity distributions normalized to the number of participants. The hot and dense matter is assumed to expand according to unified hydrodynamics, a hydro model which unifies the features of the Landau and Hwa-Bjorken models, and to freeze out into charged particles from a space-like hypersurface with a proper time of Tau_FO. The rapidity distribution of this part of the charged particles can be derived analytically. The combined contribution from both leading particles and unified hydrodynamics is then compared against the experimental data taken by the BNL-RHIC-PHOBOS Collaboration in different centrality Cu-Cu collisions at sqrt(s_NN) = 200 and 62.4 GeV, respectively. The model predictions are well consistent with the experimental measurements.
A numerical strategy for finite element modeling of frictionless asymmetric vocal fold collision
DEFF Research Database (Denmark)
Granados, Alba; Misztal, Marek Krzysztof; Brunskog, Jonas;
2016-01-01
Analysis of voice pathologies may require vocal fold models that include relevant features such as vocal fold asymmetric collision. The present study numerically addresses the problem of frictionless asymmetric collision in a self-sustained three-dimensional continuum model of the vocal folds....... Theoretical background and numerical analysis of the finite-element position-based contact model are presented, along with validation. A novel contact detection mechanism capable of detecting collision in asymmetric oscillations is developed. The effect of inexact contact constraint enforcement on vocal fold...... dynamics is examined by different variational methods for inequality constrained minimization problems, namely the Lagrange multiplier method and the penalty method. In contrast to the penalty solution, which is related to classical spring-like contact forces, numerical examples show that the parameter...
Institute of Scientific and Technical Information of China (English)
LU Hong; YI Guodong; TAN Jianrong; LIU Zhenyu
2008-01-01
Collision avoidance decision-making models of multiple agents in a virtual driving environment are studied. Based on the behavioral characteristics and hierarchical structure of collision avoidance decision-making in real-life driving, the Delphi approach and mathematical statistics methods are introduced to construct pair-wise comparison judgment matrices of collision avoidance decision choices for each collision situation. The analytic hierarchy process (AHP) is adopted to establish the agents' collision avoidance decision-making model. To simulate drivers' characteristics, driver factors are added to categorize driving modes into an impatient mode, a normal mode, and a cautious mode. The results show that this model can simulate a human's thinking process, and that the agents in the virtual environment can deal with collision situations and make decisions to avoid collisions without intervention. The model can also reflect the diversity and uncertainty of real-life driving behaviors, and solves the multi-objective, multi-choice priority-ranking problem in multi-vehicle collision scenarios. This multi-agent collision avoidance model is feasible and effective, and can provide a richer and closer-to-life virtual scene for a driving simulator, reflecting the real-life traffic environment more faithfully and promoting the practicality of driving simulators.
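The AHP step this abstract relies on, turning a pairwise comparison judgment matrix into a ranked set of decision choices, can be sketched as follows. The three choice labels and the matrix entries are hypothetical, and the priority vector is approximated by normalized column averages rather than an exact eigenvector solve.

```python
# Hypothetical collision-avoidance choices: 0 = brake, 1 = steer, 2 = accelerate.
# Saaty-style pairwise comparison matrix: A[i][j] = relative preference of i over j.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(A):
    """Approximate the principal eigenvector by normalized column averages."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    norm = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

w = ahp_weights(A)                              # priority weights, sum to 1
ranked = sorted(range(len(w)), key=lambda i: -w[i])  # best choice first
```

In the paper's model, one such matrix would be built per collision situation, with driver-factor modes (impatient, normal, cautious) adjusting the judgments.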
Insight into collision zone dynamics from topography: numerical modelling results and observations
Directory of Open Access Journals (Sweden)
A. D. Bottrill
2012-11-01
Full Text Available Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) basin on the overriding plate after initial collision. This "collisional mantle dynamic basin" (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and southeast Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. Our modelled topography changes fit well with this observed uplift and subsidence.
Insight into collision zone dynamics from topography: numerical modelling results and observations
Bottrill, A. D.; van Hunen, J.; Allen, M. B.
2012-11-01
Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) basin on the overriding plate after initial collision. This "collisional mantle dynamic basin" (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and southeast Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. Our modelled topography changes fit well with this observed uplift and subsidence.
Insight into collision zone dynamics from topography: numerical modelling results and observations
Directory of Open Access Journals (Sweden)
A. D. Bottrill
2012-07-01
Full Text Available Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) deepening in the area of the back-arc basin after initial collision. This collisional mantle dynamic basin (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and southeast Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. This uplift and subsidence pattern correlates well with our modelled topography changes.
Yuan, Jiankui; Zheng, Yiran; Wessels, Barry; Lo, Simon S; Ellis, Rodney; Machtay, Mitchell; Yao, Min
2016-12-01
A virtual source model for Monte Carlo simulations of helical TomoTherapy was previously developed by the authors. The purpose of this work is to perform experiments in an anthropomorphic (RANDO) phantom, with the same order of complexity as clinical treatments, to validate the virtual source model for use as a secondary quality assurance check on TomoTherapy patient planning dose. Helical TomoTherapy involves a complex delivery pattern with irregular beam apertures and couch movement during irradiation. Monte Carlo simulation, as the most accurate dose algorithm, is desirable in radiation dosimetry. Current Monte Carlo simulations for helical TomoTherapy adopt the full Monte Carlo model, which includes detailed modeling of individual machine components, and thus large phase space files are required at different scoring planes. As an alternative approach, we previously developed a virtual source model that does not require these large phase space files for patient dose calculations. In this work, we apply the simulation system to recompute patient doses, generated by the treatment planning system, in an anthropomorphic phantom to mimic real patient treatments. We performed thermoluminescence dosimeter point dose and film measurements for comparison with the Monte Carlo results. Thermoluminescence dosimeter measurements show that the relative difference between Monte Carlo and the treatment planning system is within 3%, with the largest difference less than 5%, for both test plans. The film measurements demonstrated 85.7% and 98.4% passing rates using the 3 mm/3% acceptance criterion for the head-and-neck and lung cases, respectively. Over 95% passing rate is achieved if a 4 mm/4% criterion is applied. For the dose-volume histograms, very good agreement is obtained between the Monte Carlo and treatment planning system methods for both cases. The experimental results demonstrate that the virtual source model Monte Carlo system can be a viable option for the
Microsopic nuclear level densities by the shell model Monte Carlo method
Alhassid, Y; Gilbreth, C N; Nakada, H; Özen, C
2016-01-01
The configuration-interaction shell model approach provides an attractive framework for the calculation of nuclear level densities in the presence of correlations, but the large dimensionality of the model space has hindered its application in mid-mass and heavy nuclei. The shell model Monte Carlo (SMMC) method permits calculations in model spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods. We discuss recent progress in the SMMC approach to level densities, and in particular the calculation of level densities in heavy nuclei. We calculate the distribution of the axial quadrupole operator in the laboratory frame at finite temperature and demonstrate that it is a model-independent signature of deformation in the rotationally invariant framework of the shell model. We propose a method to use these distributions for calculating level densities as a function of intrinsic deformation.
Chaturvedi, O. S. K.; Srivastava, P. K.; Kumar, Ashwini; Singh, B. K.
2016-12-01
The charged particle multiplicity (n_{ch}) and pseudorapidity density (dn_{ch}/dη) are key observables used to characterize the properties of matter created in heavy-ion collisions. The dependence of these observables on collision energy and collision geometry is a key tool for understanding the underlying particle production mechanism. Recently, much interest has been focused on asymmetric and deformed nuclei collisions, since these collisions can provide a deeper understanding of the nature of quantum chromodynamics (QCD). From the phenomenological perspective, a unified model which describes the experimental data coming from various kinds of collision experiments is much needed to provide physical insight into the production mechanism. In this paper, we have calculated the charged hadron multiplicities for nucleon-nucleus collisions, such as proton-lead (p-Pb), and asymmetric nuclei collisions such as deuteron-gold (d-Au) and copper-gold (Cu-Au), within a new version of the wounded quark model (WQM), and we have shown their variation with respect to centrality. Further, we have used a suitable density function within our WQM to calculate the pseudorapidity density of charged hadrons at midrapidity in collisions of deformed uranium nuclei. We find that our model with suitable density functions describes the experimental data for symmetric, asymmetric and deformed nuclei collisions simultaneously over a wide range of collision energies.
Zhang, G.; Lu, D.; Webster, C.
2014-12-01
The rational management of an oil and gas reservoir requires an understanding of its response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. Because subsurface properties are typically heterogeneous, the resulting models contain a large number of parameters, and the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method to further reduce the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solves or the number of needed time steps. This is achieved using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
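The telescoping estimator that MLMC methods (and the acceleration above) build on can be sketched on a toy problem. This is not the reservoir model of the paper: the sketch estimates E[X_T] for a geometric Brownian motion, with levels defined by Euler time-step refinement and coarse/fine paths coupled through shared Brownian increments.

```python
import math, random

random.seed(0)

MU, SIGMA, T, X0 = 0.05, 0.2, 1.0, 1.0   # toy geometric Brownian motion

def euler_pair(level):
    """Coupled coarse/fine Euler paths sharing the same Brownian increments."""
    nf = 2 ** level                       # fine steps; coarse level uses nf // 2
    hf = T / nf
    xf = xc = X0
    inc = 0.0
    for n in range(nf):
        dw = random.gauss(0.0, math.sqrt(hf))
        xf += MU * xf * hf + SIGMA * xf * dw
        inc += dw
        if n % 2 == 1:                    # one coarse step per two fine steps
            xc += MU * xc * 2 * hf + SIGMA * xc * inc
            inc = 0.0
    return xf, xc

def mlmc_estimate(max_level, samples):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
    est = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(samples[level]):
            xf, xc = euler_pair(level)
            acc += xf if level == 0 else xf - xc
        est += acc / samples[level]
    return est

est = mlmc_estimate(3, [4000, 2000, 1000, 500])
exact = X0 * math.exp(MU * T)             # E[X_T] for geometric Brownian motion
```

The small variance of the level corrections is what lets most samples sit on cheap coarse levels; the paper's contribution is to additionally warm-start each fine-level solve from interpolated coarse-level output.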
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.
1999-07-01
This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
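Two of the variance reduction techniques named above, survival biasing (implicit capture) and rouletting, can be illustrated on a deliberately simplified problem. This is not MCV: the sketch tracks forward-streaming particles through a one-dimensional slab, where the expected transmission has a closed form, so the unbiasedness of the weight treatment can be checked directly.

```python
import math, random

random.seed(2)

SIG_T, SIG_A = 1.0, 0.4        # total / absorption macroscopic cross sections
SLAB = 3.0                     # slab thickness in mean free paths
W_CUT, W_SURV = 0.1, 0.5       # roulette threshold and survivor weight

def transmit(histories):
    """Leakage estimate for a forward-streaming toy slab using survival
    biasing (implicit capture) with Russian roulette on low weights."""
    score = 0.0
    for _ in range(histories):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(1.0 - random.random()) / SIG_T   # flight to collision
            if x >= SLAB:
                score += w             # particle leaks: tally its weight
                break
            w *= 1.0 - SIG_A / SIG_T   # absorb a weight fraction, not the particle
            if w < W_CUT:              # roulette keeps the estimate unbiased
                if random.random() < w / W_SURV:
                    w = W_SURV         # survivor weight restored
                else:
                    break              # history terminated
    return score / histories

t_mc = transmit(20000)
t_exact = math.exp(-SIG_A * SLAB)      # expected transmission for this toy
```

Because a collision reduces the weight instead of killing the particle, every history can contribute to the leakage tally, while roulette stops the code from tracking negligible weights forever.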
Modeling of Inelastic Collisions in a Multifluid Plasma: Ionization and Recombination
Le, H P
2016-01-01
A model for ionization and recombination collisions in a multifluid plasma is formulated using the framework introduced in previous work [Phys. Plasmas 22, 093512 (2015)]. The exchange source terms for density, momentum and energy are detailed for the case of electron-induced ionization and three-body recombination collisions with isotropic scattering. The principle of detailed balance is enforced at the microscopic level. We describe how to incorporate the standard collisional-radiative model into the multifluid equations using the current formulation. Numerical solutions of the collisional-radiative rate equations for atomic hydrogen are presented to highlight the impact of the multifluid effect on the kinetics.
Particle Production in Ultrarelativistic Heavy-Ion Collisions: A Statistical-Thermal Model Review
Directory of Open Access Journals (Sweden)
S. K. Tiwari
2013-01-01
Full Text Available The current status of various thermal and statistical descriptions of particle production in ultrarelativistic heavy-ion collision experiments is presented in detail. We discuss the formulation of various types of thermal models of a hot and dense hadron gas (HG) and the methods used to implement the interactions between hadrons. This includes our new excluded-volume model, which is thermodynamically consistent. The results of the above models, together with the experimental results for various ratios of the produced hadrons, are compared. We derive some new universal conditions emerging at the chemical freeze-out of the HG fireball, showing independence with respect to the energy as well as the structure of the nuclei used in the collision. Further, we calculate various transport properties of the HG, such as the ratio of shear viscosity to entropy density, using our thermal model and compare them with the results of other models. We also show the rapidity as well as transverse mass spectra of various hadrons in the thermal HG model in order to outline the presence of flow in the fluid formed in the collision. The purpose of this review article is to organize and summarize the experimental data obtained in various heavy-ion collision experiments and then to examine and analyze them using thermal models so that a firm conclusion regarding the formation of quark-gluon plasma (QGP) can be obtained.
New Generation of the Monte Carlo Shell Model for the K Computer Era
Shimizu, Noritaka; Tsunoda, Yusuke; Utsuno, Yutaka; Yoshida, Tooru; Mizusaki, Takahiro; Honma, Michio; Otsuka, Takaharu
2012-01-01
We present a newly enhanced version of the Monte Carlo Shell Model method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new generation framework of the MCSM provides us with a powerful tool to perform the most advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken.
Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance
Hochuli, Roman; Arridge, Simon; Cox, Ben
2016-01-01
Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficient distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.
Modeling of radiation-induced bystander effect using Monte Carlo methods
Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun
2009-03-01
Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, and even biological organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely seeded in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
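The Monte Carlo power procedure described here — simulate many datasets from a mediation model, bootstrap the indirect effect a*b in each, and count how often the percentile confidence interval excludes zero — can be sketched as below. This is not the bmem package: the model, sample size, effect sizes, and replication counts are illustrative, and the regressions omit intercepts, which is valid here only because all variables are generated with zero mean.

```python
import random

random.seed(3)

def simulate(n, a, b, c):
    """Draw one dataset from the mediation model X -> M -> Y (direct path c)."""
    X = [random.gauss(0, 1) for _ in range(n)]
    M = [a * x + random.gauss(0, 1) for x in X]
    Y = [b * m + c * x + random.gauss(0, 1) for m, x in zip(M, X)]
    return X, M, Y

def ab_hat(X, M, Y):
    """Least-squares estimate of the mediated effect a*b (no intercepts)."""
    sxx = sum(x * x for x in X)
    sxm = sum(x * m for x, m in zip(X, M))
    smm = sum(m * m for m in M)
    smy = sum(m * y for m, y in zip(M, Y))
    sxy = sum(x * y for x, y in zip(X, Y))
    a = sxm / sxx                                   # M regressed on X
    b = (smy * sxx - sxy * sxm) / (smm * sxx - sxm * sxm)  # Y on M given X
    return a * b

def mc_power(n, a, b, c, reps=60, boot=120, alpha=0.05):
    """Fraction of replications whose percentile bootstrap CI excludes zero."""
    hits = 0
    for _ in range(reps):
        X, M, Y = simulate(n, a, b, c)
        draws = []
        for _ in range(boot):
            s = [random.randrange(n) for _ in range(n)]
            draws.append(ab_hat([X[i] for i in s],
                                [M[i] for i in s],
                                [Y[i] for i in s]))
        draws.sort()
        lo = draws[int(boot * alpha / 2)]
        hi = draws[int(boot * (1 - alpha / 2)) - 1]
        hits += lo > 0 or hi < 0
    return hits / reps

power = mc_power(n=100, a=0.4, b=0.4, c=0.1)
```

No normality is assumed anywhere in the loop, which is the point of bootstrapping the indirect effect instead of applying the Sobel formula.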
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
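The core of such a simulator, propagating uncertain blast inputs through the Kuznetsov mean-size formula and the Rosin-Rammler size distribution, can be sketched as follows. The input ranges, charge mass, screen size, and uniformity index below are illustrative assumptions, not the site data of the study.

```python
import math, random

random.seed(4)

def kuz_ram_x50(A, K, Q, E):
    """Kuznetsov mean fragment size (cm): A rock factor, K powder factor
    (kg/m^3), Q explosive mass per hole (kg), E relative weight strength (%)."""
    return A * K ** -0.8 * Q ** (1 / 6) * (115.0 / E) ** (19 / 30)

def fraction_passing(x, x50, n):
    """Rosin-Rammler cumulative fraction passing a screen of size x (cm)."""
    xc = x50 / math.log(2) ** (1 / n)      # characteristic size
    return 1.0 - math.exp(-((x / xc) ** n))

# Monte Carlo propagation of (assumed) input uncertainty
passing = []
for _ in range(5000):
    A = random.uniform(6.0, 8.0)           # rock factor: illustrative range
    K = random.gauss(0.6, 0.05)            # powder factor: illustrative spread
    x50 = kuz_ram_x50(A, K, Q=50.0, E=100.0)
    passing.append(fraction_passing(50.0, x50, n=1.5))  # 50 cm screen

mean_passing = sum(passing) / len(passing)
```

Sampling the inputs rather than fixing them yields a distribution of predicted size curves, which is what allows confidence statements about fragmentation rather than a single deterministic estimate.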
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-10-01
Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
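The baseline SIR particle filter that this work improves upon can be sketched on a toy scalar state-space model. This is not the WEP hydrologic model: the AR(1) state dynamics, Gaussian noises, and particle count are illustrative assumptions chosen so the filter's behavior is easy to verify.

```python
import math, random

random.seed(5)

N = 500                        # number of particles
Q_SD, R_SD = 0.5, 0.5          # process / observation noise std. dev.

def sir_filter(observations):
    """Sequential importance resampling for a toy AR(1) state-space model."""
    particles = [random.gauss(0.0, 1.0) for _ in range(N)]
    estimates = []
    for y in observations:
        # propagate each particle through the state model
        particles = [0.9 * p + random.gauss(0.0, Q_SD) for p in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / R_SD) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # systematic resampling to combat weight degeneracy
        u = random.random()
        cumw, j, resampled = weights[0], 0, []
        for i in range(N):
            pos = (i + u) / N
            while cumw < pos and j < N - 1:
                j += 1
                cumw += weights[j]
            resampled.append(particles[j])
        particles = resampled
    return estimates

# synthetic truth and noisy observations
x, truth = 2.0, []
for _ in range(50):
    x = 0.9 * x + random.gauss(0.0, Q_SD)
    truth.append(x)
obs = [t + random.gauss(0.0, R_SD) for t in truth]
est = sir_filter(obs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
```

The lagged regularized filter of the paper replaces the plain resampling step with a kernel-regularized MCMC move and delays the weight update until slow state variables have responded.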
A Monte Carlo model for 3D grain evolution during welding
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-09-01
Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
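The Potts Monte Carlo mechanism underlying such grain-structure models can be sketched in two dimensions: each lattice site carries a grain orientation, and Metropolis flips that reduce the number of unlike-neighbor bonds coarsen the grains. The grid size, number of orientations, and zero-temperature acceptance rule below are illustrative, not the SPPARKS weld model's parameters.

```python
import math, random

random.seed(6)

L, Q = 32, 8                   # grid size, number of grain orientations
grid = [[random.randrange(Q) for _ in range(L)] for _ in range(L)]

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def boundary_energy():
    """Count unlike-neighbor bonds once each (right and down neighbors)."""
    e = 0
    for i in range(L):
        for j in range(L):
            e += grid[i][j] != grid[(i + 1) % L][j]
            e += grid[i][j] != grid[i][(j + 1) % L]
    return e

def site_energy(i, j, s):
    """Unlike-neighbor bonds around site (i, j) if it held orientation s."""
    return sum(s != grid[ni][nj] for ni, nj in neighbors(i, j))

def mc_step(kT):
    """One Metropolis attempt: adopt a random neighbor's orientation."""
    i, j = random.randrange(L), random.randrange(L)
    ni, nj = random.choice(neighbors(i, j))
    new = grid[ni][nj]
    dE = site_energy(i, j, new) - site_energy(i, j, grid[i][j])
    if dE <= 0 or (kT > 0 and random.random() < math.exp(-dE / kT)):
        grid[i][j] = new

e_start = boundary_energy()
for _ in range(20000):
    mc_step(kT=0.0)
e_end = boundary_energy()      # boundary energy drops as grains coarsen
```

In the weld model, the melted pool region would be re-randomized as it passes and the flip probability scaled by the local temperature field, so solidification fronts and heat-affected-zone coarsening emerge from the same basic move.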
Bakhet, Nady; Hussein, Tarek
2015-01-01
Large extra dimensions models have been proposed to remove the hierarchy problem and to explain why gravity is so much weaker than the other three forces. In this work, we present an analysis of Monte Carlo data events for new physics signatures of the spin-2 graviton in the context of the ADD model with total dimension $D=4+\delta$, $\delta = 1,2,3,4,5,6$, where $\delta$ is the number of extra spatial dimensions. This model involves missing momentum $P_{T}^{miss}$ in association with a jet in the final state via the process $pp(\bar{p}) \rightarrow G+jet$. We also present an analysis in the context of the RS model with 5 dimensions via the process $pp(\bar{p}) \rightarrow G+jet$, $G \rightarrow e^{+}e^{-}$, with final state $e^{+}e^{-}+jet$. We used the Monte Carlo event generator Pythia8 to produce efficient signal selection rules at the Large Hadron Collider with $\sqrt{s}=14$ TeV and at the Tevatron with $\sqrt{s}=1.96$ TeV.
Contribution of Monte-Carlo modeling for understanding long-term behavior of nuclear glasses
Energy Technology Data Exchange (ETDEWEB)
Minet, Y.; Ledieu, A.; Devreux, F.; Barboux, P.; Frugier, P.; Gin, S.
2004-07-01
Monte-Carlo methods have been developed at CEA and Ecole Polytechnique to improve our understanding of the basic mechanisms that control glass dissolution kinetics. The models, based on dissolution and recondensation rates of the atoms, can reproduce the observed alteration rates and the evolutions of the alteration layers on simplified borosilicate glasses (based on SiO{sub 2}-B{sub 2}O{sub 3}-Na{sub 2}O) over a large range of compositions and alteration conditions. The basic models are presented, as well as their current evolutions to describe more complex glasses (introduction of Al, Zr, Ca oxides) and to take into account phenomena which may be predominant in the long run (such as diffusion in the alteration layer or secondary phase precipitation). The predictions are compared with the observations performed by techniques giving structural or textural information on the alteration layer (e.g. NMR, Small Angle X-ray Scattering). The paper concludes with proposals for further evolutions of Monte-Carlo models towards integration into a predictive modeling framework. (authors)
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force, and energy entanglement is neglected. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L}, μ{sub R}), and the deviations of the left and right curves (σ{sub L}, σ{sub R}). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability, whereas variation in the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
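The double-Gaussian construction described in this abstract can be sketched with a short Monte Carlo sampler. This is an illustrative sketch only: the peak positions, widths, and total mass number below are assumed values, not the paper's fitted parameters, and the scission-point and iteration-coefficient machinery is omitted.

```python
import random

def sample_fission_yields(n_events, mu_l, sigma_l, mu_r, sigma_r, a_total=236):
    """Sample fragment mass numbers from a double-Gaussian yield curve.

    Each event picks the light or heavy peak with equal probability,
    draws a mass from that Gaussian, and assigns the complementary mass
    to the partner fragment so that the total nucleon number is conserved.
    """
    yields = []
    for _ in range(n_events):
        if random.random() < 0.5:
            a_light = round(random.gauss(mu_l, sigma_l))   # light peak
        else:
            a_light = a_total - round(random.gauss(mu_r, sigma_r))  # heavy peak
        yields.append((a_light, a_total - a_light))
    return yields

random.seed(0)
# Illustrative peak parameters (hypothetical, not the paper's values)
events = sample_fission_yields(10000, mu_l=96, sigma_l=6, mu_r=140, sigma_r=6)
light_avg = sum(min(pair) for pair in events) / len(events)
```

With symmetric peak choices the histogram of `events` reproduces the familiar two-humped yield curve, and `light_avg` lands near the light-peak mean.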
Energy Technology Data Exchange (ETDEWEB)
Mansoori, Zohreh; Saffar-Avval, Majid; Basirat-Tabrizi, Hassan; Ahmadi, Goodarz; Lain, Santiago
2002-12-01
A thermo-mechanical turbulence model is developed and used for predicting heat transfer in a gas-solid flow through a vertical pipe with constant wall heat flux. The new four-way interaction model makes use of the thermal k{sub {theta}}-{tau}{sub {theta}} equations, in addition to the hydrodynamic k-{tau} transport, and accounts for the particle-particle and particle-wall collisions through a Eulerian/Lagrangian formulation. The simulation results indicate that the level of thermal turbulence intensity and the heat transfer are strongly affected by the particle collisions. Inter-particle collisions attenuate the thermal turbulence intensity near the wall but somewhat amplify the temperature fluctuations in the pipe core region. The hydrodynamic-to-thermal time-scale ratio and the turbulent Prandtl number in the region near the wall increase due to the inter-particle collisions. The results also show that the use of a constant or the single-phase gas turbulent Prandtl number produces error in the thermal eddy diffusivity and thermal turbulent intensity fields. Simulation results also indicate that the inter-particle contact heat conduction during collision has no significant effect in the range of Reynolds number and particle diameter studied.
Monte Carlo simulation of Prussian blue analogs described by Heisenberg ternary alloy model
Yüksel, Yusuf
2015-11-01
Within the framework of Monte Carlo simulation technique, we simulate magnetic behavior of Prussian blue analogs based on Heisenberg ternary alloy model. We present phase diagrams in various parameter spaces, and we compare some of our results with those based on Ising counterparts. We clarify the variations of transition temperature and compensation phenomenon with mixing ratio of magnetic ions, exchange interactions, and exchange anisotropy in the present ferro-ferrimagnetic Heisenberg system. According to our results, thermal variation of the total magnetization curves may exhibit N, L, P, Q, R type behaviors based on the Néel classification scheme.
Rejection-free Monte Carlo algorithms for models with continuous degrees of freedom.
Muñoz, J D; Novotny, M A; Mitchell, S J
2003-02-01
We construct a rejection-free Monte Carlo algorithm for a system with continuous degrees of freedom. We illustrate the algorithm by applying it to the classical three-dimensional Heisenberg model with canonical Metropolis dynamics. We obtain the lifetime of the metastable state following a reversal of the external magnetic field. Our rejection-free algorithm obtains results in agreement with a direct implementation of the Metropolis dynamics and requires orders of magnitude less computational time at low temperatures. The treatment is general and can be extended to other dynamics and other systems with continuous degrees of freedom.
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2008-01-01
…uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining … the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates …
Simulation of low Schottky barrier MOSFETs using an improved Multi-subband Monte Carlo model
Gudmundsson, Valur; Palestri, Pierpaolo; Hellström, Per-Erik; Selmi, Luca; Östling, Mikael
2013-01-01
We present a simple and efficient approach to implement Schottky barrier contacts in a Multi-subband Monte Carlo simulator by using the subband smoothening technique to mimic tunneling at the Schottky junction. In the absence of scattering, simulation results for Schottky barrier MOSFETs are in agreement with ballistic Non-Equilibrium Green's Functions calculations. We then include the most relevant scattering mechanisms, and apply the model to the study of double gate Schottky barrier MOSFETs representative of the ITRS 2015 high performance device. Results show that a Schottky barrier height of less than approximately 0.15 eV is required to outperform the doped source/drain structure.
Chemical Potential of Benzene Fluid from Monte Carlo Simulation with Anisotropic United Atom Model
Directory of Open Access Journals (Sweden)
Mahfuzh Huda
2013-07-01
The profile of the chemical potential of benzene fluid has been investigated using the Anisotropic United Atom (AUA) model. A Monte Carlo simulation in the canonical ensemble was performed to obtain the isotherm of benzene fluid, from which the excess part of the chemical potential was calculated. A surge of potential energy is observed during the simulation at high temperature, which is related to the gas-liquid phase transition. The isotherm profile indicates the tendency of benzene to condense due to the strong attractive interaction. The results show that the chemical potential of benzene rapidly deviates from its ideal-gas counterpart even at low density.
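The abstract does not state how the excess chemical potential is extracted from the canonical-ensemble run; a common route is Widom test-particle insertion, sketched below. The Lennard-Jones pair potential here is a stand-in for the AUA benzene interaction, and all numerical parameters are illustrative assumptions.

```python
import math
import random

def widom_excess_mu(config, beta, box, n_insertions=2000, eps=1.0, sigma=1.0):
    """Estimate the excess chemical potential by Widom insertion:
    mu_ex = -(1/beta) * ln < exp(-beta * dU) >, averaged over random
    trial insertions of a test particle into the fixed configuration."""
    def pair_u(r2):
        # Lennard-Jones pair energy (stand-in for the AUA benzene model)
        s6 = (sigma * sigma / r2) ** 3
        return 4.0 * eps * (s6 * s6 - s6)

    acc = 0.0
    for _ in range(n_insertions):
        test = [random.uniform(0.0, box) for _ in range(3)]
        du = 0.0
        for atom in config:
            # Minimum-image squared distance in a cubic periodic box
            r2 = sum(((t - a + box / 2.0) % box - box / 2.0) ** 2
                     for t, a in zip(test, atom))
            du += pair_u(max(r2, 1e-12))
        acc += math.exp(-beta * du)
    return -math.log(acc / n_insertions) / beta

random.seed(3)
config = [[random.uniform(0.0, 5.0) for _ in range(3)] for _ in range(20)]
mu_ex = widom_excess_mu(config, beta=1.0, box=5.0, n_insertions=500)
```

For an empty box the estimator returns zero exactly (the ideal-gas limit), which is a convenient sanity check on an implementation.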
A threaded Java concurrent implementation of the Monte-Carlo Metropolis Ising model.
Castañeda-Marroquín, Carlos; de la Puente, Alfonso Ortega; Alfonseca, Manuel; Glazier, James A; Swat, Maciej
2009-06-01
This paper describes a concurrent Java implementation of the Metropolis Monte-Carlo algorithm that is used in 2D Ising model simulations. The presented method uses threads, monitors, shared variables and high-level concurrent constructs that hide the low-level details. In our algorithm we assign one thread to handle one spin flip attempt at a time. We use a special lattice-site selection algorithm to prevent two or more threads from working concurrently in a region of the lattice that "belongs" to two or more different spins undergoing spin-flip transformation. Our approach does not depend on the current platform and maximizes concurrent use of the available resources.
Variational Monte Carlo study of magnetic states in the periodic Anderson model
Kubo, Katsunori
2015-03-01
We study the magnetic states of the periodic Anderson model with a finite Coulomb interaction between f electrons on a square lattice by applying the variational Monte Carlo method. We consider Gutzwiller wavefunctions for the paramagnetic, antiferromagnetic, ferromagnetic, and charge density wave states. We find an antiferromagnetic phase around half-filling. There is a phase transition accompanied by a change in the Fermi-surface topology within this antiferromagnetic phase. We also study a case away from half-filling, and find a ferromagnetic state as the ground state there.
Studies on top-quark Monte Carlo modelling for Top2016
The ATLAS collaboration
2016-01-01
This note summarises recent studies on Monte Carlo simulation setups of top-quark pair production used by the ATLAS experiment and presents a new method to deal with interference effects for the $Wt$ single-top-quark production which is compared against previous techniques. The main focus for the top-quark pair production is on the improvement of the modelling of the Powheg generator interfaced to the Pythia8 and Herwig7 shower generators. The studies are done using unfolded data at centre-of-mass energies of 7, 8, and 13 TeV.
Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport
Dureau, David; Poëtte, Gaël
2014-06-01
This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, thanks to both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousands of cores on the petaflop supercomputer Tera100.
A study of potential energy curves from the model space quantum Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)
2015-12-07
We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.
High-Performance Computer Modeling of the Cosmos-Iridium Collision
Energy Technology Data Exchange (ETDEWEB)
Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W
2009-08-28
This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.
A Simple Quantum Model of Ultracold Polar Molecule Collisions
Idziaszek, Zbigniew; Bohn, John L; Julienne, Paul S
2010-01-01
We present a unified formalism for describing chemical reaction rates of trapped, ultracold molecules. This formalism reduces the scattering to its essential features, namely, a propagation of the reactant molecules through a gauntlet of long-range forces before they ultimately encounter one another, followed by a probability for the reaction to occur once they do. In this way, the electric-field dependence should be readily parametrized in terms of a pair of fitting parameters (along with a $C_6$ coefficient) for each asymptotic value of partial wave quantum numbers $|L,M \\rangle$. From this, the electric field dependence of the collision rates follows automatically. We present examples for reactive species such as KRb, and non-reactive species, such as RbCs.
Energy Technology Data Exchange (ETDEWEB)
Zhou, Qi-Dong [Nagoya University, Institute for Space-Earth Environmental Research, Nagoya (Japan); Itow, Yoshitaka; Sako, Takashi [Nagoya University, Institute for Space-Earth Environmental Research, Nagoya (Japan); Nagoya University, Kobayashi-Maskawa Institute, Nagoya (Japan); Menjo, Hiroaki [Nagoya University, Graduate School of Science, Nagoya (Japan)
2017-04-15
Very forward (VF) detectors in hadron colliders, having unique sensitivity to diffractive processes, can be a powerful tool for studying diffractive dissociation by combining them with central detectors. Several Monte Carlo simulation samples in p-p collisions at √(s) = 13 TeV were analyzed, and different nondiffractive and diffractive contributions were clarified through differential cross sections of forward neutral particles. Diffraction selection criteria in the VF-triggered-event samples were determined by using the central track information. The corresponding selection applicable in real experiments has ∼ 100% purity and 30-70% efficiency. Consequently, the central information enables classification of the forward productions into diffraction and nondiffraction categories; in particular, most of the surviving events from the selection belong to low-mass diffraction events at log{sub 10}(ξ{sub x}) < -5.5. Therefore, the combined method can uniquely access the low-mass diffraction regime experimentally. (orig.)
Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations
Reims, N.; Sukowski, F.; Uhlmann, N.
2011-01-01
Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time-consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour for different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed that matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light-spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time-consuming, both models have in common that only a relatively small number of system manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.
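The cascade idea mentioned above (a detector as a chain of elemental stages) can be illustrated with stochastic gain stages alone; scattering stages and the specific gains of the C9312SK are omitted, so the stage values below are purely hypothetical.

```python
import math
import random

def cascade_detector(n_quanta, stage_gains, rng=random):
    """Propagate quanta through a cascade of elemental gain stages:
    each stage multiplies every incoming quantum by an independent
    Poisson-distributed gain (e.g. X-ray -> light photons -> electrons)."""
    def poisson(lam):
        # Knuth's multiplication method; adequate for small mean gains
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    quanta = n_quanta
    for gain in stage_gains:
        quanta = sum(poisson(gain) for _ in range(quanta))
    return quanta

random.seed(4)
# 50 absorbed X-rays, a gain-3 conversion stage, then 80% coupling efficiency
avg = sum(cascade_detector(50, [3.0, 0.8]) for _ in range(200)) / 200
```

The mean output of such a cascade is the input times the product of the stage gains, while the variance grows stage by stage, which is exactly what cascade theory tracks analytically.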
From p+p to Pb+Pb Collisions: Wounded Nucleon versus Statistical Models
Gazdzicki, Marek
2013-01-01
System size dependence of hadron production properties is discussed within the Wounded Nucleon Model and the Statistical Model in the grand canonical, canonical and micro-canonical formulations. Similarities and differences between predictions of the models related to the treatment of conservation laws are exposed. The need for models that combine a hydrodynamic-like expansion with conservation laws obeyed in individual collisions is stressed.
Littlest Higgs model with T-parity and single top production in ep collisions
Institute of Scientific and Technical Information of China (English)
WEN Jia; YUE Chong-Xing; LIU Jin-Yan; LIU Wei
2009-01-01
Based on calculating the contributions of the littlest Higgs model with T-parity (called LHT model) to the anomalous top coupling tqγ (q=u or c), we consider single top production via the t-channel partonic process eq → et in ep collisions. Our numerical results show that the production cross section in the LHT model can be significantly enhanced relative to that in the standard model (SM).
DEFF Research Database (Denmark)
Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.
2002-01-01
A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge, it is shown analytically that the applicability of an MC approach to this optical geometry is firmly justified because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated, from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light … focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently …
Collectivity in Heavy Nuclei in the Shell Model Monte Carlo Approach
Özen, C; Nakada, H
2013-01-01
The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase tran...
Hertog, Maarten L. A. T. M.; Scheerlinck, Nico; Nicolaï, Bart M.
2009-01-01
When modelling the behaviour of horticultural products, which exhibit large sources of biological variation, we often run into the issue of non-Gaussian distributed model parameters. This work presents an algorithm to reproduce such correlated non-Gaussian model parameters for use with Monte Carlo simulations. The algorithm works around the problem of non-Gaussian distributions by transforming the observed non-Gaussian probability distributions using a proposed SKN-distribution function before applying the covariance decomposition algorithm to generate Gaussian random co-varying parameter sets. The proposed SKN-distribution function is based on the standard Gaussian distribution function and can exhibit different degrees of both skewness and kurtosis. This technique is demonstrated using a case study on modelling the ripening of tomato fruit, evaluating the propagation of biological variation with time.
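The core recipe (decompose the covariance, draw correlated Gaussians, then push each marginal through a monotone transform) can be sketched as follows. The exponential transform below is a generic stand-in for the paper's SKN distribution, and the correlation value is an assumed example.

```python
import math
import random

def cholesky_2x2(var1, var2, rho):
    """Cholesky factor of a 2x2 covariance matrix with correlation rho."""
    s1, s2 = math.sqrt(var1), math.sqrt(var2)
    return [[s1, 0.0],
            [rho * s2, s2 * math.sqrt(1.0 - rho * rho)]]

def correlated_skewed_pair(chol, skew=0.5):
    """Draw a correlated Gaussian pair via the Cholesky factor, then skew
    each marginal with a monotone exp-transform (stand-in for the SKN
    transform proposed in the paper)."""
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    g1 = chol[0][0] * z1
    g2 = chol[1][0] * z1 + chol[1][1] * z2
    return (math.exp(skew * g1), math.exp(skew * g2))

random.seed(2)
chol = cholesky_2x2(1.0, 1.0, rho=0.8)
samples = [correlated_skewed_pair(chol) for _ in range(20000)]
```

Because the transform is monotone, the rank correlation imposed in Gaussian space survives the skewing step, which is the property the SKN approach relies on.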
Monte Carlo Simulation of a Novel Classical Spin Model with a Tricritical Point
Cary, Tyler; Scalettar, Richard; Singh, Rajiv
Recent experimental findings along with motivation from the well known Blume-Capel model has led to the development of a novel two-dimensional classical spin model defined on a square lattice. This model consists of two Ising spin species per site with each species interacting with its own kind as perpendicular one dimensional Ising chains along with complex and frustrating interactions between species. Probing this model with Mean Field Theory, Metropolis Monte Carlo, and Wang Landau sampling has revealed a rich phase diagram which includes a tricritical point separating a first order magnetic phase transition from a continuous one, along with three ordered phases. Away from the tricritical point, the expected 2D Ising critical exponents have been recovered. Ongoing work focuses on finding the tricritical exponents and their connection to a supersymmetric critical point.
Iterative optimisation of Monte Carlo detector models using measurements and simulations
Energy Technology Data Exchange (ETDEWEB)
Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)
2015-04-11
This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: the acquisition of laboratory measurement data to be used as a reference; the modification of a previously available detector model; the simulation of a tentative detector model to obtain the coefficients of a set of linear equations; and the solution of the system of equations to update the detector model. Steps three and four can be repeated for more accurate results. This method avoids the "try and fail" approach typical of prior techniques.
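Steps three and four (fit a linear system relating parameter changes to the residual against the reference, then update) can be sketched generically. The 3-channel, 2-parameter detector response below is entirely hypothetical; for a truly linear, noise-free system a single iteration already recovers the reference parameters.

```python
def solve_2x2(m, v):
    """Solve a 2x2 linear system m @ x = v by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - m[0][1] * v[1]) / det,
            (m[0][0] * v[1] - v[0] * m[1][0]) / det]

def update_parameters(design, reference, params):
    """One iteration of steps 3-4: simulate with the tentative parameters,
    build the least-squares normal equations for the residual, solve, update."""
    simulated = [sum(a * p for a, p in zip(row, params)) for row in design]
    residual = [r - s for r, s in zip(reference, simulated)]
    # Normal equations: (A^T A) delta = A^T residual
    ata = [[sum(row[i] * row[j] for row in design) for j in range(2)]
           for i in range(2)]
    atr = [sum(row[i] * res for row, res in zip(design, residual))
           for i in range(2)]
    delta = solve_2x2(ata, atr)
    return [p + d for p, d in zip(params, delta)]

# Hypothetical 3-channel detector response with 2 free model parameters
design = [[1.0, 0.5], [0.3, 2.0], [0.8, 0.8]]
true_params = [1.2, 0.7]
reference = [sum(a * p for a, p in zip(row, true_params)) for row in design]
params = update_parameters(design, reference, [1.0, 1.0])
```

With real measurement noise and a detector response that is only locally linear, the update would be repeated until the residual stops shrinking, which is exactly the iteration the paper automates.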
A probabilistic model for hydrokinetic turbine collision risks: exploring impacts on fish.
Hammar, Linus; Eggertsen, Linda; Andersson, Sandra; Ehnberg, Jimmy; Arvidsson, Rickard; Gullström, Martin; Molander, Sverker
2015-01-01
A variety of hydrokinetic turbines are currently under development for power generation in rivers, tidal straits and ocean currents. Because some of these turbines are large, with rapidly moving rotor blades, the risk of collision with aquatic animals has been brought to attention. The behavior and fate of animals that approach such large hydrokinetic turbines have not yet been monitored at any detail. In this paper, we conduct a synthesis of the current knowledge and understanding of hydrokinetic turbine collision risks. The outcome is a generic fault tree based probabilistic model suitable for estimating population-level ecological risks. New video-based data on fish behavior in strong currents are provided and models describing fish avoidance behaviors are presented. The findings indicate low risk for small-sized fish. However, at large turbines (≥5 m), bigger fish seem to have high probability of collision, mostly because rotor detection and avoidance is difficult in low visibility. Risks can therefore be substantial for vulnerable populations of large-sized fish, which thrive in strong currents. The suggested collision risk model can be applied to different turbine designs and at a variety of locations as basis for case-specific risk assessments. The structure of the model facilitates successive model validation, refinement and application to other organism groups such as marine mammals.
The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012
Keen, David A.; Pusztai, László
2013-11-01
This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since
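The RMC principle described above (accept atomic moves that improve agreement with measured data) can be shown in miniature. This sketch fits a crude pair-distance histogram rather than a real structure factor or PDF, and all sizes, bin counts, and tolerances are illustrative assumptions.

```python
import math
import random

def pair_histogram(cfg, nbins=10, rmax=5.0):
    """Histogram of pair distances: a crude stand-in for a measured PDF."""
    counts = [0] * nbins
    for a in range(len(cfg)):
        for b in range(a + 1, len(cfg)):
            d = math.dist(cfg[a], cfg[b])
            if d < rmax:
                counts[int(d / rmax * nbins)] += 1
    return counts

def rmc_step(config, target, sigma=0.05, max_move=0.5, box=10.0):
    """One reverse Monte Carlo move: displace a random atom, accept if the
    chi^2 misfit to the target data decreases (or with Boltzmann-like
    probability otherwise), and restore the old position on rejection."""
    def chi2(cfg):
        return sum((m - t) ** 2
                   for m, t in zip(pair_histogram(cfg), target)) / sigma ** 2

    old = chi2(config)
    i = random.randrange(len(config))
    saved = config[i]
    config[i] = tuple((c + random.uniform(-max_move, max_move)) % box
                      for c in saved)
    new = chi2(config)
    if new > old and random.random() >= math.exp((old - new) / 2.0):
        config[i] = saved  # reject: restore the old position
        return old
    return new

random.seed(5)
reference = [tuple(random.uniform(0, 10.0) for _ in range(3)) for _ in range(15)]
target = pair_histogram(reference)  # plays the role of the experimental data
config = [tuple(random.uniform(0, 10.0) for _ in range(3)) for _ in range(15)]
chis = [rmc_step(config, target) for _ in range(1500)]
```

A production RMC code differs mainly in scale and in the data terms (total scattering structure factors, PDFs, EXAFS, Bragg intensities), but the accept/reject loop has this shape.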
Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G
2013-05-20
A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, are causing about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
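The calibration machinery described above can be sketched in miniature. The fragment below runs a random-walk Metropolis sampler against a closed-form logistic curve standing in for the HPV ODE system; the likelihood, priors, step sizes, and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def logistic(t, r, k, y0=0.01):
    # Closed-form logistic growth, a stand-in for the HPV ODE system.
    return k / (1 + (k / y0 - 1) * np.exp(-r * t))

def metropolis(data_t, data_y, n_iter=2000, sigma=0.05, seed=0):
    """Random-walk Metropolis for (r, k) with a Gaussian observation error
    and a flat positivity prior (a sketch, not the paper's adaptive MCMC)."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.5, 0.5])              # initial guess for (r, k)
    def log_post(th):
        r, k = th
        if r <= 0 or k <= 0:
            return -np.inf                    # reject non-positive parameters
        resid = data_y - logistic(data_t, r, k)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, 0.05, size=2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

t = np.linspace(0, 10, 20)
y = logistic(t, r=1.0, k=0.8)                 # synthetic "epidemic" data
chain = metropolis(t, y)
r_hat, k_hat = chain[1000:].mean(axis=0)      # posterior means after burn-in
```

With noiseless synthetic data the posterior means land near the generating values (r = 1.0, k = 0.8); the paper's adaptive scheme additionally tunes the proposal covariance on the fly.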
Clinical management and burden of prostate cancer: a Markov Monte Carlo model.
Directory of Open Access Journals (Sweden)
Chiranjeev Sanyal
Full Text Available BACKGROUND: Prostate cancer (PCa) is the most common non-skin cancer among men in developed countries. Several novel treatments have been adopted by healthcare systems to manage PCa. Most of the observational studies and randomized trials on PCa have concurrently evaluated fewer treatments over short follow-up. Further, preceding decision analytic models on PCa management have not evaluated various contemporary management options. Therefore, a contemporary decision analytic model was necessary to address limitations to the literature by synthesizing the evidence on novel treatments, thereby forecasting short- and long-term clinical outcomes. OBJECTIVES: To develop and validate a Markov Monte Carlo model for the contemporary clinical management of PCa, and to assess the clinical burden of the disease from diagnosis to end-of-life. METHODS: A Markov Monte Carlo model was developed to simulate the management of PCa in men 65 years and older from diagnosis to end-of-life. Health states modeled were: risk at diagnosis, active surveillance, active treatment, PCa recurrence, PCa recurrence free, metastatic castrate-resistant prostate cancer, overall and PCa death. Treatment trajectories were based on state transition probabilities derived from the literature. Validation and sensitivity analyses assessed the accuracy and robustness of model-predicted outcomes. RESULTS: Validation indicated model-predicted rates were comparable to observed rates in the published literature. The simulated distribution of clinical outcomes for the base case was consistent with sensitivity analyses. Predicted rates of clinical outcomes and mortality varied across risk groups. Life expectancy and health-adjusted life expectancy predicted for the simulated cohort were 20.9 years (95% CI 20.5-21.3) and 18.2 years (95% CI 17.9-18.5), respectively. CONCLUSION: Study findings indicated contemporary management strategies improved survival and quality of life in patients with PCa. This
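As a toy illustration of such a state-transition microsimulation, the sketch below walks simulated patients through a reduced set of health states with annual cycles; all transition probabilities are invented placeholders, not the literature-derived values used in the paper.

```python
import random

# Hypothetical annual transition probabilities between simplified health
# states; the published model uses more states and literature-derived rates.
TRANSITIONS = {
    "diagnosis":       [("surveillance", 0.4), ("treatment", 0.6)],
    "surveillance":    [("treatment", 0.15), ("surveillance", 0.83), ("death", 0.02)],
    "treatment":       [("recurrence_free", 0.7), ("recurrence", 0.25), ("death", 0.05)],
    "recurrence_free": [("recurrence_free", 0.95), ("recurrence", 0.03), ("death", 0.02)],
    "recurrence":      [("mCRPC", 0.3), ("recurrence", 0.6), ("death", 0.1)],
    "mCRPC":           [("mCRPC", 0.7), ("death", 0.3)],
}

def simulate_patient(rng, max_cycles=40):
    """One Monte Carlo walk from diagnosis to death (or the 40-cycle cap)."""
    state, years = "diagnosis", 0
    while state != "death" and years < max_cycles:
        states, probs = zip(*TRANSITIONS[state])
        state = rng.choices(states, probs)[0]   # weighted state transition
        years += 1
    return years

rng = random.Random(42)
life_years = [simulate_patient(rng) for _ in range(5000)]
mean_years = sum(life_years) / len(life_years)
```

Averaging over many simulated patients yields cohort-level quantities such as life expectancy, which is how the Markov Monte Carlo model produces its predicted outcomes.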
Double pendulum model for a tennis stroke including a collision process
Youn, Sun-Hyun
2015-10-01
By adding a collision process between the ball and racket to the double pendulum model, we analyzed the tennis stroke. The ball-racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket. A higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that a properly time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add to the angular velocity of the racket.
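The role of racket motion during impact can already be seen in a one-dimensional restitution model: the rebound speed depends on the racket speed, the mass ratio, and the restitution coefficient together, not on racket speed alone. The masses and restitution coefficient below are illustrative guesses, not values from the paper.

```python
def rebound_speed(u_ball, u_racket, m_ball=0.057, m_racket=0.30, e=0.75):
    """Post-impact ball velocity for a 1-D two-body collision with
    coefficient of restitution e (standard impulse-momentum result).
    Masses (kg) and e are illustrative, not fitted to the paper."""
    m1, m2 = m_ball, m_racket
    return (m1 * u_ball + m2 * u_racket + m2 * e * (u_racket - u_ball)) / (m1 + m2)

# Ball approaching at -20 m/s, racket head moving forward at +25 m/s:
v_out = rebound_speed(-20.0, 25.0)
```

The full double pendulum model replaces the fixed racket speed with the instantaneous head speed produced by the shoulder and elbow rotations, which is why the timing of the racket rotation matters.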
D-meson observables in heavy-ion collisions at LHC with EPOSHQ model
Directory of Open Access Journals (Sweden)
Ozvenchuk Vitalii
2016-01-01
Full Text Available We study the propagation of charm quarks in the quark-gluon plasma (QGP) created in ultrarelativistic heavy-ion collisions at the LHC within the EPOSHQ model. The interactions of heavy quarks with the light partons through collisional and radiative processes lead to a large suppression of the final D-meson spectra at high transverse momentum and a finite D-meson elliptic flow. Our results are in good agreement with the available experimental data.
Dynamical Analysis of Sputtering at Threshold Energy Range: Modelling of Ar+Ni(100) Collision System
Institute of Scientific and Technical Information of China (English)
HUNDUR Yakup; GÜVENÇ Ziya B; HIPPLER Rainer
2008-01-01
The sputtering process of the Ar+Ni(100) collision system is investigated by means of constant-energy molecular dynamics simulations. The Ni(100) slab is mimicked by an embedded-atom potential, and the interaction between the projectile and the surface is modelled by using the reparametrized ZBL potential. Ni atom emission from the lattice is analysed over the range of 20-50 eV collision energy. Sputtering yield, angular and energy distributions of the scattered Ar and of the sputtered Ni atoms are calculated and compared to the available theoretical and experimental data.
Towards a construction of inclusive collision cross-sections in massless Nelson's model
Dybalski, Wojciech
2011-01-01
The conventional approach to the infrared problem in perturbative quantum electrodynamics relies on the concept of inclusive collision cross-sections. A non-perturbative variant of this notion was introduced in algebraic quantum field theory. Relying on these insights, we take first steps towards a non-perturbative construction of inclusive collision cross-sections in massless Nelson's model. We show that our proposal is consistent with the standard scattering theory in the absence of the infrared problem and discuss its status in the infrared-singular case.
Kinetic Monte-Carlo modeling of hydrogen retention and re-emission from Tore Supra deposits
Energy Technology Data Exchange (ETDEWEB)
Rai, A. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany)], E-mail: Abha.Rai@ipp.mpg.de; Schneider, R. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany); Warrier, M. [Computational Analysis Division, BARC, Trombay, Mumbai 400085 (India); Roubin, P.; Martin, C.; Richou, M. [PIIM, Universite de Provence, Centre Saint-Jerome, (service 242) F-13397 Marseille cedex 20 (France)
2009-04-30
A multi-scale model has been developed to study the reactive-diffusive transport of hydrogen in porous graphite [A. Rai, R. Schneider, M. Warrier, J. Nucl. Mater. (submitted for publication). http://dx.doi.org/10.1016/j.jnucmat.2007.08.013]. The deposits found on the leading edge of the neutralizer of Tore Supra are multi-scale in nature, consisting of micropores with a typical size below 2 nm (~11%), mesopores (~5%), and macropores with a typical size above 50 nm [C. Martin, M. Richou, W. Sakaily, B. Pegourie, C. Brosset, P. Roubin, J. Nucl. Mater. 363-365 (2007) 1251]. Kinetic Monte-Carlo (KMC) has been used to study the hydrogen transport at meso-scales. The recombination rate and the diffusion coefficient calculated at the meso-scale were used as input to scale up and analyze the hydrogen transport at the macro-scale. A combination of KMC and MCD (Monte-Carlo diffusion) methods was used at macro-scales. The flux dependence of hydrogen recycling has been studied. The retention and re-emission analysis of the model has been extended to study the chemical erosion process based on the Kueppers-Hopf cycle [M. Wittmann, J. Kueppers, J. Nucl. Mater. 227 (1996) 186].
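The meso-scale KMC step can be illustrated with the standard residence-time algorithm: at each step an event is chosen with probability proportional to its rate, and the clock advances by an exponentially distributed waiting time. The two event rates below (hopping vs. surface recombination) are made-up numbers; for this simple two-event system, the mean first-passage time to recombination should come out near 1/rate_recombine.

```python
import math, random

def kmc_first_passage(rates, n_walkers=2000, seed=1):
    """Residence-time (BKL/Gillespie) kinetic Monte Carlo for one particle,
    averaged over n_walkers independent histories. `rates` maps event
    names to rates in s^-1; values here are illustrative only."""
    rng = random.Random(seed)
    total = sum(rates.values())
    events, cum, acc = list(rates), [], 0.0
    for ev in events:                      # cumulative selection table
        acc += rates[ev] / total
        cum.append(acc)
    times = []
    for _ in range(n_walkers):
        t = 0.0
        while True:
            t += -math.log(rng.random()) / total   # exponential waiting time
            u = rng.random()
            ev = next((e for e, c in zip(events, cum) if u <= c), events[-1])
            if ev == "recombine":                  # absorbing event
                break
        times.append(t)
    return sum(times) / len(times)

# Hydrogen hopping between trap sites vs. recombining at the surface:
mean_t = kmc_first_passage({"hop": 1e6, "recombine": 1e4})
```

In the multi-scale scheme, quantities like this mean recombination time feed the macro-scale Monte-Carlo diffusion model as effective rate coefficients.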
A Monte-Carlo based model of the AX-PET demonstrator and its experimental validation.
Solevi, P; Oliver, J F; Gillam, J E; Bolle, E; Casella, C; Chesi, E; De Leo, R; Dissertori, G; Fanti, V; Heller, M; Lai, M; Lustermann, W; Nappi, E; Pauss, F; Rudge, A; Ruotsalainen, U; Schinzel, D; Schneider, T; Séguinot, J; Stapnes, S; Weilhammer, P; Tuna, U; Joram, C; Rafecas, M
2013-08-21
AX-PET is a novel PET detector based on axially oriented crystals and orthogonal wavelength shifter (WLS) strips, both individually read out by silicon photo-multipliers. Its design decouples sensitivity and spatial resolution, by reducing the parallax error due to the layered arrangement of the crystals. Additionally the granularity of AX-PET enhances the capability to track photons within the detector yielding a large fraction of inter-crystal scatter events. These events, if properly processed, can be included in the reconstruction stage further increasing the sensitivity. Its unique features require dedicated Monte-Carlo simulations, enabling the development of the device, interpreting data and allowing the development of reconstruction codes. At the same time the non-conventional design of AX-PET poses several challenges to the simulation and modeling tasks, mostly related to the light transport and distribution within the crystals and WLS strips, as well as the electronics readout. In this work we present a hybrid simulation tool based on an analytical model and a Monte-Carlo based description of the AX-PET demonstrator. It was extensively validated against experimental data, providing excellent agreement.
Optical model for port-wine stain skin and its Monte Carlo simulation
Xu, Lanqing; Xiao, Zhengying; Chen, Rong; Wang, Ying
2008-12-01
Laser irradiation is the most accepted therapy for PWS (port-wine stain) patients at the present time. Its efficacy is highly dependent on the energy deposition rules in skin. To achieve optimal PWS treatment parameters, a better understanding of light propagation in PWS skin is indispensable. Traditional Monte Carlo simulations using simple geometries such as planar layered tissue models cannot provide the energy deposition in skin with enlarged blood vessels. In this paper the structure of normal skin and the pathological character of PWS skin are analyzed in detail, and the true structure is simplified into a hybrid layered mathematical model characterizing the two most important aspects of PWS skin: the layered structure and the overabundant dermal vessels. The basic laser-tissue interaction mechanisms in skin are investigated, along with the optical parameters of PWS skin tissue at the therapeutic wavelength. Monte Carlo (MC) based techniques were chosen to calculate the energy deposition in the skin. The results can be used in choosing the optical dosage. Further simulations can be used to predict optimal laser parameters to achieve high-efficacy laser treatment of PWS.
Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
José Lamartine Távora Junior
2006-12-01
Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with volatility of the GARCH family is supported by the hypothesis of an efficient market. The results showed that the static evaluation is inferior to the dynamic one, evidencing that the dynamic analysis supports the efficient-market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since the model is capable of accommodating the great dynamics of the Brazilian market.
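A minimal sketch of the dynamic VaR calculation: simulate one-step-ahead returns under a GARCH(1,1) variance recursion and read VaR off the empirical loss quantile. The parameters (omega, alpha, beta) and state values are invented for illustration, not coefficients estimated from the actual portfolio.

```python
import random

def mc_var_garch(omega, alpha, beta, last_ret, last_var,
                 n_sims=20000, horizon=1, level=0.95, seed=7):
    """One-step-ahead Value-at-Risk by Monte Carlo under a GARCH(1,1)
    volatility process with Gaussian innovations (illustrative sketch)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        r, h = last_ret, last_var
        for _ in range(horizon):
            h = omega + alpha * r ** 2 + beta * h   # GARCH(1,1) variance update
            r = rng.gauss(0.0, h ** 0.5)            # conditional return draw
        losses.append(-r)                           # loss = negative return
    losses.sort()
    return losses[int(level * n_sims)]              # empirical VaR quantile

var_95 = mc_var_garch(omega=1e-6, alpha=0.08, beta=0.90,
                      last_ret=0.01, last_var=2.5e-4)
```

Re-estimating the GARCH parameters and recomputing this quantile each day is what makes the VaR "dynamic" in the sense of the paper.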
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
Anagnostopoulos, Konstantinos N; Nishimura, Jun
2012-01-01
The IKKT or IIB matrix model has been postulated to be a non-perturbative definition of superstring theory. It has the attractive feature that spacetime is dynamically generated, which makes possible the scenario of dynamical compactification of extra dimensions, which in the Euclidean model manifests as spontaneous breaking of the SO(10) rotational invariance (SSB). In this work we study the six-dimensional version of the Euclidean IIB matrix model using Monte Carlo simulations. Simulations are found to be plagued by a strong complex-action problem, and the factorization method is used for effective sampling and for computing expectation values of the extent of spacetime in various dimensions. Our results are consistent with calculations using the Gaussian expansion method, which predict SSB to SO(3)-symmetric vacua, a finite universal extent of the compactified dimensions, and a finite spacetime volume.
Corner wetting in the two-dimensional Ising model: Monte Carlo results
Albano, E. V.; DeVirgiliis, A.; Müller, M.; Binder, K.
2003-01-01
Square L × L (L = 24-128) Ising lattices with nearest-neighbour ferromagnetic exchange are considered using free boundary conditions at which boundary magnetic fields ±h are applied, i.e., at the two boundary rows ending at the lower left corner a field +h acts, while at the two boundary rows ending at the upper right corner a field -h acts. For temperatures T less than the critical temperature Tc of the bulk, this boundary condition leads to the formation of two domains with opposite orientations of the magnetization direction, separated by an interface which for T larger than the filling transition temperature Tf(h) runs from the upper left corner to the lower right corner, while for T < Tf(h) the interface is localized either close to the lower left corner or close to the upper right corner. Numerous theoretical predictions for the critical behaviour of this 'corner wetting' or 'wedge filling' transition are tested by Monte Carlo simulations. In particular, it is shown that for T = Tf(h) the magnetization profile m(z) in the z-direction normal to the interface is simply linear and the interfacial width scales as w ∝ L, while for T > Tf(h) it scales as w ∝ √L. The distribution P(ℓ) of the interface position ℓ (measured along the z-direction from the corners) decays exponentially for T < Tf(h). Furthermore, the Monte Carlo data are compatible with ⟨ℓ⟩ ∝ (Tf(h) - T)^(-1) and a finite-size scaling of the total magnetization according to M(L, T) = M̃{(1 - T/Tf(h))^(ν⊥) L} with ν⊥ = 1. Unlike the findings for critical wetting in the thin-film geometry of the Ising model, the Monte Carlo results for corner wetting are in very good agreement with the theoretical predictions.
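The simulation geometry can be reproduced in a few lines of Metropolis Monte Carlo: free boundaries with fields ±h on the edge rows meeting at two opposite corners. The sketch below starts from the expected two-domain configuration and checks that the corner magnetizations keep opposite signs below Tc; the values of L, T, h, and the sweep count are small illustrative choices, not the paper's production parameters.

```python
import math, random

def corner_field_ising(L=24, T=1.5, h=0.5, sweeps=400, seed=3):
    """Metropolis simulation of the corner-wetting geometry: free boundaries,
    field +h on the two edge rows meeting at the lower-left corner and -h on
    the two meeting at the upper-right corner (illustrative sketch)."""
    rng = random.Random(seed)
    # Start from the expected two-domain state to speed equilibration.
    s = [[1 if i + j < L else -1 for j in range(L)] for i in range(L)]
    def field(i, j):
        b = 0.0
        if i == 0 or j == 0:            # edges through the lower-left corner
            b += h
        if i == L - 1 or j == L - 1:    # edges through the upper-right corner
            b -= h
        return b
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = sum(s[x][y]
                     for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= x < L and 0 <= y < L)  # free boundary conditions
            dE = 2 * s[i][j] * (nn + field(i, j))   # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] *= -1
    return s

s = corner_field_ising()
# Block magnetizations near the two field corners (opposite signs below Tc):
m_lo = sum(s[i][j] for i in range(6) for j in range(6)) / 36
m_hi = sum(s[i][j] for i in range(18, 24) for j in range(18, 24)) / 36
```

Measuring the interface position distribution P(ℓ) and the width w on top of such configurations is then a matter of locating the sign-change along lines normal to the interface.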
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.
2013-06-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with the conservative tracers tritium (3H) and sodium-22 (22Na), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.
MATHEMATICAL MODEL FOR ACCESS MODE OF CONTENTION-COLLISION CANCELLATION IN A STAR LAN
Institute of Scientific and Technical Information of China (English)
Lu Zhaoyi; Sun Lijun
2004-01-01
The I-type system model of CCCAM (Contention-Collision Cancellation Access Mode) is studied through mathematical modelling and simulation. There are two innovations: (1) in the account; (2) the times at which customers depart after having been served successfully are chosen as the embedded points, whereby the "free period" is introduced in a natural way. The mathematical modelling and analysis results in this paper are therefore significant for the application of wired star LANs and wireless star LANs.
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
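The zero-inflated Beta sampling model can be sketched directly: a point mass at zero mixed with a Beta density. The mixture weight and shape parameters below are illustrative, not the posterior estimates of the paper.

```python
import random

def sample_zi_beta(pi0, a, b, n, seed=11):
    """Draw n variates from a zero-inflated Beta: with probability pi0
    return exactly 0, otherwise a Beta(a, b) variate. Parameters are
    illustrative placeholders."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < pi0 else rng.betavariate(a, b)
            for _ in range(n)]

draws = sample_zi_beta(pi0=0.3, a=2.0, b=5.0, n=50000)
frac_zero = sum(d == 0.0 for d in draws) / len(draws)          # ~ pi0
mean_nonzero = (sum(draws) / len(draws)) / (1 - frac_zero)     # ~ a/(a+b)
```

In the paper's mixed-model setting, pi0 and the Beta shape parameters would depend on event-level covariates and random effects rather than being fixed constants.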
Models of galaxy collisions in Stephan's quintet and other interacting systems
Hwang, Jeong-Sun
2010-12-01
This dissertation describes numerical studies of three interacting galaxy systems. First, hydrodynamical models of the collisions in the famous compact galaxy group, Stephan's Quintet, were constructed to investigate the dynamical interaction history and evolution of the intergalactic gas. It has been found that with a sequence of two-at-a-time collisions, most of the major morphological and kinematical features of the group were well reproduced in the models. The models suggest the two long tails extending from NGC 7319 toward NGC 7320c may be formed simultaneously from a strong collisional encounter between the two galaxies, resulting in a thinner and denser inner tail than the outer one. The tails then also run parallel to each other as observed. The model results support the idea that the group-wide shock detected in multi-wavelength observations between NGC 7319 and 7318b and the starburst region north of NGC 7318b are triggered by the current high-speed collision between NGC 7318b and the intergalactic gas. It is expected that other compact groups containing rich extended features like Stephan's Quintet can be modeled in similar ways, and that sequences of two-at-a-time collisions will be the general rule. The second set of hydrodynamical simulations were performed to model the peculiar galaxy pair, Arp 285. This system possesses a series of star-forming complexes in an unusual tail-like feature extending out perpendicular to the disk of the northern galaxy. Several conceptual ideas for the origin of the tail-like feature were examined. The models suggest that the bridge material falling into the gravitational potential of the northern disk overshoots the disk; as more bridge material streams into the region, compression drives star formation. This work on star-formation in the pile-up region can be extended to the studies of the formation of tidal dwarf galaxies or globular clusters. Thirdly, the development of spiral waves was studied with numerical models
Institute of Scientific and Technical Information of China (English)
SUN Zhu; LIU Pu-Hu
2008-01-01
The final-state particle multiplicity distributions in high-energy nucleus-nucleus collisions are described by two different sub-distributions contributed by a single nucleon. The Monte Carlo results calculated from the two sub-distributions and the participant-spectator model are compared and found to be in agreement with the experimental data of Au-Au collisions at √s = 130A GeV and Pb-Pb collisions at 158A GeV.
Monte Carlo based verification of a beam model used in a treatment planning system
Wieslander, E.; Knöös, T.
2008-02-01
Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on available information from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leafs which are complemented with a backup diaphragm. The characteristics of the electron beam, i.e., energy and spot size, impinging on the target have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, thus a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful to understand the limitations of the model in the TPS and provide knowledge for further improvements of the TPS source modelling.
Study of dispersion forces with quantum Monte Carlo: toward a continuum model for solvation.
Amovilli, Claudio; Floris, Franca Maria
2015-05-28
We present a general method to compute the dispersion interaction energy that, starting from London's interpretation, is based on the measure of the electronic electric field fluctuations, evaluated on electronic configurations sampled by quantum Monte Carlo. A damped electric field was considered in order to avoid divergence in the variance. Dispersion atom-atom C6 van der Waals coefficients were computed by coupling electric field fluctuations with static dipole polarizabilities. The dipole polarizability was evaluated at the diffusion Monte Carlo level by studying the response of the system to a constant external electric field. We extended the method to the calculation of the dispersion contribution to the free energy of solvation in the framework of the polarizable continuum model. We performed test calculations on pairs of some atomic systems. We considered He in the ground and low-lying excited states and Ne in the ground state, and obtained good agreement with literature data. We also made calculations on He, Ne, and F(-) in water as the solvent. The resulting dispersion contribution to the free energy of solvation shows the reliability of the method illustrated here.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter
2013-05-01
The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, although not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as "spreaders" or "sponges" and orders the timescales of progression from site to site. In light of recent experimental evidence highlighting the potential significance of self-seeding of primary tumors, we use a Markov chain Monte Carlo (MCMC) approach, based on large autopsy data sets, to quantify the stochastic, systemic, and often multidirectional aspects of cancer progression. We quantify three types of multidirectional mechanisms of progression: (i) self-seeding of the primary tumor, (ii) reseeding of the primary tumor from a metastatic site (primary reseeding), and (iii) reseeding of metastatic tumors (metastasis reseeding). The model shows that the combined characteristics of the primary and the first metastatic site to which it spreads largely determine the future pathways and timescales of systemic disease.
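A reduced version of such a Markov-chain progression model: random walks over a handful of anatomical sites, counting pathways that return to the primary tumor (self-seeding). The transition probabilities below are invented for illustration; the paper estimates them from large autopsy data sets.

```python
import random

# Hypothetical one-step spread probabilities between anatomical sites.
P = {
    "primary": {"primary": 0.10, "lymph": 0.50, "lung": 0.25, "bone": 0.15},
    "lymph":   {"primary": 0.05, "lymph": 0.45, "lung": 0.30, "bone": 0.20},
    "lung":    {"primary": 0.08, "lymph": 0.12, "lung": 0.60, "bone": 0.20},
    "bone":    {"primary": 0.02, "lymph": 0.08, "lung": 0.20, "bone": 0.70},
}

def reseed_fraction(steps=3, n_walks=20000, seed=5):
    """Fraction of simulated spread pathways that return to the primary
    tumor (self-seeding) within `steps` transitions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        site = "primary"
        for _ in range(steps):
            sites = list(P[site])
            site = rng.choices(sites, [P[site][s] for s in sites])[0]
            if site == "primary":          # self-seeding event
                hits += 1
                break
    return hits / n_walks

frac = reseed_fraction()
```

Classifying sites as "spreaders" or "sponges" then amounts to comparing each site's outgoing versus incoming transition probabilities in the estimated matrix.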
A virtual source method for Monte Carlo simulation of Gamma Knife Model C
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)
2016-05-15
The Monte Carlo simulation method has been used for dosimetry of radiation treatment. Monte Carlo simulation determines the paths and doses of particles using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, it is necessary to increase the simulation time to reduce the statistical uncertainty. When generating the particles from the cobalt source in a simulation, many particles are cut off, so it takes a long time to simulate accurately. For efficiency, we generated a virtual source that has the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with the simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.
SU-E-T-754: Three-Dimensional Patient Modeling Using Photogrammetry for Collision Avoidance
Energy Technology Data Exchange (ETDEWEB)
Popple, R; Cardan, R [Univ Alabama Birmingham, Birmingham, AL (United States)
2015-06-15
Purpose: To evaluate photogrammetry for creating a three-dimensional patient model. Methods: A mannequin was configured on the couch of a CT scanner to simulate a patient setup using an indexed positioning device. A CT fiducial was placed on the indexed CT table-overlay at the reference index position. Two-dimensional photogrammetry targets were placed on the table in known positions. A digital SLR camera was used to obtain 27 images from different positions around the CT table. The images were imported into a commercial photogrammetry package and a 3D model constructed. Each photogrammetry target was identified on 2 to 5 images. The CT DICOM metadata and the position of the CT fiducial were used to calculate the coordinates of the photogrammetry targets in the CT image frame of reference. The coordinates were transferred to the photogrammetry software to orient the 3D model. The mannequin setup was transferred to the treatment couch of a linear accelerator and positioned at isocenter using in-room lasers. The treatment couch coordinates were noted and compared with prediction. The collision-free regions were measured over the full range of gantry and table motion and were compared with predictions obtained using a general-purpose polygon interference algorithm. Results: The reconstructed 3D model consisted of 180000 triangles. The differences between the predicted and measured couch positions were 5 mm, 1 mm, and 1 mm for longitudinal, lateral, and vertical, respectively. The collision prediction tested 64620 gantry-table combinations in 11.1 seconds. The accuracy was 96.5%, with false positive and negative results occurring at the boundaries of the collision space. Conclusion: Photogrammetry can be used as a tool for collision avoidance during treatment planning. The results indicate that a buffer zone is necessary to avoid false negatives at the boundary of the collision-free zone. Testing with human patients is underway. Research partially supported by a grant
Directory of Open Access Journals (Sweden)
Alhassid Y.
2014-04-01
Full Text Available The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets.
Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H
2014-01-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using its contribution degree and sum all effects. Application to practical models confirms that, at the 5% significance level, there is no difference between results obtained from quantitative relations and results obtained by the proposed method.
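The propagation-and-combination idea can be sketched generically. Everything concrete below (the "demand" factor, its landmark pairs, the 0.7/0.3 contribution degrees) is a hypothetical example, not taken from the paper:

```python
import random

def propagate(value, landmarks):
    """Map a sampled factor value to a qualitative effect level (-1, 0, +1)
    according to which landmark-divided range it falls in (illustrative rule)."""
    lo, hi = landmarks
    if value < lo:
        return -1
    if value > hi:
        return 1
    return 0

def simulate(n_trials=10000, seed=1):
    """Monte Carlo estimate of the combined effect on a destination node."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_trials):
        demand = random.gauss(100, 15)        # sampled source business factor
        e1 = propagate(demand, (90, 110))     # effect along arc 1
        e2 = propagate(demand, (80, 120))     # effect along arc 2
        # combine arc effects weighted by their contribution degrees
        total += 0.7 * e1 + 0.3 * e2
    return total / n_trials

mean_effect = simulate()
```

Repeating the simulation yields the statistical values (mean, spread of the combined effect) used to evaluate a scenario.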
Huang, Guanghui; Wan, Jianping; Chen, Hui
2013-02-01
Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by the maximum likelihood estimator (MLE). However, the EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. In this paper, a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters, with a genetic algorithm designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and the DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and it is found that the results based on the DMF are more accurate than those given by the EKF with respect to mean absolute error.
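For intuition, a Monte Carlo (particle) filter tracks the state density with weighted samples rather than the Gaussian approximation the EKF makes. The following is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model; it is a generic sketch, not the paper's DMF, and the model `x_t = 0.8 x_{t-1} + noise`, `y_t = x_t + noise` is invented for illustration:

```python
import math
import random

def bootstrap_filter(obs, n_particles=500, seed=0):
    """Minimal bootstrap particle filter: propagate particles through the
    state equation, weight by the observation density, resample."""
    rng = random.Random(seed)
    particles = [rng.gauss(0, 1) for _ in range(n_particles)]
    estimates = []
    for y in obs:
        # propagate through the (assumed) state equation
        particles = [0.8 * x + rng.gauss(0, 0.5) for x in particles]
        # weight by the observation density N(y; x, 1)
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # multinomial resampling
        particles = rng.choices(particles, weights=probs, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

est = bootstrap_filter([1.0, 1.2, 0.9, 1.1])
```

Because the filter represents the full density by samples, it remains valid when the state or observation equations are nonlinear, which is exactly where the EKF degrades.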
Energy Technology Data Exchange (ETDEWEB)
Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the tensor train format for this purpose. The performance of the approach is demonstrated on a first-principles-based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
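The curse of dimensionality arises because a lattice of N binary sites has a 2^N-dimensional state space. A tiny sketch (two non-interacting adsorption/desorption sites with made-up rates, not the paper's CO-oxidation model) builds the full master-equation generator explicitly and checks the factorized stationary distribution:

```python
from itertools import product

def generator(n_sites=2, k_ads=1.0, k_des=0.5):
    """Master-equation generator Q (rate matrix) for n independent
    adsorption/desorption sites; a state is a tuple of 0/1 occupancies."""
    states = list(product((0, 1), repeat=n_sites))
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    Q = [[0.0] * n for _ in range(n)]
    for s in states:
        i = index[s]
        for site in range(n_sites):
            flipped = list(s)
            flipped[site] = 1 - flipped[site]
            j = index[tuple(flipped)]
            rate = k_ads if s[site] == 0 else k_des
            Q[i][j] += rate   # transition i -> j
            Q[i][i] -= rate   # diagonal keeps row sums at zero
    return states, Q

states, Q = generator()
# For non-interacting sites the stationary law factorizes per site,
# with occupation probability p = k_ads / (k_ads + k_des) = 2/3 here.
p = 1.0 / 1.5
pi = []
for s in states:
    prob = 1.0
    for occupied in s:
        prob *= p if occupied else 1.0 - p
    pi.append(prob)
# stationarity: pi Q = 0 (global balance at every state)
residual = max(abs(sum(pi[i] * Q[i][j] for i in range(len(states))))
               for j in range(len(states)))
```

Storing Q explicitly scales as 4^N, which is why low-rank representations such as the tensor train format (or sampling via kinetic Monte Carlo) become necessary for realistic lattices.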
A Monte Carlo method for critical systems in infinite volume: the planar Ising model
Herdeiro, Victor
2016-01-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
Monte Carlo method for critical systems in infinite volume: The planar Ising model.
Herdeiro, Victor; Doyon, Benjamin
2016-10-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly
Directory of Open Access Journals (Sweden)
Oettingen Mikołaj
2017-01-01
Full Text Available The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands extensive research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of numerical models of the thorium-lead (Th-Pb) fuel assembly for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous-energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.
Monte Carlo study of Lefschetz thimble structure in one-dimensional Thirring model at finite density
Fujii, Hirotsugu; Kikukawa, Yoshio
2015-01-01
We consider the one-dimensional massive Thirring model formulated on the lattice with staggered fermions and an auxiliary compact vector (link) field, which is exactly solvable and shows a phase transition as the fermion-number chemical potential increases: a crossover at finite temperature and a first-order transition at zero temperature. We complexify its path integration on Lefschetz thimbles and examine its phase transition by hybrid Monte Carlo simulations on the single dominant thimble. We observe a discrepancy between the numerical and exact results in the crossover region for small inverse coupling $\beta$ and/or large lattice size $L$, while they are in good agreement in the lower- and higher-density regions. We also observe that the discrepancy persists in the continuum limit at fixed finite temperature and becomes more significant toward the low-temperature limit. This numerical result is consistent with our analytical study of the model's thimble structure. And these results imply...
Hasenfratz, Anna
2010-01-01
Strongly coupled gauge systems with many fermions are important in many phenomenological models. I use the 2-lattice matching Monte Carlo renormalization group method to study the fixed-point structure and critical indices of SU(3) gauge models with 8 and 12 flavors of fundamental fermions. With an improved renormalization group block transformation I am able to connect the perturbative and confining regimes of the N_f=8 flavor system, thus verifying its QCD-like nature. With N_f=12 flavors the data favor the existence of an infrared fixed point and conformal phase, though the results are also consistent with very slow walking. I measure the anomalous mass dimension in both systems at several gauge couplings and find values barely different from the free-field value.
Measurement and Monte Carlo modeling of the spatial response of scintillation screens
Energy Technology Data Exchange (ETDEWEB)
Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)
2007-11-01
In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the Geant4 toolkit, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. A global stochastic optimization algorithm based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.
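The localized random search idea can be sketched generically: perturb the current best parameters with progressively smaller random steps and keep only improvements. The quadratic `loss` below is a stand-in for the mismatch between simulated and measured spatial response, and the two parameters (with their target values) are hypothetical placeholders for the optical scattering and absorption coefficients:

```python
import random

def localized_random_search(loss, x0, step=0.5, iters=2000, seed=3):
    """Minimize `loss` by perturbing the current best point with Gaussian
    steps whose scale shrinks over time (localizing the search); accept
    only improvements."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), loss(x0)
    for k in range(iters):
        scale = step * (1.0 - k / iters) + 1e-3   # slowly localize the search
        cand = [x + rng.gauss(0, scale) for x in best_x]
        f = loss(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f

# hypothetical fit: recover a (scattering, absorption) pair from a misfit
target = (2.0, 0.3)
loss = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
x, f = localized_random_search(loss, [0.0, 0.0])
```

In the article's setting, evaluating `loss` means running a Geant4 simulation per candidate, so keeping the number of iterations small matters far more than in this toy.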
Monte Carlo Markovian modeling of modal competition in dual-wavelength semiconductor lasers
Chusseau, Laurent; Philippe, Fabrice; Jean-Marie, Alain
2014-03-01
Monte Carlo Markovian models of a dual-mode semiconductor laser with quantum well (QW) or quantum dot (QD) active regions are proposed. Accounting for carriers and photons as particles that may exchange energy in the course of time allows an ab initio description of laser dynamics such as mode competition and intrinsic laser noise. We used these models to evaluate the stability of the dual-mode regime when laser characteristics are varied: mode gains and losses, non-radiative recombination rates, intraband relaxation time, capture time in the QD, transfer of excitation between QDs via the wetting layer, etc. As a major result, a possible steady-state dual-mode regime is predicted for specially designed QD semiconductor lasers, thereby acting as a CW microwave or terahertz-beating source, whereas it does not occur for QW lasers.
A New Algorithm for Self-Consistent 3-D Modeling of Collisions in Dusty Debris Disks
Stark, Christopher C
2009-01-01
We present a new "collisional grooming" algorithm that enables us to model images of debris disks where the collision time is less than the Poynting Robertson time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in 3 dimensions. The algorithm can be run on a single processor in ~1 hour. Our preliminary models of disks with resonant ring structures caused by terrestrial mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how Poynting-Ro...
Živković, Jelena V; Trutić, Nataša V; Veselinović, Jovana B; Nikolić, Goran M; Veselinović, Aleksandar M
2015-09-01
The Monte Carlo method was used for QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors. The first QSAR model was developed for a series of 74 3-anilino-4-arylmaleimide derivatives. The second QSAR model was developed for a series of 177 maleimide derivatives. QSAR models were calculated with the representation of the molecular structure by the simplified molecular input-line entry system. Two splits have been examined: one split into the training and test set for the first QSAR model, and one split into the training, test and validation set for the second. The statistical quality of the developed models is very good. The calculated model for 3-anilino-4-arylmaleimide derivatives had the following statistical parameters: r(2)=0.8617 for the training set; r(2)=0.8659 and r(m)(2)=0.7361 for the test set. The calculated model for maleimide derivatives had the following statistical parameters: r(2)=0.9435 for the training set; r(2)=0.9262 and r(m)(2)=0.8199 for the test set; and r(2)=0.8418, r(av)(m)(2)=0.7469 and ∆r(m)(2)=0.1476 for the validation set. Structural indicators considered as molecular fragments responsible for the increase and decrease in the inhibition activity have been defined. The computer-aided design of new potential glycogen synthase kinase-3β inhibitors has been presented by using the defined structural alerts.
Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.
Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M
2012-01-01
Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate, showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
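The linear-rate deposition-dissolution surface process has a compact KMC realization. The following Gillespie-style sketch (a stand-in with assumed rates, not the authors' hybrid code) samples exponential waiting times between events; its mean coverage should fluctuate around the deterministic value k_dep/(k_dep + k_dis):

```python
import random

def kmc_deposition(k_dep=1.0, k_dis=0.5, n_sites=200, t_end=50.0, seed=7):
    """Gillespie-type KMC: empty sites deposit at rate k_dep, occupied
    sites dissolve at rate k_dis (linear-rate model). Returns final coverage."""
    rng = random.Random(seed)
    occupied = 0
    t = 0.0
    while t < t_end:
        r_dep = k_dep * (n_sites - occupied)   # total deposition propensity
        r_dis = k_dis * occupied               # total dissolution propensity
        total = r_dep + r_dis
        t += rng.expovariate(total)            # exponential waiting time
        if rng.random() < r_dep / total:
            occupied += 1
        else:
            occupied -= 1
    return occupied / n_sites

theta = kmc_deposition()
# deterministic mean-field coverage: k_dep / (k_dep + k_dis) = 2/3
```

The fluctuations of `theta` around 2/3 are exactly the stochastic component that the purely deterministic continuum model lacks, and that the hybrid scheme must pass into the bulk-diffusion domain.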
A Distributed and Deterministic TDMA Algorithm for Write-All-With-Collision Model
Arumugam, Mahesh
2008-01-01
Several self-stabilizing time division multiple access (TDMA) algorithms are proposed for sensor networks. In addition to providing a collision-free communication service, such algorithms enable the transformation of programs written in abstract models considered in distributed computing literature into a model consistent with sensor networks, i.e., write all with collision (WAC) model. Existing TDMA slot assignment algorithms have one or more of the following properties: (i) compute slots using a randomized algorithm, (ii) assume that the topology is known upfront, and/or (iii) assign slots sequentially. If these algorithms are used to transform abstract programs into programs in WAC model then the transformed programs are probabilistically correct, do not allow the addition of new nodes, and/or converge in a sequential fashion. In this paper, we propose a self-stabilizing deterministic TDMA algorithm where a sensor is aware of only its neighbors. We show that the slots are assigned to the sensors in a concu...
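The core slot-assignment constraint is that no two nodes within two hops may share a slot, since their transmissions would collide at a common receiver. A centralized greedy pass over a known topology illustrates just the coloring idea (the paper's contribution is a distributed, self-stabilizing version where each sensor knows only its neighbors; the 5-node line network below is invented for illustration):

```python
def assign_slots(adjacency):
    """Greedy distance-2 slot assignment: give each node the smallest TDMA
    slot not already used within its 2-hop neighborhood."""
    slots = {}
    for node in sorted(adjacency):
        forbidden = set()
        for nb in adjacency[node]:
            if nb in slots:                    # 1-hop neighbors
                forbidden.add(slots[nb])
            for nb2 in adjacency[nb]:          # 2-hop neighbors
                if nb2 in slots:
                    forbidden.add(slots[nb2])
        slot = 0
        while slot in forbidden:
            slot += 1
        slots[node] = slot
    return slots

# simple 5-node line network 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
slots = assign_slots(adj)
```

Note how slots are reused beyond two hops (nodes 0 and 3, nodes 1 and 4), which keeps the TDMA frame short even in large networks.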
Farah, J; Martinetti, F; Sayah, R; Lacoste, V; Donadille, L; Trompier, F; Nauraye, C; De Marzi, L; Vabre, I; Delacroix, S; Hérault, J; Clairand, I
2014-06-07
Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Sensitive differences between experimental measurements and simulations were nonetheless observed especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.
Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.
2014-06-01
Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Sensitive differences between experimental measurements and simulations were nonetheless observed especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.
PACIAE 2.0: An Updated Parton and Hadron Cascade Model (Program) for Relativistic Nuclear Collisions
Institute of Scientific and Technical Information of China (English)
SA; Ben-hao; ZHOU; Dai-mei; YAN; Yu-liang; LI; Xiao-mei; FENG; Sheng-qing; DONG; Bao-guo; CAI; Xu
2012-01-01
We have updated the parton and hadron cascade model PACIAE, previously based on JETSET 6.4 and PYTHIA 5.7, for relativistic nuclear collisions; the updated version is referred to as PACIAE 2.0. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering is discussed. The structures of the programs are briefly explained, and some calculated examples are compared with experimental data. It turns out that this model (program) works well.
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Monte Carlo model of the Studsvik BNCT clinical beam: description and validation.
Giusti, Valerio; Munck af Rosenschöld, Per M; Sköld, Kurt; Montagnini, Bruno; Capala, Jacek
2003-12-01
The neutron beam at the Studsvik facility for boron neutron capture therapy (BNCT) and the validation of the related computational model developed for the MCNP-4B Monte Carlo code are presented. Several measurements performed at the epithermal neutron port used for clinical trials have been made in order to validate the Monte Carlo computational model. The good general agreement between the MCNP calculations and the experimental results has provided an adequate check of the calculation procedure. In particular, at the nominal reactor power of 1 MW, the calculated in-air epithermal neutron flux in the energy interval between 0.4 eV and 10 keV is 3.24 x 10(9) n cm(-2) s(-1) (+/- 1.2% 1 std. dev.) while the measured value is 3.30 x 10(9) n cm(-2) s(-1) (+/- 5.0% 1 std. dev.). Furthermore, the calculated in-phantom thermal neutron flux, equal to 6.43 x 10(9) n cm(-2) s(-1) (+/- 1.0% 1 std. dev.), and the corresponding measured value of 6.33 x 10(9) n cm(-2) s(-1) (+/- 5.3% 1 std. dev.) agree within their respective uncertainties. The only statistically significant disagreement is a discrepancy of 39% between the MCNP calculations of the in-air photon kerma and the corresponding experimental value. Despite this, a quite acceptable overall in-phantom beam performance was obtained, with a maximum value of the therapeutic ratio (the ratio between the local tumor dose and the maximum healthy-tissue dose) equal to 6.7. The described MCNP model of the Studsvik facility has been deemed adequate to evaluate further improvements in the beam design as well as to plan experimental work.
A Coarse-Grained DNA Model Parameterized from Atomistic Simulations by Inverse Monte Carlo
Directory of Open Access Journals (Sweden)
Nikolay Korolev
2014-05-01
Full Text Available Computer modeling of very large biomolecular systems, such as long DNA polyelectrolytes or protein-DNA complexes like chromatin, cannot reach all-atom resolution in the foreseeable future, and this necessitates the development of coarse-grained (CG) approximations. DNA is both highly charged and a mechanically rigid semi-flexible polymer, and adequate DNA modeling requires a correct description of both its structural stiffness and salt-dependent electrostatic forces. Here, we present a novel CG model of DNA that approximates the DNA polymer as a chain of 5-bead units. Each unit represents two DNA base pairs, with one central bead for the bases and pentose moieties and four others for the phosphate groups. Charges and intra- and inter-molecular force-field potentials for the CG DNA model were calculated using the inverse Monte Carlo method from all-atom molecular dynamics (MD) simulations of 22 bp DNA oligonucleotides. The CG model was tested by performing dielectric-continuum Langevin MD simulations of a 200 bp double-helix DNA in solutions of monovalent salt with explicit ions. Excellent agreement with experimental data was obtained for the dependence of the DNA persistence length on salt concentration in the range 0.1-100 mM. The new CG DNA model is suitable for modeling various biomolecular systems with an adequate description of electrostatic and mechanical properties.
A Monte Carlo model of hot electron trapping and detrapping in SiO2
Kamocsai, R. L.; Porod, W.
1991-02-01
High-field stressing and oxide degradation of SiO2 are studied using a microscopic model of electron heating and charge trapping and detrapping. Hot electrons lead to a charge buildup in the oxide according to the dynamic trapping-detrapping model by Nissan-Cohen and co-workers [Y. Nissan-Cohen, J. Shappir, D. Frohman-Bentchkowsky, J. Appl. Phys. 58, 2252 (1985)]. Detrapping events are modeled as trap-to-band impact ionization processes initiated by high energy conduction electrons. The detailed electronic distribution function obtained from Monte Carlo transport simulations is utilized for the determination of the detrapping rates. We apply our microscopic model to the calculation of the flat-band voltage shift in silicon dioxide as a function of the electric field, and we show that our model is able to reproduce the experimental results. We also compare these results to the predictions of the empirical trapping-detrapping model which assumes a heuristic detrapping cross section. Our microscopic theory accounts for the nonlocal nature of impact ionization which leads to a dark space close to the injecting cathode, which is unaccounted for in the empirical model.
Backbone exponents of the two-dimensional q-state Potts model: a Monte Carlo investigation.
Deng, Youjin; Blöte, Henk W J; Nienhuis, Bernard
2004-02-01
We determine the backbone exponent X(b) of several critical and tricritical q-state Potts models in two dimensions. The critical systems include the bond percolation, the Ising, the q=2-sqrt[3], 3, and 4 state Potts, and the Baxter-Wu model, and the tricritical ones include the q=1 Potts model and the Blume-Capel model. For this purpose, we formulate several efficient Monte Carlo methods and sample the probability P2 of a pair of points connected via at least two independent paths. Finite-size-scaling analysis of P2 yields X(b) as 0.3566(2), 0.2696(3), 0.2105(3), and 0.127(4) for the critical q=1, 2, 3, and 4 state Potts models, respectively. At tricriticality, we obtain X(b)=0.0520(3) and 0.0753(6) for the q=1 and 2 Potts models, respectively. For the critical q-->0 Potts model it is derived that X(b)=3/4. From a scaling argument, we find that, at tricriticality, X(b) reduces to the magnetic exponent, as confirmed by the numerical results.
Application of JAERI quantum molecular dynamics model for collisions of heavy nuclei
Directory of Open Access Journals (Sweden)
Ogawa Tatsuhiko
2016-01-01
Full Text Available The quantum molecular dynamics (QMD) model incorporated into the general-purpose radiation transport code PHITS was revised for accurate prediction of fragment yields in peripheral collisions. For more accurate simulation of peripheral collisions, the stability of the nuclei in their ground state was improved and the algorithm to reject invalid events was modified. In-medium corrections on nucleon-nucleon cross sections were also considered. To clarify the effect of these improvements on the fragmentation of heavy nuclei, the new QMD model coupled with a statistical decay model was used to calculate fragment production cross sections for Ag and Au targets, which were compared with data from earlier measurements. It is shown that the revised version predicts cross sections more accurately.
Modeling chiral criticality and its consequences for heavy-ion collisions
Almási, Gábor András; Redlich, Krzysztof
2016-01-01
We explore the critical fluctuations near the chiral critical endpoint (CEP) in a chiral effective model and discuss possible signals of the CEP, recently explored experimentally in nuclear collisions. Particular attention is paid to the dependence of such signals on the location of the phase boundary and the CEP relative to the chemical freeze-out conditions in nuclear collisions. We argue that in effective models, standard freeze-out fits to heavy-ion data should not be used directly. Instead, the relevant quantities should be examined on lines in the phase diagram that are defined self-consistently, within the framework of the model. We discuss possible choices for such an approach.
Relativistic Brownian motion: from a microscopic binary collision model to the Langevin equation.
Dunkel, Jörn; Hänggi, Peter
2006-11-01
The Langevin equation (LE) for the one-dimensional relativistic Brownian motion is derived from a microscopic collision model. The model assumes that a heavy pointlike Brownian particle interacts with the lighter heat bath particles via elastic hard-core collisions. First, the commonly known, nonrelativistic LE is deduced from this model, by taking into account the nonrelativistic conservation laws for momentum and kinetic energy. Subsequently, this procedure is generalized to the relativistic case. There, it is found that the relativistic stochastic force is still delta correlated (white noise) but no longer corresponds to a Gaussian white noise process. Explicit results for the friction and momentum-space diffusion coefficients are presented and discussed.
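The elastic hard-core collision rule underlying such a model follows directly from the nonrelativistic conservation of momentum and kinetic energy; a minimal sketch for one heavy Brownian particle meeting one bath particle in 1-D:

```python
def elastic_collision(M, m, V, v):
    """Post-collision velocities for a 1-D elastic hard-core collision
    between a heavy particle (mass M, velocity V) and a bath particle
    (mass m, velocity v). Momentum and kinetic energy are both conserved."""
    V_new = ((M - m) * V + 2 * m * v) / (M + m)
    v_new = ((m - M) * v + 2 * M * V) / (M + m)
    return V_new, v_new

# a heavy particle at rest struck by a light bath particle (assumed values)
V1, v1 = elastic_collision(M=10.0, m=1.0, V=0.0, v=5.0)
```

Iterating this update over a stream of bath particles with thermally distributed velocities is what generates the effective friction and noise terms of the Langevin equation; the relativistic case replaces these update rules with their relativistic counterparts.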
Model for fast, nonadiabatic collisions between alkali atoms and diatomic molecules
Hickman, A. P.
1980-11-01
Equations for collisions involving two potential surfaces are presented in the impact parameter approximation. In this approximation, a rectilinear classical trajectory is assumed for the translational motion, leading to a time-dependent Schroedinger equation for the remaining degrees of freedom. Model potentials are considered for collisions of alkali atoms with diatomic molecules that lead to a particularly simple form of the final equations. Using the Magnus approximation, these equations are solved for parameters chosen to model the process Cs + O2 → Cs+ + O2-, and total cross sections for ion-pair formation are obtained as a function of energy. The results exhibit oscillations that correspond qualitatively to those seen in recent measurements. In addition, the model predicts that the oscillations will become less pronounced as the initial vibrational level of O2 is increased.
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
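The Monte Carlo parameter-sampling step can be sketched generically: draw parameter sets from prior ranges, run the forward model, and retain the "behavioral" sets whose output stays close to the tracer observations. Everything concrete below is hypothetical: `toy_transport` is a stand-in exponential-decay model, not OTIS, and its two parameters merely mimic a dispersion/storage pair:

```python
import math
import random

def toy_transport(D, A, t):
    """Stand-in forward model: exponential tracer decay whose rate mixes a
    'dispersion' parameter D and a 'storage' parameter A (illustrative only)."""
    return math.exp(-(D + 0.5 * A) * t)

def monte_carlo_uncertainty(observed, times, n_draws=5000, tol=0.05, seed=11):
    """Uniformly sample parameters and keep draws whose simulation stays
    within `tol` of every observation (GLUE-style behavioral screening)."""
    rng = random.Random(seed)
    behavioral = []
    for _ in range(n_draws):
        D, A = rng.uniform(0, 1), rng.uniform(0, 1)
        err = max(abs(toy_transport(D, A, t) - o)
                  for t, o in zip(times, observed))
        if err < tol:
            behavioral.append((D, A))
    return behavioral

times = [0.5, 1.0, 2.0]
truth = [toy_transport(0.3, 0.4, t) for t in times]   # synthetic observations
kept = monte_carlo_uncertainty(truth, times)
```

The spread of the retained `(D, A)` pairs is the uncertainty estimate: here many different combinations fit equally well (only D + 0.5A is constrained), which is precisely the parameter non-identifiability the authors warn invalidates physical interpretation of individual TSM parameters.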
Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation
Directory of Open Access Journals (Sweden)
H. Machguth
2008-12-01
Full Text Available By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tuned to observed mass balance for the investigated time period and its robustness was tested by comparing observed and modelled mass balance over 11 years, yielding very small deviations. Both systematic and random uncertainties are assigned to twelve input parameters and their respective values estimated from the literature or from available meteorological data sets. The calculated overall uncertainty in the model output is dominated by systematic errors and amounts to 0.7 m w.e. or approximately 10% of total melt over the investigated time span. In order to provide a first order estimate on variability in uncertainty depending on the quality of input data, we conducted a further experiment, calculating overall uncertainty for different levels of uncertainty in measured global radiation and air temperature. Our results show that the output of a well calibrated model is subject to considerable uncertainties, in particular when applied for extrapolation in time and space where systematic errors are likely to be an important issue.
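The distinction drawn above between systematic and random input errors can be sketched directly: a systematic offset is drawn once per Monte Carlo run and applied to every time step, while a random perturbation is redrawn at each step. The melt model, parameter values, and error magnitudes below are simplified placeholders, not the actual energy-balance model.

```python
import random, statistics

def melt_model(temps, rads):
    """Toy degree-day + radiation melt model (a stand-in for the
    intermediate-complexity energy balance model)."""
    return sum(max(0.0, 0.004 * t + 0.0001 * r) for t, r in zip(temps, rads))

def mc_uncertainty(temps, rads, n_runs=1000,
                   sys_t=1.0, rnd_t=0.5, sys_r=20.0, rnd_r=10.0):
    """Propagate systematic (per-run) and random (per-step) input errors
    through the model; returns mean and spread of cumulative melt."""
    random.seed(0)
    totals = []
    for _ in range(n_runs):
        dt = random.gauss(0.0, sys_t)   # systematic offsets: drawn once per run
        dr = random.gauss(0.0, sys_r)
        t_pert = [t + dt + random.gauss(0.0, rnd_t) for t in temps]  # random errors
        r_pert = [r + dr + random.gauss(0.0, rnd_r) for r in rads]
        totals.append(melt_model(t_pert, r_pert))
    return statistics.mean(totals), statistics.stdev(totals)

temps = [2.0] * 400   # 400 daily mean temperatures, deg C (illustrative)
rads = [150.0] * 400  # daily global radiation, W m^-2 (illustrative)
mean_melt, sigma = mc_uncertainty(temps, rads)
```

Because the systematic offsets act coherently over all 400 steps while the random ones average out, the output spread is dominated by the systematic terms, mirroring the finding in the abstract.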
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.
Monte Carlo modeling of cavity imaging in pure iron using back-scatter electron scanning microscopy
Yan, Qiang; Gigax, Jonathan; Chen, Di; Garner, F. A.; Shao, Lin
2016-11-01
Backscattered electrons (BSE) in a scanning electron microscope (SEM) can produce images of subsurface cavity distributions as a nondestructive characterization technique. Monte Carlo simulations were performed to understand the mechanism of void imaging and to identify key parameters in optimizing void resolution. The modeling explores an iron target of different thicknesses, electron beams of different energies, beam sizes, and scan pitch, evaluated for voids of different sizes and depths below the surface. The results show that the void image contrast is primarily caused by discontinuity of energy spectra of backscattered electrons, due to increased outward path lengths for those electrons which penetrate voids and are backscattered at deeper depths. Size resolution of voids at specific depths, and maximum detection depth of specific voids sizes are derived as a function of electron beam energy. The results are important for image optimization and data extraction.
Macroion solutions in the cell model studied by field theory and Monte Carlo simulations.
Lue, Leo; Linse, Per
2011-12-14
Aqueous solutions of charged spherical macroions with variable dielectric permittivity and their associated counterions are examined within the cell model using a field theory and Monte Carlo simulations. The field theory is based on separation of fields into short- and long-wavelength terms, which are subjected to different statistical-mechanical treatments. The simulations were performed by using a new, accurate, and fast algorithm for numerical evaluation of the electrostatic polarization interaction. The field theory provides counterion distributions outside a macroion in good agreement with the simulation results over the full range from weak to strong electrostatic coupling. A low-dielectric macroion leads to a displacement of the counterions away from the macroion.
Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene
Smith, Dominik
2013-01-01
In this work, results are presented of Hybrid-Monte-Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\\alpha$ of the system crosses above a certain threshold $\\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made and it is established that $\\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.
Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research
Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.
2002-01-01
Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.
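As the abstract stresses, the intuition behind MCMC needs no extraordinary mathematical sophistication. A minimal random-walk Metropolis sampler for a single survival probability (binomial likelihood, flat prior) illustrates the idea; the data values here are hypothetical and the example is far simpler than the hierarchical kittiwake model.

```python
import random, math

def log_post(p, y, n):
    """Log-posterior for a survival probability p with a flat prior:
    binomial likelihood, y survivors out of n marked birds."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return y * math.log(p) + (n - y) * math.log(1.0 - p)

def metropolis(y, n, steps=20000, width=0.1):
    """Random-walk Metropolis: propose a nearby value, accept with
    probability min(1, posterior ratio)."""
    random.seed(42)
    p, lp = 0.5, log_post(0.5, y, n)
    chain = []
    for _ in range(steps):
        prop = p + random.uniform(-width, width)  # symmetric proposal
        lq = log_post(prop, y, n)
        if math.log(random.random()) < lq - lp:   # accept/reject step
            p, lp = prop, lq
        chain.append(p)
    return chain

chain = metropolis(y=30, n=40)          # hypothetical data: 30 of 40 survived
burned = chain[5000:]                   # discard burn-in
estimate = sum(burned) / len(burned)    # posterior mean, ~ (30+1)/(40+2)
```

Software such as BUGS automates exactly this kind of sampling for far richer hierarchical models.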
Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models
Peixoto, Tiago P
2014-01-01
We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where $N$ is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.
FPGA Hardware Acceleration of Monte Carlo Simulations for the Ising Model
Ortega-Zamorano, Francisco; Cannas, Sergio A; Jerez, José M; Franco, Leonardo
2016-01-01
A two-dimensional Ising model with nearest-neighbor ferromagnetic interactions is implemented in a Field Programmable Gate Array (FPGA) board. Extensive Monte Carlo simulations were carried out using an efficient hardware representation of individual spins and a combined global-local LFSR random number generator. Consistent results regarding the descriptive properties of magnetic systems, such as energy, magnetization, and susceptibility, are obtained, while a speed-up factor of approximately 6 times is achieved in comparison to previous FPGA-based published works and almost $10^4$ times in comparison to a standard CPU simulation. A detailed description of the logic design used is given together with a careful analysis of the quality of the random number generator used. The obtained results confirm the potential of FPGAs for analyzing the statistical mechanics of magnetic systems.
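The algorithm that the FPGA implements in hardware, single-spin-flip Metropolis on a periodic 2D lattice, can be sketched in software as follows; the lattice size, couplings, and sweep counts are arbitrary illustrative choices, and no LFSR generator is modeled.

```python
import random, math

def ising_metropolis(L=16, beta=0.2, sweeps=200):
    """Single-spin-flip Metropolis on an L x L periodic Ising lattice
    (J = 1, reduced units); returns |magnetization| per spin."""
    random.seed(7)
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb  # energy change if spin (i, j) is flipped
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[i][j] = -s[i][j]
    return abs(sum(sum(row) for row in s)) / (L * L)

m_hot = ising_metropolis(beta=0.2)   # weak coupling: disordered phase
m_cold = ising_metropolis(beta=0.6)  # strong coupling: tends to order
```

The hardware gains reported in the abstract come from evaluating many such spin updates in parallel, which the sequential sketch above cannot do.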
A Thermodynamic Model for Square-well Chain Fluid: Theory and Monte Carlo Simulation
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
A thermodynamic model for freely jointed square-well chain fluids was developed based on the thermodynamic perturbation theory of Barker-Henderson, Zhang, and Wertheim. In this derivation, Zhang's expressions for square-well monomers, improved from the Barker-Henderson compressibility approximation, were adopted as the reference fluid, and Wertheim's polymerization method was used to obtain the free-energy term due to bond connectivity. An analytic expression for the Helmholtz free energy of the square-well chain fluids was obtained. The expression, without adjustable parameters, yields thermodynamically consistent predictions of the compressibility factors, residual internal energy, and constant-volume heat capacity for dimer, 4-mer, 8-mer, and 16-mer square-well fluids. The results are in good agreement with Monte Carlo simulation. To obtain the required MC data for the residual internal energy and the constant-volume heat capacity, NVT MC simulations were performed for these square-well chain fluids.
World-line quantum Monte Carlo algorithm for a one-dimensional Bose model
Energy Technology Data Exchange (ETDEWEB)
Batrouni, G.G. (Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142 (United States)); Scalettar, R.T. (Physics Department, University of California, Davis, California 95616 (United States))
1992-10-01
In this paper we provide a detailed description of the ground-state phase diagram of interacting, disordered bosons on a lattice. We describe a quantum Monte Carlo algorithm that incorporates in an efficient manner the required bosonic wave-function symmetry. We consider the ordered case, where we evaluate the compressibility gap and show the lowest three Mott insulating lobes. We obtain the critical ratio of interaction strength to hopping at which the onset of superfluidity occurs for the first lobe, and the critical exponents {nu} and {ital z}. For the disordered model we show the effect of randomness on the phase diagram and the superfluid correlations. We also measure the response of the superfluid density, {rho}{sub {ital s}}, to external perturbations. This provides an unambiguous characterization of the recently observed Bose and Anderson glass phases.
Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model
Energy Technology Data Exchange (ETDEWEB)
Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)
2016-04-15
The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They demonstrate the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei, irrespective of the considered interaction. (orig.)
Charged-particle rapidity density in Au+Au collisions in a quark combination model
Shao, Feng-Lan; Yao, Tao; Xie, Qu-Bing
2007-03-01
Rapidity/pseudorapidity densities for charged particles and their centrality, rapidity, and energy dependence in Au+Au collisions at the Relativistic Heavy Ion Collider are studied in a quark combination model. Using a Gaussian-type rapidity distribution for constituent quarks as a result of Landau hydrodynamic evolution, the data at √sNN = 130 and 200 GeV at various centralities in the full pseudorapidity range are well described, and the charged-particle multiplicities are reproduced as functions of the number of participants. The energy dependence of the shape of the dNch/dη distribution is also described at the collision energies √sNN = 200, 130, and 62.4 GeV in central collisions with the same parameter values, except at 19.6 GeV. The calculated rapidity distributions and yields for charged pions and kaons in central Au+Au collisions at √sNN = 200 GeV are compared with experimental data of the BRAHMS Collaboration.
Sadi, M; Dabir, B
2003-01-01
The Monte Carlo method is one of the most powerful techniques for modeling processes such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. Because Monte Carlo calculations are based on random number generation and reaction probability determinations, the number of algorithm repetitions (the selected volume of reactor used for modeling, which determines the number of initial molecules) is very important. In this paper, the initiation reaction was considered alone, and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results unless the number of molecules is large enough, because otherwise the selected volume is not representative of the whole system.
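The point about the number of initial molecules can be illustrated with a stochastic simulation of a first-order initiator decomposition, I → 2R, compared against the analytic survival fraction e^(-kt); the rate constant and time grid below are arbitrary, and the scheme is a generic fixed-step stochastic sketch rather than the authors' algorithm.

```python
import random, math

def simulate_initiation(n_molecules, k=0.5, t_end=2.0, dt=0.01):
    """Stochastic first-order decomposition I -> 2R: each surviving
    initiator molecule decomposes in a time step with probability k*dt.
    Returns the simulated fraction of initiator remaining at t_end."""
    random.seed(3)
    remaining = n_molecules
    for _ in range(int(t_end / dt)):
        remaining -= sum(1 for _ in range(remaining) if random.random() < k * dt)
    return remaining / n_molecules

exact = math.exp(-0.5 * 2.0)  # analytic fraction remaining, e^(-k t)
err_small = abs(simulate_initiation(100) - exact)    # few molecules: noisy
err_large = abs(simulate_initiation(20000) - exact)  # many molecules: accurate
```

The statistical error of the simulated fraction shrinks roughly as 1/sqrt(N), which is the quantitative form of the abstract's conclusion about ensemble size.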
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints on the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation, and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities, and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to the results of Durda et al. (2007). The resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing the fragment distributions, suitable for Monte-Carlo collisional models. We also characterize the velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to the SPH simulations.
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm(2) field size and dose profiles for a 40 × 40 cm(2) field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm(2) to 30 × 30 cm(2) . The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
An effective model for entropy deposition in high-energy pp, pA, and AA collisions
Moreland, J Scott; Bass, Steffen A
2014-01-01
We introduce TRENTO, a new initial condition model for high-energy nuclear collisions based on eikonal entropy deposition via a "reduced thickness" function. The model simultaneously predicts the shapes of experimental proton-proton, proton-nucleus, and nucleus-nucleus multiplicity distributions, and generates nucleus-nucleus eccentricity harmonics consistent with experimental flow constraints. In addition, the model provides a possible resolution to the "knee" puzzle in ultra-central uranium-uranium collisions.
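The "reduced thickness" at the heart of the model is a generalized mean of the two participant thickness functions, T_R = ((T_A^p + T_B^p)/2)^(1/p), with the exponent p interpolating between familiar limits. A minimal sketch, with illustrative thickness values:

```python
def reduced_thickness(ta, tb, p):
    """Generalized mean of the two participant thicknesses: p = 1 gives
    the arithmetic mean (wounded-nucleon-like), p = 0 the geometric
    mean, and p = -1 the harmonic mean."""
    if p == 0:
        return (ta * tb) ** 0.5
    return ((ta ** p + tb ** p) / 2.0) ** (1.0 / p)

# entropy deposited where a thick and a thin nucleus overlap
arith = reduced_thickness(4.0, 1.0, 1)    # = 2.5
geom = reduced_thickness(4.0, 1.0, 0)     # = 2.0
harm = reduced_thickness(4.0, 1.0, -1)    # = 1.6
```

Smaller p suppresses deposition in asymmetric overlap regions, which is how a single parameter lets the model span the range of multiplicity shapes in pp, pA, and AA systems.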
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. The results were collected and compared to experimental data in the literature; both models showed good agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
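A canonical-ensemble (NVT) Metropolis simulation with the L-J potential, of the kind the abstract describes, can be sketched as below. Particle number, density, and temperature are illustrative reduced-unit choices; no cutoff tail corrections or neighbor lists are included, and the full energy is recomputed per move for clarity rather than speed.

```python
import random, math

def lj_energy(r2):
    """Lennard-Jones pair energy in reduced units from a squared distance."""
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def total_energy(pos, box):
    """Total energy with minimum-image periodic boundaries."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = sum(((pos[i][k] - pos[j][k] + box / 2) % box - box / 2) ** 2
                     for k in range(3))
            e += lj_energy(r2)
    return e

def nvt_metropolis(n=27, box=4.0, temp=2.0, steps=1000, dmax=0.2):
    """Canonical-ensemble Metropolis sampling of an L-J fluid (reduced units)."""
    random.seed(11)
    g = round(n ** (1 / 3))  # start from a simple cubic lattice, no overlaps
    pos = [[(a + 0.5) * box / g, (b + 0.5) * box / g, (c + 0.5) * box / g]
           for a in range(g) for b in range(g) for c in range(g)][:n]
    e, beta = total_energy(pos, box), 1.0 / temp
    for _ in range(steps):
        i = random.randrange(n)
        old = pos[i][:]
        pos[i] = [(x + random.uniform(-dmax, dmax)) % box for x in old]
        e_new = total_energy(pos, box)
        if math.log(random.random()) < -beta * (e_new - e):
            e = e_new        # accept the trial displacement
        else:
            pos[i] = old     # reject and restore
    return e / n

e_per_particle = nvt_metropolis()
```

Pressure and phase-coexistence estimates as in the abstract require additional machinery (virial accumulation, Gibbs-ensemble volume and swap moves) on top of this basic move set.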
Worm Monte Carlo study of the honeycomb-lattice loop model
Energy Technology Data Exchange (ETDEWEB)
Liu Qingquan, E-mail: liuqq@mail.ustc.edu.c [Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027 (China); Deng Youjin, E-mail: yjdeng@ustc.edu.c [Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027 (China); Garoni, Timothy M., E-mail: t.garoni@ms.unimelb.edu.a [ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia)
2011-05-11
We present a Markov-chain Monte Carlo algorithm of worm type that correctly simulates the O(n) loop model on any (finite and connected) bipartite cubic graph, for any real n>0, and any edge weight, including the fully-packed limit of infinite edge weight. Furthermore, we prove rigorously that the algorithm is ergodic and has the correct stationary distribution. We emphasize that by using known exact mappings when n=2, this algorithm can be used to simulate a number of zero-temperature Potts antiferromagnets for which the Wang-Swendsen-Kotecky cluster algorithm is non-ergodic, including the 3-state model on the kagome lattice and the 4-state model on the triangular lattice. We then use this worm algorithm to perform a systematic study of the honeycomb-lattice loop model as a function of n{<=}2, on the critical line and in the densely-packed and fully-packed phases. By comparing our numerical results with Coulomb gas theory, we identify a set of exact expressions for scaling exponents governing some fundamental geometric and dynamic observables. In particular, we show that for all n{<=}2, the scaling of a certain return time in the worm dynamics is governed by the magnetic dimension of the loop model, thus providing a concrete dynamical interpretation of this exponent. The case n>2 is also considered, and we confirm the existence of a phase transition in the 3-state Potts universality class that was recently observed via numerical transfer matrix calculations.
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing.
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis; therefore, a projection operator is applied to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies of the resulting basis states are evaluated and selectively adopted. The symmetry is discussed, and a method of decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces was devised. The calculation procedure is illustrated with the example of the {sup 50}Mn nucleus. The level structure of {sup 48}Cr, for which exact energies are known, can be calculated with absolute energies accurate to within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell model calculations of the {sup 56}Ni nucleus structure using the interactions of nuclear models are reported. (K.I.)
Modeling of composite latex particle morphology by off-lattice Monte Carlo simulation.
Duda, Yurko; Vázquez, Flavio
2005-02-01
Composite latex particles have found a great range of applications, such as paint resins, varnishes, water-borne adhesives, impact modifiers, etc. The high-performance properties of this kind of material may be explained in terms of a synergistic combination of two different polymers (usually a rubber and a thermoplastic). A great variety of composite latex particles with very different morphologies may be obtained by two-step emulsion polymerization processes. The formation of a specific particle morphology depends on the chemical and physical nature of the monomers used during the synthesis, the process temperature, the reaction initiator, the surfactants, etc. Only a few models have been proposed to explain the appearance of the composite particle morphologies. These models have been based on the change of the interfacial energies during the synthesis. In this work, we present a new three-component model: a polymer blend (flexible and rigid chain particles) is dispersed in water, forming spherical cavities. Monte Carlo simulations of the model in two dimensions are used to determine the density distribution of chains and water molecules inside the suspended particle. This approach allows us to study the dependence of the morphology of the composite latex particles on the relative hydrophilicity and flexibility of the chain molecules, as well as on their density and composition. It has been shown that our simple model is capable of reproducing the main features of the various morphologies observed in synthesis experiments.
Behera, Nirbhay K; Naik, Bharati; Nandi, Basanta K; Pani, Tanmay
2016-01-01
The charged-particle multiplicity distribution and the transverse energy distribution measured in heavy-ion collisions at top RHIC and LHC energies are described using a two-component model approach based on the convolution of a Monte Carlo Glauber model with the Weibull model for particle production. The model successfully describes the multiplicity and transverse energy distributions of minimum bias collision data for a wide range of energies. We also propose that the Weibull-Glauber model can be used to determine centrality classes in heavy-ion collisions, as an alternative to the conventional Negative Binomial distribution for particle production.
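The two-component convolution can be sketched directly: a Glauber calculation supplies Npart and Ncoll, a mix of the two sets the number of particle-producing sources, and each source emits a Weibull-distributed multiplicity. The Glauber outputs, mixing fraction, and Weibull parameters below are hypothetical placeholders, not fitted values from the paper.

```python
import random

def event_multiplicity(npart, ncoll, x=0.12, shape=1.5, scale=10.0):
    """Two-component ansatz: the number of particle-producing sources is
    (1 - x) * Npart / 2 + x * Ncoll, and each source contributes a
    Weibull-distributed number of particles (illustrative parameters)."""
    n_sources = round((1 - x) * npart / 2 + x * ncoll)
    return sum(random.weibullvariate(scale, shape) for _ in range(n_sources))

random.seed(5)
# hypothetical Glauber output for a mid-central heavy-ion event
mult = event_multiplicity(npart=100, ncoll=300)
```

Repeating this over many minimum-bias Glauber events builds up the full multiplicity distribution, whose quantiles then define the centrality classes mentioned in the abstract.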
Energy Technology Data Exchange (ETDEWEB)
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)
2015-11-15
Purpose: Significant dosimetric benefits have been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances to the corresponding model predictions. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimensions of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
Corner wetting in the two-dimensional Ising model: Monte Carlo results
Energy Technology Data Exchange (ETDEWEB)
Albano, E V [INIFTA, Universidad Nacional de La Plata, CC 16 Suc. 4, 1900 La Plata (Argentina); Virgiliis, A De [INIFTA, Universidad Nacional de La Plata, CC 16 Suc. 4, 1900 La Plata (Argentina); Mueller, M [Institut fuer Physik, Johannes Gutenberg Universitaet, Staudinger Weg 7, D-55099 Mainz (Germany); Binder, K [Institut fuer Physik, Johannes Gutenberg Universitaet, Staudinger Weg 7, D-55099 Mainz (Germany)
2003-01-29
Square L×L (L=24-128) Ising lattices with nearest neighbour ferromagnetic exchange are considered using free boundary conditions at which boundary magnetic fields are applied, i.e., at the two boundary rows ending at the lower left corner a field +h acts, while at the two boundary rows ending at the upper right corner a field -h acts. For temperatures T less than the critical temperature T{sub c} of the bulk, this boundary condition leads to the formation of two domains with opposite orientations of the magnetization direction, separated by an interface which for T larger than the filling transition temperature T{sub f}(h) runs from the upper left corner to the lower right corner, while for T
Energy Technology Data Exchange (ETDEWEB)
Springer, H K; Miller, W O; Levatin, J L; Pertica, A J; Olivier, S S
2010-09-06
Satellite collision debris poses risks to existing space assets and future space missions. Predictive models of debris generated from these hypervelocity collisions are critical for developing accurate space situational awareness tools and effective mitigation strategies. Hypervelocity collisions involve complex phenomena that span several time and length scales. We have developed a satellite collision debris modeling approach consisting of a Lagrangian hydrocode enriched with smooth particle hydrodynamics (SPH), advanced material failure models, detailed satellite mesh models, and massively parallel computers. These computational studies enable us to investigate the influence of satellite center-of-mass (CM) overlap and orientation, relative velocity, and material composition on the size, velocity, and material type distributions of collision debris. We have applied our debris modeling capability to the recent Iridium 33-Cosmos 2251 collision event. While the relative velocity was well understood in this event, the degree of satellite CM overlap and orientation was ill-defined. In our simulations, we varied the collision CM overlap and orientation of the satellites from nearly maximum overlap to partial overlap on the outermost extents of the satellites (i.e., solar panels and gravity boom). As expected, we found that with increased satellite overlap, the overall debris cloud mass and momentum (transfer) increases, the average debris size decreases, and the debris velocity increases. The largest predicted debris can also provide insight into which satellite components were further removed from the impact location. A significant fraction of the momentum transfer is imparted to the smallest debris (< 1-5 mm, dependent on mesh resolution), especially in large CM overlap simulations. While the inclusion of the smallest debris is critical to enforcing mass and momentum conservation in hydrocode simulations, there seems to be relatively little interest in their
Lüdde, H. J.; Achenbach, A.; Kalkbrenner, T.; Jankowiak, H. C.; Kirchner, T.
2016-05-01
A recently introduced model to account for geometric screening corrections in an independent-atom-model description of ion-molecule collisions is applied to proton collisions from amino acids and DNA and RNA nucleobases. The correction coefficients are obtained from using a pixel counting method (PCM) for the exact calculation of the effective cross sectional area that emerges when the molecular cross section is pictured as a structure of (overlapping) atomic cross sections. This structure varies with the relative orientation of the molecule with respect to the projectile beam direction and, accordingly, orientation-independent total cross sections are obtained from averaging the pixel count over many orientations. We present net capture and net ionization cross sections over wide ranges of impact energy and analyze the strength of the screening effect by comparing the PCM results with Bragg additivity rule cross sections and with experimental data where available. Work supported by NSERC, Canada.
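The pixel counting idea can be illustrated in 2D: rasterize the union of (overlapping) atomic disks and count covered pixels to get the effective cross sectional area. The atom positions and radii below are arbitrary illustration values, not taken from the paper.

```python
import numpy as np

def union_area_pixels(centers, radii, resolution=400):
    """Pixel-count the area of a union of disks (the 2D projection of
    overlapping atomic cross sections); centers and radii share units."""
    centers = np.asarray(centers, float)
    radii = np.asarray(radii, float)
    lo = (centers - radii[:, None]).min(axis=0)   # bounding box of all disks
    hi = (centers + radii[:, None]).max(axis=0)
    xs = np.linspace(lo[0], hi[0], resolution)
    ys = np.linspace(lo[1], hi[1], resolution)
    X, Y = np.meshgrid(xs, ys)
    covered = np.zeros_like(X, dtype=bool)
    for (cx, cy), r in zip(centers, radii):
        covered |= (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    pixel_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return covered.sum() * pixel_area

# two unit disks with centers 1.0 apart: union area is below 2*pi
a = union_area_pixels([(0.0, 0.0), (1.0, 0.0)], [1.0, 1.0])
print(a)
```

Averaging this count over many random molecular orientations, as the abstract describes, would yield the orientation-independent effective area.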
An Oriented-Eddy Collision Model for Turbulence Prediction
2007-06-15
…kinetic energy, K, and dissipation rate, E). There is also a hypothesized algebraic constitutive equation relating these two scalar quantities and the… elliptic relaxation (Durbin) have even expanded the predictive scope of these models. Nevertheless, it is well understood at this time, even by CFD users… Publisher, 1993. P.A. Durbin, Near-wall turbulence closure modeling without 'damping functions', Theoret. Comput. Fluid Dynamics 3, 1-13, 1991. W. C…
Diffenderfer, Eric S; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K; McDonough, James; Cengel, Keith A
2014-03-01
The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations.
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Energy Technology Data Exchange (ETDEWEB)
Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)
2012-06-15
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB® code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB® code was able to grow semi-realistic cell distributions (≈2 × 10{sup 8} cells in 1 cm{sup 3}) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
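A minimal sketch of growing a randomized cell distribution with overlap rejection; spherical cells and a simple pairwise distance test stand in for the paper's ellipsoids and eigenvalue overlap method, and all sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_cells(n_cells, box=100.0, r_mean=2.0, r_sd=0.3, max_tries=20000):
    """Place randomized spherical 'cells' in a cube, rejecting overlaps
    (a simplified stand-in for the ellipsoid eigenvalue overlap test)."""
    centers, radii = [], []
    tries = 0
    while len(centers) < n_cells and tries < max_tries:
        tries += 1
        r = max(0.5, rng.normal(r_mean, r_sd))    # randomized cell size
        c = rng.uniform(r, box - r, size=3)       # keep cell inside the box
        if all(np.linalg.norm(c - c2) >= r + r2
               for c2, r2 in zip(centers, radii)):
            centers.append(c)
            radii.append(r)
    return np.array(centers), np.array(radii)

centers, radii = grow_cells(200)
print(len(centers))
```

Scaling this naive O(n²) rejection test to the ~10{sup 8} cells reported in the abstract would require spatial hashing or a cell-list acceleration structure.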
Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method
Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.
2000-07-01
This work is concerned with dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and achieve its applications in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and follow with the application of one of them in the SIRS model. The working method chosen is based on the Poisson process where hierarchy of events, properly calculated waiting time between events, and independence of the events simulated, are the basic requirements. To verify the consistence of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted under and in accordance with aspects of the herd-immunity concept.
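The Poisson-process requirements named above (hierarchy of events, properly calculated waiting times, independent events) are what a Gillespie-style kernel provides. A minimal SIRS sketch with assumed rate constants:

```python
import numpy as np

rng = np.random.default_rng(2)

def sirs_gillespie(S, I, R, beta=0.3, gamma=0.1, xi=0.05, t_max=200.0):
    """Event-driven (Poisson-process) Monte Carlo for the SIRS model:
    infection S->I at rate beta*S*I/N, recovery I->R at rate gamma*I,
    loss of immunity R->S at rate xi*R (rate values assumed)."""
    N = S + I + R
    t = 0.0
    while t < t_max and I > 0:
        rates = np.array([beta * S * I / N, gamma * I, xi * R])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # waiting time between events
        event = rng.choice(3, p=rates / total)     # which event fires
        if event == 0:
            S, I = S - 1, I + 1
        elif event == 1:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
        # event hierarchy: rates are recomputed after every event
    return S, I, R

result = sirs_gillespie(990, 10, 0)
print(result)
```

Averaging many such realizations and comparing against a Runge-Kutta solution of the deterministic SIRS equations mirrors the consistency check described in the abstract.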
Critical Casimir force and its fluctuations in lattice spin models: exact and Monte Carlo results.
Dantchev, Daniel; Krech, Michael
2004-04-01
We present general arguments and construct a stress tensor operator for finite lattice spin models. The average value of this operator gives the Casimir force of the system close to the bulk critical temperature T(c). We verify our arguments via exact results for the force in the two-dimensional Ising model, the d-dimensional Gaussian model, and the mean spherical model with 2 < d < 4. Via Monte Carlo simulations for three-dimensional Ising, XY, and Heisenberg models we demonstrate that the standard deviation of the Casimir force F(C) in a slab geometry confining a critical substance in-between is k(b) T D(T) (A/a(d-1))(1/2), where A is the surface area of the plates, a is the lattice spacing, and D(T) is a slowly varying nonuniversal function of the temperature T. The numerical calculations demonstrate that at the critical temperature T(c) the force possesses a Gaussian distribution centered at the mean value of the force ⟨F(C)⟩ = k(b) T(c) (d-1)Delta/(L/a)(d), where L is the distance between the plates and Delta is the (universal) Casimir amplitude.
Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models
Binder, Kurt; Luijten, Erik
2001-04-01
A critical review is given of status and perspectives of Monte Carlo simulations that address bulk and interfacial phase transitions of ferromagnetic Ising models. First, some basic methodological aspects of these simulations are briefly summarized (single-spin flip vs. cluster algorithms, finite-size scaling concepts), and then the application of these techniques to the nearest-neighbor Ising model in d=3 and 5 dimensions is described, and a detailed comparison to theoretical predictions is made. In addition, the case of Ising models with a large but finite range of interaction and the crossover scaling from mean-field behavior to the Ising universality class are treated. If one considers instead a long-range interaction described by a power-law decay, new classes of critical behavior depending on the exponent of this power law become accessible, and a stringent test of the ε-expansion becomes possible. As a final type of crossover from mean-field type behavior to two-dimensional Ising behavior, the interface localization-delocalization transition of Ising films confined between “competing” walls is considered. This problem is still hampered by questions regarding the appropriate coarse-grained model for the fluctuating interface near a wall, which is the starting point for both this problem and the theory of critical wetting.
Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.
Edwards, Stephen; Schick, Daniel
2016-04-01
Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model.
A background error covariance model of significant wave height employing Monte Carlo simulation
Institute of Scientific and Technical Information of China (English)
GUO Yanyou; HOU Yijun; ZHANG Chunmei; YANG Jie
2012-01-01
The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under the assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropy and inhomogeneous errors.
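Monte Carlo estimation of a BEC can be sketched by drawing an ensemble of spatially correlated perturbations and forming their sample covariance. The 1D grid, Gaussian correlation shape, and length scale below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo estimate of a background error covariance: perturb a
# "true" SWH field at n grid points with spatially correlated noise,
# then form the sample covariance over the ensemble (all values toy).
n_points, n_members = 50, 500
x = np.linspace(0.0, 10.0, n_points)
L = 1.5                                           # correlation length (assumed)
C_true = np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)
chol = np.linalg.cholesky(C_true + 1e-10 * np.eye(n_points))
ensemble = (chol @ rng.standard_normal((n_points, n_members))).T
B = np.cov(ensemble, rowvar=False)                # estimated BEC
print(B.shape, float(B[0, 0]))
```

Conditioning the perturbation statistics on a flow variable such as the mean wave direction would be the analogue of the MWD-dependent BEC model proposed in the abstract.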
Modeling the Collision Phenomena of Ø11X19 Size Rolls
Directory of Open Access Journals (Sweden)
Tiberiu Manescu jr.
2011-09-01
This paper presents a numerical comparison, using dynamic modeling techniques, of the physical phenomena occurring in collisions between two rollers in a number of distinct situations: impact on the edge at angles of 0°, 10°, 20°, 30°, 40°, 50°, 60°, 70°, 80° and impact on the generator. These situations occur frequently in the manufacturing process of small cylindrical rollers.
Institute of Scientific and Technical Information of China (English)
李双; 冯笙琴
2012-01-01
The net-baryon number is essentially transported by valence quarks that probe the saturation regime in the target by multiple scattering. The net-baryon distributions, nuclear stopping power, and gluon saturation features in the SPS and RHIC energy regions are investigated by taking advantage of the gluon saturation model with geometric scaling. Predictions are made for the net-baryon rapidity distributions, mean rapidity loss, and gluon saturation features in central Pb + Pb collisions at the LHC.
Modeling and simulation for a new virtual-clock-based collision resolution algorithm
Institute of Scientific and Technical Information of China (English)
Yin Rupo; Cai Yunze; He Xing; Zhang Weidong; Xu Xiaoming
2006-01-01
Virtual time Ethernet is a multiple access protocol proposed to provide FCFS transmission service over the predominant Ethernet bus. It incorporates a novel message-rescheduling algorithm based on the virtual clock mechanism. By manipulating virtual clocks back up over a common virtual time axis and performing timely collision resolution, the algorithm guarantees the system's queuing strictness. The protocol is particularly modeled as a finite state machine and implemented using OPNET tools. Simulation studies prove its correctness and effectiveness.
A Habitat-based Wind-Wildlife Collision Model with Application to the Upper Great Plains Region
Energy Technology Data Exchange (ETDEWEB)
Forcey, Greg, M.
2012-08-28
Most previous studies on collision impacts at wind facilities have taken place at the site-specific level and have only examined small-scale influences on mortality. In this study, we examine landscape-level influences using a hierarchical spatial model combined with existing datasets and life history knowledge for: Horned Lark, Red-eyed Vireo, Mallard, American Avocet, Golden Eagle, Whooping Crane, red bat, silver-haired bat, and hoary bat. These species were modeled in the central United States within Bird Conservation Regions 11, 17, 18, and 19. For the bird species, we modeled bird abundance from existing datasets as a function of habitat variables known to be preferred by each species to develop a relative abundance prediction for each species. For bats, there are no existing abundance datasets, so we identified preferred habitat in the landscape for each species and assumed that greater amounts of preferred habitat would equate to greater abundance of bats. The abundance predictions for birds and bats were modeled with additional exposure factors known to influence collisions, such as visibility, wind, temperature, precipitation, topography, and behavior, to form a final mapped output of predicted collision risk within the study region. We reviewed published mortality studies from wind farms in our study region and collected data on reported mortality of our focal species to compare to our modeled predictions. We performed a sensitivity analysis evaluating model performance of 6 different scenarios where habitat and exposure factors were weighted differently. We compared the model performance in each scenario by evaluating observed data vs. our model predictions using Spearman's rank correlations. Horned Lark collision risk was predicted to be highest in the northwestern and west-central portions of the study region with lower risk predicted elsewhere. Red-eyed Vireo collision risk was predicted to be the highest in the eastern portions of the study region and in
Modelling of the Internal Mechanics in Ship Collisions
DEFF Research Database (Denmark)
Paik, Jeom Kee; Pedersen, Preben Terndrup
1996-01-01
on the stiffness and the strength is considered as well. In order to include the coupling effects between local and global failure of the structure, the usual non-linear finite-element technique is applied. In order to deal with the gap and contact conditions between the striking and the struck ships, gap/contact elements are employed. Dynamic effects are considered by inclusion of the influence of strain-rate sensitivity in the material model. On the basis of the theory a computer program has been written. The procedure is verified by a comparison with experimental results obtained from test models of double...
Macroscopic Model for Head-On Binary Droplet Collisions in a Gaseous Medium
Li, Jie
2016-11-01
In this Letter, coalescence-bouncing transitions of head-on binary droplet collisions are predicted by a novel macroscopic model based entirely on fundamental laws of physics. By making use of the lubrication theory of Zhang and Law [Phys. Fluids 23, 042102 (2011)], we have modified the Navier-Stokes equations to accurately account for the rarefied nature of the interdroplet gas film. Through the disjoining pressure model, we have incorporated the intermolecular van der Waals forces. Our model does not use any adjustable (empirical) parameters. It therefore encompasses an extreme range of length scales (more than 5 orders of magnitude): from those of the external flow in excess of the droplet size (a few hundred μm) to the effective range of the van der Waals force around 10 nm. A state-of-the-art moving adaptive mesh method, capable of resolving all the relevant length scales, has been employed. Our numerical simulations are able to capture the coalescence-bouncing and bouncing-coalescence transitions that are observed as the collision intensity increases. The predicted transition Weber numbers for tetradecane and water droplet collisions at different pressures show good agreement with published experimental values. Our study also sheds new light on the roles of gas density, droplet size, and mean free path in the rupture of the gas film.
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2017-01-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (less than about 20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
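As a minimal illustration of model-space search by MCMC (the study itself uses Gibbs sampling over layered anisotropic models), here is a random-walk Metropolis sampler for a toy two-parameter problem; the data, likelihood, and tuning values are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy inverse problem: recover mean and spread of synthetic observations.
data = rng.normal(3.0, 0.5, size=40)

def log_post(mu, sigma):
    """Gaussian log-likelihood with a flat prior (improper, for brevity)."""
    if sigma <= 0:
        return -np.inf
    return -len(data) * np.log(sigma) - np.sum((data - mu) ** 2) / (2 * sigma ** 2)

chain, state = [], np.array([0.0, 1.0])
lp = log_post(*state)
for _ in range(5000):
    prop = state + rng.normal(0, 0.1, size=2)     # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        state, lp = prop, lp_prop
    chain.append(state.copy())
post = np.array(chain)[2000:]                     # discard burn-in
print(post[:, 0].mean(), post[:, 1].mean())
```

The posterior scatter of the chain, not just its mean, is what exposes the parameter trade-offs the abstract highlights.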
Nightingale, M.P.; Blöte , H.W.J.
1996-01-01
The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity transformation.
Ivantsov, Ilya; Ferraz, Alvaro; Kochetov, Evgenii
2016-01-01
We perform quantum Monte Carlo simulations of the itinerant-localized periodic Kondo-Heisenberg model for the underdoped cuprates to calculate the associated spin correlation functions. The strong electron correlations are shown to play a key role in the abrupt destruction of the quasi-long-range antiferromagnetic order in the lightly doped regime.
Quantum Monte Carlo study of the cooperative binding of NO2 to fragment models of carbon nanotubes
Lawson, John W.; Bauschlicher Jr., Charles W.; Toulouse, Julien; Filippi, Claudia; Umrigar, C.J.
2008-01-01
Previous calculations on model systems for the cooperative binding of two NO2 molecules to carbon nanotubes using density functional theory and second order Møller–Plesset perturbation theory gave results differing by 30 kcal/mol. Quantum Monte Carlo calculations are performed to study the role of e
Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model
Owens, Corina M.
2011-01-01
Numerous ways to meta-analyze single-case data have been proposed in the literature, however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…
A Monte-Carlo study for the critical exponents of the three-dimensional O(6) model
Loison, D.
1999-09-01
Using Wolff's single-cluster Monte-Carlo update algorithm, the three-dimensional O(6)-Heisenberg model on a simple cubic lattice is simulated. With the help of finite size scaling we compute the critical exponents ν, β, γ and η. Our results agree with the field-theory predictions but not so well with the prediction of the series expansions.
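Wolff's single-cluster update is easiest to state for the Ising (O(1)) case; the O(6) version grows clusters after reflecting spins about a randomly chosen direction. An Ising sketch under that simplification, with an illustrative lattice size and coupling:

```python
import numpy as np

rng = np.random.default_rng(5)

def wolff_step(spins, beta):
    """One Wolff single-cluster update for the 2D Ising model on a
    periodic L x L lattice of +/-1 spins."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)             # bond-activation probability
    seed = (rng.integers(L), rng.integers(L))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L):
            if (ni, nj) not in cluster and spins[ni, nj] == s0 and rng.random() < p_add:
                cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in cluster:                          # flip the whole cluster
        spins[i, j] *= -1
    return len(cluster)

L = 16
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    wolff_step(spins, beta=0.44)                  # near the 2D critical point
print(abs(spins.sum()) / L ** 2)
```

Cluster updates like this beat single-spin flips near criticality because they largely eliminate critical slowing down, which is why they underpin finite-size-scaling estimates of ν, β, γ, and η.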
Hofstede, ter F.; Wedel, M.
1998-01-01
This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are
Energy Technology Data Exchange (ETDEWEB)
Tsige-Tamirat, H. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany)]. E-mail: tsige@irs.fzk.de; Fischer, U. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany); Carman, P.P. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Loughlin, M. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom)
2005-11-15
The paper describes the automatic generation of a JET 3D neutronics model from computer-aided design (CAD) data for Monte Carlo (MC) calculations. The applied method converts suitable CAD data into a representation appropriate for MC codes. The converted geometry is fully equivalent to the CAD geometry.
A Data-Based Approach for Modeling and Analysis of Vehicle Collision by LPV-ARMAX Models
Directory of Open Access Journals (Sweden)
Qiugang Lu
2013-01-01
Vehicle crash tests are considered the most direct and common approach to assessing vehicle crashworthiness. However, they suffer from the drawbacks of high experimental cost and huge time consumption. The establishment of a mathematical model of vehicle crash which can simplify the analysis process is therefore significantly attractive. In this paper, we present the application of the LPV-ARMAX model to simulate car-to-pole collisions with different initial impact velocities. The parameters of the LPV-ARMAX model are assumed to depend on the initial impact velocity. Compared with establishing a set of LTI models for vehicle crashes at various impact velocities, the LPV-ARMAX model is simpler and applicable to predicting the responses of new collision situations different from the ones used for identification. Finally, the predicted response is compared with the real test data, which shows the high fidelity of the LPV-ARMAX model.
Collision statistics in sheared inelastic hard spheres.
Bannerman, Marcus N; Green, Thomas E; Grassia, Paul; Lue, Leo
2009-04-01
The dynamics of sheared inelastic-hard-sphere systems is studied using nonequilibrium molecular-dynamics simulations and direct simulation Monte Carlo. In the molecular-dynamics simulations Lees-Edwards boundary conditions are used to impose the shear. The dimensions of the simulation box are chosen to ensure that the systems are homogeneous and that the shear is applied uniformly. Various system properties are monitored, including the one-particle velocity distribution, granular temperature, stress tensor, collision rates, and time between collisions. The one-particle velocity distribution is found to agree reasonably well with an anisotropic Gaussian distribution, with only a slight overpopulation of the high-velocity tails. The velocity distribution is strongly anisotropic, especially at lower densities and lower values of the coefficient of restitution, with the largest variance in the direction of shear. The density dependence of the compressibility factor of the sheared inelastic-hard-sphere system is quite similar to that of elastic-hard-sphere fluids. As the systems become more inelastic, the glancing collisions begin to dominate over more direct, head-on collisions. Examination of the distribution of the times between collisions indicates that the collisions experienced by the particles are strongly correlated in the highly inelastic systems. A comparison of the simulation data is made with direct Monte Carlo simulation of the Enskog equation. Results of the kinetic model of Montanero [J. Fluid Mech. 389, 391 (1999)] based on the Enskog equation are also included. In general, good agreement is found for high-density, weakly inelastic systems.
2016-01-01
Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data
Directory of Open Access Journals (Sweden)
Jeffrey A. Walker
2016-10-01
Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS
Calibration, characterisation and Monte Carlo modelling of a fast-UNCL
Energy Technology Data Exchange (ETDEWEB)
Tagziria, Hamid, E-mail: hamid.tagziria@jrc.ec.europa.eu [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Bagi, Janos; Peerani, Paolo [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Belian, Antony [Department of Safeguards, SGTS/TAU, IAEA Vienna Austria (Austria)
2012-09-21
This paper describes the calibration, characterisation and Monte Carlo modelling of a new IAEA Uranium Neutron Collar (UNCL) for LWR fuel, which can be operated in both passive and active modes. It can employ either 35 ³He tubes (in active configuration) or 44 tubes at 10 atm pressure (in its passive configuration) and thus can be operated in fast mode (with Cd liner) as its efficiency is higher than that of the standard UNCL. Furthermore, it has an adjustable internal cavity which allows the measurement of varying sizes of fuel assemblies such as WWER, PWR and BWR. It is intended to be used with Cd liners in active mode (with an AmLi interrogation source in place) by the inspectorate for the determination of the ²³⁵U content in fresh fuel assemblies, especially in cases where high concentrations of burnable poisons cause problems with accurate assays. A campaign of measurements has been carried out at the JRC Performance Laboratories (PERLA) in Ispra (Italy) using various radionuclide neutron sources (²⁵²Cf, ²⁴¹AmLi and PuGa) and our BWR and PWR reference assemblies, in order to calibrate and characterise the counter as well as assess its performance and determine its optimum operational parameters. Furthermore, the fast-UNCL has been extensively modelled at JRC using the Monte Carlo code MCNP-PTA, which simulates both the neutron transport and the coincidence electronics. The model has been validated using our measurements, which agreed well with calculations. The WWER1000 fuel assembly, for which there are no representative reference materials for an adequate calibration of the counter, has also been modelled and the response of the counter to this fuel assembly has been simulated. Subsequently, numerical calibration curves have been obtained for the above fuel assemblies in various modes (fast and thermal). The sensitivity of the counter to fuel rod substitution as well as other important aspects and the parameters of the fast
A Monte Carlo Method for Summing Modeled and Background Pollutant Concentrations.
Dhammapala, Ranil; Bowman, Clint; Schulte, Jill
2017-02-23
Air quality analyses for permitting new pollution sources often involve modeling the dispersion of pollutants using models like AERMOD. Representative background pollutant concentrations must be added to modeled concentrations to determine compliance with air quality standards. Summing 98th (or 99th) percentiles of two independent distributions that are unpaired in time overestimates air quality impacts and could needlessly burden sources with restrictive permit conditions. This problem is exacerbated when emissions and background concentrations peak during different seasons. Existing methods addressing this matter either require much input data, disregard source and background seasonality, or disregard the variability of the background by utilizing a single concentration for each season, month, hour-of-day, day-of-week or wind direction. The availability of representative background concentrations is another limitation. Here we report on work to improve permitting analyses, with the development of (1) daily gridded background concentrations interpolated from 12 km CMAQ forecasts and monitored data, where a two-step interpolation reproduced measured background concentrations to within 6.2%; and (2) a Monte Carlo (MC) method to combine AERMOD output and background concentrations while respecting their seasonality. The MC method randomly combines, with replacement, data from the same months, and calculates 1000 estimates of the 98th or 99th percentile. The design concentration of background + new source is the median of these 1000 estimates. We found that the AERMOD design value (DV) + background DV lay at the upper end of the distribution of these thousand 99th percentiles, while measured DVs were at the lower end. Our MC method sits between these two metrics and is sufficiently protective of public health in that it somewhat overestimates design concentrations. We also calculated probabilities of exceeding specified thresholds at each receptor, better informing
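The month-matched resampling described above can be sketched in a few lines. This is an illustrative reconstruction, not the agency's implementation; the synthetic data in the usage example are invented, with the modeled source peaking in summer and the background in winter.

```python
import numpy as np

def mc_design_value(modeled, background, months, q=99, n_draws=1000, seed=0):
    """Design concentration = median of `n_draws` Monte Carlo estimates of
    the q-th percentile of (modeled + background), where the two series are
    randomly paired, with replacement, within the same calendar month so
    that the seasonality of both distributions is respected."""
    rng = np.random.default_rng(seed)
    modeled = np.asarray(modeled, dtype=float)
    background = np.asarray(background, dtype=float)
    months = np.asarray(months)
    estimates = np.empty(n_draws)
    total = np.empty(modeled.size)
    for k in range(n_draws):
        for m in np.unique(months):
            idx = np.flatnonzero(months == m)
            # resample each series, with replacement, within the same month
            total[idx] = (rng.choice(modeled[idx], size=idx.size)
                          + rng.choice(background[idx], size=idx.size))
        estimates[k] = np.percentile(total, q)
    return float(np.median(estimates))
```

With anticorrelated seasonal peaks, this design value sits well below the naive sum of the two separate 99th percentiles, which is the point of the method.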
Bogaerts, A.; Gijbels, R.; Goedheer, W.
2001-01-01
An improved hybrid Monte Carlo-fluid model for electrons, argon ions and fast argon atoms, is presented for the rf Grimm-type glow discharge. In this new approach, all electrons, including the large slow electron group in the bulk plasma, are treated with the Monte Carlo model. The calculation
Zavyalov, Sergey; Zakharov, Vladimir
2016-04-01
A number of issues concerning Precambrian geodynamics still remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from the present-day values. In this work, we show results of numerical supercomputations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with various convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of basic lower crust, continental subduction and subsequent continental rock exhumation can take place. Therefore, the formation of ultra-high-pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer, and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to
Wilson, Robert H.; Dooley, Kathryn A.; Morris, Michael D.; Mycek, Mary-Ann
2009-02-01
Light-scattering spectroscopy has the potential to provide information about bone composition via a fiber-optic probe placed on the skin. In order to design efficient probes, one must understand the effect of all tissue layers on photon transport. To quantitatively understand the effect of overlying tissue layers on the detected bone Raman signal, a layered Monte Carlo model was modified for Raman scattering. The model incorporated the absorption and scattering properties of three overlying tissue layers (dermis, subdermis, muscle), as well as the underlying bone tissue. The attenuation of the collected bone Raman signal, predominantly due to elastic light scattering in the overlying tissue layers, affected the carbonate/phosphate (C/P) ratio by increasing the standard deviation of the computational result. Furthermore, the mean C/P ratio varied when the relative thicknesses of the layers were varied and the elastic scattering coefficient at the Raman scattering wavelength of carbonate was modeled to be different from that at the Raman scattering wavelength of phosphate. These results represent the first portion of a computational study designed to predict optimal probe geometry and help to analyze detected signal for Raman scattering experiments involving bone.
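The attenuating effect of overlying layers on a signal originating at depth can be illustrated with a much-simplified photon random walk: a single homogeneous overlayer, isotropic scattering, and no Raman physics, so this is only a caricature of the layered model in the abstract. All optical coefficients below are invented for illustration.

```python
import numpy as np

def fraction_reaching_depth(mu_s, mu_a, depth, n_photons=20_000, seed=0):
    """Random-walk Monte Carlo in a homogeneous overlying layer: returns
    the fraction of photons that penetrate to `depth` (e.g. the bone
    surface) before being absorbed or escaping back through the surface
    (z < 0). Coefficients are per unit length; scattering is isotropic."""
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a                 # total interaction coefficient
    albedo = mu_s / mu_t               # survival probability per event
    reached = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0               # launched straight down at z = 0
        while True:
            z += uz * rng.exponential(1.0 / mu_t)   # free path to next event
            if z >= depth:
                reached += 1
                break
            if z < 0.0:                # escaped back out of the tissue
                break
            if rng.random() > albedo:  # absorbed
                break
            uz = rng.uniform(-1.0, 1.0)  # isotropic scatter: cos(theta)
    return reached / n_photons
```

Thickening the overlayer sharply reduces the fraction of photons that ever reach the deep layer, which is the mechanism by which overlying tissue attenuates the collected bone signal.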
Monte Carlo modeling of multileaf collimators using the GEANT4 code
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)
2014-07-01
Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, those based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the production of the radiation beam by the Linac is simulated and the phase space is generated. The phase space contains information such as the energy, position and direction of millions of particles (photons, electrons, positrons). In the second part, the transport of particles (sampled from the phase space) in certain configurations of the irradiation field is simulated to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity-modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)
Tests of the modified Sigmund model of ion sputtering using Monte Carlo simulations
Hofsäss, Hans; Bradley, R. Mark
2015-05-01
Monte Carlo simulations are used to evaluate the Modified Sigmund Model of Sputtering. Simulations were carried out for a range of ion incidence angles and surface curvatures for different ion species, ion energies, and target materials. Sputter yields, moments of erosive crater functions, and the fraction of backscattered energy were determined. In accordance with the Modified Sigmund Model of Sputtering, we find that for sufficiently large incidence angles θ the curvature dependence of the erosion crater function tends to destabilize the solid surface along the projected direction of the incident ions. For the perpendicular direction, however, the curvature dependence always leads to a stabilizing contribution. The simulation results also show that, for larger values of θ, a significant fraction of the ions is backscattered, carrying off a substantial amount of the incident ion energy. This provides support for the basic idea behind the Modified Sigmund Model of Sputtering: that the incidence angle θ should be replaced by a larger angle Ψ to account for the reduced energy that is deposited in the solid for larger values of θ.
Two electric field Monte Carlo models of coherent backscattering of polarized light.
Doronin, Alexander; Radosevich, Andrew J; Backman, Vadim; Meglinski, Igor
2014-11-01
Modeling of coherent polarized light propagation in turbid scattering medium by the Monte Carlo method provides an ultimate understanding of coherent effects of multiple scattering, such as enhancement of coherent backscattering and peculiarities of laser speckle formation in dynamic light scattering (DLS) and optical coherence tomography (OCT) diagnostic modalities. In this report, we consider two major ways of modeling the coherent polarized light propagation in scattering tissue-like turbid media. The first approach is based on tracking transformations of the electric field along the ray propagation. The second one is developed in analogy to the iterative procedure of the solution of the Bethe-Salpeter equation. To achieve a higher accuracy in the results and to speed up the modeling, both codes utilize the implementation of parallel computing on NVIDIA Graphics Processing Units (GPUs) with Compute Unified Device Architecture (CUDA). We compare these two approaches through simulations of the enhancement of coherent backscattering of polarized light and evaluate the accuracy of each technique with the results of a known analytical solution. The advantages and disadvantages of each computational approach and their further developments are discussed. Both codes are available online and are ready for immediate use or download.
Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries
Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann
2011-07-01
There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
Matsumoto, T.
2007-09-01
Monte Carlo simulations are performed to evaluate depth-dose distributions for possible treatment of cancers by boron neutron capture therapy (BNCT). The ICRU computational model of ADAM & EVA was used as a phantom to simulate tumors at a depth of 5 cm in central regions of the lungs, liver and pancreas. Tumors of the prostate and osteosarcoma were also centered at the depth of 4.5 and 2.5 cm in the phantom models. The epithermal neutron beam from a research reactor was the primary neutron source for the MCNP calculation of the depth-dose distributions in those cancer models. For brain tumor irradiations, the whole-body dose was also evaluated. The MCNP simulations suggested that a lethal dose of 50 Gy to the tumors can be achieved without reaching the tolerance dose of 25 Gy to normal tissue. The whole-body phantom calculations also showed that the BNCT could be applied for brain tumors without significant damage to whole-body organs.
Energy Technology Data Exchange (ETDEWEB)
Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)
2017-06-15
In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
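The central point above, that the rare-event (minimal-cut-set) approximation overestimates risk when basic events have high failure probability while direct Monte Carlo sampling does not, can be shown with a two-event toy fault tree. The probabilities are invented and the tree is far simpler than a real multiunit PSA model.

```python
import numpy as np

# Toy fault tree TOP = A OR B, with (invented) high seismic failure
# probabilities where the rare-event approximation breaks down.
p_a, p_b = 0.9, 0.9
rng = np.random.default_rng(1)
n = 200_000
fail_a = rng.random(n) < p_a           # sample basic-event states
fail_b = rng.random(n) < p_b
top = fail_a | fail_b                  # evaluate the Boolean top event
mc_estimate = top.mean()               # direct MC; exact value is 1 - 0.1**2 = 0.99
rare_event_approx = p_a + p_b          # min-cut-set sum: 1.8, an impossible "probability"
```

The same sampling loop generalizes naturally to a multiunit site: each sample assigns states to shared and unit-specific basic events, every unit's top event is evaluated on that sample, and arbitrary accident-sequence combinations are counted without any truncation limit.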
Kumar, A; Chauhan, S
2017-03-08
Obesity is one of the most pressing health burdens in developed countries. One strategy to prevent obesity is the inhibition of the pancreatic lipase enzyme. The aim of this study was to build QSAR models for natural lipase inhibitors by using the Monte Carlo method. The molecular structures were represented by the simplified molecular input line entry system (SMILES) notation and by molecular graphs. Three splits, each comprising training, calibration and test sets, were examined and validated. The statistical quality of all the described models was very good. The best QSAR model showed the following statistical parameters: r² = 0.864 and Q² = 0.836 for the test set and r² = 0.824 and Q² = 0.819 for the validation set. Structural attributes increasing and decreasing the activity (expressed as pIC50) were also defined. Using the defined structural attributes, the design of new potential lipase inhibitors is also presented. Additionally, a molecular docking study was performed to determine the binding modes of the designed molecules.
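The Monte Carlo QSAR approach used in studies like this one (CORAL-style) can be caricatured as follows: each SMILES attribute receives a "correlation weight", the descriptor of a molecule is the sum of its attribute weights, and the weights are randomly perturbed, keeping moves that improve the training-set r². This toy version uses single characters as attributes and a tiny synthetic dataset; it is a sketch of the idea, not the software used in the paper.

```python
import numpy as np

def r2(pred, obs):
    """Squared Pearson correlation between predictions and observations."""
    return float(np.corrcoef(pred, obs)[0, 1] ** 2)

def mc_optimize_weights(smiles_list, activity, n_iter=2000, seed=7):
    """Toy CORAL-style descriptor: DCW(molecule) = sum of per-character
    correlation weights. Monte Carlo: perturb one weight at random and
    keep the move if the training-set r^2 improves."""
    rng = np.random.default_rng(seed)
    alphabet = sorted(set("".join(smiles_list)))
    counts = np.array([[s.count(c) for c in alphabet] for s in smiles_list],
                      dtype=float)
    act = np.asarray(activity, dtype=float)
    w = np.ones(len(alphabet))                    # initial weights
    best = r2(counts @ w, act)
    for _ in range(n_iter):
        trial = w.copy()
        trial[rng.integers(len(alphabet))] += rng.normal(0.0, 0.3)
        score = r2(counts @ trial, act)
        if score > best:                          # greedy accept on improvement
            w, best = trial, score
    return dict(zip(alphabet, w)), best
```

Real implementations add calibration and test sets precisely to guard against the overfitting such a greedy search invites.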
Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model
Directory of Open Access Journals (Sweden)
Davide Viganò
2016-01-01
Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, a high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run-time, since the pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and the reproducibility of performance represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro's equations. The code is coupled with a Monte Carlo algorithm to evaluate the statistics and propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up to reproduce a small-scale rocket motor, and a set of parametric investigations on uncertainty propagation across the ballistic model is discussed.
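Monte Carlo uncertainty propagation of this kind, sampling design parameters and pushing them through a ballistics model, can be sketched with the textbook steady-state chamber-pressure relation p_c = (rho_p * a * c* * K)^(1/(1-n)) in place of the paper's quasi-1D Shapiro model. All nominal values and tolerances below are invented for illustration, not taken from the paper.

```python
import numpy as np

def chamber_pressure(a, n, rho_p, c_star, K):
    """Steady-state internal-ballistics equilibrium pressure:
    p_c = (rho_p * a * c_star * K) ** (1 / (1 - n)),
    with K = A_b / A_t the burning-to-throat area ratio."""
    return (rho_p * a * c_star * K) ** (1.0 / (1.0 - n))

rng = np.random.default_rng(42)
N = 50_000
# Illustrative nominal values and 1-sigma tolerances (assumed, SI units)
a = rng.normal(1.8e-5, 0.05e-5, N)     # burn-rate coefficient
n = rng.normal(0.32, 0.01, N)          # burn-rate pressure exponent
rho = rng.normal(1750.0, 20.0, N)      # propellant density, kg/m^3
cstar = rng.normal(1500.0, 15.0, N)    # characteristic velocity, m/s
K = rng.normal(220.0, 5.0, N)          # area ratio A_b / A_t
pc = chamber_pressure(a, n, rho, cstar, K)      # propagated samples, Pa
pc_mean, pc_std = pc.mean(), pc.std()           # output statistics
```

Note how the exponent 1/(1-n) amplifies the relative uncertainty: a 1-sigma spread of only 0.01 in n contributes far more scatter to p_c than the same relative spread in any linear factor, which is why burn-rate characterization dominates this kind of budget.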
Monte Carlo renormalization: the triangular Ising model as a test case.
Guo, Wenan; Blöte, Henk W J; Ren, Zhiming
2005-04-01
We test the performance of the Monte Carlo renormalization method in the context of the Ising model on a triangular lattice. We apply a block-spin transformation which allows for an adjustable parameter so that the transformation can be optimized. This optimization purportedly brings the fixed point of the transformation to a location where the corrections to scaling vanish. To this purpose we determine corrections to scaling of the triangular Ising model with nearest- and next-nearest-neighbor interactions by means of transfer-matrix calculations and finite-size scaling. We find that the leading correction to scaling just vanishes for the nearest-neighbor model. However, the fixed point of the commonly used majority-rule block-spin transformation appears to lie well away from the nearest-neighbor critical point. This raises the question whether the majority rule is suitable as a renormalization transformation, because the standard assumptions of real-space renormalization imply that corrections to scaling vanish at the fixed point. We avoid this inconsistency by means of the optimized transformation which shifts the fixed point back to the vicinity of the nearest-neighbor critical Hamiltonian. The results of the optimized transformation in terms of the Ising critical exponents are more accurate than those obtained with the majority rule.
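The majority-rule block-spin transformation discussed here can be sketched for ±1 Ising spins. This toy version uses square b×b blocks on a rectangular array with ties broken at random; on the triangular lattice of the paper, blocks with an odd number of spins avoid ties altogether. It is an illustration of the transformation only, not of the optimized variant the authors construct.

```python
import numpy as np

def majority_rule(spins, b, rng):
    """Block-spin transformation: each b x b block of +-1 spins is replaced
    by the sign of its sum; zero sums (ties) are broken randomly."""
    L = spins.shape[0] // b * b                  # trim to a multiple of b
    Lb = L // b
    blocks = spins[:L, :L].reshape(Lb, b, Lb, b).sum(axis=(1, 3))
    out = np.sign(blocks)
    ties = out == 0
    out[ties] = rng.choice([-1, 1], size=int(ties.sum()))
    return out.astype(int)
```

Iterating such a transformation on equilibrium configurations and matching correlation functions across blocking levels is the core of the Monte Carlo renormalization procedure the abstract evaluates.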
Monte Carlo simulations for a Lotka-type model with reactant surface diffusion and interactions.
Zvejnieks, G; Kuzovkov, V N
2001-05-01
The standard Lotka-type model, which was introduced for the first time by Mai et al. [J. Phys. A 30, 4171 (1997)] for a simplified description of autocatalytic surface reactions, is generalized here for a case of mobile and energetically interacting reactants. The mathematical formalism is proposed for determining the dependence of transition rates on the interaction energy (and temperature) for the general mathematical model, and the Lotka-type model, in particular. By means of Monte Carlo computer simulations, we have studied the impact of diffusion (with and without energetic interactions between reactants) on oscillatory properties of the A+B-->2B reaction. The diffusion leads to a desynchronization of oscillations and a subsequent decrease of oscillation amplitude. The energetic interaction between reactants has a dual effect depending on the type of mobile reactants. In the limiting case of mobile reactants B the repulsion results in a decrease of amplitudes. However, these amplitudes increase if reactants A are mobile and repulse each other. A simplified interpretation of the obtained results is given.
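The oscillatory kinetics underlying such models can be illustrated with a well-mixed Gillespie (stochastic simulation algorithm) sketch of a classical Lotka scheme. This deliberately omits the lattice, reactant diffusion, and energetic interactions that are the subject of the paper; the rate constants are arbitrary.

```python
import numpy as np

def gillespie_lotka(k1=1.0, k2=0.005, k3=0.6, x0=300, y0=300,
                    t_max=20.0, max_steps=100_000, seed=3):
    """Gillespie SSA for the classical well-mixed Lotka scheme:
       X -> 2X (rate k1*X), X + Y -> 2Y (rate k2*X*Y), Y -> 0 (rate k3*Y)."""
    rng = np.random.default_rng(seed)
    t, x, y = 0.0, x0, y0
    ts, xs, ys = [t], [x], [y]
    for _ in range(max_steps):
        rates = np.array([k1 * x, k2 * x * y, k3 * y], dtype=float)
        total = rates.sum()
        if total == 0.0 or t >= t_max:
            break
        t += rng.exponential(1.0 / total)         # waiting time to next event
        channel = rng.choice(3, p=rates / total)  # which reaction fires
        if channel == 0:
            x += 1
        elif channel == 1:
            x, y = x - 1, y + 1                   # autocatalytic step X+Y -> 2Y
        else:
            y -= 1
        ts.append(t)
        xs.append(x)
        ys.append(y)
    return np.array(ts), np.array(xs), np.array(ys)
```

In the mean-field limit this scheme orbits the fixed point (x*, y*) = (k3/k2, k1/k2); the stochastic trajectories show the noisy oscillations whose synchronization and amplitude the lattice study probes.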
Monte Carlo method-based QSAR modeling of penicillins binding to human serum proteins.
Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M; Veselinović, Aleksandar M
2015-01-01
The binding of penicillins to human serum proteins was modeled with optimal descriptors based on the Simplified Molecular Input-Line Entry System (SMILES). The concentrations of protein-bound drug for 87 penicillins, expressed as percentages of the total plasma concentration, were used as experimental data. The Monte Carlo method was used as a computational tool to build up the quantitative structure-activity relationship (QSAR) model for penicillins binding to plasma proteins. One random data split into training, test and validation sets was examined. The calculated QSAR model had the following statistical parameters: r² = 0.8760, q² = 0.8665, s = 8.94 for the training set and r² = 0.9812, q² = 0.9753, s = 7.31 for the test set. For the validation set, the statistical parameters were r² = 0.727 and s = 12.52, but after removing the three worst outliers, the statistical parameters improved to r² = 0.921 and s = 7.18. SMILES-based molecular fragments (structural indicators) responsible for the increase and decrease of penicillins binding to plasma proteins were identified. The possibility of using these results for the computer-aided design of new penicillins with desired binding properties is presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
QSAR models for HEPT derivatives as NNRTI inhibitors based on the Monte Carlo method.
Toropova, Alla P; Toropov, Andrey A; Veselinović, Jovana B; Miljković, Filip N; Veselinović, Aleksandar M
2014-04-22
A series of 107 1-[(2-hydroxyethoxy)-methyl]-6-(phenylthio)thymine (HEPT) derivatives with anti-HIV-1 activity as non-nucleoside reverse transcriptase inhibitors (NNRTIs) has been studied. The Monte Carlo method has been used as a tool to build up quantitative structure-activity relationships (QSAR) for anti-HIV-1 activity. The QSAR models were calculated with the molecular structure represented by the simplified molecular input-line entry system and by the molecular graph. Three different splits into training and test sets were examined. The statistical quality of all built models is very good. The best calculated model had the following statistical parameters: r² = 0.8818, q² = 0.8774 for the training set and r² = 0.9360, q² = 0.9243 for the test set. Structural indicators (alerts) for the increase and decrease of the IC50 are defined. Using the defined structural alerts, the computer-aided design of new potential anti-HIV-1 HEPT derivatives is presented. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics
Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou
Ebola virus infection is a severe infectious disease with a very high case fatality rate, and it has become a global public health threat. What makes the disease especially dangerous is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of Ebola epidemics has been developed and comprehensively analyzed. The existence and uniqueness of the solution to the model are verified, and the basic reproduction number is calculated. Stability conditions are also checked, and simulations are performed using both the Euler method and the Markov Chain Monte Carlo (MCMC) method, one of the most influential algorithms in computational statistics. Different vaccination and quarantine rates are used to predict their effect on the number of infected individuals over time. The results show that quarantine and vaccination are very effective ways to control an Ebola epidemic. Our study also indicates that an individual who survives a first infection is less likely to contract the Ebola virus a second time. Finally, real data have been fitted to the model, showing that it can be used to predict the dynamics of Ebola epidemics.
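The Euler-method side of such a simulation can be sketched with a minimal compartmental model including vaccination and quarantine. The compartments, rates, and parameter values below are illustrative assumptions for a generic SIQR-type model with vaccination, not the paper's calibrated Ebola model.

```python
# Minimal Euler-method sketch of an SIQR-type epidemic model with vaccination.
# All parameter values are illustrative assumptions, not fitted Ebola data.
def simulate(beta=0.3, nu=0.05, kappa=0.1, gamma=0.1, dt=0.1, days=200):
    S, V, I, Q, R = 0.99, 0.0, 0.01, 0.0, 0.0   # fractions of the population
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * S * I                   # quarantined cases assumed non-infectious
        dS = -new_inf - nu * S                   # nu: vaccination rate
        dV = nu * S
        dI = new_inf - kappa * I - gamma * I     # kappa: quarantine rate
        dQ = kappa * I - gamma * Q
        dR = gamma * (I + Q)
        S += dS * dt; V += dV * dt; I += dI * dt; Q += dQ * dt; R += dR * dt
        history.append((S, V, I, Q, R))
    return history

low_vax = simulate(nu=0.0)
high_vax = simulate(nu=0.05)
peak = lambda h: max(i for _, _, i, _, _ in h)
print(f"peak infected: no vaccination {peak(low_vax):.3f}, "
      f"with vaccination {peak(high_vax):.3f}")
```

Because the five rate equations sum to zero, the Euler update conserves the total population, and raising the vaccination rate lowers the epidemic peak, mirroring the qualitative conclusion of the abstract.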
Scott, Alison J D; Nahum, Alan E; Fenwick, John D
2009-07-01
The accuracy with which Monte Carlo models of photon beams generated by linear accelerators (linacs) can describe small-field dose distributions depends on the modeled width of the electron beam profile incident on the linac target. It is known that the electron focal spot width affects penumbra and cross-field profiles; here, the authors explore the extent to which source occlusion reduces linac output for smaller fields and larger spot sizes. A BEAMnrc Monte Carlo linac model has been used to investigate the variation in penumbra widths and small-field output factors with electron spot size. A formalism is developed separating head scatter factors into source occlusion and flattening filter factors. Differences between head scatter factors defined in terms of in-air energy fluence, collision kerma, and terma are explored using Monte Carlo calculations. Estimates of changes in kerma-based source occlusion and flattening filter factors with field size and focal spot width are obtained by calculating doses deposited in a narrow 2 mm wide virtual "miniphantom" geometry. The impact of focal spot size on phantom scatter is also explored. Modeled electron spot sizes of 0.4-0.7 mm FWHM generate acceptable matches to measured penumbra widths. However, the 0.5 cm field output factor is quite sensitive to the electron spot width; the measured output is only matched by calculations for a 0.7 mm spot width. Because the spectra of the unscattered primary (psi(pi)) and head-scattered (psi(sigma)) photon energy fluences differ, miniphantom-based collision kerma measurements do not scale precisely with the total in-air energy fluence psi = psi(pi) + psi(sigma), but with psi(pi) + 1.2psi(sigma). For most field sizes, on-axis collision kerma is independent of the focal spot size; but for a 0.5 cm field size and 1.0 mm spot width, it is reduced by around 7%, mostly due to source occlusion. The phantom scatter factor of the 0.5 cm field also shows some spot size dependence, decreasing by
Galilean invariance in the exponential model of atomic collisions
Energy Technology Data Exchange (ETDEWEB)
del Pozo, A.; Riera, A.; Yáñez, M.
1986-11-01
Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital due to interaction with the X^((n+1)+) ion in the exit channel yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.
A Monte Carlo model of the Varian IGRT couch top for RapidArc QA.
Teke, T; Gill, B; Duzenli, C; Popescu, I A
2011-12-21
The objectives of this study are to evaluate the effect of couch attenuation on quality assurance (QA) results and to present a couch top model for Monte Carlo (MC) dose calculation for RapidArc treatments. The IGRT couch top is modelled in Eclipse as a thin skin of higher density material with a homogeneous fill of foam of lower density and attenuation. The IGRT couch structure consists of two longitudinal sections referred to as thick and thin. The Hounsfield Unit (HU) characterization of the couch structure was determined using a cylindrical phantom by comparing ion chamber measurements with the dose predicted by the treatment planning system (TPS). The optimal set of HU for the inside of the couch and the surface shell was found to be respectively -960 and -700 HU in agreement with Vanetti et al (2009 Phys. Med. Biol. 54 N157-66). For each plan, the final dose calculation was performed with the thin, thick and without the couch top. Dose differences up to 2.6% were observed with TPS calculated doses not including the couch and up to 3.4% with MC not including the couch and were found to be treatment specific. A MC couch top model was created based on the TPS geometrical model. The carbon fibre couch top skin was modelled using carbon graphite; the density was adjusted until good agreement with experimental data was observed, while the density of the foam inside was kept constant. The accuracy of the couch top model was evaluated by comparison with ion chamber measurements and TPS calculated dose combined with a 3D gamma analysis. Similar to the TPS case, a single graphite density can be used for both the thin and thick MC couch top models. Results showed good agreement with ion chamber measurements (within 1.2%) and with TPS (within 1%). For each plan, over 95% of the points passed the 3D gamma test.
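The 3D gamma analysis used to validate the couch model combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch of the gamma-index concept is given below; the profiles are made up for illustration, and this brute-force implementation is not the study's actual analysis code.

```python
import math

def gamma_index(ref, eval_, spacing, dd=0.03, dta=3.0):
    """Global gamma for 1D dose profiles: dd is the dose criterion as a fraction
    of the reference maximum, dta the distance-to-agreement criterion in mm,
    spacing the grid spacing in mm. Returns one gamma value per reference point."""
    dmax = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(eval_):          # search all evaluated points
            dist = (i - j) * spacing            # spatial separation, mm
            ddose = de - dr                     # dose difference
            g2 = (dist / dta) ** 2 + (ddose / (dd * dmax)) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# A 2% uniform dose offset still passes a 3%/3 mm global gamma test.
ref = [10, 30, 80, 100, 80, 30, 10]             # hypothetical dose profile
shifted = [d * 1.02 for d in ref]
g = gamma_index(ref, shifted, spacing=1.0)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
print(f"gamma pass rate: {pass_rate:.0%}")
```

A point passes when its gamma value is at most 1, i.e. some evaluated point lies within the combined dose/distance tolerance ellipse; the "over 95% of points passed" criterion in the abstract is the pass rate of exactly this test, extended to three dimensions.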
The two-phase issue in the O(n) non-linear σ-model: A Monte Carlo study
Alles, B.; Buonanno, A.; Cella, G.
1996-01-01
We have performed a high-statistics Monte Carlo simulation to investigate whether the two-dimensional O(n) non-linear sigma models are asymptotically free or show a Kosterlitz-Thouless-like phase transition. We have calculated the mass gap and the magnetic susceptibility in the O(8) model with the standard action and in the O(3) model with the Symanzik action. Our results for O(8) support the asymptotic-freedom scenario.
Border Collision Bifurcations in a Generalized Model of Population Dynamics
Directory of Open Access Journals (Sweden)
Lilia M. Ladino
2016-01-01
We analyze the dynamics of a generalized discrete-time population model of a two-stage species with recruitment and capture. This generalization, inspired by other approaches and real data found in the literature, consists of placing no restriction on the values of the two key parameters of the model: the natural death rate and the mortality rate due to fishing activity. In the more general case the feasibility of the system is preserved by posing suitable formulas for the piecewise map defining the model. The resulting two-dimensional nonlinear map is continuous but not smooth, as its definition changes whenever a border is crossed in the phase plane. Hence, techniques from the mathematical theory of piecewise-smooth dynamical systems must be applied to show that, due to the existence of borders, abrupt changes in the dynamic behavior of population sizes and multistability emerge. The main novelty of the present contribution with respect to previous ones is that, while using real data, richer dynamics are produced, such as fluctuations and multistability. This new evidence is of great interest in biology, since new strategies to preserve the survival of the species can be suggested.
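The abrupt attractor changes that occur when an orbit hits a border can be illustrated with the standard one-dimensional piecewise-linear normal form of a border-collision bifurcation. The slopes below are hypothetical and unrelated to the paper's two-stage population map; they are chosen so that the attractor changes character as the parameter mu crosses zero.

```python
# Piecewise-linear normal form of a border-collision bifurcation
# (illustrative slopes a, b; not the paper's population model).
def step(x, mu, a=0.5, b=-1.8):
    return a * x + mu if x <= 0 else b * x + mu   # definition switches at the border x = 0

def attractor(mu, n_transient=500, n_keep=100):
    x = 0.1
    for _ in range(n_transient):                  # discard the transient
        x = step(x, mu)
    orbit = set()
    for _ in range(n_keep):                       # record the attractor
        x = step(x, mu)
        orbit.add(round(x, 6))
    return sorted(orbit)

# For mu < 0 the fixed point x* = mu/(1 - a) lies on the stable left branch;
# as mu crosses 0 the fixed point collides with the border and the attractor
# jumps abruptly to a different invariant set.
print("mu = -0.1:", attractor(-0.1))
print("mu = +0.1:", attractor(+0.1))
```

There is no gradual loss of stability as in smooth bifurcations: the attractor changes discontinuously at the border crossing, which is the mechanism behind the abrupt changes in population sizes mentioned in the abstract.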
Yan Gao; Zhiqiang Hu; Jin Wang
2014-01-01
Increasing marine activity in the Arctic has brought growing interest in the study of ship-iceberg collisions. The purpose of this paper is to study the effect of iceberg geometry on the collision process. In order to estimate the parameter sensitivity, five different iceberg geometry models and two iceberg material models are adopted in the analysis. FEM numerical simulation is used to predict the collision scenario and the related responses. The simulation results including energy dissipation ...
Caselle, Michele; Panero, Marco
2007-01-01
We provide accurate Monte Carlo results for the free energy of interfaces with periodic boundary conditions in the 3D Ising model. We study a large range of inverse temperatures, allowing us to control corrections to scaling. In addition to square interfaces, we study rectangular interfaces over a large range of aspect ratios u = L_1/L_2. Our numerical results are compared with the predictions of effective interface models. This comparison clearly verifies the effective Nambu-Goto model up to two-loop order. Our data also allow us to obtain the estimates T_c sigma^(-1/2) = 1.235(2), m_0++ sigma^(-1/2) = 3.037(16) and R_+ = f_+ sigma_0^2 = 0.387(2), which are more precise than previous ones.
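The kind of Monte Carlo estimate underlying such lattice studies can be illustrated with a minimal Metropolis simulation of the 2D Ising model. This is a far simpler setting than the paper's 3D interface free-energy measurements; the lattice size, temperature, and sweep counts below are arbitrary choices for demonstration.

```python
import math
import random

# Minimal Metropolis sketch for the 2D Ising model (J = 1, zero field),
# measuring the mean absolute magnetization and a susceptibility estimate.
def metropolis_ising(L=16, beta=0.3, sweeps=400, measure_from=200, seed=7):
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):                  # one sweep = L*L attempted flips
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb           # energy change of flipping spin (i, j)
            # accept with the Metropolis probability min(1, exp(-beta*dE))
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] *= -1
        if sweep >= measure_from:               # measure after equilibration
            mags.append(sum(sum(row) for row in spins) / (L * L))
    mean_abs_m = sum(abs(m) for m in mags) / len(mags)
    chi = beta * L * L * (sum(m * m for m in mags) / len(mags) - mean_abs_m ** 2)
    return mean_abs_m, chi

m, chi = metropolis_ising()
print(f"<|m|> = {m:.3f}, chi = {chi:.3f}")
```

At beta = 0.3, below the critical coupling beta_c ≈ 0.4407, the system is in the disordered phase and the magnetization stays small; interface free energies as in the paper are extracted from far more sophisticated variants of this basic sampling scheme.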