WorldWideScience

Sample records for carlo shell model

  1. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
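
    A schematic statement of the decomposition described above, in generic notation (not taken from the review itself): one quadratic piece of the two-body interaction is linearized on each imaginary-time slice by a Gaussian integral over an auxiliary field, so the full propagator becomes a path integral over one-body evolutions.

```latex
% One quadratic term of the two-body interaction, H_2 = -(1/2)\lambda \hat O^2,
% linearized on a time slice \Delta\beta by a Hubbard-Stratonovich transformation:
\[
  e^{\frac{1}{2}\Delta\beta\,\lambda\,\hat O^{2}}
  \;=\;
  \sqrt{\frac{\Delta\beta\,|\lambda|}{2\pi}}
  \int_{-\infty}^{\infty}\! d\sigma\;
  e^{-\frac{1}{2}\Delta\beta\,|\lambda|\,\sigma^{2}}\,
  e^{\Delta\beta\, s\,\lambda\,\sigma\,\hat O},
  \qquad
  s=\begin{cases}1, & \lambda>0,\\ i, & \lambda<0,\end{cases}
\]
% so that the full propagator becomes an integral over one-body evolutions
% generated by field-dependent one-body Hamiltonians \hat h(\sigma_n),
\[
  e^{-\beta\hat H}\;\approx\;\int\!\mathcal D[\sigma]\,G(\sigma)\,\hat U_\sigma ,
  \qquad
  \hat U_\sigma=\prod_{n=1}^{N_t}e^{-\Delta\beta\,\hat h(\sigma_n)} ,
\]
% which is the path integral that SMMC evaluates stochastically.
```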

  2. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  3. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
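
    A minimal numerical sketch of the auxiliary-field idea described above, under heavy assumptions: a two-level toy Hamiltonian with an attractive number-operator interaction (so the auxiliary fields stay real and there is no sign problem), with the grand-canonical trace of each sampled one-body propagator evaluated as det(1 + U). All names and parameter values are invented for illustration; a real AFMC/SMMC code works in much larger single-particle spaces and needs the sampling and extrapolation machinery described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian: two single-particle levels and an attractive interaction in
# the total number operator N,  H = sum_i eps_i n_i - (g/2) N^2  (illustrative).
eps = np.array([0.0, 1.0])
g, beta, n_slices = 0.5, 1.0, 16
dbeta = beta / n_slices

# Exact grand-canonical partition function by enumerating the four Fock states.
Z_exact = sum(np.exp(-beta * (eps[0]*n0 + eps[1]*n1 - 0.5*g*(n0 + n1)**2))
              for n0 in (0, 1) for n1 in (0, 1))

# Hubbard-Stratonovich per slice: e^{dbeta*g*N^2/2} = E_sigma[e^{dbeta*g*sigma*N}]
# with sigma ~ Normal(0, 1/(dbeta*g)).  Each sampled field turns the slice into a
# one-body propagator exp(-dbeta*h(sigma)), h(sigma) = diag(eps) - g*sigma*1, and
# the grand-canonical trace of the product U of one-body propagators is det(1+U).
# (Everything commutes in this toy, so there is no Trotter error; g > 0 keeps the
# fields real, i.e. no sign problem.)
n_samples = 20_000
vals = np.empty(n_samples)
for k in range(n_samples):
    U = np.eye(2)
    for _ in range(n_slices):
        sigma = rng.normal(0.0, 1.0 / np.sqrt(dbeta * g))
        h_diag = eps - g * sigma                 # one-body matrix (diagonal here)
        U = U @ np.diag(np.exp(-dbeta * h_diag))
    vals[k] = np.linalg.det(np.eye(2) + U)

print(f"exact Z           = {Z_exact:.4f}")
print(f"auxiliary-field Z = {vals.mean():.4f} +/- {vals.std(ddof=1)/np.sqrt(n_samples):.4f}")
```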

  4. Monte Carlo Shell Model for ab initio nuclear structure

    Directory of Open Access Journals (Sweden)

    Abe T.

    2014-03-01

    We report on our recent application of the Monte Carlo Shell Model to no-core calculations. At the initial stage of the application, we have performed benchmark calculations in the p-shell region. Results are compared with those of the Full Configuration Interaction and No-Core Full Configuration methods. The three approaches are found to be consistent with each other within the quoted uncertainties, where these could be quantified. Preliminary results in N_shell = 5 reveal the onset of a systematic convergence pattern.

  5. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics]; Honma, Michio

    1998-03-01

    Attempts to solve the shell model by new methods are briefly reviewed. The shell-model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the method uses deformed Slater determinants as basis states, so a projection operator is applied to obtain states of good angular momentum. The dynamically determined space is explored mainly stochastically: the many-body energy obtained from the basis formed from the candidate states is evaluated, and the candidates are selectively adopted. The symmetry is discussed, and a scheme was devised for decomposing the shell-model space into a dynamically determined space and the product of spin and isospin spaces. The calculation procedure is illustrated with the example of {sup 50}Mn nuclei. For the level structure of {sup 48}Cr, whose exact energies are known, the calculation reproduces the absolute energy eigenvalues to within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell-model calculations of the structure of {sup 56}Ni using nuclear-model interactions are reported. (K.I.)
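
    The selection step sketched above (stochastically generated candidate basis states, adopted only if they lower the energy obtained by diagonalization in the spanned subspace) can be illustrated with a deliberately small toy. Here a random symmetric matrix stands in for the shell-model Hamiltonian and noisy Krylov-like vectors stand in for angular-momentum-projected deformed Slater determinants; everything below is an invented illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "shell-model" Hamiltonian: a random real symmetric matrix.
dim = 400
A = rng.normal(size=(dim, dim))
H = (A + A.T) / np.sqrt(dim)
E_exact = np.linalg.eigvalsh(H)[0]

def subspace_ground_state(Q):
    """Lowest eigenvalue/eigenvector of H restricted to the orthonormal columns of Q."""
    w, v = np.linalg.eigh(Q.T @ H @ Q)
    return w[0], Q @ v[:, 0]

# Start from one random normalized vector and grow the basis stochastically.
q = rng.normal(size=dim); q /= np.linalg.norm(q)
Q = q[:, None]
E, psi = subspace_ground_state(Q)

for trial in range(400):
    # candidate biased toward dynamically relevant directions: H acting on the
    # current best state, plus noise (a crude stand-in for stochastic sampling)
    cand = H @ psi + 0.1 * rng.normal(size=dim)
    cand -= Q @ (Q.T @ cand)                 # orthogonalize against the current basis
    nrm = np.linalg.norm(cand)
    if nrm < 1e-10:
        continue
    Q_try = np.hstack([Q, (cand / nrm)[:, None]])
    E_try, psi_try = subspace_ground_state(Q_try)
    if E_try < E - 1e-3:                     # adopt the candidate only if it lowers E
        Q, E, psi = Q_try, E_try, psi_try
    if Q.shape[1] >= 40:
        break

print(f"exact E0 = {E_exact:.4f}   selected-basis E0 = {E:.4f}   basis size = {Q.shape[1]}")
```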

  6. Level densities of heavy nuclei in the shell model Monte Carlo approach

    Directory of Open Access Journals (Sweden)

    Alhassid Y.

    2016-01-01

    Nuclear level densities are necessary input to the Hauser-Feshbach theory of compound nuclear reactions. However, the microscopic calculation of level densities in the presence of correlations is a challenging many-body problem. The configuration-interaction shell model provides a suitable framework for the inclusion of correlations and shell effects, but the large dimensionality of the many-particle model space has limited its application in heavy nuclei. The shell model Monte Carlo method enables calculations in spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods and has proven to be a powerful tool in the microscopic calculation of level densities. We discuss recent applications of the method in heavy nuclei.
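
    One common route from SMMC-style thermal observables to a state density is a saddle-point inversion of the Laplace transform: the canonical entropy S(β) = ln Z + βE gives ρ(E(β)) ≈ exp(S)/sqrt(-2π dE/dβ). The toy below, with an analytically known Z(β) for a set of independent two-level modes standing in for the SMMC input, checks that formula against direct state counting; the model and numbers are purely illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Toy "nucleus": 16 independent two-level modes with random excitation energies,
# so Z(beta) and E(beta) (the quantities SMMC provides) are known in closed form.
eps = rng.uniform(0.5, 1.5, size=16)

def lnZ(beta):      return np.sum(np.log1p(np.exp(-beta * eps)))
def E(beta):        return np.sum(eps / (np.exp(beta * eps) + 1.0))
def dE_dbeta(beta):
    x = np.exp(beta * eps)
    return -np.sum(eps**2 * x / (x + 1.0)**2)

def rho_saddle(beta):
    """Saddle-point (Laplace-inversion) estimate of the state density at E(beta)."""
    S = lnZ(beta) + beta * E(beta)            # canonical entropy
    return np.exp(S) / np.sqrt(-2.0 * np.pi * dE_dbeta(beta))

# "Exact" state density from direct counting of all 2^16 configurations.
occ = np.array(list(product((0, 1), repeat=16)), dtype=float)
counts, edges = np.histogram(occ @ eps, bins=60)
centers, width = 0.5 * (edges[1:] + edges[:-1]), edges[1] - edges[0]

for beta in (2.0, 1.0, 0.5):
    Eb = E(beta)
    i = np.argmin(np.abs(centers - Eb))
    print(f"beta={beta:3.1f}  E={Eb:5.2f}  rho_saddle={rho_saddle(beta):8.1f}"
          f"  rho_counted={counts[i] / width:8.1f}")
```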

  7. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function that assumes two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  8. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in a recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension ~10^29. We find good agreement with experimental results for both state densities and ⟨J^2⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments.
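
    Item (v) can be written compactly; with ρ(E_x, J) the density of levels of spin J, the state density counts magnetic substates while the level density does not:

```latex
% State density versus level density, given the spin-projected density rho(E_x, J):
\[
  \rho_{\text{state}}(E_x)=\sum_J (2J+1)\,\rho(E_x,J),
  \qquad
  \rho_{\text{level}}(E_x)=\sum_J \rho(E_x,J),
\]
% so a spin-projection method that yields rho(E_x, J) gives access to both quantities.
```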

  9. Shell-model Monte Carlo simulations of the BCS-BEC crossover in few-fermion systems

    DEFF Research Database (Denmark)

    Zinner, Nikolaj Thomas; Mølmer, Klaus; Özen, C.

    2009-01-01

    We study a trapped system of fermions with a zero-range two-body interaction using the shell-model Monte Carlo method, providing ab initio results for the low particle number limit where mean-field theory is not applicable. We present results for the N-body energies as a function of interaction strength, particle number, and temperature. The subtle question of renormalization in a finite model space is addressed and the convergence of our method and its applicability across the BCS-BEC crossover is discussed. Our findings indicate that very good quantitative results can be obtained on the BCS...

  10. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    NARCIS (Netherlands)

    Spada, F.M.; Krol, M.C.; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres.

  11. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    NARCIS (Netherlands)

    Spada, F.; Krol, M.C.; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres.

  12. McSCIA: application of the Equivalence Theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    Directory of Open Access Journals (Sweden)

    F. Spada

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres. The latter geometry is essential for the interpretation of limb satellite measurements, as performed by SCIAMACHY on board ESA's Envisat. The model can simulate UV-vis-NIR radiation. First the ray-tracing algorithm is presented in detail, and then successfully validated against literature references, both in plane-parallel and in spherical geometry. A simple 1-D model is used to explain two different ways of treating absorption. One method uses the single scattering albedo while the other uses the equivalence theorem. The equivalence theorem is based on a separation of absorption and scattering. It is shown that both methods give, in a statistical way, identical results for a wide variety of scenarios. Both absorption methods are included in McSCIA, and it is shown that also for a 3-D case both formulations give identical results. McSCIA limb profiles for atmospheres with and without absorption compare well with those of the state-of-the-art Monte Carlo radiative transfer model MCC++. A simplification of the photon statistics may lead to very fast calculations of absorption features in the atmosphere. However, these simplifications potentially introduce biases in the results. McSCIA does not use simplifications and is therefore a relatively slow implementation of the equivalence theorem.
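
    The two absorption treatments compared above can be reproduced in a minimal 1-D multiple-scattering toy: absorption applied collision by collision through the single-scattering albedo, versus scattering-only transport reweighted by exp(-k_abs·L) over the in-medium path length L (the equivalence theorem). The slab geometry, coefficients and photon counts below are illustrative and unrelated to McSCIA's actual implementation; the point is only that the two estimators agree within statistics.

```python
import numpy as np

rng = np.random.default_rng(9)

# 1-D homogeneous slab, isotropic (forward/backward) scattering; illustrative values.
k_sca, k_abs, d, n_phot = 1.0, 0.3, 2.0, 100_000

def transmittance_albedo():
    """Absorption handled at each collision through the single-scattering albedo."""
    k_ext, w0, hits = k_sca + k_abs, k_sca / (k_sca + k_abs), 0
    for _ in range(n_phot):
        x, mu = 0.0, 1.0
        while True:
            x += mu * rng.exponential(1.0 / k_ext)
            if x >= d: hits += 1; break
            if x <= 0.0: break                       # escaped backwards
            if rng.random() > w0: break              # absorbed at the collision
            mu = 1.0 if rng.random() < 0.5 else -1.0
    return hits / n_phot

def transmittance_equivalence():
    """Scattering-only transport; absorption applied afterwards as exp(-k_abs*L)."""
    total = 0.0
    for _ in range(n_phot):
        x, mu, L = 0.0, 1.0, 0.0
        while True:
            step = rng.exponential(1.0 / k_sca)
            x_new = x + mu * step
            if x_new >= d:
                total += np.exp(-k_abs * (L + (d - x) / mu)); break
            if x_new <= 0.0:
                break                                # escaped backwards, not tallied
            L += step
            x = x_new
            mu = 1.0 if rng.random() < 0.5 else -1.0
    return total / n_phot

print("single-scattering-albedo method :", transmittance_albedo())
print("equivalence-theorem method      :", transmittance_equivalence())
```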

  13. Strongly screening electron capture for nuclides 52, 53, 59, 60Fe by the Shell-Model Monte Carlo method in pre-supernovae

    Science.gov (United States)

    Liu, Jing-Jing; Peng, Qiu-He; Liu, Dong-Mei

    2017-09-01

    The death of massive stars due to supernova explosions is a key ingredient in stellar evolution and stellar population synthesis. Electron capture (EC) plays a vital role in supernova explosions. Using the Shell-Model Monte Carlo method, based on the nuclear random phase approximation and linear response theory model for electrons, we study the strong screening EC rates of 52, 53, 59, 60Fe in pre-supernovae. The results show that the screening rates can decrease by about 18.66%. Our results may become a good foundation for future investigation of the evolution of late-type stars, supernova explosion mechanisms and numerical simulations. Supported by National Natural Science Foundation of China (11565020), Counterpart Foundation of Sanya (2016PT43), Special Foundation of Science and Technology Cooperation for Advanced Academy and Regional of Sanya (2016YD28), Scientific Research Starting Foundation for 515 Talented Project of Hainan Tropical Ocean University (RHDRC201701) and Natural Science Foundation of Hainan Province (114012)

  14. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using the CASINO code. ... From our results, though the VQMC method showed much fluctuation, the technique calculated the electric dipole moment of the hydrazine molecule as 2.0 D, which is in closer ...

  15. Contemporary nuclear shell models

    CERN Document Server

    Luo Yan An; Zhang Xia; Tan Yu Hong; Ning Ping Zhi

    2002-01-01

    The current status of theoretical investigations of the nuclear shell model is reviewed, and the fundamental problems in shell-model studies are mentioned. Basically the shell model uses a very intuitive approach to study the nuclear many-body dynamics in terms of valence particles. It assumes that the nucleons belonging to a closed core do not participate in the establishment of the nuclear spectrum. One of the main problems in the (traditional) shell model is to make a calculation feasible. With the explosive growth of computational power, it is possible to carry out a 'Very Large Scale' shell model calculation. Nevertheless, whether such a calculation really helps the authors' understanding of physics is still an open question. Furthermore, the case of the medium weight and heavy nuclei with configurations of 10^14-10^18 remains out of reach. For these nuclei one still needs to truncate the huge shell model space to a manageable subspace. Recently, a useful formalism has been descr...

  16. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  17. Temporal structures in shell models

    DEFF Research Database (Denmark)

    Okkels, F.

    2001-01-01

    The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...

  18. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Vincent

    presented. The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock. (RHF) scheme. The components dependence of the electric dipole moment from the QMC technique is studied with a single determinant Slater-Jastrow trial wave-function obtained from the ...

  19. Isogeometric shell formulation based on a classical shell model

    KAUST Repository

    Niemi, Antti

    2012-09-04

    This paper constitutes the first steps in our work concerning isogeometric shell analysis. An isogeometric shell model of the Reissner-Mindlin type is introduced and a study of its accuracy in the classical pinched cylinder benchmark problem is presented. In contrast to earlier works [1,2,3,4], the formulation is based on a shell model where the displacement, strain and stress fields are defined in terms of a curvilinear coordinate system arising from the NURBS description of the shell middle surface. The isogeometric shell formulation is implemented using the PetIGA and igakit software packages developed by the authors. The igakit package is a Python package used to generate NURBS representations of geometries that can be utilised by the PetIGA finite element framework. The latter utilises data structures and routines of the portable, extensible toolkit for scientific computation (PETSc), [5,6]. The current shell implementation is valid for static, linear problems only, but the software package is well suited for future extensions to the geometrically and materially nonlinear regime as well as to dynamic problems. We assess the accuracy of the approach in the pinched cylinder benchmark problem and present comparisons against the h-version of the finite element method with bilinear elements. Quadratic, cubic and quartic NURBS discretizations are compared against the isoparametric bilinear discretization introduced in [7]. The results show that the quadratic and cubic NURBS approximations exhibit notably slower convergence under uniform mesh refinement as the thickness decreases but the quartic approximation converges relatively quickly within the standard variational framework. The authors' future work is concerned with building an isogeometric finite element method for modelling nonlinear structural response of thin-walled shells undergoing large rigid-body motions. The aim is to use the model in an aeroelastic framework for the simulation of flapping wings.

  20. Monte Carlo simulations of core/shell nanoparticles containing interfacial defects: Role of disordered ferromagnetic spins

    International Nuclear Information System (INIS)

    Ho, Le Bin; Lan, Tran Nguyen; Hai, Tran Hoang

    2013-01-01

    In this work, we have used Monte Carlo simulation to investigate the magnetic properties of an isolated composite magnetic nanoparticle with ferromagnetic (FM) core and antiferromagnetic (AFM) shell morphology. The defects were assumed to be randomly located at the AFM interface. The Néel anisotropy was used for the FM interface spins where crystal symmetry is broken because of the vacancies at the AFM interface. With a moderate defect concentration, the coercive field depends non-monotonically on the Néel anisotropy. We have examined the dependence of the coercivity, exchange bias field, and vertical shift on the defect concentration. We found that, in addition to the AFM shell, the disordered FM interface is another pinning source for the exchange bias phenomenon. We discuss our simulated results in relation to recent experimental findings
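
    A heavily simplified sketch of this type of simulation: Ising spins (rather than the more realistic spins of the paper) on a small cubic lattice with a ferromagnetic spherical core, an antiferromagnetic shell, random vacancies in the first shell layer, and single-spin Metropolis updates. The Hamiltonian, couplings and sizes are generic placeholders, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Geometry: spherical FM core inside an AFM shell, carved out of a small cubic lattice.
L, r_core, r_shell, p_defect = 10, 2.5, 4.5, 0.3
x, y, z = np.indices((L, L, L)) - (L - 1) / 2
r = np.sqrt(x**2 + y**2 + z**2)
region = np.zeros((L, L, L), dtype=int)            # 0 = vacuum, 1 = core, 2 = shell
region[r <= r_shell] = 2
region[r <= r_core] = 1

# Random vacancies ("interfacial defects") in the first shell layer around the core.
interface = (region == 2) & (r <= r_core + 1.0)
region[interface & (rng.random((L, L, L)) < p_defect)] = 0

spins = rng.choice((-1, 1), size=(L, L, L))
spins[region == 0] = 0                             # vacancy/vacuum sites carry no spin

J_cc, J_ss, J_int, T = 1.0, -0.5, -0.5, 0.5        # FM core, AFM shell, AFM interface

def bond_J(a, b):
    if a == 0 or b == 0: return 0.0
    if a == 1 and b == 1: return J_cc
    if a == 2 and b == 2: return J_ss
    return J_int

neigh = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def metropolis_sweep():
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, size=3)
        if region[i, j, k] == 0:
            continue
        field = 0.0
        for di, dj, dk in neigh:
            ii, jj, kk = i + di, j + dj, k + dk
            if 0 <= ii < L and 0 <= jj < L and 0 <= kk < L:
                field += bond_J(region[i, j, k], region[ii, jj, kk]) * spins[ii, jj, kk]
        dE = 2.0 * spins[i, j, k] * field          # from E = -sum_<ij> J_ij s_i s_j
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j, k] *= -1

for _ in range(80):
    metropolis_sweep()

core = region == 1
print("core magnetization per spin:", spins[core].sum() / core.sum())
```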

  1. Drexel University Shell Model (DUSM) algorithm

    Science.gov (United States)

    Valliéres, Michel; Novoselsky, Akiva

    1994-03-01

    This lecture is devoted to the Drexel University Shell Model (DUSM) code; this is a new shell-model code based on a separation of the various subspaces in which the single-particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model.

  2. Drexel University Shell Model (DUSM) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vallieres, M. (Drexel Univ., Philadelphia, PA (United States). Dept. of Physics and Atmospheric Science); Novoselsky, A. (Hebrew Univ., Jerusalem (Israel). Dept. of Physics)

    1994-03-28

    This lecture is devoted to the Drexel University Shell Model (DUSM) code; this is a new shell-model code based on a separation of the various subspaces in which the single-particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model. (orig.)

  3. Localized versus shell-model-like clusters

    Energy Technology Data Exchange (ETDEWEB)

    Cseh, J.; Algora, A. [Institute of Nuclear Research of the Hungarian Academy of Sciences, Debrecen, Pf. 51, 4001 Hungary (Hungary); Darai, J. [Institute of Experimental Physics, University of Debrecen, Debrecen, Bem ter 18/A, 4026 Hungary (Hungary); Yepez M, H. [Universidad Autonoma de la Ciudad de Mexico, Prolongacion San Isidro 151, Col. San Lorenzo Tezonco, 09790 Mexico D. F. (Mexico); Hess, P. O. [Instituto de Ciencias Nucleares, UNAM, Apartado Postal 70-543, 04510 Mexico D. F. (Mexico)]. e-mail: cseh@atomki.hu

    2008-12-15

    In light of the relation between the shell model and the cluster model, the concepts of localized and shell-model-like clusters are discussed. They are interpreted as different phases of clusterization, which may be characterized by quasi-dynamical symmetries, and are connected by a phase transition. (Author)

  4. Monte Carlo simulation of dynamic phase transitions and frequency dispersions of hysteresis curves in core/shell ferrimagnetic cubic nanoparticle

    Energy Technology Data Exchange (ETDEWEB)

    Vatansever, Erol, E-mail: erol.vatansever@deu.edu.tr

    2017-05-10

    By means of the Monte Carlo simulation method with the Metropolis algorithm, we elucidate the thermal and magnetic phase transition behaviors of a ferrimagnetic core/shell nanocubic system driven by a time-dependent magnetic field. The particle core is composed of ferromagnetic spins, and it is surrounded by an antiferromagnetic shell. At the interface of the core/shell particle, we use antiferromagnetic spin–spin coupling. We simulate the nanoparticle using classical Heisenberg spins. After a detailed analysis, our Monte Carlo simulation results suggest that the present system exhibits unusual and interesting magnetic behaviors. For example, at relatively low temperatures, an increment in the amplitude of the external field destroys the antiferromagnetism in the shell part of the nanoparticle, leading to a ground state with ferromagnetic character. Moreover, particular attention has been dedicated to the hysteresis behaviors of the system. For the first time, we show that frequency dispersions can be categorized into three groups for a fixed temperature for finite core/shell systems, as in the case of the conventional bulk systems under the influence of an oscillating magnetic field. - Highlights: • Cubic core/shell nanoparticle is considered. • Monte-Carlo simulation with Metropolis algorithm is used. • The particle is subjected to a time-dependent oscillating magnetic field. • The external field destroys the antiferromagnetism in the shell part of the particle. • Frequency dispersions of hysteresis loop areas can be categorized into three groups.

  5. Variability in shell models of GRBs

    Science.gov (United States)

    Sumner, M. C.; Fenimore, E. E.

    1997-01-01

    Many cosmological models of gamma-ray bursts (GRBs) assume that a single relativistic shell carries kinetic energy away from the source and later converts it into gamma rays, perhaps by interactions with the interstellar medium or by internal shocks within the shell. Although such models are able to reproduce general trends in GRB time histories, it is difficult to reproduce the high degree of variability often seen in GRBs. The authors investigate methods of achieving this variability using a simplified external shock model. Since the model emphasizes geometric and statistical considerations, rather than the detailed physics of the shell, it is applicable to any theory that relies on relativistic shells. They find that the variability in GRBs gives strong clues to the efficiency with which the shell converts its kinetic energy into gamma rays.

  6. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
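
    The forced-transition idea can be shown on the smallest possible example: a single repairable component (failure rate lam, repair rate mu) whose point unavailability at the mission time is known exactly. Forcing the first failure to occur inside the mission time and carrying the weight 1 - exp(-lam*t_m) leaves the estimate unbiased while removing most of the variance; all rates below are invented for illustration and this is not the paper's full scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

lam, mu, t_m = 1.0e-3, 0.1, 10.0     # failure rate, repair rate, mission time (illustrative)
# exact point unavailability of a single repairable two-state component
exact = lam / (lam + mu) * (1.0 - np.exp(-(lam + mu) * t_m))

def history(force_first_failure):
    """One random walk of the 2-state Markov model; returns the weighted score
    for 'component is down at t_m'."""
    t, up, w, first = 0.0, True, 1.0, True
    while t < t_m:
        if up:
            if force_first_failure and first:
                # sample the first failure time conditional on occurring before t_m
                u = rng.random()
                dt = -np.log(1.0 - u * (1.0 - np.exp(-lam * t_m))) / lam
                w *= 1.0 - np.exp(-lam * t_m)        # weight of the forced transition
            else:
                dt = rng.exponential(1.0 / lam)
            first = False
        else:
            dt = rng.exponential(1.0 / mu)
        if t + dt > t_m:
            break
        t, up = t + dt, not up
    return w if not up else 0.0

def estimate(force, n=50_000):
    s = np.array([history(force) for _ in range(n)])
    return s.mean(), s.std(ddof=1) / np.sqrt(n)

for label, force in (("analog", False), ("forced transition", True)):
    m, e = estimate(force)
    print(f"{label:18s} unavailability = {m:.3e} +/- {e:.1e}   (exact {exact:.3e})")
```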

  7. Monte Carlo modeling of eye iris color

    Science.gov (United States)

    Koblova, Ekaterina V.; Bashkatov, Alexey N.; Dolotov, Leonid E.; Sinichkin, Yuri P.; Kamenskikh, Tatyana G.; Genina, Elina A.; Tuchin, Valery V.

    2007-05-01

    Based on the presented two-layer eye iris model, the iris diffuse reflectance has been calculated by Monte Carlo technique in the spectral range 400-800 nm. The diffuse reflectance spectra have been recalculated in L*a*b* color coordinate system. Obtained results demonstrated that the iris color coordinates (hue and chroma) can be used for estimation of melanin content in the range of small melanin concentrations, i.e. for estimation of melanin content in blue and green eyes.

  8. Shell model calculations for exotic nuclei

    International Nuclear Information System (INIS)

    Brown, B.A.; Wildenthal, B.H.

    1991-01-01

    A review of the shell-model approach to understanding the properties of light exotic nuclei is given. Binding energies, including the p and p-sd model spaces and the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the nℏω excitations; beta decay properties of the neutron-rich sd model space, of the p-sd and sd-pf model spaces, and of the proton-rich sd model space; and Coulomb break-up cross sections are discussed. (G.P.) 76 refs.; 12 figs

  9. Cluster model of s-and p-shell ΛΛ hypernuclei

    Indian Academy of Sciences (India)

    The binding energies (BΛΛ) of the s- and p-shell ΛΛ hypernuclei are calculated variationally in the cluster model, and the multidimensional integrations are performed using Monte Carlo. A variety of phenomenological Λ-core potentials consistent with the Λ-core energies and a wide range of simulated s-state potentials are ...

  10. Modeling of microencapsulated polymer shell solidification

    International Nuclear Information System (INIS)

    Boone, T.; Cheung, L.; Nelson, D.; Soane, D.; Wilemski, G.; Cook, R.

    1995-01-01

    A finite element transport model has been developed and implemented to complement experimental efforts to improve the quality of ICF target shells produced via controlled-mass microencapsulation. The model provides an efficient means to explore the effect of processing variables on the dynamics of shell dimensions, concentricity, and phase behavior. Comparisons with experiments showed that the model successfully predicts the evolution of wall thinning and core/wall density differences. The model was used to efficiently explore and identify initial wall compositions and processing temperatures which resulted in concentricity improvements from 65 to 99%. The evolution of trace amounts of water entering into the shell wall was also tracked in the simulations. Comparisons with phase envelope estimations from modified UNIFAP calculations suggest that the water content trajectory approaches the two-phase region where vacuole formation via microphase separation may occur

  11. Projected shell model description for nuclear isomers

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Y. [Department of Physics, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China (China)

    2008-12-15

    The study of nuclear isomer properties is a current research focus. To describe isomers, we present a method based on the Projected Shell Model. Two kinds of isomers, K-isomers and shape isomers, are discussed. For the K-isomer treatment, K-mixing is properly implemented in the model. It is found however that in order to describe the strong K-violation more efficiently, it may be necessary to further introduce triaxiality into the shell model basis. To treat shape isomers, a scheme is outlined which allows mixing those configurations belonging to different shapes. (Author)

  12. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  13. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  14. Translational invariant shell model for Λ hypernuclei

    Directory of Open Access Journals (Sweden)

    Jolos R.V.

    2016-01-01

    We extend the shell model for Λ hypernuclei suggested by Gal and Millener by including 2ℏω excitations in the translation-invariant version to estimate yields of different hyperfragments from primary p-shell hypernuclei. We are inspired by the first successful experiment done at MAMI, which opens the way to studying baryon decay of hypernuclei. We use quantum numbers of the groups SU(4), [f], and SU(3), (λμ), to classify basis wave functions and calculate coefficients of fractional parentage.

  15. Electron transport in quantum dot solids: Monte Carlo simulations of the effects of shell filling, Coulomb repulsions, and site disorder

    Science.gov (United States)

    Chandler, R. E.; Houtepen, A. J.; Nelson, J.; Vanmaekelbergh, D.

    2007-02-01

    A Monte Carlo model is developed for the hopping conductance in arrays of quantum dots (QDs). Hopping is simulated using a continuous time random walk algorithm, incorporating all possible transitions, and using a nonresonant electron-hopping rate based on broadening of the energy levels through quantum fluctuations. Arrays of identical QDs give rise to electronic conductance that depends strongly upon level filling. In the case of low charging energy, metal insulator transitions are observed at electron occupation levels, ⟨n⟩ , that correspond to the complete filling of an S , P , or D shell. When the charging energy becomes comparable to the level broadening, additional minima in conductance appear at integer values of ⟨n⟩ , as a result of electron-electron repulsion. Disorder in QD diameters leads to disorder in the energy levels, resulting in washing out of the structure in the dependence of conductance on ⟨n⟩ and a net reduction in conductance. Simulation results are shown to be consistent with experimental measurements of conductance in arrays of zinc oxide and cadmium selenide QDs that have different degrees of size disorder, and the degree of size disorder is quantified. Simulations of the temperature dependence of conductance show that both Coulombic charging and size disorder can lead to activated behavior and that size disorder leads to conductance that is sublinear on an Arrhenius plot.
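
    A stripped-down 1-D analogue of such a hopping simulation: a single electron on a chain of dots with Gaussian site-energy disorder, nearest-neighbour hops sampled with a rejection-free continuous-time (Gillespie-type) algorithm, and a small bias field so that a drift velocity can stand in for the conductance. The Miller-Abrahams-type rate used here is a generic placeholder for the level-broadening-based rate of the paper, and all parameters are illustrative; the qualitative point reproduced is that site disorder suppresses transport.

```python
import numpy as np

rng = np.random.default_rng(5)

def drift_velocity(sigma, n_sites=200, kT=1.0, nu0=1.0, field=0.2, n_hops=100_000):
    """Rejection-free continuous-time random walk of one electron on a disordered chain."""
    eps = rng.normal(0.0, sigma, size=n_sites)      # Gaussian site-energy disorder
    pos, x, t = 0, 0.0, 0.0                         # site index, unwrapped position, time
    for _ in range(n_hops):
        rates = []
        for step in (+1, -1):
            dE = eps[(pos + step) % n_sites] - eps[pos] - field * step
            # Miller-Abrahams-type rate: downhill hops at nu0, uphill thermally activated
            rates.append(nu0 * np.exp(-dE / kT) if dE > 0 else nu0)
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)           # continuous-time waiting time
        step = +1 if rng.random() < rates[0] / total else -1
        pos = (pos + step) % n_sites
        x += step
    return x / t

for sigma in (0.0, 1.0, 2.0):
    print(f"disorder sigma = {sigma:3.1f}  ->  drift velocity ~ {drift_velocity(sigma):.3f}")
```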

  16. Symplectic symmetry in the nuclear shell model

    NARCIS (Netherlands)

    French, J.B.

    The nature of the general two-particle interaction which is compatible with symplectic symmetry in the jj coupling shell model is investigated. The essential result is that, to within an additive constant and an additive multiple of T², the interaction should have the form of a sum of scalar

  17. Monte Carlo modeling and meteor showers

    International Nuclear Information System (INIS)

    Kulikova, N.V.

    1987-01-01

    Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented

  18. Mapping the Two-Component Atomic Fermi Gas to the Nuclear Shell-Model

    DEFF Research Database (Denmark)

    Özen, C.; Zinner, Nikolaj Thomas

    2014-01-01

    of the external potential becomes important. A system of two-species fermionic cold atoms with an attractive zero-range interaction is analogous to a simple model of the nucleus in which neutrons and protons interact only through a residual pairing interaction. In this article, we discuss how the problem of a two-component atomic Fermi gas in a tight external trap can be mapped to the nuclear shell model so that readily available many-body techniques in nuclear physics, such as the Shell Model Monte Carlo (SMMC) method, can be directly applied to the study of these systems. We demonstrate an application of the SMMC method...

  19. Monte Carlo modelling of TRIGA research reactor

    International Nuclear Information System (INIS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-01-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucleaires de la Maamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  20. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  1. Shell model calculations for exotic nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.A. (Michigan State Univ., East Lansing, MI (USA)); Warburton, E.K. (Brookhaven National Lab., Upton, NY (USA)); Wildenthal, B.H. (New Mexico Univ., Albuquerque, NM (USA). Dept. of Physics and Astronomy)

    1990-02-01

    In this paper we review the progress of the shell-model approach to understanding the properties of light exotic nuclei (A < 40). By "shell model" we mean the consistent and large-scale application of the classic methods discussed, for example, in the book of de-Shalit and Talmi. Modern calculations incorporate as many of the important configurations as possible and make use of realistic effective interactions for the valence nucleons. Properties such as the nuclear densities depend on the mean-field potential, which is usually treated separately from the valence interaction. We will discuss results for radii which are based on a standard Hartree-Fock approach with Skyrme-type interactions.

  2. Monte Carlo simulation of magnetic properties of a ferrimagnetic nanoisland with hexagonal prismatic core-shell structure

    Science.gov (United States)

    Wang, Wei; Peng, Zhou; Lin, Shan-shan; Li, Qi; Lv, Dan; Yang, Sen

    2018-01-01

    Using Monte Carlo simulation, the magnetic and thermodynamic properties of a ferrimagnetic nanoisland with a hexagonal prismatic core-shell structure, consisting of a bilayer with a core of spin-5/2 atoms surrounded by a shell of spin-2 atoms in an external magnetic field, have been studied. We have investigated the effects of the single-ion anisotropies, the exchange coupling and the magnetic field on the magnetization, susceptibility, internal energy and blocking temperature of the nanoisland. A great number of interesting behaviors, such as various types of magnetization curves, have been obtained depending on different values of the physical parameters. The magnetic hysteresis loop behaviors are the main focus of the research. The system exhibits multiple hysteresis loop behaviors, such as double, triple and quadruple hysteresis loops for certain parameters.

  3. Exchange bias and asymmetric hysteresis loops from a microscopic model of core/shell nanoparticles

    International Nuclear Information System (INIS)

    Iglesias, Oscar; Batlle, Xavier; Labarta, Amilcar

    2007-01-01

    We present Monte Carlo simulations of hysteresis loops of a model of a magnetic nanoparticle with a ferromagnetic core and an antiferromagnetic shell with varying values of the core/shell interface exchange coupling which aim to clarify the microscopic origin of exchange bias observed experimentally. We have found loop shifts in the field direction as well as displacements along the magnetization axis that increase in magnitude when increasing the interfacial exchange coupling. Overlap functions computed from the spin configurations along the loops have been obtained to explain the origin and magnitude of these features microscopically

  4. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)

    In this paper, we propose a new method of forecasting with nonlinear time series model using Monte-Carlo Bootstrap method. This new method gives better result in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and Monte-Carlo method of forecasting using a ...

  5. Dynamical symmetries of the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Van Isacker, P

    2000-07-01

    The applications of spectrum generating algebras and of dynamical symmetries in the nuclear shell model are many and varied. They stretch back to Wigner's early work on the supermultiplet model and encompass important landmarks in our understanding of the structure of the atomic nucleus such as Racah's SU(2) pairing model and Elliott's SU(3) rotational model. One of the aims of this contribution has been to show the historical importance of the idea of dynamical symmetry in nuclear physics. Another has been to indicate that, in spite of being old, this idea continues to inspire developments that are at the forefront of today's research in nuclear physics. It has been argued in this contribution that the main driving features of nuclear structure can be represented algebraically but at the same time the limitations of the symmetry approach must be recognised. It should be clear that such approach can only account for gross properties and that any detailed description requires more involved numerical calculations of which we have seen many fine examples during this symposium. In this way symmetry techniques can be used as an appropriate starting point for detailed calculations. A noteworthy example of this approach is the pseudo-SU(3) model which starting from its initial symmetry Ansatz has grown into an adequate and powerful description of the nucleus in terms of a truncated shell model. (author)

  6. Note on off-shell relations in nonlinear sigma model

    International Nuclear Information System (INIS)

    Chen, Gang; Du, Yi-Jian; Li, Shuyi; Liu, Hanqing

    2015-01-01

    In this note, we investigate relations between tree-level off-shell currents in the nonlinear sigma model. Under Cayley parametrization, all odd-point currents vanish. We propose and prove a generalized U(1) identity for even-point currents. The off-shell U(1) identity given in http://dx.doi.org/10.1007/JHEP01(2014)061 is a special case of the generalized identity studied in this note. The on-shell limit of this identity is equivalent to the on-shell KK relation. Thus this relation provides the full off-shell correspondence of the tree-level KK relation in the nonlinear sigma model.

  7. Finite element model for nonlinear shells of revolution

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-01-01

    Nuclear material shipping containers have shells of revolution as basic structural components. Analytically modeling the response of these containers to severe accident impact conditions requires a nonlinear shell-of-revolution model that accounts for both geometric and material nonlinearities. Existing models are limited to large displacements, small rotations, and nonlinear materials. The paper presents a finite element model for a nonlinear shell of revolution that will account for large displacements, large strains, large rotations, and nonlinear materials

  8. The sine Gordon model perturbation theory and cluster Monte Carlo

    CERN Document Server

    Hasenbusch, M; Pinn, K

    1994-01-01

    We study the expansion of the surface thickness in the 2-dimensional lattice Sine Gordon model in powers of the fugacity z. Using the expansion to order z**2, we derive lines of constant physics in the rough phase. We describe and test a VMR cluster algorithm for the Monte Carlo simulation of the model. The algorithm shows nearly no critical slowing down. We apply the algorithm in a comparison of our perturbative results with Monte Carlo data.

  9. Type I Shell Galaxies as a Test of Gravity Models

    Energy Technology Data Exchange (ETDEWEB)

    Vakili, Hajar; Rahvar, Sohrab [Department of Physics, Sharif University of Technology, P.O. Box 11365-9161, Tehran (Iran, Islamic Republic of); Kroupa, Pavel, E-mail: vakili@physics.sharif.edu [Helmholtz-Institut für Strahlen-und Kernphysik, Universität Bonn, Nussallee 14-16, D-53115 Bonn (Germany)

    2017-10-10

    Shell galaxies are understood to form through the collision of a dwarf galaxy with an elliptical galaxy. Shell structures and kinematics have been noted to be independent tools to measure the gravitational potential of the shell galaxies. We compare theoretically the formation of shells in Type I shell galaxies in different gravity theories in this work because this is so far missing in the literature. We include Newtonian plus dark halo gravity, and two non-Newtonian gravity models, MOG and MOND, in identical initial systems. We investigate the effect of dynamical friction, which by slowing down the dwarf galaxy in the dark halo models limits the range of shell radii to low values. Under the same initial conditions, shells appear on a shorter timescale and over a smaller range of distances in the presence of dark matter than in the corresponding non-Newtonian gravity models. If galaxies are embedded in a dark matter halo, then the merging time may be too rapid to allow multi-generation shell formation as required by observed systems because of the large dynamical friction effect. Starting from the same initial state, the observation of small bright shells in the dark halo model should be accompanied by large faint ones, while for the case of MOG, the next shell generation patterns iterate with a specific time delay. The first shell generation pattern shows a degeneracy with the age of the shells and in different theories, but the relative distance of the shells and the shell expansion velocity can break this degeneracy.

  10. Transition sum rules in the shell model

    Science.gov (United States)

    Lu, Yi; Johnson, Calvin W.

    2018-03-01

    Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
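
    In symbols, for a transition operator Ô acting on an initial state |i⟩ of energy E_i, the standard definitions behind the abstract read (generic notation, not copied from the paper):

```latex
% Non-energy-weighted and energy-weighted sum rules, and the centroid:
\[
  S_0(i)=\sum_f |\langle f|\hat O|i\rangle|^2=\langle i|\hat O^\dagger\hat O|i\rangle,
  \qquad
  S_1(i)=\sum_f (E_f-E_i)\,|\langle f|\hat O|i\rangle|^2
        =\langle i|\hat O^\dagger[\hat H,\hat O]|i\rangle ,
\]
\[
  \bar E=\frac{S_1(i)}{S_0(i)},
  \qquad
  \text{and for Hermitian }\hat O:\quad
  S_1(i)=\tfrac{1}{2}\,\langle i|[\hat O^\dagger,[\hat H,\hat O]]|i\rangle .
\]
```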

  11. Shell model calculations for the mass 18 nuclei in the sd-shell

    International Nuclear Information System (INIS)

    Hamoudi, A.

    1997-01-01

    A simple effective nucleon-nucleon interaction for shell model calculations in the sd-shell is derived from the Reid soft-core potential folded with two-body correlation functions which take account of the strong short-range repulsion and the large tensor component in the Reid force. Calculations of binding energies and low-lying spectra are performed for the mass A=18, T=0 and 1 nuclei using this interaction. The results of these shell model calculations show reasonable agreement with experiment

  12. Fate of the open-shell singlet ground state in the experimentally accessible acenes: A quantum Monte Carlo study

    Science.gov (United States)

    Dupuy, Nicolas; Casula, Michele

    2018-04-01

    By means of the Jastrow correlated antisymmetrized geminal power (JAGP) wave function and quantum Monte Carlo (QMC) methods, we study the ground state properties of the oligoacene series, up to the nonacene. The JAGP is the accurate variational realization of the resonating-valence-bond (RVB) ansatz proposed by Pauling and Wheland to describe aromatic compounds. We show that the long-ranged RVB correlations built in the acenes' ground state are detrimental for the occurrence of open-shell diradical or polyradical instabilities, previously found by lower-level theories. We substantiate our outcome by a direct comparison with another wave function, tailored to be an open-shell singlet (OSS) for long-enough acenes. By comparing on the same footing the RVB and OSS wave functions, both optimized at a variational QMC level and further projected by the lattice regularized diffusion Monte Carlo method, we prove that the RVB wave function has always a lower variational energy and better nodes than the OSS, for all molecular species considered in this work. The entangled multi-reference RVB state acts against the electron edge localization implied by the OSS wave function and weakens the diradical tendency for higher oligoacenes. These properties are reflected by several descriptors, including wave function parameters, bond length alternation, aromatic indices, and spin-spin correlation functions. In this context, we propose a new aromatic index estimator suitable for geminal wave functions. For the largest acenes taken into account, the long-range decay of the charge-charge correlation functions is compatible with a quasi-metallic behavior.

  13. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  14. Electron transport in quantum dot solids: Monte Carlo simulations of the effects of shell filling, Coulomb repulsions, and site disorder

    NARCIS (Netherlands)

    Chandler, R.E.; Houtepen, A.J.; Nelson, J.; Vanmaekelbergh, D.A.M.

    2007-01-01

    A Monte Carlo model is developed for the hopping conductance in arrays of quantum dots (QDs). Hopping is simulated using a continuous time random walk algorithm, incorporating all possible transitions, and using a nonresonant electron-hopping rate based on broadening of the energy levels through

  15. forecasting with nonlinear time series model: a monte-carlo ...

    African Journals Online (AJOL)

    with nonlinear time series model by comparing the RMSE with the traditional bootstrap and Monte-Carlo method of forecasting. We use the logistic smooth transition autoregressive (LSTAR) model as a case study. We first consider a linear model called the AR(p) model of order p which satisfies the following linear ...
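
    A minimal sketch of the kind of comparison described above, using an AR(1) process in place of the LSTAR model: the fitted model is iterated h steps ahead either with Gaussian Monte-Carlo innovations or with bootstrap-resampled residuals, and the two forecasts are scored by out-of-sample RMSE over many replications. All sample sizes and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_ar1(n, phi=0.6, sigma=1.0):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t-1] + rng.normal(0.0, sigma)
    return y

def fit_ar1(y):
    """OLS estimate of phi and the fitted residuals."""
    phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    return phi, y[1:] - phi * y[:-1]

def forecast(y_last, phi, resid, h, n_paths, bootstrap):
    """h-step-ahead point forecast by averaging simulated future paths."""
    paths = np.full(n_paths, y_last)
    for _ in range(h):
        if bootstrap:
            shocks = rng.choice(resid, size=n_paths)                  # resample residuals
        else:
            shocks = rng.normal(0.0, resid.std(ddof=1), n_paths)      # Gaussian Monte Carlo
        paths = phi * paths + shocks
    return paths.mean()

h, n_rep, err_mc, err_bs = 5, 300, [], []
for _ in range(n_rep):
    y = simulate_ar1(150 + h)
    train, future = y[:150], y[150 + h - 1]
    phi, resid = fit_ar1(train)
    err_mc.append(forecast(train[-1], phi, resid, h, 500, bootstrap=False) - future)
    err_bs.append(forecast(train[-1], phi, resid, h, 500, bootstrap=True) - future)

print("Monte-Carlo forecast RMSE:", np.sqrt(np.mean(np.square(err_mc))))
print("bootstrap forecast RMSE  :", np.sqrt(np.mean(np.square(err_bs))))
```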

  16. Aspects of perturbative QCD in Monte Carlo shower models

    International Nuclear Information System (INIS)

    Gottschalk, T.D.

    1986-01-01

    The perturbative QCD content of Monte Carlo models for high energy hadron-hadron scattering is examined. Particular attention is given to the recently developed backwards evolution formalism for initial state parton showers, and the merging of parton shower evolution with hard scattering cross sections. Shower estimates of K-factors are discussed, and a simple scheme is presented for incorporating 2 → 2 QCD cross sections into shower model calculations without double counting. Additional issues in the development of hard scattering Monte Carlo models are summarized. 69 references, 20 figures

  17. Shell model studies in the N = 54 isotones 99Rh, 100Pd

    International Nuclear Information System (INIS)

    Ghugre, S.S.; Sarkar, S.; Chintalapudi, S.N.

    1996-01-01

    The shell model is used to investigate the observed level sequences in 99Rh and 100Pd within the spherical shell model framework. Shell model calculations have been performed using the code OXBASH.

  18. Monte Carlo simulation models of breeding-population advancement.

    Science.gov (United States)

    J.N. King; G.R. Johnson

    1993-01-01

    Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...

  19. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    2014-01-01

    Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate dens...
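
    As a rough illustration of the kind of lattice algorithm described above (not the published kMC sintering code, and with made-up parameters), the following Python sketch evolves a digitized grain/pore microstructure with Metropolis-accepted pore-grain exchange moves; the published model also includes additional event types, such as grain growth and pore annihilation, which are what drive the densification discussed in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Digitized 2D microstructure: 0 = pore site, positive integers = grain IDs.
N = 64
state = rng.integers(1, 6, size=(N, N))            # random initial grains
state[rng.random((N, N)) < 0.2] = 0                # ~20% porosity

def site_energy(s, i, j):
    """One energy unit per dissimilar nearest-neighbor bond (periodic boundaries)."""
    return sum(s[i, j] != s[(i + di) % N, (j + dj) % N]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def total_energy(s):
    return sum(site_energy(s, i, j) for i in range(N) for j in range(N)) // 2

def mc_sweep(s, kT=0.7):
    """One Monte Carlo sweep of pore-grain exchange (surface-diffusion-like) moves."""
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        k, l = (i + di) % N, (j + dj) % N
        if (s[i, j] == 0) == (s[k, l] == 0):
            continue                                # need one pore and one grain site
        e_old = site_energy(s, i, j) + site_energy(s, k, l)
        s[i, j], s[k, l] = s[k, l], s[i, j]
        e_new = site_energy(s, i, j) + site_energy(s, k, l)
        if e_new > e_old and rng.random() >= np.exp(-(e_new - e_old) / kT):
            s[i, j], s[k, l] = s[k, l], s[i, j]     # reject: undo the swap

print("interface energy before:", total_energy(state))
for _ in range(50):
    mc_sweep(state)
print("interface energy after 50 sweeps:", total_energy(state))
```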

  20. Ab initio no core shell model

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Bruce R. [Univ. of Arizona, Tucson, AZ (United States); Navrátil, Petr [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vary, James P. [Ames Lab. and Iowa State Univ., Ames, IA (United States)

    2012-11-17

    A long-standing goal of nuclear theory is to determine the properties of atomic nuclei based on the fundamental interactions among the protons and neutrons (i.e., nucleons). By adopting nucleon-nucleon (NN), three-nucleon (NNN) and higher-nucleon interactions determined from either meson-exchange theory or QCD, with couplings fixed by few-body systems, we preserve the predictive power of nuclear theory. This foundation enables tests of nature's fundamental symmetries and offers new vistas for the full range of complex nuclear phenomena. Basic questions that drive our quest for a microscopic predictive theory of nuclear phenomena include: (1) What controls nuclear saturation; (2) How the nuclear shell model emerges from the underlying theory; (3) What are the properties of nuclei with extreme neutron/proton ratios; (4) Can we predict useful cross sections that cannot be measured; (5) Can nuclei provide precision tests of the fundamental laws of nature; and (6) Under what conditions do we need QCD to describe nuclear structure, among others. Along with other ab initio nuclear theory groups, we have pursued these questions with meson-theoretical NN interactions, such as CD-Bonn and Argonne V18, that were tuned to provide high-quality descriptions of the NN scattering phase shifts and deuteron properties. We then add meson-theoretic NNN interactions such as the Tucson-Melbourne or Urbana IX interactions. More recently, we have adopted realistic NN and NNN interactions with ties to QCD. Chiral perturbation theory within effective field theory (χEFT) provides us with a promising bridge between QCD and hadronic systems. In this approach one works consistently with systems of increasing nucleon number and makes use of the explicit and spontaneous breaking of chiral symmetry to expand the strong interaction in terms of a dimensionless constant, the ratio of a generic small momentum divided by the chiral symmetry breaking scale taken to be about 1 GeV/c. The

  1. Shell model description of band structure in 48Cr

    International Nuclear Information System (INIS)

    Vargas, Carlos E.; Velazquez, Victor M.

    2007-01-01

    The band structure for normal and abnormal parity bands in 48Cr are described using the m-scheme shell model. In addition to full fp-shell, two particles in the 1d3/2 orbital are allowed in order to describe intruder states. The interaction includes fp-, sd- and mixed matrix elements

  2. Statistical properties of the nuclear shell-model Hamiltonian

    International Nuclear Information System (INIS)

    Dias, H.; Hussein, M.S.; Oliveira, N.A. de

    1986-01-01

    The statistical properties of realistic nuclear shell-model Hamiltonians are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitude is calculated and compared with the Porter-Thomas distribution. Relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt

  3. Electron transport in quantum dot solids: Monte Carlo simulations of the effects of shell filling, Coulomb repulsions, and site disorder

    OpenAIRE

    Chandler, R.E.; Houtepen, A.J.; Nelson, J.; Vanmaekelbergh, D.A.M.

    2007-01-01

    A Monte Carlo model is developed for the hopping conductance in arrays of quantum dots (QDs). Hopping is simulated using a continuous time random walk algorithm, incorporating all possible transitions, and using a nonresonant electron-hopping rate based on broadening of the energy levels through quantum fluctuations. Arrays of identical QDs give rise to electronic conductance that depends strongly upon level filling. In the case of low charging energy, metal insulator transitions are observed...
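
    To make the continuous-time random-walk idea concrete, here is a heavily simplified Python sketch: a 1D chain of dots with Gaussian site disorder, nearest-neighbor hops only, and a generic thermally activated rate with a single charging-energy penalty. The record's model incorporates all possible transitions and a nonresonant rate based on quantum-fluctuation level broadening, so every parameter and the rate expression below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D array of quantum dots with Gaussian site disorder and on-site charging energy.
n_dots, n_electrons = 50, 10
site_energy = rng.normal(0.0, 0.05, n_dots)      # eV, site disorder
charging_energy = 0.10                            # eV per extra electron on a dot
kT = 0.025                                        # eV
nu0 = 1e12                                        # attempt frequency, 1/s

occupation = np.zeros(n_dots, dtype=int)
occupation[rng.choice(n_dots, n_electrons, replace=False)] = 1

def hop_rate(i, j):
    """Illustrative thermally activated rate for one electron hopping i -> j."""
    dE = (site_energy[j] + charging_energy * occupation[j]) - \
         (site_energy[i] + charging_energy * (occupation[i] - 1))
    return nu0 * (np.exp(-dE / kT) if dE > 0 else 1.0)

t, n_hops, displacement = 0.0, 0, 0
while n_hops < 20000:
    # enumerate all nearest-neighbor hops out of occupied dots
    moves, rates = [], []
    for i in np.flatnonzero(occupation):
        for j in (i - 1, i + 1):
            if 0 <= j < n_dots and occupation[j] == 0:
                moves.append((i, j))
                rates.append(hop_rate(i, j))
    rates = np.array(rates)
    total = rates.sum()
    t += rng.exponential(1.0 / total)             # continuous-time (Gillespie) clock
    i, j = moves[rng.choice(len(moves), p=rates / total)]
    occupation[i], occupation[j] = 0, 1
    displacement += j - i
    n_hops += 1

print(f"net displacement {displacement} sites in {t:.3e} s")
```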

  4. Shell-model representation to describe α emission

    Science.gov (United States)

    Delion, D. S.; Liotta, R. J.

    2013-04-01

    It is shown that the standard shell-model representation is inadequate to explain cluster decay processes due to a deficient asymptotic behavior of the corresponding single-particle wave functions. A new representation is proposed which is derived from a mean field consisting of the standard Woods-Saxon plus spin-orbit potential of the shell model, with an additional attractive pocket potential of a Gaussian form localized on the nuclear surface. The eigenvectors of this new mean field provide a representation which retains all the benefits of the standard shell model while at the same time reproducing well the experimental absolute α-decay widths from heavy nuclei.

  5. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    International Nuclear Information System (INIS)

    Stotler, D.P.

    2005-01-01

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model

  6. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  7. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    van der Stoep, A.W.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant. Finance,

  8. Modeling plate shell structures using pyFormex

    DEFF Research Database (Denmark)

    Bagger, Anne; Verhegghe, Benedict; Hertz, Kristian Dahl

    2009-01-01

    (plate shells and triangulated lattice shells) may not differ in complexity regarding the topology, but when it comes to the practical generation of the geometry, e.g. in CAD, the plate shell is far more troublesome to handle than the triangulated geometry. The free software tool “pyFormex”, developed...... element analysis software Abaqus as a Python script, which translates the information to an Abaqus CAE-model. In pyFormex the model has been prepared for applying the meshing in Abaqus, by allocation of edge seeds, and by defining geometry sets for easy handling....

  9. Influence of time dependent longitudinal magnetic fields on the cooling process, exchange bias and magnetization reversal mechanism in FM core/AFM shell nanoparticles: a Monte Carlo study.

    Science.gov (United States)

    Yüksel, Yusuf; Akıncı, Ümit

    2016-12-07

    Using Monte Carlo simulations, we have investigated the dynamic phase transition properties of magnetic nanoparticles with ferromagnetic core coated by an antiferromagnetic shell structure. Effects of field amplitude and frequency on the thermal dependence of magnetizations, magnetization reversal mechanisms during hysteresis cycles, as well as on the exchange bias and coercive fields have been examined, and the feasibility of applying dynamic magnetic fields on the particle have been discussed for technological and biomedical purposes.
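
    The following Python sketch conveys the flavor of such a simulation: a 2D Ising-type particle with a ferromagnetic core, an antiferromagnetic shell and an interface coupling, swept by Metropolis updates while a sinusoidal field completes one cycle. The couplings, sizes, temperature and field are invented for illustration and are not the record's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 2D core/shell nanoparticle on a square lattice of Ising spins.
L = 21
c = L // 2
y, x = np.mgrid[0:L, 0:L]
r = np.hypot(x - c, y - c)
core = r <= 6                      # ferromagnetic core
shell = (r > 6) & (r <= 10)        # antiferromagnetic shell
inside = core | shell

J_core, J_shell, J_int = 1.0, -0.5, -0.3   # FM, AFM, interface couplings (illustrative)
spins = np.where(inside, rng.choice([-1, 1], size=(L, L)), 0)

def coupling(p, q):
    a, b = core[p], core[q]
    return J_core if (a and b) else J_shell if (not a and not b) else J_int

def local_field(s, p, h):
    i, j = p
    f = h
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (i + di, j + dj)
        if 0 <= q[0] < L and 0 <= q[1] < L and inside[q]:
            f += coupling(p, q) * s[q]
    return f

def sweep(s, h, T):
    """One Metropolis sweep over all particle sites at field h and temperature T."""
    sites = np.argwhere(inside)
    for i, j in sites[rng.permutation(len(sites))]:
        dE = 2.0 * s[i, j] * local_field(s, (i, j), h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

# One hysteresis cycle under a sinusoidal field (illustrative protocol).
T, h0, steps = 0.5, 2.0, 200
loop = []
for n in range(steps):
    h = h0 * np.cos(2 * np.pi * n / steps)
    sweep(spins, h, T)
    loop.append((h, spins[core].mean()))
print("core magnetization at end of cycle:", loop[-1][1])
```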

  10. The continuum shell-model neutron states of Pb

    Indian Academy of Sciences (India)

    even magic core nucleus 208Pb. For the discrete low-lying excited states, the depletion of the shell-model ... nucleon moves. The matrix elements of K(r) have been kept fixed at 50 MeV and this has been discussed in the following section. The shell-model neutron state |j2⟩ has been coupled with the vibrational |λπ⟩ spin state.

  11. A finite element model for nonlinear shells of revolution

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-01-01

    A shell-of-revolution model was developed to analyze impact problems associated with the safety analysis of nuclear material shipping containers. The nonlinear shell theory presented by Eric Reissner in 1972 was used to develop our model. Reissner's approach includes transverse shear deformation and moments turning about the middle surface normal. With these features, this approach is valid for both thin and thick shells. His theory is formulated in terms of strain and stress resultants that refer to the undeformed geometry. This nonlinear shell model is developed using the virtual work principle associated with Reissner's equilibrium equations. First, the virtual work principle is modified for incremental loading; then it is linearized by assuming that the nonlinear portions of the strains are known. By iteration, equilibrium is then approximated for each increment. A benefit of this approach is that this iteration process makes it possible to use nonlinear material properties. (orig.)

  12. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear, multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from geostatistics. The resulting models constitute samples of the posterior distribution.

  13. Nuclear reaction matrix calculations with a shell-model Q

    International Nuclear Information System (INIS)

    Barrett, B.R.; McCarthy, R.J.

    1976-01-01

    The Barrett-Hewitt-McCarthy (BHM) method for calculating the nuclear reaction matrix G is used to compute shell-model matrix elements for A = 18 nuclei. The energy denominators in intermediate states containing one unoccupied single-particle (s.p.) state and one valence s.p. state are treated correctly, in contrast to previous calculations. These corrections are not important for valence-shell matrix elements but are found to lead to relatively large changes in cross-shell matrix elements involved in core-polarization diagrams. (orig.) [de

  14. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
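
    The HJM setting itself is infinite-dimensional, but the basic Monte Carlo-Euler machinery and the split between time-discretization bias and statistical sampling error can be illustrated on a scalar SDE with a known weak solution. The Python sketch below (plain Euler-Maruyama on geometric Brownian motion, invented parameters) only shows that split; it is not the error estimators derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Weak approximation of E[g(X_T)] for dX = mu*X dt + sigma*X dW (GBM), with g(x) = x,
# via Monte Carlo + Euler-Maruyama.  Exact value: X0 * exp(mu * T).
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0

def mc_euler(n_paths, n_steps):
    dt = T / n_steps
    X = np.full(n_paths, X0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + mu * X * dt + sigma * X * dW
    est = X.mean()
    stat_err = 1.96 * X.std(ddof=1) / np.sqrt(n_paths)   # statistical (sampling) error
    return est, stat_err

exact = X0 * np.exp(mu * T)
for n_steps in (2, 8, 32, 128):
    est, stat = mc_euler(200_000, n_steps)
    print(f"steps={n_steps:4d}  estimate={est:.5f}  bias≈{est - exact:+.5f}  ±{stat:.5f}")
```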

  15. Monte Carlo Study of the 3D Thirring Model

    OpenAIRE

    Hands, Simon

    1997-01-01

    I review three different non-perturbative approaches to the three dimensional Thirring model: the 1/N_f expansion, Schwinger-Dyson equations, and Monte Carlo simulation. Simulation results are presented to support the existence of a non-perturbative fixed point at a chiral symmetry breaking phase transition for N_f=2 and 4, but not for N_f=6. Spectrum calculations for N_f=2 reveal conventional level ordering near the transition.

  16. Monte Carlo Modelling of Mammograms : Development and Validation

    International Nuclear Information System (INIS)

    Spyrou, G.; Panayiotakis, G.; Bakas, A.; Tzanakos, G.

    1998-01-01

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors)

  17. Target dose conversion modeling from pencil beam (PB) to Monte Carlo (MC) for lung SBRT

    International Nuclear Information System (INIS)

    Zheng, Dandan; Zhu, Xiaofeng; Zhang, Qinghui; Liang, Xiaoying; Zhen, Weining; Lin, Chi; Verma, Vivek; Wang, Shuo; Wahl, Andrew; Lei, Yu; Zhou, Sumin; Zhang, Chi

    2016-01-01

    A challenge preventing routine clinical implementation of Monte Carlo (MC)-based lung SBRT is the difficulty of reinterpreting historical outcome data calculated with inaccurate dose algorithms, because the target dose was found to decrease to varying degrees when recalculated with MC. The large variability was previously found to be affected by factors such as tumour size, location, and lung density, usually through sub-group comparisons. We hereby conducted a pilot study to systematically and quantitatively analyze these patient factors and explore accurate target dose conversion models, so that large-scale historical outcome data can be correlated with more accurate MC dose without recalculation. Twenty-one patients that underwent SBRT for early-stage lung cancer were replanned with 6MV 360° dynamic conformal arcs using pencil-beam (PB) and recalculated with MC. The percent D95 difference (PB-MC) was calculated for the PTV and GTV. Using single linear regression, this difference was correlated with the following quantitative patient indices: maximum tumour diameter (MaxD); PTV and GTV volumes; minimum distance from tumour to soft tissue (dmin); and mean density and standard deviation of the PTV, GTV, PTV margin, lung, and 2 mm, 15 mm, 50 mm shells outside the PTV. Multiple linear regression and artificial neural network (ANN) were employed to model multiple factors and improve dose conversion accuracy. Single linear regression with PTV D95 deficiency identified the strongest correlation on mean-density (location) indices, weaker on lung density, and the weakest on size indices, with the following R 2 values in decreasing orders: shell2mm (0.71), PTV (0.68), PTV margin (0.65), shell15mm (0.62), shell50mm (0.49), lung (0.40), dmin (0.22), GTV (0.19), MaxD (0.17), PTV volume (0.15), and GTV volume (0.08). A multiple linear regression model yielded the significance factor of 3.0E-7 using two independent features: mean density of shell2mm (P = 1.6E-7) and PTV volume
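
    The regression machinery behind such a dose conversion model is simple to reproduce. The Python sketch below fits single and multiple linear regressions of a simulated PB-MC D95 difference on two stand-in predictors (a 2 mm-shell mean density and PTV volume); the data, coefficients and R² values are synthetic placeholders rather than the study's 21-patient results, and the ANN step is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in data: 21 "patients" with two illustrative predictors and a
# simulated PB-MC D95 difference.  Real values would come from the planning system.
n = 21
shell2mm_density = rng.normal(0.35, 0.10, n)        # g/cm^3, mean density of 2 mm shell
ptv_volume = rng.normal(25.0, 10.0, n)              # cm^3
d95_diff = 20.0 - 30.0 * shell2mm_density + 0.1 * ptv_volume + rng.normal(0, 1.0, n)

def linear_fit(X, y):
    """Least-squares fit y ~ X (with intercept); returns coefficients and R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    return coef, r2

# Single linear regression on each index, then a two-feature multiple regression.
for name, x in [("shell2mm density", shell2mm_density), ("PTV volume", ptv_volume)]:
    _, r2 = linear_fit(x.reshape(-1, 1), d95_diff)
    print(f"single regression on {name:18s}: R^2 = {r2:.2f}")

coef, r2 = linear_fit(np.column_stack([shell2mm_density, ptv_volume]), d95_diff)
print(f"multiple regression: intercept={coef[0]:.2f}, coefs={coef[1:].round(2)}, R^2={r2:.2f}")
```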

  18. The continuum shell-model neutron states of Pb

    Indian Academy of Sciences (India)

    model states with the collective vibrational states from giant resonances. The particle-vibration coupling model can be applied to understand the spreading pattern of the shell-model states lying in continuum region. The single-particle states are ...

  19. Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models

    Science.gov (United States)

    Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun

    2018-03-01

    The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.

  20. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Full Text Available Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided from using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, the neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transportation and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.

  1. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)

  2. Monte Carlo modeling of human tooth optical coherence tomography imaging

    Science.gov (United States)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-07-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth.

  3. Mayer–Jensen Shell Model and Magic Numbers

    Indian Academy of Sciences (India)

    Mayer-Jensen Shell Model and Magic Numbers - An Independent Nucleon Model with Spin-Orbit Coupling. R Velusamy. General Article, Resonance – Journal of Science Education, Volume 12, Issue 12, December 2007, pp. 12-24.

  4. Deformed shell model studies of spectroscopic properties of Zn and ...

    Indian Academy of Sciences (India)

    2014-04-05

    Apr 5, 2014 ... the generating coordinate method framework (GCM+PNAMP), (v) projected Hartree– ... shall first study its spectroscopic properties using deformed shell model (DSM) to test the effectiveness of the model for ... Section. 3 gives DSM results for 64Zn for spectroscopic properties and then the results for both 2ν.

  5. Decaying and kicked turbulence in a shell model

    DEFF Research Database (Denmark)

    Hooghoudt, Jan Otto; Lohse, Detlef; Toschi, Federico

    2001-01-01

    Decaying and periodically kicked turbulence are analyzed within the Gledzer–Ohkitani–Yamada shell model, to allow for sufficiently large scaling regimes. Energy is transferred towards the small scales in intermittent bursts. Nevertheless, mean field arguments are sufficient to account for the ens...

  6. Major shell centroids in the symplectic collective model

    International Nuclear Information System (INIS)

    Draayer, J.P.; Rosensteel, G.; Tulane Univ., New Orleans, LA

    1983-01-01

    Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating a requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of 20Ne. (orig.)

  7. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear, multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from geostatistics. The geostatistical algorithm learns the multiple-point statistics from prototype models, then generates proposal models which are tested by a Metropolis sampler. The solution of the inverse problem is finally represented by a collection of reservoir models in terms of facies and porosity, which constitute samples of the posterior distribution.

  8. The alpha-particle and shell models of the nucleus

    International Nuclear Information System (INIS)

    Perring, J.K.; Skyrme, T.H.R.

    1994-01-01

    It is shown that it is possible to write down α-particle wave functions for the ground states of 8 Be, 12 C and 16 O, which become, when antisymmetrized, identical with shell-model wave functions. The α-particle functions are used to obtain potentials which can then be used to derive wave functions and energies of excited states. Most of the low-lying states of 16 O are obtained in this way, qualitative agreement with experiment being found. The shell structure of the 0 + level at 6·06 MeV is analyzed, and is found to consist largely of single-particle excitations. The lifetime for pair-production is calculated, and found to be comparable with the experimental value. The validity of the method is discussed, and comparison made with shell-model calculations. (author). 5 refs, 1 tab

  9. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
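
    A minimal design-based Monte Carlo randomization test along these lines fits in a short Python sketch: it uses permuted-block randomization, takes the difference in mean residuals from a covariate-only linear fit as the test statistic, and re-randomizes with the same design to build the reference distribution. The data and block size are invented, and ordinary least-squares residuals stand in for the generalized-linear-model and martingale residuals discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic two-arm trial: outcome depends on a covariate plus a small treatment effect.
n, block = 48, 4
covariate = rng.normal(size=n)

def permuted_blocks(n, block, rng):
    """Permuted-block randomization: each block of size `block` has equal allocation."""
    seq = []
    for _ in range(n // block):
        b = np.array([0] * (block // 2) + [1] * (block // 2))
        rng.shuffle(b)
        seq.extend(b)
    return np.array(seq)

treatment = permuted_blocks(n, block, rng)
outcome = 1.5 * covariate + 0.6 * treatment + rng.normal(size=n)

# Residuals from a model that ignores treatment (covariate-only fit).
A = np.column_stack([np.ones(n), covariate])
resid = outcome - A @ np.linalg.lstsq(A, outcome, rcond=None)[0]

def statistic(assign):
    return resid[assign == 1].mean() - resid[assign == 0].mean()

obs = statistic(treatment)
# Monte Carlo re-randomization: regenerate sequences with the *same* design.
null = np.array([statistic(permuted_blocks(n, block, rng)) for _ in range(5000)])
p_value = np.mean(np.abs(null) >= abs(obs))
print(f"observed statistic = {obs:.3f}, Monte Carlo p-value = {p_value:.3f}")
```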

  10. Testing refined shell-model interactions in the sd shell: Coulomb excitation of Na26

    CERN Document Server

    Siebeck, B; Blazhev, A; Reiter, P; Altenkirch, R; Bauer, C; Butler, P A; De Witte, H; Elseviers, J; Gaffney, L P; Hess, H; Huyse, M; Kröll, T; Lutter, R; Pakarinen, J; Pietralla, N; Radeck, F; Scheck, M; Schneiders, D; Sotty, C; Van Duppen, P; Vermeulen, M; Voulot, D; Warr, N; Wenander, F

    2015-01-01

    Background: Shell-model calculations crucially depend on the residual interaction used to approximate the nucleon-nucleon interaction. Recent improvements to the empirical universal sd interaction (USD) describing nuclei within the sd shell yielded two new interactions—USDA and USDB—causing changes in the theoretical description of these nuclei. Purpose: Transition matrix elements between excited states provide an excellent probe to examine the underlying shell structure. These observables provide a stringent test for the newly derived interactions. The nucleus Na26 with 7 valence neutrons and 3 valence protons outside the doubly-magic 16O core is used as a test case. Method: A radioactive beam experiment with Na26 (T1/2 = 1.07 s) was performed at the REX-ISOLDE facility (CERN) using Coulomb excitation at safe energies below the Coulomb barrier. Scattered particles were detected with an annular Si detector in coincidence with γ rays observed by the segmented MINIBALL array. Coulomb excitation cross sections...

  11. Kinematic arguments against single relativistic shell models for GRBs

    Science.gov (United States)

    Fenimore, Edward E.; Ramirez, E.; Sumner, M. C.

    1997-01-01

    Two main types of models have been suggested to explain the long durations and multiple peaks of Gamma Ray Bursts (GRBs). In one, there is a very quick release of energy at a central site resulting in a single relativistic shell that produces peaks in the time history through its interactions with the ambient material. In the other, the central site sporadically releases energy over hundreds of seconds forming a peak with each burst of energy. The authors show that the average envelope of emission and the presence of gaps in GRBs are inconsistent with a single relativistic shell. They estimate that the maximum fraction of a single shell that can produce gamma-rays in a GRB with multiple peaks is 10^-3, implying that single relativistic shells require 10^3 times more energy than previously thought. They conclude that either the central site of a GRB must produce ~10^51 erg s^-1 for hundreds of seconds, or the relativistic shell must have structure on scales of the order of √ε Γ^-1, where Γ is the bulk Lorentz factor (~10^2 to 10^3) and ε is the efficiency.

  12. Shell model test of the Porter-Thomas distribution

    International Nuclear Information System (INIS)

    Grimes, S.M.; Bloom, S.D.

    1981-01-01

    Eigenvectors have been calculated for the A=18, 19, 20, 21, and 26 nuclei in an sd shell basis. The decomposition of these states into their shell model components shows, in agreement with other recent work, that this distribution is not a single Gaussian. We find that the largest amplitudes are distributed approximately in a Gaussian fashion. Thus, many experimental measurements should be consistent with the Porter-Thomas predictions. We argue that the non-Gaussian form of the complete distribution can be simply related to the structure of the Hamiltonian

  13. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays

    2016-03-01

    Full Text Available Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov-Chain Monte-Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters but additionally they provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well-suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
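
    As a toy illustration of the reweight-resample-move cycle (and of the marginal-likelihood estimate that comes with it), the Python sketch below runs a plain tempered SMC sampler on a bimodal one-dimensional target with a single random-walk Metropolis move per stage. It is not the TNT algorithm of the paper; the target, the tempering ladder and the move kernel are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy target: prior N(0, 5^2) times a bimodal likelihood (mixture of two Gaussians).
def log_prior(x):
    return -0.5 * (x / 5.0) ** 2

def log_like(x):
    return np.logaddexp(-0.5 * ((x + 2) / 0.5) ** 2, -0.5 * ((x - 2) / 0.5) ** 2)

n_part, temps = 2000, np.linspace(0.0, 1.0, 21)
x = rng.normal(0.0, 5.0, n_part)                 # particles drawn from the prior
logw = np.zeros(n_part)
log_evidence = 0.0

for t_prev, t in zip(temps[:-1], temps[1:]):
    # reweight: bridge from temperature t_prev to t, accumulating the evidence estimate
    inc = (t - t_prev) * log_like(x)
    log_evidence += np.log(np.mean(np.exp(logw + inc))) - np.log(np.mean(np.exp(logw)))
    logw += inc
    # resample (multinomial) when the effective sample size degenerates
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n_part / 2:
        x = x[rng.choice(n_part, n_part, p=w)]
        logw = np.zeros(n_part)
    # move: one random-walk Metropolis step targeting the tempered posterior
    prop = x + rng.normal(0.0, 0.5, n_part)
    log_acc = (log_prior(prop) + t * log_like(prop)) - (log_prior(x) + t * log_like(x))
    accept = np.log(rng.random(n_part)) < log_acc
    x = np.where(accept, prop, x)

print(f"log marginal likelihood ≈ {log_evidence:.3f}; mass in the two modes: "
      f"{(x < 0).mean():.2f} vs {(x > 0).mean():.2f}")
```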

  14. Quantum Monte Carlo study of the Rabi-Hubbard model

    Science.gov (United States)

    Flottat, Thibaut; Hébert, Frédéric; Rousseau, Valéry G.; Batrouni, George Ghassan

    2016-10-01

    We study, using quantum Monte Carlo (QMC) simulations, the ground state properties of a one dimensional Rabi-Hubbard model. The model consists of a lattice of Rabi systems coupled by a photon hopping term between near neighbor sites. For large enough coupling between photons and atoms, the phase diagram generally consists of only two phases: a coherent phase and a compressible incoherent one separated by a quantum phase transition (QPT). We show that, as one goes deeper in the coherent phase, the system becomes unstable exhibiting a divergence of the number of photons. The Mott phases which are present in the Jaynes-Cummings-Hubbard model are not observed in these cases due to the presence of non-negligible counter-rotating terms. We show that these two models become equivalent only when the detuning is negative and large enough, or if the counter-rotating terms are small enough

  15. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  16. Bursts and shocks in a continuum shell model

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Bohr, Tomas; Jensen, M.H.

    1998-01-01

    We study a burst event, i.e., the evolution of an initial condition having support only in a finite interval of k-space, in the continuum shell model due to Parisi. We show that the continuum equation without forcing or dissipation can be explicitly written in characteristic form and that the right...

  17. Ab initio shell model for A=10 nuclei

    Czech Academy of Sciences Publication Activity Database

    Caurier, E.; Navrátil, Petr; Ormand, W. E.; Vary, J. P.

    2002-01-01

    Vol. 66, No. 2 (2002), 024314-1 ISSN 0556-2813 R&D Projects: GA ČR GA202/99/0149 Institutional research plan: CEZ:AV0Z1048901 Keywords: light nuclei * shell model Subject RIV: BE - Theoretical Physics Impact factor: 2.848, year: 2002

  18. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Vol. 89, JUL (2016), pp. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  19. Final Report: Fermionic Symmetries and Self-Consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.

  20. Acoustic modeling of shell-encapsulated gas bubbles

    NARCIS (Netherlands)

    P.J.A. Frinking (Peter); N. de Jong (Nico)

    1998-01-01

    Existing theoretical models do not adequately describe the scatter and attenuation properties of the ultrasound contrast agents Quantison(TM) and Myomap(TM). An adapted version of the Rayleigh-Plesset equation, in which the shell is described by a viscoelastic solid, is proposed and

  1. Monte Carlo investigations of distance-dependent effects on energy deposition in K-shell x-ray fluorescence bone lead measurement

    International Nuclear Information System (INIS)

    Ahmed, Naseer; Fleming, David E B; O'Meara, Joanne M

    2004-01-01

    Radiation energy deposition results are presented from a Monte Carlo code simulating the lower part of a leg during an in vivo 109 Cd K-shell x-ray fluorescence (KXRF) bone lead measurement. The simulations were run for a leg phantom model representing an adult subject, assuming concentrations of 10 μg Pb per gram bone mineral and tracing 500 million photons in each simulation. Trials were performed over a range (0.5-6.0 cm) of source-to-sample (S-S) distances. Energies deposited due to Compton and photoelectric processes occurring in the bone and the soft tissue were obtained. The data show an increase in the amount of energy deposited in the bone as the sample is moved closer to the source (from 2.0 cm to 0.5 cm). However, there is a decrease in the amount of energy deposited in the soft tissue as the sample is moved closer to the source over the same distance interval. In decreasing the S-S distance from 2.0 cm to 0.5 cm, the amount of energy deposited in the sample as a whole was found to increase by 11%. By calculating the energy deposition in the bone and in the soft tissue as a fraction of the total energy deposited in the sample, the corresponding changes are quantified as a function of S-S distance. Similarly, the proportions of energy deposited via the photoelectric effect and Compton scattering are presented as a function of S-S distance. (note)

  2. Super-hypernuclei in the quark-shell model, 2

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1989-07-01

    By following the previous paper, where the quark-shell model of nuclei in quantum chromodynamics is briefly reviewed, a short review of the MIT bag model of nuclei is presented for comparison and a simple estimate of the Hλ ('hexalambda') mass is also made for illustration. Furthermore, an even shorter review of the 'nucleon cluster model' of nuclei is presented for further comparison. (J.P.N.)

  3. A Monte Carlo methodology for modelling ashfall hazards

    Science.gov (United States)

    Hurst, Tony; Smith, Warwick

    2004-12-01

    We have developed a methodology for quantifying the probability of particular thicknesses of tephra at any given site, using Monte Carlo methods. This is a part of the development of a probabilistic volcanic hazard model (PVHM) for New Zealand, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo procedure allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. This method can handle the effects of multiple volcanic sources, each source with its own characteristics. We accumulate the tephra thicknesses from all sources to estimate the combined ashfall hazard, expressed as the frequency with which any given depth of tephra is likely to be deposited at selected sites. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from sediment cores in Auckland give useful bounds for the likely total volumes erupted from Egmont Volcano (Mt. Taranaki), 280 km away, during the last 130,000 years.
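
    The core of such a methodology is a loop that samples eruption volume and wind, evaluates an attenuation model, and tallies exceedance frequencies. The Python sketch below does exactly that, with a crude exponential-thinning surrogate in place of ASHFALL and invented source statistics, purely to show how annual probabilities and mean return periods fall out of the Monte Carlo tally.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative surrogate for the ashfall attenuation: thickness at a site from one
# eruption as a function of erupted volume, distance, and wind alignment.
def thickness_mm(volume_km3, distance_km, wind_toward_site):
    base = 1e3 * volume_km3 * np.exp(-distance_km / 80.0)    # mm, exponential thinning
    return base * np.maximum(wind_toward_site, 0.05)         # strong downwind bias

site_distance_km = 120.0
eruptions_per_year = 1.0 / 300.0          # assumed mean eruption frequency
n_eruptions = 100_000

# Monte Carlo over eruptive volume (log-uniform) and wind direction (uniform).
volume = 10 ** rng.uniform(-2, 0.5, n_eruptions)             # 0.01 - 3 km^3
wind = np.cos(rng.uniform(0, 2 * np.pi, n_eruptions))        # alignment with the site
thick = thickness_mm(volume, site_distance_km, wind)

for d in (1.0, 10.0, 100.0):                                 # mm thresholds
    p_given_eruption = np.mean(thick >= d)
    annual_p = eruptions_per_year * p_given_eruption
    rp = np.inf if annual_p == 0 else 1.0 / annual_p
    print(f">= {d:5.1f} mm: P per eruption {p_given_eruption:.3f}, "
          f"annual P {annual_p:.5f}, mean return period {rp:,.0f} yr")
```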

  4. Quantum Monte Carlo method for models of molecular nanodevices

    Science.gov (United States)

    Arrachea, Liliana; Rozenberg, Marcelo J.

    2005-07-01

    We introduce a quantum Monte Carlo technique to calculate exactly at finite temperatures the Green function of a fermionic quantum impurity coupled to a bosonic field. While the algorithm is general, we focus on the single impurity Anderson model coupled to a Holstein phonon as a schematic model for a molecular transistor. We compute the density of states at the impurity in a large range of parameters, to demonstrate the accuracy and efficiency of the method. We also obtain the conductance of the impurity model and analyze different regimes. The results show that even in the case when the effective attractive phonon interaction is larger than the Coulomb repulsion, a Kondo-like conductance behavior might be observed.

  5. Describing Compton scattering and two-quanta positron annihilation based on Compton profiles: Two models suited for the Monte Carlo method

    CERN Document Server

    Bohlen, TT; Patera, V; Sala, P R

    2012-01-01

    An accurate description of the basic physics processes of Compton scattering and positron annihilation in matter requires the consideration of atomic shell structure effects and, in specific, the momentum distributions of the atomic electrons. Two algorithms which model Compton scattering and two-quanta positron annihilation at rest accounting for shell structure effects are proposed. Two-quanta positron annihilation is a physics process which is of particular importance for applications such as positron emission tomography (PET). Both models use a detailed description of the processes which incorporate consistently Doppler broadening and binding effects. This together with the relatively low level of complexity of the models makes them particularly suited to be employed by fast sampling methods for Monte Carlo particle transport. Momentum distributions of shell electrons are obtained from parametrized one-electron Compton profiles. For conduction electrons, momentum distributions are derived in the framework...

  6. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  7. Determination of the Hamiltonian matrix for IBM4 and comparison of its eigenvalues with the shell model

    International Nuclear Information System (INIS)

    Slyman, S.; Hadad, S.; Souman, H.

    2004-01-01

    The Hamiltonian is determined using the OAI procedure and the mapping of IBM4 states into the shell model, which is based on the seniority classification scheme. A boson sub-matrix of the shell model Hamiltonian for the (sd)^4 configuration is constructed, and is proved to produce the same eigenvalues as the shell model Hamiltonian for the corresponding fermion states. (authors)

  8. Monte Carlo model of diagnostic X-ray dosimetry

    International Nuclear Information System (INIS)

    Khrutchinsky, Arkady; Kutsen, Semion; Gatskevich, George

    2008-01-01

    Full text: A Monte Carlo simulation of absorbed dose distribution in patient's tissues is often used in a dosimetry assessment of X-ray examinations. The results of such simulations in Belarus are presented in the report based on an anthropomorphic tissue-equivalent Rando-like physical phantom. The phantom corresponds to an adult 173 cm high and of 73 kg and consists of a torso and a head made of tissue-equivalent plastics which model soft (muscular), bone, and lung tissues. It consists of 39 layers (each 25 mm thick), including 10 head and neck ones, 16 chest and 13 pelvis ones. A tomographic model of the phantom has been developed from its CT-scan images with a voxel size of 0.88 x 0.88 x 4 mm 3 . A necessary pixelization in Mathematics-based in-house program was carried out for the phantom to be used in the radiation transport code MCNP-4b. The final voxel size of 14.2 x 14.2 x 8 mm 3 was used for the reasonable computer consuming calculations of absorbed dose in tissues and organs in various diagnostic X-ray examinations. MCNP point detectors allocated through body slices obtained as a result of the pixelization were used to calculate the absorbed dose. X-ray spectra generated by the empirical TASMIP model were verified on the X-ray units MEVASIM and SIREGRAPH CF. Absorbed dose distributions in the phantom volume were determined by the corresponding Monte Carlo simulations with a set of point detectors. Doses in organs of the adult phantom computed from the absorbed dose distributions by another Mathematics-based in-house program were estimated for 22 standard organs for various standard X-ray examinations. The results of Monte Carlo simulations were compared with the results of direct measurements of the absorbed dose in the phantom on the X-ray unit SIREGRAPH CF with the calibrated thermo-luminescent dosimeter DTU-01. The measurements were carried out in specified locations of different layers in heart, lungs, liver, pancreas, and stomach at high voltage of

  9. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    Full Text Available The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
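
    The first stage of such a model, building a household demand distribution by Monte Carlo sampling of end-use parameters, looks roughly like the Python sketch below. The end uses, distributions and numbers are illustrative placeholders rather than the study's calibrated values, and the least-cost optimization stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

n_households = 10_000

# Illustrative end-use parameter distributions (per household); a real study would
# calibrate these to metered data, the numbers here are placeholders.
residents = rng.integers(1, 6, n_households)
showers_per_person = rng.normal(0.7, 0.2, n_households).clip(0.2)     # per day
shower_minutes = rng.lognormal(np.log(8), 0.3, n_households)
shower_lpm = rng.choice([7.5, 9.5], n_households, p=[0.4, 0.6])       # efficient vs. not
toilet_flushes_pp = rng.normal(5.0, 1.0, n_households).clip(2)
toilet_l_per_flush = rng.choice([4.8, 13.0], n_households, p=[0.5, 0.5])
washer_loads_per_week = rng.normal(0.9, 0.3, n_households).clip(0.2) * residents
washer_l_per_load = rng.choice([50.0, 130.0], n_households, p=[0.3, 0.7])
outdoor_l_per_day = rng.lognormal(np.log(400), 0.8, n_households)     # summer irrigation

indoor = residents * (showers_per_person * shower_minutes * shower_lpm
                      + toilet_flushes_pp * toilet_l_per_flush) \
         + washer_loads_per_week * washer_l_per_load / 7.0
total = indoor + outdoor_l_per_day

print(f"median household use: {np.median(total):.0f} L/day "
      f"(indoor {np.median(indoor):.0f} L/day)")
print(f"90th percentile: {np.percentile(total, 90):.0f} L/day")
```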

  10. Monte Carlo Computational Modeling of Atomic Oxygen Interactions

    Science.gov (United States)

    Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.

    2017-01-01

    Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environment durability. Successful modeling requires that the characteristics of the environment such as atomic oxygen energy distribution, flux, and angular distribution be properly represented in the model. Thus whether the atomic oxygen is arriving normal to or inclined to a surface and whether it arrives in a consistent direction or is sweeping across the surface such as in the case of polymeric solar array blankets is important to determine durability. When atomic oxygen impacts a polymer surface it can react removing a certain volume per incident atom (called the erosion yield), recombine, or be ejected as an active oxygen atom to potentially either react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides such as SiO2 to prevent atomic oxygen erosion. Such protective films also have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival and how it reacts with a representative polymer (polyimide Kapton H) and how it reacts at defect sites in an oxide protective coating, such as SiO2 on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place. The computational model is a two dimensional model

  11. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating if they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of continuous potential taking into account thermal motion of the lattice atoms and using Moliere screening function. The energy of the particle transverse motion determines whether or n...

  12. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

    Science.gov (United States)

    Saini, P. Sri; Prince, Shanthi

    2011-10-01

    At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of the underwater resources, pollution monitoring, disaster prevention etc. Underwater, where radio waves do not propagate, acoustic communication is being used. But, underwater communication is moving towards Optical Communication which has higher bandwidth when compared to Acoustic Communication but has shorter range comparatively. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. The scattering effect has two effects: the attenuation of the signal and the Inter-Symbol Interference (ISI) of the signal. However, the Inter-Symbol Interference is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using Monte-Carlo method. The Monte Carlo approach provides the most general and most flexible technique for numerically solving the equations of Radiative transfer. The attenuation co-efficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and for less chlorophyll conditions blue wavelength is less absorbed whereas for chlorophyll rich environment red wavelength signal is absorbed less comparative to blue and green wavelength.
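
    A bare-bones version of such a photon Monte Carlo is shown below in Python: photons are launched toward a receiver plane, free paths are sampled from the beam attenuation coefficient c = a + b, and each interaction is either absorption or a scattering event. The water-type coefficients are rough placeholders, and the isotropic phase function is a crude stand-in for the strongly forward-peaked scattering of real sea water.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative water types: (absorption a, scattering b) in 1/m; rough placeholders.
water = {"clear ocean": (0.05, 0.03), "coastal": (0.09, 0.20), "harbor": (0.30, 1.50)}
link_length = 10.0          # m, distance to an (infinite) receiver plane at z = link_length
n_photons = 20_000

def received_fraction(a, b):
    c = a + b                                   # beam attenuation coefficient
    received = 0
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])   # launched toward the receiver
        while True:
            step = rng.exponential(1.0 / c)     # free path to the next interaction
            pos = pos + step * direction
            if pos[2] >= link_length:
                received += 1                   # crossed the receiver plane
                break
            if rng.random() > b / c:            # absorbed (single-scattering albedo b/c)
                break
            # isotropic scattering into a new direction
            cos_t = rng.uniform(-1, 1)
            phi = rng.uniform(0, 2 * np.pi)
            sin_t = np.sqrt(1 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return received / n_photons

for name, (a, b) in water.items():
    frac = received_fraction(a, b)
    print(f"{name:12s}: received fraction {frac:.3f} over {link_length:.0f} m")
```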

  13. A valence force field-Monte Carlo algorithm for quantum dot growth modeling

    DEFF Research Database (Denmark)

    Barettin, Daniele; Kadkhodazadeh, Shima; Pecchia, Alessandro

    2017-01-01

    We present a novel kinetic Monte Carlo version of the atomistic valence force field algorithm in order to model a self-assembled quantum dot growth process. We show that our atomistic model is both computationally favorable and captures more details compared to traditional kinetic Monte Carlo models...

  14. Resonance and continuum Gamow shell model with realistic nuclear forces

    Science.gov (United States)

    Sun, Z. H.; Wu, Q.; Zhao, Z. H.; Hu, B. S.; Dai, S. J.; Xu, F. R.

    2017-06-01

    Starting from realistic nuclear forces, we have developed a core Gamow shell model which can describe resonance and continuum properties of loosely bound or unbound nuclear systems. To describe resonance and continuum properly, the Berggren representation has been employed, which treats bound, resonant and continuum states on an equal footing in a complex-momentum (complex-k) plane. To derive the model-space effective interaction based on realistic forces, the full Q̂-box folded-diagram renormalization has been, for the first time, extended to the nondegenerate complex-k space. The CD-Bonn potential is softened by using the Vlow-k method. Choosing 16O as the inert core, we have calculated sd-shell neutron-rich oxygen isotopes, giving good descriptions of both bound and resonant states. The isotopes 25,26O are calculated to be resonant even in their ground states.

  15. Deformed shell model studies of spectroscopic properties of 64 Zn ...

    Indian Academy of Sciences (India)

    2014-04-05

    Apr 5, 2014 ... The spectroscopic properties of 64Zn and 64Ni are calculated within the framework of the deformed shell model (DSM) based on Hartree–Fock states. The GXPF1A interaction in the 1f7/2, 2p3/2, 1f5/2 and 2p1/2 space with 40Ca as the core is employed. After ensuring that DSM gives good description of ...

  16. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
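
    For orientation, the quantity that both EM and DREAM work with is the BMA log-likelihood of the training data, with the predictive density written as a weighted mixture of kernels centred on the ensemble members. The sketch below (hypothetical names, Gaussian kernels, a single common spread) is only meant to make that objective concrete; the paper's implementation details differ.

      import numpy as np
      from scipy.stats import norm

      def bma_log_likelihood(weights, sigma, forecasts, observations):
          """forecasts: (T, K) ensemble forecasts; observations: (T,);
          weights: (K,) non-negative and summing to 1; sigma: common std dev."""
          mix = np.sum(weights * norm.pdf(observations[:, None],
                                          loc=forecasts, scale=sigma), axis=1)
          return np.sum(np.log(mix + 1e-300))

      # toy usage: 3 ensemble members, 100 time steps of synthetic data
      rng = np.random.default_rng(1)
      obs = rng.normal(size=100)
      fc = obs[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(100, 3))
      w = np.array([0.5, 0.3, 0.2])
      print(bma_log_likelihood(w, 0.8, fc, obs))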

  17. An IBM-3 hamiltonian from a multi-j-shell model

    International Nuclear Information System (INIS)

    Evans, J.A.; Elliott, J.P.; Lac, V.S.; Long, G.L.

    1995-01-01

    The number and isospin dependence of the hamiltonian in the isospin invariant form (IBM-3) of the boson model is deduced from a seniority mapping onto a shell-model system of several shells. The numerical results are compared with earlier work for a single j-shell. (orig.)

  18. Monte Carlo simulations of lattice models for single polymer systems

    International Nuclear Information System (INIS)

    Hsu, Hsiao-Ping

    2014-01-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to chain lengths N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √(10), we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains with stiffness controlled by a bending potential. The persistence lengths of the chains, extracted from the orientational correlations, are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed, and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior
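
    The chain-growth idea behind the pruned-enriched Rosenbluth method can be illustrated with a bare Rosenbluth sampler for self-avoiding walks on the simple cubic lattice; the pruning and enrichment steps that make the method efficient up to N ∼ 10^4 are deliberately left out, so the sketch below is only practical for short chains, and all names in it are illustrative.

      import random

      NEIGH = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

      def grow_saw(n_steps, rng=random.Random(0)):
          """Grow one self-avoiding walk of n_steps bonds; return (sites, weight).
          The Rosenbluth weight corrects for the biased, atmosphere-based growth."""
          sites = [(0, 0, 0)]
          occupied = {sites[0]}
          weight = 1.0
          for _ in range(n_steps):
              x, y, z = sites[-1]
              free = [(x + dx, y + dy, z + dz) for dx, dy, dz in NEIGH
                      if (x + dx, y + dy, z + dz) not in occupied]
              if not free:                      # dead end: attrition, weight 0
                  return sites, 0.0
              weight *= len(free) / 6.0         # normalised Rosenbluth factor
              nxt = rng.choice(free)
              sites.append(nxt)
              occupied.add(nxt)
          return sites, weight

      def mean_r2(n_steps=40, n_chains=2000):
          """Weighted estimate of the mean squared end-to-end distance R^2(N)."""
          num = den = 0.0
          for _ in range(n_chains):
              sites, w = grow_saw(n_steps)
              if w > 0:
                  dx, dy, dz = (a - b for a, b in zip(sites[-1], sites[0]))
                  num += w * (dx * dx + dy * dy + dz * dz)
                  den += w
          return num / den

      print(mean_r2())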

  19. Phases and phase transitions in the algebraic microscopic shell model

    Directory of Open Access Journals (Sweden)

    Georgieva A. I.

    2016-01-01

    We explore the dynamical symmetries of the shell model number-conserving algebra, which define three types of pairing and quadrupole phases, with the aim of obtaining the prevailing phase or phase transition for real nuclear systems in a single shell. This is achieved by establishing a correspondence between each of the pairing bases and Elliott's SU(3) basis, which describes collective rotation of nuclear systems. This allows for a complete classification of the basis states for different numbers of particles in all the limiting cases. The probability distribution of the SU(3) basis states within their corresponding pairing states is also obtained. The relative strengths of the dynamically symmetric quadrupole-quadrupole interaction with respect to the isoscalar, isovector and total pairing interactions define a control parameter, which estimates the importance of each term of the Hamiltonian in the correct reproduction of the experimental data for the considered nuclei.

  20. Monte Carlo Modeling the UCN τ Magneto-Gravitational Trap

    Science.gov (United States)

    Holley, A. T.; UCNτ Collaboration

    2016-09-01

    The current uncertainty in our knowledge of the free neutron lifetime is dominated by the nearly 4σ discrepancy between complementary "beam" and "bottle" measurement techniques. An incomplete assessment of systematic effects is the most likely explanation for this difference and must be addressed in order to realize the potential of both approaches. The UCN τ collaboration has constructed a large-volume magneto-gravitational trap that eliminates the material interactions which complicated the interpretation of previous bottle experiments. This is accomplished using permanent NdFeB magnets in a bowl-shaped Halbach array to confine polarized UCN from the sides and below, and the earth's gravitational field to trap them from above. New in situ detectors that count surviving UCN provide a means of empirically assessing residual systematic effects. The interpretation of those data, and their implication for experimental configurations with enhanced precision, can be bolstered by Monte Carlo models of the current experiment which provide the capability for stable tracking of trapped UCN and detailed modeling of their polarization. Work to develop such models and their comparison with data acquired during our first extensive set of systematics studies will be discussed.

  1. Monte Carlo and phantom study in the brain edema models

    Directory of Open Access Journals (Sweden)

    Yubing Liu

    2017-05-01

    Because brain edema has a crucial impact on morbidity and mortality, it is important to develop a noninvasive method to monitor its progression effectively. When brain edema occurs, the optical properties of the brain change. The goal of this study is to assess the feasibility and reliability of using noninvasive near-infrared spectroscopy (NIRS) to monitor brain edema. Specifically, three models, involving water content changes in the cerebrospinal fluid (CSF), gray matter and white matter, were explored. These models were numerically simulated by Monte Carlo studies. Phantom experiments were then performed to investigate the light intensity measured at different detecting radii on the tissue surface. The results indicated that the light intensity correlated well with the conditions of the brain edema and the detecting radius. Briefly, at detecting radii of 3.0 cm and 4.0 cm, the light intensity responds strongly to changes in tissue parameters and optical properties. Thus, it is possible to monitor brain edema noninvasively by the NIRS method, and the light intensity is a reliable and simple parameter with which to assess brain edema.

  2. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  3. Image based Monte Carlo Modeling for Computational Phantom

    Science.gov (United States)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

    The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection.

  4. Mesoscopic kinetic Monte Carlo modeling of organic photovoltaic device characteristics

    Science.gov (United States)

    Kimber, Robin G. E.; Wright, Edward N.; O'Kane, Simon E. J.; Walker, Alison B.; Blakesley, James C.

    2012-12-01

    Measured mobility and current-voltage characteristics of single layer and photovoltaic (PV) devices composed of poly{9,9-dioctylfluorene-co-bis[N,N'-(4-butylphenyl)]bis(N,N'-phenyl-1,4-phenylene)diamine} (PFB) and poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) have been reproduced by a mesoscopic model employing the kinetic Monte Carlo (KMC) approach. Our aim is to show how to avoid the uncertainties common in electrical transport models arising from the need to fit a large number of parameters when little information is available, for example, a single current-voltage curve. Here, simulation parameters are derived from a series of measurements using a self-consistent “building-blocks” approach, starting from data on the simplest systems. We found that site energies show disorder and that correlations in the site energies and a distribution of deep traps must be included in order to reproduce measured charge mobility-field curves at low charge densities in bulk PFB and F8BT. The parameter set from the mobility-field curves reproduces the unipolar current in single layers of PFB and F8BT and allows us to deduce charge injection barriers. Finally, by combining these disorder descriptions and injection barriers with an optical model, the external quantum efficiency and current densities of blend and bilayer organic PV devices can be successfully reproduced across a voltage range encompassing reverse and forward bias, with the recombination rate the only parameter to be fitted, found to be 1×10^7 s^-1. These findings demonstrate an approach that removes some of the arbitrariness present in transport models of organic devices, which validates the KMC as an accurate description of organic optoelectronic systems, and provides information on the microscopic origins of the device behavior.
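
    As a concrete, deliberately generic illustration of the kind of ingredient such a mesoscopic KMC model is built from, the sketch below combines Miller-Abrahams hopping rates between localised sites drawn from a Gaussian density of states with a standard residence-time KMC step. It is not the authors' parameter set or rate expression; every number in it is an assumption chosen for illustration.

      import numpy as np

      KB = 8.617e-5          # Boltzmann constant in eV/K

      def miller_abrahams_rate(dE, r, nu0=1e12, inv_loc=2.0, T=300.0):
          """Hop rate (1/s) for a site-energy difference dE (eV) over distance r (nm);
          inv_loc is 2*gamma, the inverse localisation length in 1/nm."""
          rate = nu0 * np.exp(-inv_loc * r)
          return rate * np.exp(-dE / (KB * T)) if dE > 0 else rate

      def kmc_step(rates, rng=np.random.default_rng(2)):
          """One residence-time KMC step: pick a hop with probability
          proportional to its rate and advance the clock accordingly."""
          total = rates.sum()
          target = rng.random() * total
          hop = np.searchsorted(np.cumsum(rates), target)
          dt = -np.log(rng.random()) / total
          return hop, dt

      # six sites with Gaussian energetic disorder (sigma = 0.1 eV), hops from site 0
      site_energies = np.random.default_rng(3).normal(0.0, 0.1, size=6)
      rates = np.array([miller_abrahams_rate(e - site_energies[0], r=1.0)
                        for e in site_energies[1:]])
      print(kmc_step(rates))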

  5. Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G

    2000-01-01

    An accelerator-driven subcritical cascade reactor composed of a main thermal-neutron reactor, constructed analogously to the core of the VVER-1000 reactor, and a booster reactor, constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting the radioactive waste it produces (for k_{eff}=0.98 and a proton accelerator beam current of I=5.3 mA, the maximum neutron flux density is Phi^{max}(r,z)=10^{14} n·cm^{-2}·s^{-1} in the thermal zone and Phi^{max}(r,z)=2.25·10^{15} n·cm^{-2}·s^{-1} in the fast zone). The suggested configuration of the "cascade" reactor system substantially reduces the requirements on the proton accelerator current.

  6. A Shell Model for Free Vibration Analysis of Carbon Nanoscroll

    Directory of Open Access Journals (Sweden)

    Amin Taraghi Osguei

    2017-04-01

    Carbon nanoscroll (CNS) is a graphene sheet rolled into a spiral structure with great potential for different applications in nanotechnology. In this paper, an equivalent open shell model is presented to study the vibration behavior of a CNS with arbitrary boundary conditions. The equivalent parameters used for modeling carbon nanotubes are implemented to simulate the CNS. The interactions between the layers of the CNS due to van der Waals forces are included in the model. Uniformly distributed translational and torsional springs along the boundaries are considered to achieve a unified solution for different boundary conditions. To study the vibration characteristics of the CNS, the total energy, including strain energy, kinetic energy and van der Waals energy, is minimized using the Rayleigh-Ritz technique. The first-order shear deformation theory has been utilized to model the shell. Chebyshev polynomials of the first kind are used to obtain the eigenvalue matrices. The natural frequencies and corresponding mode shapes of the CNS under different boundary conditions are evaluated. The effect of an electric field in the axial direction on the natural frequencies and mode shapes of the CNS is investigated. The results indicate that, as the electric field increases, the natural frequencies decrease.

  7. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.

  8. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
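
    For orientation only, a plain Euler-type Monte Carlo discretisation of the SABR dynamics dF = sigma F^beta dW1, dsigma = nu sigma dW2 with corr(W1, W2) = rho looks as follows; the multiple-time-step method of the paper is a more elaborate (and more accurate) scheme, and all parameter values below are illustrative.

      import numpy as np

      def sabr_mc_call(F0=0.05, sigma0=0.2, beta=0.7, rho=-0.3, nu=0.4,
                       strike=0.05, T=1.0, n_steps=200, n_paths=50000, seed=4):
          """Crude Monte Carlo price of a call option payoff max(F_T - K, 0)."""
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          F = np.full(n_paths, F0)
          sig = np.full(n_paths, sigma0)
          for _ in range(n_steps):
              z1 = rng.standard_normal(n_paths)
              z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
              # Euler step for the forward, clamped at zero (crude absorption)
              F = np.maximum(F + sig * np.maximum(F, 0.0) ** beta * np.sqrt(dt) * z1, 0.0)
              # exact lognormal update for the volatility process
              sig = sig * np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu ** 2 * dt)
          payoff = np.maximum(F - strike, 0.0)
          return payoff.mean(), payoff.std() / np.sqrt(n_paths)

      print(sabr_mc_call())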

  9. Bayesian specification analysis and estimation of simultaneous equation models using Monte Carlo methods

    NARCIS (Netherlands)

    A. Zellner (Arnold); L. Bauwens (Luc); H.K. van Dijk (Herman)

    1988-01-01

    Bayesian procedures for specification analysis or diagnostic checking of modeling assumptions for structural equations of econometric models are developed and applied using Monte Carlo numerical methods. Checks on the validity of identifying restrictions, exogeneity assumptions and other

  10. Morphing the Shell Model into an Effective Theory

    International Nuclear Information System (INIS)

    Haxton, W. C.; Song, C.-L.

    2000-01-01

    We describe a strategy for attacking the canonical nuclear structure problem--bound-state properties of a system of point nucleons interacting via a two-body potential--which involves an expansion in the number of particles scattering at high momenta, but is otherwise exact. The required self-consistent solutions of the Bloch-Horowitz equation for effective interactions and operators are obtained by an efficient Green's function method based on the Lanczos algorithm. We carry out this program for the simplest nuclei, d and 3He, in order to explore the consequences of reformulating the shell model as a controlled effective theory. (c) 2000 The American Physical Society
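
    The Lanczos algorithm mentioned above reduces a large symmetric (Hamiltonian) matrix to a small tridiagonal one whose eigenvalues converge rapidly to the extremal ones. The generic NumPy sketch below shows only this building block; it is not the authors' Bloch-Horowitz Green's-function implementation, and the matrix used in the toy check is random.

      import numpy as np

      def lanczos(A, v0, m):
          """Return (alpha, beta): diagonal and off-diagonal of the m x m
          tridiagonal matrix whose eigenvalues approximate extremal ones of A."""
          n = len(v0)
          V = np.zeros((m, n))
          alpha, beta = np.zeros(m), np.zeros(m - 1)
          V[0] = v0 / np.linalg.norm(v0)
          w = A @ V[0]
          alpha[0] = V[0] @ w
          w -= alpha[0] * V[0]
          for j in range(1, m):
              beta[j - 1] = np.linalg.norm(w)
              V[j] = w / beta[j - 1]
              w = A @ V[j] - beta[j - 1] * V[j - 1]
              alpha[j] = V[j] @ w
              w -= alpha[j] * V[j]
              w -= V[:j + 1].T @ (V[:j + 1] @ w)   # full reorthogonalisation
          return alpha, beta

      # toy check of the lowest Ritz value against dense diagonalisation
      rng = np.random.default_rng(5)
      M = rng.standard_normal((200, 200)); M = (M + M.T) / 2.0
      a, b = lanczos(M, rng.standard_normal(200), 30)
      ritz = np.linalg.eigvalsh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))
      print(ritz[0], np.linalg.eigvalsh(M)[0])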

  11. A Monte Carlo reflectance model for soil surfaces with three-dimensional structure

    Science.gov (United States)

    Cooper, K. D.; Smith, J. A.

    1985-01-01

    A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of the incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. The resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding bidirectional canopy reflectance model are also given.
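
    One elementary ingredient of such a facet model is drawing the direction of a ray leaving a Lambertian facet. A minimal cosine-weighted sampling routine is sketched below; the facet geometry, shadowing and multiple-scattering bookkeeping of the full model are not reproduced, and the function names are illustrative.

      import numpy as np

      def lambertian_direction(normal, rng):
          """Sample an outgoing unit vector with probability proportional to the
          cosine of the angle to `normal` (ideal diffuse reflection)."""
          # local frame: azimuth uniform, cos(theta) = sqrt(u) gives cosine weighting
          u1, u2 = rng.random(), rng.random()
          sin_t, cos_t = np.sqrt(1.0 - u1), np.sqrt(u1)
          local = np.array([sin_t * np.cos(2.0 * np.pi * u2),
                            sin_t * np.sin(2.0 * np.pi * u2), cos_t])
          # orthonormal basis (t1, t2, n) around the facet normal
          n = normal / np.linalg.norm(normal)
          a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
          t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)
          t2 = np.cross(n, t1)
          return local[0] * t1 + local[1] * t2 + local[2] * n

      rng = np.random.default_rng(6)
      d = lambertian_direction(np.array([0.0, 0.0, 1.0]), rng)
      print(d, np.linalg.norm(d))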

  12. Shell model in-water frequencies of the core barrel

    International Nuclear Information System (INIS)

    Takeuchi, K.; De Santo, D.F.

    1980-01-01

    Natural frequencies of a 1/24th-scale core barrel/vessel model in air and in water are measured by determining frequency responses to applied forces. The measured data are analyzed using the one-dimensional fluid-structure computer code MULTIFLEX, developed to calculate the hydraulic force. The fluid-structure interaction in the downcomer annulus is computed with a one-dimensional network model formed to be equivalent to the two-dimensional fluid-structure interaction. The structural model incorporated in MULTIFLEX is substantially simpler than that necessary for structural analyses. For the computation of structural dynamics, the projector method is proposed, which can treat the beam mode by modal analysis and the other shell modes by a direct integration method. Computed in-air and in-water frequencies agree fairly well with the experimental data, verifying the above MULTIFLEX technique

  13. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF6 gas, from ~20% to 93%. The 235U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for 235U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well

  14. Sky-Radiance Models for Monte Carlo Radiative Transfer Applications

    Science.gov (United States)

    Santos, I.; Dalimonte, D.; Santos, J. P.

    2012-04-01

    Photon tracing can be initialized through sky-radiance (Lsky) distribution models when executing Monte Carlo simulations for ocean color studies. To be effective, the Lsky model should: 1) properly represent the sky-radiance features of interest; 2) require low computing time; and 3) depend on a limited number of input parameters. The present study verifies the satisfiability of these prerequisites by comparing results from different Lsky formulations. Specifically, two Lsky models were considered as reference cases because of their different approaches among the solutions presented in the literature. The first model, developed by Harrison and Coombes (HC), is based on a parametric expression where the sun geometry is the unique input. The HC model is one of the analytical sky-radiance distributions applied in state-of-the-art simulations for ocean optics. The coefficients of the HC model were set upon broad-band field measurements, and the result is a model that requires few implementation steps. The second model, implemented by Zibordi and Voss (ZV), is based on physical expressions that account for the optical thickness of permanent gases, aerosol, ozone and water vapour at specific wavelengths. Inter-comparisons between the normalized distributions Lsky_ZV and Lsky_HC (i.e., with unitary scalar irradiance) are discussed by means of individual polar maps and percent differences between sky-radiance distributions. Sky-radiance cross-sections are presented as well. The considered cases include different sun zenith values and wavelengths (i.e., λ = 413, 490 and 665 nm, corresponding to selected center-bands of the MEdium Resolution Imaging Spectrometer MERIS). Results show a significant convergence between Lsky_HC and Lsky_ZV at 665 nm. Differences between the models increase with the sun zenith angle and, mostly, with wavelength. For instance, relative differences up to 50% between Lsky_HC and Lsky_ZV can be observed in the antisolar region for λ = 665 nm and θ* = 45°. The effects of these

  15. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1985-01-01

    In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used
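
    For reference, the generalized Trotter formula referred to above states that, for a Hamiltonian split into two non-commuting parts A and B,

      \[
        e^{-\beta(\hat A+\hat B)} \;=\; \lim_{m\to\infty}\Bigl(e^{-\beta\hat A/m}\,e^{-\beta\hat B/m}\Bigr)^{m},
      \]

    with the finite-m product carrying a leading correction of order β²/m; this identity is the starting point for mapping a quantum lattice partition function onto a form amenable to exact summation or Monte Carlo sampling.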

  16. Partial dynamical symmetry in the symplectic shell model

    Energy Technology Data Exchange (ETDEWEB)

    Escher, J. [TRIUMF, Vancouver, British Columbia (Canada); Leviatan, A. [Hebrew Univ., Racah Inst. of Physics, Jerusalem (Israel)

    2000-08-01

    We present an example of a partial dynamical symmetry (PDS) in an interacting fermion system and demonstrate the close relationship of the associated Hamiltonians with a realistic quadrupole-quadrupole interaction, thus shedding light on this important interaction. Specifically, in the framework of the symplectic shell model of nuclei, we prove the existence of a family of fermionic Hamiltonians with partial SU(3) symmetry. We outline the construction process for the PDS eigenstates with good symmetry and give analytic expressions for the energies of these states and E2 transition strengths between them. Characteristics of both pure and mixed-symmetry PDS eigenstates are discussed and the resulting spectra and transition strengths are compared to those of real nuclei. The PDS concept is shown to be relevant to the description of prolate, oblate, as well as triaxially deformed nuclei. Similarities and differences between the fermion case and the previously established partial SU(3) symmetry in the interacting boson model are considered. (author)

  17. Partial dynamical symmetry in the symplectic shell model

    International Nuclear Information System (INIS)

    Escher, Jutta; Leviatan, Amiram

    2002-01-01

    We present an example of a partial dynamical symmetry (PDS) in an interacting fermion system and demonstrate the close relationship of the associated Hamiltonians with a realistic quadrupole-quadrupole interaction, thus shedding light on this important interaction. Specifically, in the framework of the symplectic shell model of nuclei, we prove the existence of a family of fermionic Hamiltonians with partial SU(3) symmetry. We outline the construction process for the PDS eigenstates with good symmetry and give analytic expressions for the energies of these states and E2 transition strengths between them. Characteristics of both pure and mixed-symmetry PDS eigenstates are discussed and the resulting spectra and transition strengths are compared to those of real nuclei. The PDS concept is shown to be relevant to the description of prolate, oblate, as well as triaxially deformed nuclei. Similarities and differences between the fermion case and the previously established partial SU(3) symmetry in the interacting boson model are considered

  18. Development of the Delta Shell as an integrated modeling environment

    Science.gov (United States)

    Donchyts, Gennadii; Baart, Fedor; Jagers, Bert

    2010-05-01

    Many engineering problems require the use of multiple numerical models from multiple disciplines, for example a river flow model coupled with a groundwater model and a rainfall-runoff model. These models need to be set up, coupled and run; results need to be visualized; and input and output data need to be stored. For some of these steps software or standards already exist, but there is a need for an environment allowing all of these steps to be performed. The goal of the present work is to create a modeling environment in which models from different domains can go through all of these steps: setup, coupling, running, visualization and storage. This presentation deals with the different problems which arise when setting up such a modelling framework, covering terminology and numerical aspects as well as software development issues. In order to solve these issues we use Domain Driven Design methods, available open standards and open-source components. While creating an integrated modeling environment we have identified that a separation of the following domains is essential: a framework for linking and exchanging data between models; a framework for integrating the different components of the environment; a graphical user interface; GIS; a hybrid relational and multi-dimensional data store; discipline-specific libraries (river hydrology, morphology, water quality, statistics); and model-specific components. The Delta Shell environment, which is the basis for several products such as HABITAT, SOBEK and the future Delft3D interface, implements and integrates components covering the above-mentioned domains by making use of open standards and open-source components. Different components have been developed to fill in gaps. For exchanging data with the GUI, an object-oriented scientific framework in .NET, somewhat similar to JSR-275, was developed within Delta Shell. For the GIS domain several OGC standards were used, such as SFS, WCS and WFS. For storage the CF standard together with

  19. Importance-truncated no-core shell model for fermionic many-body systems

    Energy Technology Data Exchange (ETDEWEB)

    Spies, Helena

    2017-03-15

    The exact solution of quantum mechanical many-body problems is only possible for few particles. Therefore, numerical methods have been developed in the fields of quantum physics and quantum chemistry for larger particle numbers. Configuration Interaction (CI) methods or the No-Core Shell Model (NCSM) allow ab initio calculations for light and intermediate-mass nuclei, without resorting to phenomenology. An extension of the NCSM is the Importance-Truncated No-Core Shell Model (IT-NCSM), which uses an a priori selection of the most important basis states. The importance truncation was first developed and applied in quantum chemistry in the 1970s and later successfully applied to models of light and intermediate-mass nuclei. Other numerical methods for ultra-cold fermionic many-body systems are the Fixed-Node Diffusion Monte Carlo method (FN-DMC) and the stochastic variational approach with Correlated Gaussian basis functions (CG). Further methods used for many-body calculations include the Coupled-Cluster method, the Green's Function Monte Carlo (GFMC) method, et cetera. In this thesis, we adopt the IT-NCSM for the calculation of ultra-cold Fermi gases at unitarity. Ultracold gases are dilute, strongly correlated systems, in which the average interparticle distance is much larger than the range of the interaction. Therefore, the detailed radial dependence of the potential is not resolved, and the potential can be replaced by an effective contact interaction. At low energy, s-wave scattering dominates and the interaction can be described by the s-wave scattering length. If the scattering length is small and negative, Cooper pairs are formed in the Bardeen-Cooper-Schrieffer (BCS) regime. If the scattering length is small and positive, these Cooper pairs become strongly bound molecules in a Bose-Einstein Condensate (BEC). In between (for large scattering lengths) is the unitary limit with universal properties. Calculations of the energy spectra

  20. Monte Carlo study of superconductivity in the three-band Emery model

    International Nuclear Information System (INIS)

    Frick, M.; Pattnaik, P.C.; Morgenstern, I.; Newns, D.M.; von der Linden, W.

    1990-01-01

    We have examined the three-band Hubbard model for the copper oxide planes in high-temperature superconductors using the projector quantum Monte Carlo method. We find no evidence for s-wave superconductivity

  1. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.

    2003-01-01

    Fulfillment of the energy-momentum conservation law, as well as the charge, baryon and lepton number conservation, is checked for the HIJING Monte Carlo program in $pp$-interactions at $\sqrt{s}=$ 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved, the deviation from zero is at the level of 1--2 GeV/c, and it is connected with the hard jet production. The deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. Azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.

  2. Modeling complicated rheological behaviors in encapsulating shells of lipid-coated microbubbles accounting for nonlinear changes of both shell viscosity and elasticity

    Science.gov (United States)

    Li, Qian; Matula, Thomas J.; Tu, Juan; Guo, Xiasheng; Zhang, Dong

    2013-02-01

    It has been accepted that the dynamic responses of ultrasound contrast agent (UCA) microbubbles are significantly affected by the encapsulating shell properties (e.g., shell elasticity and viscosity). In this work, a new model is proposed to describe the complicated rheological behaviors of the encapsulating shell of UCA microbubbles by applying the nonlinear ‘Cross law’ to the shell viscous term in the Marmottant model. The proposed new model was verified by fitting the dynamic responses of UCAs measured with either a high-speed optical imaging system or a light scattering system. The comparison between the measured radius-time curves and the numerical simulations demonstrates that the ‘compression-only’ behavior of UCAs can be successfully simulated with the new model. The shell elastic and viscous coefficients of SonoVue microbubbles were then evaluated based on the new model simulations and compared to the results obtained from existing UCA models. The results confirm the capability of the current model to reduce the dependence of the bubble shell parameters on the initial bubble radius, which indicates that the current model may be more comprehensive in describing the complex rheological nature (e.g., ‘shear-thinning’ and ‘strain-softening’) of the encapsulating shells of UCA microbubbles, by taking into account the nonlinear changes of both shell elasticity and shell viscosity.
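
    For reference, the generic Cross law describes a shear-rate-dependent ("shear-thinning") viscosity of the form

      \[
        \eta(\dot\gamma) \;=\; \eta_{\infty} \;+\; \frac{\eta_{0}-\eta_{\infty}}{1+(\lambda\,\dot\gamma)^{m}},
      \]

    where η0 and η∞ are the low- and high-shear-rate limits, λ is a time constant and m a dimensionless exponent. In the model described above, an analogous nonlinear dependence is assigned to the shell viscosity term of the Marmottant equation; the authors' exact parametrisation is not reproduced here.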

  3. Symplectic ab initio no-core shell model

    Energy Technology Data Exchange (ETDEWEB)

    Draayer, J. P.; Dytrych, T.; Sviratcheva, K. D.; Bahri, C. [Department of Physics and Astronomy, Lousiana State University, Baton Rouge, 70803 Lousiana (United States); Vary, J. P. [Department of Physics and Astronomy, Iowa State University, Ames, 50011 Iowa (United States)

    2008-12-15

    The present study confirms the significance of the symplectic Sp(3,R) symmetry in nuclear dynamics as unveiled, for the first time, by examinations of realistic nucleon-nucleon interactions as well as of eigenstates calculated in the framework of the ab initio No-Core Shell Model (NCSM). The results reveal that the NCSM wave functions for light nuclei overlap strongly (at the ~90% level) with only a few of the most deformed Sp(3,R)-symmetric basis states. This points to the possibility of achieving convergence of higher-lying collective modes and reaching heavier nuclei by expanding the NCSM basis space beyond its current limits through Sp(3,R) basis states. Furthermore, the symplectic symmetry is found to be favored by the JISP16 and CD-Bonn realistic nucleon-nucleon interactions, which points to a more fundamental origin of the symplectic symmetry. (Author)

  4. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    Science.gov (United States)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature K_c = 0.221 654 626(5) and the critical exponent of the correlation length ν = 0.629 912(86) with precision that exceeds all previous Monte Carlo estimates.
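
    The Wolff cluster-flipping update named above can be written in a few lines; the sketch below is a plain illustrative implementation for a small lattice and makes no attempt to reproduce the production-level machinery of the study (53-bit random-number streams, histogram reweighting, quadruple-precision accumulation, cross-correlation analysis). Lattice size and coupling are illustrative.

      import numpy as np

      def wolff_update(spins, K, rng):
          """Grow and flip one Wolff cluster; bonds are added with p = 1 - exp(-2K)."""
          L = spins.shape[0]
          p_add = 1.0 - np.exp(-2.0 * K)
          seed = tuple(rng.integers(0, L, size=3))
          s0 = spins[seed]
          stack, cluster = [seed], {seed}
          while stack:
              x, y, z = stack.pop()
              for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                 (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                  nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
                  if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                      cluster.add(nb)
                      stack.append(nb)
          for site in cluster:
              spins[site] *= -1                     # flip the whole cluster
          return len(cluster)

      L, K = 16, 0.2216546                          # K near the quoted critical coupling
      rng = np.random.default_rng(7)
      spins = rng.choice([-1, 1], size=(L, L, L))
      for sweep in range(200):
          wolff_update(spins, K, rng)
      print("m =", abs(spins.mean()))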

  5. A Novel Model of Dielectric Constant of Two-Phase Composites with Interfacial Shells

    Science.gov (United States)

    Xue, Qingzhong

    Considering the interface effect between the two phases of a composite, we present a novel model for the dielectric constant of two-phase composites with interfacial shells. Starting from Maxwell theory and average polarization theory, a formula for calculating the effective dielectric constant of two-phase random composites with interfacial shells is presented. The theoretical results for the effective dielectric constant of alkyd resin paint/barium titanate random composites with interfacial shells are in good agreement with the experimental data.

  6. Elementary isovector spin and orbital magnetic dipole modes revisited in the shell model

    International Nuclear Information System (INIS)

    Richter, A.

    1988-08-01

    A review is given of the status of mainly spin magnetic dipole modes in some sd- and fp-shell nuclei studied with inelastic electron and proton scattering, and by β+ decay. Particular emphasis is also placed on a fairly new, mainly orbital magnetic dipole mode investigated by high-resolution (e,e') and (p,p') scattering experiments on a series of fp-shell nuclei. Both modes are discussed in terms of the shell model with various effective interactions. (orig.)

  7. Towards a shell-model description of intruder states and the onset of deformation

    International Nuclear Information System (INIS)

    Heyde, K.; Van Isacker, P.; Casten, R.F.; Wood, J.L.

    1985-01-01

    Based on the nuclear shell model and concentrating on the monopole, pairing and quadrupole corrections originating from the nucleon-nucleon force, both the appearance of low-lying 0+ intruder states near major closed shells (Z = 50, 82) and sub-shell regions (Z = 40, 64) can be described. Moreover, a number of new facets related to the study of intruder states are presented. 19 refs., 3 figs

  8. Sinusoidal velaroidal shell – numerical modelling of the nonlinear ...

    African Journals Online (AJOL)

    Many works are devoted to linear and nonlinear analyses of shells of classical form, but for thin shells of complex geometry much remains to be done. Four different sources of nonlinearity exist in solid mechanics: geometric nonlinearity, material nonlinearity, kinetic nonlinearity and force nonlinearity.

  9. Progress and applications of MCAM. Monte Carlo automatic modeling program for particle transport simulation

    International Nuclear Information System (INIS)

    Wang Guozhong; Zhang Junjun; Xiong Jian

    2010-01-01

    MCAM (Monte Carlo Automatic Modeling program for particle transport simulation) was developed by the FDS Team as a CAD-based bi-directional interface program between general CAD systems and Monte Carlo particle transport simulation codes. The physics and material modeling and void-space modeling functions were improved, and a free-form-surface processing function was developed recently. The applications to the ITER (International Thermonuclear Experimental Reactor) building model and the FFHR (Force Free Helical Reactor) model have demonstrated the feasibility, effectiveness and maturity of the latest MCAM version for nuclear applications with complex geometry. (author)

  10. Stress Resultant Based Elasto-Viscoplastic Thick Shell Model

    Directory of Open Access Journals (Sweden)

    Pawel Woelke

    2012-01-01

    The current paper presents enhancements introduced to the elasto-viscoplastic shell formulation, which serves as a theoretical base for the finite element code EPSA (Elasto-Plastic Shell Analysis) [1–3]. The shell equations used in EPSA are modified to account for transverse shear deformation, which is important in the analysis of thick plates and shells, as well as composite laminates. Transverse shear forces calculated from transverse shear strains are introduced into a rate-dependent yield function, which is similar to Iliushin's yield surface expressed in terms of stress resultants and stress couples [12]. The hardening rule defined by Bieniek and Funaro [4], which allows for representation of the Bauschinger effect on a moment-curvature plane, was previously adopted in EPSA and is used here in the same form. Viscoplastic strain rates are calculated taking into account the transverse shears. Only non-layered shells are considered in this work.

  11. A Monte Carlo model of complex spectra of opacity calculations

    International Nuclear Information System (INIS)

    Klapisch, M.; Duffy, P.; Goldstein, W.H.

    1991-01-01

    We are developing a Monte Carlo method for calculating opacities of complex spectra. It should be faster than atomic structure codes and is more accurate than the UTA method. We use the idea that wavelength-averaged opacities depend on the overall properties, but not the details, of the spectrum; our spectra have the same statistical properties as real ones, but the strength and energy of each line are random. In preliminary tests we can get Rosseland mean opacities within 20% of actual values. (orig.)
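
    For reference, the Rosseland mean opacity quoted above is the harmonic frequency average of the monochromatic opacity weighted by the temperature derivative of the Planck function,

      \[
        \frac{1}{\kappa_{R}}
        \;=\;
        \frac{\displaystyle\int_{0}^{\infty}\kappa_{\nu}^{-1}\,\frac{\partial B_{\nu}}{\partial T}\,d\nu}
             {\displaystyle\int_{0}^{\infty}\frac{\partial B_{\nu}}{\partial T}\,d\nu},
      \]

    so in the Monte Carlo scheme the monochromatic opacity κν is assembled from lines whose strengths and positions are random but share the statistical properties of the real spectrum.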

  12. Optical Monte Carlo modeling of a true portwine stain anatomy

    Science.gov (United States)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  13. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k_eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k_eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k_eff values for individual generations in the computer simulation, not the standard deviation of the computed k_eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that all the k_eff values from the separate generations are not statistically independent, since the k_eff of a given generation is a function of the k_eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k_eff are needed

  14. Recent Developments in No-Core Shell-Model Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R

    2009-03-20

    We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we highlight in particular results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or the use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.

  15. Core-shell particles as model compound for studying fouling

    DEFF Research Database (Denmark)

    Christensen, Morten Lykkegaard; Nielsen, Troels Bach; Andersen, Morten Boel Overgaard

    2008-01-01

    Synthetic colloidal particles with hard cores and soft, water-swollen shells were used to study cake formation during ultrafiltration. The total cake resistance was lowest for particles with thick shells, which indicates that interparticle forces (steric hindrance and electrostatic repulsion) influenced cake formation. At low pressure the specific cake resistance could be predicted from the Kozeny-Carman equation. At higher pressures, the resistance increased due to cake compression. Both cake formation and compression were reversible. For particles with thick shells...
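
    For orientation, one common form of the Kozeny-Carman expression for the specific cake resistance (per unit mass of deposited solids) used in such low-pressure predictions is

      \[
        \alpha \;=\; \frac{180\,(1-\varepsilon)}{\rho_{s}\,d_{p}^{2}\,\varepsilon^{3}},
      \]

    with ε the cake porosity, d_p the particle diameter and ρ_s the particle density; prefactor and notation conventions vary between texts, and for the core-shell particles studied here the relevant particle size is presumably the swollen (core plus shell) diameter.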

  16. On two-dimensionalization of three-dimensional turbulence in shell models

    DEFF Research Database (Denmark)

    Chakraborty, Sagar; Jensen, Mogens Høgh; Sarkar, A.

    2010-01-01

    Applying a modified version of the Gledzer-Ohkitani-Yamada (GOY) shell model, the signatures of the so-called two-dimensionalization effect of three-dimensional incompressible, homogeneous, isotropic, fully developed unforced turbulence have been studied and reproduced. Within the framework of shell...

  17. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan

    2015-01-01

    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte Carlo simulation. It describes various gating mechanisms based on the physical differences between unscattered and scattered photons, and methods for microscopic image reconstruction using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging, in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy of turbid tissue media.

  18. Preparation of hollow shell ICF targets using a depolymerizing model

    International Nuclear Information System (INIS)

    Letts, S.A.; Fearon, E.M.; Buckley, S.R.

    1994-11-01

    A new technique for producing hollow shell laser fusion capsules was developed that starts with a depolymerizable mandrel. In this technique we use poly(alpha-methylstyrene) (PAMS) beads or shells as mandrels which are overcoated with plasma polymer. The PAMS mandrel is thermally depolymerized to gas phase monomer, which diffuses through the permeable and thermally more stable plasma polymer coating, leaving a hollow shell. We have developed methods for controlling the size of the PAMS mandrel by either grinding to make smaller sizes or melt sintering to form larger mandrels. Sphericity and surface finish are improved by heating the PAMS mandrels in hot water using a surfactant to prevent aggregation. Using this technique we have made shells from 200 μm to 5 mm diameter with 15 to 100 μm wall thickness having sphericity better than 2 μm and surface finish better than 10 nm RMS

  19. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  20. Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation

    International Nuclear Information System (INIS)

    Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M

    2004-01-01

    The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple-source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams

  1. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  2. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half-spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time-consuming and error-prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic modeling method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and then the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of this method were demonstrated.

  3. Remembrances of Maria Goeppert Mayer and the Nuclear Shell Model.

    Science.gov (United States)

    Baranger, Elizabeth

    2013-04-01

    Maria Goeppert Mayer received the Nobel Prize in Physics in 1963 for her work on the nuclear shell model. I knew her in my teens as a close "friend of the family." The Mayers lived a few blocks away in Leonia, New Jersey from 1939 to 1945, across the street in Chicago from 1945 to 1958, and about one mile away in La Jolla, CA from 1960 until her death. Maria held primarily "vol" (voluntary) positions during this period, although in Chicago she was half time at Argonne National Laboratory as a Senior Physicist. She joined the University of California at San Diego as a professor in 1960, her first full-time academic position. I will discuss her positive impact on a teenager seriously considering becoming a physicist. I will also discuss briefly the impact of her work on our understanding of the structure of nuclei. Maria Mayer was creative and well educated, with a supportive father and husband, but she was foreign, received her PhD at the time of the Great Depression, and was one of the few women trained in physics. Her unusual career and her great success are due to her love of physics and her ability as a theoretical physicist.

  4. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
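
    Since the structure factor is the computational bottleneck, a CPU-side sketch of that kernel helps show why it parallelises so well: for an isotropic sample it reduces to a sum of sin(qr)/(qr) terms over all atom pairs, each pair independent of the others. The NumPy version below (random placeholder coordinates, Debye formula) only illustrates the quantity NRMC offloads to the GPU; it is not the NRMC code itself.

    ```python
    import numpy as np

    def structure_factor(positions, q_values):
        """Debye-formula S(q) for an isotropic sample of N identical atoms."""
        n = len(positions)
        # All pair distances (the O(N^2) part that NRMC offloads to the GPU).
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        iu = np.triu_indices(n, k=1)
        r_pairs = r[iu]
        s = np.empty_like(q_values)
        for i, q in enumerate(q_values):
            qr = q * r_pairs
            s[i] = 1.0 + (2.0 / n) * np.sum(np.sinc(qr / np.pi))  # sinc(x) = sin(pi x)/(pi x)
        return s

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 20.0, size=(200, 3))   # hypothetical confined-fluid configuration
    q = np.linspace(0.5, 10.0, 50)
    print(structure_factor(pos, q)[:5])
    ```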

  3. Symmetry-dictated truncation: Solutions of the spherical shell model for heavy nuclei

    International Nuclear Information System (INIS)

    Guidry, M.W.

    1992-01-01

    Principles of dynamical symmetry are used to simplify the spherical shell model. The resulting symmetry-dictated truncation leads to dynamical symmetry solutions that are often in quantitative agreement with a variety of observables. Numerical calculations, including terms that break the dynamical symmetries, are shown that correspond to shell model calculations for heavy deformed nuclei. The effective residual interaction is simple, well-behaved, and can be determined from basic observables. With this approach, we intend to apply the shell model in systematic fashion to all nuclei. The implications for nuclear structure far from stability and for nuclear masses and other quantities of interest in astrophysics are discussed

  6. The No-Core Gamow Shell Model: Including the continuum in the NCSM

    CERN Document Server

    Barrett, B R; Michel, N; Płoszajczak, M

    2015-01-01

    We are witnessing an era of intense experimental efforts that will provide information about the properties of nuclei far from the line of stability, regarding resonant and scattering states as well as (weakly) bound states. This talk describes our formalism for including these necessary ingredients into the No-Core Shell Model by using the Gamow Shell Model approach. Applications of this new approach, known as the No-Core Gamow Shell Model, both to benchmark cases as well as to unstable nuclei will be given.

  7. Monte Carlo simulation of diblock copolymer microphases by means of a 'fast' off-lattice model

    DEFF Research Database (Denmark)

    Besold, Gerhard; Hassager, O.; Mouritsen, Ole G.

    1999-01-01

    We present a mesoscopic off-lattice model for the simulation of diblock copolymer melts by Monte Carlo techniques. A single copolymer molecule is modeled as a discrete Edwards chain consisting of two blocks with vertices of type A and B, respectively. The volume interaction is formulated in terms...

  8. A model for Monte Carlo simulation of low angle photon scattering in biological tissues

    CERN Document Server

    Tartari, A; Bonifazzi, C

    2001-01-01

    In order to include the molecular interference effect, a simple procedure is proposed and demonstrated to be able to update the usual cross section database for photon coherent scattering modelling in Monte Carlo codes. This effect was evaluated by measurement of coherent scattering distributions and by means of a model based on four basic materials composing biological tissues.

  9. Monte Carlo Simulations of Compressible Ising Models: Do We Understand Them?

    Science.gov (United States)

    Landau, D. P.; Dünweg, B.; Laradji, M.; Tavazza, F.; Adler, J.; Cannavaccioulo, L.; Zhu, X.

    Extensive Monte Carlo simulations have begun to shed light on our understanding of phase transitions and universality classes for compressible Ising models. A comprehensive analysis of a Landau-Ginsburg-Wilson Hamiltonian for systems with elastic degrees of freedom resulted in the prediction that there should be four distinct cases that would have different behavior, depending upon symmetries and thermodynamic constraints. We shall provide an account of the results of careful Monte Carlo simulations for a simple compressible Ising model that can be suitably modified so as to replicate all four cases.
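
    For readers unfamiliar with the baseline, the rigid-lattice Ising model that the compressible variants extend can be simulated with a few lines of Metropolis Monte Carlo. The sketch below omits the elastic degrees of freedom entirely (which are the whole point of the compressible case) and uses arbitrary lattice size, temperature and sweep counts.

    ```python
    import numpy as np

    def metropolis_ising(L=32, T=2.5, sweeps=200, seed=1):
        """Plain 2D Ising model on a rigid LxL lattice with periodic boundaries."""
        rng = np.random.default_rng(seed)
        spins = rng.choice([-1, 1], size=(L, L))
        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.integers(0, L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * nb          # energy cost of flipping spin (i, j)
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1                # accept the flip
        return abs(spins.mean())

    print("|m| at T=2.5:", metropolis_ising())
    ```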

  10. Markov chain Monte Carlo methods for state-space models with point process observations.

    Science.gov (United States)

    Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

    2012-06-01

    This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Reimannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.
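
    The common ingredients of all these samplers are a point-process log-likelihood and a proposal mechanism. As a minimal illustration only, the sketch below applies a plain random-walk Metropolis sampler to the rate of a homogeneous Poisson process; the Riemannian manifold Hamiltonian Monte Carlo method favoured in the letter, and the state-space structure, are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic point-process data: event count from a homogeneous Poisson process.
    true_rate, t_max = 5.0, 20.0
    n_events = rng.poisson(true_rate * t_max)

    def log_posterior(rate):
        if rate <= 0:
            return -np.inf
        # Poisson-process log-likelihood plus a weak exponential prior on the rate.
        return n_events * np.log(rate) - rate * t_max - 0.1 * rate

    # Random-walk Metropolis (a baseline; the letter compares far more refined MCMC schemes).
    samples, rate = [], 1.0
    for _ in range(5000):
        proposal = rate + rng.normal(0.0, 0.5)
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(rate):
            rate = proposal
        samples.append(rate)

    print("posterior mean rate = %.2f (true %.1f)" % (np.mean(samples[1000:]), true_rate))
    ```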

  11. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed using machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regression are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
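
    The idea of replacing the exact connectivity check with a trained classifier can be illustrated on a toy network. In the sketch below, a small random graph with a hypothetical edge reliability stands in for the gas distribution network; a depth-first search provides exact labels for a modest training set, and a scikit-learn logistic regression then classifies a much larger Monte Carlo sample. All numbers are illustrative only.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy network: a chain backbone plus a few random extra edges.
    n_nodes = 12
    edges = [(i, i + 1) for i in range(n_nodes - 1)]
    edges += [(i, j) for i in range(n_nodes) for j in range(i + 2, n_nodes) if rng.random() < 0.15]
    source, terminal = 0, n_nodes - 1

    def connected(edge_up):
        """Exact source-terminal connectivity check for one failure realization."""
        adj = {k: [] for k in range(n_nodes)}
        for up, (i, j) in zip(edge_up, edges):
            if up:
                adj[i].append(j)
                adj[j].append(i)
        stack, seen = [source], {source}
        while stack:
            v = stack.pop()
            if v == terminal:
                return 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return 0

    # Train the surrogate on a modest number of exactly-labelled samples...
    X_train = rng.random((500, len(edges))) < 0.7      # each edge survives with prob. 0.7
    y_train = np.array([connected(x) for x in X_train])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # ...then use it to classify a much larger Monte Carlo sample cheaply.
    X_big = rng.random((20000, len(edges))) < 0.7
    p_disconnect = 1.0 - clf.predict(X_big).mean()
    print("estimated disconnection probability:", p_disconnect)
    ```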

  12. Particle-number-projected Hartree-Fock-Bogoliubov study with effective shell model interactions

    Science.gov (United States)

    Maqbool, I.; Sheikh, J. A.; Ganai, P. A.; Ring, P.

    2011-04-01

    We perform a particle-number-projected mean-field study using the recently developed symmetry-projected Hartree-Fock-Bogoliubov (HFB) equations. Realistic calculations have been performed for sd- and fp-shell nuclei using the empirical shell-model interactions USD and GXPF1A. It is demonstrated that the mean-field results for energy surfaces obtained with these shell-model interactions are quite similar to those obtained using density functional approaches. Further, it is shown that particle-number-projected results for neutron-rich isotopes can lead to different ground-state shapes in comparison to bare HFB calculations.

  13. Atomic structure of Mg-based metallic glass investigated with neutron diffraction, reverse Monte Carlo modeling and electron microscopy.

    Science.gov (United States)

    Babilas, Rafał; Łukowiec, Dariusz; Temleitner, Laszlo

    2017-01-01

    The structure of a multicomponent metallic glass, Mg 65 Cu 20 Y 10 Ni 5 , was investigated by the combined methods of neutron diffraction (ND), reverse Monte Carlo modeling (RMC) and high-resolution transmission electron microscopy (HRTEM). The RMC method, based on the results of ND measurements, was used to develop a realistic structure model of a quaternary alloy in a glassy state. The calculated model consists of a random packing structure of atoms in which some ordered regions can be indicated. The amorphous structure was also described by peak values of partial pair correlation functions and coordination numbers, which illustrated some types of cluster packing. The N = 9 clusters correspond to the tri-capped trigonal prisms, which are one of Bernal's canonical clusters, and atomic clusters with N = 6 and N = 12 are suitable for octahedral and icosahedral atomic configurations. The nanocrystalline character of the alloy after annealing was also studied by HRTEM. The selected HRTEM images of the nanocrystalline regions were also processed by inverse Fourier transform analysis. The high-angle annular dark-field (HAADF) technique was used to determine phase separation in the studied glass after heat treatment. The HAADF mode allows for the observation of randomly distributed, dark contrast regions of about 4-6 nm. The interplanar spacing identified for the orthorhombic Mg 2 Cu crystalline phase is similar to the value of the first coordination shell radius from the short-range order.

  14. Flexible configuration-interaction shell-model many-body solver

    Energy Technology Data Exchange (ETDEWEB)

    2017-10-01

    BIGSTICK Is a flexible configuration-Interaction open-source shell-model code for the many-fermion problem In a shell model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm one can compute transition probabflity distributions and decompose wave functions into components defined by group theory.

  15. Seniority truncation in an equations-of-motion approach to the shell model

    International Nuclear Information System (INIS)

    Covello, A.; Andreozzi, F.; Gargano, A.; Porrino, A.

    1989-01-01

    This paper presents an equations-of-motion method for treating shell-model problems within the framework of the seniority scheme. This method can be applied at many levels of approximation and represents therefore a valuable tool to further reduce seniority truncated shell-model spaces. To show its practical value the authors report some results of an extensive study of the N = 82 isotones which is currently under way

  16. Large-basis no-core shell model

    Czech Academy of Sciences Publication Activity Database

    Barrett, BR.; Navrátil, Petr; Vary, J. P.

    2002-01-01

    Roč. 704, č. 17 (2002), s. 254C-263C ISSN 0375-9474 Institutional research plan: CEZ:AV0Z1048901 Keywords : monte-carlo calculations * light-nuclei * cross-sections * ground-state * C12 * systems Subject RIV: BE - Theoretical Physics Impact factor: 1.568, year: 2002

  17. Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center-of-mass energy of √s = 13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach to these data which is complementary to dedicated analyses: by taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two-step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.

  18. Quasi-Monte Carlo methods: applications to modeling of light transport in tissue

    Science.gov (United States)

    Schafer, Steven A.

    1996-05-01

    Monte Carlo modeling of light propagation can accurately predict the distribution of light in scattering materials. A drawback of Monte Carlo methods is that they converge inversely with the square root of the number of iterations. Theoretical considerations suggest that convergence which scales inversely with the first power of the number of iterations is possible. We have previously shown that one can obtain at least a portion of that improvement by using van der Corput sequences in place of a conventional pseudo-random number generator. Here, we present our further analysis, and show that quasi-Monte Carlo methods do have limited applicability to light scattering problems. We also discuss potential improvements which may increase the applicability.
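
    A van der Corput sequence is simple to generate: the integer index is written in a chosen base and its digits are mirrored about the radix point. The sketch below generates the base-2 sequence and uses it, purely as an illustration, to estimate a simple free-path-type integral alongside a pseudo-random estimate; it is not the author's tissue-transport code.

    ```python
    import numpy as np

    def van_der_corput(n, base=2):
        """First n terms of the base-b van der Corput low-discrepancy sequence."""
        seq = np.empty(n)
        for i in range(n):
            x, denom, k = 0.0, 1.0, i + 1
            while k > 0:
                denom *= base
                k, remainder = divmod(k, base)
                x += remainder / denom
            seq[i] = x
        return seq

    # Crude illustration of the convergence difference: estimate E[-ln(xi)] = 1
    # (the mean free-path integral) with pseudo-random vs. quasi-random numbers.
    n = 4096
    pseudo = np.random.default_rng(0).random(n)
    quasi = van_der_corput(n)
    print("pseudo-random error :", abs(-np.log(pseudo).mean() - 1.0))
    print("van der Corput error:", abs(-np.log(quasi).mean() - 1.0))
    ```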

  19. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  20. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger...[], Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2011-01-01

    already exist for the study of low energy supersymmetry and the MSSM in particular, this workshop will instead focus on tools for alternative TeV-scale physics models. The main goals of the workshop are: To survey what is available. To provide feedback on user experiences with Monte Carlo tools for BSM...

  1. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction...

  2. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior

  3. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  4. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
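
    The spectrum-unfolding step can be pictured as a constrained fit: the measured central-axis PDD is modelled as a weighted sum of monoenergetic depth-dose kernels, and the weights are adjusted by least squares. The sketch below uses synthetic exponential kernels and SciPy's least_squares in place of the authors' Levenberg-Marquardt implementation; energies, attenuation trends and weights are all made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    depth = np.linspace(0.0, 30.0, 61)                      # depth in water (cm)
    energies = np.array([40.0, 60.0, 80.0, 100.0, 120.0])   # hypothetical spectral bins (keV)

    def mono_pdd(e):
        """Synthetic monoenergetic depth-dose kernel (stands in for Monte Carlo-computed PDDs)."""
        mu = 0.25 * (60.0 / e) ** 0.5                       # made-up attenuation trend with energy
        return np.exp(-mu * depth)

    kernels = np.array([mono_pdd(e) for e in energies])     # shape (n_energies, n_depths)
    true_w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])
    measured = true_w @ kernels                             # plays the role of the measured PDD

    def residual(w):
        return w @ kernels - measured

    fit = least_squares(residual, x0=np.full(len(energies), 0.2), bounds=(0.0, np.inf))
    print("recovered spectrum weights:", np.round(fit.x, 3))
    ```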

  5. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
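
    The decorator idea is straightforward to express in code: every geometry component exposes the same random-position interface, and a decorator wraps another component while adding one feature. The Python sketch below mimics that design with hypothetical class names (SKIRT itself is a C++ code, and these are not its actual components).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    class UniformSphere:
        """Basic building block: uniform density inside a sphere of given radius."""
        def __init__(self, radius):
            self.radius = radius
        def random_position(self):
            while True:                                   # rejection sampling inside the sphere
                p = rng.uniform(-self.radius, self.radius, size=3)
                if np.dot(p, p) <= self.radius ** 2:
                    return p

    class OffsetDecorator:
        """Decorator: shifts any geometry without touching its internals."""
        def __init__(self, geometry, offset):
            self.geometry, self.offset = geometry, np.asarray(offset)
        def random_position(self):
            return self.geometry.random_position() + self.offset

    class ClumpyDecorator:
        """Decorator: with some probability, relocate a position into a random clump."""
        def __init__(self, geometry, clump_fraction=0.3, clump_radius=0.1, n_clumps=5):
            self.geometry = geometry
            self.clump_fraction, self.clump_radius = clump_fraction, clump_radius
            self.centres = [geometry.random_position() for _ in range(n_clumps)]
        def random_position(self):
            p = self.geometry.random_position()
            if rng.random() < self.clump_fraction:
                centre = self.centres[rng.integers(len(self.centres))]
                return centre + rng.normal(0.0, self.clump_radius, size=3)
            return p

    # Decorators chain freely: a clumpy, shifted sphere.
    model = ClumpyDecorator(OffsetDecorator(UniformSphere(1.0), [5.0, 0.0, 0.0]))
    print(model.random_position())
    ```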

  6. A Shell/3D Modeling Technique for the Analysis of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2000-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with shell finite elements. Double Cantilever Beam, End Notched Flexure, and Single Leg Bending specimens were analyzed first using full 3D finite element models to obtain reference solutions. Mixed mode strain energy release rate distributions were computed using the virtual crack closure technique. The analyses were repeated using the shell/3D technique to study the feasibility for pure mode I, mode II and mixed mode I/II cases. Specimens with a unidirectional layup and with a multidirectional layup were simulated. For a local 3D model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.

  7. Determining shell thicknesses in stabilised CdSe@ZnS core-shell nanoparticles by quantitative XPS analysis using an Infinitesimal Columns model

    Energy Technology Data Exchange (ETDEWEB)

    Kalbe, H., E-mail: henryk.kalbe@gmail.com; Rades, S.; Unger, W.E.S.

    2016-10-15

    Highlights: • A novel method to calculate shell thicknesses of core-shell nanoparticles from XPS data is presented. • The approach is widely applicable and combines advantages of existing models. • CdSe@ZnS quantum dots with additional organic stabiliser shell are analysed by XPS. • ZnS and organic shell thicknesses were calculated. • Potential as well as challenges of this and similar approaches are demonstrated. - Abstract: A novel Infinitesimal Columns (IC) simulation model is introduced in this study for the quantitative analysis of core-shell nanoparticles (CSNP) by means of XPS, which combines the advantages of existing approaches. The IC model is applied to stabilised Lumidot™ CdSe/ZnS 610 CSNP for an extensive investigation of their internal structure, i.e. calculation of the two shell thicknesses (ZnS and stabiliser) and exploration of deviations from the idealised CSNP composition. The observed discrepancies between different model calculations can be attributed to the presence of excess stabiliser as well as synthesis residues, demonstrating the necessity of sophisticated purification methods. An excellent agreement is found in the comparison of the IC model with established models from the existing literature, the Shard model and the software SESSA.

  8. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    …generated recursively up to any step greater than one. For a nonlinear time series model, the point forecast for step one can be obtained easily, as in the linear case, but forecasts for steps greater than or equal to … London. Franses, P. H. (1998). Time Series Models for Business and Economic Forecasting, Cambridge University Press.

  9. Perturbation analysis for Monte Carlo continuous cross section models

    International Nuclear Information System (INIS)

    Kennedy, Chris B.; Abdel-Khalik, Hany S.

    2011-01-01

    Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool for completing Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic, as opposed to stochastic, models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in the model's responses, e.g. detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out the PA analysis by reducing problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results which propagate to the subsequent UQ and DA analyses. The introduced errors are problem specific and are therefore expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method to complete PA employing a continuous cross section stochastic model, performed in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing for wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (the Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)

  10. Neutrinoless double-β decay matrix elements in large shell-model spaces with the generator-coordinate method

    Science.gov (United States)

    Jiao, C. F.; Engel, J.; Holt, J. D.

    2017-11-01

    We use the generator-coordinate method (GCM) with realistic shell-model interactions to closely approximate full shell-model calculations of the matrix elements for the neutrinoless double-β decay of 48Ca, 76Ge, and 82Se. We work in one major shell for the first isotope, in the f5/2 p g9/2 space for the second and third, and finally in two major shells for all three. Our coordinates include not only the usual axial deformation parameter β, but also the triaxiality angle γ and neutron-proton pairing amplitudes. In the smaller model spaces our matrix elements agree well with those of full shell-model diagonalization, suggesting that our Hamiltonian-based GCM captures most of the important valence-space correlations. In two major shells, where exact diagonalization is not currently possible, our matrix elements are only slightly different from those in a single shell.

  11. Stability of core–shell nanowires in selected model solutions

    Energy Technology Data Exchange (ETDEWEB)

    Kalska-Szostko, B., E-mail: kalska@uwb.edu.pl; Wykowska, U.; Basa, A.; Zambrzycka, E.

    2015-03-30

    Highlights: • Stability of the core–shell nanowires in environmental solutions were tested. • The most and the least aggressive solutions were determined. • The influence of different solutions on magnetic nanowires core was found out. - Abstract: This paper presents the studies of stability of magnetic core–shell nanowires prepared by electrochemical deposition from an acidic solution containing iron in the core and modified surface layer. The obtained nanowires were tested according to their durability in distilled water, 0.01 M citric acid, 0.9% NaCl, and commercial white wine (12% alcohol). The proposed solutions were chosen in such a way as to mimic food related environment due to a possible application of nanowires as additives to, for example, packages. After 1, 2 and 3 weeks wetting in the solutions, nanoparticles were tested by Infrared Spectroscopy, Atomic Absorption Spectroscopy, Transmission Electron Microscopy and X-ray diffraction methods.

  12. Stability of core–shell nanowires in selected model solutions

    International Nuclear Information System (INIS)

    Kalska-Szostko, B.; Wykowska, U.; Basa, A.; Zambrzycka, E.

    2015-01-01

    Highlights: • Stability of the core–shell nanowires in environmental solutions were tested. • The most and the least aggressive solutions were determined. • The influence of different solutions on magnetic nanowires core was found out. - Abstract: This paper presents the studies of stability of magnetic core–shell nanowires prepared by electrochemical deposition from an acidic solution containing iron in the core and modified surface layer. The obtained nanowires were tested according to their durability in distilled water, 0.01 M citric acid, 0.9% NaCl, and commercial white wine (12% alcohol). The proposed solutions were chosen in such a way as to mimic food related environment due to a possible application of nanowires as additives to, for example, packages. After 1, 2 and 3 weeks wetting in the solutions, nanoparticles were tested by Infrared Spectroscopy, Atomic Absorption Spectroscopy, Transmission Electron Microscopy and X-ray diffraction methods

  13. Inner shell Coulomb ionization by heavy charged particles studied by the SCA model

    International Nuclear Information System (INIS)

    Hansteen, J.M.

    1976-12-01

    The seven papers, introduced by the most recent, subtitled 'A condensed status review', form a survey of the work by the author and his colleagues on K-, L-, and M-shell ionisation by impinging protons, deuterons and α-particles in the period 1971-1976. The SCA model is discussed and compared with other approximations for inner shell Coulomb ionisation. The future aspects in this field are also discussed. (JIW)

  14. All (4,1): Sigma models with (4,q) off-shell supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Chris [The Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Lindström, Ulf [The Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Department of Physics and Astronomy, Division of Theoretical Physics, Uppsala University, Box 516, SE-751 20 Uppsala (Sweden)

    2017-03-08

    Off-shell (4,q) supermultiplets in 2-dimensions are constructed for q=1,2,4. These are used to construct sigma models whose target spaces are hyperkähler with torsion. The off-shell supersymmetry implies the three complex structures are simultaneously integrable and allows us to construct actions using extended superspace and projective superspace, giving an explicit construction of the target space geometries.

  15. Adaptable three-dimensional Monte Carlo modeling of imaged blood vessels in skin

    Science.gov (United States)

    Pfefer, T. Joshua; Barton, Jennifer K.; Chan, Eric K.; Ducros, Mathieu G.; Sorg, Brian S.; Milner, Thomas E.; Nelson, J. Stuart; Welch, Ashley J.

    1997-06-01

    In order to reach a higher level of accuracy in simulation of port wine stain treatment, we propose to discard the typical layered geometry and cylindrical blood vessel assumptions made in optical models and use imaging techniques to define actual tissue geometry. Two main additions to the typical 3D, weighted photon, variable step size Monte Carlo routine were necessary to achieve this goal. First, optical low coherence reflectometry (OLCR) images of rat skin were used to specify a 3D material array, with each entry assigned a label to represent the type of tissue in that particular voxel. Second, the Monte Carlo algorithm was altered so that when a photon crosses into a new voxel, the remaining path length is recalculated using the new optical properties, as specified by the material array. The model has shown good agreement with data from the literature. Monte Carlo simulations using OLCR images of asymmetrically curved blood vessels show various effects such as shading, scattering-induced peaks at vessel surfaces, and directionality-induced gradients in energy deposition. In conclusion, this augmentation of the Monte Carlo method can accurately simulate light transport for a wide variety of nonhomogeneous tissue geometries.
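
    The algorithmic change described above is small but essential: whenever a photon crosses a voxel boundary, the unused portion of its sampled path length must be rescaled by the ratio of the old and new interaction coefficients so that the optical depth is conserved. The one-dimensional sketch below illustrates that bookkeeping with hypothetical attenuation values; it is not the authors' 3D OLCR-based code.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # 1D stack of voxels with per-voxel attenuation coefficients (hypothetical values, mm^-1).
    voxel_size = 0.1
    mu = np.array([0.02, 0.02, 0.35, 0.35, 0.02, 0.02])   # e.g. dermis / vessel / dermis

    def sample_interaction_depth():
        """Propagate one photon along +z, rescaling the remaining path at voxel boundaries."""
        s = -np.log(rng.random()) / mu[0]       # free path drawn with the first voxel's mu
        z, voxel = 0.0, 0
        while voxel < len(mu):
            boundary = (voxel + 1) * voxel_size
            if z + s < boundary:                # interaction occurs inside the current voxel
                return z + s
            # Cross into the next voxel: convert the unused path length to the new medium.
            travelled = boundary - z
            s = (s - travelled) * mu[voxel] / mu[min(voxel + 1, len(mu) - 1)]
            z, voxel = boundary, voxel + 1
        return None                             # photon left the tissue stack

    depths = [sample_interaction_depth() for _ in range(10000)]
    print("fraction escaping:", sum(d is None for d in depths) / len(depths))
    ```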

  16. Absorbed dose in fibrotic microenvironment models employing Monte Carlo simulation

    International Nuclear Information System (INIS)

    Zambrano Ramírez, O.D.; Rojas Calderón, E.L.; Azorín Vega, E.P.; Ferro Flores, G.; Martínez Caballero, E.

    2015-01-01

    The presence or absence of fibrosis, and moreover the multimeric and multivalent nature of the radiopharmaceutical, have recently been reported to have an effect on the radiation absorbed dose in tumor microenvironment models. Fibroblast and myofibroblast cells produce the extracellular matrix by the secretion of proteins which provide structural and biochemical support to cells. The reactive and reparative mechanisms triggered during the inflammatory process cause the production and deposition of extracellular matrix proteins; the abnormal, excessive growth of the connective tissue leads to fibrosis. In this work, microenvironment models (either not fibrotic or fibrotic) composed of seven spheres representing cancer cells of 10 μm in diameter, each with a 5 μm diameter inner sphere (cell nucleus), were created in two distinct radiation transport codes (PENELOPE and MCNP). The purpose of creating these models was to determine the radiation absorbed dose in the nucleus of cancer cells, based on previously reported radiopharmaceutical retention percentages (by HeLa cells) of the 177Lu-Tyr3-octreotate (monomeric) and 177Lu-Tyr3-octreotate-AuNP (multimeric) radiopharmaceuticals. A comparison between the PENELOPE and MCNP results was done, and good agreement was found. The percent difference between the increase percentages of the absorbed dose in the not fibrotic model with respect to the fibrotic model for the PENELOPE and MCNP codes was found to be under 1% for both radiopharmaceuticals. (authors)

  17. TRIPOLI-4{sup ®} Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4{sup ®} is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4{sup ®}, on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4{sup ®} A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial-port. A good agreement between MCNP and TRIPOLI-4{sup ®} is shown; discrepancies are mainly included in the statistical error.

  18. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    Science.gov (United States)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in the understanding of the mechanisms involved. Thus the reliability of predicting in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo modeling predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  19. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for

  20. Monte Carlo model of light transport in scintillating fibers and large scintillators

    International Nuclear Information System (INIS)

    Chakarova, R.

    1995-01-01

    A Monte Carlo model is developed which simulates the light transport in a scintillator surrounded by a transparent layer with different surface properties. The model is applied to analyse the light collection properties of scintillating fibers and a large scintillator wrapped in aluminium foil. The influence of the fiber interface characteristics on the light yield is investigated in detail. Light output results as well as time distributions are obtained for the large scintillator case. 15 refs, 16 figs

  1. Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model

    OpenAIRE

    Dong, Bei; Zhang, Ling; Lu, Xuan

    2008-01-01

    At the end of 2005, the State Council of China passed "The Decision on adjusting the Individual Account of Basic Pension System", which adjusted the individual account in the 1997 basic pension system. In this essay, we will analyze the adjustment above, and use Life Annuity Actuarial Theory to establish the basic pension substitution rate model. Monte Carlo simulation is also used to prove the rationality of the model. Some suggestions are put forward associated with the substitution rate ac...

  2. Efficient Markov Chain Monte Carlo Sampling for Hierarchical Hidden Markov Models

    OpenAIRE

    Turek, Daniel; de Valpine, Perry; Paciorek, Christopher J.

    2016-01-01

    Traditional Markov chain Monte Carlo (MCMC) sampling of hidden Markov models (HMMs) involves latent states underlying an imperfect observation process, and generates posterior samples for top-level parameters concurrently with nuisance latent variables. When potentially many HMMs are embedded within a hierarchical model, this can result in prohibitively long MCMC runtimes. We study combinations of existing methods, which are shown to vastly improve computational efficiency for these hierarchi...

  3. Structure of AgxNa1-xPO3 glasses by neutron diffraction and reverse Monte Carlo modelling

    International Nuclear Information System (INIS)

    Hall, Andreas; Swenson, Jan; Karlsson, Christian; Adams, Stefan; Bowron, Daniel T

    2007-01-01

    We have performed structural studies of the mixed mobile ion phosphate glasses AgxNa1-xPO3 using diffraction experiments and reverse Monte Carlo simulations. This glass system is particularly interesting as a model system for investigations of the mixed mobile ion effect, due to the anomalously low magnitude of the effect in this system. As for previously studied mixed alkali phosphate glasses, with a much more pronounced mixed mobile ion effect, we find no substantial structural alterations of the phosphorus-oxygen network and the local coordination of the mobile cations. Furthermore, the mobile Ag+ and Na+ ions are randomly mixed with no detectable preference for either similar or dissimilar pairs of cations. However, in contrast to mixed mobile ion systems with a very pronounced mixed mobile ion effect, the two types of mobile ions have, in this case, very similar local environments. For all the studied glass compositions the average Ag-O and Na-O distances in the first coordination shell are determined to be 2.5 ± 0.1 Å and 2.5 ± 0.1 Å, and the corresponding average coordination numbers are approximately 3.2 and 3.7, respectively. The similar local coordination of the two types of mobile ions suggests that the energy mismatch for a Na+ ion occupying a site previously occupied by a Ag+ ion (and vice versa) is low, and that this low energy mismatch is responsible for the anomalously weak mixed mobile ion effect.

  4. Mean field theory of nuclei and shell model. Present status and future outlook

    International Nuclear Information System (INIS)

    Nakada, Hitoshi

    2003-01-01

    Many recent topics in nuclear structure concern the problems of unstable nuclei. It has been revealed experimentally that nuclear halos and neutron skins, as well as cluster or molecule-like structures, can be present in unstable nuclei, and that magic numbers well established in stable nuclei occasionally disappear while new ones appear. The shell model based on the mean-field approximation has been successfully applied to stable nuclei to explain nuclear structure quantitatively as a finite many-body system, and it is at present considered the standard model. Whether unstable nuclei can be understood on the same model basis is a matter related to the fundamental principles of nuclear structure theories. In this lecture, the fundamental concepts and framework of nuclear structure theory based on the mean-field theory and the shell model are presented to clarify the problems and to suggest directions for future research. First, fundamental properties of nuclei are described under the subtitles: saturation and magic numbers, nuclear force and effective interactions, nuclear matter, and LS splitting. Then the mean-field theory is presented under the subtitles: the potential model, the mean-field theory, the Hartree-Fock approximation for nuclear matter, density-dependent forces, the semiclassical mean-field theory, mean-field theory and symmetry, the Skyrme interaction and density functional, the density matrix expansion, finite-range interactions, effective masses, and motion of the center of mass. The subsequent section is devoted to the shell model with the subtitles: beyond the mean-field approximation, core polarization, effective interactions of the shell model, one-particle wave functions, nuclear deformation and the shell model, and the cross-shell shell model. Finally, the structure of unstable nuclei is discussed with the subtitles: general remarks on the study of unstable nuclear structure, asymptotic behavior of wave

  5. Dispersion of radionuclides released into a stable planetary boundary layer using a Monte Carlo model

    International Nuclear Information System (INIS)

    Basit, Abdul; Raza, S Shoaib; Irfan, Naseem

    2006-01-01

    In this paper a Monte Carlo model for describing the atmospheric dispersion of radionuclides (represented by Lagrangian particles/neutral tracers) continuously released into a stable planetary boundary layer is presented. The effect of variations in release height and wind directional shear on plume dispersion is studied. The resultant plume concentration and dose rate at the ground are also calculated. The turbulent atmospheric parameters, such as the vertical profiles of the fluctuating wind velocity components and the eddy lifetime, were calculated using empirical relations for a stable atmosphere. The horizontal and vertical dispersion coefficients calculated by the numerical Lagrangian model are compared with the original and modified Pasquill-Gifford and Briggs empirical σ values. The comparison shows that the Monte Carlo model can successfully predict dispersion in a stable atmosphere using the empirical turbulent parameters. The predicted ground concentration and dose rate contours indicate a significant increase in the affected area when wind shear is accounted for in the calculations.
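
    The core of such a Lagrangian model is a Langevin (Ornstein-Uhlenbeck) update of each particle's turbulent velocity followed by an advection step. The sketch below uses a deliberately simplified 1D version with constant velocity variance and timescale, whereas the paper derives height-dependent empirical profiles for the stable boundary layer and includes wind shear; all parameter values are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    n_particles, n_steps, dt = 5000, 600, 1.0      # particles, time steps, step size (s)
    sigma_w, tau = 0.3, 50.0                       # vertical velocity std dev (m/s), timescale (s)
    release_height, bl_top = 50.0, 200.0           # source height, boundary-layer depth (m)

    z = np.full(n_particles, release_height)
    w = rng.normal(0.0, sigma_w, n_particles)

    for _ in range(n_steps):
        # Ornstein-Uhlenbeck update of the vertical velocity, then advect the particles.
        w += -w * dt / tau + sigma_w * np.sqrt(2.0 * dt / tau) * rng.normal(size=n_particles)
        z += w * dt
        # Perfect reflection at the ground and at the top of the boundary layer.
        below = z < 0.0
        z[below], w[below] = -z[below], -w[below]
        above = z > bl_top
        z[above], w[above] = 2.0 * bl_top - z[above], -w[above]

    print("plume mean height: %.1f m, vertical spread: %.1f m" % (z.mean(), z.std()))
    ```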

  6. New model for mines and transportation tunnels external dose calculation using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Allam, Kh. A.

    2017-01-01

    In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The tunnel external dose evaluation model assumes a cylindrical shape of finite thickness with an entrance and with or without an exit. A photon transport model was applied for the exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3–19.9%. The variation of the specific external dose rate with position in the tunnel, building material density, and composition was studied. The new model is more flexible for calculating realistic external doses in any cylindrical tunnel structure. (authors)

  7. Monte Carlo simulations of a model for opinion formation

    Science.gov (United States)

    Bordogna, C. M.; Albano, E. V.

    2007-04-01

    A model for opinion formation based on the Theory of Social Impact is presented and studied by means of numerical simulations. Individuals with two states of opinion are impacted by social interactions with: i) members of the society, ii) a strong leader with a well-defined opinion, and iii) the mass media, which can either support or compete with the leader. Due to this competition, the average opinion of the social group exhibits phase-transition-like behaviour between different states of opinion.

  8. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    … have primarily been based on a Bayesian paradigm, i.e. prior information on the parameters is a prerequisite, but questions about undesirable side effects from the priors are raised. We present a method, based on MCMC methods, that approximates profile log-likelihood functions in directed graphical models … a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both …

  9. Finite element model updating using the shadow hybrid Monte Carlo technique

    Science.gov (United States)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
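
    For orientation, a standard HMC update (leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step) can be written compactly; the SHMC variant replaces the true Hamiltonian in the acceptance test with a shadow Hamiltonian, which is not reproduced here. The target below is a toy correlated Gaussian, not an FEM-updating posterior.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Target: a 2D correlated Gaussian posterior (stand-in for an FEM-updating posterior).
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    prec = np.linalg.inv(cov)

    def neg_log_post(x):
        return 0.5 * x @ prec @ x

    def grad(x):
        return prec @ x

    def hmc_step(x, step=0.15, n_leapfrog=20):
        """One standard HMC update (SHMC would use a shadow Hamiltonian in the accept test)."""
        p = rng.normal(size=x.shape)                     # resample the momentum
        x_new, p_new = x.copy(), p - 0.5 * step * grad(x)
        for _ in range(n_leapfrog):                      # leapfrog integration of the dynamics
            x_new = x_new + step * p_new
            p_new = p_new - step * grad(x_new)
        p_new = p_new + 0.5 * step * grad(x_new)         # convert the last full kick to a half kick
        h_old = neg_log_post(x) + 0.5 * p @ p
        h_new = neg_log_post(x_new) + 0.5 * p_new @ p_new
        return x_new if np.log(rng.random()) < h_old - h_new else x

    x, chain = np.zeros(2), []
    for _ in range(3000):
        x = hmc_step(x)
        chain.append(x)
    print("sample covariance:\n", np.cov(np.array(chain).T))
    ```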

  10. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    Science.gov (United States)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
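
    The MLMC estimator itself is generic: the quantity of interest is written as a telescoping sum over discretisation levels, and each correction term is estimated with coupled fine/coarse paths driven by the same random increments, so that most samples can be spent on the cheap coarse levels. The sketch below applies this to a simple Ornstein-Uhlenbeck SDE rather than to a boundary-layer dispersion model; level counts and sample sizes are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def level_estimator(level, n_samples, T=1.0):
        """Coupled fine/coarse Euler-Maruyama estimates of E[X(T)] for dX = -X dt + dW, X(0)=0."""
        n_fine = 2 ** (level + 1)
        dt_f = T / n_fine
        xf = np.zeros(n_samples)
        xc = np.zeros(n_samples)
        dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_fine, n_samples))
        for k in range(n_fine):
            xf += -xf * dt_f + dW[k]                       # fine path
            if k % 2 == 1:                                 # coarse path uses summed increments
                xc += -xc * (2 * dt_f) + dW[k] + dW[k - 1]
        if level == 0:
            return xf.mean()                               # coarsest level: plain estimate
        return (xf - xc).mean()                            # correction term Y_l = P_l - P_{l-1}

    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with fewer samples on fine levels.
    samples_per_level = [200000, 50000, 12000, 3000]
    estimate = sum(level_estimator(l, n) for l, n in enumerate(samples_per_level))
    print("MLMC estimate of E[X(1)] (exact value is 0):", estimate)
    ```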

  11. MODELING OF NONLINEAR DEFORMATION AND BUCKLING OF ELASTIC INHOMOGENEOUS SHELLS

    Directory of Open Access Journals (Sweden)

    Bazhenov V.A.

    2014-06-01

    The paper outlines the fundamentals of a method for solving static problems of geometrically nonlinear deformation, buckling, and postbuckling behavior of thin thermoelastic inhomogeneous shells with complex-shaped mid-surfaces, geometrical features throughout the thickness, and multilayer structure under complex thermomechanical loading. The method is based on the geometrically nonlinear equations of three-dimensional thermoelasticity and the moment finite-element scheme. The method is justified numerically; comparisons of the solutions with those obtained by other authors and by the software packages LIRA and SCAD are presented.

  12. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity networks into a single sequential Monte Carlo simulation and minimises the combined costs of the gas and electricity system, which include gas supplies, gas storage operation, and electricity generation. The Monte Carlo model calculates reliability indices such as the loss of load probability and the expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of the tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network of uncertainties such as wind variability, gas supply availability and outages of energy infrastructure assets. The analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
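
    The reliability indices mentioned above can be illustrated with a stripped-down, non-sequential generation-adequacy calculation: sample random unit outages, compare available capacity with demand hour by hour, and accumulate loss-of-load and unserved-energy statistics. The sketch below is only that toy version; the paper's model is sequential, cost-minimising and includes the gas network, none of which is reproduced here, and all capacities, outage rates and the demand profile are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)

    unit_capacity = np.array([600.0, 600.0, 400.0, 400.0, 200.0])        # MW
    forced_outage_rate = np.array([0.05, 0.05, 0.08, 0.08, 0.10])
    hours = 168                                                          # one week
    demand = 1200.0 + 300.0 * np.sin(np.linspace(0, 14 * np.pi, hours))  # MW, synthetic profile

    n_trials = 20000
    loss_of_load_hours = 0.0
    energy_unserved = 0.0
    for _ in range(n_trials):
        # Independent availability draw for each unit in each hour (non-sequential sketch).
        up = rng.random((hours, unit_capacity.size)) > forced_outage_rate
        available = up.astype(float) @ unit_capacity                     # MW available each hour
        shortfall = np.maximum(demand - available, 0.0)
        loss_of_load_hours += np.count_nonzero(shortfall)
        energy_unserved += shortfall.sum()

    print("LOLP = %.4f" % (loss_of_load_hours / (n_trials * hours)))
    print("EENS = %.1f MWh/week" % (energy_unserved / n_trials))
    ```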

  13. Ab Initio Study of 40Ca with an Importance Truncated No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Roth, R; Navratil, P

    2007-05-22

    We propose an importance truncation scheme for the no-core shell model, which enables converged calculations for nuclei well beyond the p-shell. It is based on an a priori measure for the importance of individual basis states constructed by means of many-body perturbation theory. Only the physically relevant states of the no-core model space are considered, which leads to a dramatic reduction of the basis dimension. We analyze the validity and efficiency of this truncation scheme using different realistic nucleon-nucleon interactions and compare to conventional no-core shell model calculations for {sup 4}He and {sup 16}O. Then, we present the first converged calculations for the ground state of {sup 40}Ca within no-core model spaces including up to 16{h_bar}{Omega}-excitations using realistic low-momentum interactions. The scheme is universal and can be easily applied to other quantum many-body problems.

  14. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers; here it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  15. Symplectic symmetry and the ab initio no-core shell model

    Energy Technology Data Exchange (ETDEWEB)

    Draayer, J.P.; Dytrych, T.; Sviratcheva, K.D.; Bahri, C. [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 (United States); Vary, J.P. [Department of Physics and Astronomy, Iowa State University, Ames, IA 50011 (United States)

    2007-12-15

    The symplectic symmetry of eigenstates for the 0{sub gs}{sup +} in {sup 16}O and the 0{sub gs}{sup +} and lowest 2{sup +} and 4{sup +} configurations of {sup 12}C that are well-converged within the framework of the no-core shell model with the JISP16 realistic interaction is examined. These states are found to project at the 85-90% level onto very few symplectic representations including the most deformed configuration, which confirms the importance of a symplectic no-core shell model and reaffirms the relevance of the Elliott SU(3) model upon which the symplectic scheme is built. (Author)

  16. Symplectic Symmetry and the Ab Initio No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Draayer, Jerry P.; Dytrych, Tomas; Sviratcheva, Kristina D.; Bahri, Chairul; /Louisiana State U.; Vary, James P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-03-14

    The symplectic symmetry of eigenstates for the 0{sub gs}{sup +} in {sup 16}O and the 0{sub gs}{sup +} and lowest 2{sup +} and 4{sup +} configurations of {sup 12}C that are well-converged within the framework of the no-core shell model with the JISP16 realistic interaction is examined. These states are found to project at the 85-90% level onto very few symplectic representations including the most deformed configuration, which confirms the importance of a symplectic no-core shell model and reaffirms the relevance of the Elliott SU(3) model upon which the symplectic scheme is built.

  17. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales in the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
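    For reference, a minimal Metropolis sampler for the 2D Ising reference model that accumulates the magnetisation moments entering the Binder cumulant U = 1 - <m^4>/(3<m^2>^2) is sketched below. Lattice size, temperature and sweep counts are illustrative, and this is not the code used in the cited study.

```python
import numpy as np

def metropolis_ising_2d(L=16, T=2.3, n_sweeps=20000, n_therm=5000, seed=0):
    """Minimal Metropolis sampler for the 2D Ising model (J = 1, k_B = 1).

    Returns <m^2>, <m^4> and the Binder cumulant U = 1 - <m^4>/(3 <m^2>^2),
    which is used to locate the critical temperature from crossings at
    different lattice sizes.
    """
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    beta = 1.0 / T
    m2_acc = m4_acc = 0.0
    n_meas = 0
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb          # energy change of a single flip
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
        if sweep >= n_therm:                      # measure after thermalisation
            m = spins.mean()
            m2_acc += m * m
            m4_acc += m ** 4
            n_meas += 1
    m2, m4 = m2_acc / n_meas, m4_acc / n_meas
    binder = 1.0 - m4 / (3.0 * m2 * m2)
    return m2, m4, binder
```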

  18. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)

  19. Exact boson mappings for nuclear neutron (proton) shell-model algebras having SU(3) subalgebras

    International Nuclear Information System (INIS)

    Bonatsos, D.; Klein, A.

    1986-01-01

    In this paper the commutation relations of the fermion pair operators of identical nucleons coupled to spin zero are given for the general nuclear major shell in LST coupling. The associated Lie algebras are the unitary symplectic algebras Sp(2M). The corresponding multipole subalgebras are the unitary algebras U(M), which possess SU(3) subalgebras. Number conserving exact boson mappings of both the Dyson and hermitian form are given for the nuclear neutron (proton) s--d, p--f, s--d--g, and p--f--h shells, and their group theoretical structure is emphasized. The results are directly applicable in the case of the s--d shell, while in higher shells the experimentally plausible pseudo-SU(3) symmetry makes them applicable. The final purpose of this work is to provide a link between the shell model and the Interacting Boson Model (IBM) in the deformed limit. As already implied in the work of Draayer and Hecht, it is difficult to associate the boson model developed here with the conventional IBM model. The differences between the two approaches (due mainly to the effects of the Pauli principle) as well as their physical implications are extensively discussed

  20. Interstitial void structure in Cu Sn liquid alloy as revealed from reverse Monte Carlo modelling

    Science.gov (United States)

    Hoyer, W.; Kleinhempel, R.; Lorinczi, A.; Pohlers, A.; Popescu, M.; Sava, F.

    2005-02-01

    A model for the structure of copper-tin liquid alloy has been developed using the standard reverse Monte Carlo method. The interstitial void structure (size distribution) was analysed. The effects of various kinds of voids (small size and large size) on the interference function and radial distribution function were investigated. Predictions related to the formation of some ternary alloys by filling the interstices of the basic alloy were advanced.
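    The essential reverse Monte Carlo move can be sketched as follows (a schematic under stated assumptions, not the actual code behind the cited study): a randomly chosen atom is displaced, the model radial distribution function is recomputed, and the move is accepted whenever the fit to the experimental data improves, or otherwise with a probability that decays with the increase of the misfit. The helper compute_rdf and the cubic box of side box are assumptions supplied by the user.

```python
import numpy as np

def rmc_step(positions, box, g_exp, r_bins, sigma, rng, compute_rdf, max_move=0.2):
    """One schematic reverse Monte Carlo move.

    positions   : (N, 3) array of atomic coordinates in a cubic box of side 'box'
    compute_rdf : user-supplied callable returning the model radial distribution
                  function on r_bins (a real code would update it incrementally)
    The move is accepted if it lowers chi2 = sum((g_model - g_exp)^2) / sigma^2,
    otherwise with probability exp(-delta_chi2 / 2).
    """
    g_old = compute_rdf(positions, box, r_bins)
    chi2_old = np.sum((g_old - g_exp) ** 2) / sigma ** 2

    trial = positions.copy()
    k = rng.integers(len(positions))
    trial[k] = (trial[k] + rng.uniform(-max_move, max_move, size=3)) % box

    g_new = compute_rdf(trial, box, r_bins)
    chi2_new = np.sum((g_new - g_exp) ** 2) / sigma ** 2

    if chi2_new <= chi2_old or rng.random() < np.exp(-(chi2_new - chi2_old) / 2.0):
        return trial, chi2_new   # move accepted
    return positions, chi2_old   # move rejected
```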

  1. Investigation of Multicritical Phenomena in ANNNI Model by Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    A. K. Murtazaev

    2012-01-01

    Full Text Available The anisotropic Ising model with competing interactions is investigated over a wide range of temperatures and |J1/J| parameters by means of Monte Carlo methods. Static critical exponents of the magnetization, susceptibility, heat capacity, and correlation radius are calculated in the neighborhood of the Lifshitz point. Based on the obtained results, a phase diagram is plotted, the coordinates of the Lifshitz point are determined, and the character of the multicritical behavior of the system is established.

  2. Model comparison to evaluate a shell quality bio-complex in layer hens.

    Science.gov (United States)

    Arango, J; Wolc, A; Settar, P; O'Sullivan, N P

    2016-11-01

    Reducing the incidence of egg shell breakage is an important selection goal in egg layer hen breeding. Breaking strength provides an indicator of static shell resistance correlated with shell thickness. Acoustic egg tests combine the shell's resonance profile with egg mass to calculate dynamic stiffness (KDyn), a quantitative indicator of integral shell resistance, and provide a novel direct detection of both cracks and micro-cracks (MCr), making these traits usable in selection programs aiming at improvement of shell quality. A shell quality bio-complex was defined to improve overall shell quality, including breaking strength at the equator (BSe) and poles (BSp), KDyn, and MCr, on multiple eggs per hen-age. A total of 81,667; 101,113; and 72,462 records from 4 generations of three pure lines were evaluated. Two models were tested: I) a four-trait linear repeatability model in the brown-egg line and II) a three-trait linear (BS, KDyn)-threshold (MCr) model in the three lines. Models were implemented with AIREMLF90 and THRGIBBS1F90. Heritability and repeatability (Model I) estimates were: h2 = 0.14, 0.18, 0.33, and 0.02; r = 0.16, 0.28, 0.43, and 0.03 for BSe, BSp, KDyn, and MCr, respectively. Corresponding values in White Plymouth Rock were h2 = 0.14, 0.17, 0.33, and 0.02; r = 0.21, 0.33, 0.44, and 0.04, and in White Leghorn were h2 = 0.14, 0.23, 0.36, and 0.02; r = 0.24, 0.38, 0.52, and 0.02. Genetic correlations between BSe and BSp were between 0.51 and 0.68. The BS traits were moderately correlated with KDyn (+0.23 to +0.51) and tended to be negatively correlated with MCr; the correlation between KDyn and MCr ranged from -0.46 to -0.62. Model II gave similar results, except for an increased h2 = 0.06 and r = 0.09 for MCr. Results indicate that BSe and BSp are different traits, while the incidence of MCr is lowly heritable but showed negative genetic correlations with the other traits. This makes MCr unsuitable for direct selection but favors indirect selection against MCr via BSe, BSp, and KDyn for a holistic selection to ...

  3. OWL: A code for the two-center shell model with spherical Woods-Saxon potentials

    Science.gov (United States)

    Diaz-Torres, Alexis

    2018-03-01

    A Fortran-90 code for solving the two-center nuclear shell model problem is presented. The model is based on two spherical Woods-Saxon potentials and the potential separable expansion method. It describes the single-particle motion in low-energy nuclear collisions, and is useful for characterizing a broad range of phenomena from fusion to nuclear molecular structures.

  4. A Shell/3D Modeling Technique for the Analyses of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2001-01-01

    A shell/3D modeling technique was developed for which a local three-dimensional solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local three-dimensional model and the global structural model which has been meshed with plate or shell finite elements. Double Cantilever Beam (DCB), End Notched Flexure (ENF), and Single Leg Bending (SLB) specimens were modeled using the shell/3D technique to study the feasibility for pure mode I (DCB), mode II (ENF) and mixed mode I/II (SLB) cases. Mixed mode strain energy release rate distributions were computed across the width of the specimens using the virtual crack closure technique. Specimens with a unidirectional layup and with a multidirectional layup where the delamination is located between two non-zero degree plies were simulated. For a local three-dimensional model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
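    For context, the virtual crack closure technique referred to above evaluates the mixed-mode strain energy release rates from nodal forces at the delamination front and relative displacements of the node pair just behind it. In a commonly quoted form (a sketch, with Delta a the length and b the width of the crack-front element),

    $$ G_I = \frac{Z_i\,\Delta w_i}{2\,b\,\Delta a}, \qquad G_{II} = \frac{X_i\,\Delta u_i}{2\,b\,\Delta a}, $$

    where Z_i and X_i are the opening and sliding forces at front node i and Delta w_i, Delta u_i are the corresponding relative opening and sliding displacements behind the front.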

  5. Shell supports

    DEFF Research Database (Denmark)

    Almegaard, Henrik

    2004-01-01

    A new statical and conceptual model for membrane shell structures - the stringer system - has been found. The principle was first published at the IASS conference in Copenhagen (OHL91), and later the theory has been further developed (ALMO3)(ALMO4). From the analysis of the stringer model it can be concluded that all membrane shells can be described by a limited number of basic configurations, of which quite a few have free edges.

  6. Stability of core-shell nanowires in selected model solutions

    Science.gov (United States)

    Kalska-Szostko, B.; Wykowska, U.; Basa, A.; Zambrzycka, E.

    2015-03-01

    This paper presents the studies of stability of magnetic core-shell nanowires prepared by electrochemical deposition from an acidic solution containing iron in the core and modified surface layer. The obtained nanowires were tested according to their durability in distilled water, 0.01 M citric acid, 0.9% NaCl, and commercial white wine (12% alcohol). The proposed solutions were chosen in such a way as to mimic food related environment due to a possible application of nanowires as additives to, for example, packages. After 1, 2 and 3 weeks wetting in the solutions, nanoparticles were tested by Infrared Spectroscopy, Atomic Absorption Spectroscopy, Transmission Electron Microscopy and X-ray diffraction methods.

  7. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence ...

  8. Monte Carlo simulation for statistical mechanics model of ion-channel cooperativity in cell membranes

    Science.gov (United States)

    Erdem, Riza; Aydiner, Ekrem

    2009-03-01

    Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. It was shown that the results well match the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.

  9. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2011-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools ... the goals include: to identify promising models (or processes) for which the tools have not yet been constructed and to start filling these gaps, and to propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider ...

  10. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    Science.gov (United States)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
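    A coupled Monte Carlo/thermal-hydraulics scheme of the kind described is often organised as a Picard (fixed-point) iteration; the sketch below is a generic illustration under that assumption, not the specific coupling developed in the thesis. The callables neutronics_solve and th_solve, and the relaxation factor, are placeholders.

```python
def coupled_mc_th(neutronics_solve, th_solve, t_fuel0, max_iter=20, tol=1e-3, relax=0.5):
    """Schematic Picard iteration between a Monte Carlo neutronics solver and a
    thermal-hydraulics solver, both passed in as user-supplied callables.

    neutronics_solve(t_fuel) -> (k_eff, power_distribution)
    th_solve(power_distribution) -> updated fuel-temperature field
    'relax' is an under-relaxation factor that stabilises the fixed point.
    """
    t_fuel = list(t_fuel0)
    k_eff, power = neutronics_solve(t_fuel)
    for it in range(max_iter):
        t_new = th_solve(power)
        # under-relaxed temperature update to damp oscillations between codes
        t_fuel = [(1.0 - relax) * told + relax * tnew
                  for told, tnew in zip(t_fuel, t_new)]
        k_new, power = neutronics_solve(t_fuel)
        if abs(k_new - k_eff) < tol:             # converged on the eigenvalue
            return k_new, power, t_fuel
        k_eff = k_new
    return k_eff, power, t_fuel
```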

  11. Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa

    Science.gov (United States)

    Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.

    2012-01-01

    We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
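    The Coulomb failure criterion incorporated in the model can be written in its usual form (a sketch; the cohesion and friction values adopted for Europa's ice shell are not specified here): slip on a preexisting fault initiates when the tidal shear stress overcomes frictional resistance,

    $$ |\tau| \;\geq\; c + \mu\,\sigma_n, $$

    with tau the shear stress resolved on the fault, sigma_n the normal stress (compression positive), mu the coefficient of friction and c the cohesion; the sense of slip is then taken from the gradient of the tidal shear stress, as described above.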

  12. A Shell/3D Modeling Technique for Delaminations in Composite Laminates

    Science.gov (United States)

    Krueger, Ronald

    1999-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provide a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with plate or shell finite elements. For simple double cantilever beam (DCB), end notched flexure (ENF), and single leg bending (SLB) specimens, mixed mode energy release rate distributions were computed across the width from nonlinear finite element analyses using the virtual crack closure technique. The analyses served to test the accuracy of the shell/3D technique for the pure mode I case (DCB), mode II case (ENF) and a mixed mode I/II case (SLB). Specimens with a unidirectional layup where the delamination is located between two 0 plies, as well as a multidirectional layup where the delamination is located between two non-zero degree plies, were simulated. For a local 3D model extending to a minimum of about three specimen thicknesses in front of and behind the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.

  13. Use of shell model calculations in R-matrix studies of neutron-induced reactions

    International Nuclear Information System (INIS)

    Knox, H.D.

    1986-01-01

    R-matrix analyses of neutron-induced reactions for many of the lightest p-shell nuclei are difficult due to a lack of distinct resonance structure in the reaction cross sections. Initial values for the required R-matrix parameters, E sub(lambda) and γ sub(lambda c) for states in the compound system, can be obtained from shell model calculations. In the present work, the results of recent shell model calculations for the lithium isotopes have been used in R-matrix analyses of 6Li+n and 7Li+n reactions for E sub(n) ... 7Li and 8Li on the 6Li+n and 7Li+n reaction mechanisms and cross sections are discussed. (author)

  14. The Dynamic Similitude Design of a Thin-Wall Cylindrical Shell with Sealing Teeth and Its Geometrically Distorted Model

    Directory of Open Access Journals (Sweden)

    Zhong Luo

    2015-02-01

    Full Text Available This study investigates a method of designing a simplified cylindrical shell model that accurately predicts the dynamic characteristics of a prototype cylindrical shell with sealing teeth. The significance of this study is that it provides an acceptable process which guides the design of test models. Firstly, an equivalent cylindrical shell with rectangular rings is designed by combining the energy equation and numerical analysis. Then the transfer matrices of the stiffened cylindrical shell and the cylindrical shell are employed to calculate the equivalent thickness of the simplified cylindrical shell commonly used in model tests. Further, the equivalent thicknesses are normalized by introducing an average equivalent thickness. The distorted scaling laws and size applicable intervals are investigated to reduce the errors caused by the normalization. Finally, a 42CrMo cylindrical shell with sealing teeth is used as a prototype and a number 45 steel scaled-down cylindrical shell is used as a distorted test model. The accuracy of the prediction is verified by using experimental data, and the results indicate that the distorted model can predict the characteristics of the stiffened cylindrical shell prototype with good accuracy.

  15. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  16. Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters

    International Nuclear Information System (INIS)

    Zagni, F.; Cicoria, G.; Lucconi, G.; Infantino, A.; Lodi, F.; Marengo, M.

    2014-01-01

    Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely the “PENELOPE” EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with the discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first instance determination of new calibration factors for non-standard radionuclides, for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regards to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the containers configuration. - Highlights: • We developed a Monte Carlo model of a radionuclide activity meter using Geant4. • The model was validated using several

  17. Modeling dose-rate on/over the surface of cylindrical radio-models using Monte Carlo methods

    International Nuclear Information System (INIS)

    Xiao Xuefu; Ma Guoxue; Wen Fuping; Wang Zhongqi; Wang Chaohui; Zhang Jiyun; Huang Qingbo; Zhang Jiaqiu; Wang Xinxing; Wang Jun

    2004-01-01

    Objective: To determine the dose-rates on/over the surface of 10 cylindrical radio-models, which belong to the Metrology Station of Radio-Geological Survey of CNNC. Methods: The dose-rates on/over the surface of the 10 cylindrical radio-models were modeled using the Monte Carlo code MCNP, and were also measured with a high-pressure gas ionization chamber dose-rate meter. The values of dose-rate modeled using the MCNP code were compared with those obtained by the authors in the present experimental measurement, and with those obtained by other workers previously. Some factors causing the discrepancy between the data obtained by the authors using the MCNP code and the data obtained using other methods are discussed in this paper. Results: The dose-rates on/over the surface of the 10 cylindrical radio-models, obtained using the MCNP code, were in good agreement with those obtained by other workers using the theoretical method. They agreed within ±5% in general, and the maximum discrepancy was less than 10%. Conclusions: Provided that each factor needed for the Monte Carlo code is correct, the dose-rates on/over the surface of cylindrical radio-models modeled using the Monte Carlo code are correct with an uncertainty of 3%

  18. Modeling of exchange bias in the antiferromagnetic (core)/ferromagnetic (shell) nanoparticles with specialized shapes

    International Nuclear Information System (INIS)

    Hu Yong; Liu Yan; Du An

    2011-01-01

    Zero-field-cooled (ZFC) and field-cooled (FC) hysteresis loops of egg- and ellipsoid-shaped nanoparticles with inverted ferromagnetic (FM)-antiferromagnetic (AFM) core-shell morphologies are simulated using a modified Monte Carlo method, which takes into account both the thermal fluctuations and energy barriers during the rotation of spin. Pronounced exchange bias (EB) fields and reduced coercivities are obtained in the FC hysteresis loops. The analysis of the microscopic spin configurations allows us to conclude that the magnetization reversal occurs by means of the nucleation process during both the ZFC and FC hysteresis branches. The nucleation takes place in the form of 'sparks' resulting from the energy competition and the morphology of the nanoparticle. The appearance of EB in the FC hysteresis loops is only dependent on that the movements of 'sparks' driven by magnetic field at both branches of hysteresis loops are not along the same axis, which is independent of the strength of AFM anisotropy. The tilt of 'spark' movement with respect to the symmetric axis implies the existence of additional unidirectional anisotropy at the AFM/FM interfaces as a consequence of the surplus magnetization in the AFM core, which is the commonly accepted origin of EB. Our simulations allow us to clarify the microscopic mechanisms of the observed EB behavior, not accessible in experiments. - Highlights: → A modified Monte Carlo method considers thermal fluctuations and energy barriers. → Egg and ellipsoid nanoparticles with inverted core-shell morphology are studied. → Pronounced exchange bias fields and reduced coercivities may be detected. → 'Sparks' representing nucleation sites due to energy competition are observed. → 'Sparks' can reflect or check directly and vividly the origin of exchange bias.

  19. Development of Shear Deformable Laminated Shell Element and Its Application to ANCF Tire Model

    Science.gov (United States)

    2015-04-24

    Hiroki Yamashita, Department of Mechanical and Industrial Engineering ... A shear deformable laminated shell element is developed for application to the modeling of the fiber-reinforced rubber (FRR) structure of the physics-based ANCF tire model. The complex deformation coupling ... cornering forces. Since a tire consists of layers of plies and steel belts embedded in rubber, the tire structure needs to be modeled by cord-rubber ...

  20. MARMOSET: The Path from LHC Data to the New Standard Model via On-Shell Effective Theories

    Energy Technology Data Exchange (ETDEWEB)

    Arkani-Hamed, Nima; Schuster, Philip; Toro, Natalia; /Harvard U., Phys. Dept.; Thaler, Jesse; /UC, Berkeley /LBL, Berkeley; Wang, Lian-Tao; /Princeton U.; Knuteson, Bruce; /MIT, LNS; Mrenna, Stephen; /Fermilab

    2007-03-01

    We describe a coherent strategy and set of tools for reconstructing the fundamental theory of the TeV scale from LHC data. We show that On-Shell Effective Theories (OSETs) effectively characterize hadron collider data in terms of masses, production cross sections, and decay modes of candidate new particles. An OSET description of the data strongly constrains the underlying new physics, and sharply motivates the construction of its Lagrangian. Simulating OSETs allows efficient analysis of new-physics signals, especially when they arise from complicated production and decay topologies. To this end, we present MARMOSET, a Monte Carlo tool for simulating the OSET version of essentially any new-physics model. MARMOSET enables rapid testing of theoretical hypotheses suggested by both data and model-building intuition, which together chart a path to the underlying theory. We illustrate this process by working through a number of data challenges, where the most important features of TeV-scale physics are reconstructed with as little as 5 fb{sup -1} of simulated LHC signals.

  1. Mathematical Modeling of the Thermal Shell State of the Cylindrical Cryogenic Tank During Filling and Emptying

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin

    2015-01-01

    Full Text Available Liquid hydrogen and oxygen are used as the fuel and oxidizer for liquid rocket engines. Liquefied natural gas, which is based on methane, is seen as a promising motor fuel for internal combustion engines. One of the technical problems arising from the use of these cryogenic liquids is providing containers for their storage, transport and use in the propulsion system. In the design and operation of such vessels it is necessary to have reliable information about their thermal state, on which depend both the loss of cryogenic fluid due to evaporation and the stress-strain state of the structural elements of the containers. An uneven temperature distribution along the generatrix of the thin-walled cylindrical shell of a rocket cryogenic tank, localized in the zone of the cryogenic liquid level, leads to a curvature of the shell and reduces the permissible axial load because of the hazard of shell buckling, both during launch preparation and in flight under increasing acceleration. Movement of the cryogenic liquid level during filling or emptying of the tank can, for certain combinations of parameters, increase the local nonuniformity of the temperature distribution. Along with experimental studies of the shell thermal state of cryogenic containers, mathematical modeling provides the information needed for designing and testing the construction of cryogenic tanks. In this study a mathematical model is built that takes into account the features of heat transfer in a cryogenic container, including boiling of the cryogenic liquid at the inner surface of the container. This mathematical model describes the thermal state of the thin-walled shell of a cylindrical cryogenic tank during filling and emptying. The work also presents a quantitative analysis of this model for the cases of a fixed liquid level, level movement at a constant speed, and harmonic oscillation of the level about a middle position. The quantitative analysis of this model has made it possible to find the limit options

  2. Quantum Monte Carlo simulation for S=1 Heisenberg model with uniaxial anisotropy

    International Nuclear Information System (INIS)

    Tsukamoto, Mitsuaki; Batista, Cristian; Kawashima, Naoki

    2007-01-01

    We perform quantum Monte Carlo simulations for the S=1 Heisenberg model with a uniaxial anisotropy. The system exhibits a phase transition as we vary the anisotropy, and long range order appears at a finite temperature when the exchange interaction J is comparable to the uniaxial anisotropy D. We investigate the quantum critical phenomena of this model and obtain the line of the phase transition, which approaches a power law with logarithmic corrections at low temperature. We derive the form of the logarithmic corrections analytically and compare it to our simulation results.
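    The model studied is presumably of the standard form (sign conventions vary between papers; this is a sketch of the commonly used Hamiltonian rather than a quotation from the article):

    $$ H = J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j + D \sum_i \left(S_i^z\right)^2, \qquad S = 1, $$

    where the single-ion anisotropy D competes with the exchange J, driving the system between the magnetically ordered phase and the large-D quantum paramagnet discussed above.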

  3. Modeling the cathode region of noble gas mixture discharges using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Donko, Z.; Janossy, M.

    1992-10-01

    A model of the cathode dark space of DC glow discharges was developed in order to study the effects caused by mixing small amounts (≤2%) of other noble gases (Ne, Ar, Kr and Xe) to He. The motion of charged particles was described by Monte Carlo simulation. Several discharge parameters (electron and ion energy distribution functions, electron and ion current densities, reduced ionization coefficients, and current density-voltage characteristics) were obtained. Small amounts of admixtures were found to modify significantly the discharge parameters. Current density-voltage characteristics obtained from the model showed good agreement with experimental data. (author) 40 refs.; 14 figs

  4. Monte Carlo modelling of the Belgian materials testing reactor BR2: present status

    International Nuclear Information System (INIS)

    Verboomen, B.; Aoust, Th.; Raedt, Ch. de; Beeckmans de West-Meerbeeck, A.

    2001-01-01

    A very detailed 3-D MCNP-4B model of the BR2 reactor was developed to perform all neutron and gamma calculations needed for the design of new experimental irradiation rigs. The Monte Carlo model of BR2 includes the nearly exact geometrical representation of fuel elements (now with their axially varying burn-up), of partially inserted control and regulating rods, of experimental devices and of radioisotope production rigs. The multiple level-geometry possibilities of MCNP-4B are fully exploited to obtain sufficiently flexible tools to cope with the very changing core loading. (orig.)

  5. Development of a Monte Carlo model for the Brainlab microMLC

    Energy Technology Data Exchange (ETDEWEB)

    Belec, Jason; Patrocinio, Horacio; Verhaegen, Frank [Medical Physics Department, McGill University Health Centre, McGill University, Montreal General Hospital, 1650 Cedar Avenue, Montreal, Quebec, H3G1A4 (Canada)

    2005-03-07

    Stereotactic radiosurgery with several static conformal beams shaped by a micro multileaf collimator ({mu}MLC) is used to treat small irregularly shaped brain lesions. Our goal is to perform Monte Carlo calculations of dose distributions for certain treatment plans as a verification tool. A dedicated {mu}MLC component module for the BEAMnrc code was developed as part of this project and was incorporated in a model of the Varian CL2300 linear accelerator 6 MV photon beam. As an initial validation of the code, the leaf geometry was visualized by tracing particles through the component module and recording their position each time a leaf boundary was crossed. The leaf dimensions were measured and the leaf material density and interleaf air gap were chosen to match the simulated leaf leakage profiles with film measurements in a solid water phantom. A comparison between Monte Carlo calculations and measurements (diode, radiographic film) was performed for square and irregularly shaped fields incident on flat and homogeneous water phantoms. Results show that Monte Carlo calculations agree with measured dose distributions to within 2% and/or 1 mm except for field size smaller than 1.2 cm diameter where agreement is within 5% due to uncertainties in measured output factors.

  6. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    International Nuclear Information System (INIS)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg

    2015-01-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user friendly graphical interface was developed allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed a good agreement within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the used CT dataset, tube potential, filter material/thickness and applicator size.
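    The KERMA approximation with a track-length estimator mentioned above is commonly implemented as follows (a sketch in standard notation, not necessarily the exact scoring expression of this framework): the dose in a voxel of volume V is approximated by the collision kerma accumulated from photon track segments,

    $$ D \approx K_{\mathrm{col}} = \frac{1}{V} \sum_i w_i\, l_i\, E_i \left( \frac{\mu_{\mathrm{en}}(E_i)}{\rho} \right), $$

    where w_i is the statistical weight, l_i the track length of segment i inside the voxel, E_i the photon energy and mu_en/rho the mass energy-absorption coefficient of the local medium.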

  7. Coupled Monte Carlo simulation and Copula theory for uncertainty analysis of multiphase flow simulation models.

    Science.gov (United States)

    Jiang, Xue; Na, Jin; Lu, Wenxi; Zhang, Yu

    2017-11-01

    Simulation-optimization techniques are effective in identifying an optimal remediation strategy. Simulation models with uncertainty, primarily in the form of parameter uncertainty with different degrees of correlation, influence the reliability of the optimal remediation strategy. In this study, a coupled Monte Carlo simulation and Copula theory is proposed for uncertainty analysis of a simulation model when parameters are correlated. Using the self-adaptive weight particle swarm optimization Kriging method, a surrogate model was constructed to replace the simulation model and reduce the computational burden and time consumption resulting from repeated and multiple Monte Carlo simulations. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were employed to identify whether the t Copula function or the Gaussian Copula is the optimal Copula function to match the relevant structure of the parameters. The results show that both the AIC and BIC values of the t Copula function are less than those of the Gaussian Copula function. This indicates that the t Copula function is the optimal function for matching the relevant structure of the parameters. The outputs of the simulation model when parameter correlation was considered and when it was ignored were compared. The results show that the amplitude of the fluctuation interval when parameter correlation was considered is less than the corresponding amplitude when parameter estimation was ignored. Moreover, it was demonstrated that considering the correlation among parameters is essential for uncertainty analysis of a simulation model, and the results of uncertainty analysis should be incorporated into the remediation strategy optimization process.
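    To illustrate how a copula couples correlated parameters to repeated Monte Carlo model runs, the sketch below samples parameter sets through a Gaussian copula (the study found the t copula to fit better; the Gaussian case is shown only because it is simpler to write down). The marginal distributions and correlation matrix are placeholders.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, corr, marginals, seed=0):
    """Draw correlated parameter sets via a Gaussian copula (illustrative).

    corr      : correlation matrix of the latent multivariate normal
    marginals : list of frozen scipy.stats distributions, one per parameter
    Latent normal draws are mapped to uniforms with the normal CDF and then
    to each marginal with its inverse CDF (ppf).
    """
    rng = np.random.default_rng(seed)
    dim = len(marginals)
    z = rng.multivariate_normal(np.zeros(dim), corr, size=n)
    u = stats.norm.cdf(z)                      # uniforms that carry the dependence
    x = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
    return x

# Example: two correlated lognormal parameters fed to repeated model runs
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
params = gaussian_copula_sample(1000, corr, [stats.lognorm(0.5), stats.lognorm(0.3)])
```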

  8. Numerical Demons in Monte Carlo Estimation of Bayesian Model Evidence with Application to Soil Respiration Models

    Science.gov (United States)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2016-12-01

    Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high dimensional problems with a complex sampling space. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to the numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can become a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating point number that a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, namely the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME by investing in computational effort, we show that arithmetic underflow can
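    One standard guard against the underflow discussed above is to keep every estimator in log space; below is a minimal sketch (ours, not the authors' code) of the arithmetic-mean and harmonic-mean evidence estimators written with logsumexp. The same device applies to the ratios of powered likelihoods appearing in steppingstone sampling.

```python
import numpy as np
from scipy.special import logsumexp

def log_bme_arithmetic_mean(log_liks_prior):
    """log of the prior-sampling arithmetic-mean estimator of BME,
    computed entirely in log space to avoid arithmetic underflow."""
    log_liks_prior = np.asarray(log_liks_prior)
    return logsumexp(log_liks_prior) - np.log(len(log_liks_prior))

def log_bme_harmonic_mean(log_liks_posterior):
    """log of the posterior-sampling harmonic-mean estimator of BME,
    again using logsumexp instead of exponentiating tiny likelihoods."""
    log_liks_posterior = np.asarray(log_liks_posterior)
    return np.log(len(log_liks_posterior)) - logsumexp(-log_liks_posterior)
```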

  9. Efficient matrix-vector products for large-scale nuclear Shell-Model calculations

    OpenAIRE

    Toivanen, J.

    2006-01-01

    A method to accelerate the matrix-vector products of j-scheme nuclear Shell-Model Configuration Interaction (SMCI) calculations is presented. The method takes advantage of the matrix product form of the j-scheme proton-neutron Hamiltonian matrix. It is shown that the method can speed up unrestricted large-scale pf-shell calculations by up to two orders of magnitude compared to previously existing related j-scheme method. The new method allows unrestricted SMCI calculations up to j-scheme dime...

  10. Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn

    Energy Technology Data Exchange (ETDEWEB)

    Yakhelef, A.; Bouldjedri, A. [Physics Department, Farhat abbas University, Setif (Algeria); Physics Department, Hadj Lakhdar University, Batna (Algeria)

    2012-06-27

    New Shell Model calculations for even-even isotopes {sup 104-108}Sn and {sup 106,108}Te, in the vicinity of {sup 100}Sn have been performed. The calculations have been carried out using the windows version of NuShell-MSU. The two body matrix elements TBMEs of the effective interaction between valence nucleons are obtained from the renormalized two body effective interaction based on G-matrix derived from the CD-bonn nucleon-nucleon potential. The single particle energies of the proton and neutron valence spaces orbitals are defined from the available spectra of lightest odd isotopes of Sb and Sn respectively.

  11. Laboratory Measurement And Theoretical Modeling of K-Shell X-Ray Lines From Inner-Shell Excited And Ionized Ions of Oxygen

    Energy Technology Data Exchange (ETDEWEB)

    Gu, M.F.

    2005-04-08

    We present high resolution laboratory spectra of K-shell X-ray lines from inner-shell excited and ionized ions of oxygen, obtained with a reflection grating spectrometer on the electron beam ion trap (EBIT-I) at the Lawrence Livermore National Laboratory. Only with a multi-ion model including all major atomic collisional and radiative processes, are we able to identify the observed K-shell transitions of oxygen ions from O III to O VI. The wavelengths and associated errors for some of the strongest transitions are given, taking into account both the experimental and modeling uncertainties. The present data should be useful in identifying the absorption features present in astrophysical sources, such as active galactic nuclei and X-ray binaries. They are also useful in providing benchmarks for the testing of theoretical atomic structure calculations.

  12. Study of neutron-rich Mo isotopes by the projected shell model ...

    Indian Academy of Sciences (India)

    also predicts a decrease in the quantum of triaxiality with increasing neutron number and angular momentum for odd mass neutron-rich Mo isotopes. Keywords. Neutron-rich nuclei; electromagnetic quantities; projected shell model. PACS Nos 21.60.Cs; 21.10.Ky; 21.10.Re; 27.60.+j. 1. Introduction. Neutron-rich nuclei in the ...

  13. Recursive calculation of matrix elements for the generalized seniority shell model

    International Nuclear Information System (INIS)

    Luo, F.Q.; Caprio, M.A.

    2011-01-01

    A recursive calculational scheme is developed for matrix elements in the generalized seniority scheme for the nuclear shell model. Recurrence relations are derived which permit straightforward and efficient computation of matrix elements of one-body and two-body operators and basis state overlaps.

  14. On The Estimation of Parameters of Thick Current Shell Model of ...

    African Journals Online (AJOL)

    Equatorial electrojet, an intense current flowing eastward in the low latitude ionosphere within the narrow region flanking the dip equator, is a major phenomenon of interest in geomagnetic field studies. For the first time the five parameters required to fully describe Onwumechili's composite thick current shell model ...

  15. Cluster-orbital shell model approach and developments for study of neutron-rich systems

    CERN Document Server

    Masui, H; Ikeda, K

    2010-01-01

    We develop an m-scheme approach of the cluster-orbital shell model (COSM). By using the Gaussian as the radial part of the basis function, components of the unbound states are correctly taken into account. We apply the m-scheme COSM to oxygen isotopes and study the energies and r.m.s. radii.

  16. Level density of the sd-nuclei-Statistical shell-model predictions

    Science.gov (United States)

    Karampagia, S.; Senkov, R. A.; Zelevinsky, V.

    2018-03-01

    Accurate knowledge of the nuclear level density is important both from a theoretical viewpoint as a powerful instrument for studying nuclear structure and for numerous applications. For example, astrophysical reactions responsible for the nucleosynthesis in the universe can be understood only if we know the nuclear level density. We use the configuration-interaction nuclear shell model to predict nuclear level density for all nuclei in the sd-shell, both total and for individual spins (only with positive parity). To avoid the diagonalization in large model spaces we use the moments method based on statistical properties of nuclear many-body systems. In the cases where the diagonalization is possible, the results of the moments method practically coincide with those from the shell-model calculations. Using the computed level densities, we fit the parameters of the Constant Temperature phenomenological model, which can be used by practitioners in their studies of nuclear reactions at excitation energies appropriate for the sd-shell nuclei.
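    The Constant Temperature parameterization referred to above has the familiar form (quoted here as a sketch of the standard expression, with fit parameters T and E_0):

    $$ \rho_{\mathrm{CT}}(E) = \frac{1}{T}\, \exp\!\left( \frac{E - E_0}{T} \right), $$

    so that fitting T and E_0 to the moments-method level densities gives a compact description usable in reaction calculations at excitation energies appropriate for the sd-shell nuclei.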

  17. On large-scale shell-model calculations in {sup 4}He

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F.; Flynn, M.F. (Manchester Univ. (UK). Inst. of Science and Technology); Bosca, M.C.; Buendia, E.; Guardiola, R. (Granada Univ. (Spain). Dept. de Fisica Moderna)

    1990-03-01

    Most shell-model calculations of {sup 4}He require very large basis spaces for the energy spectrum to stabilise. Coupled cluster methods and an exact treatment of the centre-of-mass motion dramatically reduce the number of configurations. We thereby obtain almost exact results with small bases, but which include states of very high excitation energy. (author).

  18. First-Principles Modeling of Core/Shell Quantum Dot Sensitized Solar Cells

    NARCIS (Netherlands)

    Azpiroz, Jon Mikel; Infante, Ivan; De Angelis, Filippo

    2015-01-01

    We report on the density functional theory (DFT) modeling of core/shell quantum dot (QD) sensitized solar cells (QDSSCs), a device architecture that holds great potential in photovoltaics but has not been fully exploited so far. To understand the working mechanisms of this kind of solar cells, we

  19. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis in order to analyze the complex characteristics of nonstationary data under climate change. This study proposes the nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared with existing models by Monte Carlo simulation to investigate the characteristics of the models and their applicability.
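    A sketch of one common way to write such a nonstationary generalized logistic (GLO) model, assuming the Hosking parameterization with a linear trend in the location parameter (the paper's specific covariate structure may differ):

    $$ F(x \mid t) = \left[ 1 + \left\{ 1 - \frac{\kappa\,\bigl(x - \xi(t)\bigr)}{\alpha} \right\}^{1/\kappa} \right]^{-1}, \qquad \xi(t) = \xi_0 + \xi_1 t, $$

    with scale alpha > 0 and shape kappa; time-dependent forms of alpha or kappa can be introduced in the same way, and the additional trend parameters are estimated jointly by maximum likelihood as described above.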

  20. Equilibrium resurfacing of Venus: Results from new Monte Carlo modeling and implications for Venus surface histories

    Science.gov (United States)

    Bjonnes, E. E.; Hansen, V. L.; James, B.; Swenson, J. B.

    2012-02-01

    Venus' impact crater population imposes two observational constraints that must be met by possible model surface histories: (1) a near-random spatial distribution of ˜975 craters, and (2) few obviously modified impact craters. Catastrophic resurfacing obviously meets these constraints, but equilibrium resurfacing histories require a balance between crater distribution and modification to be viable. Equilibrium resurfacing scenarios with small incremental resurfacing areas meet constraint 1 but not 2, whereas those with large incremental resurfacing areas meet constraint 2 but not 1. Results of Monte Carlo modeling of equilibrium resurfacing (Strom et al., 1994) are widely cited as support for catastrophic resurfacing hypotheses and as evidence against hypotheses of equilibrium resurfacing. However, the Monte Carlo models did not consider intermediate-size incremental resurfacing areas, nor did they consider histories in which the era of impact crater formation outlasts an era of equilibrium resurfacing. We construct three suites of Monte Carlo experiments that examine incremental resurfacing areas not previously considered (5%, 1%, 0.7%, and 0.1%), and that vary the duration of resurfacing relative to impact crater formation time (1:1 [suite A], 5:6 [suite B], and 2:3 [suite C]). We test the model results against the two impact crater constraints. Several experiments met both constraints. The shorter the time period of equilibrium resurfacing, or the longer the time of crater formation following the cessation of equilibrium resurfacing, the larger the possible areas of incremental resurfacing that satisfy both constraints. Equilibrium resurfacing is statistically viable for suite A at 0.1%, suite B at 0.1%, and suite C for 1%, 0.7%, and 0.1% areas of incremental resurfacing.

  1. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, entitled the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handling the unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (considering parameter uncertainty only, and considering parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.

  2. Monte Carlo Modelling of Single-Crystal Diffuse Scattering from Intermetallics

    Directory of Open Access Journals (Sweden)

    Darren J. Goossens

    2016-02-01

    Full Text Available Single-crystal diffuse scattering (SCDS reveals detailed structural insights into materials. In particular, it is sensitive to two-body correlations, whereas traditional Bragg peak-based methods are sensitive to single-body correlations. This means that diffuse scattering is sensitive to ordering that persists for just a few unit cells: nanoscale order, sometimes referred to as “local structure”, which is often crucial for understanding a material and its function. Metals and alloys were early candidates for SCDS studies because of the availability of large single crystals. While great progress has been made in areas like ab initio modelling and molecular dynamics, a place remains for Monte Carlo modelling of model crystals because of its ability to model very large systems; important when correlations are relatively long (though still finite in range. This paper briefly outlines, and gives examples of, some Monte Carlo methods appropriate for the modelling of SCDS from metallic compounds, and considers data collection as well as analysis. Even if the interest in the material is driven primarily by magnetism or transport behaviour, an understanding of the local structure can underpin such studies and give an indication of nanoscale inhomogeneity.

  3. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    International Nuclear Information System (INIS)

    Abhold, M.E.; Baker, M.C.

    1999-01-01

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP) predict neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions

  4. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  5. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  6. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  7. Proceedings of a symposium on the occasion of the 40th anniversary of the nuclear shell model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, T.S.H.; Wiringa, R.B. (eds.)

    1990-03-01

    This report contains papers on the following topics: excitation of 1p-1h stretched states with the (p,n) reaction as a test of shell-model calculations; on Z=64 shell closure and some high spin states of {sup 149}Gd and {sup 159}Ho; saturating interactions in {sup 4}He with density dependence; are short-range correlations visible in very large-basis shell-model calculations ; recent and future applications of the shell model in the continuum; shell model truncation schemes for rotational nuclei; the particle-hole interaction and high-spin states near A-16; magnetic moment of doubly closed shell +1 nucleon nucleus {sup 41}Sc(I{sup {pi}}=7/2{sup {minus}}); the new magic nucleus {sup 96}Zr; comparing several boson mappings with the shell model; high spin band structures in {sup 165}Lu; optical potential with two-nucleon correlations; generalized valley approximation applied to a schematic model of the monopole excitation; pair approximation in the nuclear shell model; and many-particle, many-hole deformed states.

  8. Spin Density Distribution in Open-Shell Transition Metal Systems: A Comparative Post-Hartree-Fock, Density Functional Theory, and Quantum Monte Carlo Study of the CuCl2 Molecule.

    Science.gov (United States)

    Caffarel, Michel; Giner, Emmanuel; Scemama, Anthony; Ramírez-Solís, Alejandro

    2014-12-09

    We present a comparative study of the spatial distribution of the spin density of the ground state of CuCl2 using Density Functional Theory (DFT), quantum Monte Carlo (QMC), and post-Hartree-Fock wave function theory (WFT). A number of studies have shown that an accurate description of the electronic structure of the lowest-lying states of this molecule is particularly challenging due to the interplay between the strong dynamical correlation effects in the 3d shell and the delocalization of the 3d hole over the chlorine atoms. More generally, this problem is representative of the difficulties encountered when studying open-shell metal-containing molecular systems. Here, it is shown that qualitatively different results for the spin density distribution are obtained from the various quantum-mechanical approaches. At the DFT level, the spin density distribution is found to be very dependent on the functional employed. At the QMC level, Fixed-Node Diffusion Monte Carlo (FN-DMC) results are strongly dependent on the nodal structure of the trial wave function. Regarding wave function methods, most approaches not including a very high amount of dynamic correlation effects lead to a much too high localization of the spin density on the copper atom, in sharp contrast with DFT. To shed some light on these conflicting results, Full CI-type (FCI) calculations using the 6-31G basis set and based on a selection process of the most important determinants, the so-called CIPSI approach (Configuration Interaction with Perturbative Selection done Iteratively), are performed. Quite remarkably, it is found that for this 63-electron molecule and a full CI space including about 10^18 determinants, the FCI limit can almost be reached. Putting all results together, a natural and coherent picture for the spin distribution is proposed.

  9. Modeling of radiation-induced bystander effect using Monte Carlo methods

    Science.gov (United States)

    Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

    2009-03-01

    Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, or even whole organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. The model, based on our previous experiment in which cells were sparsely distributed in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander effect experiment was also simulated with this model, which successfully predicted its results. The comparison of simulations with experimental results indicates the feasibility of the model and the validity of the key mechanisms assumed.

  10. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
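
    A minimal Python sketch of the Monte Carlo side of such an availability study is given below: two three-state units (good, degraded, failed) are simulated as competing exponential clocks, and the series system is counted as available while neither unit is failed. All transition rates are hypothetical placeholders, not the values used in the article.

      # Monte Carlo estimate of steady-state availability for two 3-state units in series.
      # States per unit: 0 = good, 1 = degraded, 2 = failed. All rates (per hour) are
      # hypothetical placeholders, not taken from the cited article.
      import numpy as np

      rng = np.random.default_rng(1)
      LAMBDA_01, LAMBDA_12, MU_10, MU_20 = 1e-3, 5e-3, 2e-2, 5e-2   # transition rates

      def rates_out(state):
          # Returns {next_state: rate} for a single unit (a simple Markov degradation model).
          if state == 0:
              return {1: LAMBDA_01}
          if state == 1:
              return {2: LAMBDA_12, 0: MU_10}
          return {0: MU_20}                      # failed -> repaired as good

      def simulate(horizon=1e6):
          t, states, up_time = 0.0, [0, 0], 0.0
          while t < horizon:
              # Build the competing exponential clocks for both units.
              events = [(u, s, r) for u in (0, 1) for s, r in rates_out(states[u]).items()]
              total = sum(r for _, _, r in events)
              dt = rng.exponential(1.0 / total)
              if all(s != 2 for s in states):    # series system is up while no unit is failed
                  up_time += min(dt, horizon - t)
              t += dt
              # Pick which transition fired, proportional to its rate.
              u, s, _ = events[rng.choice(len(events), p=[r / total for _, _, r in events])]
              states[u] = s
          return up_time / horizon

      print("estimated availability:", simulate())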

  11. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz-Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
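
    To make the approach concrete, the Python sketch below propagates input uncertainty through Cunningham's form of the Kuznetsov mean-size equation and a Rosin-Rammler size distribution, the two ingredients of the Kuz-Ram model. The parameter ranges are illustrative assumptions, and the sketch is not the simulator described in the paper.

      # Monte Carlo propagation of input uncertainty through the Kuz-Ram mean-size equation
      # and the Rosin-Rammler size distribution. Parameter ranges are illustrative only.
      import numpy as np

      rng = np.random.default_rng(2)
      N = 20000

      A = rng.uniform(6.0, 8.0, N)        # rock factor
      K = rng.uniform(0.5, 0.7, N)        # powder factor, kg/m^3
      Q = rng.uniform(80.0, 120.0, N)     # explosive mass per hole, kg
      E = rng.uniform(95.0, 110.0, N)     # relative weight strength (ANFO = 100)
      n = rng.uniform(1.2, 1.8, N)        # Rosin-Rammler uniformity index

      # Kuz-Ram mean fragment size (cm), Cunningham's form of Kuznetsov's equation.
      Xm = A * K ** -0.8 * Q ** (1.0 / 6.0) * (115.0 / E) ** (19.0 / 20.0)

      # Characteristic size of the Rosin-Rammler curve, chosen so that 50% passes at Xm.
      Xc = Xm / 0.693 ** (1.0 / n)

      def percent_passing(x_cm):
          # Fraction of material finer than x_cm, averaged over the sampled inputs.
          return np.mean(1.0 - np.exp(-(x_cm / Xc) ** n))

      print("mean X50 (cm):", Xm.mean())
      for x in (10, 20, 40, 80):
          print(f"P(passing {x} cm) = {percent_passing(x):.2f}")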

  12. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
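
    For readers unfamiliar with the baseline the LRPF is compared against, the Python sketch below implements a generic SIR particle filter (propagate, weight, resample) on a toy one-dimensional state-space model; the WEP hydrologic model, its noise levels and the regularization kernel are not reproduced.

      # Generic sequential importance resampling (SIR) particle filter on a toy 1-D
      # state-space model; the hydrologic model, noise levels and kernel step of the
      # paper are not reproduced here.
      import numpy as np

      rng = np.random.default_rng(3)
      T, N = 100, 1000
      sig_proc, sig_obs = 0.5, 1.0

      # Synthetic truth and observations from x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t.
      truth = np.zeros(T)
      for t in range(1, T):
          truth[t] = 0.9 * truth[t - 1] + rng.normal(0, sig_proc)
      obs = truth + rng.normal(0, sig_obs, T)

      particles = rng.normal(0, 1, N)
      estimates = []
      for t in range(T):
          # 1) Propagate particles through the (toy) process model.
          particles = 0.9 * particles + rng.normal(0, sig_proc, N)
          # 2) Weight by the observation likelihood.
          logw = -0.5 * ((obs[t] - particles) / sig_obs) ** 2
          w = np.exp(logw - logw.max()); w /= w.sum()
          estimates.append(np.sum(w * particles))
          # 3) Multinomial resampling (the step the LRPF augments with a regularization kernel).
          particles = particles[rng.choice(N, N, p=w)]

      rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
      print("filter RMSE vs truth:", rmse)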

  13. Transport appraisal and Monte Carlo simulation by use of the CBA-DK model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2011-01-01

    calculation, where risk analysis is carried out using Monte Carlo simulation. Special emphasis has been placed on the separation between inherent randomness in the modeling system and lack of knowledge. These two concepts have been defined in terms of variability (ontological uncertainty) and uncertainty...... (epistemic uncertainty). After a short introduction to deterministic calculation resulting in some evaluation criteria a more comprehensive evaluation of the stochastic calculation is made. Especially, the risk analysis part of CBA-DK, with considerations about which probability distributions should be used...

  14. A threaded Java concurrent implementation of the Monte-Carlo Metropolis Ising model

    Science.gov (United States)

    Castañeda-Marroquín, Carlos; de la Puente, Alfonso Ortega; Alfonseca, Manuel; Glazier, James A.; Swat, Maciej

    2010-01-01

    This paper describes a concurrent Java implementation of the Metropolis Monte-Carlo algorithm that is used in 2D Ising model simulations. The presented method uses threads, monitors, shared variables and high level concurrent constructs that hide the low level details. In our algorithm we assign one thread to handle one spin flip attempt at a time. We use a special lattice-site selection algorithm to avoid two or more threads working concurrently in the region of the lattice that “belongs” to two or more different spins undergoing spin-flip transformation. Our approach does not depend on the current platform and maximizes concurrent use of the available resources. PMID:21814633
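
    The underlying update is the standard Metropolis single-spin-flip rule; a serial Python sketch of it is given below for reference. It is not the authors' threaded Java code and omits their concurrent lattice-partitioning scheme.

      # Serial Python sketch of the Metropolis single-spin-flip update for the 2-D Ising
      # model (the paper's contribution is a threaded Java version; this shows only the
      # underlying algorithm, not their concurrent lattice-partitioning scheme).
      import numpy as np

      rng = np.random.default_rng(4)
      L, beta, sweeps = 32, 0.44, 100          # lattice size, inverse temperature, MC sweeps
      spins = rng.choice([-1, 1], size=(L, L))

      for _ in range(sweeps * L * L):
          i, j = rng.integers(L, size=2)
          # Sum of the four nearest neighbours with periodic boundary conditions.
          nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
          dE = 2.0 * spins[i, j] * nb          # energy change of flipping spin (i, j)
          if dE <= 0 or rng.random() < np.exp(-beta * dE):
              spins[i, j] *= -1                # accept the flip

      print("magnetisation per spin:", abs(spins.sum()) / L**2)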

  15. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.

    Science.gov (United States)

    Galford, J E

    2017-04-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Reims, N; Sukowski, F; Uhlmann, N

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time-consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour for different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed which matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light-spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time-consuming, both models have in common that a relatively small number of manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.

  17. Monte Carlo Radiative Transfer Modeling of Lightning Observed in Galileo Images of Jupiter

    Science.gov (United States)

    Dyudine, U. A.; Ingersoll, Andrew P.

    2002-01-01

    We study lightning on Jupiter and the clouds illuminated by the lightning using images taken by the Galileo orbiter. The Galileo images have a resolution of 25 km/pixel and are able to resolve the shape of the single lightning spots in the images, which have full widths at half the maximum intensity in the range of 90-160 km. We compare the measured lightning flash images with simulated images produced by our 3-D Monte Carlo light-scattering model. The model calculates Monte Carlo scattering of photons in a 3-D opacity distribution. During each scattering event, light is partially absorbed. The new direction of the photon after scattering is chosen according to a Henyey-Greenstein phase function. An image from each direction is produced by accumulating photons emerging from the cloud in a small range (bins) of emission angles. Lightning bolts are modeled either as points or vertical lines. Our results suggest that some of the observed scattering patterns are produced in a 3-D cloud rather than in a plane-parallel cloud layer. Lightning is estimated to occur at least as deep as the bottom of the expected water cloud. For the six cases studied, we find that the clouds above the lightning are optically thick (tau > 5). Jovian flashes are more regular and circular than the largest terrestrial flashes observed from space. On Jupiter there is nothing equivalent to the 30-40-km horizontal flashes which are seen on Earth.
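
    The scattering-angle sampling mentioned in the abstract is the standard inverse-CDF draw from the Henyey-Greenstein phase function; a short Python sketch is given below. The Jovian cloud geometry, opacity field and absorption treatment of the paper are not included.

      # Sampling the scattering angle from the Henyey-Greenstein phase function by
      # inversion of its cumulative distribution (the standard trick used in photon
      # Monte Carlo codes); the 3-D Jovian cloud/opacity model itself is not included.
      import numpy as np

      rng = np.random.default_rng(5)

      def sample_hg_cos_theta(g, xi):
          """Return cos(theta) for asymmetry parameter g and uniform random number xi."""
          if abs(g) < 1e-6:
              return 2.0 * xi - 1.0                      # isotropic limit
          frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
          return (1.0 + g * g - frac * frac) / (2.0 * g)

      # Quick check: the mean of cos(theta) should approach g.
      g = 0.8
      samples = np.array([sample_hg_cos_theta(g, rng.random()) for _ in range(200000)])
      print("<cos theta> =", samples.mean(), " expected ~", g)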

  18. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  19. Monte carlo inference for state-space models of wild animal populations.

    Science.gov (United States)

    Newman, Ken B; Fernández, Carmen; Thomas, Len; Buckland, Stephen T

    2009-06-01

    We compare two Monte Carlo (MC) procedures, sequential importance sampling (SIS) and Markov chain Monte Carlo (MCMC), for making Bayesian inferences about the unknown states and parameters of state-space models for animal populations. The procedures were applied to both simulated and real pup count data for the British grey seal metapopulation, as well as to simulated data for a Chinook salmon population. The MCMC implementation was based on tailor-made proposal distributions combined with analytical integration of some of the states and parameters. SIS was implemented in a more generic fashion. For the same computing time MCMC tended to yield posterior distributions with less MC variation across different runs of the algorithm than the SIS implementation with the exception in the seal model of some states and one of the parameters that mixed quite slowly. The efficiency of the SIS sampler greatly increased by analytically integrating out unknown parameters in the observation model. We consider that a careful implementation of MCMC for cases where data are informative relative to the priors sets the gold standard, but that SIS samplers are a viable alternative that can be programmed more quickly. Our SIS implementation is particularly competitive in situations where the data are relatively uninformative; in other cases, SIS may require substantially more computer power than an efficient implementation of MCMC to achieve the same level of MC error.

  20. Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Capizzo, M C; Sperandeo-Mineo, R M; Zarcone, M [UoP-PERG, University of Palermo Physics Education Research Group and Dipartimento di Fisica e Tecnologie Relative, Universita di Palermo (Italy)], E-mail: sperandeo@difter.unipa.it

    2008-05-15

    We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels.
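
    In the same spirit, the toy Python sketch below lets each lattice site generate an electron-hole pair with probability proportional to exp(-Eg/2kT), so the simulated carrier count, and hence the resistance, varies steeply with temperature. The gap value and site count are illustrative choices for the demonstration, not the authors' teaching materials.

      # Toy Monte Carlo illustration of intrinsic conduction: each lattice site hosts an
      # electron-hole pair with probability ~ exp(-Eg / 2kT), so the carrier count (and
      # hence the conductance) rises steeply with temperature. The small gap value and
      # the site count are illustrative choices, not constants of a real semiconductor.
      import numpy as np

      rng = np.random.default_rng(6)
      N_SITES, E_GAP, K_B = 1_000_000, 0.3, 8.617e-5   # sites, gap (eV, toy value), eV/K

      def carriers(temperature_K):
          p_gen = np.exp(-E_GAP / (2.0 * K_B * temperature_K))
          # Each site independently hosts a thermally generated pair with probability p_gen.
          return rng.binomial(N_SITES, p_gen)

      for T in (250, 300, 350, 400):
          n = max(carriers(T), 1)
          print(f"T = {T} K   carriers = {n:6d}   relative resistance ~ {1.0 / n:.2e}")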

  1. Realistic Gamow shell model for resonance and continuum in atomic nuclei

    Science.gov (United States)

    Xu, F. R.; Sun, Z. H.; Wu, Q.; Hu, B. S.; Dai, S. J.

    2018-02-01

    The Gamow shell model can describe resonance and continuum for atomic nuclei. The model is established in the complex-moment (complex-k) plane of the Berggren coordinates in which bound, resonant and continuum states are treated on equal footing self-consistently. In the present work, the realistic nuclear force, CD Bonn, has been used. We have developed the full \\hat{Q}-box folded-diagram method to derive the realistic effective interaction in the model space which is nondegenerate and contains resonance and continuum channels. The CD-Bonn potential is renormalized using the V low-k method. With choosing 16O as the inert core, we have applied the Gamow shell model to oxygen isotopes.

  2. Evidence for Symplectic Symmetry in Ab Initio No-Core Shell Model Results for Light Nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Dytrych, Tomas; Sviratcheva, Kristina D.; Bahri, Chairul; Draayer, Jerry P.; /Louisiana State U.; Vary, James P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-04-24

    Clear evidence for symplectic symmetry in low-lying states of {sup 12}C and {sup 16}O is reported. Eigenstates of {sup 12}C and {sup 16}O, determined within the framework of the no-core shell model using the JISP16 NN realistic interaction, typically project at the 85-90% level onto a few of the most deformed symplectic basis states that span only a small fraction of the full model space. The results are nearly independent of whether the bare or renormalized effective interactions are used in the analysis. The outcome confirms Elliott's SU(3) model which underpins the symplectic scheme, and above all, points to the relevance of a symplectic no-core shell model that can reproduce experimental B(E2) values without effective charges as well as deformed spatial modes associated with clustering phenomena in nuclei.

  3. Pion-nucleus double charge exchange and the nuclear shell model

    International Nuclear Information System (INIS)

    Auerbach, N.; Gibbs, W.R.; Ginocchio, J.N.; Kaufmann, W.B.

    1988-01-01

    The pion-nucleus double charge exchange reaction is studied with special emphasis on nuclear structure. The reaction mechanism and nuclear structure aspects of the process are separated using both the plane-wave and distorted-wave impulse approximations. Predictions are made employing both the seniority model and a full shell model (with a single active orbit). Transitions to the double analog state and to the ground state of the residual nucleus are computed. The seniority model yields particularly simple relations among double charge exchange cross sections for nuclei within the same shell. Limitations of the seniority model and of the plane-wave impulse approximation are discussed as well as extensions to the generalized seniority scheme. Applications of the foregoing ideas to single charge exchange are also presented

  4. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since
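
    As a concrete illustration of the accept/reject rule at the heart of RMC modelling, the Python sketch below fits a toy one-dimensional pair-distance histogram: an atom is moved at random, the model histogram is recomputed, and the move is accepted if chi-squared against the "data" decreases, or with probability exp(-Δχ²/2) otherwise. The synthetic target data, system size and step parameters are illustrative assumptions; this is not one of the RMC codes discussed at the workshop.

      # Skeleton of the reverse Monte Carlo (RMC) accept/reject rule: perturb one atom,
      # recompute the quantity compared with "experiment" (here a toy pair-distance
      # histogram on a 1-D ring), accept if chi^2 decreases, otherwise accept with
      # probability exp(-delta_chi^2 / 2). Synthetic data; not an existing RMC package.
      import numpy as np

      rng = np.random.default_rng(7)
      N, L = 100, 50.0                        # atoms on a 1-D ring of length L
      bins = np.linspace(0.0, L / 2, 26)
      sigma2 = 50.0                           # assumed experimental variance per bin

      def histogram(pos):
          d = np.abs(pos[:, None] - pos[None, :])
          d = np.minimum(d, L - d)            # minimum-image distances on the ring
          return np.histogram(d[np.triu_indices(N, 1)], bins=bins)[0].astype(float)

      # Synthetic "experimental" data: a slightly disordered regular chain.
      target = histogram((np.arange(N) * (L / N) + rng.normal(0, 0.2, N)) % L)
      pos = rng.uniform(0, L, N)              # random starting configuration

      chi2 = np.sum((histogram(pos) - target) ** 2) / sigma2
      for step in range(5000):
          i = rng.integers(N)
          old = pos[i]
          pos[i] = (old + rng.normal(0, 0.5)) % L
          new_chi2 = np.sum((histogram(pos) - target) ** 2) / sigma2
          if new_chi2 <= chi2 or rng.random() < np.exp(-(new_chi2 - chi2) / 2.0):
              chi2 = new_chi2                 # accept the move
          else:
              pos[i] = old                    # reject and restore the old position

      print("final chi^2 against the synthetic data:", chi2)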

  5. Clinical Management and Burden of Prostate Cancer: A Markov Monte Carlo Model

    Science.gov (United States)

    Sanyal, Chiranjeev; Aprikian, Armen; Cury, Fabio; Chevalier, Simone; Dragomir, Alice

    2014-01-01

    Background Prostate cancer (PCa) is the most common non-skin cancer among men in developed countries. Several novel treatments have been adopted by healthcare systems to manage PCa. Most of the observational studies and randomized trials on PCa have concurrently evaluated fewer treatments over short follow-up. Further, preceding decision analytic models on PCa management have not evaluated various contemporary management options. Therefore, a contemporary decision analytic model was necessary to address limitations of the literature by synthesizing the evidence on novel treatments, thereby forecasting short- and long-term clinical outcomes. Objectives To develop and validate a Markov Monte Carlo model for the contemporary clinical management of PCa, and to assess the clinical burden of the disease from diagnosis to end-of-life. Methods A Markov Monte Carlo model was developed to simulate the management of PCa in men 65 years and older from diagnosis to end-of-life. Health states modeled were: risk at diagnosis, active surveillance, active treatment, PCa recurrence, PCa recurrence free, metastatic castrate resistant prostate cancer, overall and PCa death. Treatment trajectories were based on state transition probabilities derived from the literature. Validation and sensitivity analyses assessed the accuracy and robustness of model predicted outcomes. Results Validation indicated model predicted rates were comparable to observed rates in the published literature. The simulated distribution of clinical outcomes for the base case was consistent with sensitivity analyses. Predicted rates of clinical outcomes and mortality varied across risk groups. Life expectancy and health-adjusted life expectancy predicted for the simulated cohort were 20.9 years (95% CI 20.5–21.3) and 18.2 years (95% CI 17.9–18.5), respectively. Conclusion Study findings indicated contemporary management strategies improved survival and quality of life in patients with PCa. This model could be used
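
    A generic Markov Monte Carlo (microsimulation) skeleton of the kind described is sketched below in Python: individual patients move between simplified health states once per annual cycle according to a transition matrix. The states and probabilities are made-up placeholders, not the calibrated values of the published model.

      # Generic Markov Monte Carlo (microsimulation) sketch: individual patients move
      # between simplified health states once per yearly cycle according to a transition
      # matrix. All states and probabilities are made-up placeholders, not the paper's.
      import numpy as np

      rng = np.random.default_rng(8)
      states = ["recurrence_free", "recurrence", "metastatic", "dead"]
      P = np.array([                      # annual transition probabilities (rows sum to 1)
          [0.90, 0.06, 0.01, 0.03],
          [0.00, 0.80, 0.12, 0.08],
          [0.00, 0.00, 0.75, 0.25],
          [0.00, 0.00, 0.00, 1.00],       # death is absorbing
      ])

      def simulate_patient(max_years=40):
          s, years_alive = 0, 0
          for _ in range(max_years):
              if states[s] == "dead":
                  break
              years_alive += 1
              s = rng.choice(len(states), p=P[s])
          return years_alive

      cohort = [simulate_patient() for _ in range(20000)]
      print("mean life-years from diagnosis:", np.mean(cohort))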

  6. On the absence of an α-nucleus structure in a two-centre shell model

    International Nuclear Information System (INIS)

    Gupta, R.K.; Sharma, M.K.; Antonenko, N.V.; Scheid, W.

    1999-01-01

    The two-centre shell model, used within the Strutinsky macro-microscopic method, is a valid prescription for calculating adiabatic or diabatic potential energy surfaces. It is shown, however, that this model does not contain the appropriate α-nucleus structure effects, very much required for collisions between light nuclei. A possible way to incorporate such effects is suggested. (author). Letter-to-the-editor

  7. Four shells atomic model to compute the counting efficiency of electron-capture nuclides

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Fernandez Martinez, A.

    1985-01-01

    The present paper develops a four-shells atomic model in order to obtain the detection efficiency in liquid scintillation counting. Mathematical expressions are given to calculate the probabilities of the 229 different atomic rearrangements as well as the corresponding effective energies. This new model will permit the study of the influence of the different parameters upon the counting efficiency for nuclides of high atomic number. (Author) 7 refs

  8. Modeling shell morphology of an epitoniid species with parametric equations

    Science.gov (United States)

    Bernido, Christopher C.; Carpio-Bernido, M. Victoria; Sadudaquil, Jerome A.; Salas, Rochelle I.; Mangyao, Justin Ericson A.; Halasan, Lorenzo C.; Baja, Paz Kenneth S.; Jumawan, Ethel Jade V.

    2017-08-01

    An epitoniid specimen under the genus Cycloscala is mathematically modeled using parametric equations which allow comparison of growth functions and parameter values with other specimens of the same genus. This mathematical modeling approach may supplement the currently used genetic and microscopy methods in the taxonomic classification of species.
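
    The abstract does not give the fitted growth functions; as a generic illustration, the Python sketch below generates a shell surface from a logarithmic helicospiral, a common starting point for parametric gastropod shell models. All parameter values are illustrative assumptions.

      # Generic parametric helicospiral often used as a starting point for gastropod
      # shell modelling: a circular generating curve is swept along a logarithmically
      # expanding helix. Parameter values are illustrative, not the growth functions
      # fitted in the paper.
      import numpy as np

      A, ALPHA, BETA, R_TUBE = 1.0, 0.12, 0.20, 0.35   # scale, expansion, translation, whorl radius

      def shell_surface(n_theta=400, n_phi=40):
          theta = np.linspace(0.0, 8.0 * np.pi, n_theta)   # position along the spiral
          phi = np.linspace(0.0, 2.0 * np.pi, n_phi)       # position around the whorl cross-section
          T, PHI = np.meshgrid(theta, phi, indexing="ij")
          r = A * np.exp(ALPHA * T)                        # logarithmic radial growth
          x = (r + R_TUBE * r * np.cos(PHI)) * np.cos(T)
          y = (r + R_TUBE * r * np.cos(PHI)) * np.sin(T)
          z = -BETA * r + R_TUBE * r * np.sin(PHI)         # downward translation builds the spire
          return x, y, z

      x, y, z = shell_surface()
      print("surface grid:", x.shape, " apex-to-aperture height:", z.max() - z.min())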

  9. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations

  10. Comparative study of adsorptive removal of Cr(VI) ion from aqueous solution in fixed bed column by peanut shell and almond shell using empirical models and ANN.

    Science.gov (United States)

    Banerjee, Munmun; Bar, Nirjhar; Basu, Ranjan Kumar; Das, Sudip Kumar

    2017-04-01

    Cr(VI) is a toxic water pollutant, which causes cancer and mutation in living organisms. Adsorption has become the most preferred method for removal of Cr(VI) due to its high efficiency and low cost. Peanut and almond shells were used as adsorbents in downflow fixed bed continuous column operation for Cr(VI) removal. The experiments were carried out to scrutinise the adsorptive capacity of the peanut shells and almond shells, as well as to find out the effect of various operating parameters such as column bed depth (5-10 cm), influent flow rate (10-22 ml min^-1) and influent Cr(VI) concentration (10-20 mg L^-1) on the Cr(VI) removal. For the fixed bed column operation, the Cr(VI) adsorption equilibrium was described by the Langmuir isotherm. Different well-known mathematical models were applied to the experimental data to identify the best-fitted model to explain the bed dynamics. Prediction of the bed dynamics by the Yan et al. model was found to be satisfactory. Applicability of artificial neural network (ANN) modelling is also reported. ANN models based on the multilayer perceptron with gradient descent and Levenberg-Marquardt algorithms have also been applied to predict the percentage removal of Cr(VI). This study indicates that these adsorbents have excellent potential and are useful for water treatment, particularly for small- and medium-sized industries in third world countries. Almond shell shows better adsorptive capacity, as its breakthrough and exhaustion times are longer than those of peanut shell.
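
    As an illustration of the model-fitting step, the Python sketch below fits the Langmuir isotherm to synthetic equilibrium data with non-linear least squares (SciPy assumed available); the experimental peanut- and almond-shell data and the Yan et al. breakthrough fits of the paper are not reproduced.

      # Fitting the Langmuir isotherm q_e = q_m * K_L * C_e / (1 + K_L * C_e) to
      # equilibrium data with non-linear least squares. The data points below are
      # synthetic placeholders, not the peanut/almond shell measurements of the paper.
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(Ce, qm, KL):
          return qm * KL * Ce / (1.0 + KL * Ce)

      # Synthetic equilibrium data (mg/L, mg/g) with a little noise.
      Ce = np.array([1.0, 2.0, 4.0, 6.0, 10.0, 15.0, 20.0])
      qe = langmuir(Ce, 8.0, 0.4) + np.random.default_rng(9).normal(0, 0.1, Ce.size)

      (qm_fit, KL_fit), _ = curve_fit(langmuir, Ce, qe, p0=[5.0, 0.1])
      print(f"fitted q_m = {qm_fit:.2f} mg/g, K_L = {KL_fit:.3f} L/mg")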

  11. Heat transfer models for predicting Salmonella enteritidis in shell eggs through supply chain distribution.

    Science.gov (United States)

    Almonacid, S; Simpson, R; Teixeira, A

    2007-11-01

    Egg and egg preparations are important vehicles for Salmonella enteritidis infections. The influence of time-temperature becomes important when the presence of this organism is found in commercial shell eggs. A computer-aided mathematical model was validated to estimate surface and interior temperature of shell eggs under variable ambient and refrigerated storage temperature. A risk assessment of S. enteritidis based on the use of this model, coupled with S. enteritidis kinetics, has already been reported in a companion paper published earlier in JFS. The model considered the actual geometry and composition of shell eggs and was solved by numerical techniques (finite differences and finite elements). Parameters of interest such as local (h) and global (U) heat transfer coefficient, thermal conductivity, and apparent volumetric specific heat were estimated by an inverse procedure from experimental temperature measurement. In order to assess the error in predicting microbial population growth, theoretical and experimental temperatures were applied to a S. enteritidis growth model taken from the literature. Errors between values of microbial population growth calculated from model predicted compared with experimentally measured temperatures were satisfactorily low: 1.1% and 0.8% for the finite difference and finite element model, respectively.

  12. Development of new physical models devoted to internal dosimetry using the EGS4 Monte Carlo code

    International Nuclear Information System (INIS)

    Clairand, I.

    1999-01-01

    In the framework of diagnostic and therapeutic applications of nuclear medicine, the calculation of the absorbed dose at the organ scale is necessary for the evaluation of the risks taken by patients after the intake of radiopharmaceuticals. The classical calculation methods supply only a very approximative estimation of this dose because they use dosimetric models based on anthropomorphic phantoms with average corpulence (reference adult man and woman). The aim of this work is to improve these models by a better consideration of the physical characteristics of the patient in order to refine the dosimetric estimations. Several mathematical anthropomorphic phantoms representative of the morphological variations encountered in the adult population have been developed. The corresponding dosimetric parameters have been determined using the Monte Carlo method. The calculation code, based on the EGS4 Monte Carlo code, has been validated using the literature data for reference phantoms. Several phantoms with different corpulence have been developed using the analysis of anthropometric data from medico-legal autopsies. The corresponding dosimetric estimations show the influence of morphological variations on the absorbed dose. Two examples of application, based on clinical data, confirm the interest of this approach with respect to classical methods. (J.S.)

  13. Fast Monte Carlo-simulator with full collimator and detector response modelling for SPECT

    International Nuclear Information System (INIS)

    Sohlberg, A.O.; Kajaste, M.T.

    2012-01-01

    Monte Carlo (MC)-simulations have proved to be a valuable tool in studying single photon emission computed tomography (SPECT)-reconstruction algorithms. Despite their popularity, the use of Monte Carlo-simulations is still often limited by their large computation demand. This is especially true in situations where full collimator and detector modelling with septal penetration, scatter and X-ray fluorescence needs to be included. This paper presents a rapid and simple MC-simulator, which can effectively reduce the computation times. The simulator was built on the convolution-based forced detection principle, which can markedly lower the number of simulated photons. Full collimator and detector response look-up tables are pre-simulated and then later used in the actual MC-simulations to model the system response. The developed simulator was validated by comparing it against 123I point source measurements made with a clinical gamma camera system and against 99mTc software phantom simulations made with the SIMIND MC-package. The results showed good agreement between the new simulator, measurements and the SIMIND-package. The new simulator provided near noise-free projection data in approximately 1.5 min per projection with 99mTc, which was less than one-tenth of SIMIND's time. The developed MC-simulator can markedly decrease the simulation time without sacrificing image quality. (author)

  14. Optical model for port-wine stain skin and its Monte Carlo simulation

    Science.gov (United States)

    Xu, Lanqing; Xiao, Zhengying; Chen, Rong; Wang, Ying

    2008-12-01

    Laser irradiation is currently the most widely accepted therapy for PWS patients. Its efficacy is highly dependent on how energy is deposited in the skin. To achieve optimal PWS treatment parameters, a better understanding of light propagation in PWS skin is indispensable. Traditional Monte Carlo simulations using simple geometries, such as a planar layered tissue model, cannot provide the energy deposition in skin with enlarged blood vessels. In this paper the structure of normal skin and the pathological character of PWS skin are analyzed in detail, and the true structure is simplified into a hybrid layered mathematical model that characterizes the two most important aspects of PWS skin: the layered structure and the overabundant dermal vessels. The basic laser-tissue interaction mechanisms in skin were investigated, along with the optical parameters of PWS skin tissue at the therapeutic wavelength. Monte Carlo (MC) based techniques were chosen to calculate the energy deposition in the skin. The results can be used in choosing the optical dosage. Further simulations can be used to predict optimal laser parameters to achieve high-efficacy laser treatment of PWS.

  15. Kinetic Monte-Carlo modeling of hydrogen retention and re-emission from Tore Supra deposits

    International Nuclear Information System (INIS)

    Rai, A.; Schneider, R.; Warrier, M.; Roubin, P.; Martin, C.; Richou, M.

    2009-01-01

    A multi-scale model has been developed to study the reactive-diffusive transport of hydrogen in porous graphite [A. Rai, R. Schneider, M. Warrier, J. Nucl. Mater. (submitted for publication). http://dx.doi.org/10.1016/j.jnucmat.2007.08.013.]. The deposits found on the leading edge of the neutralizer of Tore Supra are multi-scale in nature, consisting of micropores with typical size lower than 2 nm (∼11%), mesopores (∼5%) and macropores with a typical size more than 50 nm [C. Martin, M. Richou, W. Sakaily, B. Pegourie, C. Brosset, P. Roubin, J. Nucl. Mater. 363-365 (2007) 1251]. Kinetic Monte-Carlo (KMC) has been used to study the hydrogen transport at meso-scales. The recombination rate and the diffusion coefficient calculated at the meso-scale were used as inputs to scale up and analyze the hydrogen transport at the macro-scale. A combination of KMC and MCD (Monte-Carlo diffusion) methods was used at macro-scales. The flux dependence of hydrogen recycling has been studied. The retention and re-emission analysis of the model has been extended to study the chemical erosion process based on the Kueppers-Hopf cycle [M. Wittmann, J. Kueppers, J. Nucl. Mater. 227 (1996) 186].
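
    The meso-scale step rests on standard kinetic Monte Carlo; the Python sketch below shows a residence-time (BKL/Gillespie-type) KMC walk of a single hydrogen atom on a one-dimensional chain with one deep trap. Rates, barriers and geometry are placeholders, not the Tore Supra deposit model.

      # Residence-time (BKL/Gillespie-type) kinetic Monte Carlo step for a single hydrogen
      # atom hopping on a 1-D chain of sites with a trap in the middle. The rates and
      # geometry are placeholders, not the Tore Supra deposit model of the paper.
      import numpy as np

      rng = np.random.default_rng(10)
      N_SITES, NU, KT = 200, 1e13, 0.1          # sites, attempt frequency (1/s), kT (eV)
      barrier = np.full(N_SITES, 0.5)           # hop barrier out of each site (eV)
      barrier[N_SITES // 2] = 0.9               # one deep trap site

      def kmc_escape_time(start=0, n_events=200000):
          site, t = start, 0.0
          for _ in range(n_events):
              rate_left = rate_right = NU * np.exp(-barrier[site] / KT)
              total = rate_left + rate_right
              t += rng.exponential(1.0 / total)             # residence time at this site
              site += 1 if rng.random() < rate_right / total else -1
              if site < 0 or site >= N_SITES:               # atom re-emitted from the surface
                  return t
          return t

      times = [kmc_escape_time(start=N_SITES // 4) for _ in range(20)]
      print("mean escape time (s):", np.mean(times))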

  16. Investigation of SIBM driven recrystallization in alpha Zirconium based on EBSD data and Monte Carlo modeling

    Science.gov (United States)

    Jedrychowski, M.; Bacroix, B.; Salman, O. U.; Tarasiuk, J.; Wronski, S.

    2015-08-01

    The work focuses on the influence of moderate plastic deformation on subsequent partial recrystallization of hexagonal zirconium (Zr702). In the considered case, strain induced boundary migration (SIBM) is assumed to be the dominating recrystallization mechanism. This hypothesis is analyzed and tested in detail using experimental EBSD-OIM data and Monte Carlo computer simulations. An EBSD investigation is performed on zirconium samples, which were channel-die compressed in two perpendicular directions: normal direction (ND) and transverse direction (TD) of the initial material sheet. The maximal applied strain was below 17%. Then, samples were briefly annealed in order to achieve a partly recrystallized state. Obtained EBSD data were analyzed in terms of texture evolution associated with a microstructural characterization, including: kernel average misorientation (KAM), grain orientation spread (GOS), twinning, grain size distributions, description of grain boundary regions. In parallel, Monte Carlo Potts model combined with experimental microstructures was employed in order to verify two main recrystallization scenarios: SIBM driven growth from deformed sub-grains and classical growth of recrystallization nuclei. It is concluded that simulation results provided by the SIBM model are in a good agreement with experimental data in terms of texture as well as microstructural evolution.

  17. Monte Carlo impurity transport modeling in the DIII-D transport

    International Nuclear Information System (INIS)

    Evans, T.E.; Finkenthal, D.F.

    1998-04-01

    A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics based model for physical and chemical sputtering has yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed

  18. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and the calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
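
    A minimal Python sketch of such a recruitment simulation is given below: each centre enrols patients as a Poisson process, and the Monte Carlo run returns the distribution of time needed to reach the target sample size. Centre count, rate and target are hypothetical values, not those of the article.

      # Discrete-time Monte Carlo of multicentre recruitment: each centre enrols
      # patients as a Poisson process. Centre counts, rates and the target sample
      # size are hypothetical, not taken from the article.
      import numpy as np

      rng = np.random.default_rng(11)
      N_CENTRES, RATE_PER_WEEK, TARGET, N_RUNS = 20, 0.8, 400, 5000

      def weeks_to_target():
          enrolled, week = 0, 0
          while enrolled < TARGET:
              week += 1
              enrolled += rng.poisson(N_CENTRES * RATE_PER_WEEK)
          return week

      durations = np.array([weeks_to_target() for _ in range(N_RUNS)])
      print("median recruitment time:", np.median(durations), "weeks")
      print("95th percentile       :", np.percentile(durations, 95), "weeks")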

  19. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining......-distributed responses are, however, still quite unexplored. Especially for complex models, rigorous parameterization, reduction of the parameter space and use of efficient and effective algorithms are essential to facilitate the calibration process and make it more robust. Moreover, for these models multi...... the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates...

  20. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    Energy Technology Data Exchange (ETDEWEB)

    Gasparro, Joel [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium); Hult, Mikael [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium)], E-mail: mikael.hult@ec.europa.eu; Johnston, Peter N. [Applied Physics, Royal Melbourne Institute of Technology, GPO Box 2476V, Melbourne 3001 (Australia); Tagziria, Hamid [EC-JRC-IPSC, Institute for the Protection and the Security of the Citizen, Via E. Fermi 1, I-21020 Ispra (Vatican City State, Holy See,) (Italy)

    2008-09-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (in the order of per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher) particularly for gamma-ray energies below 100 keV.

  1. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    International Nuclear Information System (INIS)

    Gasparro, Joel; Hult, Mikael; Johnston, Peter N.; Tagziria, Hamid

    2008-01-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (in the order of per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher) particularly for gamma-ray energies below 100 keV

  2. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach...... from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly...... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently

  3. A Monte Carlo model for photoneutron generation by a medical LINAC

    Science.gov (United States)

    Sumini, M.; Isolan, L.; Cucchi, G.; Sghedoni, R.; Iori, M.

    2017-11-01

    For an optimal tuning of radiation protection planning, a Monte Carlo model using the MCNPX code has been built, allowing an accurate estimate of the spectrometric and geometrical characteristics of photoneutrons generated by a Varian TrueBeam Stx© medical linear accelerator. We considered a device working at the reference energy for clinical applications of 15 MV, derived from a Varian Clinac©2100 model built from data collected from several papers available in the literature. The model results were compared with neutron and photon dose measurements inside and outside the bunker hosting the accelerator, yielding a complete dose map. Normalized neutron fluences were tallied at different positions in the patient plane and at different depths. A sensitivity analysis with respect to the flattening filter material was performed to highlight aspects that could influence photoneutron production.

  4. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether modeling Value-at-Risk (VaR) through Monte Carlo simulation with GARCH-family volatility is supported by the efficient market hypothesis. The results show that the static evaluation is inferior to the dynamic one, indicating that the dynamic analysis supports the efficient market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since they are capable of capturing its strong dynamics.
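
    For concreteness, the Python sketch below estimates a ten-day Value-at-Risk by simulating return paths whose volatility follows a GARCH(1,1) recursion. The parameters are illustrative placeholders, not estimates for the Petrobras, Telemar and Vale do Rio Doce portfolio analysed in the paper.

      # Ten-day Value-at-Risk by Monte Carlo: simulate return paths whose volatility
      # follows a GARCH(1,1) recursion, then read off loss quantiles. Parameters are
      # illustrative placeholders, not estimates for the stocks analysed in the paper.
      import numpy as np

      rng = np.random.default_rng(12)
      OMEGA, ALPHA, BETA = 1e-6, 0.08, 0.90        # sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
      HORIZON, N_SIMS, VALUE = 10, 50_000, 1_000_000.0
      r_prev, sigma2_prev = -0.012, 2.5e-4         # last observed return and conditional variance

      cum_returns = np.zeros(N_SIMS)
      r = np.full(N_SIMS, r_prev)
      sigma2 = np.full(N_SIMS, sigma2_prev)
      for _ in range(HORIZON):
          sigma2 = OMEGA + ALPHA * r ** 2 + BETA * sigma2   # update conditional variance
          r = rng.normal(0.0, np.sqrt(sigma2))              # draw the day's return
          cum_returns += r                                  # log-return approximation

      losses = -VALUE * cum_returns
      print(f"10-day VaR 95%: {np.percentile(losses, 95):,.0f}")
      print(f"10-day VaR 99%: {np.percentile(losses, 99):,.0f}")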

  5. Design and modeling of an additive manufactured thin shell for x-ray astronomy

    Science.gov (United States)

    Feldman, Charlotte; Atkins, Carolyn; Brooks, David; Watson, Stephen; Cochrane, William; Roulet, Melanie; Willingale, Richard; Doel, Peter

    2017-09-01

    Future X-ray astronomy missions require light-weight thin shells to provide large collecting areas within the weight limits of launch vehicles, whilst still delivering angular resolutions close to that of Chandra (0.5 arc seconds). Additive manufacturing (AM), also known as 3D printing, is a well-established technology with the ability to construct or `print' intricate support structures, which can be both integral and light-weight, and is therefore a candidate technique for producing shells for space-based X-ray telescopes. The work described here is a feasibility study into this technology for precision X-ray optics for astronomy and has been sponsored by the UK Space Agency's National Space Technology Programme. The goal of the project is to use a series of test samples to trial different materials and processes with the aim of developing a viable path for the production of an X-ray reflecting prototype for astronomical applications. The initial design of an AM prototype X-ray shell is presented with ray-trace modelling and analysis of the X-ray performance. The polishing process may cause print-through from the light-weight support structure on to the reflecting surface. Investigations into the effect of the print-through on the X-ray performance of the shell are also presented.

  6. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    International Nuclear Information System (INIS)

    Merheb, C; Petegnief, Y; Talbot, J N

    2007-01-01

    Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic(TM) animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic(TM) system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18 F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed

  7. Using a Monte Carlo model to predict dosimetric properties of small radiotherapy photon fields

    International Nuclear Information System (INIS)

    Scott, Alison J. D.; Nahum, Alan E.; Fenwick, John D.

    2008-01-01

    Accurate characterization of small-field dosimetry requires measurements to be made with precisely aligned specialized detectors and is thus time consuming and error prone. This work explores measurement differences between detectors by using a Monte Carlo model matched to large-field data to predict properties of smaller fields. Measurements made with a variety of detectors have been compared with calculated results to assess their validity and explore reasons for differences. Unshielded diodes are expected to produce some of the most useful data, as their small sensitive cross sections give good resolution whilst their energy dependence is shown to vary little with depth in a 15 MV linac beam. Their response is shown to be constant with field size over the range 1-10 cm, with a correction of 3% needed for a field size of 0.5 cm. BEAMnrc has been used to create a 15 MV beam model, matched to dosimetric data for square fields larger than 3 cm, and producing small-field profiles and percentage depth doses (PDDs) that agree well with unshielded diode data for field sizes down to 0.5 cm. For field sizes of 1.5 cm and above, little detector-to-detector variation exists in measured output factors; however, for a 0.5 cm field a relative spread of 18% is seen between output factors measured with different detectors, with values measured with the diamond and pinpoint detectors lying below that of the unshielded diode and the shielded diode value being higher. Relative to the corrected unshielded diode measurement, the Monte Carlo modeled output factor is 4.5% low, a discrepancy that is probably due to the focal spot fluence profile and source occlusion modeling. The large-field Monte Carlo model can, therefore, currently be used to predict small-field profiles and PDDs measured with an unshielded diode. However, determination of output factors for the smallest fields requires a more detailed model of focal spot fluence and source occlusion.

  8. A Model for the Growth of Localized Shell Features in Inertial Confinement Fusion Implosions

    Science.gov (United States)

    Goncharov, V. N.

    2017-10-01

    Engineering features and target debris on inertial confinement fusion capsules play a detrimental role in target performance. The contact points of such features with the target surface, as well as shadowing effects, produce localized shell nonuniformities that grow in time because of the Rayleigh-Taylor instability developed during shell acceleration. Such growth leads to significant mass modulation in the shell and injection of ablator and cold fuel material into the target vapor region. These effects are commonly modeled using 2-D and 3-D hydrodynamic codes that take into account multiple physics effects. Such simulations, however, are very challenging since in many cases they are inherently three dimensional (as in the case of fill tube or stalk shadowing) and require very high grid resolution to accurately model short-scale features. To gain physics insight, an analytic model describing the growth of these features has been developed. The model is based on a Layzer-type approach. The talk will discuss the results of the model used to study perturbation growth seeded by localized target debris, glue spots, fill tubes, and stalks. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  9. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    Science.gov (United States)

    2017-09-19

    various geometries and materials. This can help drive future research on composite material applications and enhance design methods for future Navy...both the plate and the water is 0.15 inch. The plate elements are eight-node, linear, brick stress/displacement continuum elements (C3D8R) while the...water elements are eight-node, linear, brick acoustic continuum elements (AC3D8). The analyses of the flat plate model were completed using Abaqus

  10. Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations

    Science.gov (United States)

    Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.

    2018-04-01

    One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We show how to estimate the statistical uncertainty given the output of just a single radiative-transfer simulation in which the number of photon packets follows a Poisson distribution and the weight (e.g. energy or luminosity) of a single packet may follow an arbitrary distribution. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for a large number of packets show that we generalise existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
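
    The sketch below is not the Bayesian posterior derived in the paper; it only illustrates the standard frequentist error estimate for a spectral bin filled by a Poisson number of packets with random weights, i.e. the kind of estimate the paper generalises (and which fails when a bin receives zero packets). The packet rate and weight distribution are arbitrary assumptions.

      # Minimal sketch (not the paper's Bayesian posterior): standard error
      # estimate for a bin filled by a Poisson number of weighted packets.
      # For a compound Poisson sum S = sum_i w_i, Var(S) = lambda * E[w^2],
      # which is estimated here by sum_i w_i**2.
      import numpy as np

      rng = np.random.default_rng(1)
      lam = 50.0                                                 # expected packets in the bin (assumption)
      weights = rng.lognormal(0.0, 0.5, size=rng.poisson(lam))   # packet weights (assumption)

      estimate = weights.sum()                    # bin luminosity estimate
      std_err = np.sqrt(np.sum(weights ** 2))     # one-sigma error estimate
      print(f"bin estimate = {estimate:.2f} +/- {std_err:.2f}")
      # With zero packets this gives 0 +/- 0, which is the failure mode the
      # Bayesian treatment described above is designed to handle.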

  11. Monte Carlo climate change forecasts with a global coupled ocean-atmosphere model

    International Nuclear Information System (INIS)

    Cubasch, U.; Santer, B.D.; Hegerl, G.; Hoeck, H.; Maier-Reimer, E.; Mikolajwicz, U.; Stoessel, A.; Voss, R.

    1992-01-01

    The Monte Carlo approach, which has increasingly been used during the last decade in the field of extended range weather forecasting, has been applied for climate change experiments. Four integrations with a global coupled ocean-atmosphere model have been started from different initial conditions, but with the same greenhouse gas forcing according to the IPCC scenario A. All experiments have been run for a period of 50 years. The results indicate that the time evolution of the global mean warming depends strongly on the initial state of the climate system. It can vary between 6 and 31 years. The Monte Carlo approach delivers information about both the mean response and the statistical significance of the response. While the individual members of the ensemble show a considerable variation in the climate change pattern of temperature after 50 years, the ensemble mean climate change pattern closely resembles the pattern obtained in a 100 year integration and is, at least over most of the land areas, statistically significant. The ensemble averaged sea-level change due to thermal expansion is significant in the global mean and locally over wide regions of the Pacific. The hydrological cycle is also significantly enhanced in the global mean, but locally the changes in precipitation and soil moisture are masked by the variability of the experiments. (orig.)

  12. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution are optimized. The learning process is iteratively performed after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from the previous random walks. Further applications for modeling of the nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)

  13. Measurement and Monte Carlo modeling of the spatial response of scintillation screens

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)

    2007-11-01

    In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.
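
    The following sketch shows a generic localized random search of the kind mentioned above, applied to a toy two-parameter discrepancy function standing in for the (scattering, absorption) adjustment; the objective, step size and iteration count are illustrative assumptions, not the authors' actual optimization of the Geant4 screen model.

      # Minimal sketch of a localized random search on a toy discrepancy function.
      import numpy as np

      def localized_random_search(objective, x0, step=0.1, n_iter=2000, seed=0):
          rng = np.random.default_rng(seed)
          x_best = np.asarray(x0, dtype=float)
          f_best = objective(x_best)
          for _ in range(n_iter):
              # propose a candidate in a small neighbourhood of the current best point
              candidate = x_best + rng.normal(scale=step, size=x_best.shape)
              f_cand = objective(candidate)
              if f_cand < f_best:               # keep only improvements
                  x_best, f_best = candidate, f_cand
          return x_best, f_best

      # Toy "measured vs simulated" discrepancy in (scattering, absorption) space.
      target = np.array([2.0, 0.3])
      discrepancy = lambda p: np.sum((p - target) ** 2)
      print(localized_random_search(discrepancy, x0=[1.0, 1.0]))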

  14. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
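
    For reference, the sketch below is a plain Metropolis simulation of the planar Ising model on a finite periodic lattice at the critical coupling, i.e. the conventional finite-lattice setup whose boundary effects the method described above is designed to remove; it does not implement the 'holographic' boundary condition itself.

      # Minimal sketch: standard Metropolis sampling of the 2D Ising model
      # on an L x L periodic lattice (finite-volume baseline, not the paper's method).
      import numpy as np

      def ising_metropolis(L=32, beta=0.4407, n_sweeps=2000, seed=0):
          rng = np.random.default_rng(seed)
          spins = rng.choice([-1, 1], size=(L, L))
          for _ in range(n_sweeps):
              for _ in range(L * L):
                  i, j = rng.integers(0, L, size=2)
                  # sum of the four nearest neighbours with periodic boundaries
                  nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                        spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                  dE = 2.0 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
                  if dE <= 0 or rng.random() < np.exp(-beta * dE):
                      spins[i, j] *= -1
          return spins

      lattice = ising_metropolis()
      print("magnetisation per spin:", lattice.mean())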

  15. Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    Directory of Open Access Journals (Sweden)

    Oettingen Mikołaj

    2017-01-01

    Full Text Available The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands much advanced research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of thorium-lead (Th-Pb) fuel assembly numerical models for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous-energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.

  16. Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.

    Science.gov (United States)

    Kinjo, Akira R

    2017-01-01

    A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.

  17. A three-dimensional self-learning kinetic Monte Carlo model: application to Ag(111)

    International Nuclear Information System (INIS)

    Latz, Andreas; Brendel, Lothar; Wolf, Dietrich E

    2012-01-01

    The reliability of kinetic Monte Carlo (KMC) simulations depends on accurate transition rates. The self-learning KMC method (Trushin et al 2005 Phys. Rev. B 72 115401) combines the accuracy of rates calculated from a realistic potential with the efficiency of a rate catalog, using a pattern recognition scheme. This work expands the original two-dimensional method to three dimensions. The concomitant huge increase in the number of on-the-fly rate calculations can be avoided by setting up an initial database, containing exact activation energies calculated for processes gathered from a simpler KMC model. To provide two representative examples, the model is applied to the diffusion of Ag monolayer islands on Ag(111), and to the homoepitaxial growth of Ag on Ag(111) at low temperatures.
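
    As context for the rate-catalog idea, the sketch below performs a single rejection-free kinetic Monte Carlo step (the standard BKL scheme): an event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed increment. The rates in the catalog are hypothetical placeholders, not entries from the self-learning database.

      # Minimal sketch of one rejection-free KMC (BKL) step with a given rate catalog.
      import numpy as np

      def kmc_step(rates, rng):
          """Pick one event proportional to its rate and advance the clock."""
          rates = np.asarray(rates, dtype=float)
          total = rates.sum()
          cumulative = np.cumsum(rates)
          event = int(np.searchsorted(cumulative, rng.random() * total))
          dt = -np.log(rng.random()) / total     # exponentially distributed time step
          return event, dt

      rng = np.random.default_rng(0)
      catalog = [1.0e3, 2.5e2, 7.0e1]            # hypothetical hop rates (1/s)
      event, dt = kmc_step(catalog, rng)
      print(f"chosen process {event}, time advanced by {dt:.2e} s")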

  18. Study of nickel nuclei by (p,d) and (p,t) reactions. Shell model interpretation

    International Nuclear Information System (INIS)

    Kong-A-Siou, D.-H.

    1975-01-01

    The experimental techniques employed at the Nuclear Science Institute (Grenoble) and at Michigan State University are described. The development of the transition amplitude calculation of the one- or two-nucleon transfer reactions is described first, after which the principle of shell model calculations is outlined. The choices of configuration space and two-body interactions are discussed. The DWBA method of analysis is studied in more detail. The effects of different approximations and the influence of the parameters are examined. Special attention is paid to the j-dependence of the form of the angular distributions, an effect not explained in the standard DWBA framework. The results are analysed and a large section is devoted to a comparative study of the experimental results obtained and those from other nuclear reactions. The spectroscopic data obtained are compared with the results of shell model calculations [fr

  19. An Enhanced Tire Model for Dynamic Simulation based on Geometrically Exact Shells

    Directory of Open Access Journals (Sweden)

    Roller Michael

    2016-06-01

    Full Text Available In the present work, a tire model is derived based on geometrically exact shells. The discretization is done with the help of isoparametric quadrilateral finite elements. The interpolation is performed with bilinear Lagrangian polynomials for the mid-surface as well as for the director field. As the time-stepping method for the resulting differential-algebraic equation, a backward differentiation formula is chosen. A multilayer material model for geometrically exact shells is introduced to describe the anisotropic behavior of the tire material. To handle the interaction with a rigid road surface, a unilateral frictional contact formulation is introduced. Therein a special surface-to-surface contact element is developed, which rebuilds the shape of the tire.

  20. One-dimensional σ-models with N = 5, 6, 7, 8 off-shell supersymmetries

    International Nuclear Information System (INIS)

    Gonzales, M.; Toppan, F.; Rojas, M.

    2008-12-01

    We computed the actions for the 1D N = 5 σ-models with respect to the two inequivalent (2, 8, 6) multiplets. 4 supersymmetry generators are manifest, while the constraint originated by imposing the 5-th supersymmetry automatically induces a full N = 8 off-shell invariance. The resulting action coincides in the two cases and corresponds to a conformally flat 2D target satisfying a special geometry of rigid type. To obtain these results we developed a computational method (for Maple 11) which does not require the notion of superfields and is instead based on the nowadays available list of the inequivalent representations of the 1D N-extended supersymmetry. Its application to systematically analyze the σ-models off-shell invariant actions for the remaining N = 5, 6, 7, 8 (k, 8, 8 - k) multiplets, as well as for the N > 8 representations, only requires more cumbersome computations. (author)

  1. Conservation laws in the 1 f7 /2 shell model of 48Cr

    Science.gov (United States)

    Neergård, K.

    2015-04-01

    Conservation laws in the 1 f7 /2 shell model of 48Cr found in numeric studies by Escuderos, Zamick, and Bayman [arXiv:nucl-th/0506050 (2005)] and me [K. Neergård, Phys. Rev. C 90, 014318 (2014) 10.1103/PhysRevC.90.014318] are explained by symmetry under particle-hole conjugation and the structure of the irreps of the symplectic group Sp(4). A generalization is discussed.

  2. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    OpenAIRE

    Bertsch, G. F.; Mehlhaff, J. M

    2016-01-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell-model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties. Various constraints may be imposed on the minima.

  3. Spectral statistics of rare-earth nuclei: Investigation of shell model configuration effect

    Energy Technology Data Exchange (ETDEWEB)

    Sabri, H., E-mail: h-sabri@tabrizu.ac.ir

    2015-09-15

    The spectral statistics of even–even rare-earth nuclei are investigated by using all the available empirical data for Ba, Ce, Nd, Sm, Gd, Dy, Er, Yb and Hf isotopes. The Berry–Robnik distribution and Maximum Likelihood estimation technique are used for analyses. An obvious deviation from GOE is observed for considered nuclei and there are some suggestions about the effect due to mass, deformation parameter and shell model configurations.

  4. Shell model for time-correlated random advection of passive scalars

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Muratore-Ginanneschi, P.

    1999-01-01

    We study a minimal shell model for the advection of a passive scalar by a Gaussian time-correlated velocity field. The anomalous scaling properties of the white noise limit are studied analytically. The effects of the time correlations are investigated using perturbation theory around the white...... noise limit and nonperturbatively by numerical integration. The time correlation of the velocity field is seen to enhance the intermittency of the passive scalar. [S1063-651X(99)07711-9]....

  5. Region of validity of the Thomas–Fermi model with quantum, exchange and shell corrections

    International Nuclear Information System (INIS)

    Dyachkov, S A; Levashov, P R; Minakov, D V

    2016-01-01

    A novel approach to calculate thermodynamically consistent shell corrections in a wide range of parameters is used to predict the region of validity of the Thomas-Fermi approach. Calculated thermodynamic functions of electrons at high density are consistent with the more precise density functional theory. This makes it possible to work out a semi-classical model applicable both at low and high density. (paper)

  6. Reexamination of shell model tests of the Porter-Thomas distribution

    International Nuclear Information System (INIS)

    Grimes, S.M.

    1983-01-01

    Recent shell model calculations have yielded width amplitude distributions which have apparently not agreed with the Porter-Thomas distribution. This result conflicts with the present experimental evidence. A reanalysis of these calculations suggests that, although correct, they do not imply that the Porter-Thomas distribution will fail to describe the width distributions observed experimentally. The conditions for validity of the Porter-Thomas distribution are discussed

  7. Spectral statistics of rare-earth nuclei: Investigation of shell model configuration effect

    International Nuclear Information System (INIS)

    Sabri, H.

    2015-01-01

    The spectral statistics of even–even rare-earth nuclei are investigated by using all the available empirical data for Ba, Ce, Nd, Sm, Gd, Dy, Er, Yb and Hf isotopes. The Berry–Robnik distribution and Maximum Likelihood estimation technique are used for analyses. An obvious deviation from GOE is observed for considered nuclei and there are some suggestions about the effect due to mass, deformation parameter and shell model configurations

  8. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
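
    The sketch below illustrates the basic MCMC calibration idea with a plain random-walk Metropolis sampler and a toy one-parameter model; it is neither DREAM nor the DALEC ecosystem model, and the synthetic data and proposal scale are assumptions made only for the illustration.

      # Minimal sketch: random-walk Metropolis calibration of a toy model y = a * x.
      import numpy as np

      rng = np.random.default_rng(0)
      true_a, sigma = 2.0, 0.5
      x = np.linspace(0, 10, 50)
      y_obs = true_a * x + rng.normal(scale=sigma, size=x.size)   # synthetic "observations"

      def log_posterior(a):
          residuals = y_obs - a * x
          return -0.5 * np.sum((residuals / sigma) ** 2)          # flat prior assumed

      chain, a_cur, lp_cur = [], 1.0, log_posterior(1.0)
      for _ in range(20000):
          a_prop = a_cur + rng.normal(scale=0.05)                 # random-walk proposal
          lp_prop = log_posterior(a_prop)
          if np.log(rng.random()) < lp_prop - lp_cur:             # Metropolis acceptance
              a_cur, lp_cur = a_prop, lp_prop
          chain.append(a_cur)

      chain = np.array(chain[5000:])                              # discard burn-in
      print(f"posterior mean {chain.mean():.3f}, std {chain.std():.3f}")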

  9. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    D. Lu

    2017-09-01

    Full Text Available Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.

  10. Application of the Shell/3D Modeling Technique for the Analysis of Skin-Stiffener Debond Specimens

    Science.gov (United States)

    Krueger, Ronald; O'Brien, T. Kevin; Minguet, Pierre J.

    2002-01-01

    The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.

  11. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 2

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-10-01

    Model 2 in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. Both the cylinder and the nozzle of model 2 had outside diameters of 10 in., giving a d0/D0 ratio of 1.0, and both had outside diameter/thickness ratios of 100. Sixteen separate loading cases in which one end of the cylinder was rigidly held were analyzed. An internal pressure loading, three mutually perpendicular force components, and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. In addition to these 13 loadings, 3 additional loads were applied to the nozzle (in-plane bending moment, out-of-plane bending moment, and axial force) with the free end of the cylinder restrained. The experimental stress distributions for each of the 16 loadings were obtained using 152 three-gage strain rosettes located on the inner and outer surfaces. All the 16 loading cases were also analyzed theoretically using a finite-element shell analysis. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good general agreement, and it is felt that the analysis would be satisfactory for most engineering purposes. (auth)

  12. 3D MODELS COMPARISON OF COMPLEX SHELL IN UNDERWATER AND DRY ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    S. Troisi

    2015-04-01

    Full Text Available In marine biology the shape, morphology, texture and dimensions of shells and of organisms like sponges and gorgonians are very important parameters. For example, a particular type of gorgonian grows only a few millimeters every year; this estimate was obtained without any measurement instrument, through successive observational studies, because the organism is very fragile: contact could compromise its structure and survival. A non-contact measurement system has to be used to preserve such organisms: photogrammetry is a method capable of assuring high accuracy without contact. Nevertheless, obtaining a 3D photogrammetric model of a complex object (such as a gorgonian or a particular shell) is a challenge even in normal environments, whether with a metric camera or with a consumer camera. Indeed, the success of automatic target-less image orientation and of the image matching algorithms is strictly correlated with the object texture properties as well as with the quality of the camera calibration. In the underwater scenario, the environmental conditions strongly influence the quality of the results; in particular, water turbidity, the presence of suspended particles, flare and other optical aberrations decrease the image quality, reducing the accuracy and increasing the noise on the 3D model. Furthermore, the variability of the seawater density influences its refraction index and consequently the interior orientation camera parameters. For this reason, the camera calibration has to be performed under the same survey conditions. In this paper, a comparison between 3D models of a Charonia tritonis shell is carried out through surveys conducted both in dry and underwater environments.

  13. No-Core Shell Model for A = 47 and A = 49

    Energy Technology Data Exchange (ETDEWEB)

    Vary, J P; Negoita, A G; Stoica, S

    2006-11-13

    We apply the no-core shell model to the nuclear structure of odd-mass nuclei straddling 48Ca. Starting with the NN interaction that fits two-body scattering and bound state data, we evaluate the nuclear properties of A = 47 and A = 49 nuclei while preserving all the underlying symmetries. Due to model space limitations and the absence of three-body interactions, we incorporate phenomenological interaction terms determined by fits to A = 48 nuclei in a previous effort. Our modified Hamiltonian produces reasonable spectra for these odd-mass nuclei. In addition to the differences in single-particle basis states, the absence of a single-particle Hamiltonian in our no-core approach complicates comparisons with valence effective NN interactions. We focus on purely off-diagonal two-body matrix elements since they are not affected by ambiguities in the different roles for one-body potentials, and we compare selected sets of fp-shell matrix elements of our initial and modified Hamiltonians in the harmonic oscillator basis with those of a recent model fp-shell interaction, the GXPF1 interaction of Honma et al. While some significant differences emerge from these comparisons, there is an overall reasonably good correlation between our off-diagonal matrix elements and those of GXPF1.

  14. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  15. Experimental investigation shell model excitations of 89Zr up to high spin and its comparison with 88,90Zr

    International Nuclear Information System (INIS)

    Saha, S.; Palit, R.; Sethi, J.

    2012-01-01

    The excited states of nuclei near the N = 50 closed shell provide a suitable laboratory for testing the interactions of shell model states and the possible presence of high spin isomers, and help in understanding the shape transition as the higher orbitals are occupied. In particular, the structure of the N = 49 isotones (Z = 32 to 46) with one hole in the N = 50 shell has been investigated using different reactions. Interestingly, the high spin states in these isotones have contributions from particle excitations across the respective proton and neutron shell gaps and provide a suitable testing ground for the predictions of shell model interactions describing these excitations across the shell gap. In the literature, extensive studies of the high spin states of the heavier N = 49 isotones, from 91Mo up to 95Pd, are available. Limited information existed on the high spin states of the lighter isotones. Therefore, the motivation of the present work is to extend the high spin structure of 89Zr and to characterize these levels through comparison with large scale shell model calculations based on two new residual interactions in the f5/2 p g9/2 model space.

  16. SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS

    International Nuclear Information System (INIS)

    Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D

    2014-01-01

    Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation of wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in the two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.
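
    The following sketch shows the kind of double-Gaussian analysis mentioned in the abstract, fitting a core-plus-halo model to a synthetic lateral spot profile with scipy; the profile values and starting parameters are illustrative assumptions, not SMC nozzle data.

      # Minimal sketch: fitting a double-Gaussian model to a synthetic lateral spot profile.
      import numpy as np
      from scipy.optimize import curve_fit

      def double_gaussian(x, a1, s1, a2, s2):
          return (a1 * np.exp(-0.5 * (x / s1) ** 2) +
                  a2 * np.exp(-0.5 * (x / s2) ** 2))

      x = np.linspace(-30, 30, 121)                        # lateral position (mm)
      rng = np.random.default_rng(0)
      profile = double_gaussian(x, 1.0, 4.0, 0.05, 12.0)   # narrow core + low-dose halo
      profile += rng.normal(scale=0.005, size=x.size)      # simulated statistical noise

      popt, _ = curve_fit(double_gaussian, x, profile, p0=[1.0, 5.0, 0.1, 15.0])
      print("fitted sigmas (mm): core %.2f, halo %.2f" % (popt[1], popt[3]))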

  17. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods
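
    As a minimal illustration of the analog stage described in the report, the sketch below follows individual atoms through a two-member decay chain A -> B -> C with true (analog) event probabilities and compares the resulting B inventory with the analytic Bateman solution; the decay constants, time span and sample size are arbitrary assumptions.

      # Minimal analog Monte Carlo sketch: two-member decay chain A -> B -> C.
      import numpy as np

      rng = np.random.default_rng(0)
      lam_a, lam_b = 0.10, 0.03          # decay constants (1/day), illustrative
      t_end, n_atoms = 20.0, 200_000

      counts = {"A": 0, "B": 0, "C": 0}
      for _ in range(n_atoms):
          t_a = rng.exponential(1.0 / lam_a)           # time of the A -> B decay
          if t_a > t_end:
              counts["A"] += 1
              continue
          t_b = t_a + rng.exponential(1.0 / lam_b)     # time of the B -> C decay
          counts["B" if t_b > t_end else "C"] += 1

      # Analytic Bateman solution for the B inventory, for comparison.
      frac_b = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t_end) - np.exp(-lam_b * t_end))
      print("MC  B fraction:", counts["B"] / n_atoms)
      print("Bateman B frac:", frac_b)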

  18. Geodynamic Modeling of Planetary Ice-Oceans: Evolution of Ice-Shell Thickness in Convecting Two-Phase Systems

    Science.gov (United States)

    Allu Peddinti, D.; McNamara, A. K.

    2016-12-01

    Along with the newly unveiled icy surface of Pluto, several icy planetary bodies show indications of an active surface perhaps underlain by liquid oceans of some size. This heightens the interest in exploring the evolution of an ice-ocean system and its surface implications. The geologically young surface of the Jovian moon Europa has prompted much speculation about variations in ice-shell thickness over time. Along with the observed surface features, it suggests the possibility of episodic convection and conduction within the ice-shell as it evolved. What factors would control the growth of the ice-shell as it forms? If and how would those factors determine the thickness of the ice-shell and consequently the heat transfer? Would parameters such as tidal heating or initial temperature affect how the ice-shell grows, and how significantly? We perform numerical experiments using geodynamical models of the two-phase ice-water system to study the evolution of planetary ice-oceans such as that of Europa. The models evolve self-consistently from an initial liquid ocean as it cools with time. The effects of the presence, absence and magnitude of tidal heating on ice-shell thickness are studied in different models. The vigor of convection changes as the ice-shell continues to thicken. Initial modeling results track changes in the growth rate of the ice-shell as the vigor of the convection changes. The magnitude and temporal location of the rate change vary with different properties of tidal heating and values of initial temperature. A comparative study of models is presented to demonstrate how, as the ice-shell is forming, its growth rate and convection are affected by processes such as tidal heating.

  19. Model of electronic energy relaxation in the test-particle Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Roblin, P.; Rosengard, A. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes d'Enrichissement; Nguyen, T.T. [Compagnie Internationale de Services en Informatique (CISI) - Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1994-12-31

    We previously presented a new test-particle Monte Carlo method (1) (which we call PTMC), an iterative method for solving the Boltzmann equation, now improved and very well suited to collisional steady gas flows. Here, we apply a statistical method described by Anderson (2) to treat electronic-translational energy transfer by collisional processes in atomic uranium vapor. For our study, only three levels of its multiple energy states are considered: 0, 620 cm-1, and an average level grouping the upper levels. After presenting two-dimensional results, we apply this model to the evaporation of uranium by electron bombardment and show that the PTMC results, for given initial electronic temperatures, are in good agreement with experimental radial velocity measurements. (author). 12 refs., 1 fig.

  20. Monte Carlo evidence for the gluon-chain model of QCD string formation

    International Nuclear Information System (INIS)

    Greensite, J.; San Francisco State Univ., CA

    1988-08-01

    The Monte Carlo method is used to calculate the overlaps <Ψ_string|n gluons>, where Ψ_string[A] is the Yang-Mills wavefunctional due to a static quark-antiquark pair, and the |n gluons> are orthogonal trial states containing n = 0, 1, or 2 gluon operators multiplying the true ground state. The calculation is carried out for SU(2) lattice gauge theory in Coulomb gauge, in D=4 dimensions. It is found that the string state is dominated, at small quark-antiquark separations, by the vacuum ('no-gluon') state, at larger separations by the 1-gluon state, and, at the largest separations attempted, the 2-gluon state begins to dominate. This behavior is in qualitative agreement with the gluon-chain model, which is a large-N_colors motivated theory of QCD string formation. (orig.)

  1. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    Science.gov (United States)

    Metin Elçi, Eren; Weigel, Martin

    2014-05-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.
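
    The sketch below implements Sweeny-type single-bond Metropolis updates for the random-cluster model on a small periodic square lattice, but deliberately uses a naive breadth-first-search connectivity check; it therefore exhibits exactly the cost-per-move problem that the poly-logarithmic dynamic connectivity algorithm of the paper removes. The lattice size, q and the bond weight are illustrative choices.

      # Minimal sketch: Sweeny-style single-bond updates for the random-cluster model
      # with a naive BFS connectivity check (not the paper's dynamic connectivity).
      import numpy as np
      from collections import deque

      rng = np.random.default_rng(0)
      L, q = 8, 2.0
      v = np.sqrt(q)                                  # self-dual (critical) bond weight

      # Periodic square lattice: edge list and, per site, the incident edge ids.
      edge_list = []
      for x in range(L):
          for y in range(L):
              s = x * L + y
              edge_list.append((s, ((x + 1) % L) * L + y))
              edge_list.append((s, x * L + (y + 1) % L))
      site_bonds = [[] for _ in range(L * L)]
      for b, (s, t) in enumerate(edge_list):
          site_bonds[s].append(b)
          site_bonds[t].append(b)
      occupied = np.zeros(len(edge_list), dtype=bool)

      def connected_without(s, t, skip):
          """BFS over occupied bonds, ignoring bond `skip`: is t reachable from s?"""
          seen, queue = {s}, deque([s])
          while queue:
              u = queue.popleft()
              if u == t:
                  return True
              for b in site_bonds[u]:
                  if b == skip or not occupied[b]:
                      continue
                  w = edge_list[b][1] if edge_list[b][0] == u else edge_list[b][0]
                  if w not in seen:
                      seen.add(w)
                      queue.append(w)
          return False

      for _ in range(20000):                            # single-bond Metropolis updates
          b = int(rng.integers(len(edge_list)))
          s, t = edge_list[b]
          joined = connected_without(s, t, skip=b)      # connected through other bonds?
          if occupied[b]:                               # propose deleting the bond
              ratio = (1.0 / v) * (1.0 if joined else q)
          else:                                         # propose inserting the bond
              ratio = v * (1.0 if joined else 1.0 / q)
          if rng.random() < min(1.0, ratio):
              occupied[b] = not occupied[b]

      print("bond density:", occupied.mean())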

  2. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    International Nuclear Information System (INIS)

    Elçi, Eren Metin; Weigel, Martin

    2014-01-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.

  3. New software library of geometrical primitives for modelling of solids used in Monte Carlo detector simulations

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present our effort for the creation of a new software library of geometrical primitives, which are used for solid modelling in Monte Carlo detector simulations. We plan to replace and unify the current geometrical primitive classes in the CERN software projects Geant4 and ROOT with this library. Each solid is represented by a C++ class with methods suited for measuring distances of particles from the surface of a solid and for determining whether the particles are located inside, outside or on the surface of the solid. We use a numerical tolerance for determining whether the particles are located on the surface. The class methods also contain basic support for visualization. We use dedicated test suites for validation of the shape codes. These also include special performance and numerical-value comparison tests, which help with the analysis of possible candidate class methods and verify that our new implementation proposals were designed and implemented properly. Currently, bridge classes are u...

  4. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (241Am, 133Ba, 22Na, 60Co, 57Co, 137Cs and 152Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between the experimental and calculated data using the manufacturer's parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from the experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.

  5. Calibration of lung counter using a CT model of Torso phantom and Monte Carlo method

    International Nuclear Information System (INIS)

    Zhang Binquan; Ma Jizeng; Yang Duanjie; Liu Liye; Cheng Jianping

    2006-01-01

    A tomographic image of a Torso phantom was obtained from a CT scan. The Torso phantom represents the trunk of an adult man 170 cm tall and weighing 65 kg. After these images were segmented, cropped, and resized, a 3-dimensional voxel phantom was created. The voxel phantom includes more than 2 million voxels, each of size 2.73 mm x 2.73 mm x 3 mm. This model could be used for the calibration of a lung counter with the Monte Carlo method. On the assumption that radioactive material was homogeneously distributed throughout the lung, counting efficiencies of a HPGe detector in different positions were calculated as the adipose mass fraction (AMF) in the soft tissue of the chest was varied. The results showed that the counting efficiencies of the lung counter changed by up to 67% for the 17.5 keV γ ray and 20% for the 25 keV γ ray when the AMF changed from 0 to 40%. (authors)

  6. Markov Chain Monte Carlo Simulation to Assess Uncertainty in Models of Naturally Deformed Rock

    Science.gov (United States)

    Davis, J. R.; Titus, S.; Giorgis, S. D.; Horsman, E. M.

    2015-12-01

    Field studies in tectonics and structural geology involve many kinds of data, such as foliation-lineation pairs, folded and boudinaged veins, deformed clasts, and lattice preferred orientations. Each data type can inform a model of deformation, for example by excluding certain geometries or constraining model parameters. In past work we have demonstrated how to systematically integrate a wide variety of data types into the computation of best-fit deformations. However, because even the simplest deformation models tend to be highly non-linear in their parameters, evaluating the uncertainty in the best fit has been difficult. In this presentation we describe an approach to rigorously assessing the uncertainty in models of naturally deformed rock. Rather than finding a single vector of parameter values that fits the data best, we use Bayesian Markov chain Monte Carlo methods to generate a large set of vectors of varying fitness. Taken together, these vectors approximate the probability distribution of the parameters given the data. From this distribution, various auxiliary statistical quantities and conclusions can be derived. Further, the relative probability of differing models can be quantified. We apply this approach to two example data sets, from the Gem Lake shear zone and western Idaho shear zone. Our findings address shear zone geometry, magnitude of deformation, strength of field fabric, and relative viscosity of clasts. We compare our model predictions to those of earlier studies.

  7. Revisiting the hybrid quantum Monte Carlo method for Hubbard and electron-phonon models

    Science.gov (United States)

    Beyl, Stefan; Goth, Florian; Assaad, Fakher F.

    2018-02-01

    A unique feature of the hybrid quantum Monte Carlo (HQMC) method is the potential to simulate negative sign free lattice fermion models with subcubic scaling in system size. Here we will revisit the algorithm for various models. We will show that for the Hubbard model the HQMC suffers from ergodicity issues and unbounded forces in the effective action. Solutions to these issues can be found in terms of a complexification of the auxiliary fields. This implementation of the HQMC that does not attempt to regularize the fermionic matrix so as to circumvent the aforementioned singularities does not outperform single spin flip determinantal methods with cubic scaling. On the other hand we will argue that there is a set of models for which the HQMC is very efficient. This class is characterized by effective actions free of singularities. Using the Majorana representation, we show that models such as the Su-Schrieffer-Heeger Hamiltonian at half filling and on a bipartite lattice belong to this class. For this specific model subcubic scaling is achieved.

  8. Bayesian parameter estimation in dynamic population model via particle Markov chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Meng Gao

    2012-12-01

    Full Text Available In nature, population dynamics are subject to multiple sources of stochasticity. State-space models (SSMs) provide an ideal framework for incorporating both environmental noise and measurement errors into dynamic population models. In this paper, we present a recently developed method, Particle Markov Chain Monte Carlo (Particle MCMC), for parameter estimation in nonlinear SSMs. We use one effective Particle MCMC algorithm, the Particle Gibbs sampling algorithm, to estimate the parameters of a state-space model of population dynamics. The posterior distributions of the parameters are derived given conjugate prior distributions. Numerical simulations showed that the model parameters can be accurately estimated whether the deterministic model is stable, periodic or chaotic. Moreover, we fit the model to 16 representative time series from the Global Population Dynamics Database (GPDD). It is verified that the results of parameter and state estimation using the Particle Gibbs sampling algorithm are satisfactory for a majority of the time series. For the other time series, the quality of parameter estimation can also be improved if prior knowledge is imposed as a constraint. In conclusion, the Particle Gibbs sampling algorithm provides a new Bayesian parameter inference method for studying population dynamics.
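
    The sketch below shows the bootstrap particle filter, the likelihood-estimating building block inside Particle MCMC, applied to a toy state-space model (stochastic Ricker dynamics with Poisson observations); the model, its parameter values and the particle count are assumptions for illustration, and the full Particle Gibbs sampler of the paper is not implemented.

      # Minimal sketch: bootstrap particle filter for a toy stochastic Ricker SSM.
      import numpy as np

      rng = np.random.default_rng(0)
      r, sigma_proc, T, n_particles = 2.5, 0.3, 50, 1000

      # Simulate synthetic data from the model.
      x = np.empty(T)
      x[0] = 5.0
      for t in range(1, T):
          x[t] = x[t - 1] * np.exp(r * (1 - x[t - 1] / 10.0) +
                                   rng.normal(scale=sigma_proc))
      y = rng.poisson(x)                       # observed counts

      def log_likelihood(r_test):
          """Particle-filter estimate of log p(y | r_test), up to a constant."""
          particles = np.full(n_particles, 5.0)
          loglik = 0.0
          for t in range(T):
              if t > 0:                        # propagate through the stochastic Ricker map
                  particles = particles * np.exp(
                      r_test * (1 - particles / 10.0) +
                      rng.normal(scale=sigma_proc, size=n_particles))
              # weight by the Poisson observation density (up to log y!), then resample
              logw = y[t] * np.log(particles) - particles
              w = np.exp(logw - logw.max())
              loglik += np.log(w.mean()) + logw.max()
              idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
              particles = particles[idx]
          return loglik

      print("log-likelihood at true r:", log_likelihood(2.5))
      print("log-likelihood at wrong r:", log_likelihood(1.0))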

  9. Electromagnetic and weak observables in the context of the shell model

    International Nuclear Information System (INIS)

    Wildenthal, B.H.

    1984-01-01

    Wave functions for A = 17-39 nuclei have been obtained from diagonalizations of a single Hamiltonian formulation in the complete sd-shell configuration space for each NTJ system. These wave functions are used to generate the one-body density matrices corresponding to weak and electromagnetic transitions and moments. These densities are combined with different assumptions for the single-particle matrix elements of the weak and electromagnetic operators to produce theoretical matrix elements. The predictions are compared with experiment to determine, in some ''linearly dependent'' fashion, the correctness of the wave functions themselves, the optimum values of the single-particle matrix elements, and the viability of the overall shell-model formulation. (author)

  10. Ab initio many-body perturbation theory and no-core shell model

    Science.gov (United States)

    Hu, B. S.; Wu, Q.; Xu, F. R.

    2017-10-01

    In many-body perturbation theory (MBPT) we always introduce a parameter N_shell to measure the maximal allowed number of major harmonic-oscillator (HO) shells for the single-particle basis, while the no-core shell model (NCSM) uses an N_max ℏΩ HO excitation truncation above the lowest HO configuration for the many-body basis. It is worth comparing the two different methods. Starting from “bare” and Okubo-Lee-Suzuki renormalized modern nucleon-nucleon interactions, NNLOopt and JISP16, we show that MBPT within Hartree-Fock bases is in reasonable agreement with NCSM within harmonic oscillator bases for ⁴He and ¹⁶O in “close” model spaces. In addition, we compare the results using the “bare” force with the Okubo-Lee-Suzuki renormalized force. Supported by National Key Basic Research Program of China (2013CB834402), National Natural Science Foundation of China (11235001, 11320101004, 11575007) and the CUSTIPEN (China-U.S. Theory Institute for Physics with Exotic Nuclei) funded by the U.S. Department of Energy, Office of Science (DE-SC0009971)

  11. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques for modeling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. The number of algorithm repetitions (the selected volumes of reactor used for modelling, which represent the number of initial molecules) is very important in this method, because Monte Carlo calculations are based on random number generation and reaction probability determinations. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not large enough, because in that case the selected volume would not be representative of the whole system.
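
    As a rough illustration of the sample-size point (not the authors' algorithm), the sketch below runs a toy Monte Carlo of first-order initiator decomposition with arbitrary rate constant, time step, and run counts; the scatter of the estimated conversion across repeated runs shrinks as the number of simulated initiator molecules grows.

```python
import numpy as np

def simulate_initiation(n_molecules, k_d=1.0e-4, dt=1.0, n_steps=5000, seed=None):
    """Monte Carlo of first-order initiator decomposition I -> 2R*.

    Each surviving initiator molecule decomposes in a time step with
    probability p = 1 - exp(-k_d * dt).  Returns the simulated conversion.
    """
    rng = np.random.default_rng(seed)
    p = 1.0 - np.exp(-k_d * dt)
    alive = n_molecules
    for _ in range(n_steps):
        alive -= rng.binomial(alive, p)       # number of molecules decomposing this step
    return 1.0 - alive / n_molecules

analytic = 1.0 - np.exp(-1.0e-4 * 1.0 * 5000)   # exact first-order conversion
for n in (100, 1_000, 10_000, 100_000):
    runs = [simulate_initiation(n, seed=s) for s in range(20)]
    print(f"N = {n:7d}: mean = {np.mean(runs):.4f}, "
          f"std = {np.std(runs):.4f}, analytic = {analytic:.4f}")
```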

  12. The geometry of morphospaces: lessons from the classic Raup shell coiling model.

    Science.gov (United States)

    Gerber, Sylvain

    2017-05-01

    Morphospaces are spatial depictions of morphological variation among biological forms that have become an integral part of the analytical toolkit of evolutionary biologists and palaeobiologists. Nevertheless, the term morphospace brings together a great variety of spaces with different geometries. In particular, many morphospaces lack the metric properties underlying the notions of distance and direction, which are, however, central to the analysis of morphological differences and evolutionary transitions. The problem is illustrated here with the iconic morphospace of coiled shells implemented by Raup 50 years ago. The model, which allows the description of shell coiling geometry of various invertebrate taxa, is a seminal reference in theoretical morphology and morphospace theory, but also a morphometric framework frequently used in empirical studies, particularly of ammonoids. Because of the definition of its underlying parameters, Raup's morphospace does not possess a Euclidean structure and a meaningful interpretation of the spread and spacing of taxa within it is not guaranteed. Focusing on the region of the morphospace occupied by most ammonoids, I detail a landmark-based morphospace circumventing this problem and built from the same input measurements required for the calculation of Raup's parameters. From simulations and a reanalysis of Palaeozoic ammonoid shell disparity, the properties of these morphospaces are compared and their algebraic and geometric relationships highlighted. While Raup's model remains a valuable tool for describing ammonoid shells and relating their shapes to the coiling process, it is demonstrated that quantitative analyses of morphological patterns should be carried out within the landmark-based framework. Beyond this specific case, the increasing use and diversity of morphospaces in evolutionary morphology call for caution when interpreting patterns and comparing results drawn from different types of morphospaces. © 2016
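
    For readers unfamiliar with the underlying parametrization, the sketch below traces the helicospiral followed by the centre of Raup's generating curve from the classic parameters W (whorl expansion), D (relative distance from the coiling axis), and T (translation). It is a minimal stand-in with a simplified treatment of translation, not the landmark-based framework proposed in the paper; all parameter values are illustrative.

```python
import numpy as np

def raup_helicospiral(W=2.0, D=0.3, T=1.5, whorls=4, n=2000, r0=1.0):
    """Centre line of Raup's generating curve (a logarithmic helicospiral).

    W : whorl expansion rate per revolution
    D : relative distance of the generating curve from the coiling axis
    T : translation rate along the coiling axis (simplified treatment here)
    Returns x, y, z of the centre line and the generating-curve radius implied by D.
    """
    theta = np.linspace(0.0, 2.0 * np.pi * whorls, n)
    r_c = r0 * W ** (theta / (2.0 * np.pi))      # radial distance of the centre
    a = r_c * (1.0 - D) / (1.0 + D)              # generating-curve radius from D
    x = r_c * np.cos(theta)
    y = r_c * np.sin(theta)
    z = -T * (r_c - r0)                          # translation tied to radial growth
    return x, y, z, a

# planispiral ammonoid-like form (T = 0) versus a high-spired form (T large)
for label, T in (("planispiral", 0.0), ("high-spired", 3.0)):
    x, y, z, a = raup_helicospiral(W=2.5, D=0.35, T=T)
    print(f"{label}: final centre radius = {np.hypot(x[-1], y[-1]):.2f}, "
          f"total translation = {z[-1]:.2f}")
```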

  13. HR Del REMNANT ANATOMY USING TWO-DIMENSIONAL SPECTRAL DATA AND THREE-DIMENSIONAL PHOTOIONIZATION SHELL MODELS

    International Nuclear Information System (INIS)

    Moraes, Manoel; Diaz, Marcos

    2009-01-01

    The HR Del nova remnant was observed with the IFU-GMOS at Gemini North. The spatially resolved spectral data cube was used in the kinematic, morphological, and abundance analysis of the ejecta. The line maps show a very clumpy shell with two main symmetric structures. The first one is the outer part of the shell seen in Hα, which forms two rings projected in the sky plane. These ring structures correspond to a closed hourglass shape, first proposed by Harman and O'Brien. The equatorial emission enhancement is caused by the superimposed hourglass structures in the line of sight. The second structure seen only in the [O III] and [N II] maps is located along the polar directions inside the hourglass structure. Abundance gradients between the polar caps and equatorial region were not found. However, the outer part of the shell seems to be less abundant in oxygen and nitrogen than the inner regions. Detailed 2.5-dimensional photoionization modeling of the three-dimensional shell was performed using the mass distribution inferred from the observations and the presence of mass clumps. The resulting model grids are used to constrain the physical properties of the shell as well as the central ionizing source. A sequence of three-dimensional clumpy models including a disk-shaped ionization source is able to reproduce the ionization gradients between polar and equatorial regions of the shell. Differences between shell axial ratios in different lines can also be explained by aspherical illumination. A total shell mass of 9 × 10⁻⁴ M_⊙ is derived from these models. We estimate that 50%-70% of the shell mass is contained in neutral clumps with density contrast up to a factor of 30.

  14. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.

  15. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Runs in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. The results were compared to experimental data from the literature; both models showed good agreement with the experimental data. In parallel, runs below the critical temperature were performed in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid problems caused by its dissolution during gas production and transportation. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
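
    At its core, the canonical-ensemble part of such a study reduces to Metropolis sampling of particle configurations under the chosen potential. The sketch below is a bare-bones NVT Metropolis sampler for a reduced Lennard-Jones fluid (periodic box, minimum-image convention) that reports only the average potential energy per particle; it is far simpler than a production code, and all run parameters are illustrative.

```python
import numpy as np

def lj_energy_particle(i, pos, box, rc2=9.0):
    """LJ energy of particle i with all others (reduced units, cutoff sqrt(rc2))."""
    d = pos - pos[i]
    d -= box * np.round(d / box)             # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                           # exclude self-interaction
    r2 = r2[r2 < rc2]
    inv6 = (1.0 / r2) ** 3
    return np.sum(4.0 * (inv6 * inv6 - inv6))

def nvt_metropolis(n=64, rho=0.5, T=2.0, n_sweeps=500, max_disp=0.15, seed=0):
    rng = np.random.default_rng(seed)
    box = (n / rho) ** (1.0 / 3.0)
    pos = rng.random((n, 3)) * box           # random initial configuration
    beta = 1.0 / T
    energies = []
    for sweep in range(n_sweeps):
        for _ in range(n):
            i = rng.integers(n)
            old = lj_energy_particle(i, pos, box)
            saved = pos[i].copy()
            pos[i] = (pos[i] + (rng.random(3) - 0.5) * max_disp) % box
            new = lj_energy_particle(i, pos, box)
            if rng.random() >= np.exp(min(0.0, -beta * (new - old))):
                pos[i] = saved               # reject the trial move
        if sweep >= n_sweeps // 2:           # crude equilibration cut
            total = sum(lj_energy_particle(i, pos, box) for i in range(n))
            energies.append(total / (2.0 * n))   # pairs counted twice
    return np.mean(energies)

print("<U>/N (reduced units):", round(nvt_metropolis(), 3))
```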

  16. Rapid creation, Monte Carlo simulation, and visualization of realistic 3D cell models.

    Science.gov (United States)

    Czech, Jacob; Dittrich, Markus; Stiles, Joel R

    2009-01-01

    Spatially realistic diffusion-reaction simulations supplement traditional experiments and provide testable hypotheses for complex physiological systems. To date, however, the creation of realistic 3D cell models has been difficult and time-consuming, typically involving hand reconstruction from electron microscopic images. Here, we present a complementary approach that is much simpler and faster, because the cell architecture (geometry) is created directly in silico using 3D modeling software like that used for commercial film animations. We show how a freely available open source program (Blender) can be used to create the model geometry, which then can be read by our Monte Carlo simulation and visualization software (MCell and DReAMM, respectively). This new workflow allows rapid prototyping and development of realistic computational models, and thus should dramatically accelerate their use by a wide variety of computational and experimental investigators. Using two self-contained examples based on synaptic transmission, we illustrate the creation of 3D cellular geometry with Blender, addition of molecules, reactions, and other run-time conditions using MCell's Model Description Language (MDL), and subsequent MCell simulations and DReAMM visualizations. In the first example, we simulate calcium influx through voltage-gated channels localized on a presynaptic bouton, with subsequent intracellular calcium diffusion and binding to sites on synaptic vesicles. In the second example, we simulate neurotransmitter release from synaptic vesicles as they fuse with the presynaptic membrane, subsequent transmitter diffusion into the synaptic cleft, and binding to postsynaptic receptors on a dendritic spine.

  17. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (a community effect on height). In addition to a neighborhood influence factor, which simulates this community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. To determine the direction of migration for taller individuals, network hubs were defined by the importance of a district within the spatial network, evaluated by various centrality measures. Taller individuals were favored to migrate into network hubs; backward migration of the same number of individuals was random, not biased towards body height. In the null model there were no road connections, so height information could not be transferred between the districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later

  18. Modelling by the SPH method of the impact of a shell containing a fluid

    International Nuclear Information System (INIS)

    Maurel, B.

    2008-01-01

    The aim of this work was to develop a numerical simulation tool using a mesh-less approach, able to simulate the deformation and rupture of thin structures under the impact of a fluid. A thick mesh-less shell model (Mindlin-Reissner) based on the SPH method was therefore developed. In addition, a contact algorithm was devised for the interactions between the structure and the fluid, which is also modelled by the SPH method. These developments have been carried out and included in the CEA Europlexus fast dynamics software. (O.M.)

  19. Exotic muon-to-positron conversion in nuclei: partial transition sum evaluation by using shell model

    International Nuclear Information System (INIS)

    Divari, P.C.; Vergados, J.D.; Kosmas, T.S.; Skouras, L.D.

    2001-01-01

    A comprehensive study of the exotic (μ⁻, e⁺) conversion in ²⁷Al, ²⁷Al(μ⁻, e⁺)²⁷Na, is presented. The relevant operators are deduced assuming one-pion and two-pion modes in the framework of intermediate neutrino mixing models, paying special attention to the light neutrino case. The total rate is calculated by summing over partial transition strengths for all kinematically accessible final states derived with s-d shell model calculations employing the well-known Wildenthal realistic interaction.

  20. Onion-shell model for cosmic ray electrons and radio synchrotron emission in supernova remnants

    Science.gov (United States)

    Beck, R.; Drury, L. O.; Voelk, H. J.; Bogdan, T. J.

    1985-01-01

    The spectrum of cosmic ray electrons, accelerated in the shock front of a supernova remnant (SNR), is calculated in the test-particle approximation using an onion-shell model. Particle diffusion within the evolving remnant is explicitly taken into account. The particle spectrum becomes steeper with increasing radius as well as SNR age. Simple models of the magnetic field distribution allow a prediction of the intensity and spectrum of radio synchrotron emission and their radial variation. The agreement with existing observations is satisfactory in several SNR's but fails in other cases. Radiative cooling may be an important effect, especially in SNR's exploding in a dense interstellar medium.

  1. Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods

    Science.gov (United States)

    Hurst, T.; Smith, W. D.; Bibby, H. M.

    2003-12-01

    We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
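
    The aggregation logic can be illustrated with a heavily simplified stand-in for ASHFALL: each simulated year, each volcano erupts with some annual probability, eruptive volume and wind direction are drawn at random, ash thickness at the site follows a toy attenuation relation, and exceedance counts are tallied. All probabilities, distances, and the thickness relation below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# (annual eruption probability, distance to site [km], mean log10 volume [km3])
volcanoes = {
    "VolcanoA": (0.02, 50.0, -1.0),
    "VolcanoB": (0.005, 280.0, 0.0),
}

def ash_thickness_mm(volume_km3, distance_km, wind_toward_site):
    """Toy attenuation relation standing in for the ASHFALL dispersal model."""
    directional = 1.0 if wind_toward_site else 0.1
    return 1.0e4 * volume_km3 * np.exp(-distance_km / 100.0) * directional

n_years = 100_000
thresholds_mm = np.array([1.0, 10.0, 100.0])
exceed = np.zeros_like(thresholds_mm)

for _ in range(n_years):
    total = 0.0
    for p_erupt, dist, mu_logv in volcanoes.values():
        if rng.random() < p_erupt:
            volume = 10.0 ** rng.normal(mu_logv, 0.5)   # lognormal eruptive volume
            toward = rng.random() < 0.25                # wind blows toward site 25% of the time
            total += ash_thickness_mm(volume, dist, toward)
    exceed += total > thresholds_mm

for t, count in zip(thresholds_mm, exceed):
    p = count / n_years
    rp = (1.0 / p) if p > 0 else float("inf")
    print(f">= {t:6.1f} mm of ash: annual probability {p:.2e}, "
          f"mean return period {rp:,.0f} years")
```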

  2. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    International Nuclear Information System (INIS)

    Parsons, Neal; Levin, Deborah A.; Duin, Adri C. T. van; Zhu, Tong

    2014-01-01

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N₂(¹Σg⁺)–N₂(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  3. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Neal, E-mail: neal.parsons@cd-adapco.com; Levin, Deborah A., E-mail: deblevin@illinois.edu [Department of Aerospace Engineering, The Pennsylvania State University, 233 Hammond Building, University Park, Pennsylvania 16802 (United States); Duin, Adri C. T. van, E-mail: acv13@engr.psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States); Zhu, Tong, E-mail: tvz5037@psu.edu [Department of Aerospace Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States)

    2014-12-21

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N₂(¹Σg⁺)–N₂(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  4. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations

    Science.gov (United States)

    Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.

    2010-11-01

    We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al. to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties, until they lose all their energy, escape into the air, or return to the VOI, but the energy deposition outside of the VOI is not computed and recorded. After discussing the selection of tissue parameters, we apply the model to analysis of blood photocoagulation and collateral thermal damage in treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
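
    The core of any such photon-transport code is a weighted random walk with absorption and scattering. The sketch below is a deliberately stripped-down single-layer version with isotropic scattering (no polarization, no voxelized 3D geometry, no cooling physics), so it only illustrates the bookkeeping, not the model described above; the optical properties are placeholder values.

```python
import numpy as np

def mc_slab_absorption(n_photons=20000, mu_a=0.5, mu_s=10.0, thickness=0.2,
                       n_bins=40, seed=0):
    """Weighted photon random walk in a slab (isotropic scattering, matched boundaries).

    mu_a, mu_s in 1/mm, thickness in mm.  Returns the depth-resolved absorbed fraction.
    """
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    absorbed = np.zeros(n_bins)
    dz = thickness / n_bins
    for _ in range(n_photons):
        z, w = 0.0, 1.0                       # depth and photon weight
        uz = 1.0                              # launched straight down
        while True:
            step = -np.log(rng.random()) / mu_t      # free path from Beer-Lambert law
            z += uz * step
            if z < 0.0 or z > thickness:             # photon leaves the slab
                break
            absorbed[min(int(z / dz), n_bins - 1)] += w * mu_a / mu_t
            w *= mu_s / mu_t                         # surviving (scattered) weight
            uz = 2.0 * rng.random() - 1.0            # isotropic new direction cosine
            if w < 1e-4:                             # Russian roulette termination
                if rng.random() < 0.1:
                    w /= 0.1
                else:
                    break
    return absorbed / n_photons

profile = mc_slab_absorption()
print("absorbed fraction per depth bin (first 5 bins):", np.round(profile[:5], 4))
```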

  5. Kinetic Monte Carlo Potts Model for Simulating a High Burnup Structure in UO2

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    A Potts model, based on the kinetic Monte Carlo method, was originally developed for magnetic domain evolution, but it was also proposed as a model for grain growth in polycrystals due to similarities between Potts domain structures and grain structures. It has been used to model various microstructural phenomena such as grain growth, recrystallization, sintering, and so on. A high burnup structure (HBS) is observed in the periphery of high burnup UO₂ fuel. Although its formation mechanism is not clearly understood yet, its characteristics are well recognized: the HBS microstructure consists of very small grains and large bubbles instead of the original as-sintered grains. A threshold burnup for the HBS is observed at a local burnup of 60-80 GWd/tM, and the threshold temperature is 1000-1200 °C. Concerning energy stability, the HBS can be created if the system energy of the HBS is lower than that of the original structure in irradiated UO₂. In this paper, a Potts model was implemented to simulate the HBS by calculating system energies, and the simulation results were compared with the HBS characteristics mentioned above.
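
    For context, a minimal Q-state Potts grain-coarsening kernel (Metropolis site flips on a 2D lattice, with the energy counted as the number of unlike nearest-neighbour bonds) is sketched below. The actual HBS simulation adds bubbles and irradiation-specific energetics, so this is only the common starting point; the lattice size, Q, and temperature are illustrative.

```python
import numpy as np

def potts_coarsening(L=32, Q=32, n_sweeps=200, kT=0.5, seed=0):
    """2D Q-state Potts model: energy = number of unlike nearest-neighbour bonds."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(Q, size=(L, L))

    def unlike_bonds(i, j, s):
        """Unlike-neighbour count of site (i, j) if it held state s (periodic lattice)."""
        return sum(s != spins[(i + di) % L, (j + dj) % L]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            new = rng.integers(Q)
            dE = unlike_bonds(i, j, new) - unlike_bonds(i, j, spins[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / kT):
                spins[i, j] = new            # accept the Metropolis flip
    # crude grain-size proxy: lattice sites per distinct surviving state
    return L * L / len(np.unique(spins))

print("sites per surviving Potts state after coarsening:", round(potts_coarsening(), 1))
```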

  6. General hypothesis and shell model for the synthesis of semiconductor nanotubes, including carbon nanotubes

    Science.gov (United States)

    Mohammad, S. Noor

    2010-09-01

    Semiconductor nanotubes, including carbon nanotubes, have vast potential for new technology development. The fundamental physics and growth kinetics of these nanotubes are still obscured. Various models developed to elucidate the growth suffer from limited applicability. An in-depth investigation of the fundamentals of nanotube growth has, therefore, been carried out. For this investigation, various features of nanotube growth, and the role of the foreign element catalytic agent (FECA) in this growth, have been considered. Observed growth anomalies have been analyzed. Based on this analysis, a new shell model and a general hypothesis have been proposed for the growth. The essential element of the shell model is the seed generated from segregation during growth. The seed structure has been defined, and the formation of the droplet from this seed has been described. A modified definition of the droplet exhibiting adhesive properties has also been presented. Various characteristics of the droplet, required for alignment and organization of atoms into tubular forms, have been discussed. Employing the shell model, plausible scenarios for the formation of carbon nanotubes, and the variation in the characteristics of these carbon nanotubes, have been articulated. Experimental evidence, for example for the formation of a shell around a core, the dipole characteristics of the seed, and the existence of nanopores in the seed, has been presented; it appears to justify the validity of the proposed model. The diversities of nanotube characteristics, the fundamentals underlying the creation of bamboo-shaped carbon nanotubes, and the impurity generation on the surface of carbon nanotubes have been elucidated. The catalytic action of FECA on growth has been quantified. The applicability of the proposed model to nanotube growth by a variety of mechanisms has been elaborated. These mechanisms include the vapor-liquid-solid mechanism, the oxide-assisted growth mechanism, the self

  7. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ¹⁸O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ¹⁸O, δD, and ²²²Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from εNd and ⁸⁷Sr/⁸⁶Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
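
    In spirit, such a mixing model draws candidate source fractions and end-member compositions from prior distributions and keeps the draws that reproduce the measured mixture within its uncertainty. A compact two-tracer, three-source rejection-sampling version is sketched below; it is not the published code, and all isotopic values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# end-member means and standard deviations for (d18O, dD) -- invented values
sources = {
    "snow": ((-20.0, -155.0), (1.0, 8.0)),
    "ice":  ((-23.0, -175.0), (0.8, 6.0)),
    "rain": ((-14.0, -105.0), (1.5, 10.0)),
}
mixture = np.array([-20.5, -158.0])        # measured bulk meltwater (invented)
mixture_sd = np.array([0.5, 4.0])          # measurement / natural variability

names = list(sources)
n_draws = 500_000
frac = rng.dirichlet(np.ones(len(names)), size=n_draws)      # prior on source fractions
ends = np.stack([rng.normal(m, s, size=(n_draws, 2))         # prior on end-members
                 for m, s in sources.values()], axis=1)      # shape (n_draws, 3, 2)

predicted = np.einsum('ns,nsk->nk', frac, ends)              # mixture implied by each draw
ok = np.all(np.abs(predicted - mixture) < 2.0 * mixture_sd, axis=1)
post = frac[ok]                                              # accepted fraction draws

for k, name in enumerate(names):
    print(f"{name:5s}: fraction = {post[:, k].mean():.2f} "
          f"+/- {post[:, k].std():.2f}  (n accepted = {len(post)})")
```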

  8. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report creation of the digital elevation model of Olkiluoto area incorporating a large area of seabed is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data like contour lines and irregular elevation measurements were used as source data in the process. The precision and reliability of the available source data varied largely. Digital elevation model (DEM) comprises a representation of the elevation of the surface of the earth in particular area in digital format. DEM is an essential component of geographic information systems designed for the analysis and visualization of the location-related data. DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods the thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in the test where certain amount of points was removed from the data and the resulting model looked most natural. In addition to the elevation data the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM had thus as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and evolution of the landscape in the time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
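
    A toy version of that workflow (thin plate spline interpolation of scattered elevation points onto a grid, repeated over noisy realizations of the input to obtain a per-cell confidence band) can be written with SciPy's radial basis function interpolator as below. Point counts, noise levels, and grid size are arbitrary stand-ins for the real survey data.

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)

# scattered "measurements": elevation of a synthetic surface plus measurement noise
n_pts = 200
x, y = rng.uniform(0, 10, n_pts), rng.uniform(0, 10, n_pts)
z_true = 5.0 * np.sin(0.6 * x) + 0.5 * y
sigma = np.where(rng.random(n_pts) < 0.3, 1.0, 0.2)   # two data-quality classes

# target grid (coarse stand-in for the 2.5 m x 2.5 m DEM raster)
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))

n_real = 100
stack = np.empty((n_real, *gx.shape))
for k in range(n_real):
    z_k = z_true + rng.normal(0.0, sigma)              # one noisy realization of the data
    tps = Rbf(x, y, z_k, function='thin_plate', smooth=0.1)
    stack[k] = tps(gx, gy)                             # thin plate spline surface

dem_mean = stack.mean(axis=0)
dem_halfwidth = 1.96 * stack.std(axis=0)               # ~95% confidence half-width per cell
print("grid mean elevation: %.2f m, median 95%% half-width: %.2f m"
      % (dem_mean.mean(), np.median(dem_halfwidth)))
```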

  9. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  10. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    Background: Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed, but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods: The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F²-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results: GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS

  11. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression.

    Science.gov (United States)

    Walker, Jeffrey A

    2016-01-01

    Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F²-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS distributions suggest that the GLS results in

  12. Analysis of Composite Skin-Stiffener Debond Specimens Using a Shell/3D Modeling Technique and Submodeling

    Science.gov (United States)

    OBrien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.

    2004-01-01

    The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.

  13. Monte Carlo simulation of atomic short range order and cluster formation in two dimensional model alloys

    International Nuclear Information System (INIS)

    Rojas T, J.; Instituto Peruano de Energia Nuclear, Lima; Manrique C, E.; Torres T, E.

    2002-01-01

    Using Monte Carlo simulation, an atomistic description of the structure and ordering processes in the Cu-Au system has been carried out in a two-dimensional model. The ABV model of the alloy is a system of N atoms A and B located on a rigid lattice with some vacant sites. In the model we assume pairwise interactions between nearest neighbors with a constant ordering energy J = 0.03 eV. The dynamics was introduced by means of a vacancy that exchanges places with any of its neighboring atoms. The simulations were carried out on a square lattice with 1024 and 4096 particles, using periodic boundary conditions to avoid border effects. We calculate the first two Warren-Cowley short-range order parameters as functions of the concentration and temperature. The probabilities of formation of different atomic clusters consisting of 9 atoms were also studied as functions of the alloy concentration and temperature over a wide range of values. In some regions of temperature and concentration, compositional and thermal polymorphism was observed.
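
    The first Warren-Cowley parameter mentioned above is straightforward to evaluate from a lattice configuration: alpha_1 = 1 - P(B | nearest neighbour of A) / c_B, which is negative for an ordering (unlike-neighbour) preference and positive for clustering. The sketch below computes it on a 2D square lattice for a random configuration and for a fully ordered checkerboard; it does not reproduce the vacancy-exchange dynamics or the J = 0.03 eV ordering energy of the study.

```python
import numpy as np

def warren_cowley_alpha1(lattice, c_b):
    """First Warren-Cowley SRO parameter on a periodic 2D square lattice.

    lattice : 2D int array, 0 = A atom, 1 = B atom
    c_b     : overall concentration of B
    alpha1 = 1 - P(B | nearest neighbour of an A atom) / c_b
    """
    a_sites = (lattice == 0)
    # number of B atoms among the four nearest neighbours of every site
    b_neighbours = sum(np.roll(lattice, shift, axis=ax)
                       for shift, ax in ((1, 0), (-1, 0), (1, 1), (-1, 1)))
    p_ab = b_neighbours[a_sites].sum() / (4.0 * a_sites.sum())
    return 1.0 - p_ab / c_b

rng = np.random.default_rng(3)
c_b = 0.25                                           # Cu3Au-like composition
random_cfg = (rng.random((64, 64)) < c_b).astype(int)
print("alpha1 (random configuration):", round(warren_cowley_alpha1(random_cfg, c_b), 3))

# a perfectly ordered checkerboard at c_B = 0.5 gives the fully ordered limit alpha1 = -1
checker = np.indices((64, 64)).sum(axis=0) % 2
print("alpha1 (checkerboard, c_B = 0.5):", round(warren_cowley_alpha1(checker, 0.5), 3))
```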

  14. Modeling Monte Carlo of multileaf collimators using the code GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the production of the radiation beam by the Linac is simulated and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the transport of particles (sampled from the phase space) in certain irradiation field configurations is simulated to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  15. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

    Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, a high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run-time, since the pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performance represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro's equations. The code is coupled with a Monte Carlo algorithm to evaluate the statistics and propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up for the reproduction of a small-scale rocket motor, discussing a set of parametric investigations on uncertainty propagation across the ballistic model.
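
    The propagation step can be illustrated with a zero-dimensional steady-state chamber-pressure relation instead of the quasi-1D Shapiro model used in the paper: sample the uncertain design inputs, push each sample through the ballistic relation p_c = (rho_p * a * c* * A_b / A_t)^(1/(1-n)), and read off the statistics of the performance parameter. All input values and uncertainties below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples = 100_000

# uncertain inputs (mean, 1-sigma) -- illustrative values only
a      = rng.normal(1.5e-5, 0.03 * 1.5e-5, n_samples)    # burn-rate coefficient, m/s/Pa^n
n_exp  = rng.normal(0.35, 0.01, n_samples)               # burn-rate pressure exponent
rho_p  = rng.normal(1750.0, 0.01 * 1750.0, n_samples)    # propellant density, kg/m^3
c_star = rng.normal(1500.0, 0.01 * 1500.0, n_samples)    # characteristic velocity, m/s
A_b    = rng.normal(0.50, 0.02 * 0.50, n_samples)        # burning surface area, m^2
D_t    = rng.normal(0.030, 0.001 * 0.030, n_samples)     # throat diameter, m

A_t = np.pi * D_t ** 2 / 4.0
# steady-state chamber pressure: mass generated by burning = mass expelled by the nozzle
p_c = (rho_p * a * c_star * A_b / A_t) ** (1.0 / (1.0 - n_exp))

print(f"chamber pressure: mean = {p_c.mean() / 1e6:.2f} MPa, "
      f"1-sigma = {p_c.std() / 1e6:.2f} MPa, "
      f"P5/P95 = {np.percentile(p_c, 5) / 1e6:.2f}/{np.percentile(p_c, 95) / 1e6:.2f} MPa")
```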

  16. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    Science.gov (United States)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

    Ebola virus infection is a severe infectious disease with the highest case fatality rate, and it has now become a global public health threat. What makes the disease worst of all is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of the Ebola epidemic has been developed and comprehensively analyzed. The existence as well as the uniqueness of the solution to the model is verified and the basic reproduction number is calculated. Besides, stability conditions are also checked, and finally the simulation is done using both the Euler method and one of the most influential algorithms, the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination, to predict the effect of vaccination on the infected individuals over time, and of quarantine are discussed. The results show that quarantine and vaccination are very effective ways to control the Ebola epidemic. From our study it was also seen that an individual is less likely to contract the Ebola virus a second time after surviving a first infection. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of an Ebola epidemic.

  17. Monte-Carlo modelling to determine optimum filter choices for sub-microsecond optical pyrometry

    Science.gov (United States)

    Ota, Thomas A.; Chapman, David J.; Eakins, Daniel E.

    2017-04-01

    When designing a spectral-band pyrometer for use at high time resolutions (sub-μs), there is ambiguity regarding the optimum characteristics for a spectral filter(s). In particular, while prior work has discussed uncertainties in spectral-band pyrometry, there has been little discussion of the effects of noise which is an important consideration in time-resolved, high speed experiments. Using a Monte-Carlo process to simulate the effects of noise, a model of collection from a black body has been developed to give insights into the optimum choices for centre wavelength and passband width. The model was validated and then used to explore the effects of centre wavelength and passband width on measurement uncertainty. This reveals a transition centre wavelength below which uncertainties in calculated temperature are high. To further investigate system performance, simultaneous variation of the centre wavelength and bandpass width of a filter is investigated. Using data reduction, the effects of temperature and noise levels are illustrated and an empirical approximation is determined. The results presented show that filter choice can significantly affect instrument performance and, while best practice requires detailed modelling to achieve optimal performance, the expression presented can be used to aid filter selection.

  18. Two electric field Monte Carlo models of coherent backscattering of polarized light.

    Science.gov (United States)

    Doronin, Alexander; Radosevich, Andrew J; Backman, Vadim; Meglinski, Igor

    2014-11-01

    Modeling of coherent polarized light propagation in turbid scattering medium by the Monte Carlo method provides an ultimate understanding of coherent effects of multiple scattering, such as enhancement of coherent backscattering and peculiarities of laser speckle formation in dynamic light scattering (DLS) and optical coherence tomography (OCT) diagnostic modalities. In this report, we consider two major ways of modeling the coherent polarized light propagation in scattering tissue-like turbid media. The first approach is based on tracking transformations of the electric field along the ray propagation. The second one is developed in analogy to the iterative procedure of the solution of the Bethe-Salpeter equation. To achieve a higher accuracy in the results and to speed up the modeling, both codes utilize the implementation of parallel computing on NVIDIA Graphics Processing Units (GPUs) with Compute Unified Device Architecture (CUDA). We compare these two approaches through simulations of the enhancement of coherent backscattering of polarized light and evaluate the accuracy of each technique with the results of a known analytical solution. The advantages and disadvantages of each computational approach and their further developments are discussed. Both codes are available online and are ready for immediate use or download.

  19. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
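
    The contrast with the minimal-cut-set route can be seen on a toy two-unit example: sample the basic events directly, evaluate each unit's top event, and estimate the multiunit core-damage probability from the fraction of samples in which both units fail. The rare-event approximation (summing cut-set probabilities) overshoots once basic-event probabilities are large, as in seismic scenarios; all event probabilities and the unit logic below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# basic-event probabilities (per demand), deliberately large as in a severe seismic scenario
p_ccf_power = 0.3    # common-cause loss of offsite power affecting both units
p_dg_fail   = 0.2    # emergency diesel generator fails to run (per unit)
p_pump_fail = 0.1    # injection pump fails (per unit)

ccf = rng.random(n) < p_ccf_power
dg_u1, dg_u2 = (rng.random(n) < p_dg_fail for _ in range(2))
pm_u1, pm_u2 = (rng.random(n) < p_pump_fail for _ in range(2))

# simplistic unit logic: core damage if offsite power is lost AND (DG fails OR pump fails)
cd_u1 = ccf & (dg_u1 | pm_u1)
cd_u2 = ccf & (dg_u2 | pm_u2)
both = cd_u1 & cd_u2

mc_estimate = both.mean()

# rare-event approximation from the four multiunit minimal cut sets:
# {ccf, dg1, dg2}, {ccf, dg1, pm2}, {ccf, pm1, dg2}, {ccf, pm1, pm2}
ree = p_ccf_power * (p_dg_fail + p_pump_fail) ** 2

print(f"Monte Carlo estimate of both-unit core damage: {mc_estimate:.4f}")
print(f"Rare-event (sum of cut sets) approximation:    {ree:.4f}")
```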

  20. Restricted primitive model for electrical double layers: modified HNC theory of density profiles and Monte Carlo study of differential capacitance

    International Nuclear Information System (INIS)

    Ballone, P.; Pastore, G.; Tosi, M.P.

    1986-02-01

    Interfacial properties of an ionic fluid next to a uniformly charged planar wall are studied in the restricted primitive model by both theoretical and Monte Carlo methods. The system is a 1:1 fluid of equisized charged hard spheres in a state appropriate to 1M aqueous electrolyte solutions. The interfacial density profiles of counterions and coions are evaluated by extending the hypernetted chain approximation (HNC) to include the leading bridge diagrams for the wall-ion correlations. The theoretical results compare well with those of grand canonical Monte Carlo computations of Torrie and Valleau over the whole range of surface charge density considered by these authors, thus resolving the earlier disagreement between statistical mechanical theories and simulation data at large charge densities. In view of the importance of the model as a testing ground for theories of the diffuse layer, the Monte Carlo calculations are tested by considering alternative choices for the basic simulation cell and are extended so as to allow an evaluation of the differential capacitance of the model interface by two independent methods. These involve numerical differentiation of the mean potential drop as a function of the surface charge density or alternatively an appropriate use of a fluctuation theory formula for the capacitance. The results of these two Monte Carlo approaches consistently indicate an initially smooth increase of the diffuse layer capacitance followed by structure at large charge densities, this behaviour being connected with layering of counterions as already revealed in the density profiles reported by Torrie and Valleau. (author)

  1. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  2. A Monte-Carlo Model for Microstructure-Induced Ultrasonic Signal Fluctuations in Titanium Alloy Inspections

    International Nuclear Information System (INIS)

    Yu Linxiao; Thompson, R.B.; Margetan, F.J.; Wang Yurong

    2004-01-01

    In ultrasonic inspections of some jet-engine alloys, microstructural inhomogeneities act to significantly distort the amplitude and phase profiles of the incident sonic beam, and these distortions lead in turn to ultrasonic amplitude variations. For example, in pulse/echo inspections the back-wall signal amplitude is often seen to fluctuate dramatically when scanning a transducer parallel to a flat specimen. The stochastic nature of the ultrasonic response has obvious implications for both flaw characterization and probability of detection, and tools to estimate fluctuation levels are needed. In this study, as a first step, we develop a quantitative Monte-Carlo model to predict the back-wall amplitude fluctuations seen in ultrasonic pulse/echo inspections. Inputs to the model include statistical descriptions of various beam distortion effects, namely: the lateral 'drift' of the center-of-energy about its expected position; the distortion of pressure amplitude about its expected pattern; and two types of wave-front distortion ('wrinkling' and 'tilting'). The model inputs are deduced by analyzing through-transmission measurements in which the sonic beam emerging from an immersed metal specimen is mapped using a small receiver. The mapped field is compared to the model prediction for a homogeneous metal, and statistical parameters describing the differences are deduced using the technique of 'maximum likelihood estimation' (MLE). Our modeling approach is demonstrated using rectangular coupons of jet-engine Titanium alloys, and predicted back-wall fluctuation levels are shown to be in good agreement with experiment. As a new way of modeling ultrasonic signal fluctuations, the approach outlined in this paper suggests many possibilities for future research

  3. On the weaknesses of the valence shell electron pair repulsion (VSEPR) model

    Science.gov (United States)

    Røeggen, Inge

    1986-07-01

    The validity of the valence shell electron pair repulsion model (VSEPR) is discussed within the framework of an antisymmetric product of strongly orthogonal geminals (APSG). It is shown that when a molecule is partitioned onto fragments consisting of a central fragment, lone pairs, bond pairs, and ligands, the total APSG energy including the nuclear repulsion terms, can be written as a sum of intra- and interfragment energies. The VSEPR terms can be identified as three out of 13 different energy components. The analysis is applied to the water molecule. Six of the neglected energy components in the VSEPR model have a larger variation with the bond angle than the terms which are included in the model. According to this analysis it is difficult to consider the VSEPR model as a valid framework for discussing molecular equilibrium geometries. It is suggested that energy fragment analysis might represent an alternative model.

  4. Non-linear rotation-free shell finite-element models for aortic heart valves.

    Science.gov (United States)

    Gilmanov, Anvar; Stolarski, Henryk; Sotiropoulos, Fotis

    2017-01-04

    Hyperelastic material models have been incorporated in the rotation-free, large-deformation shell finite element (FE) formulation of (Stolarski et al., 2013) and applied to dynamic simulations of an aortic heart valve. Two models used in the past in the analysis of such problems, i.e. the Saint-Venant and May-Newman-Yin (MNY) material models, have been considered and compared. Uniaxial tests for those constitutive equations were performed to verify the formulation and implementation of the models. The issue of leaflet interactions during the closing of the heart valve at the end of systole is considered. The critical role of using a non-linear anisotropic model for the proper dynamic response of the heart valve, especially during the closing phase, is demonstrated quantitatively. This work contributes an efficient FE framework for simulating biological tissues and paves the way for high-fidelity flow-structure interaction simulations of native and bioprosthetic aortic heart valves.

  5. Neutron and gamma sensitivities of self-powered detectors: Monte Carlo modelling

    International Nuclear Information System (INIS)

    Vermeeren, Ludo

    2015-01-01

    This paper deals with the development of a detailed Monte Carlo approach for the calculation of the absolute neutron sensitivity of SPNDs, which makes use of the MCNP code. We will explain the calculation approach, including the activation and beta emission steps, the gamma-electron interactions, the charge deposition in the various detector parts and the effect of the space charge field in the insulator. The model can also be applied for the calculation of the gamma sensitivity of self-powered detectors and for the radiation-induced currents in signal cables. The model yields detailed information on the various contributions to the sensor currents, with distinct response times. Results for the neutron sensitivity of various types of SPNDs are in excellent agreement with experimental data obtained at the BR2 research reactor. For typical neutron to gamma flux ratios, the calculated gamma induced SPND currents are significantly lower than the neutron induced currents. The gamma sensitivity depends very strongly upon the immediate detector surroundings and on the gamma spectrum. Our calculation method opens the way to a reliable on-line determination of the absolute in-pile thermal neutron flux. (authors)

  6. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous reduction in computational time, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, an excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
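
    The role of such a surrogate can be illustrated with a minimal, hypothetical sketch (Python/NumPy): a handful of noisy evaluations of a toy stand-in for an expensive MC molecular simulation are fitted with a Legendre polynomial expansion, which is then essentially free to evaluate. The toy response function, noise level, and polynomial degree below are illustrative assumptions, not the ensembles or the PC basis used in the study.

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(0)

        # Toy stand-in for an expensive MC molecular simulation: a pressure-like
        # response versus a single normalized LJ parameter x in [-1, 1].
        def expensive_mc_simulation(x):
            return np.sin(2.5 * x) + 0.3 * x**2 + rng.normal(0.0, 0.02, size=np.shape(x))

        # A small number of "training" MC runs at sampled parameter values.
        x_train = np.linspace(-1.0, 1.0, 15)
        y_train = expensive_mc_simulation(x_train)

        # Least-squares fit of a degree-6 Legendre expansion (PC-style surrogate).
        coeffs = legendre.legfit(x_train, y_train, deg=6)

        # The surrogate can now be evaluated anywhere in [-1, 1] at negligible cost.
        x_eval = np.linspace(-1.0, 1.0, 5)
        print(np.round(legendre.legval(x_eval, coeffs), 3))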

  7. Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah

    2013-01-01

    The aim of this research is to simulate the effective use of a general-purpose irradiation chamber to contain pure neutron particles obtained from a research reactor. The secondary neutron and gamma dose discharged from the chamber layers will be used as a platform to estimate the safe dimensions of the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial-grade aluminium (Al) cladding, is proposed for interacting samples with pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation with the general-purpose Monte Carlo N-Particle transport code, using the Los Alamos MCNPX software. Simulations were performed on the model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model is based on the source found in the PUSPATI TRIGA MARK II research reactor, which holds a maximum flux value of 1 x 10^12 neutrons/cm^2·s. The expected outcomes of this research are a zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day discharged from the chamber system. (author)

  8. Application of a Monte Carlo linac model in routine verifications of dose calculations

    International Nuclear Information System (INIS)

    Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.

    2015-01-01

    The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code. The optimal beam parameter values (energy and FWHM) obtained previously were used as the reference for the simulations. Deposited dose calculations in water phantoms were done for typical complex geometries commonly used in acceptance and quality control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between them were analyzed from the calculations in water. Similarly, simulations were performed on phantoms obtained from CT studies of real patients, comparing the dose distribution calculated with EGSnrc and the dose distribution obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All the results showed good agreement with measurements, with all of them within tolerance limits. These results support using the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)

  9. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Science.gov (United States)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
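
    For orientation, the Landau-Teller reference behavior mentioned above describes relaxation of the mean vibrational energy toward equilibrium as dE_vib/dt = (E_eq - E_vib)/tau_vib. A minimal numerical sketch of that law (with illustrative values of E_eq, tau_vib and dt, not species data from the paper) is:

        import numpy as np

        # Landau-Teller relaxation: dE_vib/dt = (E_eq - E_vib) / tau_vib.
        # E_eq, tau_vib and dt are illustrative placeholders, not species data.
        E_eq = 1.0           # equilibrium vibrational energy (arbitrary units)
        tau_vib = 2.0e-6     # vibrational relaxation time [s]
        dt = 1.0e-7          # time step [s]
        steps = 200

        E_vib = 0.0                                   # start far from equilibrium
        history = [E_vib]
        for _ in range(steps):
            E_vib += dt * (E_eq - E_vib) / tau_vib    # explicit Euler update
            history.append(E_vib)

        # Analytic solution for comparison: E(t) = E_eq * (1 - exp(-t / tau_vib)).
        t = dt * np.arange(steps + 1)
        exact = E_eq * (1.0 - np.exp(-t / tau_vib))
        print(f"max |numerical - analytic| = {np.max(np.abs(np.array(history) - exact)):.3e}")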

  10. Nanostructure evolution of neutron-irradiated reactor pressure vessel steels: Revised Object kinetic Monte Carlo model

    Energy Technology Data Exchange (ETDEWEB)

    Chiapetto, M., E-mail: mchiapet@sckcen.be [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium); Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Messina, L. [DEN-Service de Recherches de Métallurgie Physique, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France); KTH Royal Institute of Technology, Roslagstullsbacken 21, SE-114 21 Stockholm (Sweden); Becquart, C.S. [Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Olsson, P. [KTH Royal Institute of Technology, Roslagstullsbacken 21, SE-114 21 Stockholm (Sweden); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium)

    2017-02-15

    This work presents a revised set of parameters to be used in an Object kinetic Monte Carlo model to simulate the microstructure evolution under neutron irradiation of reactor pressure vessel steels at the operational temperature of light water reactors (∼300 °C). Within a “grey-alloy” approach, a more physical description than in a previous work is used to translate the effect of Mn and Ni solute atoms on the defect cluster diffusivity reduction. The slowing down of self-interstitial clusters, due to the interaction between solutes and crowdions in Fe is now parameterized using binding energies from the latest DFT calculations and the solute concentration in the matrix from atom-probe experiments. The mobility of vacancy clusters in the presence of Mn and Ni solute atoms was also modified on the basis of recent DFT results, thereby removing some previous approximations. The same set of parameters was seen to predict the correct microstructure evolution for two different types of alloys, under very different irradiation conditions: an Fe-C-MnNi model alloy, neutron irradiated at a relatively high flux, and a high-Mn, high-Ni RPV steel from the Swedish Ringhals reactor surveillance program. In both cases, the predicted self-interstitial loop density matches the experimental solute cluster density, further corroborating the surmise that the MnNi-rich nanofeatures form by solute enrichment of immobilized small interstitial loops, which are invisible to the electron microscope.

  11. Monte Carlo model to describe depth selective fluorescence spectra of epithelial tissue

    Science.gov (United States)

    Pavlova, Ina; Weber, Crystal Redden; Schwarz, Richard A.; Williams, Michelle; El-Naggar, Adel; Gillenwater, Ann; Richards-Kortum, Rebecca

    2008-01-01

    We present a Monte Carlo model to predict fluorescence spectra of the oral mucosa obtained with a depth-selective fiber optic probe as a function of tissue optical properties. A model sensitivity analysis determines how variations in optical parameters associated with neoplastic development influence the intensity and shape of spectra, and elucidates the biological basis for differences in spectra from normal and premalignant oral sites. Predictions indicate that spectra of oral mucosa collected with a depth-selective probe are affected by variations in epithelial optical properties, and to a lesser extent, by changes in superficial stromal parameters, but not by changes in the optical properties of deeper stroma. The depth selective probe offers enhanced detection of epithelial fluorescence, with 90% of the detected signal originating from the epithelium and superficial stroma. Predicted depth-selective spectra are in good agreement with measured average spectra from normal and dysplastic oral sites. Changes in parameters associated with dysplastic progression lead to a decreased fluorescence intensity and a shift of the spectra to longer emission wavelengths. Decreased fluorescence is due to a drop in detected stromal photons, whereas the shift of spectral shape is attributed to an increased fraction of detected photons arising in the epithelium. PMID:19123659
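
    The kernel of such tissue Monte Carlo codes is the repeated sampling of photon step lengths and absorption/scattering events from the layer optical properties. A generic single-layer sketch (isotropic scattering and illustrative coefficients, not the layered oral-mucosa optical properties or the probe geometry of the study) is:

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative optical properties of a single homogeneous tissue layer.
        mu_a = 0.3     # absorption coefficient [1/cm]
        mu_s = 10.0    # scattering coefficient [1/cm]
        mu_t = mu_a + mu_s

        def max_depth_of_one_photon():
            """Track one photon with isotropic scattering; return its deepest point."""
            z, cos_theta, deepest = 0.0, 1.0, 0.0
            while True:
                step = -np.log(1.0 - rng.random()) / mu_t   # free path ~ Exp(mu_t)
                z += cos_theta * step
                if z < 0.0:                                 # escaped through the surface
                    return deepest
                deepest = max(deepest, z)
                if rng.random() < mu_a / mu_t:              # absorbed at this collision
                    return deepest
                cos_theta = 2.0 * rng.random() - 1.0        # isotropic new direction

        depths = [max_depth_of_one_photon() for _ in range(20000)]
        print(f"mean maximum penetration depth: {np.mean(depths):.3f} cm")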

  12. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    Science.gov (United States)

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
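
    Underneath any such lattice scheme sits the standard rejection-free (residence-time) KMC step: choose the next transition with probability proportional to its rate and advance the clock by an exponentially distributed increment. A generic sketch of that step, independent of the transition-classification scheme proposed in the paper, is:

        import numpy as np

        rng = np.random.default_rng(2)

        def kmc_step(rates):
            """One rejection-free KMC step: (chosen event index, time increment)."""
            rates = np.asarray(rates, dtype=float)
            total = rates.sum()
            event = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
            dt = -np.log(1.0 - rng.random()) / total        # exponential waiting time
            return event, dt

        # Three thermally activated processes with illustrative rates [1/s].
        rates = [1.0e3, 5.0e2, 1.0e1]
        t, counts = 0.0, np.zeros(len(rates), dtype=int)
        for _ in range(10000):
            event, dt = kmc_step(rates)
            counts[event] += 1
            t += dt

        print("event frequencies:", counts / counts.sum())  # ~ rates / sum(rates)
        print(f"simulated time: {t:.4f} s")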

  13. Neutron and gamma sensitivities of self-powered detectors: Monte Carlo modelling

    Energy Technology Data Exchange (ETDEWEB)

    Vermeeren, Ludo [SCK-CEN, Nuclear Research Centre, Boeretang 200, B-2400 Mol, (Belgium)

    2015-07-01

    This paper deals with the development of a detailed Monte Carlo approach for the calculation of the absolute neutron sensitivity of SPNDs, which makes use of the MCNP code. We will explain the calculation approach, including the activation and beta emission steps, the gamma-electron interactions, the charge deposition in the various detector parts and the effect of the space charge field in the insulator. The model can also be applied for the calculation of the gamma sensitivity of self-powered detectors and for the radiation-induced currents in signal cables. The model yields detailed information on the various contributions to the sensor currents, with distinct response times. Results for the neutron sensitivity of various types of SPNDs are in excellent agreement with experimental data obtained at the BR2 research reactor. For typical neutron to gamma flux ratios, the calculated gamma induced SPND currents are significantly lower than the neutron induced currents. The gamma sensitivity depends very strongly upon the immediate detector surroundings and on the gamma spectrum. Our calculation method opens the way to a reliable on-line determination of the absolute in-pile thermal neutron flux. (authors)

  14. Modeling charged defects, dopant diffusion and activation mechanisms for TCAD simulations using kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Bragado, Ignacio [Synopsys Inc, 700 East Middlefield Road, Mountain View, 94043 CA (United States)]. E-mail: Ignacio.martin-bragado@synopsys.com; Tian, S. [Synopsys Inc, 700 East Middlefield Road, Mountain View, 94043 CA (United States); Johnson, M. [Synopsys Inc, 700 East Middlefield Road, Mountain View, 94043 CA (United States); Castrillo, P. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain); Pinacho, R. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain); Rubio, J. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain); Jaraiz, M. [Department of Electronics, University of Valladolid Campus Miguel Delibes, Camino del Cementerio S/N, 47011 Valladolid (Spain)

    2006-12-15

    This work will show how the kinetic Monte Carlo (KMC) technique is able to successfully model the defects and diffusion of dopants in Si-based materials for advanced microelectronic devices, especially for non-equilibrium conditions. Charge states of point defects and paired dopants are also simulated, including the dependency of the diffusivities on the Fermi level and charged particle drift coming from the electric field. The KMC method is used to simulate the diffusion of the point defects, and formation and dissolution of extended defects, whereas a quasi-atomistic approach is used to take into account the carrier densities. The simulated mechanisms include the kick-out diffusion mechanism, extended defect formation and the activation/deactivation of dopants through the formation of impurity clusters. Damage accumulation and amorphization are also taken into account. Solid phase epitaxy regrowth is included, and also the dopants redistribution during recrystallization of the amorphized regions. Regarding the charged defects, the model considers the dependencies of charge reactions, electric bias, pairing and break-up reactions according to the local Fermi level. Some aspects of the basic physical mechanisms have also been taken into consideration: how to smooth out the atomistic dopant point charge distribution, avoiding very abrupt and unphysical charge profiles and how to implement the drift of charged particles into the existing electric field. The work will also discuss the efficiency, accuracy and relevance of the method, together with its implementation in a technology computer aided design process simulator.

  15. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included along with a discussion on ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.

  16. Monte Carlo modeling of the absolute solid angle acceptance in the LAMPF P10 spectrometer

    International Nuclear Information System (INIS)

    Redmon, J.A.; Isenhower, L.D.; Sadler, M.E.

    1992-01-01

    The P10 Spectrometer is used to measure the individual energies of the two gamma rays produced by π⁰ decay. With these energies and the opening angle between the two gamma rays, the energy of the π⁰ can be calculated. An absolute determination of the solid angle is needed for measurements of the differential cross section for π⁻p → η⁰n. Using Monte Carlo modeling, the laboratory setup is scrutinized by changing various parameters in the simulation runs. This model is compared with actual data obtained by experiment. Using this comparison, it is possible to determine any deviations in the laboratory setup. With the deviations known, an absolute solid angle acceptance can be computed. Parameters varied are the following: (1) Beam Energy, (2) Beam Position in the X and Y direction relative to the target, and (3) Target Position in the Z direction relative to the pivot between the two arms of the spectrometer. These variations are made for beam energies of 10, 20, and 40 MeV at laboratory scattering angles of 0 and 180 degrees
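
    The basic Monte Carlo estimate of an absolute solid-angle acceptance is to sample photon directions isotropically from the target and count the fraction that passes through the detector aperture. A geometry-stripped sketch (a single circular aperture at a hypothetical distance, not the actual P10 arm geometry) is:

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical aperture: circle of radius 5 cm on the +z axis, 100 cm away.
        distance_cm, radius_cm = 100.0, 5.0

        n = 1_000_000
        # Isotropic directions: cos(theta) uniform in [-1, 1]; azimuth drops out by symmetry.
        cos_t = rng.uniform(-1.0, 1.0, n)
        sin_t = np.sqrt(1.0 - cos_t**2)

        # Keep forward-going rays and project them onto the aperture plane z = distance.
        forward = cos_t > 0
        r_plane = distance_cm * sin_t[forward] / cos_t[forward]   # radial hit position
        hits = np.count_nonzero(r_plane <= radius_cm)

        solid_angle = 4.0 * np.pi * hits / n                      # steradians
        analytic = 2.0 * np.pi * (1.0 - distance_cm / np.hypot(distance_cm, radius_cm))
        print(f"MC estimate : {solid_angle:.5f} sr")
        print(f"analytic cap: {analytic:.5f} sr")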

  17. Recent developments of the projected shell model based on many-body techniques

    Directory of Open Access Journals (Sweden)

    Sun Yang

    2015-01-01

    Recent developments of the projected shell model (PSM) are summarized. Firstly, by using the Pfaffian algorithm, the multi-quasiparticle configuration space is expanded to include 6-quasiparticle states. The yrast band of 166Hf at very high spins is studied as an example, where the observed third back-bending in the moment of inertia is well reproduced and explained. Secondly, an angular-momentum projected generator coordinate method is developed based on the PSM. The evolution of the low-lying states, including the second 0+ state, of the soft Gd, Dy, and Er isotopes to the well-deformed ones is calculated and compared with experimental data.

  18. Accounting of inter-electron correlations in the model of mobile electron shells

    International Nuclear Information System (INIS)

    Panov, Yu.D.; Moskvin, A.S.

    2000-01-01

    The basic features of the model of mobile electron shells for a multielectron atom or cluster are studied. A variational technique is proposed to take account of the electron correlations, in which the coordinates of the centres of the single-particle atomic orbitals serve as variational parameters. This makes it possible to interpret a dramatic change of the electron density distribution under an anisotropic external perturbation in terms of a limited initial basis. Specific correlated states that might make a correlation contribution to the orbital current are also studied. The paper presents a generalization of the usual MO-LCAO scheme with a limited set of single-particle functions, enabling additional multipole-multipole interactions in the cluster to be taken into account [ru]

  19. Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses

    International Nuclear Information System (INIS)

    Takahashi, K.; Mathews, G.J.; Bloom, S.D.

    1985-01-01

    Examples of large-basis shell-model calculations of Gamow-Teller β-decay properties of specific interest in the astrophysical s- and r-processes are presented. Numerical results are given for: (1) the GT-matrix elements for the excited-state decays of the unstable s-process nucleus 99Tc; and (2) the GT-strength function for the neutron-rich nucleus 130Cd, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems. 23 refs., 3 figs

  20. Zero-point energies in the two-center shell model. II

    International Nuclear Information System (INIS)

    Reinhard, P.-G.

    1978-01-01

    The zero-point energy (ZPE) contained in the potential-energy surface of a two-center shell model (TCSM) is evaluated. Extending previous work, the author here uses the full TCSM with l·s force, smoothing and asymmetry. The results show a critical dependence on the height of the potential barrier between the centers. The ZPE turns out to be non-negligible along the fission path for 236U, and even more so for lighter systems. It is negligible for surface quadrupole motion and it is just on the fringe of being negligible for motion along the asymmetry coordinate. (Auth.)

  1. Magnetization of the Ising model on the Sierpinski pastry-shell

    Science.gov (United States)

    Chame, Anna; Branco, N. S.

    1992-02-01

    Using a real-space renormalization group approach, we calculate the approximate magnetization in the Ising model on the Sierpinski pastry-shell. We consider, as an approximation, only two regions of the fractal: the internal surfaces, or walls (sites on the border of eliminated areas), with coupling constants J_S, and the bulk (all other sites), with coupling constants J_V. We obtain the mean magnetization of the two regions as a function of temperature, for different values of α = J_S/J_V and different geometric parameters b and l. Curves present a step-like behavior for some values of b and l, as well as different universality classes for the bulk transition.

  2. Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics.

    Science.gov (United States)

    Yang, Qian; Sing-Long, Carlos A; Reed, Evan J

    2017-08-01

    We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.

  3. Modeling of continuous free-radical butadiene-styrene copolymerization process by the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    T. A. Mikhailova

    2016-01-01

    This paper presents an algorithm, based on the Monte Carlo method, for modeling the continuous low-temperature free-radical emulsion copolymerization of butadiene and styrene. This process is the cornerstone of the industrial production of butadiene-styrene synthetic rubber, the most widespread large-capacity general-purpose rubber. The algorithm is based on simulating the growth of each macromolecule of the forming copolymer and tracking the processes it undergoes. Modeling is carried out taking into account the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds in a battery of series-connected polymerization reactors, each treated as a continuous stirred-tank reactor. Since the process is continuous, the continuous addition of fresh portions of the reaction mixture to the first reactor of the battery is also considered. The constructed model allows the molecular-weight and viscosity characteristics of the copolymerization product to be studied, the mass content of butadiene and styrene in the copolymer to be predicted, and the molecular-weight distribution of the product to be calculated at any moment of the process. Computational experiments were used to analyze how the mode of introducing the regulator during the process affects the characteristics of the resulting butadiene-styrene copolymer. Because two types of monomer participate in the process, the model also allows the compositional heterogeneity of the product to be studied, i.e., the calculation of the composition distribution and of the distribution of macromolecules by size and structure. On the basis of the proposed algorithm, a software tool was created that tracks changes in the characteristics of the resulting product over time.
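
    The chain-growth core of such an algorithm can be sketched with the standard terminal (Mayo-Lewis) copolymerization model, in which the probability of adding each monomer depends only on the current chain-end unit and the feed composition. The reactivity ratios and feed fraction below are illustrative placeholders, not fitted butadiene/styrene values, and the sketch omits the emulsion kinetics and reactor residence-time distribution treated in the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Terminal (Mayo-Lewis) model: illustrative reactivity ratios and feed.
        r_A, r_B = 1.4, 0.6        # r_A = k_AA / k_AB,  r_B = k_BB / k_BA
        f_A = 0.7                  # mole fraction of monomer A in the feed
        f_B = 1.0 - f_A

        def grow_chain(length):
            """Grow one copolymer chain and return the fraction of A units."""
            end = 'A' if rng.random() < f_A else 'B'      # first unit from the feed
            n_A = 1 if end == 'A' else 0
            for _ in range(length - 1):
                if end == 'A':
                    p_add_A = r_A * f_A / (r_A * f_A + f_B)
                else:
                    p_add_A = f_A / (f_A + r_B * f_B)
                end = 'A' if rng.random() < p_add_A else 'B'
                n_A += (end == 'A')
            return n_A / length

        fractions = [grow_chain(1000) for _ in range(500)]
        print(f"mean A content: {np.mean(fractions):.3f} +/- {np.std(fractions):.3f}")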

  4. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
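
    The Monte Carlo construction of such a noise level can be sketched as follows: propagate assumed random and systematic uncertainties to obtain samples of the comparison error, estimate their covariance matrix, and test whether an observed comparison error lies inside the 95% constant-probability contour (a chi-square criterion with 2 degrees of freedom for a bivariate case). All uncertainty magnitudes below are hypothetical, and the sketch is far simpler than the full procedure of the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        # Two quantities of interest compared between model and experiment.
        n_mc = 50_000
        sys_sigma = np.array([0.8, 0.5])       # hypothetical systematic std. dev.
        rand_sigma = np.array([0.3, 0.4])      # hypothetical random std. dev.

        # Monte Carlo samples of the comparison-error scatter induced by the
        # uncertainties alone (zero true model-experiment discrepancy assumed).
        samples = (rng.normal(0.0, 1.0, (n_mc, 2)) * sys_sigma
                   + rng.normal(0.0, 1.0, (n_mc, 2)) * rand_sigma)

        cov = np.cov(samples, rowvar=False)    # estimated "noise level" covariance
        cov_inv = np.linalg.inv(cov)

        # Observed comparison error from a validation exercise (illustrative).
        E_obs = np.array([1.1, -0.6])
        d2 = E_obs @ cov_inv @ E_obs           # squared Mahalanobis distance

        chi2_95_2dof = 5.991                   # 95% quantile of chi-square, 2 d.o.f.
        print(f"covariance matrix:\n{np.round(cov, 3)}")
        print(f"Mahalanobis distance^2 = {d2:.2f} "
              f"({'inside' if d2 <= chi2_95_2dof else 'outside'} the 95% contour)")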

  5. A shell-neutral modeling approach yields sustainable oyster harvest estimates: a retrospective analysis of the Louisiana state primary seed grounds

    Science.gov (United States)

    Soniat, Thomas M.; Klinck, John M.; Powell, Eric N.; Cooper, Nathan; Abdelguerfi, Mahdi; Hofmann, Eileen E.; Dahal, Janak; Tu, Shengru; Finigan, John; Eberline, Benjamin S.; La Peyre, Jerome F.; LaPeyre, Megan K.; Qaddoura, Fareed

    2012-01-01

    A numerical model is presented that defines a sustainability criterion as no net loss of shell, and calculates a sustainable harvest of seed (<75 mm) and sack (market) oysters (≥75 mm). Stock assessments of the Primary State Seed Grounds conducted east of the Mississippi from 2009 to 2011 show a general trend toward decreasing abundance of sack and seed oysters. Retrospective simulations provide estimates of annual sustainable harvests. Comparisons of simulated sustainable harvests with actual harvests show a trend toward unsustainable harvests toward the end of the time series. Stock assessments combined with shell-neutral models can be used to estimate sustainable harvest and manage cultch through shell planting when actual harvest exceeds sustainable harvest. For exclusive restoration efforts (no fishing allowed), the model provides a metric for restoration success, namely shell accretion. Oyster fisheries that remove shell versus reef restorations that promote shell accretion, although divergent in their goals, are convergent in their management; both require vigilant attention to shell budgets.

  6. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high-temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
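
    The difference between a staircase-type scheme and a midpoint (predictor-corrector-like) scheme can be illustrated on a single nuclide depleting under a prescribed time-varying flux, dN/dt = -sigma*phi(t)*N: the staircase scheme freezes the reaction rate at its beginning-of-step value, while the midpoint scheme uses the mid-step rate (exact here, since the assumed flux is linear in time). The flux history and cross-section below are purely illustrative stand-ins for the flux-depletion feedback of a real MCB5 calculation, not a reproduction of any of the specific schemes compared in the paper.

        import numpy as np

        # Single-nuclide depletion dN/dt = -sigma * phi(t) * N under a prescribed,
        # linearly rising flux (illustrative stand-in for flux change during burnup).
        sigma = 1.0e-21                     # cross-section [cm^2] (strong absorber, illustrative)
        phi0, phi_slope = 1.0e14, 2.0e7     # flux [n/cm^2/s] and its rate of change [n/cm^2/s^2]
        phi = lambda t: phi0 + phi_slope * t

        T = 30 * 86400.0                    # 30-day burnup interval [s]
        n_steps = 3
        dt = T / n_steps

        # Analytic reference: N/N0 = exp(-sigma * integral of phi over [0, T]).
        N_exact = np.exp(-sigma * (phi0 * T + 0.5 * phi_slope * T**2))

        N_stair, N_mid = 1.0, 1.0
        for i in range(n_steps):
            t0 = i * dt
            N_stair *= np.exp(-sigma * phi(t0) * dt)             # beginning-of-step rate
            N_mid *= np.exp(-sigma * phi(t0 + 0.5 * dt) * dt)    # mid-step rate

        print(f"exact     N/N0 = {N_exact:.6f}")
        print(f"staircase N/N0 = {N_stair:.6f}  (error {abs(N_stair - N_exact):.2e})")
        print(f"midpoint  N/N0 = {N_mid:.6f}  (error {abs(N_mid - N_exact):.2e})")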

  7. Light Nuclei in the Framework of the Symplectic No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Draayer, Jerry P.; Dytrych, Tomas; Sviratcheva, Kristina D.; Bahri, Chairul; /Louisiana State U.; Vary, James P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-04-02

    A symplectic no-core shell model (Sp-NCSM) is constructed with the goal of extending the ab-initio NCSM to include strongly deformed higher-oscillator-shell configurations and to reach heavier nuclei that cannot be studied currently because the spaces encountered are too large to handle, even with the best of modern-day computers. This goal is achieved by integrating two powerful concepts: the ab-initio NCSM with that of the Sp(3,R) ⊃ SU(3) group-theoretical approach. The NCSM uses modern realistic nuclear interactions in model spaces that consist of many-body configurations up to a given number of ħΩ excitations, together with modern high-performance parallel computing techniques. The symplectic theory extends this picture by recognizing that when deformed configurations dominate, which they often do, the model space can be better selected so less relevant low-lying ħΩ configurations yield to more relevant high-lying ħΩ configurations, ones that respect a near symplectic symmetry found in the Hamiltonian. Results from an application of the Sp-NCSM to light nuclei are compared with those for the NCSM and with experiment.

  8. A layered shell containing patches of piezoelectric fibers and interdigitated electrodes: Finite element modeling and experimental validation

    DEFF Research Database (Denmark)

    Nielsen, Bo Bjerregaard; Nielsen, Martin S.; Santos, Ilmar

    2017-01-01

    The work gives a theoretical and experimental contribution to the problem of smart materials connected to double curved flexible shells. In the theoretical part the finite element modeling of a double curved flexible shell with a piezoelectric fiber patch with interdigitated electrodes (IDEs) is presented. The developed element is based on a purely mechanical eight-node isoparametric layered element for a double curved shell, utilizing first-order shear deformation theory. The electromechanical coupling of piezoelectric material is added to all elements, but can also be excluded by setting the piezoelectric material properties to zero. The electrical field applied via the IDEs is aligned with the piezoelectric fibers, and hence the direct d33 piezoelectric constant is utilized for the electromechanical coupling. The dynamic performance of a shell with a microfiber composite (MFC) patch

  9. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    Science.gov (United States)

    Zhang, D.; Liao, Q.

    2016-12-01

    Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the different importance of the parameters, under the condition of high random dimensionality in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of
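
    The essential point, that the expensive forward model inside the Metropolis acceptance ratio is replaced by a cheap approximation, can be shown with a stripped-down sketch. The one-parameter toy forward model and the simple polynomial surrogate below are stand-ins; they are not the flow-and-transport solver or the sparse-grid collocation surrogate of the study.

        import numpy as np

        rng = np.random.default_rng(6)

        # "True" forward model (pretend this is expensive) and synthetic data.
        def forward(m):
            return np.array([np.exp(-0.5 * m), np.sin(m)])

        m_true = 1.3
        sigma_obs = 0.05
        data = forward(m_true) + rng.normal(0.0, sigma_obs, 2)

        # Cheap surrogate: polynomial fits of each output over the prior range [0, 3].
        m_grid = np.linspace(0.0, 3.0, 30)
        coeff = [np.polyfit(m_grid, [forward(m)[k] for m in m_grid], deg=4) for k in range(2)]
        surrogate = lambda m: np.array([np.polyval(c, m) for c in coeff])

        def log_post(m):
            if not (0.0 <= m <= 3.0):                    # uniform prior on [0, 3]
                return -np.inf
            r = data - surrogate(m)                      # surrogate replaces forward()
            return -0.5 * np.sum(r**2) / sigma_obs**2

        # Random-walk Metropolis using the surrogate likelihood.
        chain, m = [], 1.0
        lp = log_post(m)
        for _ in range(20000):
            prop = m + rng.normal(0.0, 0.2)
            lp_prop = log_post(prop)
            if np.log(1.0 - rng.random()) < lp_prop - lp:
                m, lp = prop, lp_prop
            chain.append(m)

        burn = np.array(chain[5000:])
        print(f"posterior mean {burn.mean():.3f} (true {m_true}), std {burn.std():.3f}")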

  10. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.

  11. Modeling interfacial slag layer phenomena in the shell/mold gap in continuous casting of steel

    Science.gov (United States)

    Meng, Ya

    A new lubrication and friction model of slag in the interfacial gap was combined into an existing 1-D heat transfer model, CON1D. Analytical transient models of liquid slag flow and solid slag stress have been coupled with a finite-difference model of heat transfer in the mold, gap and steel shell to predict transient shear stress, friction, slip and fracture of the slag layers. Experimental work was conducted to measure the properties of slag powder, including the friction coefficient at elevated temperatures and the viscosity near the solidification temperature. Tests over a wide range of cooling rates were conducted to construct CCT curves and to predict critical cooling rates of two slags with different crystallization tendencies. Slag composition and microstructure were analyzed by XRD and SEM. The CON1D model predicts shell thickness, temperature distributions in the mold and shell, slag layer thickness, heat flux profiles down the mold, cooling water temperature rise, ideal taper of the mold walls, and other related phenomena. Plant measurements from operating casters were collected to calibrate the model. The model was then applied to study the effect of casting speed and powder viscosity properties on slag layer behavior. The study finds that liquid slag lubrication would produce negligible stresses. A lower mold slag consumption rate leads to higher solid friction and results in solid slag layer fracture and movement if consumption falls below a critical value. Mold friction and fracture are governed by lubrication consumption rate. The high measured friction force in operating casters could be due to three sources: an intermittent moving solid slag layer, excessive mold taper or mold misalignment. The model was also applied to interpret the crystallization behavior of slag. A mechanism for the formation of this crystalline layer was proposed that combined the effects of a shift in the viscosity curve, a decrease in the liquid slag conductivity due to partial crystallization

  12. Monte Carlo modelling of the effect of an absorber on an electron beam

    International Nuclear Information System (INIS)

    Li, L.; Stewart, A.T.; Round, W.H.

    2004-01-01

    The electron beam from a linear accelerator is essentially spatially uniform in energy and intensity. Hence it may not be suitable for treating a patient where it is desirable for the treatment depth to vary across the field. Using an absorber to shield the part of the beam where a shallower treatment depth is required may provide a solution. But the absorber will cause energy degradation, spectrum spreading and scattering of the incident beam. This situation was investigated using Monte Carlo simulation to determine the changes in the incident beam under, and near the edge of, a sheet absorber made of a low atomic number material. The EGSnrc system, along with user code written in Mortran, was used to perform the Monte Carlo simulations. A situation where a thin absorber was placed in a pure 15 MeV 10 cm wide electron beam from a point source was modelled. The absorber was placed to cover half of the beam. This was repeated for different thicknesses of aluminium. It was further repeated for absorbers where the edge at the middle of the beam is chamfered. The dose distributions were plotted, and compared to measured distributions from a clinical accelerator. In addition, the effects of energy degradation, spectrum spreading and scattering were also investigated. This was done by analysing the energies and angles of the simulated electrons after passing through the absorber. Knowledge of energy loss versus scattering angle for different thicknesses of different materials allows for a better choice of absorber. The simulations predicted that at the edge of the shadow of the absorber a hot spot appeared outside the shadow and a cold spot inside the shadow. This was confirmed by measurement. Chamfering the edge of the absorber was seen to reduce this effect, with the significance of the effect being dependent on the absorber thickness and the shape of the chamfer. The choice of thickness of the absorber should take into account the effects of energy spectrum

  13. [Verification of the VEF photon beam model for dose calculations by the Voxel-Monte-Carlo-Algorithm].

    Science.gov (United States)

    Kriesen, Stephan; Fippel, Matthias

    2005-01-01

    The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tübingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning.

  14. Verification of the VEF photon beam model for dose calculations by the voxel-Monte-Carlo-algorithm

    International Nuclear Information System (INIS)

    Kriesen, S.; Fippel, M.

    2005-01-01

    The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tuebingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning. (orig.)

  15. Monte Carlo modeling for realizing optimized management of failed fuel replacement

    International Nuclear Information System (INIS)

    Morishita, Kazunori; Yamamoto, Yasunori; Nakasuji, Toshiki

    2014-01-01

    Fuel cladding is one of the key components in a fission reactor, confining radioactive materials inside the fuel tube. During reactor operation, however, the cladding is sometimes breached and radioactive materials leak from the ceramic fuel pellet into the coolant water through the breach. The primary coolant water is therefore monitored so that any leak is quickly detected: the coolant water is periodically sampled and the concentration of, for example, the radioactive iodine 131 (I-131) is measured. Depending on the measured concentration, the faulty fuel assembly with the leaking rod is removed from the reactor and replaced by a new one, either immediately or at the next refueling. In the present study, an effort has been made to develop a methodology to optimize the management of failed-fuel replacement due to cladding failures, using the I-131 concentration measured in the sampled coolant water. A model equation is proposed to describe the time evolution of the I-131 concentration due to fuel leaks, and it is then solved using the Monte Carlo method as a function of the sampling rate. Our results indicate that, in order to achieve rationalized management of failed fuels, higher resolution to detect small amounts of I-131 is not necessarily required, but more frequent sampling is favorable. (author)
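
    A toy version of the underlying trade-off can be explored by Monte Carlo: leaks start at random times relative to the sampling schedule, the coolant I-131 activity follows a simple build-up equation dC/dt = R - lambda_eff*C, and the detection delay is evaluated for different sampling intervals. The leak source term, cleanup constant and detection threshold below are hypothetical, and this is not the model equation of the paper; only the I-131 decay constant (half-life about 8.02 d) is a physical input.

        import numpy as np

        rng = np.random.default_rng(7)

        lam_decay = np.log(2.0) / 8.02       # I-131 decay constant [1/day]
        lam_clean = 0.5                      # hypothetical coolant cleanup rate [1/day]
        lam = lam_decay + lam_clean
        R = 10.0                             # hypothetical leak source term [Bq/g per day]
        threshold = 1.0                      # hypothetical detection level [Bq/g]

        def detection_delay(sampling_interval_days, horizon=60.0):
            """Days from leak onset until the first sample exceeding the threshold."""
            leak_start = rng.uniform(0.0, sampling_interval_days)   # onset vs. schedule
            t = leak_start
            while t < horizon:
                t += sampling_interval_days                         # next scheduled sample
                c = (R / lam) * (1.0 - np.exp(-lam * (t - leak_start)))  # analytic C(t)
                if c >= threshold:
                    return t - leak_start
            return np.nan                                           # not detected in horizon

        for interval in (0.5, 1.0, 3.0, 7.0):
            delays = [detection_delay(interval) for _ in range(5000)]
            print(f"sampling every {interval:>3} d -> mean detection delay "
                  f"{np.nanmean(delays):5.2f} d")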

  16. Modelling Protein-induced Membrane Deformation using Monte Carlo and Langevin Dynamics Simulations

    Science.gov (United States)

    Radhakrishnan, R.; Agrawal, N.; Ramakrishnan, N.; Kumar, P. B. Sunil; Liu, J.

    2010-11-01

    In eukaryotic cells, internalization of extracellular cargo via the cellular process of endocytosis is orchestrated by a variety of proteins, many of which are implicated in membrane deformation/bending. We model the energetics of membrane deformations using the Helfrich Hamiltonian in two different formalisms: (i) the Cartesian or Monge gauge, using Langevin dynamics; and (ii) a curvilinear coordinate system, using Monte Carlo (MC). The Monge gauge approach, which has been extensively studied, is limited to small deformations of the membrane and cannot describe extreme deformations. The curvilinear coordinate approach can handle large deformation limits as well as finite-temperature membrane fluctuations; here we employ an unstructured triangular mesh to compute the local curvature tensor, and we evolve the membrane surface using an MC method. In our application, we compare the two approaches (i and ii above) to study how the spatial assembly of curvature-inducing proteins leads to vesicle budding from a planar membrane. We also quantify how the curvature field of the membrane impacts the spatial segregation of proteins.
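
    For reference, the Helfrich bending energy referred to above is commonly written (omitting the Gaussian-curvature term, which is constant for fixed topology) as

        E = \int_A [ (\kappa/2) (2H - C_0)^2 + \sigma ] \, dA ,

    where H is the local mean curvature, C_0 the spontaneous curvature induced, for example, by bound proteins, \kappa the bending rigidity, and \sigma the surface tension. In the Monge gauge the membrane is parametrized as a height field z = h(x, y), and to lowest order 2H \approx \nabla^2 h, which is what restricts that formalism to small deformations.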

  17. Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations

    Science.gov (United States)

    Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-09-01

    The relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. An incident carbon-ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE at 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread-Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.
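
    As background to the stated use of the MK model, the RBE at 10% survival follows from the linear-quadratic survival curve S(D) = \exp(-\alpha D - \beta D^2): the 10%-survival dose satisfies \alpha D_{10} + \beta D_{10}^2 = \ln 10, so that

        D_{10} = ( \sqrt{\alpha^2 + 4\beta \ln 10} - \alpha ) / (2\beta) ,    RBE_{10} = D_{10,\mathrm{photon}} / D_{10,\mathrm{ion}} ,

    and the RBE-weighted (photon-equivalent) dose along the SOBP is D_{RBE} = RBE \times D_{abs}. The MK model supplies the ion- and depth-dependent \alpha from microdosimetric quantities; that step is specific to the paper and is not reproduced here.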

  18. Optimization of dual-wavelength intravascular photoacoustic imaging of atherosclerotic plaques using Monte Carlo optical modeling

    Science.gov (United States)

    Dana, Nicholas; Sowers, Timothy; Karpiouk, Andrei; Vanderlaan, Donald; Emelianov, Stanislav

    2017-10-01

    Coronary heart disease (the presence of coronary atherosclerotic plaques) is a significant health problem in the industrialized world. A clinical method to accurately visualize and characterize atherosclerotic plaques is needed. Intravascular photoacoustic (IVPA) imaging is being developed to fill this role, but questions remain regarding optimal imaging wavelengths. We utilized a Monte Carlo optical model to simulate IVPA excitation in coronary tissues, identifying optimal wavelengths for plaque characterization. Near-infrared wavelengths (≤1800 nm) were simulated, and single- and dual-wavelength data were analyzed for accuracy of plaque characterization. Results indicate light penetration is best in the range of 1050 to 1370 nm, where 5% residual fluence can be achieved at clinically relevant depths of ≥2 mm in arteries. Across the arterial wall, fluence may vary by over 10-fold, confounding plaque characterization. For single-wavelength results, plaque segmentation accuracy peaked at 1210 and 1720 nm, though correlation was poor; pairing a secondary wavelength with the primary wavelength markedly improved correlation (≈1.0). Results suggest that, without flushing the luminal blood, a primary and secondary wavelength near 1210 and 1350 nm, respectively, may offer the best implementation of dual-wavelength IVPA imaging. These findings could guide the development of a cost-effective clinical system by highlighting optimal wavelengths and improving plaque characterization.

  19. Monte Carlo modeling the phase diagram of magnets with the Dzyaloshinskii-Moriya interaction

    Science.gov (United States)

    Belemuk, A. M.; Stishov, S. M.

    2017-11-01

    We use classical Monte Carlo calculations to model the high-pressure behavior of the phase transition in the helical magnets. We vary values of the exchange interaction constant J and the Dzyaloshinskii-Moriya interaction constant D, which is equivalent to changing spin-spin distances, as occurs in real systems under pressure. The system under study is self-similar at D/J = constant, and its properties are defined by the single variable J/T, where T is temperature. The existence of the first-order phase transition critically depends on the ratio D/J. A variation of J strongly affects the phase transition temperature and width of the fluctuation region (the "hump") as follows from the system self-similarity. The high-pressure behavior of the spin system depends on the evolution of the interaction constants J and D on compression. Our calculations are relevant to the high-pressure phase diagrams of helical magnets MnSi and Cu2OSeO3.
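
    The classical spin Hamiltonian typically used in such Monte Carlo studies of cubic chiral magnets (the abstract does not spell out the lattice form, so this is the generic expression rather than the authors' exact model) is

        H = -J \sum_{<ij>} \mathbf{S}_i \cdot \mathbf{S}_j - \sum_{<ij>} \mathbf{D}_{ij} \cdot ( \mathbf{S}_i \times \mathbf{S}_j ) ,    \mathbf{D}_{ij} = D \, \hat{\mathbf{r}}_{ij} ,

    which favours helical order with a pitch controlled by the ratio D/J; this is why the system is self-similar when D/J is held fixed and temperature is measured in units of J.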

  20. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value.
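
    Stripped of the fuzzy/credibility layer, the purely random part of such an analysis reduces to a standard Monte Carlo estimate of the probability that a sliding limit state is violated. A minimal sketch using a shear-friction-type safety factor K = (f*W + c*A)/P, with entirely hypothetical distributions and loads (not the dam section analyzed in the article), is:

        import numpy as np

        rng = np.random.default_rng(8)
        n = 200_000

        # Hypothetical random variables of the sliding limit state.
        f = rng.normal(0.65, 0.08, n)             # friction coefficient on the base plane
        c = rng.lognormal(np.log(0.08), 0.5, n)   # cohesion [MPa]
        A = 900.0                                 # sliding-plane area [m^2] (deterministic)
        W = rng.normal(180.0, 10.0, n)            # effective vertical load [MN]
        P = rng.normal(120.0, 20.0, n)            # horizontal (sliding) load [MN]

        # Safety factor K = (f*W + c*A) / P ; failure when K < 1.
        K = (f * W + c * A) / P
        p_failure = np.mean(K < 1.0)

        print(f"mean safety factor: {K.mean():.2f}")
        print(f"Monte Carlo failure probability: {p_failure:.2e} "
              f"({np.count_nonzero(K < 1.0)} of {n} samples)")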

  1. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-05

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software that is thoroughly documented, supports complex molecules, is written in modern programming languages (Python, Cython, and C/C++ where performance is needed), and complies with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves, guided by reinforcement machine learning, to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides, at almost no additional computational cost, a unique way to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring the unrestricted three-dimensional space around a group, through and beyond disallowed positions and energy barriers.
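
    For orientation, the classical RMC acceptance step that fullrmc generalizes compares the chi-square misfit between a computed and an experimental pattern before and after a trial move. The sketch below is the textbook algorithm with a toy pattern function, not fullrmc's API or data model.

        import numpy as np

        rng = np.random.default_rng(9)

        def chi_square(calc, expt, sigma):
            """Goodness of fit between a computed and an experimental pattern."""
            return np.sum(((calc - expt) / sigma) ** 2)

        def rmc_step(positions, expt, sigma, compute_pattern, max_move=0.2):
            """One classical RMC move: displace a random atom, accept via chi-square."""
            chi_old = chi_square(compute_pattern(positions), expt, sigma)
            trial = positions.copy()
            i = rng.integers(len(trial))
            trial[i] += rng.uniform(-max_move, max_move, 3)       # random displacement
            chi_new = chi_square(compute_pattern(trial), expt, sigma)
            # Accept improvements always; accept worsening moves with probability
            # exp(-(chi_new - chi_old) / 2), as in standard RMC.
            if chi_new <= chi_old or rng.random() < np.exp(-(chi_new - chi_old) / 2.0):
                return trial, chi_new, True
            return positions, chi_old, False

        # Tiny illustration: 20 atoms in a box, "experimental" data from a toy pattern.
        def toy_pattern(pos):
            # Stand-in for a pair-distribution-function calculation.
            d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            return np.histogram(d[np.triu_indices(len(pos), k=1)], bins=20, range=(0, 10))[0]

        target = toy_pattern(rng.uniform(0, 10, (20, 3)))
        positions = rng.uniform(0, 10, (20, 3))
        sigma = np.full(20, 3.0)

        for _ in range(2000):
            positions, chi, _ = rmc_step(positions, target, sigma, toy_pattern)
        print(f"final chi-square: {chi:.1f}")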

  2. Monte Carlo Modeling of Sodium in Mercury's Exosphere During the First Two MESSENGER Flybys

    Science.gov (United States)

    Burger, Matthew H.; Killen, Rosemary M.; Vervack, Ronald J., Jr.; Bradley, E. Todd; McClintock, William E.; Sarantos, Menelaos; Benna, Mehdi; Mouawad, Nelly

    2010-01-01

    We present a Monte Carlo model of the distribution of neutral sodium in Mercury's exosphere and tail using data from the Mercury Atmospheric and Surface Composition Spectrometer (MASCS) on the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft during the first two flybys of the planet in January and September 2008. We show that the dominant source mechanism for ejecting sodium from the surface is photon-stimulated desorption (PSD) and that the desorption rate is limited by the diffusion rate of sodium from the interior of grains in the regolith to the topmost few monolayers where PSD is effective. In the absence of ion precipitation, we find that the sodium source rate is limited to approximately 10^6-10^7 per square centimeter per second, depending on the sticking efficiency of exospheric sodium that returns to the surface. The diffusion rate must be at least a factor of 5 higher in regions of ion precipitation to explain the MASCS observations during the second MESSENGER flyby. We estimate that impact vaporization of micrometeoroids may provide up to 15% of the total sodium source rate in the regions observed. Although sputtering by precipitating ions was found not to be a significant source of sodium during the MESSENGER flybys, ion precipitation is responsible for increasing the source rate at high latitudes through ion-enhanced diffusion.

  3. Monte Carlo Technique Used to Model the Degradation of Internal Spacecraft Surfaces by Atomic Oxygen

    Science.gov (United States)

    Banks, Bruce A.; Miller, Sharon K.

    2004-01-01

    Atomic oxygen is one of the predominant constituents of Earth's upper atmosphere. It is created by the photodissociation of molecular oxygen (O2) into single O atoms by ultraviolet radiation. It is chemically very reactive because a single O atom readily combines with another O atom or with other atoms or molecules that can form a stable oxide. The effects of atomic oxygen on the external surfaces of spacecraft in low Earth orbit can have dire consequences for spacecraft life, and this is a well-known and much studied problem. Much less is known about the effects of atomic oxygen on the internal surfaces of spacecraft. This degradation can occur when openings in components of the spacecraft exterior exist that allow the entry of atomic oxygen into regions that may not have direct atomic oxygen attack but rather scattered attack. Openings can exist because of spacecraft venting, microwave cavities, and apertures for Earth viewing, Sun sensors, or star trackers. The effects of atomic oxygen erosion of polymers interior to an aperture on a spacecraft were simulated at the NASA Glenn Research Center by using Monte Carlo computational techniques. A two-dimensional model was used to provide quantitative indications of the attenuation of atomic oxygen flux as a function of the distance into a parallel-walled cavity. The model allows the atomic oxygen arrival direction, the Maxwell-Boltzmann temperature, and the ram energy to be varied, along with the interaction parameters: the degree of recombination upon impact with polymer or nonreactive surfaces, the initial reaction probability, the dependence of the reaction probability upon energy and angle of attack, the degree of specularity of scattering from reactive and nonreactive surfaces, and the degree of thermal accommodation upon impact with reactive and nonreactive surfaces. Varying these parameters allows the model to produce atomic oxygen erosion geometries that replicate actual experimental results from space. The degree of
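
    A two-dimensional toy version of this kind of cavity calculation is sketched below in Python: atoms enter the opening of a parallel-walled cavity, are absorbed (react) at each wall impact with a fixed probability, and are otherwise re-emitted diffusely with a cosine distribution; the tally of reaction depths shows how the flux attenuates with distance into the cavity. The geometry, probabilities and purely Lambertian re-emission are illustrative simplifications, not the parameters of the Glenn model.

    import numpy as np

    rng = np.random.default_rng(2)

    W, D = 1.0, 10.0          # cavity width and depth (arbitrary units)
    p_react = 0.1             # reaction (erosion) probability per wall impact
    n_atoms = 20_000
    react_depth = []

    def lambert(normal):
        """Unit vector with a cosine (Lambertian) distribution about a 2D surface normal."""
        theta = np.arcsin(2.0 * rng.random() - 1.0)
        c, s = np.cos(theta), np.sin(theta)
        nx, ny = normal
        return np.array([c * nx - s * ny, s * nx + c * ny])

    for _ in range(n_atoms):
        pos = np.array([rng.uniform(0.0, W), 0.0])   # enter through the opening at depth 0
        d = lambert(np.array([0.0, 1.0]))            # +y points into the cavity
        while True:
            # Flight distance to the side walls (x = 0, W), the opening (y = 0) or the end (y = D).
            tx = np.inf if d[0] == 0.0 else ((W - pos[0]) / d[0] if d[0] > 0 else -pos[0] / d[0])
            ty = np.inf if d[1] == 0.0 else ((D - pos[1]) / d[1] if d[1] > 0 else -pos[1] / d[1])
            hit_end = ty <= tx and d[1] > 0.0
            hit_opening = ty <= tx and d[1] < 0.0
            pos = pos + min(tx, ty) * d
            if hit_opening:                          # escaped back out of the opening
                break
            if rng.random() < p_react:               # atom reacts (erodes) where it lands
                react_depth.append(pos[1])
                break
            # Otherwise re-emit diffusely from whichever surface was hit.
            if hit_end:
                normal = np.array([0.0, -1.0])
            elif d[0] > 0.0:
                normal = np.array([-1.0, 0.0])       # right wall at x = W
            else:
                normal = np.array([1.0, 0.0])        # left wall at x = 0
            d = lambert(normal)

    hist, bin_edges = np.histogram(react_depth, bins=10, range=(0.0, D))
    for lo, hi, n in zip(bin_edges[:-1], bin_edges[1:], hist):
        print(f"depth {lo:4.1f}-{hi:4.1f}: {n:6d} reactions")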

  4. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    Science.gov (United States)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R²), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. Best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
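
    A stripped-down GLUE loop of the kind described above is sketched below in Python: parameter sets are drawn from uniform priors, a toy linear-reservoir soil-moisture model stands in for the coupled CMF-PMF model, Nash-Sutcliffe efficiency serves as the informal likelihood, and sets above a subjective threshold are retained as behavioural and used to weight predictions. The toy model, threshold and parameter ranges are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    t = np.arange(100)
    rain = rng.gamma(0.3, 4.0, t.size)               # synthetic daily forcing

    def toy_model(k, s0):
        """Linear-reservoir soil moisture: S[i] = S[i-1] + rain[i-1] - k * S[i-1] (daily Euler step)."""
        s = np.empty(t.size)
        s[0] = s0
        for i in range(1, t.size):
            s[i] = s[i - 1] + rain[i - 1] - k * s[i - 1]
        return s

    obs = toy_model(0.15, 20.0) + rng.normal(0.0, 0.5, t.size)   # synthetic "observations"

    def nse(sim, o):
        return 1.0 - np.sum((sim - o) ** 2) / np.sum((o - o.mean()) ** 2)

    # GLUE: uniform Monte Carlo sampling of the parameter space, retain "behavioural" sets.
    n_runs = 5000
    k_s = rng.uniform(0.01, 0.5, n_runs)
    s0_s = rng.uniform(1.0, 50.0, n_runs)
    scores = np.array([nse(toy_model(k, s0), obs) for k, s0 in zip(k_s, s0_s)])

    behavioural = scores > 0.5                       # acceptance threshold (subjective in GLUE)
    weights = scores[behavioural] - 0.5              # informal likelihood weights above the threshold
    weights /= weights.sum()
    print(f"{behavioural.sum()} behavioural sets, "
          f"weighted mean k = {np.sum(weights * k_s[behavioural]):.3f} (true 0.15)")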

  5. Gamow-teller beta decay of 29Na and comparison with shell-model predictions

    International Nuclear Information System (INIS)

    Baumann, P.; Dessagne, P.; Huck, A.

    1987-01-01

    Gamma radiation and delayed neutrons following the decay of 29Na have been studied in singles and coincidence mode with mass-separated sources at the ISOLDE facility. Evidence for a first excited state in 29Mg at 54.6 keV has been found, in good agreement with theoretical predictions. From the lifetime, deduced from γ-γ delayed coincidences with BaF2 counters, a transition strength is found for the 55 keV (1/2+) → g.s. (3/2+) transition. A level scheme built on the new state at 55 keV and involving five new transitions has been established for bound levels in 29Mg. The features of these levels are compared with predictions of shell-model calculations, with which they substantially agree. In the neutron time-of-flight spectra, high-energy neutron branches have been found in addition to the previously reported decays. The observed distribution of the Gamow-Teller transition strengths with excitation energy for particle-unbound levels in 29Mg is found to be shifted upward by about 1 MeV relative to shell-model predictions.

  6. Angle-correlated cross sections in the framework of the continuum shell model

    International Nuclear Information System (INIS)

    Moerschel, K.P.

    1984-01-01

    In this thesis, a framework for treating angle-correlated cross sections within the continuum shell model was developed, by which coincidence electron-scattering experiments on nuclei are described. To this end, the existing Darmstadt continuum-shell-model code was extended to calculate the correlation coefficients, in which the nuclear dynamics enter and which completely determine the angle-correlated cross sections. Including the kinematics, a method for integrating over the scattered electron was presented and used for comparison with corresponding experiments. As an application, correlation coefficients for the proton channel in 12C with 1- and 2+ excitations were studied. Using these coefficients, cross sections for the reaction 12C(e,p)11B were calculated and compared with experiment; the developed methods proved suitable for correctly predicting both the shape and the magnitude of the experimental cross sections. (orig.)

  7. Shell-model method for Gamow-Teller transitions in heavy deformed odd-mass nuclei

    Science.gov (United States)

    Wang, Long-Jun; Sun, Yang; Ghorui, Surja K.

    2018-04-01

    A shell-model method for calculating Gamow-Teller (GT) transition rates in heavy deformed odd-mass nuclei is presented. The method is developed within the framework of the projected shell model. To meet the computational requirements when many multi-quasiparticle configurations are included in the basis, a numerical advancement based on the Pfaffian formula is introduced. With this new many-body technique, it becomes feasible to perform state-by-state calculations for the GT nuclear matrix elements of β-decay and electron-capture processes, including those at high excitation energies in heavy nuclei, which are usually deformed. The first results, β- decays of the well-deformed A = 153 neutron-rich nuclei, are shown as an example. The known log(ft) data corresponding to the B(GT-) decay rates of the ground state of 153Nd to the low-lying states of 153Pm are well described. It is further shown that the B(GT) distributions can have a strong dependence on the detailed microscopic structure of the relevant states of both the parent and daughter nuclei.

  8. Towards the modeling of nanoindentation of virus shells: Do substrate adhesion and geometry matter?

    Science.gov (United States)

    Bousquet, Arthur; Dragnea, Bogdan; Tayachi, Manel; Temam, Roger

    2016-12-01

    Soft nanoparticles adsorbing at surfaces undergo deformation and buildup of elastic strain as a consequence of interfacial adhesion of similar magnitude to the constitutive interactions. An example is the adsorption of virus particles at surfaces, a phenomenon of central importance for experiments in virus nanoindentation and for understanding of virus entry. The influence of adhesion forces and substrate corrugation on the mechanical response to indentation has not been studied. This is somewhat surprising considering that many single-stranded RNA icosahedral viruses are organized by soft intermolecular interactions, while relatively strong adhesion forces are required to immobilize a virus for nanoindentation. This article presents numerical simulations via finite element discretization investigating the deformation of a thick shell in the context of slow-evolution linear elasticity and in the presence of adhesion interactions with the substrate. We study the influence of the adhesion forces on the deformation of the virus model under axial compression on a flat substrate by comparing the force-displacement curves for a shell having elastic constants relevant to virus capsids with and without adhesion forces derived from the Lennard-Jones potential. Finally, we study the influence of the geometry of the substrate in two dimensions by comparing deformation of the virus model adsorbed at the cusp between two cylinders with that on a flat surface.

  9. Comparison of deep inelastic electron-photon scattering data with the HERWIG and PHOJET Monte Carlo models

    CERN Document Server

    Achard, P.; Braccini, S.; Chamizo, M.; Cowan, G.; de Roeck, A.; Field, J.H.; Finch, A.J.; Lin, C.H.; Lauber, J.A.; Lehto, M.H.; Kienzle-Focacci, M.N.; Miller, D.J.; Nisius, R.; Saremi, S.; Soldner-Rembold, S.; Surrow, B.; Taylor, R.J.; Wadhwa, M.; Wright, A.E.

    2002-01-01

    Deep inelastic electron-photon scattering is studied in the $Q^2$ range from 1.2 to 30 GeV$^2$ using the LEP1 data taken with the ALEPH, L3 and OPAL detectors at centre-of-mass energies close to the mass of the Z boson. Distributions of the measured hadronic final state are corrected to the hadron level and compared to the predictions of the HERWIG and PHOJET Monte Carlo models. For large regions in most of the distributions studied the results of the different experiments agree with one another. However, significant differences are found between the data and the models. Therefore the combined LEP data serve as an important input to improve on the Monte Carlo models.

  10. Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models

    Science.gov (United States)

    Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri

    2017-01-01

    Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.

  11. Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models

    International Nuclear Information System (INIS)

    Saraswati, Teguh Endah; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri

    2017-01-01

    Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory. (paper)

  12. Symmetry chains for the atomic shell model. I. Classification of symmetry chains for atomic configurations

    International Nuclear Information System (INIS)

    Gruber, B.; Thomas, M.S.

    1980-01-01

    In this article the symmetry chains for the atomic shell model are classified in such a way that they lead from the group SU(4l+2) to its subgroup SO_J(3). The atomic configurations (nl)^N transform like irreducible representations of the group SU(4l+2), while SO_J(3) corresponds to total angular momentum in SU(4l+2). The defining matrices for the various embeddings are given for each symmetry chain that is obtained. These matrices also define the projection onto the weight subspaces for the corresponding subsymmetries and thus relate the various quantum numbers and determine the branching of representations. It is shown in this article that three (interrelated) symmetry chains are obtained which correspond to L-S coupling, j-j coupling, and a seniority-dependent coupling. Moreover, for l ≤ 6 these chains are complete, i.e., there are no other chains but these. In articles to follow, the symmetry chains that lead from the group SO(8l+5) to SO_J(3) will be discussed, with the entire atomic shell transforming like an irreducible representation of SO(8l+5). The transformation properties of the states of the atomic shell will be determined according to the various symmetry chains obtained. The symmetry lattice discussed in this article forms a sublattice of the larger symmetry lattice with SO(8l+5) as supergroup. Thus the transformation properties of the states of the atomic configurations, according to the various symmetry chains discussed in this article, will be obtained too. (author)

  13. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.

  14. Study of the tensor correlation in oxygen isotopes using mean-field-type and shell model methods

    International Nuclear Information System (INIS)

    Sugimoto, Satoru

    2007-01-01

    The tensor force plays important roles in nuclear structure. Recently, we have developed a mean-field-type model which can treat the two-particle-two-hole correlation induced by the tensor force. We applied the model to sub-closed-shell oxygen isotopes and found that a sizable attractive energy comes from the tensor force. We also studied the tensor correlation in 16O using a shell model including two-particle-two-hole configurations. In this case, quite a large attractive contribution to the correlation energy is obtained from the tensor force.

  15. Study of Fractal Features of Geomagnetic Activity Through an MHD Shell Model

    Science.gov (United States)

    Dominguez, M.; Nigro, G.; Munoz, V.; Carbone, V.

    2013-12-01

    Studies on complexity have been of great interest in plasma physics, because they provide new insights and reveal possible universalities on issues such as geomagnetic activity, turbulence in laboratory plasmas, physics of the solar wind, etc. [1, 2]. In particular, various studies have discussed the relationship between the fractal dimension, as a measure of complexity, and physical processes in magnetized plasmas such as the Sun's surface, the solar wind and the Earth's magnetosphere, including the possibility of forecasting geomagnetic activity [3, 4, 5]. Shell models are low-dimensional dynamical models describing the main statistical properties of magnetohydrodynamic (MHD) turbulence [6]. These models allow us to describe extreme parameter conditions, hence reaching very high Reynolds (Re) numbers. In this work an MHD shell model is used to describe the dissipative events that take place in the Earth's magnetosphere and cause geomagnetic storms. The box-counting fractal dimension (D) [7] is calculated for the time series of the magnetic energy dissipation rate obtained in this MHD shell model. We analyze the correlation between D and the energy dissipation rate in order to make a comparison with the same analysis made on the geomagnetic data. We show that, depending on the values of the viscosity and the diffusivity, the fractal dimension and the occurrence of bursts exhibit correlations similar to those observed in geomagnetic and solar data [8], suggesting that the latter parameters could play a fundamental role in these processes. References [1] R. O. Dendy, S. C. Chapman, and M. Paczuski, Plasma Phys. Controlled Fusion 49, A95 (2007). [2] T. Chang and C. C. Wu, Phys. Rev. E 77, 045401 (2008). [3] R. T. J. McAteer, P. T. Gallagher, and J. Ireland, Astrophys. J. 631, 628 (2005). [4] V. M. Uritsky, A. J. Klimas, and D. Vassiliadis, Adv. Space Res. 37, 539 (2006). [5] S. C. Chapman, B. Hnat, and K. Kiyani, Nonlinear Proc. Geophys. 15, 445 (2008). [6] G
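
    A simple box-counting estimate of the fractal dimension D of a time-series graph, of the kind applied above to the dissipation-rate series, can be sketched in Python as follows; the signal here is a synthetic random walk standing in for the shell-model output, and the scale range is arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic stand-in for a magnetic-energy dissipation time series.
    signal = np.cumsum(rng.normal(size=4096))

    def box_counting_dimension(y, n_scales=7):
        """Box-counting dimension of the graph (t, y(t)) rescaled to the unit square."""
        t = np.linspace(0.0, 1.0, y.size)
        y = (y - y.min()) / (y.max() - y.min())
        sizes = 2.0 ** -np.arange(1, n_scales + 1)           # box edge lengths 1/2, 1/4, ...
        counts = []
        for eps in sizes:
            # For each column of width eps, count the vertical boxes spanned by the curve.
            cols = np.floor(t / eps).astype(int)
            boxes = set()
            for c in np.unique(cols):
                yc = y[cols == c]
                lo, hi = int(np.floor(yc.min() / eps)), int(np.floor(yc.max() / eps))
                boxes.update((c, r) for r in range(lo, hi + 1))
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
        return slope

    print(f"box-counting dimension D = {box_counting_dimension(signal):.2f}")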

  16. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586

  17. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling.

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β (+) emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  18. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Ledieu, A.

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  19. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Ledieu, A.

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  20. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

    Background: Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte-Carlo simulations, to measure completeness grades of GRNs. Results: We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion: The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can

  1. The Global Modeling Initiative Assessment Model: Model Description, Integration and Testing of the Transport Shell

    Energy Technology Data Exchange (ETDEWEB)

    Rotman, D.A.; Tannahill, J.R.; Kinnison, D.E.; Connell, P.S.; Bergmann, D.; Proctor, D.; Rodriquez, J.M.; Lin, S.J.; Rood, R.B.; Prather, M.J.; Rasch, P.J.; Considine, D.B.; Ramaroson, R.; Kawa, S.R.

    2000-04-25

    We describe the three dimensional global stratospheric chemistry model developed under the NASA Global Modeling Initiative (GMI) to assess the possible environmental consequences from the emissions of a fleet of proposed high speed civil transport aircraft. This model was developed through a unique collaboration of the members of the GMI team. Team members provided computational modules representing various physical and chemical processes, and analysis of simulation results through extensive comparison to observation. The team members' modules were integrated within a computational framework that allowed transportability and simulations on massively parallel computers. A unique aspect of this model framework is the ability to interchange and intercompare different submodules to assess the sensitivity of numerical algorithms and model assumptions to simulation results. In this paper, we discuss the important attributes of the GMI effort, describe the GMI model computational framework and the numerical modules representing physical and chemical processes. As an application of the concept, we illustrate an analysis of the impact of advection algorithms on the dispersion of a NOy-like source in the stratosphere which mimics that of a fleet of commercial supersonic transports (High-Speed Civil Transport (HSCT)) flying between 17 and 20 kilometers.

  2. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    The need to choose appropriate interaction models is among the major drawbacks of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm, which introduces an energy penalty term into the acceptance criteria. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The aim of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or a similar system. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
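
    The energy-penalty idea can be illustrated with a short Python sketch that would replace the pure chi-squared acceptance test of a plain RMC loop (such as the one sketched earlier in this list); the potential parameters, weighting and unit conventions below are illustrative assumptions, not the values used in the paper.

    import numpy as np

    K_E = 1389.35      # Coulomb constant in kJ mol^-1 Angstrom e^-2 (approximate)
    K_B = 0.0083145    # Boltzmann constant in kJ mol^-1 K^-1

    def pair_energy(r, qi, qj, sigma, epsilon):
        """Combined Coulomb + Lennard-Jones energy of one pair at distance r (Angstrom)."""
        lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
        return lj + K_E * qi * qj / r

    def hrmc_accept(chi2_old, chi2_new, e_old, e_new, w_energy, temperature, rng):
        """HRMC criterion: the usual RMC chi^2 term plus a weighted Boltzmann energy penalty."""
        cost_old = 0.5 * chi2_old + w_energy * e_old / (K_B * temperature)
        cost_new = 0.5 * chi2_new + w_energy * e_new / (K_B * temperature)
        return cost_new < cost_old or rng.random() < np.exp(cost_old - cost_new)

    rng = np.random.default_rng(8)
    print(pair_energy(2.3, -1.0, 2.0, 3.0, 0.65))                        # a made-up anion/cation pair
    print(hrmc_accept(1200.0, 1195.0, -8.0e3, -7.9e3, 1.0, 300.0, rng))  # chi^2 improves, energy worsens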

  3. Hybrid method for fast Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with tumor-like heterogeneities.

    Science.gov (United States)

    Zhu, Caigang; Liu, Quan

    2012-01-01

    We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.

  4. Modeling deformation and chaining of flexible shells in a nematic solvent with finite elements on an adaptive moving mesh

    Science.gov (United States)

    DeBenedictis, Andrew; Atherton, Timothy J.; Rodarte, Andrea L.; Hirst, Linda S.

    2018-03-01

    A micrometer-scale elastic shell immersed in a nematic liquid crystal may be deformed by the host if the cost of deformation is comparable to the cost of elastic deformation of the nematic. Moreover, such inclusions interact and form chains due to quadrupolar distortions induced in the host. A continuum theory model using finite elements is developed for this system, using mesh regularization and dynamic refinement to ensure quality of the numerical representation even for large deformations. From this model, we determine the influence of the shell elasticity, nematic elasticity, and anchoring condition on the shape of the shell and hence extract parameter values from an experimental realization. Extending the model to multibody interactions, we predict the alignment angle of the chain with respect to the host nematic as a function of aspect ratio, which is found to be in excellent agreement with experiments.

  5. Spectroscopy and shell model interpretation of high-spin states in the N = 126 nucleus 214Ra

    International Nuclear Information System (INIS)

    Stuchbery, A.E.; Dracoulis, G.D.; Kibedi, T.; Byrne, A.P.; Fabricius, B.; Poletti, A.R.; Lane, G.J.; Baxter, A.M.

    1992-01-01

    Excited states in the N = 126 nucleus 214Ra have been studied using γ-ray and electron spectroscopy following reactions of 12C and 13C on 206Pb targets. Levels were identified to spins of ≈ 25 ħ and excitation energies of ≈ 7.8 MeV. Lifetimes and magnetic moments were measured for several levels, including a spin (25-) core-excited isomer at 6577.0 keV with τ = 184 ± 5 ns and g = 0.66 ± 0.01. The level scheme, lifetime and magnetic moment data are compared with, and discussed in terms of, empirical shell-model calculations and multiparticle octupole-coupled shell-model calculations. In general, the experimental data are well described by the empirical shell model. (orig.)

  6. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
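
    A toy version of sampling one such phase-space ring is sketched below in Python: positions are drawn uniformly over an annulus, direction offsets from a single 2D Gaussian about the beam axis (a simplification of the per-ring direction model), and energies from a truncated Gaussian. All ring radii, covariances and energies are hypothetical, not parameters derived from any reference phase-space file.

    import numpy as np

    rng = np.random.default_rng(5)

    def sample_psr(n, r_in, r_out, z_plane, cov_dir, e_mean, e_sigma):
        """Draw n particles from one phase-space ring: positions on an annulus at z_plane,
        direction offsets (dx, dy) from a 2D Gaussian, energies from a truncated Gaussian."""
        r = np.sqrt(rng.uniform(r_in ** 2, r_out ** 2, n))   # uniform over the annulus area
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        x, y = r * np.cos(phi), r * np.sin(phi)

        dx, dy = rng.multivariate_normal([0.0, 0.0], cov_dir, n).T
        dz = -np.sqrt(np.clip(1.0 - dx ** 2 - dy ** 2, 0.0, None))      # downstream direction cosine

        energy = np.clip(rng.normal(e_mean, e_sigma, n), 0.01, None)    # MeV, truncated at 10 keV
        return np.column_stack([x, y, np.full(n, z_plane), dx, dy, dz, energy])

    # One hypothetical primary-photon ring just above the jaws (cm, unit direction cosines, MeV).
    particles = sample_psr(10_000, 1.0, 1.5, 45.0, [[4e-4, 0.0], [0.0, 4e-4]], 2.1, 0.9)
    print(particles[:3])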

  7. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower
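
    The random-sampling step built on trilinear interpolation can be illustrated with a short Python function; the three coordinate axes and the 3 x 3 x 3 table of simulated values below are placeholders, not the published dataset.

    import numpy as np

    def trilinear(grid, axes, query):
        """Trilinear interpolation of grid[i, j, k] defined on three 1D coordinate axes."""
        idx, frac = [], []
        for q, ax in zip(query, axes):
            i = int(np.clip(np.searchsorted(ax, q) - 1, 0, len(ax) - 2))
            idx.append(i)
            frac.append((q - ax[i]) / (ax[i + 1] - ax[i]))
        (i, j, k), (u, v, w) = idx, frac
        value = 0.0
        for di, fu in ((0, 1 - u), (1, u)):
            for dj, fv in ((0, 1 - v), (1, v)):
                for dk, fw in ((0, 1 - w), (1, w)):
                    value += fu * fv * fw * grid[i + di, j + dj, k + dk]
        return value

    # Hypothetical 3 x 3 x 3 table of a simulated oxygenation parameter, indexed by
    # blood velocity (mm/s), vessel proximity (micrometers) and inflowing pO2 (mmHg).
    velocity = np.array([0.5, 1.0, 2.0])
    proximity = np.array([50.0, 100.0, 200.0])
    p_o2_in = np.array([40.0, 60.0, 80.0])
    table = np.random.default_rng(7).uniform(5.0, 60.0, (3, 3, 3))

    print(trilinear(table, (velocity, proximity, p_o2_in), (1.3, 120.0, 55.0)))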

  8. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  9. Development of an accurate 3D Monte Carlo broadband atmospheric radiative transfer model

    Science.gov (United States)

    Jones, Alexandra L.

    Radiation is the ultimate source of energy that drives our weather and climate. It is also the fundamental quantity detected by satellite sensors from which earth's properties are inferred. Radiative energy from the sun and emitted from the earth and atmosphere is redistributed by clouds in one of their most important roles in the atmosphere. Without accurately representing these interactions we greatly decrease our ability to successfully predict climate change, weather patterns, and to observe our environment from space. The remote sensing algorithms and dynamic models used to study and observe earth's atmosphere all parameterize radiative transfer with approximations that reduce or neglect horizontal variation of the radiation field, even in the presence of clouds. Despite having complete knowledge of the underlying physics at work, these approximations persist due to perceived computational expense. In the current context of high resolution modeling and remote sensing observations of clouds, from shallow cumulus to deep convective clouds, and given our ever advancing technological capabilities, these approximations have been exposed as inappropriate in many situations. This presents a need for accurate 3D spectral and broadband radiative transfer models to provide bounds on the interactions between clouds and radiation to judge the accuracy of similar but less expensive models and to aid in new parameterizations that take into account 3D effects when coupled to dynamic models of the atmosphere. Developing such a state of the art model based on the open source, object-oriented framework of the I3RC Monte Carlo Community Radiative Transfer ("IMC-original") Model is the task at hand. It has involved incorporating (1) thermal emission sources of radiation ("IMC+emission model"), allowing it to address remote sensing problems involving scattering of light emitted at earthly temperatures as well as spectral cooling rates, (2) spectral integration across an arbitrary

  10. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  11. Diffusion Monte Carlo determination of the binding energy of the 4He nucleus for model Wigner potentials

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F. (Manchester Univ. (United Kingdom). Inst. of Science and Technology); Buendia, E. (Granada Univ. (Spain). Dept. de Fisica Moderna); Flynn, M.F. (Kent State Univ., OH (United States). Dept. of Physics); Guardiola, R. (Valencia Univ. (Spain). Dept. de Fisica Atomica y Nuclear)

    1992-02-01

    The diffusion Monte Carlo method is used to integrate the four-body Schroedinger equation corresponding to the 4He nucleus for several model potentials of Wigner type. Good importance-sampling trial functions are used, and the sampling is large enough to obtain the ground-state energy with an error of only 0.01 to 0.02 MeV. (author).
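
    For readers unfamiliar with the technique, a one-dimensional toy diffusion Monte Carlo run for a single particle in a harmonic well is sketched below in Python; it shows the branching random walk and reference-energy adjustment that such calculations rely on, but it is in no way the four-body nuclear computation described above.

    import numpy as np

    rng = np.random.default_rng(6)

    def v(x):
        return 0.5 * x ** 2              # harmonic well, hbar = m = omega = 1, exact E0 = 0.5

    n_target, dt, n_steps = 2000, 0.01, 4000
    walkers = rng.normal(0.0, 1.0, n_target)
    e_ref = v(walkers).mean()
    samples = []

    for step in range(n_steps):
        # Diffusion: free Gaussian displacement with variance dt.
        walkers = walkers + rng.normal(0.0, np.sqrt(dt), walkers.size)
        # Branching: replicate or kill each walker according to its local growth factor.
        w = np.exp(-dt * (v(walkers) - e_ref))
        copies = (w + rng.random(walkers.size)).astype(int)
        walkers = np.repeat(walkers, copies)
        # Population control: nudge the reference energy toward the target population size.
        e_ref -= 0.1 * np.log(walkers.size / n_target)
        if step > n_steps // 2:
            samples.append(e_ref)

    print(f"DMC ground-state energy ~ {np.mean(samples):.3f} (exact 0.5)")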

  12. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed, and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.

  13. Monte Carlo simulation of the Leksell Gamma Knife: I. Source modelling and calculations in homogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Moskvin, Vadim [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]. E-mail: vmoskvin@iupui.edu; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2002-06-21

    The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife, a precision method for treating intracranial lesions. Radiation from a single 60Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet for photons and electrons were calculated. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife dosimetry, has been computed and compared with experimental results, with calculations performed by other authors with the use of the EGS4 Monte Carlo code, and data provided by the treatment planning system Gamma Plan. Good agreement was found between these data and results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery. (author)

  14. Projected shell model analysis of multi-quasiparticle high-K isomers in 174Hf

    CERN Document Server

    Zhou Xian Rong; Sun Yang; Long Gui Lu

    2002-01-01

    Multi-quasiparticle high-K states in 174Hf are studied in the framework of the projected shell model. The calculation reproduces well the observed ground-state band as well as most of the two- and four-quasiparticle rotational bands. Some as yet unobserved high-K isomeric states in 174Hf are predicted. Possible reasons for the existing discrepancies between calculation and experiment are discussed. It is suggested that the projected shell model may be a useful method for studying multi-quasiparticle high-K isomers and the K-mixing phenomenon in heavy deformed nuclei.

  15. Validating a virtual source model based in Monte Carlo Method for profiles and percent deep doses calculation

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications. More specifically, it has received more attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator's head is necessary, and commonly the supplier does not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications. It is the most efficient method for particle generation and can provide an accuracy similar to that obtained when the phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for the calculation of profiles and percent depth doses. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. The current model is easy to build and test. (author)

  16. The 3-Attractor Water Model: Monte-Carlo Simulations with a New, Effective 2-Body Potential (BMW)

    Directory of Open Access Journals (Sweden)

    Francis Muguet

    2003-02-01

    According to the precepts of the 3-attractor (3-A) water model, effective 2-body water potentials should feature as local minima the bifurcated and inverted water dimers in addition to the well-known linear water dimer global minimum. In order to test the 3-A model, a new pairwise effective intermolecular rigid water potential has been designed. The new potential is part of a new class of potentials called BMW (Bushuev-Muguet-Water), which is built by modifying existing empirical potentials. This version (BMW v. 0.1) has been designed by modifying the SPC/E empirical water potential. It is a preliminary version well suited for exploratory Monte-Carlo simulations. The shape of the potential energy surface (PES) around each local minimum has been approximated with the help of Gaussian functions. Classical Monte Carlo simulations have been carried out for liquid water in the NPT ensemble for a very wide range of state parameters up to the supercritical water regime. Thermodynamic properties are reported. The radial distribution functions (RDFs) have been computed and are compared with the RDFs obtained from neutron scattering experimental data. Our preliminary Monte-Carlo simulations show that the seemingly unconventional hypotheses of the 3-A model are most plausible. The simulation has also uncovered a totally new role for 2-fold H-bonds.

  17. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Science.gov (United States)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of

  18. Modeling of FREYA fast critical experiments with the Serpent Monte Carlo code

    International Nuclear Information System (INIS)

    Fridman, E.; Kochetkov, A.; Krása, A.

    2017-01-01

    Highlights: • FREYA – the EURATOM project executed to support fast lead-based reactor systems. • Critical experiments in the VENUS-F facility during the FREYA project. • Characterization of the critical VENUS-F cores with Serpent. • Comparison of the numerical Serpent results to the experimental data. - Abstract: The FP7 EURATOM project FREYA has been executed between 2011 and 2016 with the aim of supporting the design of fast lead-cooled reactor systems such as MYRRHA and ALFRED. During the project, a number of critical experiments were conducted in the VENUS-F facility located at SCK·CEN, Mol, Belgium. The Monte Carlo code Serpent was one of the codes applied for the characterization of the critical VENUS-F cores. Four critical configurations were modeled with Serpent, namely the reference critical core, the clean MYRRHA mock-up, the full MYRRHA mock-up, and the critical core with the ALFRED island. This paper briefly presents the VENUS-F facility, provides a detailed description of the aforementioned critical VENUS-F cores, and compares the numerical results calculated by Serpent to the available experimental data. The compared parameters include keff, point kinetics parameters, fission rate ratios of important actinides to that of U235 (spectral indices), axial and radial distribution of fission rates, and lead void reactivity effect. The reported results show generally good agreement between the calculated and experimental values. Nevertheless, the paper also reveals some noteworthy issues requiring further attention. This includes the systematic overprediction of reactivity and systematic underestimation of the U238 to U235 fission rate ratio.

  19. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    Science.gov (United States)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
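
    The particle navigation functions mentioned above require the distance from a particle position to a region boundary defined by a quadratic surface. A minimal CPU-side sketch of that geometric kernel (not the authors' GPU code) is given below, assuming each surface is written as x^T A x + b.x + c = 0 with a symmetric matrix A.

```python
import numpy as np

def distance_to_quadric(p, d, A, b, c, eps=1e-9):
    """Smallest positive distance along unit direction d from point p to the
    quadric surface x^T A x + b.x + c = 0 (A symmetric); inf if it is not hit."""
    qa = d @ A @ d
    qb = 2.0 * (p @ A @ d) + b @ d
    qc = p @ A @ p + b @ p + c
    if abs(qa) < eps:                       # surface is effectively planar along d
        roots = [] if abs(qb) < eps else [-qc / qb]
    else:
        disc = qb * qb - 4.0 * qa * qc
        if disc < 0.0:
            return np.inf
        s = np.sqrt(disc)
        roots = [(-qb - s) / (2.0 * qa), (-qb + s) / (2.0 * qa)]
    positive = [t for t in roots if t > eps]
    return min(positive) if positive else np.inf
```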

  20. A Monte Carlo Model for Neutron Coincidence Counting with Fast Organic Liquid Scintillation Detectors

    International Nuclear Information System (INIS)

    Gamage, Kelum A.A.; Joyce, Malcolm J.; Cave, Frank D.

    2013-06-01

    Neutron coincidence counting is an established, nondestructive method for the qualitative and quantitative analysis of nuclear materials. Several even-numbered nuclei of the actinide isotopes, and especially even-numbered plutonium isotopes, undergo spontaneous fission, resulting in the emission of neutrons which are correlated in time. The characteristics of this emission, i.e., the multiplicity, can be used to identify the isotope in question. Similarly, the corresponding characteristics of isotopes that are susceptible to stimulated fission are also isotope-dependent, as well as dependent on the energy of the incident neutron that stimulates the fission event, and can hence likewise be used to identify and quantify isotopes. Most of the neutron coincidence counters currently used are based on 3He gas tubes. In the 3He-filled gas proportional counter, the (n, p) reaction is largely responsible for the detection of slow neutrons and hence neutrons have to be slowed down to thermal energies. As a result, moderator and shielding materials are essential components of many systems designed to assess quantities of fissile materials. The use of a moderator, however, extends the die-away time of the detector, necessitating a larger coincidence window and, further, 3He is now in short supply and expensive. In this paper, a simulation based on the Monte Carlo method, performed using MCNPX 2.6.0, is described which models the geometry of a sector-shaped liquid scintillation detector in response to coincident neutron events. The detection of neutrons from a mixed-oxide (MOX) fuel pellet using an organic liquid scintillator has been simulated for different thicknesses of scintillators. In this new neutron detector, a layer of lead has been used to reduce the gamma-ray fluence reaching the scintillator. The effect of lead on neutron detection has also been estimated by considering different thicknesses of lead layers. (authors)
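
    Downstream of such a simulation, the list of detected neutron arrival times is typically reduced to coincidence rates. The sketch below shows a much-simplified gate-based doubles count (no accidentals gate or dead-time correction); the gate and pre-delay values are chosen purely for illustration.

```python
import numpy as np

def count_doubles(times_ns, gate_ns=100.0, predelay_ns=4.5):
    """Much-simplified doubles count: for every detected pulse, neutrons whose
    arrival falls inside a gate opened after a short pre-delay are counted as
    correlated.  No accidentals gate or dead-time correction is applied."""
    t = np.sort(np.asarray(times_ns, dtype=float))
    doubles = 0
    for trigger in t:
        lo = trigger + predelay_ns
        hi = lo + gate_ns
        doubles += np.searchsorted(t, hi) - np.searchsorted(t, lo)
    return int(doubles)

# Example with arbitrary illustrative timestamps (nanoseconds):
print(count_doubles([0.0, 30.0, 500.0, 530.0, 10000.0]))
```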

  1. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-15

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms
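
    The percolation mechanism invoked above can be illustrated in isolation: the sketch below tests whether the boron sub-network of a cubic lattice spans the sample, which is the kind of criterion that governs whether the soluble species can be released. It only illustrates the percolation test, not the full dissolution/reprecipitation kinetic Monte Carlo model.

```python
import numpy as np
from collections import deque

def boron_percolates(is_boron):
    """Check whether the boron sub-network spans the sample along z.

    is_boron : 3D boolean array marking boron-occupied lattice sites.
    A breadth-first flood fill from the bottom face tests whether any
    connected boron cluster reaches the top face (site percolation).
    """
    nx, ny, nz = is_boron.shape
    seen = np.zeros_like(is_boron, dtype=bool)
    queue = deque((i, j, 0) for i in range(nx) for j in range(ny)
                  if is_boron[i, j, 0])
    for s in queue:
        seen[s] = True
    while queue:
        i, j, k = queue.popleft()
        if k == nz - 1:
            return True                      # a cluster reaches the opposite face
        for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            a, b, c = i + di, j + dj, k + dk
            if (0 <= a < nx and 0 <= b < ny and 0 <= c < nz
                    and is_boron[a, b, c] and not seen[a, b, c]):
                seen[a, b, c] = True
                queue.append((a, b, c))
    return False
```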

  2. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion, so that the percolation properties of the boron sub-network are no longer a sufficient explanation for the behavior of these glasses. Meanwhile, we have developed a theoretical model based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three-oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  3. Performance prediction and validation of equilibrium modeling for gasification of cashew nut shell char

    Directory of Open Access Journals (Sweden)

    M. Venkata Ramanan

    2008-09-01

    Full Text Available Cashew nut shell, a waste product obtained during deshelling of cashew kernels, had in the past been deemed unfit as a fuel for gasification owing to its high occluded oil content. The oil, a source of natural phenol, oozes out upon gasification, thereby clogging the gasifier throat, downstream equipment and associated utilities with oil, resulting in ineffective gasification and premature failure of utilities due to its corrosive characteristics. To overcome this drawback, the cashew shells were de-oiled by charring in closed chambers and were subsequently gasified in an autothermal downdraft gasifier. Equilibrium modeling was carried out to predict the producer gas composition under varying performance-influencing parameters, viz., equivalence ratio (ER), reaction temperature (RT) and moisture content (MC). The results were compared with the experimental output and are presented in this paper. The model agrees satisfactorily with the experimental outcome at the ER applicable to gasification systems, i.e., 0.15 to 0.30. The results show that the mole fractions of (i) H2, CO and CH4 decrease while those of (N2 + H2O) and CO2 increase with ER, (ii) H2 and CO increase while CH4, (N2 + H2O) and CO2 decrease with reaction temperature, and (iii) H2, CH4, CO2 and (N2 + H2O) increase while CO decreases with moisture content. However, at an equivalence ratio of less than 0.15, the model predicts an unrealistic composition and is observed to be not valid below this ER.

  4. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    Science.gov (United States)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single-stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data, (2) the certainty with which regression coefficients are known, and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most
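
    A stripped-down version of such a Monte Carlo count of admissible single-stage solutions is sketched below (in Python rather than the authors' MATLAB). It assumes a partitioning regression that is linear in its coefficients, which the abstract does not specify, and simply counts how often a coefficient draw places the predicted partition coefficient inside the acceptable core/mantle range; all inputs are placeholders for the user's own regression output.

```python
import numpy as np

rng = np.random.default_rng(6)

def count_single_stage_solutions(coef_mean, coef_cov, terms, d_lo, d_hi, n=10000):
    """Fraction of coefficient draws that yield an admissible single-stage solution.

    coef_mean, coef_cov : regression coefficients and their covariance matrix for
                          an assumed log-linear partitioning model
                          log10(D) = terms @ coef
    terms               : model terms evaluated at one candidate (P, T, fO2, nbo/t) point
    d_lo, d_hi          : acceptable range of the core/mantle ratio for the element
    """
    coefs = rng.multivariate_normal(coef_mean, coef_cov, size=n)
    log_d = coefs @ terms
    ok = (log_d >= np.log10(d_lo)) & (log_d <= np.log10(d_hi))
    return ok.mean()
```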

  5. Modeling the structure of amorphous MoS3: a neutron diffraction and reverse Monte Carlo study.

    Science.gov (United States)

    Hibble, Simon J; Wood, Glenn B

    2004-01-28

    A model for the structure of amorphous molybdenum trisulfide, a-MoS3, has been created using reverse Monte Carlo methods. This model, which consists of chains of MoS6 units sharing three sulfurs with each of its two neighbors and forming alternate long, nonbonded, and short, bonded, Mo-Mo separations, is a good fit to the neutron diffraction data and is chemically and physically realistic. The paper identifies the limitations of previous models based on Mo3 triangular clusters in accounting for the available experimental data.

  6. Studies of Top Quark Monte Carlo Modelling with the ATLAS Detector

    CERN Document Server

    Asquith, Lily; The ATLAS collaboration

    2017-01-01

    The status of recent studies of modern Monte Carlo generator setups for the pair production of top quarks at the LHC is presented. Samples at a center-of-mass energy of 13 TeV have been generated for a variety of generators and with different generator configurations. The predictions from these samples are compared to ATLAS data for a variety of kinematic observables.

  7. A non-local shell model of hydrodynamic and magnetohydrodynamic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Plunian, F [Laboratoire de Geophysique Interne et Tectonophysique, CNRS, Universite Joseph Fourier, Maison des Geosciences, BP 53, 38041 Grenoble Cedex 9 (France); Stepanov, R [Institute of Continuous Media Mechanics, Korolyov 1, 614013 Perm (Russian Federation)

    2007-08-15

    We derive a new shell model of magnetohydrodynamic (MHD) turbulence in which the energy transfers are not necessarily local. Like the original MHD equations, the model conserves the total energy, magnetic helicity, cross-helicity and volume in phase space (Liouville's theorem) apart from the effects of external forcing, viscous dissipation and magnetic diffusion. The model of hydrodynamic (HD) turbulence is derived from the MHD model setting the magnetic field to zero. In that case the conserved quantities are the kinetic energy and the kinetic helicity. In addition to a statistically stationary state with a Kolmogorov spectrum, the HD model exhibits multiscaling. The anomalous scaling exponents are found to depend on a free parameter {alpha} that measures the non-locality degree of the model. In freely decaying turbulence, the infra-red spectrum also depends on {alpha}. Comparison with theory suggests using {alpha} = -5/2. In MHD turbulence, we investigate the fully developed turbulent dynamo for a wide range of magnetic Prandtl numbers in both kinematic and dynamic cases. Both local and non-local energy transfers are clearly identified.

  8. Environmental dose rate heterogeneity of beta radiation and its implications for luminescence dating: Monte Carlo modelling and experimental validation

    DEFF Research Database (Denmark)

    Nathan, R.P.; Thomas, P.J.; Jain, M.

    2003-01-01

    D-e distributions and it is important to characterise this effect, both to ensure that dose distributions are not misinterpreted, and that an accurate beta dose rate is employed in dating calculations. In this study, we make a first attempt at providing a description of potential problems in heterogeneous environments ... and identify the likely size of these effects on D-e distributions. The study employs the MCNP 4C Monte Carlo electron/photon transport model, supported by an experimental validation of the code in several case studies. We find good agreement between the experimental measurements and the Monte Carlo ... simulations. It is concluded that the effect of beta heterogeneity in complex environments for luminescence dating is twofold: (i) the infinite matrix dose rate is not universally applicable; its accuracy depends on the scale of the heterogeneity, and (ii) the interpretation of D-e distributions is complex...

  9. Monte Carlo simulations of phase transitions and lattice dynamics in an atom-phonon model for spin transition compounds

    International Nuclear Information System (INIS)

    Apetrei, Alin Marian; Enachescu, Cristian; Tanasa, Radu; Stoleriu, Laurentiu; Stancu, Alexandru

    2010-01-01

    We apply here the Monte Carlo Metropolis method to a known atom-phonon coupling model for 1D spin transition compounds (STC). These inorganic molecular systems can switch, under thermal or optical excitation, between two states in thermodynamic competition, i.e., high spin (HS) and low spin (LS). In the model, the ST units (molecules) are linked by springs, whose elastic constants depend on the spin states of the neighboring atoms and can only have three possible values. Several previous analytical papers considered a unique average value for the elastic constants (mean-field approximation) and obtained phase diagrams and thermal hysteresis loops. Recently, Monte Carlo simulation papers, taking into account all three values of the elastic constants, obtained thermal hysteresis loops, but no phase diagrams. Employing Monte Carlo simulation, in this work we obtain the phase diagram at T=0 K, which is fully consistent with earlier analytical work; however, it is more complex. The main difference is the existence of two supplementary critical curves that mark a hysteresis zone in the phase diagram. This explains the pressure hysteresis curves at low temperature observed experimentally and predicts a 'chemical' hysteresis in STC at very low temperatures. The formation and the dynamics of the domains are also discussed.
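
    A schematic, spin-only Metropolis sketch in the spirit of such simulations is shown below. It replaces the full atom-phonon model by an Ising-like chain in which the pair energy takes one of three values depending on the neighbouring spin states and the HS/LS degeneracy enters through a temperature-dependent effective field; all numerical values are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not those of the paper): sigma = +1 (HS), -1 (LS).
# The bond energy takes one of three values depending on the neighbouring spin
# states, mimicking the three elastic constants; the HS/LS degeneracy ratio g
# enters through a temperature-dependent effective field.
J = {(1, 1): -0.5, (-1, -1): -0.8, (1, -1): -0.3, (-1, 1): -0.3}
Delta, ln_g, kB = 1.0, np.log(15.0), 1.0

def local_energy(sigma, i, T):
    """Energy terms that change when spin i is flipped."""
    e = 0.5 * (Delta - kB * T * ln_g) * sigma[i]
    if i > 0:
        e += J[(sigma[i - 1], sigma[i])]
    if i < len(sigma) - 1:
        e += J[(sigma[i], sigma[i + 1])]
    return e

def metropolis_sweep(sigma, T):
    for _ in range(len(sigma)):
        i = rng.integers(len(sigma))
        e_old = local_energy(sigma, i, T)
        sigma[i] *= -1
        dE = local_energy(sigma, i, T) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / (kB * T)):
            sigma[i] *= -1          # reject: restore the previous spin state

chain = rng.choice([-1, 1], size=100)
for _ in range(500):
    metropolis_sweep(chain, T=0.4)
print("HS fraction:", np.mean(chain == 1))
```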

  10. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hui-Jun Guo

    2014-09-01

    Full Text Available Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.
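
    At the core of any such kinetic Monte Carlo simulation is the rejection-free selection of the next event from a catalogue of rates (deposition of a 4H- or 6H-stacked adatom, a surface-diffusion hop, and so on). A generic sketch of that step, independent of the specific competitive lattice model, is given below; the rates and event callbacks are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmc_step(events):
    """One rejection-free kinetic Monte Carlo step.

    events : list of (rate, callback) pairs.  The chosen event is executed and
             the exponentially distributed waiting time is returned.
    """
    rates = np.array([r for r, _ in events])
    total = rates.sum()
    # pick an event with probability proportional to its rate
    idx = np.searchsorted(np.cumsum(rates), rng.random() * total)
    events[idx][1]()                        # apply the selected event
    return -np.log(rng.random()) / total    # time increment

# Example: two competing deposition channels and one diffusion hop (rates illustrative).
state = {"layers_4H": 0, "layers_6H": 0}
def dep_4h(): state["layers_4H"] += 1
def dep_6h(): state["layers_6H"] += 1
events = [(2.0, dep_4h), (1.5, dep_6h), (10.0, lambda: None)]
t = sum(kmc_step(events) for _ in range(1000))
print(state, "after", t, "time units")
```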

  11. High-spin states in 133Cs and the shell model description

    Science.gov (United States)

    Biswas, S.; Palit, R.; Sethi, J.; Saha, S.; Raghav, A.; Garg, U.; Laskar, Md. S. R.; Babra, F. S.; Naik, Z.; Sharma, S.; Deo, A. Y.; Parkar, V. V.; Naidu, B. S.; Donthi, R.; Jadhav, S.; Jain, H. C.; Joshi, P. K.; Sihotra, S.; Kumar, S.; Mehta, D.; Mukherjee, G.; Goswami, A.; Srivastava, P. C.

    2017-06-01

    The high-spin states in 133Cs, populated using the reaction 130Te(7Li,4n) with a beam energy of 45 MeV, have been extended up to an excitation energy of 5.265 MeV using the Indian National Gamma Array. The observed one- and three-quasiparticle bands in 133Cs, built on the πh11/2, πg7/2, πd5/2 and (πg7/2 πd5/2)1 ⊗ νh11/2^-2 configurations, respectively, have structures similar to those seen in the lighter odd-A Cs isotopes. The experimental level scheme has been compared with large-scale shell-model calculations without truncation using the jj55pna interaction, showing good agreement for both positive- and negative-parity states.

  12. Independent clusters in coordinate space: an efficient alternative to shell-model expansions

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F. (Manchester Univ. (UK). Inst. of Science and Technology); Buendia, E. (Granada Univ. (Spain). Dept. de Fisica Moderna); Flynn, M.F.; Guardiola, R. (Valencia Univ. (Spain). Dept. de Fisica Nuclear)

    1991-06-01

    A previous shell-model-style calculation for the ground-state energy of the {sup 4}He nucleus, based on coupled cluster techniques, was able to treat exactly the centre-of-mass motion. It is now recast in a precisely equivalent but vastly more computationally efficient form, directly in terms of coordinate-space correlation functions which are expanded in a Gaussian geminal basis and determined variationally. This reformulation further leads in a straightforward manner to a natural procedure for including higher-order correlations. Its implementation at even the simplest level produces a significant improvement in the already very good upper bounds achieved for the ground-state energy. Further extensions are also discussed. (author).

  13. Nonlinear Shell Modeling of Thin Membranes with Emphasis on Structural Wrinkling

    Science.gov (United States)

    Tessler, Alexander; Sleight, David W.; Wang, John T.

    2003-01-01

    Thin solar sail membranes of very large span are being envisioned for near-term space missions. One major design issue that is inherent to these very flexible structures is the formation of wrinkling patterns. Structural wrinkles may deteriorate a solar sail's performance and, in certain cases, structural integrity. In this paper, a geometrically nonlinear, updated Lagrangian shell formulation is employed using the ABAQUS finite element code to simulate the formation of wrinkled deformations in thin-film membranes. The restrictive assumptions of true membranes, i.e. Tension Field theory (TF), are not invoked. Two effective modeling strategies are introduced to facilitate convergent solutions of wrinkled equilibrium states. Several numerical studies are carried out, and the results are compared with recent experimental data. Good agreement is observed between the numerical simulations and experimental data.

  14. Seniority structure of the cranked shell model wave function and the pairing phase transition

    International Nuclear Information System (INIS)

    Wu, C.S.; Zeng, J.Y.; Center of Theoretical Physics, China Center of Advanced Science and Technology

    1989-01-01

    The accurate solutions to the low-lying eigenstates of the cranked shell model Hamiltonian are obtained by the particle-number-conserving treatment, in which a many-particle configuration truncation is adopted instead of the conventional single-particle level truncation. The variation of the seniority structures of low-lying eigenstates with rotational frequency ω is analyzed. The gap parameter of the yrast band decreases with ω very slowly, though the seniority structure has undergone a great change. It is suggested to use the seniority structure to indicate the possible pairing phase transition from a superconducting state to a normal state. The important blocking effects on the low-lying eigenstates are discussed

  15. Stability of nonrotating stellar systems. II - Prolate shell-orbit models

    Energy Technology Data Exchange (ETDEWEB)

    Merritt, D.; Hernquist, L. (Rutgers University, Piscataway, NJ (United States) Institute for Advanced Study, Princeton, NJ (United States))

    1991-08-01

    The dynamical stability of nonrotating prolate galaxy models constructed from thin long-axis tube orbits ('shell' orbits) is investigated. Models more elongated than about E6 (axis ratio of about 2:5) are unstable to bending modes that rapidly increase the velocity dispersion perpendicular to the long axis and decrease the model's elongation. Approximate representations of the spatial forms of the fastest growing modes and their growth rates are obtained. Most of the evolution is due to two modes: a symmetric (banana-shaped) bending and an antisymmetric (S-shaped) bending. The instability is similar to the 'firehose' instability of a thin self-gravitating slab, except that it persists in models with velocity anisotropies that are much less extreme than the critical value for instability of the slab. A simple model is given that reproduces the basic features of the instability in the prolate geometry. These results provide support for the hypothesis of Fridman and Polyachenko (1984) that the absence of elliptical galaxies flatter than about E6 is due to dynamical instability. 37 refs.

  16. Application of dimensional analysis to the study of shells subject to external pressure and to the use of models

    International Nuclear Information System (INIS)

    Lefrancois, A.

    1976-01-01

    The method of dimensional analysis is applied to the evaluation of deformation, stress, and ideal buckling strength (which is independent of the values of the elastic range), of shells subject to external pressure. The relations obtained are verified in two examples: a cylindrical ring and a tube with free ends and almost circular cross-section. Further, it is shown how and to what extent the results obtained from model tests can be used to predict the behaviour of geometrically similar shells which are made of the same material, or even of a different material. (Author) [fr

  17. Radiative capture reaction {sup 7}Be(p,{gamma}){sup 8}B in the continuum shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bennaceur, K.; Ploszajczak, M. [Grand Accelerateur National d`Ions Lourds (GANIL), Caen (France); Nowacki, F. [Grand Accelerateur National d`Ions Lourds (GANIL), Caen (France)]|[Lab. de Physique Theorique Strasbourg, Strasbourg (France); Okolowicz, J. [Grand Accelerateur National d`Ions Lourds (GANIL), Caen (France)]|[Inst. of Nuclear Physics, Krakow (Poland)

    1998-06-01

    We present here the first application of realistic shell model (SM) including coupling between many-particle (quasi-)bound states and the continuum of one-particle scattering states to the calculation of the total capture cross section and the astrophysical factor in the reaction {sup 7}Be(p,{gamma}){sup 8}B. (orig.)

  18. Core/shell CdS/ZnS nanoparticles: Molecular modelling and characterization by photocatalytic decomposition of Methylene Blue

    Czech Academy of Sciences Publication Activity Database

    Praus, P.; Svoboda, L.; Tokarský, J.; Hospodková, Alice; Klemm, V.

    2014-01-01

    Vol. 292, Feb (2014), pp. 813-822, ISSN 0169-4332 Institutional support: RVO:68378271 Keywords: core/shell nanoparticles * CdS/ZnS * molecular modelling * electron tunnelling * photocatalysis Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.711, year: 2014

  19. Development of two mix model postprocessors for the investigation of shell mix in indirect drive implosion cores

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Mancini, R. C.; Haynes, D. A.; Haan, S. W.; Koch, J. A.; Izumi, N.; Tommasini, R.; Golovkin, I. E.; MacFarlane, J. J.; Radha, P. B.; Delettrez, J. A.; Regan, S. P.; Smalyuk, V. A.

    2007-01-01

    The presence of shell mix in inertial confinement fusion implosion cores is an important characteristic. Mixing in this experimental regime is primarily due to hydrodynamic instabilities, such as Rayleigh-Taylor and Richtmyer-Meshkov, which can affect implosion dynamics. Two independent theoretical mix models, Youngs' model and the Haan saturation model, were used to estimate the level of Rayleigh-Taylor mixing in a series of indirect drive experiments. The models were used to predict the radial width of the region containing mixed fuel and shell materials. The results for Rayleigh-Taylor mixing provided by Youngs' model are considered to be a lower bound for the mix width, while those generated by Haan's model incorporate more experimental characteristics and consequently have larger mix widths. These results are compared with an independent experimental analysis, which infers a larger mix width based on all instabilities and effects captured in the experimental data

  20. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows answers to be obtained by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (γ-ray beams, 1-10 MeV) and HIMAC and ISIS-800 (high-energy neutron transport, 20-800 MeV) in iron and concrete. The results were then compared with experimental data.

  1. An assessment of the feasibility of using Monte Carlo calculations to model a combined neutron/gamma electronic personal dosemeter

    International Nuclear Information System (INIS)

    Tanner, J.E.; Witts, D.; Tanner, R.J.; Bartlett, D.T.; Burgess, P.H.; Edwards, A.A.; More, B.R.

    1995-01-01

    A Monte Carlo facility has been developed for modelling the response of semiconductor devices to mixed neutron-photon fields. This utilises the code MCNP for neutron and photon transport and a new code, STRUGGLE, which has been developed to model the secondary charged particle transport. It is thus possible to predict the pulse height distribution expected from prototype electronic personal detectors, given the detector efficiency factor. Initial calculations have been performed on a simple passivated implanted planar silicon detector. This device has also been irradiated in neutron, gamma and X ray fields to verify the accuracy of the predictions. Good agreement was found between experiment and calculation. (author)

  2. Assessment of Transport Infrastructure Projects by the use of Monte Carlo Simulation: The CBA-DK Model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2006-01-01

    This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based on both a deterministic calculation following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport and on a stochastic calculation, where risk analysis (RA) is carried out using Monte Carlo Simulation (MCS). After a description of the deterministic and stochastic calculations, emphasis is paid to the RA part of CBA-DK with considerations about which probability distributions to make use of. Furthermore, a comprehensive...

  3. Theoretical uncertainties of the Duflo–Zuker shell-model mass formulae

    International Nuclear Information System (INIS)

    Qi, Chong

    2015-01-01

    It is becoming increasingly important to understand the uncertainties of nuclear mass model calculations and their limitations when extrapolating to the driplines. In this paper we evaluate the parameter uncertainties of the Duflo–Zuker (DZ) shell-model mass formulae by fitting to the latest experimental mass compilation AME2012 using least-squares and minimax fitting procedures. We also analyze the propagation of the uncertainties in binding energy calculations when extrapolated to the driplines. The parameter uncertainties and their propagation are evaluated with the help of the covariance matrix thus derived. Large deviations from the extrapolations of AME2012 are seen in superheavy nuclei. A simplified version of the DZ model (DZ19) with much smaller uncertainties than those of DZ33 is proposed. Calculations are compared with results from other mass formulae. Systematics of the uncertainty propagation as well as the positions of the driplines are also presented. The DZ mass formulae are shown to be well defined with good extrapolation properties and rather small uncertainties, even though some of the parameters of the full DZ33 model cannot be fully determined by fitting to available experimental data. (paper)
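
    For a mass formula that is linear in its parameters, the least-squares covariance matrix and its propagation to an extrapolated nucleus can be sketched as below; the actual DZ terms are not reproduced here, so the design matrix is left as an input supplied by the user.

```python
import numpy as np

def fit_and_propagate(A, m_exp, a_new):
    """Least-squares fit of a mass formula that is linear in its parameters.

    A     : design matrix, one row of DZ-like terms per measured nucleus
    m_exp : experimental binding energies (e.g. from AME2012)
    a_new : row of the same terms for a nucleus outside the fit (extrapolation)
    Returns the predicted binding energy and its 1-sigma uncertainty obtained
    from the parameter covariance matrix.
    """
    coef, res, rank, _ = np.linalg.lstsq(A, m_exp, rcond=None)
    dof = A.shape[0] - rank
    sigma2 = res[0] / dof if res.size else np.var(m_exp - A @ coef, ddof=rank)
    cov = sigma2 * np.linalg.inv(A.T @ A)      # parameter covariance matrix
    pred = a_new @ coef
    pred_sigma = np.sqrt(a_new @ cov @ a_new)  # propagated prediction uncertainty
    return pred, pred_sigma
```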

  4. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
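
    The virtual crack closure technique and the failure-index evaluation mentioned above can be summarized by the sketch below. The Benzeggagh-Kenane relation is used here as a representative mixed-mode criterion for graphite/epoxy; the abstract does not state which criterion the authors applied, so that choice is an assumption.

```python
def vcct_release_rates(Fz, Fx, dw, du, dA):
    """Mode I and II strain energy release rates from the virtual crack closure
    technique: nodal forces at the delamination front times the relative
    displacements behind the front, divided by twice the virtually closed area."""
    GI = Fz * dw / (2.0 * dA)
    GII = Fx * du / (2.0 * dA)
    return GI, GII

def failure_index_bk(GI, GII, GIc, GIIc, eta):
    """Failure index using the Benzeggagh-Kenane mixed-mode criterion (a common
    choice for graphite/epoxy; not necessarily the one used in the paper)."""
    GT = GI + GII
    Gc = GIc + (GIIc - GIc) * (GII / GT) ** eta
    return GT / Gc   # delamination growth is predicted when this reaches 1
```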

  5. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  6. Mathematical Modeling of Dual Layer Shell Type Recuperation System for Biogas Dehumidification

    Science.gov (United States)

    Gendelis, S.; Timuhins, A.; Laizans, A.; Bandeniece, L.

    2015-12-01

    The main aim of the current paper is to create a mathematical model for a dual-layer shell-type recuperation system, which allows the heat losses from the biomass digester and the water content of the biogas to be reduced without any additional mechanical or chemical components. The idea of this system is to reduce the temperature of the outflowing gas by creating a two-layered counter-flow heat exchanger around the walls of the biogas digester, thus increasing the thermal resistance and the gas temperature and resulting in condensation on the colder surface. A complex mathematical model, including surface condensation, is developed for this type of biogas dehumidifier, and a parameter study is carried out for a wide range of parameters. The model is reduced to the 1D case to make numerical calculations faster. It is shown that the latent heat of condensation is very important for the total heat balance and that the condensation rate is highly dependent on the insulation between layers and on the outside temperature. The modelling results allow optimal geometrical parameters to be found for a known gas flow and the condensation rate to be predicted for different system setups and seasons.
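
    For orientation only, a textbook effectiveness-NTU estimate for an ideal counter-flow heat exchanger is sketched below. It omits the surface condensation and latent heat that the full model shows to be important, so it is not a substitute for the authors' 1D model; all symbols are generic heat-exchanger quantities, not values from the paper.

```python
import numpy as np

def counterflow_outlet_temps(T_hot_in, T_cold_in, m_hot_cp, m_cold_cp, UA):
    """Effectiveness-NTU estimate of outlet temperatures for an ideal
    counter-flow heat exchanger (sensible heat only; condensation neglected)."""
    C_min, C_max = sorted((m_hot_cp, m_cold_cp))   # heat-capacity rates [W/K]
    Cr = C_min / C_max
    NTU = UA / C_min
    if np.isclose(Cr, 1.0):
        eff = NTU / (1.0 + NTU)
    else:
        e = np.exp(-NTU * (1.0 - Cr))
        eff = (1.0 - e) / (1.0 - Cr * e)
    q = eff * C_min * (T_hot_in - T_cold_in)       # exchanged heat [W]
    return T_hot_in - q / m_hot_cp, T_cold_in + q / m_cold_cp
```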

  7. Monte Carlo modeling and optimization of contrast-enhanced radiotherapy of brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lopez, C E; Garnica-Garza, H M, E-mail: hgarnica@cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, Via del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL CP 66600 (Mexico)

    2011-07-07

    Contrast-enhanced radiotherapy involves the use of a kilovoltage x-ray beam to impart a tumoricidal dose to a target into which a radiological contrast agent has previously been loaded in order to increase the x-ray absorption efficiency. In this treatment modality the selection of the proper x-ray spectrum is important since at the energy range of interest the penetration ability of the x-ray beam is limited. For the treatment of brain tumors, the situation is further complicated by the presence of the skull, which also absorbs kilovoltage x-rays in a very efficient manner. In this work, using Monte Carlo simulation, a realistic patient model and the Cimmino algorithm, several irradiation techniques and x-ray spectra are evaluated for two possible clinical scenarios with respect to the location of the target, these being a tumor located at the center of the head and at a position close to the surface of the head. It will be shown that x-ray spectra, such as those produced by a conventional x-ray generator, are capable of producing absorbed dose distributions with excellent uniformity in the target as well as a dose differential of at least 20% of the prescribed tumor dose between this and the surrounding brain tissue, when the tumor is located at the center of the head. However, for tumors with a lateral displacement from the center and close to the skull, while the absorbed dose distribution in the target is also quite uniform and the dose to the surrounding brain tissue is within an acceptable range, hot spots in the skull arise which are above what is considered a safe limit. A comparison with previously reported results using mono-energetic x-ray beams such as those produced by a radiation synchrotron is also presented and it is shown that the absorbed dose distributions rendered by this type of beam are very similar to those obtained with a conventional x-ray beam.
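
    The Cimmino algorithm mentioned above is a simultaneous-projection method for linear feasibility problems, here written for dose constraints expressed as linear inequalities A x <= b in non-negative beam weights (upper limits directly, lower limits with the sign flipped). The sketch below uses uniform constraint weights and a fixed relaxation parameter, which is a simplification of the general algorithm.

```python
import numpy as np

def cimmino(A, b, x0, n_iter=500, lam=1.0):
    """Cimmino's simultaneous-projection algorithm for the feasibility problem
    A x <= b with non-negative beam weights x.  Each iterate averages the
    projections of x onto the violated half-spaces (uniform weights)."""
    x = np.asarray(x0, dtype=float).copy()
    m = A.shape[0]
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(n_iter):
        residual = A @ x - b                 # positive entries violate a constraint
        viol = np.maximum(residual, 0.0)
        if not viol.any():
            break                            # a feasible point has been found
        x -= lam * (A.T @ (viol / row_norm2)) / m
        x = np.maximum(x, 0.0)               # beam weights stay non-negative
    return x
```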

  8. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes

    International Nuclear Information System (INIS)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-01-01

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX’s MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. (paper)

  9. Characterization of an Ar/O2 magnetron plasma by a multi-species Monte Carlo model

    International Nuclear Information System (INIS)

    Bultinck, E; Bogaerts, A

    2011-01-01

    A combined Monte Carlo (MC)/analytical surface model is developed to study the plasma processes occurring during the reactive sputter deposition of TiOx thin films. This model describes the important plasma species with a MC approach (i.e. electrons, Ar+ ions, O2+ ions, fast Ar atoms and sputtered Ti atoms). The deposition of the TiOx film is treated by an analytical surface model. The implementation of our so-called multi-species MC model is presented, and some typical calculation results are shown, such as densities, fluxes, energies and collision rates. The advantages and disadvantages of the multi-species MC model are illustrated by a comparison with a particle-in-cell/Monte Carlo collisions (PIC/MCC) model. Disadvantages include the fact that certain input values and assumptions are needed. However, when these are accounted for, the results are in good agreement with the PIC/MCC simulations, and the calculation time has drastically decreased, which enables us to simulate large and complicated reactor geometries. To illustrate this, the effect of larger target-substrate distances on the film properties is investigated. It is shown that a stoichiometric film is deposited at all investigated target-substrate distances (24, 40, 60 and 80 mm). Moreover, a larger target-substrate distance promotes film uniformity, but the deposition rate is much lower.

  10. High-temperature stability of the hydrate shell of a Na+ cation in a flat nanopore with hydrophobic walls

    Science.gov (United States)

    Shevkunov, S. V.

    2017-11-01

    The effect that elevated temperature has on the hydrate shell of a singly charged sodium cation inside a flat nanopore with smooth walls is studied using the Monte Carlo method. The free energy and the entropy of vapor molecule attachment are calculated by means of a bicanonical statistical ensemble using a detailed model of interactions. The nanopore has a stabilizing effect on the hydrate shell with respect to fluctuations and a destabilizing effect with respect to complete evaporation. At the boiling point of water, behavior is observed that is qualitatively similar to behavior at room temperature, but with a substantial shift in the vapor pressure and shell size.

  11. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human well-being in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) schemes based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach in which parameters are estimated from the posterior distribution obtained via Bayes’ theorem. The Metropolis-Hastings algorithm is used to handle the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, which then provides a better prediction of maximum river flow in Sabah.
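
    A minimal random-walk Metropolis-Hastings sampler for the GEV parameters, in the spirit of the approach described above, is sketched below. The priors, step size and parameterization (log of the scale) are illustrative choices, not those of the study; note that scipy's genextreme uses the opposite sign convention for the shape parameter.

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(3)

def log_posterior(theta, data):
    """Log posterior for GEV(mu, sigma, xi) with weakly informative normal priors.
    scipy's genextreme uses c = -xi relative to the usual GEV shape convention."""
    mu, log_sigma, xi = theta
    sigma = np.exp(log_sigma)
    loglik = genextreme.logpdf(data, c=-xi, loc=mu, scale=sigma).sum()
    logprior = (norm.logpdf(mu, 0, 100) + norm.logpdf(log_sigma, 0, 10)
                + norm.logpdf(xi, 0, 1))
    return loglik + logprior

def metropolis_hastings(data, n_samples=5000, step=0.05):
    data = np.asarray(data, dtype=float)
    theta = np.array([np.mean(data), np.log(np.std(data)), 0.1])  # crude start
    lp = log_posterior(theta, data)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(3)   # random-walk proposal
        lp_prop = log_posterior(prop, data)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```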

  12. Development of surrogate models using artificial neural network for building shell energy labelling

    International Nuclear Information System (INIS)

    Melo, A.P.; Cóstola, D.; Lamberts, R.; Hensen, J.L.M.

    2014-01-01

    Surrogate models are an important part of building energy labelling programs, but these models still present low accuracy, particularly in cooling-dominated climates. The objective of this study was to evaluate the feasibility of using an artificial neural network (ANN) to improve the accuracy of surrogate models for labelling purposes. An ANN was applied to model the building stock of a city in Brazil, based on the results of extensive simulations using the high-resolution building energy simulation program EnergyPlus. Sensitivity and uncertainty analyses were carried out to evaluate the behaviour of the ANN model, and the variations in the best and worst performance for several typologies were analysed in relation to variations in the input parameters and building characteristics. The results obtained indicate that an ANN can represent the interaction between input and output data for a vast and diverse building stock. Sensitivity analysis showed that no single input parameter can be identified as the main factor responsible for the building energy performance. The uncertainty associated with several parameters plays a major role in assessing building energy performance, together with the facade area and the shell-to-floor ratio. The results of this study may have a profound impact as ANNs could be applied in the future to define regulations in many countries, with positive effects on optimizing the energy consumption. - Highlights: • We model several typologies which have variation in input parameters. • We evaluate the accuracy of surrogate models for labelling purposes. • ANN is applied to model the building stock. • Uncertainty in building plays a major role in the building energy performance. • Results show that ANN could help to develop building energy labelling systems
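
    A generic sketch of fitting such an ANN surrogate to a table of simulated cases is given below, using scikit-learn for brevity. The input descriptors, network size and the synthetic data are placeholders; in the study the training data come from EnergyPlus simulations of the building stock.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: building descriptors (e.g. facade area, shell-to-floor ratio, glazing
# fraction, ...); y: simulated annual energy use.  Synthetic placeholders only.
rng = np.random.default_rng(4)
X = rng.random((2000, 8))
y = X @ rng.random(8) + 0.1 * rng.standard_normal(2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(X_train, y_train)
print("R^2 on held-out cases:", surrogate.score(X_test, y_test))
```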

  13. Direct Simulation Monte Carlo Application of the Three Dimensional Forced Harmonic Oscillator Model

    Science.gov (United States)

    2017-12-07

    challenges of implementing the full array of VV transitions mentioned earlier. Note here that an account of VV processes is not expected to influence the ... method. It takes into account the microscopic reversibility between the excitation and deexcitation processes, and it satisfies the detailed balance ... probabilities and is suitable for the direct simulation Monte Carlo method.

  14. Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport

    International Nuclear Information System (INIS)

    Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.

    2002-01-01

    After a nuclear accident releasing radioactive material into the environment, the external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations the external gamma exposure from radionuclides deposited on the surfaces of the urban-industrial environments yields the dominant contributions to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculations of the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods, a complex environment can be set up in which the photon transport can be solved in a reliable way. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; good examples are a factory or a supermarket. Industrial environments can be rather different from urban ones in the types, structures and dimensions of the buildings. These variations will affect the radiation field of such an environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments.

  15. Finite element modeling of shell shape in the freshwater turtle Pseudemys concinna reveals a trade-off between mechanical strength and hydrodynamic efficiency.

    Science.gov (United States)

    Rivera, Gabriel; Stayton, C Tristan

    2011-10-01

    Aquatic species can experience different selective pressures on morphology in different flow regimes. Species inhabiting lotic regimes often adapt to these conditions by evolving low-drag (i.e., streamlined) morphologies that reduce the likelihood of dislodgment or displacement. However, hydrodynamic factors are not the only selective pressures influencing organismal morphology and shapes well suited to flow conditions may compromise performance in other roles. We investigated the possibility of morphological trade-offs in the turtle Pseudemys concinna. Individuals living in lotic environments have flatter, more streamlined shells than those living in lentic environments; however, this flatter shape may also make the shells less capable of resisting predator-induced loads. We tested the idea that "lotic" shell shapes are weaker than "lentic" shell shapes, concomitantly examining effects of sex. Geometric morphometric data were used to transform an existing finite element shell model into a series of models corresponding to the shapes of individual turtles. Models were assigned identical material properties and loaded under identical conditions, and the stresses produced by a series of eight loads were extracted to describe the strength of the shells. "Lotic" shell shapes produced significantly higher stresses than "lentic" shell shapes, indicating that the former is weaker than the latter. Females had significantly stronger shell shapes than males, although these differences were less consistent than differences between flow regimes. We conclude that, despite the potential for many-to-one mapping of shell shape onto strength, P. concinna experiences a trade-off in shell shape between hydrodynamic and mechanical performance. This trade-off may be evident in many other turtle species or any other aquatic species that also depend on a shell for defense. However, evolution of body size may provide an avenue of escape from this trade-off in some cases, as changes in

  16. Phase field modeling of brittle fracture for enhanced assumed strain shells at large deformations: formulation and finite element implementation

    Science.gov (United States)

    Reinoso, J.; Paggi, M.; Linder, C.

    2017-06-01

    Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the EAS method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.

  17. Modeling and numerical analysis of a three-dimensional shape memory alloy shell structure

    Science.gov (United States)

    Zhao, Pengtao; Qiu, Jinhao; Ji, Hongli; Wang, Mingyi; Nie, Rui

    2012-04-01

    In this paper, the modeling and numerical analysis of a three-dimensional shell structure made of a shape memory alloy (SMA) are introduced. As a new smart material, SMAs have been applied in many fields due to two significant macroscopic phenomena, the shape memory effect (SME) and pseudoelasticity. The SMA material exhibits the two-way shape memory effect (TWSME) after undergoing special heat treatment and thermo-mechanical training. This work investigates the numerical simulation and application of an SMA component, an SMA strip, which has been pre-curved at room temperature. The component is expected to extend along the curve upon heating and shorten on cooling. Hence the shape memory effect can be used to change the shape of the structure. The return mapping algorithm for the 3-D SMA thermomechanical constitutive equations based on the Boyd-Lagoudas model is used in the finite element analysis to describe the material features of the SMA. In this paper, the ABAQUS finite element program has been utilized with a user material subroutine (UMAT) written in FORTRAN for the modeling of the SMA strip. The SMA component, which has a certain initial transformation strain, can develop considerable deflection during the reverse phase transformation induced by temperature.

  18. Low-Energy Magnetic Radiation Enhancement Within the Nuclear Shell Model

    Science.gov (United States)

    Karampagia, S.; Brown, B. A.; Zelevinsky, V.

    2018-02-01

    The γ-ray strength function, the average reduced probability of absorbing or emitting a γ-ray of a given energy, is an indispensable quantity for calculations of astrophysical interest. Experimental studies of the γ-ray strength function have revealed an enhancement of this quantity in the low-Eγ region, which cannot be described by any of the known resonances or by semiclassical models. To understand the origin of the low-energy enhancement we have calculated the M1 transition probabilities, both in the emission and absorption regions, for the 49,50Cr and 48V nuclei in the f7/2 shell-model basis. We find that the M1 strength distribution peaks at zero transition energy and falls off exponentially, independently of the excitation energy or spin range selected. The form of this exponential is the same across all three nuclei studied within this model space. We also show that the slope of the exponential is proportional to the strength of the T = 1 pairing matrix elements.

  19. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Background: The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood in two widely studied Hydrophobic Polar (HP) lattice models. Results: We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion: We demonstrate that REMC utilizing the pull move
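
    To make the method concrete, here is a minimal sketch of the replica-exchange loop with the standard neighbour-swap acceptance rule; the HP energy function and pull-move proposal are abstracted behind `energy` and `propose_pull_move`, which are hypothetical placeholders rather than the authors' implementation.

```python
# Minimal replica-exchange (parallel-tempering) Monte Carlo sketch for a
# lattice model; `energy` and `propose_pull_move` are assumed callables.
import math
import random

def metropolis_sweep(conf, beta, energy, propose_pull_move, n_steps=100):
    """Standard Metropolis sampling at inverse temperature beta."""
    E = energy(conf)
    for _ in range(n_steps):
        new_conf = propose_pull_move(conf)
        E_new = energy(new_conf)
        if E_new <= E or random.random() < math.exp(-beta * (E_new - E)):
            conf, E = new_conf, E_new
    return conf, E

def replica_exchange(replicas, betas, energy, propose_pull_move, n_rounds=1000):
    """replicas: one conformation per inverse temperature in betas."""
    energies = [energy(c) for c in replicas]
    for _ in range(n_rounds):
        # Local pull-move updates within each replica.
        for i, beta in enumerate(betas):
            replicas[i], energies[i] = metropolis_sweep(
                replicas[i], beta, energy, propose_pull_move)
        # Attempt swaps between neighbouring temperatures with the usual
        # acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]).
        for i in range(len(betas) - 1):
            delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
            if delta >= 0 or random.random() < math.exp(delta):
                replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return replicas, energies
```

    The swaps let low-temperature replicas inherit conformations that escaped local minima at high temperature, which is what makes the method effective on the rugged energy landscapes of the HP model.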

  20. Analysis of Composite Panel-Stiffener Debonding Using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James; Minguet, Pierre J.

    2007-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used successfully, primarily to investigate onset in fracture toughness specimens and laboratory-size coupon-type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires the successful demonstration of the methodology at the structural level. For this purpose, a panel was selected that is reinforced with stiffeners. Shear loading causes the panel to buckle, and the resulting out-of-plane deformations initiate skin/stiffener separation at the location of an embedded defect. A small section of the stiffener foot, web and noodle, as well as the panel skin in the vicinity of the delamination front, was modeled with a local 3D solid model. Across the width of the stiffener foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. Computed failure indices were compared to corresponding results where the entire web was modeled with shell elements and only a small section of the stiffener foot and panel was modeled locally with solid elements. Including the stiffener web in the local 3D solid model increased the computed failure index. Further including the noodle and transition radius in the local 3D solid model changed the local distribution across the width. The magnitude of the failure index decreased with increasing transition radius and noodle area. For the transition radii modeled, the material properties used for the noodle area had a negligible effect on the results. The results of this study are intended to serve as a guide for conducting finite element and fracture mechanics analyses of delamination and debonding in complex structures such as integrally stiffened panels.
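
    As an illustration of how a failure index of this kind is typically evaluated: the record does not name the specific mixed-mode criterion, so the Benzeggagh-Kenane (B-K) form and all toughness values below are assumptions chosen only to make the sketch runnable; the mode-I and mode-II energy release rates would come from a VCCT evaluation like the one described above.

```python
# Hedged sketch of a mixed-mode failure index: G_I and G_II are assumed to come
# from a VCCT evaluation, and the critical energy release rate is interpolated
# with the Benzeggagh-Kenane (B-K) relation. Property values are placeholders.

def bk_toughness(G_I, G_II, G_Ic=0.24, G_IIc=0.74, eta=1.8):
    """Mixed-mode critical energy release rate G_c via the B-K relation."""
    G_T = G_I + G_II
    if G_T == 0.0:
        return G_Ic
    B = G_II / G_T                              # mode mixity ratio
    return G_Ic + (G_IIc - G_Ic) * B**eta

def failure_index(G_I, G_II):
    """Failure index = G_T / G_c; growth is predicted when the index >= 1."""
    G_T = G_I + G_II
    return G_T / bk_toughness(G_I, G_II)

# Example with illustrative values in kJ/m^2.
print(failure_index(G_I=0.10, G_II=0.20))
```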