WorldWideScience

Sample records for resolution n-body simulations

  1. High Resolution N-Body Simulations of Terrestrial Planet Growth

    Science.gov (United States)

    Clark Wallace, Spencer; Quinn, Thomas R.

    2018-04-01

    We investigate planetesimal accretion with a direct N-body simulation of an annulus at 1 AU around a 1 M_sun star. The planetesimal ring, which initially contains N = 10^6 bodies, is evolved through the runaway growth stage into the phase of oligarchic growth. We find that the mass distribution of planetesimals develops a bump around 10^22 g shortly after the oligarchs form. This feature is absent in previous lower resolution studies. We find that this bump marks a boundary between growth modes. Below the bump mass, planetesimals are packed tightly enough together to populate first-order mean motion resonances with the oligarchs. These resonances act to heat the tightly packed, low-mass planetesimals, inhibiting their growth. We examine the eccentricity evolution of a dynamically hot planetary embryo embedded in an annulus of planetesimals and find that dynamical friction acts more strongly on the embryo when the planetesimals are finely resolved. This effect disappears when the annulus is made narrow enough to exclude most of the mean motion resonances. Additionally, we find that the 10^22 g bump is significantly less prominent when we follow planetesimal growth in a narrow annulus. This feature, which is reminiscent of the power-law break seen in the size distribution of asteroid belt objects, may be an important clue for constraining the initial size of planetesimals in planet formation models.

  2. GLOBAL HIGH-RESOLUTION N-BODY SIMULATION OF PLANET FORMATION. I. PLANETESIMAL-DRIVEN MIGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Kominami, J. D. [Earth-Life Science Institute, Tokyo Institute of Technology, Meguro-Ku, Tokyo (Japan); Daisaka, H. [Hitotsubashi University, Kunitachi-shi, Tokyo (Japan); Makino, J. [RIKEN Advanced Institute for Computational Science, Chuo-ku, Kobe, Hyogo (Japan); Fujimoto, M., E-mail: kominami@mail.jmlab.jp, E-mail: daisaka@phys.science.hit-u.ac.jp, E-mail: makino@mail.jmlab.jp, E-mail: fujimoto.masaki@jaxa.jp [Japan Aerospace Exploration Agency, Sagamihara-shi, Kanagawa (Japan)

    2016-03-01

    We investigated whether outward planetesimal-driven migration (PDM) takes place in simulations that include the self-gravity of planetesimals. We performed N-body simulations of planetesimal disks with a large width (0.7–4 au) spanning the ice line. The simulations consisted of two stages. The first-stage simulations followed the runaway growth phase starting from planetesimals of initially equal mass. Runaway growth took place both at the inner edge of the disk and in the region just outside the ice line. This result was used to set up the second-stage simulations, in which the runaway bodies just outside the ice line were replaced by protoplanets of roughly the isolation mass. In the second-stage simulations, the outward migration of the protoplanet was followed by a stalling of the migration due to the increase of the random velocities of the planetesimals. Owing to this increase in random velocities, one of the PDM criteria derived by Minton and Levison was violated. The current simulations do not include the effect of the gas disk. It is likely that the gas disk plays an important role in PDM, and we plan to study its effect in future papers.

  3. Cosmological N-body simulations including radiation perturbations

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Rampf, Cornelius; Tram, Thomas

    2017-01-01

    Cosmological N-body simulations are the standard tools to study the emergence of the observed large-scale structure of the Universe. Such simulations usually solve for the gravitational dynamics of matter within the Newtonian approximation, thus discarding general relativistic effects such as the ...

  4. Particle Number Dependence of the N-body Simulations of Moon Formation

    Science.gov (United States)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has previously been investigated using N-body simulations with the number of particles N limited to 10^4–10^5. We develop an N-body simulation code on multiple PEZY-SC processors and employ the Framework for Developing Particle Simulators to handle a large number of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, at which one particle corresponds to a satellitesimal about 10 km in size. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). As a consequence, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  5. Post-Newtonian N-body simulations

    Science.gov (United States)

    Aarseth, Sverre J.

    2007-06-01

    We report on the first fully consistent conventional cluster simulation which includes terms up to the third-order post-Newtonian approximation. Numerical problems in treating extremely energetic binaries orbiting a single massive object are circumvented by employing the special `wheel-spoke' regularization method of Zare, which has not been used in large-N simulations before. Idealized models containing N = 1 × 10^5 particles of mass 1 M_⊙ with a central black hole (BH) of 300 M_⊙ have been studied on GRAPE-type computers. An initial half-mass radius of r_h ≈ 0.1 pc is sufficiently small to yield examples of relativistic coalescence. This is achieved by significant binary shrinkage within a density cusp environment, followed by the generation of extremely high eccentricities induced by Kozai cycles and/or resonant relaxation. More realistic models with white dwarfs and 10 times larger half-mass radii also show evidence of general relativity effects before disruption. Experimentation with the post-Newtonian terms suggests that progressively reducing the time-scales for activating the different orders may be justified for obtaining qualitatively correct solutions without aiming for precise predictions of the final gravitational radiation waveform. The results obtained suggest that the standard loss-cone arguments underestimate the swallowing rate in globular clusters containing a central BH.

  6. Numerical techniques for large cosmological N-body simulations

    International Nuclear Information System (INIS)

    Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

    1985-01-01

    We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle-mesh (PM) codes and P^3M codes in which a higher resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r^2 particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P^3M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important.
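
    As a rough illustration of the particle-particle part of a P^3M scheme, the sketch below evaluates softened 1/r^2 pairwise accelerations by direct summation. This is a minimal NumPy example under assumptions of our own (Plummer softening, natural units with G = 1), not the code described in the record.

    ```python
    import numpy as np

    def direct_accelerations(pos, mass, eps=0.01, G=1.0):
        """Softened pairwise accelerations by direct summation, O(N^2).

        pos  : (N, 3) array of particle positions
        mass : (N,) array of particle masses
        eps  : Plummer softening length (illustrative value)
        """
        dx = pos[None, :, :] - pos[:, None, :]       # displacements r_j - r_i
        r2 = np.sum(dx * dx, axis=-1) + eps**2       # softened squared distance
        np.fill_diagonal(r2, np.inf)                 # exclude self-interaction
        inv_r3 = r2 ** -1.5
        # a_i = G * sum_j m_j (r_j - r_i) / (|r_ij|^2 + eps^2)^{3/2}
        return G * np.einsum('ij,ijk->ik', mass[None, :] * inv_r3, dx)
    ```

    A PM code replaces this O(N^2) sum with an FFT-based mesh force; a P^3M code keeps a direct sum like this only for near neighbours, restoring the small-scale resolution the mesh alone cannot provide.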

  7. Cosmological N-body simulations with generic hot dark matter

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Hannestad, Steen

    2017-01-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses...

  8. FORMING CIRCUMBINARY PLANETS: N-BODY SIMULATIONS OF KEPLER-34

    International Nuclear Information System (INIS)

    Lines, S.; Leinhardt, Z. M.; Paardekooper, S.; Baruteau, C.; Thebault, P.

    2014-01-01

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary should be very strong on the primordial protoplanetary disk. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors to high energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution, inter-particle gravity enabled N-body simulations to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on the conclusions that we have made for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation location with the possible exception of Kepler-47(AB)c.

  9. FORMING CIRCUMBINARY PLANETS: N-BODY SIMULATIONS OF KEPLER-34

    Energy Technology Data Exchange (ETDEWEB)

    Lines, S.; Leinhardt, Z. M. [School of Physics, University of Bristol, H. H. Wills Physics Laboratory, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Paardekooper, S.; Baruteau, C. [DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Thebault, P., E-mail: stefan.lines@bristol.ac.uk [LESIA-Observatoire de Paris, UPMC Univ. Paris 06, Univ. Paris-Diderot, F-92195 Meudon Cedex (France)

    2014-02-10

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary should be very strong on the primordial protoplanetary disk. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors to high energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution, inter-particle gravity enabled N-body simulations to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on the conclusions that we have made for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation location with the possible exception of Kepler-47(AB)c.

  10. Forming Circumbinary Planets: N-body Simulations of Kepler-34

    Science.gov (United States)

    Lines, S.; Leinhardt, Z. M.; Paardekooper, S.; Baruteau, C.; Thebault, P.

    2014-02-01

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary should be very strong on the primordial protoplanetary disk. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors to high energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution, inter-particle gravity enabled N-body simulations to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on the conclusions that we have made for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation location with the possible exception of Kepler-47(AB)c.

  11. Relativistic initial conditions for N-body simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fidler, Christian [Catholic University of Louvain—Center for Cosmology, Particle Physics and Phenomenology (CP3) 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve (Belgium); Tram, Thomas; Crittenden, Robert; Koyama, Kazuya; Wands, David [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Rampf, Cornelius, E-mail: christian.fidler@uclouvain.be, E-mail: thomas.tram@port.ac.uk, E-mail: rampf@thphys.uni-heidelberg.de, E-mail: robert.crittenden@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: david.wands@port.ac.uk [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D–69120 Heidelberg (Germany)

    2017-06-01

    Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
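
    A minimal sketch of the back-scaling step described above, assuming a scale-independent linear growth factor. The callables `P0` and `D` are hypothetical placeholders for output of a Boltzmann code, not part of any specific package; the subtlety the paper addresses is precisely that early radiation makes the true growth scale-dependent.

    ```python
    def backscale_power_spectrum(P0, D):
        """Back-scale a z = 0 linear power spectrum to a starting redshift.

        P0 : callable, present-day linear power spectrum P(k, z=0)
        D  : callable, linear growth factor D(z), normalised so D(0) = 1
             (assumed scale-independent in this toy version)
        """
        def P_ini(k, z_ini):
            # Scale the amplitude back by the squared growth-factor ratio.
            return P0(k) * (D(z_ini) / D(0.0)) ** 2
        return P_ini
    ```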

  12. ZENO: N-body and SPH Simulation Codes

    Science.gov (United States)

    Barnes, Joshua E.

    2011-02-01

    The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. ZENO programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: structured data file utilities that facilitate basic operations on binary data, including import/export of ZENO data to other systems; snapshot generation routines that create particle distributions with various properties (systems with user-specified density profiles can be realized in collisionless or gaseous form, and multiple spherical and disk components may be set up in mutual equilibrium); snapshot manipulation routines that permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle; simulation codes, including both pure N-body and combined N-body/SPH programs (pure N-body codes are available in both uniprocessor and parallel versions, while SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models); snapshot analysis programs that calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions; and visualization programs that generate interactive displays and produce still images and videos of particle distributions, where the user may specify arbitrary color schemes and viewing transformations.

  13. Relativistic N-body simulations with massive neutrinos

    Science.gov (United States)

    Adamek, Julian; Durrer, Ruth; Kunz, Martin

    2017-11-01

    Some of the dark matter in the Universe is made up of massive neutrinos. Their impact on the formation of large scale structure can be used to determine their absolute mass scale from cosmology, but to this end accurate numerical simulations have to be developed. Due to their relativistic nature, neutrinos pose additional challenges when one tries to include them in N-body simulations that are traditionally based on Newtonian physics. Here we present the first numerical study of massive neutrinos that uses a fully relativistic approach. Our N-body code, gevolution, is based on a weak-field formulation of general relativity that naturally provides a self-consistent framework for relativistic particle species. This allows us to model neutrinos from first principles, without invoking any ad-hoc recipes. Our simulation suite comprises some of the largest neutrino simulations performed to date. We study the effect of massive neutrinos on the nonlinear power spectra and the halo mass function, focusing on the interesting mass range between 0.06 eV and 0.3 eV and including a case for an inverted mass hierarchy.

  14. Cosmological N-body simulations with generic hot dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK–8000 Aarhus C (Denmark)

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  15. N-Body simulations of tidal encounters between stellar systems

    International Nuclear Information System (INIS)

    Rao, P.D.; Ramamani, N.; Alladin, S.M.

    1985-10-01

    N-Body simulations have been performed to study the tidal effects of a primary stellar system on a secondary stellar system of density close to the Roche density. Two hyperbolic, one parabolic and one elliptic encounters have been simulated. The changes in energy, angular momentum, mass distribution, and shape of the secondary system have been determined in each case. The inner region containing about 40% of the mass was found to be practically unchanged and the mass exterior to the tidal radius was found to escape. The intermediate region showed tidal distension. The thickness of this region decreased as we went from hyperbolic encounters to the elliptic encounter keeping the distance of closest approach constant. The numerical results for the fractional change in energy have been compared with the predictions of the available analytic formulae and the usefulness and limitations of the formulae have been discussed. (author)

  16. N-body simulations for coupled scalar-field cosmology

    International Nuclear Information System (INIS)

    Li Baojiu; Barrow, John D.

    2011-01-01

    We describe in detail the general methodology and numerical implementation of consistent N-body simulations for coupled-scalar-field models, including background cosmology and the generation of initial conditions (with the different couplings to different matter species taken into account). We perform fully consistent simulations for a class of coupled-scalar-field models with an inverse power-law potential and negative coupling constant, for which the chameleon mechanism does not work. We find that in such cosmological models the scalar-field potential plays a negligible role except in the background expansion, and the fifth force that is produced is proportional to gravity in magnitude, justifying the use of a rescaled gravitational constant G in some earlier N-body simulation works for similar models. We then study the effects of the scalar coupling on the nonlinear matter power spectra and compare with linear perturbation calculations to see the agreement and places where the nonlinear treatment deviates from the linear approximation. We also propose an algorithm to identify gravitationally virialized matter halos, trying to take account of the fact that the virialization itself is also modified by the scalar-field coupling. We use the algorithm to measure the mass function and study the properties of dark-matter halos. We find that the net effect of the scalar coupling helps produce more heavy halos in our simulation boxes and suppresses the inner (but not the outer) density profile of halos compared with the ΛCDM prediction, while the suppression weakens as the coupling between the scalar field and dark-matter particles increases in strength.

  17. Evaluation of clustering statistics with N-body simulations

    International Nuclear Information System (INIS)

    Quinn, T.R.

    1986-01-01

    Two series of N-body simulations are used to determine the effectiveness of various clustering statistics in revealing initial conditions from evolved models. All the simulations contained 16384 particles and were integrated with the PPPM code. One series is a family of models with power at only one wavelength. The family contains five models with the wavelength of the power separated by factors of √2. The second series is a family of all equal-power combinations of two wavelengths taken from the first series. The clustering statistics examined are the two-point correlation function, the multiplicity function, the nearest neighbor distribution, the void probability distribution, the distribution of counts in cells, and the peculiar velocity distribution. It is found that the covariance function, the nearest neighbor distribution, and the void probability distribution are relatively insensitive to the initial conditions. The distribution of counts in cells shows a little more sensitivity, but the multiplicity function is the best of the statistics considered for revealing the initial conditions.
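
    For concreteness, the sketch below estimates the two-point correlation function of a small particle set with the natural estimator DD/RR − 1 against a uniform random catalogue. It is an illustrative brute-force version that ignores periodic boundaries; it is not the analysis pipeline used in the study, and function and variable names are our own.

    ```python
    import numpy as np

    def xi_natural(data, box, r_bins, n_random=None, rng=None):
        """Two-point correlation function via the natural estimator DD/RR - 1.

        data   : (N, 3) particle positions (small N only; O(N^2) pair counting)
        box    : box side length, used to draw the random comparison catalogue
        r_bins : array of radial bin edges
        """
        rng = np.random.default_rng() if rng is None else rng
        n_random = len(data) if n_random is None else n_random
        rand = rng.uniform(0.0, box, size=(n_random, 3))

        def pair_counts(p):
            d = np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))
            d = d[np.triu_indices(len(p), k=1)]      # unique pairs only
            return np.histogram(d, bins=r_bins)[0].astype(float)

        dd = pair_counts(data) / (len(data) * (len(data) - 1) / 2)
        rr = pair_counts(rand) / (n_random * (n_random - 1) / 2)
        # Bins with no random pairs yield inf/NaN; use more randoms there.
        return dd / rr - 1.0
    ```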

  18. Effects of the initial conditions on cosmological $N$-body simulations

    OpenAIRE

    L'Huillier, Benjamin; Park, Changbom; Kim, Juhan

    2014-01-01

    Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the infl...

  19. HNBody: A Simulation Package for Hierarchical N-Body Systems

    Science.gov (United States)

    Rauch, Kevin P.

    2018-04-01

    HNBody (http://www.hnbody.org/) is an extensible software package for integrating the dynamics of N-body systems. Although general purpose, it incorporates several features and algorithms particularly well-suited to systems containing a hierarchy (wide dynamic range) of masses. HNBody version 1 focused heavily on symplectic integration of nearly-Keplerian systems. Here I describe the capabilities of the redesigned and expanded package version 2, which includes: symplectic integrators up to eighth order (both leapfrog and Wisdom-Holman type methods), with symplectic corrector and close encounter support; variable-order, variable-timestep Bulirsch-Stoer and Störmer integrators; post-Newtonian and multipole physics options; advanced round-off control for improved long-term stability; multi-threading and SIMD vectorization enhancements; seamless availability of extended precision arithmetic for all calculations; and extremely flexible configuration and output. Tests of the physical correctness of the algorithms are presented using JPL Horizons ephemerides (https://ssd.jpl.nasa.gov/?horizons) and previously published results for reference. The features and performance of HNBody are also compared to several other freely available N-body codes, including MERCURY (Chambers), SWIFT (Levison & Duncan) and WHFAST (Rein & Tamayo).
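
    The simplest member of the symplectic family that such packages build on is the second-order kick-drift-kick leapfrog. The sketch below shows one step in Python purely as an illustration of the scheme; it does not reflect HNBody's actual interface.

    ```python
    def leapfrog_kdk(pos, vel, acc_func, dt):
        """One kick-drift-kick leapfrog step (second-order symplectic).

        acc_func : callable returning accelerations for given positions
        """
        vel = vel + 0.5 * dt * acc_func(pos)   # half kick
        pos = pos + dt * vel                   # full drift
        vel = vel + 0.5 * dt * acc_func(pos)   # half kick
        return pos, vel
    ```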

  20. The Abacus Cosmos: A Suite of Cosmological N-body Simulations

    Science.gov (United States)

    Garrison, Lehman H.; Eisenstein, Daniel J.; Ferrer, Douglas; Tinker, Jeremy L.; Pinto, Philip A.; Weinberg, David H.

    2018-06-01

    We present a public data release of halo catalogs from a suite of 125 cosmological N-body simulations from the ABACUS project. The simulations span 40 wCDM cosmologies centered on the Planck 2015 cosmology at two mass resolutions, 4 × 10^10 h^-1 M_⊙ and 1 × 10^10 h^-1 M_⊙, in 1.1 h^-1 Gpc and 720 h^-1 Mpc boxes, respectively. The boxes are phase-matched to suppress sample variance and isolate cosmology dependence. Additional volume is available via 16 boxes of fixed cosmology and varied phase; a few boxes of single-parameter excursions from Planck 2015 are also provided. Catalogs spanning z = 1.5 to 0.1 are available for friends-of-friends and ROCKSTAR halo finders and include particle subsamples. All data products are available at https://lgarrison.github.io/AbacusCosmos.
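
    As a reminder of what a friends-of-friends catalogue contains, the toy sketch below groups particles whose mutual separation is below a linking length, using SciPy's k-d tree and a small union-find. Production halo finders such as those used for these catalogs handle periodic boundaries and scale very differently; this is only an illustration of the definition.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def friends_of_friends(pos, linking_length):
        """Assign friends-of-friends group labels to particles.

        Any two particles closer than the linking length end up in the
        same group. Returns an integer label per particle.
        """
        tree = cKDTree(pos)
        parent = np.arange(len(pos))

        def find(i):                       # union-find with path halving
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i, j in tree.query_pairs(linking_length):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj            # merge the two groups
        return np.array([find(i) for i in range(len(pos))])
    ```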

  1. Effects of the Size of Cosmological N-body Simulations on Physical ...

    Indian Academy of Sciences (India)

    Apart from N-body simulations, an analytical prescription given by Press & ...

  2. The effect of early radiation in N-body simulations of cosmic structure formation

    DEFF Research Database (Denmark)

    Adamek, Julian; Brandbyge, Jacob; Fidler, Christian

    2017-01-01

    Newtonian N-body simulations have been employed successfully over the past decades for the simulation of the cosmological large-scale structure. Such simulations usually ignore radiation perturbations (photons and massless neutrinos) and the impact of general relativity (GR) beyond the background...

  3. N-body simulations of terrestrial planet formation under the influence of a hot Jupiter

    International Nuclear Information System (INIS)

    Ogihara, Masahiro; Kobayashi, Hiroshi; Inutsuka, Shu-ichiro

    2014-01-01

    We investigate the formation of multiple-planet systems in the presence of a hot Jupiter (HJ) using extended N-body simulations that are performed simultaneously with semianalytic calculations. Our primary aims are to describe the planet formation process starting from planetesimals using high-resolution simulations, and to examine how the architecture of planetary systems depends on input parameters (e.g., disk mass, disk viscosity). We observe that protoplanets that arise from oligarchic growth and undergo type I migration stop migrating when they join a chain of resonant planets outside the orbit of the HJ. The formation of a resonant chain is almost independent of our model parameters, and is thus a robust process. At the end of our simulations, several terrestrial planets remain at around 0.1 AU. The formed planets are not of equal mass; the largest planet constitutes more than 50% of the total mass in the close-in region, which is also only weakly dependent on parameters. In previous work, we found a new physical mechanism of induced migration of the HJ, called crowding-out. If the HJ opens up a wide gap in the disk (e.g., owing to low disk viscosity), crowding-out becomes less efficient and the HJ remains. We also discuss angular momentum transfer between the planets and the disk.

  4. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  5. A New Signal Model for Axion Cavity Searches from N-body Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lentz, Erik W.; Rosenberg, Leslie J. [Physics Department, University of Washington, Seattle, WA 98195-1580 (United States); Quinn, Thomas R.; Tremmel, Michael J., E-mail: lentze@phys.washington.edu, E-mail: ljrosenberg@phys.washington.edu, E-mail: trq@astro.washington.edu, E-mail: mjt29@astro.washington.edu [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2017-08-20

    Signal estimates for direct axion dark matter (DM) searches have used the isothermal sphere halo model for the last several decades. While insightful, the isothermal model captures neither the effects of a halo's infall history nor the influence of baryonic matter, which has been shown to significantly influence a halo's inner structure. The high resolution of cavity axion detectors can make use of modern cosmological structure-formation simulations, which begin from realistic initial conditions, incorporate a wide range of baryonic physics, and are capable of resolving detailed structure. This work uses a state-of-the-art cosmological N-body + Smoothed-Particle Hydrodynamics simulation to develop an improved signal model for axion cavity searches. Signal shapes from a class of galaxies encompassing the Milky Way are found to depart significantly from the isothermal sphere. A new signal model for axion detectors is proposed, and projected sensitivity bounds on the Axion DM eXperiment (ADMX) data are presented.

  6. The effect of thermal velocities on structure formation in N-body simulations of warm dark matter

    Science.gov (United States)

    Leo, Matteo; Baugh, Carlton M.; Li, Baojiu; Pascoli, Silvia

    2017-11-01

    We investigate the impact of thermal velocities in N-body simulations of structure formation in warm dark matter models. Adopting the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations, in the latter case with and without thermal velocities. This prescription for adding thermal velocities introduces numerical noise into the initial conditions, which influences structure formation. At early times, the noise dramatically affects the power spectra measured from simulations with thermal velocities, with deviations of order O(10) (in the matter power spectra) and of order O(10^2) (in the velocity power spectra) compared to those extracted from simulations without thermal velocities. At late times, these effects are less pronounced, with deviations of less than a few percent. Increasing the resolution of the N-body simulation shifts these discrepancies to higher wavenumbers. We also find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is ~3 times larger than in simulations without thermal velocities.
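
    A minimal sketch of the prescription described above: drawing thermal speeds from a Fermi-Dirac distribution by rejection sampling. The characteristic speed `v_scale` is a stand-in for the model-dependent WDM thermal velocity scale; assigning isotropic directions and adding the result to the gravitationally-induced velocities is left to the caller.

    ```python
    import numpy as np

    def sample_fermi_dirac_speeds(n, v_scale, rng=None):
        """Draw n thermal speeds from f(v) ∝ v^2 / (exp(v / v_scale) + 1)
        by rejection sampling against a uniform envelope."""
        rng = np.random.default_rng() if rng is None else rng
        x_max = 20.0                                      # tail cutoff in units of v_scale
        grid = np.linspace(1e-6, x_max, 4096)
        f_max = (grid**2 / (np.exp(grid) + 1.0)).max()    # envelope height

        speeds = np.empty(n)
        filled = 0
        while filled < n:
            x_try = rng.uniform(0.0, x_max, size=n - filled)
            y_try = rng.uniform(0.0, f_max, size=n - filled)
            keep = y_try < x_try**2 / (np.exp(x_try) + 1.0)
            k = int(keep.sum())
            speeds[filled:filled + k] = x_try[keep]
            filled += k
        return v_scale * speeds
    ```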

  7. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
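
    A simplified sketch of the core operation: the Fourier modes of a gridded density field below a cutoff wavenumber are replaced by a fresh Gaussian realization drawn from a user-supplied linear power spectrum. The normalization assumes NumPy's unnormalised FFT convention (noted in the comments), and the published algorithm additionally models the nonlinear coupling between the new large-scale modes and the small scales, which this toy version omits.

    ```python
    import numpy as np

    def resample_large_scale_modes(delta, box, power, k_cut, rng=None):
        """Replace Fourier modes with |k| < k_cut in a gridded density field.

        delta : (n, n, n) real density contrast field
        box   : box side length (same length units as 1/k)
        power : callable P(k) accepting NumPy arrays
        k_cut : wavenumber below which modes are resampled
        """
        rng = np.random.default_rng() if rng is None else rng
        n = delta.shape[0]
        kf = 2.0 * np.pi / box                               # fundamental mode
        kx = np.fft.fftfreq(n, d=1.0 / n) * kf
        kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
        k = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                    + kz[None, None, :]**2)

        delta_k = np.fft.rfftn(delta)
        # The FFT of real white noise already has Hermitian symmetry, so the
        # spliced field stays real after the inverse transform.
        noise_k = np.fft.rfftn(rng.standard_normal(delta.shape))

        # Target <|delta_k|^2> = P(k) N^2 / V for numpy's unnormalised FFT,
        # with N = n^3 cells in volume V = box^3; the white noise supplies N.
        amp = np.zeros_like(k)
        mask = k > 0
        amp[mask] = np.sqrt(power(k[mask]) * n**3 / box**3)

        low = k < k_cut
        delta_k[low] = (noise_k * amp)[low]
        return np.fft.irfftn(delta_k, s=delta.shape)
    ```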

  8. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  9. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  10. The Matter Bispectrum in N-body Simulations with non-Gaussian Initial Conditions

    OpenAIRE

    Sefusatti, Emiliano; Crocce, Martin; Desjacques, Vincent

    2010-01-01

    We present measurements of the dark matter bispectrum in N-body simulations with non-Gaussian initial conditions of the local kind for a large variety of triangular configurations and compare them with predictions from Eulerian perturbation theory up to one-loop corrections. We find that the effects of primordial non-Gaussianity at large scales, when compared to perturbation theory, are well described by the initial component of the matter bispectrum, linearly extrapolated at the redshift of ...

  11. Dark matter direct detection signals inferred from a cosmological N-body simulation with baryons

    International Nuclear Information System (INIS)

    Ling, F.-S.; Nezri, E.; Athanassoula, E.; Teyssier, R.

    2010-01-01

    We extract at redshift z = 0 a Milky Way sized object including gas, stars and dark matter (DM) from a recent, high-resolution cosmological N-body simulation with baryons. Its resolution is sufficient to witness the formation of a rotating disk and bulge at the center of the halo potential, therefore providing a realistic description of the birth and the evolution of galactic structures in the ΛCDM cosmology paradigm. The phase-space structure of the central galaxy reveals that, throughout a thick region, the dark halo is co-rotating on average with the stellar disk. At the Earth's location, the rotating component, sometimes called the dark disk in the literature, is characterized by a minimum lag velocity v_lag ≃ 75 km/s, in which case it contributes around 25% of the total local DM density, whose value is ρ_DM ≃ 0.37 GeV/cm^3. The velocity distributions also show strong deviations from pure Gaussian and Maxwellian distributions, with a sharper drop of the high velocity tail. We give a detailed study of the impact of these features on the predictions for DM signals in direct detection experiments. In particular, the question of whether the modulation signal observed by DAMA is or is not excluded by limits set by other experiments (CDMS, XENON and CRESST...) is re-analyzed and compared to the case of a standard Maxwellian halo. We consider spin-independent interactions for both the elastic and the inelastic scattering scenarios. For the first time, we calculate the allowed regions for DAMA and the exclusion limits of other null experiments directly from the velocity distributions found in the simulation. We then compare these results with the predictions of various analytical distributions. We find that the compatibility between DAMA and the other experiments is improved. In the elastic scenario, the DAMA modulation signal is slightly enhanced in the so-called channeling region, as a result of several effects that include a departure from a Maxwellian ...

  12. Studying Tidal Effects In Planetary Systems With Posidonius. A N-Body Simulator Written In Rust.

    Science.gov (United States)

    Blanco-Cuaresma, Sergi; Bolmont, Emeline

    2017-10-01

    Planetary systems with several planets in compact orbital configurations, such as TRAPPIST-1, are surely affected by tidal effects. Studying these effects provides important insight into the evolution of such systems. We developed a second-generation N-body code based on the tidal model used in Mercury-T, re-implementing and improving its functionalities using Rust as the programming language (including a Python interface for easy use) and the WHFAST integrator. The new open source code ensures memory safety and reproducibility of numerical N-body experiments, improves the spin integration compared to Mercury-T, and allows a new prescription for the dissipation of tidal inertial waves in the convective envelope of stars to be taken into account. Posidonius is also suitable for binary system simulations with evolving stars.

  13. On the evolution of galaxy clustering and cosmological N-body simulations

    International Nuclear Information System (INIS)

    Fall, S.M.

    1978-01-01

    Some aspects of the problem of simulating the evolution of galaxy clustering by N-body computer experiments are discussed. The results of four 1000-body experiments are presented and interpreted on the basis of simple scaling arguments for the gravitational condensation of bound aggregates. They indicate that the internal dynamics of condensed aggregates are negligible in determining the form of the pair-correlation function xi. On small scales the form of xi is determined by discreteness effects in the initial N-body distribution and is not sensitive to this distribution. The experiments discussed here test the simple scaling arguments effectively for only one value of the cosmological density parameter (Ω = 1) and one form of the initial fluctuation spectrum (n = 0). (author)

  14. N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

    Science.gov (United States)

    Londrillo, Pasquale; Nipoti, Carlo

    2011-02-01

    N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.
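
    For context, the nonlinear field equation referred to above is the standard Bekenstein-Milgrom formulation of MOND, quoted here from the general literature rather than from the N-MODY paper itself:

    ```latex
    \nabla \cdot \left[ \mu\!\left(\frac{|\nabla\Phi|}{a_0}\right) \nabla\Phi \right] = 4\pi G\rho ,
    \qquad
    \mu(x) \approx
    \begin{cases}
      1, & x \gg 1 \ \text{(Newtonian regime)} \\
      x, & x \ll 1 \ \text{(deep-MOND regime)}
    \end{cases}
    ```

    Here Φ is the gravitational potential, ρ the matter density, and a_0 the MOND acceleration scale; it is this elliptic, nonlinear equation that the spherical-coordinate potential solver is built to handle.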

  15. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to

  16. An Accelerating Solution for N-Body MOND Simulation with FPGA-SoC

    Directory of Open Access Journals (Sweden)

    Bo Peng

    2016-01-01

    As a modified-gravity proposal to handle the dark matter problem on galactic scales, Modified Newtonian Dynamics (MOND) has shown great success. However, N-body MOND simulation is quite challenged by its computational complexity, which calls for acceleration of the simulation calculations. In this paper, we present a highly integrated accelerating solution for N-body MOND simulations. By using an FPGA-SoC, which integrates both FPGA and SoC (system on chip) in one chip, our solution exhibits potential for better performance, higher integration, and lower power consumption. To handle the calculation bottleneck of potential summation, on one hand, we develop a strategy to simplify the pipeline, in which the square calculation task is conducted by the DSP48E1 of Xilinx 7-series FPGAs, so as to reduce the logic resource utilization of each pipeline; on the other hand, advantages of the particle-mesh scheme are taken to overcome the bottleneck on bandwidth. Our experimental results show that 2 more pipelines can be integrated in the Zynq-7020 FPGA-SoC with the simplified pipeline, and the bandwidth requirement is reduced significantly. Furthermore, our accelerating solution has a full range of advantages over different processors. Compared with GPU, our work is about 10 times better in performance per watt and 50% better in performance per cost.

  17. Quantification of discreteness effects in cosmological N-body simulations: Initial conditions

    International Nuclear Information System (INIS)

    Joyce, M.; Marcos, B.

    2007-01-01

    The relation between the results of cosmological N-body simulations and the continuum theoretical models they simulate is currently not understood in a way which allows a quantification of N-dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004)] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e., theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological type power spectra, one obtains, as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological type spectra the theoretical mass variance in spheres and two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude (or, equivalently, the initial redshift of the cosmological simulation) goes to zero, at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular with respect to the definition of the continuum limit of N-body simulations.

  18. Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations

    Science.gov (United States)

    Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.

    2018-05-01

    We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are more accurate than ~5% for haloes with masses of 10^12.0–10^14.5 h^-1 M_⊙ for radii 0.05 ... baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.

  19. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from first halo formation until z = 0). We study the mean major merger rate as a function of time, where for time we consider both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility for a new simple preliminary analysis during the simulation run. Visualization of the simulation is portrayed as well. At redshifts z = 0–7 halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11–10^14 M_⊙/h. [176021: 'Visible and invisible matter in nearby galaxies: theory and observations']

  20. The shape of the invisible halo: N-body simulations on parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Warren, M.S.; Zurek, W.H. (Los Alamos National Lab., NM (USA)); Quinn, P.J. (Australian National Univ., Canberra (Australia). Mount Stromlo and Siding Spring Observatories); Salmon, J.K. (California Inst. of Tech., Pasadena, CA (USA))

    1990-01-01

    We study the shapes of halos and the relationship to their angular momentum content by means of N-body (N ≈ 10^6) simulations. Results indicate that in relaxed halos with no apparent substructure: (i) the shape and orientation of the isodensity contours tends to persist throughout the virialised portion of the halo; (ii) most (≈70%) of the halos are prolate; (iii) the approximate direction of the angular momentum vector tends to persist throughout the halo; (iv) for spherical shells centered on the core of the halo the magnitude of the specific angular momentum is approximately proportional to their radius; (v) the shortest axis of the ellipsoid which approximates the shape of the halo tends to align with the rotation axis of the halo. This tendency is strongest in the fastest rotating halos. 13 refs., 4 figs.

  1. Speeding up N-body simulations of modified gravity: chameleon screening models

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Barreira, Alexandre [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Hellwing, Wojciech A.; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Zhao, Gong-Bo, E-mail: sownak.bose@durham.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: barreira@mpa-garching.mpg.de, E-mail: jianhua.he@durham.ac.uk, E-mail: wojciech.hellwing@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: claudio.llinares@durham.ac.uk, E-mail: gbzhao@nao.cas.cn [National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012 (China)

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  2. Speeding up N-body simulations of modified gravity: chameleon screening models

    Science.gov (United States)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  3. Speeding up N-body simulations of modified gravity: chameleon screening models

    International Nuclear Information System (INIS)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio; Barreira, Alexandre; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo

    2017-01-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
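
    To make the numerical bottleneck concrete, here is a toy Newton-Gauss-Seidel relaxation for a model one-dimensional nonlinear equation u'' = g(u) + ρ. It illustrates the kind of iteration whose slow convergence the records above set out to avoid; it is not the f(R) solver itself, and the model equation, names and boundary treatment are our own simplifications.

    ```python
    import numpy as np

    def newton_gauss_seidel(u, rho, h, g, g_prime, n_sweeps=100):
        """Relax the discretised equation u'' = g(u) + rho on a 1D grid.

        u        : initial guess; u[0] and u[-1] are fixed Dirichlet values
        rho      : source term sampled on the same grid
        h        : grid spacing
        g, g_prime : the nonlinearity and its derivative
        """
        for _ in range(n_sweeps):
            for i in range(1, len(u) - 1):
                # residual of the discretised equation at node i
                L = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / h**2 - g(u[i]) - rho[i]
                # derivative of the residual with respect to u[i]
                dL = -2.0 / h**2 - g_prime(u[i])
                u[i] -= L / dL          # local Newton update, then sweep on
        return u

    # Example: relax u'' = exp(u) with zero source and zero boundary values.
    # x = np.linspace(0.0, 1.0, 65)
    # u = newton_gauss_seidel(np.zeros_like(x), np.zeros_like(x),
    #                         x[1] - x[0], np.exp, np.exp, n_sweeps=500)
    ```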

  4. Sixth- and eighth-order Hermite integrator for N-body simulations

    Science.gov (United States)

    Nitadori, Keigo; Makino, Junichiro

    2008-10-01

    We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for the force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme in most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
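
    For orientation, the scheme being generalized here is the classic fourth-order Hermite predictor-corrector, which uses accelerations and jerks only. A minimal, shared-timestep sketch (the softening and step size are placeholder choices; the sixth- and eighth-order correctors with snap and crackle follow the same predict-evaluate-correct pattern, with coefficients given in the paper):

    ```python
    import numpy as np

    def acc_jerk(pos, vel, mass, eps2=1e-4):
        """Direct-summation accelerations and jerks (time derivatives of acc)."""
        n = len(mass)
        acc = np.zeros((n, 3))
        jerk = np.zeros((n, 3))
        for i in range(n):
            dr = pos - pos[i]
            dv = vel - vel[i]
            r2 = np.sum(dr * dr, axis=1) + eps2
            inv_r3 = r2 ** -1.5
            inv_r3[i] = 0.0                      # no self-interaction
            rv = np.sum(dr * dv, axis=1) / r2
            acc[i] = np.sum(mass[:, None] * dr * inv_r3[:, None], axis=0)
            jerk[i] = np.sum(mass[:, None] * (dv - 3.0 * rv[:, None] * dr)
                             * inv_r3[:, None], axis=0)
        return acc, jerk

    def hermite_step(pos, vel, mass, dt):
        """One fourth-order Hermite predict-evaluate-correct step."""
        a0, j0 = acc_jerk(pos, vel, mass)
        # predictor: Taylor expansion using acceleration and jerk
        pos_p = pos + vel * dt + 0.5 * a0 * dt**2 + j0 * dt**3 / 6.0
        vel_p = vel + a0 * dt + 0.5 * j0 * dt**2
        a1, j1 = acc_jerk(pos_p, vel_p, mass)
        # Hermite corrector
        vel_c = vel + 0.5 * (a0 + a1) * dt + (j0 - j1) * dt**2 / 12.0
        pos_c = pos + 0.5 * (vel + vel_c) * dt + (a0 - a1) * dt**2 / 12.0
        return pos_c, vel_c
    ```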

  5. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    International Nuclear Information System (INIS)

    Grimm, Simon L.; Stadel, Joachim G.

    2014-01-01

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.

  6. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Simon L.; Stadel, Joachim G., E-mail: sigrimm@physik.uzh.ch [Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2014-11-20

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.

  7. Simulations of collisions between N-body classical systems in interaction

    International Nuclear Information System (INIS)

    Morisseau, Francois

    2006-05-01

    The Classical N-body Dynamics (CNBD) code is dedicated to the simulation of collisions between classical systems. The 2-body interaction used here has the properties of the Van der Waals potential and depends on just a few parameters. This work has two main goals. First, some theoretical approaches assume that the dynamical stage of the collisions plays an important role. Moreover, colliding nuclei are supposed to present a first-order liquid-gas phase transition. Several signals have been introduced to show this transition. We have searched for two of them: the bimodality of the mass asymmetry and negative heat capacity. We have found them and we give an explanation of their presence in our calculations. Second, we have improved the interaction by adding a Coulomb-like potential and by taking into account the stronger proton-neutron interaction in nuclei. We have then determined the relations that exist between the parameters of the 2-body interaction and the properties of the systems. These studies allow us to fit the properties of the classical systems to those of the nuclei. In this manuscript the first results of this fit are shown. (author)

  8. N-body simulations of planet formation: understanding exoplanet system architectures

    Science.gov (United States)

    Coleman, Gavin; Nelson, Richard

    2015-12-01

    Observations have demonstrated the existence of a significant population of compact systems comprised of super-Earths and Neptune-mass planets, and a population of gas giants that appear to occur primarily in either short-period (<10 days) or long-period (>100 days) orbits. The broad diversity of system architectures raises the question of whether or not the same formation processes operating in standard disc models can explain these planets, or if different scenarios are required instead to explain the widely differing architectures. To explore this issue, we present the results from a comprehensive suite of N-body simulations of planetary system formation that include the following physical processes: gravitational interactions and collisions between planetary embryos and planetesimals; type I and II migration; gas accretion onto planetary cores; self-consistent viscous disc evolution and disc removal through photo-evaporation. Our results indicate that the formation and survival of compact systems of super-Earths and Neptune-mass planets occur commonly in disc models where a simple prescription for the disc viscosity is assumed, but such models never lead to the formation and survival of gas giant planets due to migration into the star. Inspired in part by the ALMA observations of HL Tau, and by MHD simulations that display the formation of long-lived zonal flows, we have explored the consequences of assuming that the disc viscosity varies in both time and space. We find that the radial structuring of the disc leads to conditions in which systems of giant planets are able to form and survive. Furthermore, these giants generally occupy those regions of the mass-period diagram that are densely populated by the observed gas giants, suggesting that the planet traps generated by radial structuring of protoplanetary discs may be a necessary ingredient for forming giant planets.

  9. Clusters of galaxies compared with N-body simulations: masses and mass segregation

    International Nuclear Information System (INIS)

    Struble, M.F.; Bludman, S.A.

    1979-01-01

    With three virially stable N-body simulations of Wielen, it is shown that use of the expression for the total mass derived from averaged quantities (velocity dispersion and mean harmonic radius) yields an overestimate of the mass by as much as a factor of 2-3, and use of the heaviest mass sample gives an underestimate by a factor of 2-3. The estimate of the mass using mass weighted quantities (i.e., derived from the customary definition of kinetic and potential energies) yields a better value irrespective of the mass sample as applied to late time intervals of the models (≥ three two-body relaxation times). The uncertainty is at most approximately 50%. This suggests that it is better to employ the mass weighted expression for the mass when determining cluster masses. The virial ratio, which is a ratio of the mass weighted/averaged expression for the potential energy, is found to vary between 1 and 2. It is concluded that ratios for observed clusters of approximately 4-10 cannot be explained even by the imprecision of the expression for the mass using averaged quantities, and certainly imply the presence of unseen matter. Total masses via customary application of the virial theorem are calculated for 39 clusters, and total masses for 12 clusters are calculated by a variant of the usual application. The distribution of cluster masses is also presented and briefly discussed. Mass segregation in Wielen's models is studied in terms of the binding energy per unit mass of the 'heavy' sample compared with the 'light' sample. The general absence of mass segregation in relaxed clusters and the large virial discrepancies are attributed to a population of many low-mass objects that may constitute the bulk mass of clusters of galaxies. (Auth.)
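
    The bias discussed here is easy to reproduce on any snapshot by comparing the virial mass estimated from averaged quantities (velocity dispersion and mean harmonic radius) with the known total mass. A minimal sketch, assuming equal-mass tracers and velocities measured relative to the cluster mean:

    ```python
    import numpy as np

    def virial_mass_averaged(pos, vel, G=1.0):
        """Virial mass estimate from averaged quantities, M = 2 <v^2> R_h / G,
        with R_h the mean harmonic (pairwise) radius.  This is the estimator
        that, as discussed above, can misestimate the true mass by factors of
        2-3 when applied to N-body models."""
        v2 = np.mean(np.sum(vel**2, axis=1))          # mean square velocity
        n = len(pos)
        inv_r_sum = 0.0
        for i in range(n - 1):
            r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
            inv_r_sum += np.sum(1.0 / r)
        r_harmonic = 0.5 * n * (n - 1) / inv_r_sum    # mean harmonic radius
        return 2.0 * v2 * r_harmonic / G
    ```

    Comparing this number with the sum of the particle masses in a model cluster quantifies the overestimate described in the abstract.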

  10. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A., E-mail: bharath@u.northwestern.edu [Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, Evanston, IL (United States)

    2013-02-15

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10⁷ particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10⁵ to 10⁷. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10⁵, 128 for N = 10⁶ and 256 for N = 10⁷. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  11. Structure formation by a fifth force: N-body versus linear simulations

    Science.gov (United States)

    Li, Baojiu; Zhao, Hongsheng

    2009-08-01

    We lay out the frameworks to numerically study the structure formation in both linear and nonlinear regimes in general dark-matter-coupled scalar field models, and give an explicit example where the scalar field serves as a dynamical dark energy. Adopting parameters of the scalar field which yield a realistic cosmic microwave background (CMB) spectrum, we generate the initial conditions for our N-body simulations, which follow the spatial distributions of the dark matter and the scalar field by solving their equations of motion using the multilevel adaptive grid technique. We show that the spatial configuration of the scalar field tracks well the voids and clusters of dark matter. Indeed, the propagation of the scalar degree of freedom effectively acts as a fifth force on dark matter particles, whose range and magnitude are determined by the two model parameters (μ, γ), the local dark matter density, as well as the background value for the scalar field. The model behaves like the ΛCDM paradigm on scales relevant to the CMB spectrum, which are well beyond the probe of the local fifth force and thus not significantly affected by the matter-scalar coupling. On scales comparable to or shorter than the range of the local fifth force, the fifth force is perfectly parallel to gravity and their strengths have a fixed ratio 2γ² determined by the matter-scalar coupling, provided that the chameleon effect is weak; if on the other hand there is a strong chameleon effect (i.e., the scalar field almost resides at its effective potential minimum everywhere in space), the fifth force indeed has suppressed effects in high density regions and shows no obvious correlation with gravity, which means that the dark-matter-scalar-field coupling is not simply equivalent to a rescaling of the gravitational constant or the mass of the dark matter particles. We show these spatial distributions and (lack of) correlations at typical redshifts (z = 0, 1, 5.5) in our multigrid million-particle simulations.
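
    In the weak-chameleon regime described above, the fifth force is parallel to gravity with a fixed relative strength 2γ² inside the range of the scalar. A minimal sketch of such a pairwise force law (the Yukawa form used here is only illustrative; in the model the range and screening come from solving the scalar field equation):

    ```python
    import numpy as np

    def pair_force(dr, m, gamma=0.1, lam=1.0, G=1.0):
        """Newtonian gravity plus a Yukawa-type fifth force of relative strength
        2*gamma**2 and range lam, exerted by a mass m on a particle at separation
        vector dr.  Well inside the range the total force approaches
        (1 + 2*gamma**2) times gravity; far outside it reduces to pure gravity."""
        r = np.linalg.norm(dr)
        newton = -G * m * dr / r**3
        fifth_boost = 2.0 * gamma**2 * (1.0 + r / lam) * np.exp(-r / lam)
        return newton * (1.0 + fifth_boost)
    ```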

  12. Structure formation by a fifth force: N-body versus linear simulations

    International Nuclear Information System (INIS)

    Li Baojiu; Zhao Hongsheng

    2009-01-01

    We lay out the frameworks to numerically study the structure formation in both linear and nonlinear regimes in general dark-matter-coupled scalar field models, and give an explicit example where the scalar field serves as a dynamical dark energy. Adopting parameters of the scalar field which yield a realistic cosmic microwave background (CMB) spectrum, we generate the initial conditions for our N-body simulations, which follow the spatial distributions of the dark matter and the scalar field by solving their equations of motion using the multilevel adaptive grid technique. We show that the spatial configuration of the scalar field tracks well the voids and clusters of dark matter. Indeed, the propagation of the scalar degree of freedom effectively acts as a fifth force on dark matter particles, whose range and magnitude are determined by the two model parameters (μ, γ), the local dark matter density, as well as the background value for the scalar field. The model behaves like the ΛCDM paradigm on scales relevant to the CMB spectrum, which are well beyond the probe of the local fifth force and thus not significantly affected by the matter-scalar coupling. On scales comparable to or shorter than the range of the local fifth force, the fifth force is perfectly parallel to gravity and their strengths have a fixed ratio 2γ² determined by the matter-scalar coupling, provided that the chameleon effect is weak; if on the other hand there is a strong chameleon effect (i.e., the scalar field almost resides at its effective potential minimum everywhere in space), the fifth force indeed has suppressed effects in high density regions and shows no obvious correlation with gravity, which means that the dark-matter-scalar-field coupling is not simply equivalent to a rescaling of the gravitational constant or the mass of the dark matter particles. We show these spatial distributions and (lack of) correlations at typical redshifts (z = 0, 1, 5.5) in our multigrid million-particle simulations.

  13. N-body simulations with a cosmic vector for dark energy

    Science.gov (United States)

    Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.

    2012-07-01

    We present the results of a series of cosmological N-body simulations of a vector dark energy (VDE) model, performed using a suitably modified version of the publicly available GADGET-2 code. The set-ups of our simulations were calibrated pursuing a twofold aim: (1) to analyse the large-scale distribution of massive objects and (2) to determine the properties of halo structure in this different framework. We observe that structure formation is enhanced in VDE, since the mass function at high redshift is boosted up to a factor of 10 with respect to Λ cold dark matter (ΛCDM), possibly alleviating tensions with the observations of massive clusters at high redshifts and an early reionization epoch. Significant differences can also be found for the value of the growth factor, which in VDE shows a completely different behaviour, and in the distribution of voids, which in this cosmology are on average smaller and less abundant. We further studied the structure of dark matter haloes more massive than 5 × 10¹³ h⁻¹ M⊙, finding that no substantial difference emerges when comparing the spin parameter, shape, triaxiality and profiles of structures evolved under the different cosmological pictures. Nevertheless, minor differences can be found in the concentration-mass relation and the two-point correlation function, both showing different amplitudes and steeper slopes. Using an additional series of simulations of a ΛCDM scenario with the same Ωm and σ8 used in the VDE cosmology, we have been able to establish whether the modifications induced in the new cosmological picture were due to the particular nature of the dynamical dark energy or a straightforward consequence of the cosmological parameters. On large scales, the dynamical effects of the cosmic vector field can be seen in the peculiar evolution of the cluster number density function with redshift, in the shape of the mass function, in the distribution of voids and in the characteristic form of the growth index γ(z).

  14. The gravitational interaction between N-body (star clusters) and hydrodynamic (ISM) codes in disk galaxy simulations

    International Nuclear Information System (INIS)

    Schroeder, M.C.; Comins, N.F.

    1986-01-01

    During the past twenty years, three approaches to numerical simulations of the evolution of galaxies have been developed. The first approach, N-body programs, models the motion of clusters of stars as point particles which interact via their gravitational potentials to determine the system dynamics. Some N-body codes model molecular clouds as colliding, inelastic particles. The second approach, hydrodynamic models of galactic dynamics, simulates the activity of the interstellar medium as a compressible gas. These models presently do not include stars, the effect of gravitational fields, or allow for stellar evolution and exchange of mass or angular momentum between stars and the interstellar medium. The third approach, stochastic star formation simulations of disk galaxies, allows for the interaction between stars and interstellar gas, but does not allow the star particles to move under the influence of gravity

  15. Distribution function approach to redshift space distortions. Part II: N-body simulations

    International Nuclear Information System (INIS)

    Okumura, Teppei; Seljak, Uroš; McDonald, Patrick; Desjacques, Vincent

    2012-01-01

    Measurement of redshift-space distortions (RSD) offers an attractive method to directly probe the cosmic growth history of density perturbations. A distribution function approach, in which RSD can be written as a sum over density-weighted velocity moment correlators, has recently been developed. In this paper we use results of N-body simulations to investigate the individual contributions and convergence of this expansion for dark matter. If the series is expanded in powers of μ, the cosine of the angle between the Fourier mode and the line of sight, then there is a finite number of terms contributing at each order. We present these terms and investigate their contribution to the total as a function of wavevector k. For μ² the correlation between density and momentum dominates on large scales. Higher-order corrections, which act as a Finger-of-God (FoG) term, contribute 1% at k ∼ 0.015 h Mpc⁻¹ and 10% at k ∼ 0.05 h Mpc⁻¹ at z = 0, while for k > 0.15 h Mpc⁻¹ they dominate and make the total negative. These higher-order terms are dominated by density-energy density correlations, which contribute negatively to the power, while the contribution from the vorticity part of the momentum density auto-correlation adds to the total power but is an order of magnitude lower. For the μ⁴ term the dominant contribution on large scales is the scalar part of the momentum density auto-correlation, while higher-order terms dominate for k > 0.15 h Mpc⁻¹. For μ⁶ and μ⁸ we find very little power on large scales, shooting up by 2-3 orders of magnitude towards smaller scales. We also compare the expansion to the full 2D P^ss(k, μ), as well as to the monopole, quadrupole, and hexadecapole integrals of P^ss(k, μ). For these statistics an infinite number of terms contribute, and we find that the expansion achieves percent-level accuracy at 6th order for sufficiently small kμ, but breaks down on smaller scales because the series is no longer perturbative. We also explore resummation of the terms into FoG kernels.
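
    The monopole, quadrupole, and hexadecapole mentioned here are Legendre moments of the redshift-space power spectrum, P_ell(k) = (2*ell + 1)/2 ∫ P^ss(k, μ) L_ell(μ) dμ. A short sketch of how such multipoles can be computed from a tabulated P(k, μ) (function and variable names are placeholders, not the authors' code):

    ```python
    import numpy as np
    from numpy.polynomial.legendre import Legendre

    def power_multipole(pk_mu, mu, ell):
        """P_ell(k) = (2*ell + 1)/2 * integral over mu of P(k, mu) * L_ell(mu).

        pk_mu has shape (n_k, n_mu); mu is the grid of cosines of the angle
        between the wavevector and the line of sight.  If the table only covers
        mu in [0, 1], symmetry under mu -> -mu lets one double the result.
        """
        leg = Legendre.basis(ell)(mu)              # L_ell evaluated on the grid
        return (2 * ell + 1) / 2.0 * np.trapz(pk_mu * leg[None, :], mu, axis=1)

    # monopole, quadrupole and hexadecapole from a (n_k, n_mu) table pk_mu:
    # p0, p2, p4 = (power_multipole(pk_mu, mu, ell) for ell in (0, 2, 4))
    ```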

  16. Selecting ultra-faint dwarf candidate progenitors in cosmological N-body simulations at high redshifts

    Science.gov (United States)

    Safarzadeh, Mohammadtaher; Ji, Alexander P.; Dooley, Gregory A.; Frebel, Anna; Scannapieco, Evan; Gómez, Facundo A.; O'Shea, Brian W.

    2018-06-01

    The smallest satellites of the Milky Way ceased forming stars during the epoch of reionization and thus provide archaeological access to galaxy formation at z > 6. Numerical studies of these ultrafaint dwarf galaxies (UFDs) require expensive cosmological simulations with high mass resolution that are carried out down to z = 0. However, if we are able to statistically identify UFD host progenitors at high redshifts with relatively high probabilities, we can avoid this high computational cost. To find such candidates, we analyse the merger trees of Milky Way type haloes from the high-resolution Caterpillar suite of dark matter only simulations. Satellite UFD hosts at z = 0 are identified based on four different abundance matching (AM) techniques. All the haloes at high redshifts are traced forward in time in order to compute the probability of surviving as satellite UFDs today. Our results show that selecting potential UFD progenitors based solely on their mass at z = 12 (8) results in a 10 per cent (20 per cent) chance of obtaining a surviving UFD at z = 0 in three of the AM techniques we adopted. We find that the progenitors of surviving satellite UFDs have lower virial ratios (η), and are preferentially located at large distances from the main MW progenitor, while they show no correlation with concentration parameter. Haloes with favorable locations and virial ratios are ≈3 times more likely to survive as satellite UFD candidates at z = 0.

  17. Testing lowered isothermal models with direct N-body simulations of globular clusters - II. Multimass models

    Science.gov (United States)

    Peuten, M.; Zocchi, A.; Gieles, M.; Hénault-Brunet, V.

    2017-09-01

    Lowered isothermal models, such as the multimass Michie-King models, have been successful in describing observational data of globular clusters. In this study, we assess whether such models are able to describe the phase space properties of evolutionary N-body models. We compare the multimass models as implemented in limepy (Gieles & Zocchi) to N-body models of star clusters with different retention fractions for the black holes and neutron stars evolving in a tidal field. We find that multimass models successfully reproduce the density and velocity dispersion profiles of the different mass components in all evolutionary phases and for different remnant retention fractions. We further use these results to study the evolution of global model parameters. We find that over the lifetime of clusters, radial anisotropy gradually evolves from the low- to the high-mass components and we identify features in the properties of observable stars that are indicative of the presence of stellar-mass black holes. We find that the model velocity scale depends on mass as m^-δ, with δ ≃ 0.5 for almost all models, but the dependence of central velocity dispersion on m can be shallower, depending on the dark remnant content, and agrees well with that of the N-body models. The reported model parameters, and correlations amongst them, can be used as theoretical priors when fitting these types of mass models to observational data.

  18. Application of the Ewald method to cosmological N-body simulations

    International Nuclear Information System (INIS)

    Hernquist, L.; Suto, Yasushi; Bouchet, F.R.

    1990-03-01

    Fully periodic boundary conditions are incorporated into a gridless cosmological N-body code using the Ewald method. It is shown that the linear evolution of density fluctuations agrees well with analytic calculations, contrary to the case of quasi-periodic boundary conditions where the fundamental mode grows too rapidly. The implementation of fully periodic boundaries is of particular importance to relative comparisons of methods based on hierarchical tree algorithms and more traditional schemes using Fourier techniques such as PM and P³M codes. (author)
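
    For reference, the Ewald trick splits the periodic 1/r sum into a short-range, erfc-screened real-space part and a smooth reciprocal-space part. The sketch below only illustrates that split (it is not the tabulated force-correction scheme of the paper; the convergence parameters are arbitrary, and the constant self-energy and uniform-background terms are omitted, so only potential differences are meaningful):

    ```python
    import numpy as np
    from scipy.special import erfc

    def ewald_potential(pos, mass, L, alpha=2.0, nreal=1, nk=4):
        """Periodic 1/r potential at each particle position via Ewald summation.
        pos has shape (N, 3) with coordinates in [0, L); multiply by -G for the
        gravitational potential."""
        n_part = len(pos)
        phi = np.zeros(n_part)
        # real-space sum over nearby periodic images, screened by erfc
        rr = range(-nreal, nreal + 1)
        for i in range(n_part):
            dr = pos - pos[i]
            for ix in rr:
                for iy in rr:
                    for iz in rr:
                        shift = L * np.array([ix, iy, iz], dtype=float)
                        r = np.linalg.norm(dr + shift, axis=1)
                        keep = r > 0.0       # skip the i == j, zero-shift term
                        phi[i] += np.sum(mass[keep] * erfc(alpha * r[keep]) / r[keep])
        # reciprocal-space sum over wavevectors k = 2*pi*n/L with n != 0
        volume = L**3
        kr = range(-nk, nk + 1)
        for ix in kr:
            for iy in kr:
                for iz in kr:
                    if ix == iy == iz == 0:
                        continue
                    k = 2.0 * np.pi * np.array([ix, iy, iz], dtype=float) / L
                    k2 = k @ k
                    rho_k = np.sum(mass * np.exp(-1j * pos @ k))  # structure factor
                    coeff = 4.0 * np.pi / (volume * k2) * np.exp(-k2 / (4.0 * alpha**2))
                    phi += coeff * np.real(rho_k * np.exp(1j * pos @ k))
        return phi
    ```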

  19. A new gravitational N-body simulation algorithm for investigation of Lagrangian turbulence in astrophysical and cosmological systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Reinaldo Roberto; Gomes, Vitor; Araujo, Amarisio [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil); Clua, Esteban [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil)

    2011-07-01

    Full text: Turbulent-like behaviour is an important and recent ingredient in the investigation of large-scale structure formation in the observable universe. Recently, an established statistical method was used to demonstrate the importance of considering chaotic advection (or Lagrangian turbulence) in combination with gravitational instabilities in the Λ-CDM simulations performed by the Virgo Consortium (VC). However, the Hubble volumes simulated with the GADGET-VC algorithm have some limitations for direct Lagrangian data analysis, due to the large amount of data and the lack of real-time computation of particle kinetic velocities along the dark matter structure evolution. Hence, the Lab for Computing and Applied Mathematics at INPE, Brazil, has been working for the past two years on computational environments to achieve the so-called COsmic LAgrangian TUrbulence Simulator (COLATUS), allowing N-body simulation from a Lagrangian perspective. The COLATUS prototype, like standard packages, computes gravitational forces with a hierarchical tree algorithm, in combination with a local particle kinetic velocity vector in a particle-mesh scheme for long-range gravitational forces. In the present work we show preliminary simulations for 10⁶ particles, showing Lagrangian power spectra for individual particles converging to a stable power law S(v) ∼ v⁵. The code may be run on an arbitrary number of processors, with a restriction to powers of two. COLATUS has the potential to evaluate the complex kinematics of a single particle in a simulated N-body gravitational system. However, to release this method as GNU software, further improvements and investigations are necessary. We then discuss mapping techniques for the N-body problem incorporating radiation pressure and fluid characteristics by means of smoothed particle hydrodynamics (SPH). Finally, we focus on the all-pairs computational kernel and its future GPU implementation using the NVIDIA CUDA programming model.

  20. A new gravitational N-body simulation algorithm for investigation of Lagrangian turbulence in astrophysical and cosmological systems

    International Nuclear Information System (INIS)

    Rosa, Reinaldo Roberto; Gomes, Vitor; Araujo, Amarisio; Clua, Esteban

    2011-01-01

    Full text: Turbulent-like behaviour is an important and recent ingredient in the investigation of large-scale structure formation in the observable universe. Recently, an established statistical method was used to demonstrate the importance of considering chaotic advection (or Lagrangian turbulence) in combination with gravitational instabilities in the Λ-CDM simulations performed by the Virgo Consortium (VC). However, the Hubble volumes simulated with the GADGET-VC algorithm have some limitations for direct Lagrangian data analysis, due to the large amount of data and the lack of real-time computation of particle kinetic velocities along the dark matter structure evolution. Hence, the Lab for Computing and Applied Mathematics at INPE, Brazil, has been working for the past two years on computational environments to achieve the so-called COsmic LAgrangian TUrbulence Simulator (COLATUS), allowing N-body simulation from a Lagrangian perspective. The COLATUS prototype, like standard packages, computes gravitational forces with a hierarchical tree algorithm, in combination with a local particle kinetic velocity vector in a particle-mesh scheme for long-range gravitational forces. In the present work we show preliminary simulations for 10⁶ particles, showing Lagrangian power spectra for individual particles converging to a stable power law S(v) ∼ v⁵. The code may be run on an arbitrary number of processors, with a restriction to powers of two. COLATUS has the potential to evaluate the complex kinematics of a single particle in a simulated N-body gravitational system. However, to release this method as GNU software, further improvements and investigations are necessary. We then discuss mapping techniques for the N-body problem incorporating radiation pressure and fluid characteristics by means of smoothed particle hydrodynamics (SPH). Finally, we focus on the all-pairs computational kernel and its future GPU implementation using the NVIDIA CUDA programming model. (author)

  1. K-means clustering for optimal partitioning and dynamic load balancing of parallel hierarchical N-body simulations

    International Nuclear Information System (INIS)

    Marzouk, Youssef M.; Ghoniem, Ahmed F.

    2005-01-01

    A number of complex physical problems can be approached through N-body simulation, from fluid flow at high Reynolds number to gravitational astrophysics and molecular dynamics. In all these applications, direct summation is prohibitively expensive for large N and thus hierarchical methods are employed for fast summation. This work introduces new algorithms, based on k-means clustering, for partitioning parallel hierarchical N-body interactions. We demonstrate that the number of particle-cluster interactions and the order at which they are performed are directly affected by partition geometry. Weighted k-means partitions minimize the sum of clusters' second moments and create well-localized domains, and thus reduce the computational cost of N-body approximations by enabling the use of lower-order approximations and fewer cells. We also introduce compatible techniques for dynamic load balancing, including adaptive scaling of cluster volumes and adaptive redistribution of cluster centroids. We demonstrate the performance of these algorithms by constructing a parallel treecode for vortex particle simulations, based on the serial variable-order Cartesian code developed by Lindsay and Krasny [Journal of Computational Physics 172 (2) (2001) 879-907]. The method is applied to vortex simulations of a transverse jet. Results show outstanding parallel efficiencies even at high concurrencies, with velocity evaluation errors maintained at or below their serial values; on a realistic distribution of 1.2 million vortex particles, we observe a parallel efficiency of 98% on 1024 processors. Excellent load balance is achieved even in the face of several obstacles, such as an irregular, time-evolving particle distribution containing a range of length scales and the continual introduction of new vortex particles throughout the domain. Moreover, results suggest that k-means yields a more efficient partition of the domain than a global oct-tree.
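
    A minimal sketch of the weighted k-means (Lloyd) iteration behind such a partition, assigning particles to the nearest centre and moving centres to the weighted centroids (a toy version, without the adaptive volume scaling and centroid redistribution described above):

    ```python
    import numpy as np

    def kmeans_partition(pos, weights, k, n_iter=20, seed=0):
        """Partition particles into k spatially compact domains.

        The weighted Lloyd iteration minimises the sum of the clusters' second
        moments, which is what keeps the domains well localised for treecode
        interaction lists."""
        rng = np.random.default_rng(seed)
        centres = pos[rng.choice(len(pos), size=k, replace=False)].copy()
        for _ in range(n_iter):
            # assignment step: nearest centre for every particle
            d2 = np.sum((pos[:, None, :] - centres[None, :, :])**2, axis=2)
            labels = np.argmin(d2, axis=1)
            # update step: weighted centroid of each cluster
            for c in range(k):
                members = labels == c
                if np.any(members):
                    centres[c] = np.average(pos[members], axis=0,
                                            weights=weights[members])
        return labels, centres
    ```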

  2. BOOSTED TIDAL DISRUPTION BY MASSIVE BLACK HOLE BINARIES DURING GALAXY MERGERS FROM THE VIEW OF N-BODY SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shuo; Berczik, Peter; Spurzem, Rainer [National Astronomical Observatories and Key Laboratory of Computational Astrophysics, Chinese Academy of Sciences, 20A Datun Rd., Chaoyang District, Beijing 100012 (China); Liu, F. K., E-mail: lishuo@nao.cas.cn [Department of Astronomy, School of Physics, Peking University, Yiheyuan Lu 5, Haidian Qu, Beijing 100871 (China)

    2017-01-10

    Supermassive black hole binaries (SMBHBs) are products of the hierarchical galaxy formation model. There are many close connections between a central SMBH and its host galaxy, because the former plays a very important role in galaxy formation and evolution. For this reason, the evolution of SMBHBs in merging galaxies is a fundamental challenge. Since there are many discussions of SMBHB evolution in gas-rich environments, we focus on quiescent galaxies, using tidal disruption (TD) as a diagnostic tool. Our study is based on a series of numerical, large particle number, direct N-body simulations of dry major mergers. According to the simulation results, the evolution can be divided into three phases. In phase I, the TD rate for two well separated SMBHs in a merging system is similar to that for a single SMBH in an isolated galaxy. After the two SMBHs approach closely enough to form a bound binary in phase II, the disruption rate can be enhanced by ∼2 orders of magnitude within a short time. This “boosted” disruption stage finishes after the SMBHB evolves into a compact binary system in phase III, corresponding to a reduction of the disruption rate back to a level a few times higher than in phase I. We also discuss how to correctly extrapolate our N-body simulation results to reality, and the implications of our results for observations.

  3. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    International Nuclear Information System (INIS)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.

    2013-01-01

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10⁷ particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10⁵ to 10⁷. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10⁵, 128 for N = 10⁶ and 256 for N = 10⁷. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  4. Estimating non-circular motions in barred galaxies using numerical N-body simulations

    Science.gov (United States)

    Randriamampandry, T. H.; Combes, F.; Carignan, C.; Deg, N.

    2015-12-01

    The observed velocities of the gas in barred galaxies are a combination of the azimuthally averaged circular velocity and non-circular motions, primarily caused by gas streaming along the bar. These non-circular flows must be accounted for before the observed velocities can be used in mass modelling. In this work, we examine the performance of the tilted-ring method and the DISKFIT algorithm for transforming velocity maps of barred spiral galaxies into rotation curves (RCs) using simulated data. We find that the tilted-ring method, which does not account for streaming motions, under-/overestimates the circular motions when the bar is parallel/perpendicular to the projected major axis. DISKFIT, which does include streaming motions, is limited to orientations where the bar is not aligned with either the major or minor axis of the image. Therefore, we propose a method of correcting RCs based on numerical simulations of galaxies. We correct the RC derived from the tilted-ring method based on a numerical simulation of a galaxy with similar properties and projections as the observed galaxy. Using observations of NGC 3319, which has a bar aligned with the major axis, as a test case, we show that the inferred mass models from the uncorrected and corrected RCs are significantly different. These results show the importance of correcting for the non-circular motions and demonstrate that new methods of accounting for these motions are necessary as current methods fail for specific bar alignments.

  5. Satellite alignment. I. Distribution of substructures and their dependence on assembly history from n-body simulations

    International Nuclear Information System (INIS)

    Wang, Yang Ocean; Lin, W. P.; Yu, Yu; Kang, X.; Dutton, Aaron; Macciò, Andrea V.

    2014-01-01

    Observations have shown that the spatial distribution of satellite galaxies is not random, but aligned with the major axes of central galaxies. This alignment is dependent on galaxy properties, such that red satellites are more strongly aligned than blue satellites. Theoretical work conducted to interpret this phenomenon has found that it is due to the non-spherical nature of dark matter halos. However, most studies overpredict the alignment signal under the assumption that the central galaxy shape follows the shape of the host halo. It is also not clear whether the color dependence of alignment is due to an assembly bias or an evolution effect. In this paper we study these problems using a cosmological N-body simulation. Subhalos are used to trace the positions of satellite galaxies. It is found that the shapes of dark matter halos are mis-aligned at different radii. If the central galaxy shares the same shape as the inner host halo, then the alignment effect is weaker and agrees with observational data. However, it predicts almost no dependence of alignment on the color of satellite galaxies, though the late accreted subhalos show stronger alignment with the outer layer of the host halo than their early accreted counterparts. We find that this is due to the limitation of pure N-body simulations where satellite galaxies without associated subhalos ('orphan galaxies') are not resolved. These orphan (mostly red) satellites often reside in the inner region of host halos and should follow the shape of the host halo in the inner region.

  6. GRAPE-5: A Special-Purpose Computer for N-body Simulation

    OpenAIRE

    Kawai, Atsushi; Fukushige, Toshiyuki; Makino, Junichiro; Taiji, Makoto

    1999-01-01

    We have developed a special-purpose computer for gravitational many-body simulations, GRAPE-5. GRAPE-5 is the successor of GRAPE-3. Both consist of eight custom pipeline chips (the G5 chip and the GRAPE chip, respectively). The differences between GRAPE-5 and GRAPE-3 are: (1) The G5 chip contains two pipelines operating at 80 MHz, while the GRAPE chip had one at 20 MHz. Thus, the calculation speeds of the G5 chip and the GRAPE-5 board are 8 times faster than those of the GRAPE chip and the GRAPE-3 board. (2) The GRAPE-5 ...

  7. Scalable streaming tools for analyzing N-body simulations: Finding halos and investigating excursion sets in one pass

    Science.gov (United States)

    Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.

    2018-04-01

    Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis of large simulation datasets, e.g., finding halos and obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets that are prohibitively large in modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10⁹ particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques on GPU boosting, sampling, and parallel I/O, to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10³ largest halos (Liu et al., 2015) to ∼10⁴-10⁵, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10¹² particles while using less than an hour of running time on a single GPU Nvidia GTX 1080.
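
    The core idea of such one-pass tools is to treat dense grid cells as "heavy hitters" in the particle stream and to estimate their counts with sub-linear memory. The authors' implementation is GPU-based and far more elaborate; the sketch below only illustrates the ingredient of a streaming count sketch over hashed cell IDs (the hash construction and sizes are arbitrary choices, not the paper's):

    ```python
    import numpy as np

    class CountMinSketch:
        """Approximate per-key counts in one pass using fixed memory."""

        def __init__(self, width=2**16, depth=4, seed=1):
            rng = np.random.default_rng(seed)
            self.width, self.depth = width, depth
            self.table = np.zeros((depth, width), dtype=np.int64)
            # simple random hash parameters (illustrative, not production grade)
            self.a = rng.integers(1, 2**31 - 1, size=depth)
            self.b = rng.integers(0, 2**31 - 1, size=depth)

        def _hash(self, key):
            return ((self.a * key + self.b) % (2**31 - 1)) % self.width

        def add(self, key):
            self.table[np.arange(self.depth), self._hash(key)] += 1

        def query(self, key):
            return int(self.table[np.arange(self.depth), self._hash(key)].min())

    def cell_id(position, box_size, n_grid):
        """Map a 3D position to a flat grid-cell index."""
        i, j, k = (np.floor(position / box_size * n_grid).astype(int) % n_grid)
        return int((i * n_grid + j) * n_grid + k)

    # one pass over a particle stream, keeping approximate per-cell counts:
    # sketch = CountMinSketch()
    # for p in particle_stream:          # p is a length-3 position array
    #     sketch.add(cell_id(p, box_size=100.0, n_grid=256))
    ```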

  8. Cosmological N-body simulations with a tree code - Fluctuations in the linear and nonlinear regimes

    International Nuclear Information System (INIS)

    Suginohara, Tatsushi; Suto, Yasushi; Bouchet, F.R.; Hernquist, L.

    1991-01-01

    The evolution of gravitational systems is studied numerically in a cosmological context using a hierarchical tree algorithm with fully periodic boundary conditions. The simulations employ 262,144 particles, which are initially distributed according to scale-free power spectra. The subsequent evolution is followed in both flat and open universes. With this large number of particles, the discretized system can accurately model the linear phase. It is shown that the dynamics in the nonlinear regime depends on both the spectral index n and the density parameter Ω. In Ω = 1 universes, the evolution of the two-point correlation function ξ agrees well with similarity solutions for ξ greater than about 100, but its slope is steeper in open models with the same n. 28 refs
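
    The two-point correlation function ξ(r) whose self-similar evolution is tested here can be measured from a periodic snapshot by pair counting against the expectation for a uniform distribution. A brute-force O(N²) sketch (valid for bins well below half the box size):

    ```python
    import numpy as np

    def correlation_function(pos, box, bins):
        """Estimate xi(r) in a periodic box as DD / RR_expected - 1, where
        RR_expected is the pair count of an ideal uniform distribution."""
        n = len(pos)
        dd = np.zeros(len(bins) - 1)
        for i in range(n - 1):
            d = pos[i + 1:] - pos[i]
            d -= box * np.round(d / box)              # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            dd += np.histogram(r, bins=bins)[0]
        shell_vol = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
        rr_expected = 0.5 * n * (n - 1) * shell_vol / box**3
        return dd / rr_expected - 1.0
    ```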

  9. The halo bispectrum in N-body simulations with non-Gaussian initial conditions

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Desjacques, V.

    2012-10-01

    We present measurements of the bispectrum of dark matter haloes in numerical simulations with non-Gaussian initial conditions of local type. We show, in the first place, that the overall effect of primordial non-Gaussianity on the halo bispectrum is larger than on the halo power spectrum when all measurable configurations are taken into account. We then compare our measurements with a tree-level perturbative prediction, finding good agreement at large scales when the constant Gaussian bias parameter, both linear and quadratic, and their constant non-Gaussian corrections are fitted for. The best-fitting values of the Gaussian bias factors and their non-Gaussian, scale-independent corrections are in qualitative agreement with the peak-background split expectations. In particular, we show that the effect of non-Gaussian initial conditions on squeezed configurations is fairly large (up to 30 per cent for fNL = 100 at redshift z = 0.5) and results from contributions of similar amplitude induced by the initial matter bispectrum, scale-dependent bias corrections as well as from non-linear matter bispectrum corrections. We show, in addition, that effects at second order in fNL are irrelevant for the range of values allowed by cosmic microwave background and galaxy power spectrum measurements, at least on the scales probed by our simulations (k > 0.01 h Mpc⁻¹). Finally, we present a Fisher matrix analysis to assess the possibility of constraining primordial non-Gaussianity with future measurements of the galaxy bispectrum. We find that a survey with a volume of about 10 h⁻³ Gpc³ at mean redshift z ≃ 1 could provide an error on fNL of the order of a few. This shows the relevance of a joint analysis of galaxy power spectrum and bispectrum in future redshift surveys.

  10. Keeping it real: revisiting a real-space approach to running ensembles of cosmological N-body simulations

    International Nuclear Information System (INIS)

    Orban, Chris

    2013-01-01

    In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. the box-scale overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using a power law and a 'power law times a bump' model inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, which was originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly-clustered regime (σ8 ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral-constraint correction to the correlation function in results from the conventional approach, which can be important in ΛCDM simulations with L_box ≲ 1 h⁻¹ Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful independent-of-epoch analytic formula for estimating the importance of the integral-constraint bias on correlation function measurements in ΛCDM simulations.

  11. Comparing Results of SPH/N-body Impact Simulations Using Both Solid and Rubble-pile Target Asteroids

    Science.gov (United States)

    Durda, Daniel D.; Bottke, W. F.; Enke, B. L.; Nesvorný, D.; Asphaug, E.; Richardson, D. C.

    2006-09-01

    We have been investigating the properties of satellites and the morphology of size-frequency distributions (SFDs) resulting from a suite of 160 SPH/N-body simulations of impacts into 100-km diameter parent asteroids (Durda et al. 2004, Icarus 170, 243-257; Durda et al. 2006, Icarus, in press). These simulations have produced many valuable insights into the outcomes of cratering and disruptive impacts but were limited to monolithic basalt targets. As a natural consequence of collisional evolution, however, many asteroids have undergone a series of battering impacts that likely have left their interiors substantially fractured, if not completely rubblized. In light of this, we have re-mapped the matrix of simulations using rubble-pile target objects. We constructed the rubble-pile targets by filling the interior of the 100-km diameter spherical shell (the target envelope) with randomly sized solid spheres in mutual contact. We then assigned full damage (which reduces tensile and shear stresses to zero) to SPH particles in the contacts between the components; the remaining volume is void space. The internal spherical components have a power-law distribution of sizes simulating fragments of a pre-shattered parent object. First-look analysis of the rubble-pile results indicates some general similarities to the simulations with the monolithic targets (e.g., similar trends in the number of small, gravitationally bound satellite systems as a function of impact conditions) and some significant differences (e.g., the size of the largest remnants and the amount of smaller debris, which affect the size-frequency distributions of the resulting families). We will report details of a more thorough analysis and the implications for collisional models of the main asteroid belt. This work is supported by the National Science Foundation, grant number AST0407045.

  12. Transients from initial conditions based on Lagrangian perturbation theory in N-body simulations II: the effect of the transverse mode

    International Nuclear Information System (INIS)

    Tatekawa, Takayuki

    2014-01-01

    We study the initial conditions for cosmological N-body simulations aimed at precision cosmology. The Zel'dovich approximation has generally been used to set the initial conditions of N-body simulations for a long time. Such initial conditions, however, yield incorrect higher-order growth; the errors caused by setting up the initial conditions with perturbation theory are called transients. In a previous paper we investigated the impact of transients on the non-Gaussianity of the density field by performing cosmological N-body simulations with initial conditions based on first-, second-, and third-order Lagrangian perturbation theory. In this paper, we evaluate the effect of the transverse mode in the third-order Lagrangian perturbation theory on several statistical quantities, such as the power spectrum and non-Gaussianity. We find that the effect of the transverse mode in third-order Lagrangian perturbation theory is quite small.

  13. MODELING PLANETARY SYSTEM FORMATION WITH N-BODY SIMULATIONS: ROLE OF GAS DISK AND STATISTICS COMPARED TO OBSERVATIONS

    International Nuclear Information System (INIS)

    Liu Huigen; Zhou Jilin; Wang Su

    2011-01-01

    During the late stage of planet formation, when Mars-sized cores appear, interactions among planetary cores can excite their orbital eccentricities, accelerate their merging, and thus sculpt their final orbital architecture. This study contributes to the final assembling of planetary systems with N-body simulations, including the type I or II migration of planets and gas accretion of massive cores in a viscous disk. Statistics on the final distributions of planetary masses, semimajor axes, and eccentricities are derived and are comparable to those of the observed systems. Our simulations predict some new orbital signatures of planetary systems around solar mass stars: 36% of the surviving planets are giant planets (>10 M⊕). Most of the massive giant planets (>30 M⊕) are located at 1-10 AU. Terrestrial planets are distributed more or less evenly at small semimajor axes, while a fraction of the giant planets end up in highly eccentric orbits (e > 0.3-0.4). The average eccentricity (∼0.15) of the giant planets (>10 M⊕) is greater than that (∼0.05) of the terrestrial planets (<10 M⊕). A planetary system with more planets tends to have smaller planet masses and orbital eccentricities on average.

  14. Dissipative N-body simulations of the formation of single galaxies in a cold dark-matter cosmology

    International Nuclear Information System (INIS)

    Ewell, M.W. Jr.

    1988-01-01

    The details of an N-body code designed specifically to study the collapse of a single protogalaxy are presented. This code uses a spherical harmonic expansion to model the gravity and a sticky-particle algorithm to model the gas physics. It includes external tides and cosmologically realistic boundary conditions. The results of twelve simulations using this code are given. The initial conditions for these runs use mean-density profiles and r.m.s. quadrupoles and tides taken from the CDM power spectrum. The simulations start when the center of the perturbation first goes nonlinear, and continue until a redshift Z ∼ 1-2. The resulting rotation curves are approximately flat out to 100 kpc, but do show some structure. The circular velocity is 200 km/sec around a 3σ peak. The final systems have λ ≈ 0.03. The angular momentum per unit mass of the baryons implies disk scale lengths of 1-3 kpc. The tidal forces are strong enough to profoundly influence the collapse geometry. In particular, the usual assumption, that tidal torques produce a system approximately in solid-body rotation, is shown to be seriously in error

  15. Simulating the formation and evolution of galaxies with EvoL, the Padova N-body Tree-SPH code

    International Nuclear Information System (INIS)

    Merlin, E.; Chiosi, C.; Grassi, T.; Buonomo, U.; Chinellato, S.

    2009-01-01

    The importance of numerical simulations in astrophysics is constantly growing, because of the complexity, the multi-scaling properties and the non-linearity of many physical phenomena. In particular, cosmological and galaxy-sized simulations of structure formation have cast light on different aspects, giving answers to many questions, but raising a number of new issues to be investigated. Over the last decade, great effort has been devoted in Padova to developing a tool explicitly designed to study the problem of galaxy formation and evolution, with particular attention to early-type galaxies. To this aim, many simulations have been run on CINECA supercomputers (see publications list below). The next step is the new release of EvoL, a Fortran N-body code capable of following in great detail many different aspects of stellar, interstellar and cosmological physics. In particular, special care has been paid to the properties of stars and their interplay with the surrounding interstellar medium (ISM), as well as to the multiphase nature of the ISM, to the setting of the initial and boundary conditions, and to the correct description of gas physics via modern formulations of the classical Smoothed Particle Hydrodynamics algorithms. Moreover, a powerful tool to compare numerical predictions with observables has been developed, self-consistently closing the whole package. A library of new simulations, run with EvoL on CINECA supercomputers, is to be built in the coming years, while new physics, including magnetic properties of matter and more exotic energy feedback effects, is to be added.

  16. Structure formation from non-Gaussian initial conditions: Multivariate biasing, statistics, and comparison with N-body simulations

    International Nuclear Information System (INIS)

    Giannantonio, Tommaso; Porciani, Cristiano

    2010-01-01

    We study structure formation in the presence of primordial non-Gaussianity of the local type with parameters f_NL and g_NL. We show that the distribution of dark-matter halos is naturally described by a multivariate bias scheme where the halo overdensity depends not only on the underlying matter density fluctuation δ but also on the Gaussian part of the primordial gravitational potential φ. This corresponds to a non-local bias scheme in terms of δ only. We derive the coefficients of the bias expansion as a function of the halo mass by applying the peak-background split to common parametrizations for the halo mass function in the non-Gaussian scenario. We then compute the halo power spectrum and halo-matter cross spectrum in the framework of Eulerian perturbation theory up to third order. Comparing our results against N-body simulations, we find that our model accurately describes the numerical data for wave numbers k ≤ 0.1-0.3 h Mpc⁻¹ depending on redshift and halo mass. In our multivariate approach, perturbations in the halo counts trace φ on large scales, and this explains why the halo and matter power spectra show different asymptotic trends for k→0. This strongly scale-dependent bias originates from terms at leading order in our expansion. This is different from what happens using the standard univariate local bias where the scale-dependent terms come from badly behaved higher-order corrections. On the other hand, our biasing scheme reduces to the usual local bias on smaller scales, where |φ| is typically much smaller than the density perturbations. We finally discuss the halo bispectrum in the context of multivariate biasing and show that, due to its strong scale and shape dependence, it is a powerful tool for the detection of primordial non-Gaussianity from future galaxy surveys.

  17. COUNTS-IN-CYLINDERS IN THE SLOAN DIGITAL SKY SURVEY WITH COMPARISONS TO N-BODY SIMULATIONS

    International Nuclear Information System (INIS)

    Berrier, Heather D.; Barton, Elizabeth J.; Bullock, James S.; Berrier, Joel C.; Zentner, Andrew R.; Wechsler, Risa H.

    2011-01-01

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments, and a vital test of models of galaxy formation within the prevailing hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey Data Release 4 (SDSS DR4). We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations and data from SDSS DR4, to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and both the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent empirical models of galaxy clustering that match observed two- and three-point clustering statistics well nonetheless fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h^-1 Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h^-1 Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h^-1 Mpc cylinder than the galaxies in any of the models we use. Simple phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
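
    For orientation, a minimal flat-sky sketch of the counts-in-cylinders statistic is given below: for each galaxy it counts the companions lying within a chosen projected radius and line-of-sight velocity window. The array names and the O(N^2) loop are illustrative only; an actual survey analysis would work in sky coordinates, use a spatial tree and apply edge corrections.

        import numpy as np

        def counts_in_cylinders(x, y, v_los, r_proj, dv_max):
            """Number of companions per galaxy inside a cylinder.

            x, y   : transverse comoving positions [h^-1 Mpc] (flat-sky approximation)
            v_los  : line-of-sight velocities (or cz) [km/s]
            r_proj : projected radius of the cylinder [h^-1 Mpc]
            dv_max : half-depth of the cylinder in velocity [km/s]
            """
            x, y, v_los = map(np.asarray, (x, y, v_los))
            counts = np.zeros(len(x), dtype=int)
            for i in range(len(x)):
                dr2 = (x - x[i])**2 + (y - y[i])**2
                in_cyl = (dr2 <= r_proj**2) & (np.abs(v_los - v_los[i]) <= dv_max)
                counts[i] = in_cyl.sum() - 1   # exclude the galaxy itself
            return counts

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0.0, 100.0, (2, 500))   # mock galaxies in a 100x100 (h^-1 Mpc)^2 field
        v = rng.normal(0.0, 300.0, 500)
        print(np.bincount(counts_in_cylinders(x, y, v, r_proj=3.0, dv_max=500.0))[:5])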

  18. Simulations of collisions between interacting classical N-body systems

    Energy Technology Data Exchange (ETDEWEB)

    Morisseau, Francois [Laboratoire de Physique Corpusculaire de CAEN, ENSICAEN, Universite de Caen Basse-Normandie, UFR des Sciences, 6 bd Marechal Juin, 14050 Caen Cedex (France)

    2006-05-15

    The Classical N-body Dynamics (CNBD) code is dedicated to the simulation of collisions between classical systems. The 2-body interaction used here has the properties of the Van der Waals potential and depends on just a few parameters. This work has two main goals. First, some theoretical approaches assume that the dynamical stage of the collisions plays an important role. Moreover, colliding nuclei are expected to present a first-order liquid-gas phase transition. Several signals have been introduced to demonstrate this transition. We have searched for two of them: the bimodality of the mass asymmetry and negative heat capacity. We have found both and we explain their presence in our calculations. Second, we have improved the interaction by adding a Coulomb-like potential and by taking into account the stronger proton-neutron interaction in nuclei. We have then determined the relations that exist between the parameters of the 2-body interaction and the properties of the systems. These studies allow us to fit the properties of the classical systems to those of the nuclei. In this manuscript the first results of this fit are shown. (author)
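
    The functional form of the CNBD 2-body interaction is not reproduced in this record. Purely as a stand-in with the same qualitative ingredients (a repulsive core, an attractive Van der Waals-like tail, and an optional Coulomb-like term for charged constituents), one can picture a Lennard-Jones-plus-Coulomb pair potential:

        import numpy as np

        def pair_potential(r, epsilon=1.0, sigma=1.0, q1q2=0.0, k_coulomb=1.0):
            """Illustrative 2-body potential: Lennard-Jones core plus optional Coulomb-like term.

            The LJ part mimics Van der Waals behaviour (short-range repulsion,
            intermediate-range attraction); q1q2 stands in for the added
            Coulomb-like interaction between charged constituents.
            """
            r = np.asarray(r, dtype=float)
            lj = 4.0 * epsilon * ((sigma / r)**12 - (sigma / r)**6)
            coulomb = k_coulomb * q1q2 / r
            return lj + coulomb

        r = np.linspace(0.9, 3.0, 5)
        print(pair_potential(r))                # neutral pair: pure LJ values
        print(pair_potential(r, q1q2=1.0))      # like charges: extra repulsion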

  19. PROBING THE ROLE OF DYNAMICAL FRICTION IN SHAPING THE BSS RADIAL DISTRIBUTION. I. SEMI-ANALYTICAL MODELS AND PRELIMINARY N-BODY SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Miocchi, P.; Lanzoni, B.; Ferraro, F. R.; Dalessandro, E.; Alessandrini, E. [Dipartimento di Fisica e Astronomia, Università di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna (Italy); Pasquato, M.; Lee, Y.-W. [Department of Astronomy and Center for Galaxy Evolution Research, Yonsei University, Seoul 120-749 (Korea, Republic of); Vesperini, E. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States)

    2015-01-20

    We present semi-analytical models and simplified N-body simulations with 10^4 particles aimed at probing the role of dynamical friction (DF) in determining the radial distribution of blue straggler stars (BSSs) in globular clusters. The semi-analytical models show that DF (which is the only evolutionary mechanism at work) is responsible for the formation of a bimodal distribution with a dip progressively moving toward the external regions of the cluster. However, these models fail to reproduce the formation of the long-lived central peak observed in all dynamically evolved clusters. The results of N-body simulations confirm the formation of a sharp central peak, which remains as a stable feature over time regardless of the initial concentration of the system. In spite of noisy behavior, a bimodal distribution forms in many cases, with the size of the dip increasing as a function of time. In the most advanced stages, the distribution becomes monotonic. These results are in agreement with the observations. Also, the shape of the peak and the location of the minimum (which, in most cases, is within 10 core radii) turn out to be consistent with observational results. For a more detailed and close comparison with observations, including a proper calibration of the timescales of the dynamical processes driving the evolution of the BSS spatial distribution, more realistic simulations will be necessary.
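
    The mechanism driving these models is dynamical friction. As a generic illustration (not the authors' calibration), the magnitude of the Chandrasekhar deceleration for a massive star moving through a Maxwellian stellar background can be evaluated as follows; all input numbers are toy values.

        from math import erf, exp, pi, sqrt

        def chandrasekhar_df_decel(M, v, rho, sigma, ln_lambda=10.0, G=4.3009e-6):
            """Magnitude of the Chandrasekhar dynamical-friction deceleration.

            M     : mass of the heavy star (e.g. a BSS) [Msun]
            v     : its speed relative to the background stars [km/s]
            rho   : local background stellar mass density [Msun/kpc^3]
            sigma : 1D velocity dispersion of the background [km/s]
            G     : gravitational constant in kpc (km/s)^2 / Msun
            Returns the deceleration in (km/s)^2 / kpc.
            """
            X = v / (sqrt(2.0) * sigma)
            maxwellian = erf(X) - 2.0 * X / sqrt(pi) * exp(-X * X)
            return 4.0 * pi * G**2 * M * rho * ln_lambda * maxwellian / v**2

        # toy numbers for a ~1.2 Msun blue straggler in a dense cluster core
        print(chandrasekhar_df_decel(M=1.2, v=10.0, rho=1.0e5, sigma=7.0))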

  20. Star formation in N-body simulations .1. The impact of the stellar ultraviolet radiation on star formation

    NARCIS (Netherlands)

    Gerritsen, J. P. E.; Icke, V.

    We present numerical simulations of isolated disk galaxies including gas dynamics and star formation. The gas is allowed to cool to 10 K, while heating of the gas is provided by the far-ultraviolet flux of all stars. Stars are allowed to form from the gas according to a Jeans instability criterion:

  1. Initial conditions for cosmological N-body simulations of the scalar sector of theories of Newtonian, Relativistic and Modified Gravity

    International Nuclear Information System (INIS)

    Valkenburg, Wessel; Hu, Bin

    2015-01-01

    We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology
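
    The baseline being generalized is the standard Newtonian Zel'dovich Approximation, in which particles are displaced from a regular grid according to x = q + D(t) Ψ(q), with Ψ(k) = i k δ(k)/k^2 so that δ = −∇·Ψ at first order. A minimal sketch of that Newtonian baseline on a periodic grid (not of the modified-gravity extension developed in the paper):

        import numpy as np

        def zeldovich_displacement(delta, box_size):
            """Newtonian Zel'dovich displacement field on a periodic N^3 grid.

            delta    : 3D linear overdensity field
            box_size : comoving box size (sets the units of the output)
            Returns psi with shape (3, N, N, N); particle positions then follow
            x = q + D(t) * psi(q) for a chosen growth factor D(t).
            """
            n = delta.shape[0]
            k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
            kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0                    # avoid 0/0; the zero mode is removed below
            delta_k = np.fft.fftn(delta)
            psi = []
            for ki in (kx, ky, kz):
                psi_k = 1j * ki * delta_k / k2   # Psi(k) = i k delta(k) / k^2
                psi_k[0, 0, 0] = 0.0
                psi.append(np.real(np.fft.ifftn(psi_k)))
            return np.array(psi)

        rng = np.random.default_rng(1)
        delta = rng.normal(0.0, 0.1, (32, 32, 32))   # toy Gaussian field
        print(zeldovich_displacement(delta, box_size=100.0).shape)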

  2. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    Science.gov (United States)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force and the direct summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar, space-filling Peano-Hilbert curve is used to decompose the computational domain. Thread-level programming is used to flexibly manage the domain communication, PM calculation and synchronization, as well as the Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
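
    Hybrid codes of this kind split the Newtonian pair force into a smooth long-range part handled by the mesh and a short-range remainder handled by the tree and PP kernels. The record does not state the exact kernel used in PHoToNs, so the sketch below shows the generic erfc-based split familiar from TreePM codes, for illustration only.

        import numpy as np
        from scipy.special import erfc

        def split_pair_force(r, r_split, G=1.0, m1=1.0, m2=1.0):
            """Generic TreePM-style decomposition of the 1/r^2 pair force.

            The short-range part (handled by Tree/PP) is smoothly cut off at the
            scale r_split via erfc; the long-range remainder is what the mesh (PM)
            solver is responsible for. short + long recovers the exact force.
            """
            r = np.asarray(r, dtype=float)
            newton = G * m1 * m2 / r**2
            x = r / (2.0 * r_split)
            short = newton * (erfc(x) + 2.0 * x / np.sqrt(np.pi) * np.exp(-x * x))
            long_range = newton - short
            return short, long_range

        r = np.array([0.1, 1.0, 5.0, 20.0])
        s, l = split_pair_force(r, r_split=1.25)
        print(s / (s + l))    # short-range fraction: ~1 at small r, ~0 at large r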

  3. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    Science.gov (United States)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

    We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one core of an Intel Core i7-2600 processor (8 MB cache, 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with the individual timestep scheme (Makino and Aarseth, 1992) and achieved a performance of ~20 giga floating-point operations per second (GFLOPS) in double precision, which is two times and five times higher than that of a previously developed code implemented with the SSE instructions (Nitadori et al., 2006b) and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code using the so-called NINJA scheme (Nitadori et al., 2006a) and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 Sandy Bridge cores. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
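
    For reference, the fourth-order Hermite scheme of Makino and Aarseth (1992) advances particles with a third-order predictor followed by a corrector built from accelerations and their time derivatives (jerks). The sketch below uses a single shared timestep and a plain Python force loop; a production code like the one above replaces this loop with vectorized (AVX) kernels and individual or block timesteps.

        import numpy as np

        def acc_jerk(pos, vel, mass, eps2=1.0e-6):
            """Direct-summation softened accelerations and jerks (the work done by the SIMD kernel)."""
            acc = np.zeros_like(pos)
            jerk = np.zeros_like(vel)
            for i in range(len(mass)):
                dr = pos - pos[i]
                dv = vel - vel[i]
                r2 = (dr**2).sum(axis=1) + eps2
                rv = (dr * dv).sum(axis=1) / r2
                w = mass / (r2 * np.sqrt(r2))
                w[i] = 0.0                      # no self-interaction
                acc[i] = (w[:, None] * dr).sum(axis=0)
                jerk[i] = (w[:, None] * (dv - 3.0 * rv[:, None] * dr)).sum(axis=0)
            return acc, jerk

        def hermite_step(pos, vel, mass, dt):
            """One shared-timestep fourth-order Hermite predictor-corrector step."""
            a0, j0 = acc_jerk(pos, vel, mass)
            pos_p = pos + vel * dt + a0 * dt**2 / 2.0 + j0 * dt**3 / 6.0     # predictor
            vel_p = vel + a0 * dt + j0 * dt**2 / 2.0
            a1, j1 = acc_jerk(pos_p, vel_p, mass)                            # re-evaluate forces
            vel_c = vel + (a0 + a1) * dt / 2.0 + (j0 - j1) * dt**2 / 12.0    # corrector
            pos_c = pos + (vel + vel_c) * dt / 2.0 + (a0 - a1) * dt**2 / 12.0
            return pos_c, vel_c

        # toy binary, advanced by one step
        pos = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
        vel = np.array([[0.0, 0.35, 0.0], [0.0, -0.35, 0.0]])
        mass = np.array([0.5, 0.5])
        print(hermite_step(pos, vel, mass, dt=0.01)[0])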

  4. The morphological evolution and internal convection of ExB-drifting plasma clouds: Theory, dielectric-in-cell simulations, and N-body dielectric simulations

    International Nuclear Information System (INIS)

    Borovsky, J.E.; Hansen, P.J.

    1998-01-01

    The evolution of ExB-drifting plasma clouds is investigated with the aid of a computational technique denoted here as "dielectric-in-cell." Many of the familiar phenomena associated with clouds of collisionless plasma are seen and explained and less-well-known phenomena associated with convection patterns, with the stripping of cloud material, and with the evolution of plasma clouds composed of differing ion species are investigated. The effects of spatially uniform diffusion are studied with the dielectric-in-cell technique and with another computational technique denoted as "N-body dielectric"; the suppression of convection, the suppression of structure growth, the increase in material stripping, and the evolution of cloud anisotropy are examined. copyright 1998 American Institute of Physics

  5. Visualizing astrophysical N-body systems

    International Nuclear Information System (INIS)

    Dubinski, John

    2008-01-01

    I begin with a brief history of N-body simulation and visualization and then go on to describe various methods for creating images and animations of modern simulations in cosmology and galactic dynamics. These techniques are incorporated into a specialized particle visualization software library called MYRIAD that is designed to render images within large parallel N-body simulations as they run. I present several case studies that explore the application of these methods to animations in star clusters, interacting galaxies and cosmological structure formation.
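
    The MYRIAD library itself is not described further in this record. The simplest of the rendering approaches alluded to, projecting particle positions onto a log-scaled two-dimensional histogram, can be sketched as follows (function and file names are illustrative, not MYRIAD's API).

        import numpy as np
        import matplotlib.pyplot as plt

        def render_particles(x, y, npix=512, extent=None, fname="render.png"):
            """Minimal particle render: 2D histogram of positions, log-scaled."""
            img, xe, ye = np.histogram2d(x, y, bins=npix, range=extent)
            plt.figure(figsize=(5, 5))
            plt.imshow(np.log10(img.T + 1.0), origin="lower", cmap="inferno",
                       extent=[xe[0], xe[-1], ye[0], ye[-1]])
            plt.axis("off")
            plt.savefig(fname, dpi=150, bbox_inches="tight")
            plt.close()

        # toy "galaxy": an exponential disk of 100,000 particles
        rng = np.random.default_rng(2)
        radius = rng.exponential(1.0, 100000)
        phi = rng.uniform(0.0, 2.0 * np.pi, 100000)
        render_particles(radius * np.cos(phi), radius * np.sin(phi),
                         extent=[[-5, 5], [-5, 5]])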

  6. Adaptive resolution simulation of salt solutions

    International Nuclear Information System (INIS)

    Bevc, Staš; Praprotnik, Matej; Junghans, Christoph; Kremer, Kurt

    2013-01-01

    We present an adaptive resolution simulation of aqueous salt (NaCl) solutions at ambient conditions using the adaptive resolution scheme. Our multiscale approach concurrently couples the atomistic and coarse-grained models of the aqueous NaCl, where water molecules and ions change their resolution while moving from one resolution domain to the other. We employ standard extended simple point charge (SPC/E) and simple point charge (SPC) water models in combination with AMBER and GROMOS force fields for ion interactions in the atomistic domain. Electrostatics in our model are described by the generalized reaction field method. The effective interactions for water–water and water–ion interactions in the coarse-grained model are derived using a structure-based coarse-graining approach, while the Coulomb interactions between ions are appropriately screened. To ensure an even distribution of water molecules and ions across the simulation box we employ thermodynamic forces. We demonstrate that the equilibrium structural properties, e.g. radial distribution functions and density distributions of all the species, and dynamical properties are correctly reproduced by our adaptive resolution method. Our multiscale approach, which is general and can be used for any classical non-polarizable force field and/or types of ions, will significantly speed up biomolecular simulations involving aqueous salt solutions. (paper)

  7. Adaptive Resolution Simulation of MARTINI Solvents

    NARCIS (Netherlands)

    Zavadlav, Julija; Melo, Manuel N.; Cunha, Ana V.; de Vries, Alex H.; Marrink, Siewert J.; Praprotnik, Matej

    We present adaptive resolution dynamics simulations of aqueous and apolar solvents using coarse-grained molecular models that are compatible with the MARTINI force field. As representatives of both classes of solvents we have chosen liquid water and butane, respectively, at ambient temperature. The solvent

  8. STAR FORMATION AND FEEDBACK IN SMOOTHED PARTICLE HYDRODYNAMIC SIMULATIONS. II. RESOLUTION EFFECTS

    International Nuclear Information System (INIS)

    Christensen, Charlotte R.; Quinn, Thomas; Bellovary, Jillian; Stinson, Gregory; Wadsley, James

    2010-01-01

    We examine the effect of mass and force resolution on a specific star formation (SF) recipe using a set of N-body/smooth particle hydrodynamic simulations of isolated galaxies. Our simulations span halo masses from 10^9 to 10^13 M_sun, more than 4 orders of magnitude in mass resolution, and 2 orders of magnitude in the gravitational softening length, ε, representing the force resolution. We examine the total global SF rate, the SF history, and the quantity of stellar feedback and compare the disk structure of the galaxies. Based on our analysis, we recommend using at least 10^4 particles each for the dark matter (DM) and gas component and a force resolution of ε ∼ 10^-3 R_vir when studying global SF and feedback. When the spatial distribution of stars is important, the number of gas and DM particles must be increased to at least 10^5 of each. Low-mass resolution simulations with fixed softening lengths show particularly weak stellar disks due to two-body heating. While decreasing spatial resolution in low-mass resolution simulations limits two-body effects, density and potential gradients cannot be sustained. Regardless of the softening, low-mass resolution simulations contain fewer high density regions where SF may occur. Galaxies of approximately 10^10 M_sun display unique sensitivity to both mass and force resolution. This mass of galaxy has a shallow potential and is on the verge of forming a disk. The combination of these factors gives this galaxy the potential for strong gas outflows driven by supernova feedback and makes it particularly sensitive to any changes to the simulation parameters.

  9. Validation of High-resolution Climate Simulations over Northern Europe.

    Science.gov (United States)

    Muna, R. A.

    2005-12-01

    Two AMIP2-type (Gates 1992) experiments have been performed with climate versions of the ARPEGE/IFS model for the North Atlantic, Northern Europe, and Norwegian regions to analyze the effect of increasing resolution on the simulated biases. The ECMWF reanalysis ERA-15 has been used to validate the simulations. Each of the simulations is an integration over the period 1979 to 1996. The global simulations used observed monthly mean sea surface temperatures (SST) as the lower boundary condition. All aspects but the horizontal resolutions are identical in the two simulations. The first simulation has a uniform horizontal resolution of T63. The second one has a variable resolution (T106Lc3) with the highest resolution in the Norwegian Sea. Both simulations have 31 vertical layers at the same locations. For each simulation the results were divided into two seasons: winter (DJF) and summer (JJA). The parameters investigated were mean sea level pressure, geopotential and temperature at 850 hPa and 500 hPa. To find out the causes of the temperature bias during summer, latent and sensible heat flux, total cloud cover and total precipitation were analyzed. The high-resolution simulation exhibits a broadly realistic climate over the Nordic, Arctic and European regions. The overall performance of the simulations shows improvements in essentially all fields investigated with increasing resolution over the target area, both in winter (DJF) and summer (JJA).

  10. KiDS-450: cosmological constraints from weak-lensing peak statistics - II: Inference from shear peaks using N-body simulations

    Science.gov (United States)

    Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko

    2018-02-01

    We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg^2 of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S_8 = σ_8 √(Ω_m/0.3), which probes the (Ω_m, σ_8) plane perpendicularly to its main degeneracy. We estimate S_8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high significance peaks alone (3 ≤ S/N ≤ 4) which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S_8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.

  11. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    Science.gov (United States)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This will strongly affect the assembly history of objects, and has the same effect of simulating different cosmologies. The resolution of ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark matter only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but are expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  12. Operational High Resolution Chemical Kinetics Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are critical to addressing urgent issues in both the developed and developing world. Ongoing demand for higher resolution...

  13. Playing With Conflict: Teaching Conflict Resolution through Simulations and Games

    Science.gov (United States)

    Powers, Richard B.; Kirkpatrick, Kat

    2013-01-01

    Playing With Conflict is a weekend course for graduate students in Portland State University's Conflict Resolution program and undergraduates in all majors. Students participate in simulations, games, and experiential exercises to learn and practice conflict resolution skills. Graduate students create a guided role-play of a conflict. In addition…

  14. Low-resolution simulations of vesicle suspensions in 2D

    Science.gov (United States)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions and, correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.

  15. Atomic Force Microscopy and Real Atomic Resolution. Simple Computer Simulations

    NARCIS (Netherlands)

    Koutsos, V.; Manias, E.; Brinke, G. ten; Hadziioannou, G.

    1994-01-01

    Using a simple computer simulation for AFM imaging in the contact mode, pictures with true and false atomic resolution are demonstrated. The surface probed consists of two f.c.c. (111) planes and an atomic vacancy is introduced in the upper layer. Changing the size of the effective tip and its

  16. The relative entropy is fundamental to adaptive resolution simulations

    Science.gov (United States)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.
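
    For reference, the discrete relative entropy (Kullback–Leibler divergence) between the atomistic configurational distribution p_AA and the distribution implied by the coarse-grained model p_CG, as commonly written in systematic coarse-graining (the mapping-entropy term of Shell's formulation is omitted here), reads

        S_{\mathrm{rel}} \;=\; \sum_{i} p_{\mathrm{AA}}(i)\,
            \ln\!\left[\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\bigl(M(i)\bigr)}\right] \;\ge\; 0,

    where M maps atomistic configurations i onto coarse-grained ones; minimizing S_rel over the coarse-grained parameters yields the relative-entropy-optimal potentials referred to above.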

  17. Study of drift tube resolution using numerical simulations

    International Nuclear Information System (INIS)

    Lundin, M.C.

    1990-01-01

    The results of a simulation of straw tube detector response are presented. These gas ionization detectors and the electronics which must presumably go along with them are characterized in a simple but meaningful manner. The physical processes which comprise the response of the individual straw tubes are broken down and examined in detail. Different parameters of the simulation are varied and the resulting predictions of drift tube spatial resolution are shown. In addition, small aspects of the predictions are compared to recent laboratory results, which can be seen as a measure of the simulation's usefulness. 10 refs., 8 figs

  18. Impact of ocean model resolution on CCSM climate simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kirtman, Ben P.; Rousset, Clement; Siqueira, Leo [University of Miami, Rosenstiel School for Marine and Atmospheric Science, Coral Gables, FL (United States); Bitz, Cecilia [University of Washington, Department of Atmospheric Science, Seattle, WA (United States); Bryan, Frank; Dennis, John; Hearn, Nathan; Loft, Richard; Tomas, Robert; Vertenstein, Mariana [National Center for Atmospheric Research, Boulder, CO (United States); Collins, William [University of California, Berkeley, Berkeley, CA (United States); Kinter, James L.; Stan, Cristiana [Center for Ocean-Land-Atmosphere Studies, Calverton, MD (United States); George Mason University, Fairfax, VA (United States)

    2012-09-15

    The current literature provides compelling evidence suggesting that an eddy-resolving (as opposed to eddy-permitting or eddy-parameterized) ocean component model will significantly impact the simulation of the large-scale climate, although this has not been fully tested to date in multi-decadal global coupled climate simulations. The purpose of this paper is to examine how resolved ocean fronts and eddies impact the simulation of large-scale climate. The model used for this study is the NCAR Community Climate System Model version 3.5 (CCSM3.5) - the forerunner to CCSM4. Two experiments are reported here. The control experiment is a 155-year present-day climate simulation using a 0.5° atmosphere component (zonal resolution 0.625°, meridional resolution 0.5°; land surface component at the same resolution) coupled to ocean and sea-ice components with a zonal resolution of 1.2° and a meridional resolution varying from 0.27° at the equator to 0.54° in the mid-latitudes. The second simulation uses the same atmospheric and land-surface models coupled to eddy-resolving 0.1° ocean and sea-ice component models. The simulations are compared in terms of how the representation of smaller scale features in the time mean ocean circulation and ocean eddies impact the mean and variable climate. In terms of the global mean surface temperature, the enhanced ocean resolution leads to a ubiquitous surface warming with a global mean surface temperature increase of about 0.2 °C relative to the control. The warming is largest in the Arctic and regions of strong ocean fronts and ocean eddy activity (i.e., Southern Ocean, western boundary currents). The Arctic warming is associated with significant losses of sea-ice in the high-resolution simulation. The sea surface temperature gradients in the North Atlantic, in particular, are better resolved in the high-resolution model leading to significantly sharper temperature gradients and associated large-scale shifts in the rainfall. In the extra-tropics, the

  19. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.

  20. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    Science.gov (United States)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2014-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally- refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  1. An N-body Integrator for Planetary Rings

    Science.gov (United States)

    Hahn, Joseph M.

    2011-04-01

    A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute-force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of a particle's radial distance to all streamlines. And because particles respond to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute-force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges are executed in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
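
    The central simplification is that, as seen from a nearby particle, each streamline acts as an infinite straight wire of constant linear density λ, whose radial attraction at signed radial offset Δr has magnitude 2Gλ/|Δr|. A minimal sketch of the resulting radial acceleration (variable names and numbers are illustrative, not the author's code):

        import numpy as np

        def streamline_accel(r_particles, r_streamlines, lam, G=6.674e-11):
            """Radial acceleration on ring particles from gravitating streamlines.

            Each streamline is treated locally as an infinite straight wire of
            linear density lam, so its radial pull at offset dr is 2*G*lam/dr,
            directed toward the wire.
            """
            dr = r_particles[:, None] - r_streamlines[None, :]   # signed radial offsets
            with np.errstate(divide="ignore", invalid="ignore"):
                a = -2.0 * G * lam[None, :] / dr                 # attraction toward each wire
            a[~np.isfinite(a)] = 0.0                             # no self-force from a coincident streamline
            return a.sum(axis=1)

        r_streamlines = np.linspace(1.0e8, 1.0e8 + 1.0e5, 50)    # 50 streamlines across a 100 km annulus [m]
        lam = np.full(50, 4.0e8)                                 # constant linear density [kg/m], illustrative
        r_particles = np.array([1.0e8 + 1.0e4, 1.0e8 + 5.0e4, 1.0e8 + 9.0e4])
        print(streamline_accel(r_particles, r_streamlines, lam))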

  2. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    Science.gov (United States)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, we also explored the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of the atmospheric circulation from the 2.5° resolution NCEP reanalysis at 6-hourly intervals, with SSTs from the NCEP reanalysis used as lower boundary forcing. A 60 km resolution CCAM run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was in turn downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high-resolution model runs was to obtain a deeper understanding of the linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  3. The AGORA High-resolution Galaxy Simulations Comparison Project

    OpenAIRE

    Kim Ji-hoon; Abel Tom; Agertz Oscar; Bryan Greg L.; Ceverino Daniel; Christensen Charlotte; Conroy Charlie; Dekel Avishai; Gnedin Nickolay Y.; Goldbaum Nathan J.; Guedes Javiera; Hahn Oliver; Hobbs Alexander; Hopkins Philip F.; Hummels Cameron B.

    2014-01-01

    The Astrophysical Journal Supplement Series 210.1 (2014): 14 reproduced by permission of the AAS We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle o...

  4. Adaptive resolution simulation of an atomistic protein in MARTINI water

    International Nuclear Information System (INIS)

    Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J.; Praprotnik, Matej

    2014-01-01

    We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations

  5. SPECTRA OF STRONG MAGNETOHYDRODYNAMIC TURBULENCE FROM HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Beresnyak, Andrey

    2014-01-01

    Magnetohydrodynamic (MHD) turbulence is present in a variety of solar and astrophysical environments. Solar wind fluctuations with frequencies lower than 0.1 Hz are believed to be mostly governed by Alfvénic turbulence with particle transport depending on the power spectrum and the anisotropy of such turbulence. Recently, conflicting spectral slopes for the inertial range of MHD turbulence have been reported by different groups. Spectral shapes from earlier simulations showed that MHD turbulence is less scale-local compared with hydrodynamic turbulence. This is why higher-resolution simulations and careful, rigorous numerical analysis are especially needed for the MHD case. In this Letter, we present two groups of simulations with resolution up to 4096^3, which are numerically well-resolved and have been analyzed with an exact and well-tested method of scaling study. Our results from both simulation groups indicate that the asymptotic power spectral slope for all energy-related quantities, such as total energy and residual energy, is around –1.7, close to Kolmogorov's –5/3. This suggests that residual energy is a constant fraction of the total energy and that in the asymptotic regime of Alfvénic turbulence magnetic and kinetic spectra have the same scaling. The –1.5 slope for energy and the –2 slope for residual energy, which have been suggested earlier, are incompatible with our numerics.

  6. Constructing high-quality bounding volume hierarchies for N-body computation using the acceptance volume heuristic

    Science.gov (United States)

    Olsson, O.

    2018-01-01

    We present a novel heuristic derived from a probabilistic cost model for approximate N-body simulations. We show that this new heuristic can be used to guide tree construction towards higher quality trees with improved performance over current N-body codes. This represents an important step beyond the current practice of using spatial partitioning for N-body simulations, and enables adoption of a range of state-of-the-art algorithms developed for computer graphics applications to yield further improvements in N-body simulation performance. We outline directions for further developments and review the most promising such algorithms.

  7. Montecarlo simulation for a new high resolution elemental analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion

    1996-12-31

    Full text. Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy is just above the absorption edge associated with each element, because of the emission of characteristic X rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were generated with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, for which a synchrotron would be an ideal source, due to its high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  8. Montecarlo simulation for a new high resolution elemental analysis methodology

    International Nuclear Information System (INIS)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Full text. Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy is just above the absorption edge associated with each element, because of the emission of characteristic X rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were generated with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, for which a synchrotron would be an ideal source, due to its high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  9. An Advanced N -body Model for Interacting Multiple Stellar Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brož, Miroslav [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-18000 Praha 8 (Czech Republic)

    2017-06-01

    We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distributions, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one's disposal, a joint χ^2 metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).
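
    The joint fit simply sums χ^2 contributions from the heterogeneous data sets and hands the total to a standard optimizer. A minimal sketch with hypothetical data-set names ("rv", "ttv") and a toy sinusoidal model, using the simplex method mentioned above:

        import numpy as np
        from scipy.optimize import minimize

        def joint_chi2(params, datasets, model_funcs):
            """Sum of per-dataset chi^2 terms. datasets maps a name to (t, obs, sigma);
            model_funcs maps the same name to a callable model(params, t)."""
            total = 0.0
            for name, (t, obs, sigma) in datasets.items():
                resid = (obs - model_funcs[name](params, t)) / sigma
                total += np.sum(resid**2)
            return total

        # toy example: fit an amplitude and a period jointly to two data series
        model_funcs = {
            "rv":  lambda p, t: p[0] * np.sin(2.0 * np.pi * t / p[1]),
            "ttv": lambda p, t: 0.1 * p[0] * np.cos(2.0 * np.pi * t / p[1]),
        }
        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 10.0, 40)
        truth = (5.0, 3.0)
        datasets = {
            "rv":  (t, model_funcs["rv"](truth, t) + rng.normal(0.0, 0.2, t.size), 0.2),
            "ttv": (t, model_funcs["ttv"](truth, t) + rng.normal(0.0, 0.05, t.size), 0.05),
        }
        best = minimize(joint_chi2, x0=[4.0, 2.9], args=(datasets, model_funcs),
                        method="Nelder-Mead")   # simplex, as in the paper
        print(best.x)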

  10. Hydrodynamics in adaptive resolution particle simulations: Multiparticle collision dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Alekseeva, Uliana, E-mail: Alekseeva@itc.rwth-aachen.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); German Research School for Simulation Sciences (GRS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Winkler, Roland G., E-mail: r.winkler@fz-juelich.de [Theoretical Soft Matter and Biophysics, Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Sutmann, Godehard, E-mail: g.sutmann@fz-juelich.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); ICAMS, Ruhr-University Bochum, D-44801 Bochum (Germany)

    2016-06-01

    A new adaptive resolution technique for particle-based multi-level simulations of fluids is presented. In the approach, the representation of fluid and solvent particles is changed on the fly between an atomistic and a coarse-grained description. The present approach is based on a hybrid coupling of the multiparticle collision dynamics (MPC) method and molecular dynamics (MD), thereby coupling stochastic and deterministic particle-based methods. Hydrodynamics is examined by calculating velocity and current correlation functions for various mixed and coupled systems. We demonstrate that hydrodynamic properties of the mixed fluid are conserved by a suitable coupling of the two particle methods, and that the simulation results agree well with theoretical expectations.
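
    In the multiparticle collision dynamics (stochastic rotation dynamics) picture used on the coarse-grained side of such couplings, particles are sorted into collision cells and their velocities relative to the cell centre of mass are rotated by a fixed angle about a random axis, which conserves momentum and energy per cell by construction. A minimal sketch of that collision step alone (not of the hybrid MPC-MD coupling itself):

        import numpy as np

        def srd_collision(pos, vel, box, cell_size=1.0, alpha=np.deg2rad(130.0), rng=None):
            """One stochastic-rotation-dynamics collision step (momentum conserving)."""
            rng = np.random.default_rng() if rng is None else rng
            shift = rng.uniform(0.0, cell_size, 3)           # random grid shift (Galilean invariance)
            cells = np.floor(((pos + shift) % box) / cell_size).astype(int)
            keys = cells[:, 0] + 1000 * (cells[:, 1] + 1000 * cells[:, 2])
            for key in np.unique(keys):
                idx = np.where(keys == key)[0]
                if len(idx) < 2:
                    continue
                v_cm = vel[idx].mean(axis=0)
                axis = rng.normal(size=3)
                axis /= np.linalg.norm(axis)
                dv = vel[idx] - v_cm
                # Rodrigues rotation of the relative velocities about the random axis
                dv_rot = (dv * np.cos(alpha)
                          + np.cross(axis, dv) * np.sin(alpha)
                          + axis * (dv @ axis)[:, None] * (1.0 - np.cos(alpha)))
                vel[idx] = v_cm + dv_rot
            return vel

        rng = np.random.default_rng(4)
        pos = rng.uniform(0.0, 10.0, (2000, 3))
        vel = rng.normal(0.0, 1.0, (2000, 3))
        p_before = vel.sum(axis=0)
        vel = srd_collision(pos, vel, box=10.0, rng=rng)
        print(np.allclose(p_before, vel.sum(axis=0)))        # total momentum is conserved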

  11. Validation of two high‐resolution climate simulations over Scandinavia

    DEFF Research Database (Denmark)

    Mayer, Stephanie; Maule, Cathrine Fox; Sobolowski, Stefan

    2014-01-01

    ., 2007) and to evaluate to what degree the models simulate observed weather. This is done by performing a so‐called perfect boundary experiment by dynamically downscaling ERA interim data. The atmospheric models WRF and HIRHAM5 were used as regional climate models (RCMs) in this study. Both models were...... are employed to examine the performance of the RCMs behaviour on a seasonal to sub‐daily time scale. Both models exhibit a wet bias of 50‐100 % (1‐3 mm) in seasonal precipitation. This bias is most pronounced during winter. The lower‐resolution reanalysis data underestimates wet‐day precipitation in all four...... season by 13‐36 % over the selected cities Bergen, Oslo and Copenhagen. The RCM simulations show a reduction of this underestimation and even indicate a sign change in some seasons/locations. A spatio‐temporal evaluation of downscaled precipitation extremes shows that both RCM downscalings are much...

  12. Kinetic energy spectra, vertical resolution and dissipation in high-resolution atmospheric simulations.

    Science.gov (United States)

    Skamarock, W. C.

    2017-12-01

    We have performed week-long full-physics simulations with the MPAS global model at 15 km cell spacing using vertical mesh spacings of 800, 400, 200 and 100 meters in the mid-troposphere through the mid-stratosphere. We find that the horizontal kinetic energy spectra in the upper troposphere and stratosphere do not converge with increasing vertical resolution until we reach 200 meter level spacing. Examination of the solutions indicates that significant inertia-gravity waves are not vertically resolved at the lower vertical resolutions. Diagnostics from the simulations indicate that the primary kinetic energy dissipation results from the vertical mixing within the PBL parameterization and from the gravity-wave drag parameterization, with smaller but significant contributions from damping in the vertical transport scheme and from the horizontal filters in the dynamical core. Most of the kinetic energy dissipation in the free atmosphere occurs within breaking mid-latitude baroclinic waves. We will briefly review these results and their implications for atmospheric model configuration and for atmospheric dynamics, specifically the dynamics associated with the mesoscale kinetic energy spectrum.

  13. Sampling general N-body interactions with auxiliary fields

    Science.gov (United States)

    Körber, C.; Berkowitz, E.; Luu, T.

    2017-09-01

    We present a general auxiliary field transformation which generates effective interactions containing all possible N-body contact terms. The strength of the induced terms can analytically be described in terms of general coefficients associated with the transformation and thus are controllable. This transformation provides a novel way for sampling 3- and 4-body (and higher) contact interactions non-perturbatively in lattice quantum Monte Carlo simulations. As a proof of principle, we show that our method reproduces the exact solution for a two-site quantum mechanical problem.
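
    For orientation, the familiar two-body special case of such a transformation is the Gaussian (Hubbard–Stratonovich) identity, which trades a squared density operator for a linear coupling to an auxiliary field φ,

        e^{\frac{1}{2} A \hat{n}^{2}}
          \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \mathrm{d}\phi\,
                e^{-\frac{1}{2}\phi^{2} + \sqrt{A}\,\phi\,\hat{n}} ,
          \qquad A > 0

    (operator-ordering subtleties aside). The transformation presented here generalizes this so that the induced effective action contains all N-body contact terms, with coefficients that are controlled analytically.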

  14. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT

    International Nuclear Information System (INIS)

    Kim, Ji-hoon; Conroy, Charlie; Goldbaum, Nathan J.; Krumholz, Mark R.; Abel, Tom; Agertz, Oscar; Gnedin, Nickolay Y.; Kravtsov, Andrey V.; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte; Hummels, Cameron B.; Dekel, Avishai; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly

    2014-01-01

    We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of eight galaxies with halo masses M_vir ≅ 10^10, 10^11, 10^12, and 10^13 M_☉ at z = 0 and two different ('violent' and 'quiescent') assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback prescriptions will be tuned to provide a realistic interstellar and circumgalactic medium using a non-cosmological disk galaxy simulation. Cosmological runs will be systematically compared with each other using a common analysis toolkit and validated against observations to verify that the solutions are robust—i.e., that the astrophysical assumptions are responsible for any success, rather than artifacts of particular implementations. The goals of the AGORA project are, broadly speaking, to raise the realism and predictive power of galaxy simulations and the understanding of the feedback processes that regulate galaxy 'metabolism'. The initial conditions for the AGORA galaxies as well as simulation outputs at various epochs will be made publicly available to the community. The proof-of-concept dark-matter-only test of the formation of a galactic halo with a z = 0 mass of M

  15. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    Science.gov (United States)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system which can facilitate modeling studies on Arctic Ocean and high- to midlatitude linkage, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in the horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both Atlantic Water (AW) mean state and variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of CAA throughflow not only influences the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we address that other efforts including the improvement of parameterizations are still required.

  16. Adaptive resolution simulation of supramolecular water : The concurrent making, breaking, and remaking of water bundles

    NARCIS (Netherlands)

    Zavadlav, Julija; Marrink, Siewert J; Praprotnik, Matej

    The adaptive resolution scheme (AdResS) is a multiscale molecular dynamics simulation approach that can concurrently couple atomistic (AT) and coarse-grained (CG) resolution regions, i.e., the molecules can freely adapt their resolution according to their current position in the system. Coupling to

  17. Cut-free LK quasi-polynomially simulates resolution

    OpenAIRE

    Arai, Noriko

    1998-01-01

    In this paper, the relative efficiency of two propositional systems is studied: resolution and cut-free LK in DAG. We give an upper bound for translation of resolution refutation to cut-free LK proofs. The best upper bound known was 2.

  18. Very high resolution regional climate model simulations over Greenland: Identifying added value

    DEFF Research Database (Denmark)

    Lucas-Picher, P.; Wulff-Nielsen, M.; Christensen, J.H.

    2012-01-01

    This study presents two simulations of the climate over Greenland with the regional climate model (RCM) HIRHAM5 at 0.05° and 0.25° resolution, driven at the lateral boundaries by the ERA-Interim reanalysis for the period 1989–2009. These simulations are validated against observations from ... models. However, the bias between the simulations and the few available observations does not reduce with higher resolution. This is partly explained by the lack of observations in regions where the higher resolution is expected to improve the simulated climate. The RCM simulations show that the temperature has increased the most in the northern part of Greenland and at lower elevations over the period 1989–2009. Higher resolution increases the relief variability in the model topography and causes the simulated precipitation to be larger on the coast and smaller over the main ice sheet compared...

  19. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations

    Science.gov (United States)

    Rino, C. L.; Carrano, C. S.; Yokoyama, T.

    2017-12-01

    In a recent paper, under review, equatorial-plasma-bubble (EPB) simulations were used to conduct a comparative analysis of the EPB spectral characteristics with high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high- and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics to those of the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional index by one. The relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total-electron-content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently

  20. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    Science.gov (United States)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full physics configurations to evaluate variable mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to sensitivity to timestep and interactions between deep convection and large scale condensation, expected from the closure methods. The resulting full physics model produces a comparable climate to the global low resolution mesh and similar high frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g. summertime JJA surface Temp). The simulations are able to reproduce uniform high resolution results, making them an effective tool for regional climate studies and are available in CESM2.

  1. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
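
    The fusion step described above relies on Gaussian-process regression (kriging/coKriging) to estimate boundary conditions for fine-resolution patches from coarse-resolution data. The minimal Python sketch below is not the authors' framework; the squared-exponential kernel, the noise level, and the stand-in 1-D "coarse field" are illustrative assumptions showing only how such an estimate could be formed.

```python
import numpy as np

def sq_exp_kernel(xa, xb, length=0.1, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_estimate(x_coarse, y_coarse, x_query, noise=1e-4):
    """Posterior mean of a zero-mean GP conditioned on coarse samples."""
    K = sq_exp_kernel(x_coarse, x_coarse) + noise * np.eye(len(x_coarse))
    K_star = sq_exp_kernel(x_query, x_coarse)
    alpha = np.linalg.solve(K, y_coarse)
    return K_star @ alpha

# Coarse samples of a 1-D field (illustrative data, not from the paper)
x_coarse = np.linspace(0.0, 1.0, 11)
y_coarse = np.sin(np.pi * x_coarse)          # stand-in coarse solution
patch_boundaries = np.array([0.23, 0.47])    # fine-patch boundary locations
print(gp_estimate(x_coarse, y_coarse, patch_boundaries))
```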

  2. Distribution-independent hierarchical N-body methods

    International Nuclear Information System (INIS)

    Aluru, S.

    1994-01-01

    The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N^2) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log^2 N) in two dimensions and Ω(N log^4 N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of the distribution of the particles. We show that both Greengard's and Barnes-Hut algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log^2 N) in three dimensions
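
    As a generic illustration of the hierarchical idea contrasted above, and not of the distribution-independent structure proposed in the thesis, the following Python sketch builds a 2-D Barnes-Hut quadtree and evaluates softened accelerations with the standard opening criterion; the θ value, the softening length, and the unit masses are illustrative assumptions.

```python
import numpy as np

THETA = 0.5   # opening angle: larger values are faster but less accurate
EPS = 1e-3    # softening length, avoids singular forces at zero separation

class Node:
    """Square region of the plane holding either one body or four children."""

    def __init__(self, center, size):
        self.center = np.asarray(center, dtype=float)
        self.size = size
        self.mass = 0.0
        self.com = np.zeros(2)      # center of mass of everything below this node
        self.children = None

    def _child_index(self, pos):
        return 2 * int(pos[0] > self.center[0]) + int(pos[1] > self.center[1])

    def insert(self, pos, mass):
        """Insert one body; assumes all bodies have distinct positions."""
        pos = np.asarray(pos, dtype=float)
        if self.mass == 0.0:                      # empty leaf: store the body here
            self.mass, self.com = mass, pos.copy()
            return
        if self.children is None:                 # occupied leaf: split it
            offsets = ((-1, -1), (-1, 1), (1, -1), (1, 1))
            self.children = [Node(self.center + 0.25 * self.size * np.array(o),
                                  0.5 * self.size) for o in offsets]
            self.children[self._child_index(self.com)].insert(self.com, self.mass)
        self.children[self._child_index(pos)].insert(pos, mass)
        self.com = (self.com * self.mass + pos * mass) / (self.mass + mass)
        self.mass += mass

    def accel(self, pos):
        """Approximate acceleration at `pos` (G = 1) with the opening criterion."""
        if self.mass == 0.0:
            return np.zeros(2)
        d = self.com - np.asarray(pos, dtype=float)
        r = np.sqrt(d @ d) + EPS
        if self.children is None or self.size / r < THETA:
            return self.mass * d / r**3           # treat the node as a point mass
        return sum(child.accel(pos) for child in self.children)

# Build a tree over random bodies and evaluate the acceleration on one of them
rng = np.random.default_rng(0)
bodies = rng.uniform(-1.0, 1.0, size=(1000, 2))
root = Node(center=(0.0, 0.0), size=2.0)
for b in bodies:
    root.insert(b, 1.0)
print(root.accel(bodies[0]))
```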

  3. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Directory of Open Access Journals (Sweden)

    C. M. R. Mateo

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash–Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  4. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Science.gov (United States)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  5. Very high-resolution regional climate simulations over Scandinavia-present climate

    DEFF Research Database (Denmark)

    Christensen, Ole B.; Christensen, Jens H.; Machenhauer, Bennert

    1998-01-01

    It is found in particular that in mountainous regions the high-resolution simulation shows improvements in the simulation of hydrologically relevant fields such as runoff and snow cover. Also, the distribution of precipitation on different intensity classes is most realistically simulated in the high-resolution simulation. It does, however, inherit certain large-scale systematic errors from the driving GCM. In many cases these errors increase with increasing resolution. Model verification of near-surface temperature and precipitation is made using a new gridded climatology based on a high-density station network for the Scandinavian countries compiled for the present study. The simulated runoff is compared with observed data from Sweden extracted from a Swedish climatological atlas. These runoff data indicate that the precipitation analyses are underestimating the true...

  6. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    Science.gov (United States)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, and it poses several challenges, including the effect of the model's spatial resolution on model performance and accuracy. To address the spatial-resolution effect, the distributed hydrological model Liuxihe was built at several resolutions (1000 m, 600 m, 500 m, 400 m and 200 m grid cells) in order to find the most suitable resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data (a digital elevation model, DEM), soil type and land use type are freely downloaded from the web. The model parameters are optimized with an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. Model resolutions from 200 m to 1000 m grid cells are used for modeling floods in the Liujiang River basin with the Liuxihe model. The best spatial resolution for flood simulation and forecasting is found to be 200 m, and as the spatial resolution becomes coarser, model performance and accuracy deteriorate. At 1000 m resolution the flood simulation and forecasting results are the worst, and the river channel network derived at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold spatial resolution for modeling floods in the Liujiang River basin is a 500 m grid cell, but a 200 m grid cell is recommended in this study to keep the model at its best performance.
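
    The calibration step above relies on an improved Particle Swarm Optimization algorithm whose modifications are not spelled out in the record; the Python sketch below therefore shows only the standard PSO update, with a toy quadratic objective standing in for the hydrological error metric and all parameter values being illustrative.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))      # positions
    v = np.zeros_like(x)                                       # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                     # global best

    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)                             # keep in bounds
        val = np.array([objective(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Illustrative stand-in objective (a real calibration would minimize, e.g.,
# 1 - Nash-Sutcliffe efficiency of simulated vs. observed discharge).
bounds = np.array([[-5.0, 5.0]] * 3)
best, best_val = pso(lambda p: np.sum((p - 1.0) ** 2), bounds)
print(best, best_val)
```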

  7. N-Body Simulations of Tidal Encounters between Stellar Systems

    Indian Academy of Sciences (India)

    Alladin, Saleh Mohammed

    Saleh Mohammed Alladin, International Centre for Theoretical Physics ... concentrate on how the tidal field of the primary changes the mass distribution, energy and angular momentum ... International School for Advanced Studies, Trieste.

  8. Kinetic Energy from Supernova Feedback in High-resolution Galaxy Simulations

    Science.gov (United States)

    Simpson, Christine M.; Bryan, Greg L.; Hummels, Cameron; Ostriker, Jeremiah P.

    2015-08-01

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells’ mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (~10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10^9 M⊙ dwarf halo. We find that in high-density media (≳50 cm^-3) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.

  9. Monte Carlo simulation of the resolution volume for the SEQUOIA spectrometer

    Directory of Open Access Journals (Sweden)

    Granroth G.E.

    2015-01-01

    Monte Carlo ray tracing simulations of direct geometry spectrometers have been particularly useful in instrument design and characterization. However, these tools can also be useful for experiment planning and analysis. To this end, the McStas Monte Carlo ray tracing model of SEQUOIA, the fine-resolution Fermi chopper spectrometer at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL), has been modified to include the time-of-flight resolution sample and detector components. With these components, the resolution ellipsoid can be calculated for any detector pixel and energy bin of the instrument. The simulation is split in two pieces. First, the incident beamline up to the sample is simulated for 1 × 10^11 neutron packets (4 days on 30 cores). This provides a virtual source for the backend that includes the resolution sample and monitor components. Next, a series of detector and energy pixels are computed in parallel. It takes on the order of 30 s to calculate a single resolution ellipsoid on a single core. Python scripts have been written to transform the ellipsoid into the space of an oriented single crystal, and to characterize the ellipsoid in various ways. Though this tool is under development as a planning tool, we have successfully used it to provide the resolution function for convolution with theoretical models. Specifically, theoretical calculations of the spin waves in YFeO3 were compared to measurements taken on SEQUOIA. Though the overall features of the spectra can be explained while neglecting resolution effects, the variation in intensity of the modes is well described once the resolution is included. As this was a single sharp mode, the simulated half-intensity value of the resolution ellipsoid was used to provide the resolution width. A description of the simulation, its use, and paths forward for this technique will be discussed.

  10. Measurement and simulation of the drift pulses and resolution in the micro-jet chamber

    International Nuclear Information System (INIS)

    Va'vra, J.

    1983-01-01

    We have tested a prototype of a micro-jet chamber, using both a nitrogen laser and a 10 GeV electron beam. The achieved resolution in the particle beam was σ = 18 μm for a 1 mm impact parameter and 22 μm when averaging over the entire beam profile. The experimental results were compared to a Monte Carlo program which simulates the pulse shapes and resolution in drift chambers of any geometry. The main emphasis in our simulation analysis was to study various strategies for drift chambers in order to achieve the best possible timing resolution

  11. Measurement and simulation of the inelastic resolution function of a time-of-flight spectrometer

    International Nuclear Information System (INIS)

    Roth, S.V.; Zirkel, A.; Neuhaus, J.; Petry, W.; Bossy, J.; Peters, J.; Schober, H.

    2002-01-01

    The deconvolution of inelastic neutron scattering data requires the knowledge of the inelastic resolution function. The inelastic resolution function of the time-of-flight spectrometer IN5/ILL has been measured by exploiting the sharp resonances of the roton and maxon excitations in superfluid 4 He for the two respective (q,ω) values. The calculated inelastic resolution function for three different instrumental setups is compared to the experimentally determined resolution function. The agreement between simulation and experimental data is excellent, allowing us in principle to extrapolate the simulations and thus to determine the resolution function in the whole accessible dynamic range of IN5 or any other time-of-flight spectrometer. (orig.)

  12. Measurement and simulation of the inelastic resolution function of a time-of-flight spectrometer

    CERN Document Server

    Roth, S V; Neuhaus, J; Petry, W; Bossy, J; Peters, J; Schober, H

    2002-01-01

    The deconvolution of inelastic neutron scattering data requires the knowledge of the inelastic resolution function. The inelastic resolution function of the time-of-flight spectrometer IN5/ILL has been measured by exploiting the sharp resonances of the roton and maxon excitations in superfluid 4He for the two respective (q,ω) values. The calculated inelastic resolution function for three different instrumental setups is compared to the experimentally determined resolution function. The agreement between simulation and experimental data is excellent, allowing us in principle to extrapolate the simulations and thus to determine the resolution function in the whole accessible dynamic range of IN5 or any other time-of-flight spectrometer. (orig.)

  13. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  14. Minimal coupling schemes in N-body reaction theory

    International Nuclear Information System (INIS)

    Picklesimer, A.; Tandy, P.C.; Thaler, R.M.

    1982-01-01

    A new derivation of the N-body equations of Bencze, Redish, and Sloan is obtained through the use of Watson-type multiple scattering techniques. The derivation establishes an intimate connection between these partition-labeled N-body equations and the particle-labeled Rosenberg equations. This result yields new insight into the implicit role of channel coupling in, and the minimal dimensionality of, the partition-labeled equations

  15. Simulation of the Position Resolution of a Scintillation Detector

    CERN Document Server

    Templ, Sebastian; Sauerzopf, Clemens

    In the Standard Model of particle physics, CPT symmetry is regarded as invariant. In order to test this prediction, the ASACUSA collaboration (“Atomic Spectroscopy And Collisions Using Slow Antiprotons”) aims to make a very precise measurement of the hyperfine structure of antihydrogen with a Rabi-like experiment. The comparison of the experimentally obtained antihydrogen transition frequencies with those of hydrogen allows for a direct test of CPT symmetry. The spectrometer line of the ASACUSA HBAR-GSHFS (“Antihydrogen ground state hyperfine splitting”) experiment consists of a particle source, a spin-flip-inducing microwave cavity, a spin-analyzing sextupole magnet, and a detector. In the course of the work for this thesis, a single scintillation detector as used in the hodoscopes of the detector at the end of the spectrometer line was simulated using the particle physics toolkit Geant4. Subsequent analysis of the simulation data allows for an estimate of the minimal uncertainty in determining t...

  16. Impact of atmospheric model resolution on simulation of ENSO feedback processes: a coupled model study

    Science.gov (United States)

    Hua, Lijuan; Chen, Lin; Rong, Xinyao; Su, Jingzhi; Wang, Lu; Li, Tim; Yu, Yongqiang

    2018-03-01

    This study examines El Niño-Southern Oscillation (ENSO)-related air-sea feedback processes in a coupled general circulation model (CGCM) to gauge model errors and pin down their sources in ENSO simulation. Three horizontal resolutions of the atmospheric component (T42, T63 and T106) of the CGCM are used to investigate how the simulated ENSO behaviors are affected by the resolution. We find that air-sea feedback processes in the three experiments mainly differ in terms of both thermodynamic and dynamic feedbacks. We also find that these processes are simulated more reasonably in the highest resolution version than in the other two lower resolution versions. The difference in the thermodynamic feedback arises from the difference in the shortwave-radiation (SW) feedback. Due to the severely (mildly) excessive cold tongue in the lower (higher) resolution version, the SW feedback is severely (mildly) underestimated. The main difference in the dynamic feedback processes lies in the thermocline feedback and the zonal-advection feedback, both of which are caused by the difference in the anomalous thermocline response to anomalous zonal wind stress. The difference in representing the anomalous thermocline response is attributed to the difference in meridional structure of zonal wind stress anomaly in the three simulations, which is linked to meridional resolution.

  17. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    Science.gov (United States)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.

  18. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning the models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short and longer simulations. Such hindcast tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices for the reduction of large model biases dramatically improves the turnaround time for the tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former to survey the likely responses and narrow the parameter space, and the latter to verify the results in climate context along with assessment in

  19. A combined N-body and hydrodynamic code for modeling disk galaxies

    International Nuclear Information System (INIS)

    Schroeder, M.C.

    1989-01-01

    A combined N-body and hydrodynamic computer code for the modeling of two dimensional galaxies is described. The N-body portion of the code is used to calculate the motion of the particle component of a galaxy, while the hydrodynamics portion of the code is used to follow the motion and evolution of the fluid component. A complete description of the numerical methods used for each portion of the code is given. Additionally, the proof tests of the separate and combined portions of the code are presented and discussed. Finally, a discussion of the topics researched with the code and results obtained is presented. These include: the measurement of stellar relaxation times in disk galaxy simulations; the effects of two-armed spiral perturbations on stable axisymmetric disks; the effects of the inclusion of an interstellar medium (ISM) on the stability of disk galaxies; and the effect of the inclusion of stellar evolution on disk galaxy simulations

  20. Speeding up N-body Calculations on Machines without Hardware Square Root

    Directory of Open Access Journals (Sweden)

    Alan H. Karp

    1992-01-01

    The most time consuming part of an N-body simulation is computing the components of the accelerations of the particles. On most machines the slowest part of computing the acceleration is in evaluating r^(-3/2), which is especially true on machines that do the square root in software. This note shows how to cut the time for this part of the calculation by a factor of 3 or more using standard Fortran.
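
    The speed-up described above hinges on evaluating (r^2)^(-3/2), i.e. 1/|r|^3, without calling a square root. The Python sketch below is not Karp's Fortran code; it illustrates one well-known way to do this with a Newton iteration whose initial guess and iteration count are illustrative (a production code would use a scaled or bit-level seed).

```python
def inv_r_cubed(r2, iters=6):
    """Return r2**(-1.5) using only multiplications and additions.

    Newton's method is applied to f(y) = 1/y**2 - r2, whose root is
    y = r2**(-0.5); the needed factor 1/|r|**3 is then y**3.
    """
    y = 1.0 / r2 if r2 > 1.0 else 1.0       # crude seed; converges for this range
    for _ in range(iters):
        y = y * (1.5 - 0.5 * r2 * y * y)    # Newton update, square-root free
    return y * y * y

# Acceleration contribution of particle j on particle i (G = 1):
#   a_i += m_j * (x_j - x_i) * inv_r_cubed(|x_j - x_i|**2)
print(inv_r_cubed(4.0))   # ~0.125, i.e. 1/|r|**3 for |r| = 2
```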

  1. Non-instantaneous gas recycling and chemical evolution in N-body disk galaxies

    Czech Academy of Sciences Publication Activity Database

    Jungwiert, Bruno; Carraro, G.; Dalla Vecchia, C.

    2004-01-01

    Vol. 289, Issues 3-4 (2004), pp. 441-444. ISSN 0004-640X. [From observations to self-consistent modelling of the ISM in galaxies. Porto, 03.09.2002-05.09.2002] R&D Projects: GA ČR GP202/01/D075. Institutional research plan: CEZ:AV0Z1003909. Keywords: N-body simulations * galaxy evolution * gas recycling. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 0.597, year: 2004

  2. Efficient nonparametric n -body force fields from machine learning

    Science.gov (United States)

    Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro

    2018-05-01

    We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
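
    To make the kernel family concrete, the Python sketch below implements only its simplest (two-body) member, comparing two atomic configurations through Gaussian similarities of their interatomic distances; the length scale, the use of raw distances, and the toy configurations are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np
from itertools import combinations

def pair_distances(positions):
    """All interatomic distances of one configuration (N x 3 array)."""
    return np.array([np.linalg.norm(positions[i] - positions[j])
                     for i, j in combinations(range(len(positions)), 2)])

def two_body_kernel(conf_a, conf_b, length=0.5):
    """Sum of Gaussian similarities over all pairs of pair distances.

    As the 2-body member of the family, it can only express energies that
    decompose into pairwise terms; higher-order kernels extend this idea.
    """
    da, db = pair_distances(conf_a), pair_distances(conf_b)
    diff = da[:, None] - db[None, :]
    return np.exp(-0.5 * (diff / length) ** 2).sum()

# Two toy three-atom configurations (coordinates in arbitrary units)
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 0.9, 0.0]])
print(two_body_kernel(a, b))
```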

  3. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic, variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, means that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  4. High resolution simulations of orographic flow over a complex terrain on the Southeast coast of Brazil

    Science.gov (United States)

    Chou, S. C.; Zolino, M. M.; Gomes, J. L.; Bustamante, J. F.; Lima-e-Silva, P. P.

    2012-04-01

    The Eta Model has been used operationally by CPTEC to produce weather forecasts over South America since 1997, and it has gone through several upgrades. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The Eta Model was configured with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by Eta-15km, which in turn is driven by ERA-Interim reanalyses. The model domain includes the two Brazilian cities Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain and coastline. Mountains rise up to about 700 m, and the region suffers frequent floods and landslides. The objective of this work is to evaluate high-resolution simulations of wind and temperature in this complex area. Verification of the model runs uses observations taken from the nuclear power plant. Accurate near-surface wind direction and magnitude are needed for the plant emergency plan, and winds are highly sensitive to model spatial resolution and atmospheric stability. Verification of two cases during summer shows that the model has a clear diurnal-cycle signal for wind in that region. The area is characterized by weak winds, which makes the simulation more difficult. The simulated wind magnitude is about 1.5 m/s, close to the observed values of about 2 m/s; however, the observed change of wind direction with the sea breeze is fast, whereas it is slow in the simulations. The nighttime katabatic flow is captured by the simulations. Comparison against Eta-5km runs shows that the valley circulation is better described in the 2-km resolution run. Simulated temperatures follow the observed diurnal cycle closely. Experiments improving some surface conditions, such as the surface temperature and land cover, show reduced simulation errors and an improved diurnal cycle.

  5. Numerical solutions of the N-body problem

    International Nuclear Information System (INIS)

    Marciniak, A.

    1985-01-01

    Devoted to the study of numerical methods for solving the general N-body problem and related problems, this volume starts with an overview of the conventional numerical methods for solving the initial value problem. The major part of the book contains original work and features a presentation of special numerical methods conserving the constants of motion in the general N-body problem and methods conserving the Jacobi constant in the problem of motion of N bodies in a rotating frame, as well as an analysis of the applications of both (conventional and special) kinds of methods for solving these problems. For all the methods considered, the author presents algorithms which are easily programmable in any computer language. Moreover, the author compares various methods and presents adequate numerical results. The appendix contains PL/I procedures for all the special methods conserving the constants of motion. 91 refs.; 35 figs.; 41 tabs

  6. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    International Nuclear Information System (INIS)

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-01

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.

  7. Surface Wind Regionalization over Complex Terrain: Evaluation and Analysis of a High-Resolution WRF Simulation

    NARCIS (Netherlands)

    Jiménez, P.A.; González-Rouco, J.F.; García-Bustamante, E.; Navarro, J.; Montávez, J.P.; Vilà-Guerau de Arellano, J.; Dudhia, J.; Muñoz-Roldan, A.

    2010-01-01

    This study analyzes the daily-mean surface wind variability over an area characterized by complex topography through comparing observations and a 2-km-spatial-resolution simulation performed with the Weather Research and Forecasting (WRF) model for the period 1992–2005. The evaluation focuses on the

  8. Mitigating the effects of system resolution on computer simulation of Portland cement hydration

    NARCIS (Netherlands)

    Chen, Wei; Brouwers, Jos

    2008-01-01

    CEMHYD3D is an advanced, three-dimensional computer model for simulating the hydration processes of cement, in which the microstructure of the hydrating cement paste is represented by digitized particles in a cubic domain. However, the system resolution (which is determined by the voxel size) has a

  9. Impact of mesh and DEM resolutions in SEM simulation of 3D seismic response

    NARCIS (Netherlands)

    Khan, Saad; van der Meijde, M.; van der Werff, H.M.A.; Shafique, Muhammad

    2017-01-01

    This study shows that the resolution of a digital elevation model (DEM) and model mesh strongly influences 3D simulations of seismic response. Topographic heterogeneity scatters seismic waves and causes variation in seismic response (amplification and deamplification of seismic amplitudes) at the

  10. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics is evaluated over a huge data

  11. Simulated cosmic microwave background maps at 0.5 deg resolution: Basic results

    Science.gov (United States)

    Hinshaw, G.; Bennett, C. L.; Kogut, A.

    1995-01-01

    We have simulated full-sky maps of the cosmic microwave background (CMB) anisotropy expected from cold dark matter (CDM) models at 0.5 deg and 1.0 deg angular resolution. Statistical properties of the maps are presented as a function of sky coverage, angular resolution, and instrument noise, and the implications of these results for observability of the Doppler peak are discussed. The rms fluctuations in a map are not a particularly robust probe of the existence of a Doppler peak; however, a full correlation analysis can provide reasonable sensitivity. We find that sensitivity to the Doppler peak depends primarily on the fraction of sky covered, and only secondarily on the angular resolution and noise level. Color plates of the simulated maps are presented to illustrate the anisotropies.

  12. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    Science.gov (United States)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

    Flooding is a significant threat that causes substantial disruption in many societies worldwide. Ongoing climatic change further increases flood risk, which poses a serious menace to societies and their economies. Improvements in the spatial resolution and accuracy of topography and land-use data provided by remote sensing techniques enable integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) is carried out. High-resolution satellite images are processed. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of land use/land cover maps. In addition, soil maps are updated by means of radar satellite images. These high-resolution data are used to simulate and validate several historical flood events in Mediterranean watersheds that have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resource Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data accordingly provides detailed and highly precise validation results. Furthermore, meteorological forecast data are combined with the simulation results to support the development of an integrated flood forecasting and early warning system capable of confronting or even preventing this imminent risk. The research reported in this paper was fully supported by the
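
    Because the study leans on the NRCS curve number (CN) method, the standard SCS-CN runoff relation is sketched below in Python; the metric-unit form of the retention formula and the default initial-abstraction ratio of 0.2 are the textbook conventions, and the sample storm depth and CN value are illustrative only.

```python
def scs_cn_runoff(precip_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from the SCS curve number relation.

    S  = 25400 / CN - 254           (potential maximum retention, mm)
    Ia = ia_ratio * S               (initial abstraction)
    Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Example: an 80 mm storm on a catchment with CN = 75 (illustrative values)
print(round(scs_cn_runoff(80.0, 75), 1))
```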

  13. N-body scattering solution in coordinate space

    International Nuclear Information System (INIS)

    Cheng-Guang, B.

    1986-01-01

    The Schroedinger equation has been transformed into a set of coupled partial differential equations having hyper-variables as arguments and a procedure for embedding the boundary conditions into the N-body scattering solution by using a set of homogeneous linear algebraic equations is proposed

  14. Comments on "Adaptive resolution simulation in equilibrium and beyond" by H. Wang and A. Agarwal

    Science.gov (United States)

    Klein, R.

    2015-09-01

    Wang and Agarwal (Eur. Phys. J. Special Topics, this issue, 2015, doi: 10.1140/epjst/e2015-02411-2) discuss variants of Adaptive Resolution Molecular Dynamics Simulations (AdResS), and their applications. Here we comment on their report, addressing scaling properties of the method, artificial forcings implemented to ensure constant density across the full simulation despite changing thermodynamic properties of the simulated media, the possible relation between an AdResS system on the one hand and a phase transition phenomenon on the other, and peculiarities of the SPC/E water model.

  15. The fusion of satellite and UAV data: simulation of high spatial resolution band

    Science.gov (United States)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that rely on imagery acquired with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but rather low spectral resolution; therefore, the application of imagery obtained with such technology is quite limited and suitable only for basic land cover classification. To enrich the spectral resolution of imagery acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery acquired with satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were first simulated from the RGB data as a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
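
    The simulated panchromatic band referred to above is a weighted linear combination of the RGB channels; the Python sketch below shows one plausible form of that step, where the equal weights and the normalisation are illustrative choices rather than the authors' coefficients.

```python
import numpy as np

def simulate_pan(rgb, weights=(1/3, 1/3, 1/3)):
    """Simulate a panchromatic band as a linear combination of R, G, B.

    rgb:      H x W x 3 array of reflectance or digital-number values
    weights:  per-channel weights (illustrative equal weights by default)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalise to keep the radiometric scale
    return np.tensordot(rgb, w, axes=([2], [0]))

# Toy 2 x 2 RGB image
rgb = np.array([[[0.2, 0.4, 0.6], [0.1, 0.1, 0.1]],
                [[0.9, 0.8, 0.7], [0.3, 0.5, 0.2]]])
pan = simulate_pan(rgb)
print(pan.shape, pan)
```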

  16. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    International Nuclear Information System (INIS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-01-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations

  17. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    Energy Technology Data Exchange (ETDEWEB)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)

    2015-05-21

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.

  18. Improving MJO Prediction and Simulation Using AGCM Coupled Ocean Model with Refined Vertical Resolution

    Science.gov (United States)

    Tu, Chia-Ying; Tseng, Wan-Ling; Kuo, Pei-Hsuan; Lan, Yung-Yao; Tsuang, Ben-Jei; Hsu, Huang-Hsiung

    2017-04-01

    Precipitation in the Taiwan area is significantly influenced by the MJO (Madden-Julian Oscillation) in boreal winter. This study therefore addresses MJO prediction and simulation with a unique model structure. The one-dimensional TKE (Turbulence Kinetic Energy)-type ocean model SIT (Snow, Ice, Thermocline), with refined vertical resolution near the surface, is able to resolve the cool skin as well as the diurnal warm layer. SIT can simulate accurate SST and hence give precise air-sea interaction. By coupling SIT with ECHAM5 (MPI-Meteorology), CAM5 (NCAR) and HiRAM (GFDL), the MJO simulations in 20-year climate integrations conducted by the three SIT-coupled AGCMs are significantly improved compared to those driven by prescribed SST. The horizontal resolutions of ECHAM5, CAM5 and HiRAM are 2-deg., 1-deg. and 0.5-deg., respectively. This suggests that the improvement of the MJO simulation by coupling SIT is independent of AGCM resolution. This study further utilizes HiRAM coupled with SIT to evaluate its MJO forecast skill. HiRAM has been recognized as one of the best models for seasonal forecasts of hurricane/typhoon activity (Zhao et al., 2009; Chen & Lin, 2011; 2013), but was not as successful in MJO forecasting. The preliminary result of the HiRAM-SIT experiment during the DYNAMO period shows improved success in MJO forecasting. These improvements of MJO prediction and simulation in both hindcast experiments and climate integrations come mainly from the better-simulated SST diurnal cycle and diurnal amplitude, which is contributed by the refined vertical resolution near the ocean surface in SIT. Keywords: MJO Predictability, DYNAMO

  19. High resolution real time capable combustion chamber simulation; Zeitlich hochaufloesende echtzeitfaehige Brennraumsimulation

    Energy Technology Data Exchange (ETDEWEB)

    Piewek, J. [Volkswagen AG, Wolfsburg (Germany)

    2008-07-01

    The article describes a zero-dimensional model for real-time-capable combustion chamber pressure calculation with an analogue pressure sensor output. The closed-loop operation of an engine control unit is demonstrated on a hardware-in-the-loop simulator (HiL simulator) for a 4-cylinder common-rail diesel engine. The presentation of the model focuses on the simulation of the load variation, which does not depend on the injection system and thus on the simulated heat release rate. Particular attention is paid to the simulation and the resulting test possibilities with regard to fully variable valve trains. It is shown that the black-box models contained in the HiL mean-value model for the aspirated gas mass, the exhaust gas temperature downstream of the outlet valve and the mean indicated pressure can be replaced by calculations from the high-resolution combustion chamber model. (orig.)
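
    As a hedged illustration of what a zero-dimensional model of this kind computes, the Python sketch below integrates the single-zone pressure equation dp/dphi = (kappa - 1)/V * dQ/dphi - kappa * p/V * dV/dphi with slider-crank kinematics and a Wiebe heat-release curve; all geometry and combustion parameters are placeholders rather than values from the work described above.

        import numpy as np

        # illustrative engine geometry and combustion parameters (placeholders)
        B, S, L, CR = 0.081, 0.0955, 0.144, 16.5     # bore, stroke, conrod [m], compression ratio
        KAPPA, Q_TOTAL = 1.35, 900.0                 # polytropic exponent, total heat release [J]
        PHI0, DPHI, A_W, M_W = -5.0, 50.0, 6.9, 1.5  # Wiebe start, duration [deg CA], shape factors
        STEP = 0.1                                   # crank-angle step [deg]

        r = S / 2.0
        V_c = (np.pi * B**2 / 4.0 * S) / (CR - 1.0)  # clearance volume

        def volume(phi_deg):
            """Cylinder volume from slider-crank kinematics."""
            phi = np.radians(phi_deg)
            s = r * (1 - np.cos(phi)) + L - np.sqrt(L**2 - (r * np.sin(phi))**2)
            return V_c + np.pi * B**2 / 4.0 * s

        def dq_wiebe(phi_deg):
            """Heat-release rate dQ/dphi [J/deg] from a single Wiebe function."""
            x = (phi_deg - PHI0) / DPHI
            if x < 0.0 or x > 1.0:
                return 0.0
            return Q_TOTAL * A_W * (M_W + 1) / DPHI * x**M_W * np.exp(-A_W * x**(M_W + 1))

        # explicit integration of the pressure equation over one compression/expansion stroke
        phi_grid = np.arange(-180.0, 180.0, STEP)
        p, trace = 1.0e5, []                         # pressure at intake valve closing [Pa]
        for i, phi in enumerate(phi_grid[:-1]):
            V = volume(phi)
            dV = volume(phi_grid[i + 1]) - V
            p += (KAPPA - 1.0) / V * dq_wiebe(phi) * STEP - KAPPA * p / V * dV
            trace.append(p)
        print(f"peak cylinder pressure ~ {max(trace) / 1e5:.1f} bar")

    A real-time HiL implementation would evaluate essentially this loop at a fixed crank-angle step and feed the resulting pressure trace to the analogue sensor output of the simulator.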

  20. A suitable boundary condition for bounded plasma simulation without sheath resolution

    International Nuclear Information System (INIS)

    Parker, S.E.; Procassini, R.J.; Birdsall, C.K.; Cohen, B.I.

    1993-01-01

    We have developed a technique that allows for a sheath boundary layer without having to resolve the inherently small space and time scales of the sheath region. We refer to this technique as the logical sheath boundary condition. This boundary condition, when incorporated into a direct-implicit particle code, permits large space- and time-scale simulations of bounded systems, which would otherwise be impractical on current supercomputers. The lack of resolution of the collector sheath potential drop obtained from conventional implicit simulations at moderate values of ω_pe Δt and Δz/λ_De provides the motivation for the development of the logical sheath boundary condition. The algorithm for use of the logical sheath boundary condition in a particle simulation is presented. Results from simulations which use the logical sheath boundary condition are shown to compare reasonably well with those from an analytic theory and with simulations in which the sheath is resolved.
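
    A conceptual Python sketch of the flux-balancing wall rule at the heart of such a boundary condition follows: it absorbs all incident ions, absorbs an equal number of the most energetic incident electrons, and reflects the remainder, so that no net current flows during a timestep. The function and variable names are invented for illustration, and the published algorithm additionally infers the sheath potential drop from the cut-off electron velocity, which is omitted here.

        import numpy as np

        def logical_sheath_wall(v_e_hit, v_i_hit):
            """Flux-balancing wall rule in the spirit of the logical sheath condition:
            absorb every ion that reached the wall this step, absorb the same number
            of the fastest incident electrons, and reflect the remaining electrons.
            Returns the velocities of the reflected electrons (sign reversed)."""
            n_absorb = min(len(v_i_hit), len(v_e_hit))     # ions set the electron loss
            order = np.argsort(np.abs(v_e_hit))            # slowest electrons first
            reflected = v_e_hit[order[: len(v_e_hit) - n_absorb]]
            return -reflected                              # send them back into the domain

        # toy usage: 5 ions and 12 electrons cross the wall in one step
        rng = np.random.default_rng(0)
        v_e = rng.normal(0.0, 1.0, 12)
        v_i = rng.normal(0.0, 0.05, 5)
        print(len(logical_sheath_wall(v_e, v_i)), "electrons reflected")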

  1. Simulations of the temporal and spatial resolution for a compact time-resolved electron diffractometer

    Science.gov (United States)

    Robinson, Matthew S.; Lane, Paul D.; Wann, Derek A.

    2016-02-01

    A novel compact electron gun for use in time-resolved gas electron diffraction experiments has recently been designed and commissioned. In this paper we present and discuss the extensive simulations that were performed to underpin the design in terms of the spatial and temporal qualities of the pulsed electron beam created by the ionisation of a gold photocathode using a femtosecond laser. The response of the electron pulses to a solenoid lens used to focus the electron beam has also been studied. The simulated results show that focussing the electron beam affects the overall spatial and temporal resolution of the experiment in a variety of ways, and that factors that improve the resolution of one parameter can often have a negative effect on the other. A balance must, therefore, be achieved between spatial and temporal resolution. The optimal experimental time resolution for the apparatus is predicted to be 416 fs for studies of gas-phase species, while the predicted spatial resolution of better than 2 nm⁻¹ compares well with traditional time-averaged electron diffraction set-ups.

  2. Can High-resolution WRF Simulations Be Used for Short-term Forecasting of Lightning?

    Science.gov (United States)

    Goodman, S. J.; Lapenta, W.; McCaul, E. W., Jr.; LaCasse, K.; Petersen, W.

    2006-01-01

    A number of research teams have begun to make quasi-operational forecast simulations at high resolution with models such as the Weather Research and Forecast (WRF) model. These model runs have used horizontal meshes of 2-4 km grid spacing, and thus resolved convective storms explicitly. In the light of recent global satellite-based observational studies that reveal robust relationships between total lightning flash rates and integrated amounts of precipitation-size ice hydrometeors in storms, it is natural to inquire about the capabilities of these convection-resolving models in representing the ice hydrometeor fields faithfully. If they do, this might make operational short-term forecasts of lightning activity feasible. We examine high-resolution WRF simulations from several Southeastern cases for which either NLDN or LMA lightning data were available. All the WRF runs use a standard microphysics package that depicts only three ice species, cloud ice, snow and graupel. The realism of the WRF simulations is examined by comparisons with both lightning and radar observations and with additional even higher-resolution cloud-resolving model runs. Preliminary findings are encouraging in that they suggest that WRF often makes convective storms of the proper size in approximately the right location, but they also indicate that higher resolution and better hydrometeor microphysics would be helpful in improving the realism of the updraft strengths, reflectivity and ice hydrometeor fields.

  3. Atmospheric blocking in the Climate SPHINX simulations: the role of orography and resolution

    Science.gov (United States)

    Davini, Paolo; Corti, Susanna; D'Andrea, Fabio; Riviere, Gwendal; von Hardenberg, Jost

    2017-04-01

    The representation of atmospheric blocking in numerical simulations, especially over the Euro-Atlantic region, still represents a main concern for the climate modelling community. We here discuss the Northern Hemisphere winter atmospheric blocking representation in a set of 30-year simulations which has been performed in the framework of the PRACE project "Climate SPHINX". Simulations were run using the EC-Earth Global Climate Model with several ensemble members at 5 different horizontal resolutions (ranging from 125 km to 16 km). Results show that the negative bias in blocking frequency over Europe becomes negligible at resolutions of about 40 km and finer. However, the blocking duration is still underestimated by 1-2 days, suggesting that the correct blocking frequencies are achieved with an overestimation of the number of blocking onsets. The reasons leading to such improvements are then discussed, highlighting the role of orography in shaping the Atlantic jet stream: at higher resolution the jet is weaker and less penetrating over Europe, favoring the breaking of synoptic Rossby waves over the Atlantic stationary ridge and thus increasing the simulated blocking frequency.

  4. Effect of model resolution on a regional climate model simulation over southeast Australia

    KAUST Repository

    Evans, J. P.; McCabe, Matthew

    2013-01-01

    Dynamically downscaling climate projections from global climate models (GCMs) for use in impacts and adaptation research has become a common practice in recent years. In this study, the CSIRO Mk3.5 GCM is downscaled using the Weather Research and Forecasting (WRF) regional climate model (RCM) to medium (50 km) and high (10 km) resolution over southeast Australia. The influence of model resolution on the present-day (1985 to 2009) modelled regional climate and projected future (2075 to 2099) changes are examined for both mean climate and extreme precipitation characteristics. Increasing model resolution tended to improve the simulation of present day climate, with larger improvements in areas affected by mountains and coastlines. Examination of circumstances under which increasing the resolution decreased performance revealed an error in the GCM circulation, the effects of which had been masked by the coarse GCM topography. Resolution modifications to projected changes were largest in regions with strong topographic and coastline influences, and can be large enough to change the sign of the climate change projected by the GCM. Known physical mechanisms for these changes included orographic uplift and low-level blocking of air-masses caused by mountains. In terms of precipitation extremes, the GCM projects increases in extremes even when the projected change in the mean was a decrease: but this was not always true for the higher resolution models. Thus, while the higher resolution RCM climate projections often concur with the GCM projections, there are times and places where they differ significantly due to their better representation of physical processes. It should also be noted that the model resolution can modify precipitation characteristics beyond just its mean value.

  6. Eikonal representation of N-body Coulomb scattering amplitudes

    International Nuclear Information System (INIS)

    Fried, H.M.; Kang, K.; McKellar, B.H.J.

    1983-01-01

    A new technique for the construction of N-body Coulomb scattering amplitudes is proposed, suggested by the simplest case of N = 2: calculate the scattering amplitude in the eikonal approximation, discard the infinite phase factors which appear upon taking the limit of a Coulomb potential, and treat the remainder as an amplitude whose absolute value squared produces the exact Coulomb differential cross section. The method easily generalizes to the N-body Coulomb problem for elastic scattering, and for inelastic rearrangement scattering of Coulomb bound states. We give explicit results for N = 3 and 4; in the N = 3 case we extract amplitudes for the processes (12)+3 → 1+2+3 (breakup), (12)+3 → 1+(23) (rearrangement), and (12)+3 → (12)'+3 (inelastic scattering) as residues at the appropriate poles in the free-free amplitude. The method produces scattering amplitudes f_N given in terms of explicit quadratures over (N-2)² distinct integrands.
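
    For orientation, the N = 2 amplitude that such a construction is designed to reproduce once the divergent phase is discarded is the standard Coulomb amplitude (a textbook result quoted here for reference, not a formula taken from the paper):

        f_C(\theta) = -\frac{\eta}{2k\sin^{2}(\theta/2)}
                      \exp\!\left[-i\eta\ln\sin^{2}(\theta/2) + 2i\sigma_{0}\right],
        \qquad \sigma_{0} = \arg\Gamma(1+i\eta),

    whose modulus squared, \eta^{2}/\left[4k^{2}\sin^{4}(\theta/2)\right], is the exact Rutherford cross section; here \eta is the Sommerfeld parameter and k the relative momentum.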

  7. Accelerator-feasible N-body nonlinear integrable system

    Directory of Open Access Journals (Sweden)

    V. Danilov

    2014-12-01

    Nonlinear N-body integrable Hamiltonian systems, where N is an arbitrary number, have attracted the attention of mathematical physicists over the last several decades, following the discovery of a number of such systems. This paper presents a new integrable system, which can be realized in facilities such as particle accelerators. This feature makes it more attractive than many of the previously known systems, which involve singular or unphysical forces.

  8. Internal or shape coordinates in the n-body problem

    International Nuclear Information System (INIS)

    Littlejohn, R.G.; Reinsch, M.

    1995-01-01

    The construction of global shape coordinates for the n-body problem is considered. Special attention is given to the three- and four-body problems. Quantities, including candidates for coordinates, are organized according to their transformation properties under so-called democracy transformations (orthogonal transformations of Jacobi vectors). Important submanifolds of shape space are identified and their topology studied, including the manifolds upon which shapes are coplanar or collinear, and the manifolds upon which the moment of inertia tensor is degenerate.
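
    As a concrete example of the quantities involved (standard material, not taken from the paper), mass-weighted Jacobi vectors for three bodies with masses m_a and centre-of-mass-frame positions \mathbf{r}_a can be chosen as

        \boldsymbol{\rho}_{1} = \sqrt{\mu_{1}}\,(\mathbf{r}_{2}-\mathbf{r}_{1}),
        \qquad \mu_{1} = \frac{m_{1}m_{2}}{m_{1}+m_{2}},

        \boldsymbol{\rho}_{2} = \sqrt{\mu_{2}}\left(\mathbf{r}_{3}
            - \frac{m_{1}\mathbf{r}_{1}+m_{2}\mathbf{r}_{2}}{m_{1}+m_{2}}\right),
        \qquad \mu_{2} = \frac{(m_{1}+m_{2})\,m_{3}}{m_{1}+m_{2}+m_{3}},

    so that the internal kinetic energy takes the diagonal form T = \tfrac{1}{2}\left(|\dot{\boldsymbol{\rho}}_{1}|^{2}+|\dot{\boldsymbol{\rho}}_{2}|^{2}\right) and the moment of inertia about the centre of mass is |\boldsymbol{\rho}_{1}|^{2}+|\boldsymbol{\rho}_{2}|^{2}; the democracy transformations mentioned above are then orthogonal mixings of \boldsymbol{\rho}_{1} and \boldsymbol{\rho}_{2} that leave both quantities unchanged.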

  9. Position resolution simulations for the inverted-coaxial germanium detector, SIGMA

    Science.gov (United States)

    Wright, J. P.; Harkness-Brennan, L. J.; Boston, A. J.; Judson, D. S.; Labiche, M.; Nolan, P. J.; Page, R. D.; Pearce, F.; Radford, D. C.; Simpson, J.; Unsworth, C.

    2018-06-01

    The SIGMA germanium detector has the potential to revolutionise γ-ray spectroscopy, providing superior energy- and position-resolving capabilities compared with current large-volume state-of-the-art germanium detectors. The theoretical position resolution of the detector as a function of γ-ray interaction position has been studied using simulated detector signals. A study of the effects of RMS noise at various energies is presented, with the position resolution ranging from 0.33 mm FWHM at Eγ = 1 MeV to 0.41 mm at Eγ = 150 keV. An additional investigation into the effect that pulse alignment has on pulse shape analysis and, in turn, on position resolution has been performed. The theoretical performance of SIGMA operating in an experimental setting is presented for use as a standalone detector and as part of an ancillary system.

  10. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope beamline U7A of the National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for the transmission X-ray microscope (TXM) with optical elements and may also be useful for the alignment of optical components in other modes of microscopy.

  11. Quantum N-body problem with a minimal length

    International Nuclear Information System (INIS)

    Buisseret, Fabien

    2010-01-01

    The quantum N-body problem is studied in the context of nonrelativistic quantum mechanics with a one-dimensional deformed Heisenberg algebra of the form [x,p] = i(1+βp²), leading to the existence of a minimal observable length √β. For a generic pairwise interaction potential, analytical formulas are obtained that allow estimation of the ground-state energy of the N-body system by finding the ground-state energy of a corresponding two-body problem. It is first shown that in the harmonic oscillator case, the β-dependent term grows faster with increasing N than the β-independent term. Then, it is argued that such a behavior should also be observed with generic potentials and for D-dimensional systems. Consequently, quantum N-body bound states might be interesting places to look at nontrivial manifestations of a minimal length, since the more particles that are present, the more the system deviates from standard quantum-mechanical predictions.

  12. An adaptive N-body algorithm of optimal order

    International Nuclear Information System (INIS)

    Pruett, C. David.; Rudmin, Joseph W.; Lacy, Justin M.

    2003-01-01

    Picard iteration is normally considered a theoretical tool whose primary utility is to establish the existence and uniqueness of solutions to first-order systems of ordinary differential equations (ODEs). However, in 1996, Parker and Sochacki [Neural, Parallel, Sci. Comput. 4 (1996)] published a practical numerical method for a certain class of ODEs, based upon modified Picard iteration, that generates the Maclaurin series of the solution to arbitrarily high order. The applicable class of ODEs consists of first-order, autonomous systems whose right-hand side functions (generators) are projectively polynomial; that is, they can be written as polynomials in the unknowns. The class is wider than might be expected. The method is ideally suited to the classical N-body problem, which is projectively polynomial. Here, we recast the N-body problem in polynomial form and develop a Picard-based algorithm for its solution. The algorithm is highly accurate, parameter-free, and simultaneously adaptive in time and order. Test cases for both benign and chaotic N-body systems reveal that optimal order is dynamic. That is, in addition to dependency upon N and the desired accuracy, optimal order depends upon the configuration of the bodies at any instant.
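
    The core of the modified Picard (Parker-Sochacki) recursion is easy to demonstrate on a scalar polynomial ODE; the Python sketch below generates the Maclaurin coefficients of the solution of y' = y^2, y(0) = y0, via Cauchy products. It illustrates the recursion only: the N-body application described above additionally requires recasting the inverse-cube distances with auxiliary variables so that all right-hand sides become polynomial.

        import numpy as np

        def maclaurin_coeffs(y0, order):
            """Parker-Sochacki recursion for y' = y**2, y(0) = y0:
            a[k+1] = (1/(k+1)) * sum_{j=0..k} a[j] * a[k-j] (Cauchy product)."""
            a = [y0]
            for k in range(order):
                cauchy = sum(a[j] * a[k - j] for j in range(k + 1))
                a.append(cauchy / (k + 1))
            return np.array(a)

        def evaluate_series(a, t):
            """Evaluate the truncated Maclaurin series at time t (Horner form)."""
            return np.polyval(a[::-1], t)

        # usage: compare against the exact solution y(t) = y0 / (1 - y0*t)
        y0, t = 0.5, 0.8
        a = maclaurin_coeffs(y0, order=30)
        print(evaluate_series(a, t), y0 / (1.0 - y0 * t))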

  13. The simulation of a data acquisition system for a proposed high resolution PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Rotolo, C.; Larwill, M.; Chappa, S. [Fermi National Accelerator Lab., Batavia, IL (United States); Ordonez, C. [Chicago Univ., IL (United States)

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs.

  14. Geant4 simulation of a 3D high resolution gamma camera

    International Nuclear Information System (INIS)

    Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.

    2015-01-01

    The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity, relying on both distance/absorption and Compton scattering techniques and without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance, taking full advantage of Geant4 features that allow the needed detector geometry to be constructed, the incident gamma particles to be fully controlled, and the detector response to be studied in order to test the suggested geometries. Three different geometries are simulated and each configuration is tested with three different scintillation materials (LaBr₃, LYSO and CeBr₃).

  16. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

    Directory of Open Access Journals (Sweden)

    J. Tao

    2012-09-01

    Due to its all-weather data acquisition capabilities, high-resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications such as change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high-resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. An application of the concept is presented for the city centre of Munich, where comparison of the simulation with the TerraSAR-X data shows good similarity. Based on the results of simulation and matching, special features (e.g. double-bounce lines and shadow areas) can be automatically indicated in the SAR image.

  17. Regional simulation of Indian summer monsoon intraseasonal oscillations at gray-zone resolution

    Science.gov (United States)

    Chen, Xingchao; Pauluis, Olivier M.; Zhang, Fuqing

    2018-01-01

    Simulations of the Indian summer monsoon by the cloud-permitting Weather Research and Forecasting (WRF) model at gray-zone resolution are described in this study, with a particular emphasis on the model ability to capture the monsoon intraseasonal oscillations (MISOs). Five boreal summers are simulated from 2007 to 2011 using the ERA-Interim reanalysis as the lateral boundary forcing data. Our experimental setup relies on a horizontal grid spacing of 9 km to explicitly simulate deep convection without the use of cumulus parameterizations. When compared to simulations with coarser grid spacing (27 km) and using a cumulus scheme, the 9 km simulations reduce the biases in mean precipitation and produce more realistic low-frequency variability associated with MISOs. Results show that the model at the 9 km gray-zone resolution captures the salient features of the summer monsoon. The spatial distributions and temporal evolutions of monsoon rainfall in the WRF simulations verify qualitatively well against observations from the Tropical Rainfall Measurement Mission (TRMM), with regional maxima located over Western Ghats, central India, Himalaya foothills, and the west coast of Myanmar. The onset, breaks, and withdrawal of the summer monsoon in each year are also realistically captured by the model. The MISO-phase composites of monsoon rainfall, low-level wind, and precipitable water anomalies in the simulations also agree qualitatively with the observations. Both the simulations and observations show a northeastward propagation of the MISOs, with the intensification and weakening of the Somali Jet over the Arabian Sea during the active and break phases of the Indian summer monsoon.

  18. High Resolution Numerical Simulations of Primary Atomization in Diesel Sprays with Single Component Reference Fuels

    Science.gov (United States)

    2015-09-01

    A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel-engine-type conditions has been performed. A full understanding of the primary atomization process in diesel fuel sprays has not yet been achieved; for diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios.

  19. Achieving accurate simulations of urban impacts on ozone at high resolution

    International Nuclear Information System (INIS)

    Li, J; Georgescu, M; Mahalov, A; Moustaoui, M; Hyde, P

    2014-01-01

    The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O₃] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape-preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and (2) the urban land cover is replaced with the region’s native shrubland. Impacts due to the presence of the built environment on [O₃] are highly heterogeneous across the metropolitan area. Increased near-surface [O₃] due to urbanization of 10–20 ppb is predominantly a nighttime phenomenon, while simulated impacts during daytime are negligible. Urbanization narrows the daily [O₃] range (by virtue of increasing nighttime minima), an impact largely due to the region’s urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas. (letter)
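
    The shape-preserving downscaling step can be illustrated with a standard monotone cubic (PCHIP) interpolant; the SciPy-based sketch below works on a synthetic one-dimensional emission profile and is not the authors' code. It also does not address conservation of the total emitted mass, which a production implementation would need to verify.

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        # synthetic coarse emission profile on 4 km cell centres (illustrative only)
        x_coarse = np.arange(0.0, 40.0, 4.0)                  # [km]
        emis_coarse = np.array([0.1, 0.2, 0.5, 1.5, 4.0,
                                6.0, 3.0, 1.0, 0.4, 0.2])     # arbitrary units

        # shape-preserving monotone cubic interpolation onto 1 km cell centres
        x_fine = np.arange(0.0, 36.01, 1.0)                   # [km]
        emis_fine = PchipInterpolator(x_coarse, emis_coarse)(x_fine)

        # a 2-D emission field would be treated the same way, axis by axis;
        # PCHIP avoids the overshoot (negative or spuriously large emissions)
        # that an unconstrained cubic interpolant can produce
        print(emis_fine.min() >= 0.0, emis_fine.max() <= emis_coarse.max())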

  20. Numerical simulation study for atomic-resolution x-ray fluorescence holography

    International Nuclear Information System (INIS)

    Xie Honglan; Gao Hongyi; Chen Jianwen; Xiong Shisheng; Xu Zhizhan; Wang Junyue; Zhu Peiping; Xian Dingchang

    2003-01-01

    Based on the principle of x-ray fluorescence holography, an iron single-crystal model with a body-centred cubic lattice is numerically simulated. From the numerically produced fluorescence hologram, the Fe atomic images were reconstructed. The atomic images of the (001), (100) and (010) crystallographic planes were consistent with the corresponding atomic positions of the model. The result indicates that one can obtain internal structure images of single crystals at atomic resolution by using x-ray fluorescence holography.

  1. Simulation study for high resolution alpha particle spectrometry with mesh type collimator

    International Nuclear Information System (INIS)

    Park, Seunghoon; Kwak, Sungwoo; Kang, Hanbyeol; Shin, Jungki; Park, Iljin

    2014-01-01

    An alpha-particle spectrometer with a mesh-type collimator plays a crucial role in identifying specific radionuclides in a radioactive source collected from the atmosphere or environment. Without collimation the energy resolution is degraded, because particles emitted at large angles have a longer path to travel in the air and therefore undergo more collisions along the way, which increases the background. The collimator can cut out particles travelling at large angles; as a result, an energy distribution with high resolution can be obtained. A mesh-type collimator is therefore simulated for high-resolution alpha-particle spectrometry. In conclusion, the collimator can improve the resolution: by cutting out particles at large angles, the low-energy tail and the broadening of the energy distribution can be reduced. The mesh diameter is found to be an important factor controlling resolution and counting efficiency. A target nuclide, for example ²³⁵U, can therefore be distinguished by a detector with a collimator in a mixture of various nuclides such as ²³²U, ²³⁸U and ²³²Th.

  2. Simulating the Agulhas system in global ocean models - nesting vs. multi-resolution unstructured meshes

    Science.gov (United States)

    Biastoch, Arne; Sein, Dmitry; Durgadoo, Jonathan V.; Wang, Qiang; Danilov, Sergey

    2018-01-01

    Many questions in ocean and climate modelling require the combined use of high resolution, global coverage and multi-decadal integration length. For this combination, even modern resources limit the use of traditional structured-mesh grids. Here we compare two approaches: a high-resolution grid nested into a global model at coarser resolution (NEMO with AGRIF), and an unstructured-mesh grid (FESOM), which allows resolution to be enhanced variably where desired. The Agulhas system around South Africa is used as a test case, providing an energetic interplay of a strong western boundary current and mesoscale dynamics. Its open setting within the horizontal and global overturning circulations also requires global coverage. Both model configurations simulate a reasonable large-scale circulation. The distribution and temporal variability of the wind-driven circulation are quite comparable, owing to the identical atmospheric forcing. However, the overturning circulation differs, owing to each model's ability to represent the formation and spreading of deep water masses. In terms of regional, high-resolution dynamics, all elements of the Agulhas system are well represented. Owing to the strong nonlinearity of the system, the Agulhas Current transports of the two configurations differ from each other, and from observations, in strength and temporal variability. Similar decadal trends in Agulhas Current transport and Agulhas leakage are linked to the trends in the wind forcing.

  3. High-resolution simulations of galaxy formation in a cold dark matter scenario

    International Nuclear Information System (INIS)

    Kates, R.E.; Klypin, A.A.

    1990-01-01

    We present the results of our numerical simulations of galaxy clustering in a two-dimensional model. Our simulations allowed better resolution than could be obtained in three-dimensional simulations. We used a spectrum of initial perturbations corresponding to a cold dark matter (CDM) model and followed the history of each particle by modelling the shocking and subsequent cooling of matter. We took into account cooling processes in a hot plasma with primeval cosmic abundances of H and He as well as Compton cooling. (However, the influence of these processes on the trajectories of ordinary matter particles was not simulated in the present code.) As a result of the high resolution, we were able to observe a network of chains on all scales down to the limits of resolution. This network extends out from dense clusters and superclusters and penetrates into voids (with decreasing density). In addition to the dark matter network structure, a definite prediction of our simulations is the existence of a connected filamentary structure consisting of hot gas with a temperature of 10⁶ K and extending over 100-150 Mpc. (Throughout this paper, we assume the Hubble constant H₀ = 50 km s⁻¹ Mpc⁻¹.) These structures trace high-density filaments of the dark matter distribution and should be searched for in soft X-ray observations. In contrast to common assumptions, we found that peaks of the linearized density distribution were not reliable tracers of the eventual galaxy distribution. We were also able to demonstrate that the influence of small-scale fluctuations on the structure at larger scales is always small, even at the late nonlinear stage. (orig.)

  4. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    Science.gov (United States)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx⁺ and Δz⁺, with strongly anisotropic cells and a first Δy⁺ ≈ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz⁺, while the error in the turbulent kinetic energy is mostly influenced by Δx⁺. Recommendations for grid resolution requirements are given, together with the quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume-based solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and the choice of SGS model.
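
    The wall-unit bookkeeping behind such a grid study is compact enough to sketch: with the friction Reynolds number Re_tau = u_tau * delta / nu, one viscous length is delta / Re_tau, and the plus-unit spacings follow directly. The domain size and cell counts below are illustrative choices, not the grids of the simulation campaign.

        import numpy as np

        def wall_units(delta, re_tau, nx, nz, lx, lz, dy_first):
            """Convert channel-flow grid spacings to viscous (plus) units."""
            delta_nu = delta / re_tau            # viscous length scale
            dx_plus = (lx / nx) / delta_nu
            dz_plus = (lz / nz) / delta_nu
            dy1_plus = dy_first / delta_nu
            return dx_plus, dz_plus, dy1_plus

        # illustrative grid for a Re_tau = 300 channel of half-height delta = 1,
        # with a 2*pi x pi domain in the homogeneous directions
        print(wall_units(delta=1.0, re_tau=300.0, nx=96, nz=96,
                         lx=2.0 * np.pi, lz=np.pi, dy_first=1.0 / 300.0))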

  5. Effects of model resolution and parameterizations on the simulations of clouds, precipitation, and their interactions with aerosols

    Science.gov (United States)

    Lee, Seoung Soo; Li, Zhanqing; Zhang, Yuwei; Yoo, Hyelim; Kim, Seungbum; Kim, Byung-Gon; Choi, Yong-Sang; Mok, Jungbin; Um, Junshik; Ock Choi, Kyoung; Dong, Danhong

    2018-01-01

    This study investigates the roles played by model resolution and microphysics parameterizations in the well-known uncertainties or errors in simulations of clouds, precipitation, and their interactions with aerosols in numerical weather prediction (NWP) models. For this investigation, we used cloud-system-resolving model (CSRM) simulations as benchmark simulations that adopt high resolution and full-fledged microphysical processes. These simulations were evaluated against observations, and this evaluation demonstrated that the CSRM simulations can function as benchmark simulations. Comparisons between the CSRM simulations and simulations at the coarse resolutions generally adopted by current NWP models indicate that the use of coarse resolutions, as in the NWP models, can lower not only updrafts and other cloud variables (e.g., cloud mass, condensation, deposition, and evaporation) but also their sensitivity to increasing aerosol concentration. The parameterization of the saturation process plays an important role in the sensitivity of cloud variables to aerosol concentrations, while the parameterization of the sedimentation process has a substantial impact on how cloud variables are distributed vertically. The variation in cloud variables with resolution is much greater than that with varying microphysics parameterizations, which suggests that the uncertainties in NWP simulations are associated with resolution much more than with microphysics parameterizations.

  6. Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities

    Science.gov (United States)

    Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.

    2015-12-01

    In this rapidly urbanizing world, the unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. Primary limitations of past attempts have been the availability of input data, of useful "energy and behavior focused" models, of validation data, and of adequate computational capability that allows an adequate understanding of the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap road network and integrated high-resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScan™ population data, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. The consistent data and simulation platform allows quick adaptation to various geographic areas, which has been demonstrated for multiple cities across the world. We are combining the strengths of geospatial data sciences, high-performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation

  7. MODELING AND SIMULATION OF HIGH RESOLUTION OPTICAL REMOTE SENSING SATELLITE GEOMETRIC CHAIN

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2018-04-01

    High-resolution satellites with longer focal lengths and larger apertures have been widely used in recent years for georeferencing of the observed scene. A consistent end-to-end model of the high-resolution optical remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform (including attitude and position information), the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for the geolocation accuracy is put forward by introducing a new index, the angle between the camera and the star tracker. The model is validated by rigorously simulating the geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results, and that the geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating the geolocation accuracy before satellite launch.

  8. Statistics of Deep Convection in the Congo Basin Derived From High-Resolution Simulations.

    Science.gov (United States)

    White, B.; Stier, P.; Kipling, Z.; Gryspeerdt, E.; Taylor, S.

    2016-12-01

    Convection transports moisture, momentum, heat and aerosols through the troposphere, and so the temporal variability of convection is a major driver of global weather and climate. The Congo basin is home to some of the most intense convective activity on the planet and is under the strong seasonal influence of biomass-burning aerosol. However, deep convection in the Congo basin remains understudied compared with other regions of tropical storm systems, especially the neighbouring, relatively well-understood West African climate system. We use the WRF model to perform a high-resolution, cloud-system-resolving simulation to investigate convective storm systems in the Congo. Our setup pushes the boundaries of current computational resources, using a 1 km grid length over a domain covering millions of square kilometres and for a time period of one month. This allows us to draw statistical conclusions on the nature of the simulated storm systems. Comparing data from satellite observations and the model enables us to quantify the diurnal variability of deep convection in the Congo basin, and allows us to evaluate our simulations despite the lack of in-situ observational data. This provides a more comprehensive analysis of the diurnal cycle than has previously been shown. Further, we show that high-resolution convection-permitting simulations performed over near-seasonal timescales can be used in conjunction with satellite observations as an effective tool to evaluate new convection parameterisations.

  9. Simulation studies for a high resolution time projection chamber at the international linear collider

    Energy Technology Data Exchange (ETDEWEB)

    Muennich, A.

    2007-03-26

    The International Linear Collider (ILC) is planned to be the next large accelerator. The ILC will be able to perform high-precision measurements that are only possible in the clean environment of electron-positron collisions. In order to reach this high accuracy, the requirements on the detector performance are challenging. Several detector concepts are currently under study. The understanding of the detector and its performance will be crucial to extract the desired physics results from the data. To optimise the detector design, simulation studies are needed. Simulation packages like GEANT4 allow the detector geometry to be modelled and the energy deposit in the different materials to be simulated. However, the detector response, taking into account the transport of the produced charge to the readout devices and the effects of the readout electronics, cannot be described in detail. These processes in the detector will change the measured position of the energy deposit relative to the point of origin. The determination of this detector response is the task of detailed simulation studies, which have to be carried out for each subdetector. A high-resolution Time Projection Chamber (TPC) with gas amplification based on micro-pattern gas detectors is one of the options for the main tracking system at the ILC. In the present thesis a detailed simulation tool to study the performance of a TPC was developed. Its goal is to find the optimal settings to reach an excellent momentum and spatial resolution. After an introduction to the present status of particle physics and the ILC project, with special focus on the TPC as central tracker, the simulation framework is presented. The basic simulation methods and implemented processes are introduced. Within this stand-alone simulation framework each electron produced by primary ionisation is transferred through the gas volume and amplified using Gas Electron Multipliers (GEMs). The output format of the simulation is identical to the raw data from a
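
    A minimal sketch of the kind of single-electron transport such a framework performs is given below: each drifting electron is smeared by Gaussian diffusion that grows with the square root of the drift length and then receives a fluctuating avalanche gain. The diffusion coefficients, drift velocity, gain model and names are illustrative assumptions, not the values or algorithms of the thesis.

        import numpy as np

        rng = np.random.default_rng(42)

        DT = 0.10        # transverse diffusion [mm / sqrt(cm)] (illustrative)
        DL = 0.23        # longitudinal diffusion [mm / sqrt(cm)] (illustrative)
        V_DRIFT = 0.045  # drift velocity [mm/ns] (illustrative)
        GAIN = 2.0e3     # mean effective GEM gain (illustrative)

        def drift_and_amplify(x0, y0, z_drift_cm, n_electrons):
            """Diffuse primary electrons over the drift length and apply a simple
            exponentially fluctuating avalanche gain to each of them."""
            sigma_t = DT * np.sqrt(z_drift_cm)            # transverse spread [mm]
            sigma_l = DL * np.sqrt(z_drift_cm)            # longitudinal spread [mm]
            x = rng.normal(x0, sigma_t, n_electrons)
            y = rng.normal(y0, sigma_t, n_electrons)
            t = (10.0 * z_drift_cm + rng.normal(0.0, sigma_l, n_electrons)) / V_DRIFT
            q = rng.exponential(GAIN, n_electrons)        # avalanche charge per electron
            return x, y, t, q

        # a cluster of 40 primary electrons created 50 cm above the readout plane
        x, y, t, q = drift_and_amplify(0.0, 0.0, 50.0, 40)
        print(f"charge-weighted centroid: ({np.average(x, weights=q):.3f}, "
              f"{np.average(y, weights=q):.3f}) mm")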

  10. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    Science.gov (United States)

    Li, Dan; Bou-Zeid, Elie

    2014-05-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup and the quality of the simulated fields as representations of the real world are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM), which allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias in the simulated surface temperatures. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).

  11. Verification of high resolution simulation of precipitation and wind in Portugal

    Science.gov (United States)

    Menezes, Isilda; Pereira, Mário; Moreira, Demerval; Carvalheiro, Luís; Bugalho, Lourdes; Corte-Real, João

    2017-04-01

    Demand for energy and freshwater continues to grow as the global population and its needs increase. Precipitation feeds the freshwater ecosystems that provide a wealth of goods and services for society, and sustains the river flow needed to maintain native species and natural ecosystem functions. The adoption of wind and hydro-electric power supplies can sustain energy demands and services without restricting economic growth under accelerated-policy scenarios. However, the international meteorological observation network is not sufficiently dense to directly support high-resolution climatic research. In this sense, coupled global and regional atmospheric models constitute the most appropriate physical and numerical tool for weather forecasting and downscaling onto high-resolution grids, with the capacity to overcome problems resulting from the lack of observed data and from measurement errors. Thus, this study aims to calibrate and validate the WRF regional model for the simulation of precipitation and wind fields on a high-spatial-resolution grid covering Portugal. The simulations were performed with two-way nesting on three grids of increasing resolution (60 km, 20 km and 5 km), and the model performance was assessed for summer and winter months (January and July), using input variables from two different reanalysis and forecast databases (ERA-Interim and NCEP-FNL) and different forcing schemes. The verification procedure included: (i) the use of several statistical error estimators, correlation-based measures and relative error descriptors; and (ii) an observed dataset composed of time series of hourly precipitation, wind speed and wind direction provided by the Portuguese meteorological institute for a comprehensive set of weather stations. The main results suggest the good ability of WRF to reproduce the spatial patterns of the mean and total observed fields, with relatively small values of bias and other errors, and with good temporal correlation. These findings are in good

  12. Dynamical Studies of N-Body Gravity and Tidal Dissipation in the TRAPPIST-1 Star System

    Science.gov (United States)

    Nayak, Michael; Kuettel, Donald H.; Stebler, Shane T.; Udrea, Bogdan

    2018-01-01

    To date, we have discovered a total of 2,729 planetary systems that contain more than 3,639 known exoplanets [1]. A majority of these are defined as compact systems, containing multiple exoplanets within 0.25 AU of the central star. It has been shown that tightly packed exoplanets avoid colliding due to long-term resonance-induced orbit stability [2]. However, due to their extreme proximity, these planets experience intense gravitational forces from each other that are unprecedented within our own solar system, which makes the existence of exomoons doubtful. We present the results of an initial study evaluating the dynamical stability of potential exomoons within such highly compact systems. This work is baselined around TRAPPIST-1, an ultra-cool dwarf star that hosts seven temperate terrestrial planets, three of which are in the habitable zone, orbiting within 0.06 AU [3]. N-body simulations place a grid of test particles of varying semi-major axis, eccentricity, and inclination around the three habitable-zone planets. We find that most exomoons with semi-major axes less than half the Hill sphere of their respective planet are stable over 10 kyr, with several stable over 300 kyr. However, in compact systems, tidal influences from other planets can compete with tidal effects from the primary planet, resulting in possible instabilities and massive amounts of tidal dissipation. We investigate these effects with a large grid search that incorporates exomoon radius, tidal quality factor and a range of planet rigidities. Results of simulations that combine N-body gravity effects with both planetary and satellite tides are presented and contrasted with the N-body results. Finally, we examine the long-term stability (>1 Myr) of the stable subset of test particles from the N-body simulations with the addition of tidal dissipation, to determine if exomoons can survive around planets e, f, and g in the TRAPPIST-1 system. [1] Schneider (2017). The Extrasolar Planets Encyclopedia. http
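
    The basic stability bookkeeping referred to above, test particles placed within a fraction of the planetary Hill sphere, can be sketched as follows; the stellar and planetary parameters are rough, illustrative values loosely based on published TRAPPIST-1e estimates, not the exact inputs of this study.

        import numpy as np

        M_SUN, M_EARTH, AU = 1.989e30, 5.972e24, 1.496e11   # SI units

        def hill_radius(a, m_planet, m_star):
            """Hill-sphere radius r_H = a * (m_p / (3 M_*))**(1/3)."""
            return a * (m_planet / (3.0 * m_star)) ** (1.0 / 3.0)

        # rough, illustrative parameters (approximately TRAPPIST-1 and planet e)
        m_star = 0.09 * M_SUN
        a_planet, m_planet = 0.029 * AU, 0.7 * M_EARTH

        r_hill = hill_radius(a_planet, m_planet, m_star)
        print(f"Hill radius: {r_hill / 1e3:.0f} km")

        # grid of test-particle semi-major axes out to half the Hill sphere,
        # the region found to host long-lived exomoons in the study above
        a_moons = np.linspace(0.05, 0.5, 10) * r_hill
        print(np.round(a_moons / 1e3).astype(int), "km")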

  13. Improved Synthesis of Global Irradiance with One-Minute Resolution for PV System Simulations

    Directory of Open Access Journals (Sweden)

    Martin Hofmann

    2014-01-01

    High-resolution global irradiance time series are needed for accurate simulations of photovoltaic (PV) systems, since the typically volatile PV power output induced by fast irradiance changes cannot be simulated properly with commonly available hourly averages of global irradiance. We present a two-step algorithm that is capable of synthesizing one-minute global irradiance time series based on hourly averaged datasets. The algorithm is initialized by deriving characteristic transition probability matrices (TPM) for different weather conditions (cloudless, broken clouds and overcast) from a large number of high-resolution measurements. Once initialized, the algorithm is location-independent and capable of synthesizing one-minute values based on the hourly averaged global irradiance of any desired location. The one-minute time series are derived by discrete-time Markov chains based on a TPM that matches the weather condition of the input dataset. One-minute time series generated with the presented algorithm are compared with measured high-resolution data and show better agreement than two existing synthesizing algorithms in terms of temporal variability and the characteristic frequency distributions of global irradiance and clearness index values. A comparison based on measurements performed in Lindenberg, Germany, and Carpentras, France, shows a reduction of the frequency-distribution root mean square errors of more than 60% compared to the two existing synthesizing algorithms.
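
    The synthesis step itself reduces to drawing states from a first-order Markov chain; the Python sketch below does so for an invented three-state transition probability matrix. In the actual algorithm the matrices are fitted to measured one-minute data for each weather class and the synthesized minutes are rescaled to reproduce the hourly mean, neither of which is shown here.

        import numpy as np

        rng = np.random.default_rng(1)

        def synthesize_minutes(tpm, states, start_state, n_minutes=60):
            """Draw a sequence of clearness-index states from a first-order
            discrete-time Markov chain; each row of the transition probability
            matrix gives the outgoing probabilities of one state and sums to one."""
            seq = [start_state]
            for _ in range(n_minutes - 1):
                seq.append(rng.choice(len(states), p=tpm[seq[-1]]))
            return states[np.array(seq)]

        # invented 3-state TPM standing in for a "broken clouds" hour
        states = np.array([0.25, 0.50, 0.75])        # clearness-index bins
        tpm = np.array([[0.80, 0.15, 0.05],
                        [0.10, 0.70, 0.20],
                        [0.05, 0.20, 0.75]])

        kt_minutes = synthesize_minutes(tpm, states, start_state=1)
        print(kt_minutes.mean(), kt_minutes[:10])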

  14. Monte Carlo Simulations of Ultra-High Energy Resolution Gamma Detectors for Nuclear Safeguards

    International Nuclear Information System (INIS)

    Robles, A.; Drury, O.B.; Friedrich, S.

    2009-01-01

    Ultra-high energy resolution superconducting gamma-ray detectors can improve the accuracy of non-destructive analysis for unknown radioactive materials. These detectors offer an order of magnitude improvement in resolution over conventional high-purity germanium detectors. The increase in resolution reduces errors from line overlap and allows for the identification of weaker gamma-rays by increasing the magnitude of the peaks above the background. In order to optimize the detector geometry and to understand the spectral response function, Geant4, a Monte Carlo simulation package coded in C++, was used to model the detectors. Using a 1 mm³ Sn absorber and a monochromatic gamma source, different absorber geometries were tested. The simulation was expanded to include the Cu block behind the absorber and four layers of shielding required for detector operation at 0.1 K. The energy spectrum was modeled for an Am-241 and a Cs-137 source, including scattering events in the shielding, and the results were compared to experimental data. For both sources the main spectral features such as the photopeak, the Compton continuum, the escape x-rays and the backscatter peak were identified. Finally, the low-energy response of a Pu-239 source was modeled to assess the feasibility of Pu-239 detection in spent fuel. This modeling of superconducting detectors can serve as a guide to optimize the configuration in future spectrometer designs.

  15. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David; Ott, Christian D. [TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Abdikamalov, Ernazar [Department of Physics, School of Science and Technology, Nazarbayev University, Astana 010000 (Kazakhstan); Couch, Sean M. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Haas, Roland [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, D-14476 Golm (Germany); Schnetter, Erik, E-mail: dradice@caltech.edu [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada)

    2016-03-20

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased.

  17. Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita

    Science.gov (United States)

    Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.

    2009-12-01

    Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions

  18. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping

    2013-12-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water resource related issues and also emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered from a number of flash floods over the last decade, following short-intense rainfall events. The research presented here focuses on examining four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution inner domain (1-km) around Jeddah. A range of different convective closure and microphysics parameterization, together with high-resolution (4-km) sea surface temperature data are employed. Through examining comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanism producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  19. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping; McCabe, Matthew; Stenchikov, Georgiy L.; Evans, Jason; Kucera, Paul

    2013-01-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water resource related issues and also emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered from a number of flash floods over the last decade, following short-intense rainfall events. The research presented here focuses on examining four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution inner domain (1-km) around Jeddah. A range of different convective closure and microphysics parameterization, together with high-resolution (4-km) sea surface temperature data are employed. Through examining comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanism producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  20. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985–2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km), as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  1. An adaptive N-body algorithm of optimal order

    CERN Document Server

    Pruett, C D; Lacy, J M

    2003-01-01

    Picard iteration is normally considered a theoretical tool whose primary utility is to establish the existence and uniqueness of solutions to first-order systems of ordinary differential equations (ODEs). However, in 1996, Parker and Sochacki [Neural, Parallel, Sci. Comput. 4 (1996)] published a practical numerical method for a certain class of ODEs, based upon modified Picard iteration, that generates the Maclaurin series of the solution to arbitrarily high order. The applicable class of ODEs consists of first-order, autonomous systems whose right-hand side functions (generators) are projectively polynomial; that is, they can be written as polynomials in the unknowns. The class is wider than might be expected. The method is ideally suited to the classical N-body problem, which is projectively polynomial. Here, we recast the N-body problem in polynomial form and develop a Picard-based algorithm for its solution. The algorithm is highly accurate, parameter-free, and simultaneously adaptive in time and order. T...
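
    As a concrete illustration of the modified-Picard (Parker-Sochacki) idea described above, the following minimal Python sketch generates Maclaurin coefficients for the simplest projectively polynomial system, a harmonic oscillator x' = v, v' = -x; the paper's actual N-body formulation, adaptivity and order control are not reproduced here, and all names are illustrative.

    # Illustrative sketch, not the authors' code: each pass through the loop
    # adds one more Maclaurin coefficient of the solution of x' = v, v' = -x.
    def picard_maclaurin(x0, v0, order):
        # x[k], v[k] hold the k-th Maclaurin coefficients of x(t), v(t)
        x, v = [x0], [v0]
        for k in range(order):
            # Picard update: next coefficient = (k-th RHS coefficient) / (k + 1)
            x.append(v[k] / (k + 1))
            v.append(-x[k] / (k + 1))
        return x, v

    def evaluate(coeffs, t):
        # Horner evaluation of the truncated Maclaurin series at time t
        result = 0.0
        for c in reversed(coeffs):
            result = result * t + c
        return result

    if __name__ == "__main__":
        x, v = picard_maclaurin(x0=1.0, v0=0.0, order=20)
        print(evaluate(x, 0.5))  # ~cos(0.5) = 0.87758...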

  2. Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM

    Science.gov (United States)

    Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou

    2017-04-01

    The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on the provisioning of future climate change scenario data at high spatial and temporal resolution and quality in West Africa. Such information is urgently needed for impact studies in water resources and agriculture for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO in CLimate Mode (COSMO-CLM). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS) including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics including the West African Monsoon within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.

  3. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  4. Smoke, Clouds and Radiation Brazil NASA ER-2 Moderate Resolution Imaging Spectrometer (MODIS) Airborne Simulator (MAS) Data

    Data.gov (United States)

    National Aeronautics and Space Administration — SCARB_ER2_MAS data are Smoke, Clouds and Radiation Brazil (SCARB) NASA ER2 Moderate Resolution Imaging Spectrometer (MODIS) Airborne Simulator (MAS)...

  5. Evaluating Galactic Habitability Using High Resolution Cosmological Simulations of Galaxy Formation

    OpenAIRE

    Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam

    2015-01-01

    D. F. acknowledges support from STFC consolidated grant ST/J001422/1, and the ‘ECOGAL’ ERC Advanced Grant. P. D. acknowledges the support of the Addison Wheeler Fellowship awarded by the Institute of Advanced Study at Durham University. N. I. L. is supported by the Deutsche Forschungs Gemeinschaft (DFG). We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which...

  6. Reconstructing the distribution of haloes and mock galaxies below the resolution limit in cosmological simulations

    OpenAIRE

    de la Torre, Sylvain; Peacock, John A.

    2012-01-01

    We present a method for populating dark matter simulations with haloes of mass below the resolution limit. It is based on stochastically sampling a field derived from the density field of the halo catalogue, using constraints from the conditional halo mass function n(m|δ). We test the accuracy of the method and show its application in the context of building mock galaxy samples. We find that this technique allows precise reproduction of the two-point statistics of galaxies in mock samp...
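
    A hedged sketch of the general idea described above (not the authors' algorithm or calibration): Poisson-sample haloes below the resolution limit from a toy conditional mass function evaluated on a grid of density contrasts. The functional form of n(m|δ), the mass range and every parameter are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_haloes(delta_grid, cell_volume, m_edges):
        # delta_grid: density contrasts per cell (assumed > -1); m_edges: mass-bin edges
        masses = []
        m_cent = 0.5 * (m_edges[1:] + m_edges[:-1])
        for delta in delta_grid.ravel():
            # toy conditional mass function: a power law modulated by (1 + delta)
            dn_dm = (1.0 + delta) * m_cent ** -1.9
            expected = dn_dm * np.diff(m_edges) * cell_volume
            counts = rng.poisson(expected)
            for m, k in zip(m_cent, counts):
                masses.extend([m] * int(k))
        return np.array(masses)

    # e.g. sample_haloes(delta, cell_volume=2.0**3, m_edges=np.logspace(10, 12, 21))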

  7. The 2010 Pakistan floods: high-resolution simulations with the WRF model

    Science.gov (United States)

    Viterbo, Francesca; Parodi, Antonio; Molini, Luca; Provenzale, Antonello; von Hardenberg, Jost; Palazzi, Elisa

    2013-04-01

    Estimating current and future water resources in high mountain regions with complex orography is a difficult but crucial task. In particular, the French-Italian project PAPRIKA is focused on two specific regions in the Hindu-Kush - Himalaya - Karakorum (HKKH) region: the Shigar basin in Pakistan, at the foot of K2, and the Khumbu valley in Nepal, at the foot of Mount Everest. In this framework, we use the WRF model to simulate precipitation and meteorological conditions with high resolution in areas with extreme orographic slopes, comparing the model output with station and satellite data. Once the model is validated, we shall run a set of three future time slices at very high spatial resolution, for the periods 2046-2050, 2071-2075 and 2096-2100, nested in different climate change scenarios (EXtreme PREcipitation and Hydrological climate Scenario Simulations - EXPRESS-Hydro project). As a prelude to this study, here we discuss the simulation of specific, high-intensity rainfall events in this area. In this paper we focus on the 2010 Pakistan floods, which began in late July 2010, producing heavy monsoon rains in the Khyber Pakhtunkhwa, Sindh, Punjab and Balochistan regions of Pakistan and affecting the Indus River basin. Approximately one-fifth of Pakistan's total land area was underwater, with a death toll of about 2000 people. This event has been simulated with the WRF model (version 3.3) in cloud-permitting mode (d01 14 km and d02 3.5 km): different convective closures and microphysics parameterizations have been used. A deeper understanding of the processes responsible for this event has been gained through comparison with rainfall depth observations, radiosounding data and geostationary/polar satellite images.

  8. Air quality high resolution simulations of Italian urban areas with WRF-CHIMERE

    Science.gov (United States)

    Falasca, Serena; Curci, Gabriele

    2017-04-01

    The new European Directive on ambient air quality and cleaner air for Europe (2008/50/EC) encourages the use of modeling techniques to support the observations in the assessment and forecasting of air quality. The modelling system based on the combination of the WRF meteorological model and the CHIMERE chemistry-transport model is used to perform simulations at high resolution over the main Italian cities (e.g. Milan, Rome). Three domains covering Europe, Italy and the urban areas are nested with a decreasing grid size up to 1 km. Numerical results are produced for a winter month and a summer month of the year 2010 and are validated using ground-based observations (e.g. from the European air quality database AirBase). A sensitivity study is performed using different physics options, domain resolution and grid ratio; different urban parameterization schemes are tested using also characteristic morphology parameters for the cities considered. A spatial reallocation of anthropogenic emissions derived from international (e.g. EMEP, TNO, HTAP) and national (e.g. CTN-ACE) emissions inventories and based on the land cover datasets (Global Land Cover Facility and GlobCover) and the OpenStreetMap tool is also included. Preliminary results indicate that the introduction of the spatial redistribution at high-resolution allows a more realistic reproduction of the distribution of the emission flows and thus the concentrations of the pollutants, with significant advantages especially for the urban environments.

  9. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    Science.gov (United States)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a 0.45 M⊙ helium (He) white dwarf (WD) by an intermediate mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonations reported in previous studies resulted from unphysical heating due to insufficient resolution, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, as well as 1D mesh simulations whose initial conditions are taken from the flow structure of the 3D SPH simulations. The 1D mesh simulations have higher resolution than the 3D SPH simulations. We show that tidal detonation occurs and confirm that this result is converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves arise independently in leading parts of the WD and yield large amounts of 56Ni. Although detonation waves are not generated in trailing parts of the WD, the trailing parts would receive the detonation waves generated in the leading parts and would be left with large amounts of Si-group elements. In total, this He WD TDE would synthesize 0.30 M⊙ of 56Ni and 0.08 M⊙ of Si-group elements, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  10. Simulating return signals of a spaceborne high-spectral resolution lidar channel at 532 nm

    Science.gov (United States)

    Xiao, Yu; Binglong, Chen; Min, Min; Xingying, Zhang; Lilin, Yao; Yiming, Zhao; Lidong, Wang; Fu, Wang; Xiaobo, Deng

    2018-06-01

    A high spectral resolution lidar (HSRL) system employs a narrow spectral filter to separate the particulate (cloud/aerosol) and molecular scattering components in lidar return signals, which improves the quality of the retrieved cloud/aerosol optical properties. To support the development of a future spaceborne HSRL system, a novel simulation technique was developed to simulate spaceborne HSRL return signals at 532 nm using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/aerosol extinction coefficient product and numerical weather prediction data. To validate the simulated data, a mathematical method for retrieving the particulate extinction coefficient from spaceborne HSRL return signals is also described, and particulate extinction coefficient profiles from the CALIPSO operational product are compared with those retrieved from the simulated spaceborne HSRL data. The final results demonstrate that the two agree well with each other, and a further uncertainty analysis shows that the relative uncertainties are acceptable for retrieving cloud and aerosol optical properties. This indicates that the return signals of the spaceborne HSRL molecular channel at 532 nm will be suitable for developing operational algorithms supporting a future spaceborne HSRL system.

  11. Monte-Carlo simulation of spatial resolution of an image intensifier in a saturation mode

    Science.gov (United States)

    Xie, Yuntao; Wang, Xi; Zhang, Yujun; Sun, Xiaoquan

    2018-04-01

    In order to investigate the spatial resolution of an image intensifier irradiated by a high-energy pulsed laser, a three-dimensional electron avalanche model was built and the cascade process of the electrons was numerically simulated. The influence of positive wall charges, which arise because charges extracted from the channel are not replenished during the avalanche, was considered by calculating their static electric field with the particle-in-cell (PIC) method. By tracing the trajectories of electrons throughout the image intensifier, the energy of the electrons at the output of the microchannel plate and the electron distribution at the phosphor screen are numerically calculated. The simulated energy distribution of the output electrons is in good agreement with experimental data from previous studies. In addition, the FWHM extent of the electron spot at the phosphor screen is calculated as a function of the number of incident electrons. The results demonstrate that the spot size increases significantly as the number of incident electrons increases. Furthermore, we obtained the MTFs of the image intensifier by Fourier transforming the point spread function at the phosphor screen. Comparison between the MTFs from our model and those from the analytic method shows that the spatial resolution of the image intensifier decreases significantly as the number of incident electrons increases, and the effect is particularly pronounced when the number of incident electrons exceeds 100.
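
    To make the last step concrete, here is a minimal sketch of how an MTF follows from a point-spread profile via a Fourier transform; the Gaussian spot and its width are assumptions standing in for the simulated electron distribution of the study.

    import numpy as np

    dx = 1e-3            # sample spacing on the screen [mm]
    x = np.arange(-2.0, 2.0, dx)
    fwhm = 0.05          # assumed spot FWHM [mm]; grows with the number of incident electrons
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    psf /= psf.sum()     # normalize so that MTF(0) = 1

    mtf = np.abs(np.fft.rfft(psf))           # modulus of the Fourier transform of the PSF
    freq = np.fft.rfftfreq(x.size, d=dx)     # spatial frequency [cycles/mm]
    print(freq[np.argmax(mtf < 0.1)], "cycles/mm at MTF = 0.1")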

  12. The simulation of medicanes in a high-resolution regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Cavicchia, Leone [Centro Euro-Mediterraneo per i Cambiamenti Climatici, Bologna (Italy); Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); Ca' Foscari University, Venice (Italy); Storch, Hans von [Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); University of Hamburg, Meteorological Institute, Hamburg (Germany)

    2012-11-15

    Medicanes, strong mesoscale cyclones with tropical-like features, develop occasionally over the Mediterranean Sea. Due to the scarcity of observations over sea and the coarse resolution of the long-term reanalysis datasets, it is difficult to study systematically the multidecadal statistics of sub-synoptic medicanes. Our goal is to assess the long-term variability and trends of medicanes, obtaining a long-term climatology through dynamical downscaling of the NCEP/NCAR reanalysis data. In this paper, we examine the robustness of this method and investigate the value added for the study of medicanes. To do so, we performed several climate mode simulations with a high resolution regional atmospheric model (CCLM) for a number of test cases described in the literature. We find that the medicanes are formed in the simulations, with deeper pressures and stronger winds than in the driving global NCEP reanalysis. The tracks are adequately reproduced. We conclude that our methodology is suitable for constructing multi-decadal statistics and scenarios of current and possible future medicane activities. (orig.)

  13. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    International Nuclear Information System (INIS)

    Li, Dan; Bou-Zeid, Elie

    2014-01-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)

  14. On the n-body problem on surfaces of revolution

    Science.gov (United States)

    Stoica, Cristina

    2018-05-01

    We explore the n-body problem, n ≥ 3, on a surface of revolution with a general interaction depending on the pairwise geodesic distance. Using the geometric methods of classical mechanics we determine a large set of properties. In particular, we show that Saari's conjecture fails on surfaces of revolution admitting a geodesic circle. We define homographic motions and, using the discrete symmetries, prove that when the masses are equal, these motions form an invariant manifold. On this manifold the dynamics are reducible to a one-degree-of-freedom system. We also find that for attractive interactions, regular n-gon-shaped relative equilibria with trajectories located on geodesic circles typically experience a pitchfork bifurcation. Some applications are included.

  15. An analytic n-body potential for bcc Iron

    Energy Technology Data Exchange (ETDEWEB)

    Pontikis, V. [Commissariat a l' Energie Atomique, DRECAM/LSI, CE de Saclay, Building 524, Room 40B, 91191 Gif-sur-Yvette Cedex (France)]. E-mail: Vassilis.Pontikis@cea.fr; Russier, V. [Centre d' Etudes de Chimie Metallurgique, CNRS UPR2801, 94407 Vitry-sur-Seine (France); Wallenius, J. [Royal Institute of Technology, Department of Nuclear and Reactor Physics, Stockholm (Sweden)

    2007-02-15

    We have developed an analytic n-body phenomenological potential for bcc iron made of two electron-density functionals representing repulsion via the Thomas-Fermi free-electron gas kinetic energy term and attraction via a square root functional similar to the second moment approximation of the tight-binding scheme. Electron-density is given by radial, hydrogen-like orbitals with effective charges taken as adjustable parameters fitted on experimental and ab-initio data. Although the set of adjustable parameters is small, prediction of static and dynamical properties of iron is in excellent agreement with the experiments. Advantages and shortcomings of this model are discussed with reference to published works.

  16. An analytic n-body potential for bcc Iron

    International Nuclear Information System (INIS)

    Pontikis, V.; Russier, V.; Wallenius, J.

    2007-01-01

    We have developed an analytic n-body phenomenological potential for bcc iron made of two electron-density functionals representing repulsion via the Thomas-Fermi free-electron gas kinetic energy term and attraction via a square root functional similar to the second moment approximation of the tight-binding scheme. Electron-density is given by radial, hydrogen-like orbitals with effective charges taken as adjustable parameters fitted on experimental and ab-initio data. Although the set of adjustable parameters is small, prediction of static and dynamical properties of iron is in excellent agreement with the experiments. Advantages and shortcomings of this model are discussed with reference to published works
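
    The general shape of such an n-body energy can be sketched as follows; the functional forms, the placeholder orbital density and all parameters below are illustrative assumptions, not the fitted iron potential of the record above.

    import numpy as np

    def site_energy(r_ij, a_rep=1.0, a_band=1.0, r0=2.48, p=5.0, q=2.0):
        # r_ij: array of neighbour distances [Angstrom] around one atom
        rho = np.exp(-p * (r_ij / r0 - 1.0))            # placeholder "orbital" density
        repulsion = a_rep * rho.sum() ** (5.0 / 3.0)    # Thomas-Fermi-like kinetic term
        attraction = -a_band * np.sqrt(np.exp(-2.0 * q * (r_ij / r0 - 1.0)).sum())  # second-moment band term
        return repulsion + attraction

    # e.g. a bcc Fe environment: 8 first neighbours at ~2.48 A, 6 second at ~2.87 A
    # site_energy(np.array([2.48] * 8 + [2.87] * 6))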

  17. Integral bounds for N-body total cross sections

    International Nuclear Information System (INIS)

    Osborn, T.A.; Bolle, D.

    1979-01-01

    We study the behavior of the total cross sections in the three- and N-body scattering problem. Working within the framework of the time-dependent two-Hilbert space scattering theory, we give a simple derivation of integral bounds for the total cross section for all processes initiated by the collision of two clusters. By combining the optical theorem with a trace identity derived by Jauch, Sinha, and Misra, we find, roughly speaking, that if the local pairwise interaction falls off faster than $r^{-3}$, then $\sigma_{\mathrm{tot}}(E)$ must decrease faster than $E^{-1/2}$ at high energy. This conclusion is unchanged if one introduces a class of well-behaved three-body interactions.

  18. Explicit solution to the N-body Calogero problem

    Energy Technology Data Exchange (ETDEWEB)

    Brink, L [Inst. of Theoretical Physics, CTH, Goeteborg (Sweden); Hansson, T H [Inst. of Theoretical Physics, Univ. Stockholm (Sweden); Vasiliev, M A [Dept. of Theoretical Physics, P.N. Lebedev Physical Inst., Moscow (Russia)

    1992-07-23

    We solve the N-body Calogero problem, i.e., N particles in one dimension subject to a two-body interaction of the form $\tfrac{1}{2}\sum_{i,j}\big[(x_i-x_j)^2 + g/(x_i-x_j)^2\big]$, by constructing annihilation and creation operators of the form $a_i^{\mp} = \tfrac{1}{\sqrt{2}}(x_i \pm i p_i)$, where $p_i$ is a modified momentum operator obeying Heisenberg-type commutation relations with $x_i$ that explicitly involve permutation operators. On the other hand, $D_j = i p_j$ can be interpreted as a covariant derivative corresponding to a flat connection. The relation to fractional statistics in 1+1 dimensions and anyons in a strong magnetic field is briefly discussed. (orig.)

  19. High-resolution WRF-LES simulations for real episodes: A case study for prealpine terrain

    Science.gov (United States)

    Hald, Cornelius; Mauder, Matthias; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    While in most large or regional scale weather and climate models turbulence is parametrized, LES (Large Eddy Simulation) allows for the explicit modeling of turbulent structures in the atmosphere. With the exponential growth in available computing power the technique has become more and more applicable, yet it has mostly been used to model idealized scenarios. We investigate how well WRF-LES can represent small-scale weather patterns. The results are evaluated against different hydrometeorological measurements. We use WRF-LES to model the diurnal cycle for a 48 hour episode in summer over moderately complex terrain in southern Germany. The model setup uses a high-resolution digital elevation model together with land-use and vegetation maps. The atmospheric boundary conditions are set by reanalysis data. Schemes for radiation and microphysics and a land-surface model are employed. The biggest challenge in modeling arises from the high horizontal resolution of dx = 30 m, since the subgrid-scale model then requires a vertical resolution dz ≈ 10 m for optimal results. We observe model instabilities and present solutions such as smoothing of the surface input data, careful positioning of the model domain and shortening of the model time step down to a twentieth of a second. Model results are compared to data from an array of instruments including eddy covariance stations, LIDAR, RASS, SODAR, weather stations and unmanned aerial vehicles. All instruments are part of the TERENO pre-Alpine area and were employed in the orchestrated measurement campaign ScaleX in July 2015. Examination of the results shows reasonable agreement between model and measurements in temperature and moisture profiles. Modeled wind profiles are highly dependent on the vertical resolution and are in accordance with measurements only at higher wind speeds. A direct comparison of turbulence is made difficult by the purely statistical character of turbulent motions in the model.
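
    A back-of-the-envelope check (not from the record above, and ignoring the details of WRF's split-explicit time integration) of why a 30 m grid pushes the time step to fractions of a second, using a simple CFL bound with the speed of sound as the fastest signal; all numbers are assumptions for illustration.

    dx = 30.0       # horizontal grid spacing [m]
    c_s = 340.0     # speed of sound [m/s], taken as the fastest resolved signal
    cfl = 0.5       # assumed target Courant number
    dt_max = cfl * dx / c_s
    print(f"dt should stay below ~{dt_max:.3f} s")   # ~0.044 s, i.e. well under one second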

  20. The influence of model spatial resolution on simulated ozone and fine particulate matter for Europe: implications for health impact assessments

    Science.gov (United States)

    Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.

    2018-04-01

    We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (˜ 140 km) and a finer (˜ 50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (˜ 10 and ˜ 6 %, respectively). In contrast, simulated O3 concentrations at the coarse resolution are lower in summer and autumn (˜ -1 and ˜ -4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse compared to the finer resolution (˜ -8 and ˜ -6 %, respectively) but higher in summer and autumn (˜ 29 and ˜ 8 %, respectively). Simulated PM2.5 values are also mostly related to differences in convective rainfall between the two resolutions for all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country to country variations in estimated AF for the two resolutions. Warm season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe when
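
    As a sketch of how an attributable fraction of this kind is typically computed: the study's actual concentration-response functions, thresholds and coefficients are not reproduced here, and the beta value below is only a commonly quoted example, not the one used by the authors.

    import numpy as np

    def attributable_fraction(conc, beta, threshold=0.0):
        # standard epidemiological form AF = 1 - exp(-beta * X), with X the
        # exposure metric above an assumed threshold
        x = np.maximum(conc - threshold, 0.0)
        return 1.0 - np.exp(-beta * x)

    # e.g. attributable_fraction(annual_mean_pm25, beta=np.log(1.06) / 10.0),
    # assuming a 6 % mortality increase per 10 ug/m3 as an illustrative coefficient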

  1. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-hoon [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Agertz, Oscar [Department of Physics, University of Surrey, Guildford, Surrey, GU2 7XH (United Kingdom); Teyssier, Romain; Feldmann, Robert [Centre for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Zurich, 8057 (Switzerland); Butler, Michael J. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, D-69120 Heidelberg (Germany); Choi, Jun-Hwan [Department of Astronomy, University of Texas, Austin, TX 78712 (United States); Keller, Ben W. [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada); Lupi, Alessandro [Institut d’Astrophysique de Paris, Sorbonne Universites, UPMC Univ Paris 6 et CNRS, F-75014 Paris (France); Quinn, Thomas; Wallace, Spencer [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Revaz, Yves [Institute of Physics, Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leitner, Samuel N. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Shen, Sijing [Kavli Institute for Cosmology, University of Cambridge, Cambridge, CB3 0HA (United Kingdom); Smith, Britton D., E-mail: me@jihoonkim.org [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Collaboration: AGORA Collaboration; and others

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  2. High-resolution nested model simulations of the climatological circulation in the southeastern Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    S. Brenner

    2003-01-01

    As part of the Mediterranean Forecasting System Pilot Project (MFSPP) we have implemented a high-resolution (2 km horizontal grid, 30 sigma levels) version of the Princeton Ocean Model for the southeastern corner of the Mediterranean Sea. The domain extends 200 km offshore and includes the continental shelf and slope, and part of the open sea. The model is nested in an intermediate resolution (5.5 km grid) model that covers the entire Levantine, Ionian, and Aegean Sea. The nesting is one-way, so that velocity, temperature, and salinity along the boundaries are interpolated from the relevant intermediate model variables. An integral constraint is applied so that the net mass flux across the open boundaries is identical to the net flux in the intermediate model. The model is integrated for three perpetual years with surface forcing specified from monthly mean climatological wind stress and heat fluxes. The model is stable and spins up within the first year to produce a repeating seasonal cycle throughout the three-year integration period. While there is some internal variability evident in the results, it is clear that, due to the relatively small domain, the results are strongly influenced by the imposed lateral boundary conditions. The results closely follow the simulation of the intermediate model. The main improvement is in the simulation over the narrow shelf region, which is not adequately resolved by the coarser grid model. Comparisons with direct current measurements over the shelf and slope show reasonable agreement despite the limitations of the climatological forcing. The model correctly simulates the direction and the typical speeds of the flow over the shelf and slope, but has difficulty properly reproducing the seasonal cycle in the speed. Key words: Oceanography: general (continental shelf processes; numerical modelling; ocean prediction)

  3. Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls

    Science.gov (United States)

    Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo

    2016-08-01

    This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations are run at a conventional 10 km and a high-resolution 2 km spatial resolution. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km2) and applied with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m3 s-1, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with similar amplitude to observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall over the Kasahori dam catchment is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.
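
    The position-shifting step can be illustrated with the following hedged sketch; the array names, the search radius and the use of a simple periodic grid roll are assumptions, not the authors' implementation.

    import numpy as np

    def best_shift(rain, catchment_mask, max_shift=10):
        # rain: (time, y, x) rainfall field of one ensemble member
        # catchment_mask: (y, x) boolean mask of the dam catchment
        cum = rain.sum(axis=0)                     # cumulative rainfall map
        best, best_val = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                # periodic wrap at the domain edge is ignored for simplicity
                shifted = np.roll(np.roll(cum, dy, axis=0), dx, axis=1)
                val = shifted[catchment_mask].mean()
                if val > best_val:
                    best, best_val = (dy, dx), val
        return best    # grid offset maximizing catchment-averaged cumulative rainfall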

  4. High resolution geodynamo simulations with strongly-driven convection and low viscosity

    Science.gov (United States)

    Schaeffer, Nathanael; Fournier, Alexandre; Jault, Dominique; Aubert, Julien

    2015-04-01

    Numerical simulations have been successful at explaining the magnetic field of the Earth for 20 years. However, the regime in which these simulations operate is in many respects very far from what is expected in the Earth's core. By reviewing previous work, we find that it appears difficult to have both low viscosity (low magnetic Prandtl number) and strong magnetic fields in numerical models (large ratio of magnetic over kinetic energy, a.k.a. the inverse squared Alfvén number). In order to better understand the dynamics and turbulence of the core, we have run a series of three simulations with increasingly demanding parameters. The last simulation is at the limit of what current codes can do on today's supercomputers, with a resolution of 2688 grid points in longitude, 1344 in latitude, and 1024 radial levels. We will show various features of these numerical simulations, including what appear to be trends as the parameters are pushed toward those of the Earth. The dynamics are very rich. From short to long time scales, we observe at large spatial scales: inertial waves, torsional Alfvén waves, columnar convective overturn dynamics and long-term thermal winds. In addition, the dynamics inside and outside the tangent cylinder seem to follow different routes. We find that the ohmic dissipation largely dominates the viscous one and that the magnetic energy dominates the kinetic energy. The magnetic field seems to play an ambiguous role. Despite the large magnetic field, which has an important impact on the flow, we find that the force balance for the mean flow is a thermal wind balance, and that the scale of convective cells is still dominated by viscous effects.

  5. Use of High-Resolution WRF Simulations to Forecast Lightning Threat

    Science.gov (United States)

    McCaul, E. W., Jr.; LaCasse, K.; Goodman, S. J.; Cecil, D. J.

    2008-01-01

    Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors aloft in storms. This relationship is exploited, in conjunction with the capabilities of cloud-resolving forecast models such as WRF, to forecast explicitly the threat of lightning from convective storms using selected output fields from the model forecasts. The simulated vertical flux of graupel at -15 °C and the shape of the simulated reflectivity profile are tested in this study as proxies for charge separation processes and their associated lightning risk. Our lightning forecast method differs from others in that it is entirely based on high-resolution simulation output, without reliance on any climatological data. Short (6-8 h) simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity fields, and METAR and ACARS data yield satisfactory simulations. Analyses of the lightning threat fields suggest that both the graupel flux and reflectivity profile approaches, when properly calibrated, can yield reasonable lightning threat forecasts, although an ensemble approach is probably desirable in order to reduce the tendency for misplacement of modeled storms to hurt the accuracy of the forecasts. Our lightning threat forecasts are also compared to other more traditional means of forecasting thunderstorms, such as those based on inspection of the convective available potential energy field.
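
    A hedged sketch of the graupel-flux proxy idea: field names, units and the calibration constant are assumptions, and the actual calibration against Lightning Mapping Array data is not reproduced.

    import numpy as np

    def graupel_flux_threat(w, q_graupel, temp, k_cal=1.0):
        # w, q_graupel, temp: (nz, ny, nx) model fields (m/s, kg/kg, K);
        # k_cal is an assumed calibration constant mapping flux to flash rate
        nz, ny, nx = temp.shape
        threat = np.zeros((ny, nx))
        for j in range(ny):
            for i in range(nx):
                # pick the model level closest to the -15 C isotherm
                k15 = np.argmin(np.abs(temp[:, j, i] - (273.15 - 15.0)))
                threat[j, i] = k_cal * w[k15, j, i] * q_graupel[k15, j, i]
        return threat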

  6. Detailed high-resolution three-dimensional simulations of OMEGA separated reactants inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Haines, Brian M., E-mail: bmhaines@lanl.gov; Fincke, James R.; Shah, Rahul C.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B. [Los Alamos National Laboratory, MS T087, Los Alamos, New Mexico 87545 (United States); Grim, Gary P. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)

    2016-07-15

    We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. Through these processes, the long

  7. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer

    Science.gov (United States)

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-01

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
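
    For orientation, a minimal 1-D staggered-grid acoustic FDTD sketch of the kind of update underlying such solvers; the study's 3-D multi-resolution, GPU-accelerated implementation and skull model are not reproduced, and the material values are generic numbers for water.

    import numpy as np

    c, rho = 1500.0, 1000.0          # sound speed [m/s], density [kg/m^3] (water)
    f0 = 250e3                       # source frequency [Hz], as for the transducer above
    dx = c / f0 / 20.0               # ~20 grid points per wavelength
    dt = 0.5 * dx / c                # CFL-limited time step
    n = 2000
    p = np.zeros(n)                  # pressure nodes
    u = np.zeros(n + 1)              # particle-velocity nodes, staggered by dx/2

    for step in range(4000):
        # update particle velocity from the pressure gradient
        u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
        # update pressure from the velocity divergence
        p -= rho * c**2 * dt / dx * (u[1:] - u[:-1])
        # soft source: a continuous tone injected at the left end
        p[0] += np.sin(2.0 * np.pi * f0 * step * dt)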

  8. PDF added value of a high resolution climate simulation for precipitation

    Science.gov (United States)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time offers a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high-resolution simulation with the WRF model. We use results from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used range from rain-gauge precipitation records to regular observational grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese gridded precipitation dataset at 0.2° × 0.2°, developed from observed rain-gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher-resolution simulation, regarding both the full PDF and the extremes. The method has the potential to be applied to other simulation exercises and to evaluate other variables.
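
    A minimal sketch of a PDF skill score of the kind referred to above, computed as the overlap of binned frequencies in the spirit of commonly used Perkins-type scores; the specific tail weighting proposed in the study is not reproduced, and the bin count is an assumption.

    import numpy as np

    def pdf_skill(model, obs, bins=50):
        # common bin edges spanning both samples
        lo = min(model.min(), obs.min())
        hi = max(model.max(), obs.max())
        edges = np.linspace(lo, hi, bins + 1)
        fm, _ = np.histogram(model, bins=edges)
        fo, _ = np.histogram(obs, bins=edges)
        fm = fm / fm.sum()
        fo = fo / fo.sum()
        return np.minimum(fm, fo).sum()   # 1 = identical PDFs, 0 = no overlap

    # e.g. pdf_skill(wrf_9km_daily_precip, gauge_daily_precip)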

  9. Quantifying uncertainty in Transcranial Magnetic Stimulation - A high resolution simulation study in ICBM space.

    Science.gov (United States)

    Toschi, Nicola; Keck, Martin E; Welt, Tobias; Guerrisi, Maria

    2012-01-01

    Transcranial Magnetic Stimulation offers enormous potential for noninvasive brain stimulation. While it is known that brain tissue significantly "reshapes" induced field and charge distributions, most modeling investigations to date have focused on single-subject data with limited generality. Further, the effects of the significant uncertainties which exist in the simulation (i.e. brain conductivity distributions) and stimulation (e.g. coil positioning and orientations) setup have not been quantified. In this study, we construct a high-resolution anisotropic head model in standard ICBM space, which can be used as a population-representative standard for bioelectromagnetic simulations. Further, we employ Monte-Carlo simulations in order to quantify how uncertainties in conductivity values propagate all the way to induced fields and currents, demonstrating significant, regionally dependent dispersions in values which are commonly assumed "ground truth". This framework can be leveraged in order to quantify the effect of any type of uncertainty in noninvasive brain stimulation and bears relevance in all applications of TMS, both investigative and therapeutic.
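
    A hedged sketch of Monte-Carlo propagation of conductivity uncertainty: the induced_field function below is a hypothetical stand-in for the actual anisotropic head-model solver, and the conductivity distributions are assumptions chosen only to illustrate the workflow.

    import numpy as np

    rng = np.random.default_rng(0)

    def induced_field(sigma_gm, sigma_wm):
        # placeholder relation, NOT the physics of the study above
        return 100.0 / (1.0 + 0.5 * sigma_gm + 0.3 * sigma_wm)

    # assumed conductivity spreads [S/m]; exact distributions are illustrative
    sigma_gm = rng.normal(0.33, 0.05, size=10_000)   # grey matter
    sigma_wm = rng.normal(0.14, 0.03, size=10_000)   # white matter
    fields = induced_field(sigma_gm, sigma_wm)

    iqr = np.percentile(fields, 75) - np.percentile(fields, 25)
    print(f"median {np.median(fields):.1f}, IQR {iqr:.1f}")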

  10. Experimental Investigation and High Resolution Simulation of In-Situ Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen; Tony Kovscek

    2008-04-30

    This final technical report describes work performed for the project 'Experimental Investigation and High Resolution Numerical Simulator of In-Situ Combustion Processes', DE-FC26-03NT15405. In summary, this work improved our understanding of in-situ combustion (ISC) process physics and oil recovery. This understanding was translated into improved conceptual models and a suite of software algorithms that extended predictive capabilities. We pursued experimental, theoretical, and numerical tasks during the performance period. The specific project objectives were (i) experimental identification of chemical additives/injectants that improve combustion performance, and delineation of the physics of the improved performance, (ii) establishment of a benchmark one-dimensional experimental data set for verification of in-situ combustion dynamics computed by simulators, (iii) development of improved numerical methods that describe in-situ combustion more accurately, and (iv) laying of the underpinnings of a highly efficient, 3D, in-situ combustion simulator using adaptive mesh refinement techniques and parallelization. We believe that project goals were met and exceeded as discussed.

  11. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    Science.gov (United States)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  12. Adaptive resolution simulation of polarizable supramolecular coarse-grained water models

    International Nuclear Information System (INIS)

    Zavadlav, Julija; Praprotnik, Matej; Melo, Manuel N.; Marrink, Siewert J.

    2015-01-01

    Multiscale simulation methods, such as the adaptive resolution scheme, are becoming increasingly popular due to their significant computational advantages with respect to conventional atomistic simulations. For these kinds of simulations, it is essential to develop accurate multiscale water models that can be used to solvate biophysical systems of interest. Recently, a 4-to-1 mapping was used to couple the bundled simple point charge water with the MARTINI model. Here, we extend the supramolecular mapping to coarse-grained models with explicit charges. In particular, the two tested models are the polarizable water and big multiple water models associated with the MARTINI force field. As the corresponding coarse-grained representations consist of several interaction sites, we couple the orientational degrees of freedom of the atomistic and coarse-grained representations via a harmonic energy penalty term. This additional energy term aligns the dipole moments of both representations. We test this coupling by studying the system under an applied static external electric field. We show that our approach leads to the correct reproduction of the relevant structural and dynamical properties.
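
    A minimal sketch of the kind of harmonic orientational coupling described above, assuming a simple quadratic penalty on the angle between the atomistic and coarse-grained dipole vectors; the force constant k_align and the function name are illustrative, not the values or implementation used in the paper.

    ```python
    # Hedged sketch: harmonic penalty that aligns an atomistic dipole with its
    # coarse-grained counterpart (assumed functional form, placeholder constant).
    import numpy as np

    def dipole_alignment_penalty(mu_at, mu_cg, k_align=50.0):
        """Harmonic energy penalty on the angle between two dipole vectors.

        mu_at, mu_cg : dipole moment vectors (atomistic and coarse-grained)
        k_align      : assumed force constant, energy units per rad^2
        """
        cos_theta = np.dot(mu_at, mu_cg) / (np.linalg.norm(mu_at) * np.linalg.norm(mu_cg))
        theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
        return 0.5 * k_align * theta**2

    # Example: nearly aligned dipoles incur only a small penalty.
    print(dipole_alignment_penalty(np.array([0.0, 0.0, 1.0]),
                                   np.array([0.1, 0.0, 1.0])))
    ```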

  13. Simulating SiD Calorimetry: Software Calibration Procedures and Jet Energy Resolution

    International Nuclear Information System (INIS)

    Cassell, R.

    2009-01-01

    Simulated calorimeter performance in the SiD detector is examined. The software calibration procedures are described, as well as the perfect pattern recognition PFA reconstruction. Performance of the SiD calorimeters is summarized with jet energy resolutions from calorimetry only, perfect pattern recognition and the SiD PFA algorithm. Presented at LCWS08(1). Our objective is to simulate the calorimeter performance of the SiD detector, with and without a Particle Flow Algorithm (PFA). Full Geant4 simulations using SLIC(2) and the SiD simplified detector geometry (SiD02) are used. In this geometry, the calorimeters are represented as layered cylinders. The EM calorimeter is Si/W, with 20 layers of 2.5 mm W and 10 layers of 5 mm W, segmented in 3.5 x 3.5 mm² cells. The HAD calorimeter is RPC/Fe, with 40 layers of 20 mm Fe and a digital readout, segmented in 10 x 10 mm² cells. The barrel detectors are layered in radius, while the endcap detectors are layered in z (along the beam axis).

  14. Simulation of the oxidation pathway on Si(100) using high-resolution EELS

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Conor [Consiglio Nazionale delle Ricerche, Istituto di Struttura della Materia (CNR-ISM), Rome (Italy); Dipartimento di Fisica, Universita di Roma 'Tor Vergata', Roma (Italy); European Theoretical Spectroscopy Facility (ETSF), Roma (Italy)]; Caramella, Lucia; Onida, Giovanni [Dipartimento di Fisica, Universita degli Studi di Milano (Italy); European Theoretical Spectroscopy Facility (ETSF), Milano (Italy)]

    2012-06-15

    We compute high-resolution electron energy loss spectra (HREELS) of possible structural motifs that form during the dynamic oxidation process on Si(100), including the important metastable precursor silanone and an adjacent-dimer bridge (ADB) structure that may seed oxide formation. Spectroscopic fingerprints of single site, silanone, and 'seed' structures are identified and related to changes in the surface bandstructure of the clean surface. Incorporation of oxygen into the silicon lattice through adsorption and dissociation of water is also examined. Results are compared to available HREELS spectra and surface optical data, which are closely related. Our simulations confirm that HREELS offers complementary evidence to surface optical spectroscopy, and show that its high sensitivity allows it to distinguish between energetically and structurally similar oxidation models. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  15. Updated vegetation information in high resolution regional climate simulations using WRF

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.

    Climate studies show that the frequency of heat wave events and above-average high temperatures during the summer months over Europe will increase in the coming decades. Such climatic changes and long-term meteorological conditions will impact the seasonal development of vegetation and ultimately...... modify the energy distribution at the land surface. In weather and climate models it is important to represent the vegetation variability accurately to obtain reliable results. The Weather Research and Forecasting (WRF) model uses a green vegetation fraction (GVF) climatology to represent the seasonal...... or changes in management practice since it was derived more than twenty years ago. In this study, a new high resolution, high quality GVF product is applied in a WRF climate simulation over Denmark during the 2006 heat wave year. The new GVF product reflects the year 2006 and it was previously tested...

  16. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    Science.gov (United States)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

    Over the past few decades, there have been many process-based crop models developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale, climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional-scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and
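
    As a rough illustration of the Python glue layer mentioned above (the one-way nesting of RCM output into the crop model), the sketch below extracts a daily weather series for one grid cell and writes a simple site weather file. The variable names (rsds, tasmax, tasmin, pr), the unit conversions and the output layout are assumptions, not the study's actual schema or the APSIM input format.

    ```python
    # Hedged sketch of an RCM-to-crop-model weather pipeline (assumed variable
    # names and output layout; not the study's actual code or file format).
    import xarray as xr

    def rcm_cell_to_weather_file(rcm_file, lat, lon, out_path):
        ds = xr.open_dataset(rcm_file)                     # gridded daily RCM output
        cell = ds.sel(lat=lat, lon=lon, method="nearest")  # nearest grid cell
        times = cell.indexes["time"]                       # pandas DatetimeIndex
        with open(out_path, "w") as f:
            f.write("year,doy,srad_MJm2,tmax_C,tmin_C,rain_mm\n")
            for i, ts in enumerate(times):
                f.write(f"{ts.year},{ts.dayofyear},"
                        f"{float(cell['rsds'][i]) * 0.0864:.2f},"   # W m-2 -> MJ m-2 day-1
                        f"{float(cell['tasmax'][i]) - 273.15:.1f},"
                        f"{float(cell['tasmin'][i]) - 273.15:.1f},"
                        f"{float(cell['pr'][i]) * 86400.0:.1f}\n")  # kg m-2 s-1 -> mm day-1

    # Hypothetical usage:
    # rcm_cell_to_weather_file("wrf_daily.nc", lat=36.8, lon=-119.8, out_path="site_weather.csv")
    ```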

  17. The shape of dark matter haloes in the Aquarius simulations : Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C.A.; Sales, L. V.; Helmi, A.; Reyle, C; Robin, A; Schultheis, M

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  18. The shape of dark matter haloes in the Aquarius simulations: Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C. A.; Sales, L. V.; Helmi, A.

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  19. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    Science.gov (United States)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the end of 60-meter wire booms to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we will discuss the implementation of a high resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position and subsequently, data processing algorithms resulted in excellent spatial and temporal resolutions. This method was repeated in a parametric study for various lengths, root tensions and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, Coulomb, and combinations thereof. Our data analysis and dynamics models have shown that the intrinsic damping for the baseline boom is insufficient, thereby forcing project management to explore mitigation strategies.
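
    One of the regression models listed above (nonlinear viscous damping) amounts to fitting an exponentially decaying oscillation to the measured angular position. The sketch below does this on synthetic data with scipy; the functional form, initial guesses and parameter names are placeholders rather than the MMS flight values.

    ```python
    # Hedged illustration: fit a viscously damped oscillation to angular-decay
    # data. The data here are synthetic stand-ins for the optical measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def damped_oscillation(t, theta0, decay_rate, omega_d, phase):
        """Angle vs. time for a lightly damped pendulum-like mode."""
        return theta0 * np.exp(-decay_rate * t) * np.cos(omega_d * t + phase)

    # Synthetic stand-in for the optically measured angular position (degrees).
    t = np.linspace(0.0, 200.0, 2000)
    theta = damped_oscillation(t, 5.0, 0.01, 0.8, 0.0) + 0.02 * np.random.randn(t.size)

    popt, _ = curve_fit(damped_oscillation, t, theta, p0=[4.0, 0.02, 0.7, 0.0])
    print(f"fitted decay rate: {popt[1]:.4f} 1/s -> time constant {1.0 / popt[1]:.1f} s")
    ```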

  20. Interactive desktop analysis of high resolution simulations: application to turbulent plume dynamics and current sheet formation

    International Nuclear Information System (INIS)

    Clyne, John; Mininni, Pablo; Norton, Alan; Rast, Mark

    2007-01-01

    The ever increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, have resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504² by 2048 points for the first and 1536³ points for the second, to overwhelm the interactive capabilities of typically available analysis resources.

  1. Influence of spatial resolution on precipitation simulations for the central Andes Mountains

    Science.gov (United States)

    Trachte, Katja; Bendix, Jörg

    2013-04-01

    The climate of South America is highly influenced by the north-south oriented Andes Mountains. Their complex structure causes modifications of large-scale atmospheric circulations resulting in various mesoscale phenomena as well as a high variability in the local conditions. Due to their height and length, the terrain generates distinct climate conditions on the western and eastern slopes. While in the tropical regions along the western flanks the conditions are cold and arid, the eastern slopes are dominated by warm-moist and rainy air coming from the Amazon basin. Below 35° S the situation reverses, with rather semiarid conditions in the eastern part and a temperate rainy climate along southern Chile. Generally, global circulation models (GCMs) describe the state of the global climate and its changes, but are unable to capture regional or even local features due to their coarse resolution. This is particularly true in heterogeneous regions such as the Andes Mountains, where local driving features, e.g. local circulation systems, vary strongly on small scales and thus lead to a high variability of rainfall distributions. An appropriate technique to overcome this problem and to gain regional and local scale rainfall information is the dynamical downscaling of the global data using a regional climate model (RCM). The poster presents results of the evaluation of the performance of the Weather Research and Forecasting (WRF) model over South America with special focus on the central Andes Mountains of Ecuador. A sensitivity study regarding the cumulus parametrization, microphysics, boundary layer processes and the radiation budget is conducted. With 17 simulations, consisting of 16 parametrization scheme combinations and 1 default run, a suitable model set-up for climate research in this region is to be identified. The simulations were conducted in a two-way nested mode i) to examine the best physics scheme combination for the target and ii) to

  2. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The high resolution simulation better resolves precipitation changes than its coarse resolution forcing, which contributes dominantly to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights about the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.
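
    The decomposition referred to above is commonly written, to first order in the changes and neglecting surface-pressure variations and nonlinear cross terms, as in the sketch below; the notation is assumed here (overbars are monthly means, primes are transient-eddy departures, and δ denotes the change between the two periods) and may differ in detail from the paper's exact formulation.

    ```latex
    % Sketch of a commonly used decomposition of changes in net precipitation
    % (P - E); notation is assumed, with p_s the surface pressure, rho_w the
    % density of water and g the gravitational acceleration.
    \begin{equation*}
    \delta(P - E) \;\approx\; -\frac{1}{\rho_w g}\,\nabla\cdot\!\int_0^{p_s}
       \Big(
       \underbrace{\overline{\mathbf{u}}\,\delta\overline{q}}_{\text{thermodynamic}}
       \;+\;
       \underbrace{\delta\overline{\mathbf{u}}\,\overline{q}}_{\text{dynamic}}
       \;+\;
       \underbrace{\delta\big(\overline{\mathbf{u}'q'}\big)}_{\text{transient eddies}}
       \Big)\,dp
    \end{equation*}
    ```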

  3. Geometrical themes inspired by the n-body problem

    CERN Document Server

    Herrera, Haydeé; Herrera, Rafael

    2018-01-01

    Presenting a selection of recent developments in geometrical problems inspired by the N-body problem, these lecture notes offer a variety of approaches to study them, ranging from variational to dynamical, while developing new insights, making geometrical and topological detours, and providing historical references. A. Guillot’s notes aim to describe differential equations in the complex domain, motivated by the evolution of N particles moving on the plane subject to the influence of a magnetic field. Guillot studies such differential equations using different geometric structures on complex curves (in the sense of W. Thurston) in order to find isochronicity conditions.   R. Montgomery’s notes deal with a version of the planar Newtonian three-body equation. Namely, he investigates the problem of whether every free homotopy class is realized by a periodic geodesic. The solution involves geometry, dynamical systems, and the McGehee blow-up. A novelty of the approach is the use of energy-balance in order t...

  4. Near transferable phenomenological n-body potentials for noble metals.

    Science.gov (United States)

    Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David

    2017-09-06

    We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face-centred cubic versus the hexagonal close-packed stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.
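
    For orientation, the second-moment (tight-binding) part of such a cohesion model is often written as a pair repulsion plus the square root of a local "band" term. The sketch below implements that generic form only; the functional details, cutoff and parameter values are illustrative placeholders and are not the fitted model or parameters of this paper.

    ```python
    # Generic second-moment-approximation (SMA) energy, for illustration only.
    import numpy as np

    def sma_energy(positions, A=0.2, xi=1.8, p=10.0, q=3.0, r0=2.88, rcut=5.5):
        """Total energy (eV): pair repulsion plus sqrt of a second-moment term."""
        n = len(positions)
        e_rep = np.zeros(n)
        rho = np.zeros(n)  # second moment of the local density of states
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = np.linalg.norm(positions[i] - positions[j])
                if r > rcut:
                    continue
                e_rep[i] += A * np.exp(-p * (r / r0 - 1.0))
                rho[i] += xi**2 * np.exp(-2.0 * q * (r / r0 - 1.0))
        return float(np.sum(e_rep - np.sqrt(rho)))

    # Example: energy of a small face-centred-cubic-like fragment (positions in Å).
    fcc_fragment = np.array([[0.0, 0.0, 0.0], [0.0, 2.04, 2.04],
                             [2.04, 0.0, 2.04], [2.04, 2.04, 0.0]])
    print(sma_energy(fcc_fragment))
    ```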

  5. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    2003-01-01

    Full Text Available The seasonal variability of the water masses and transport in the Malta Channel and proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy-resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle. The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation, and a westward winter intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by internal dynamics, to be followed in

  6. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    Full Text Available The seasonal variability of the water masses and transport in the Malta Channel and proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy-resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle.

    The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation, and a westward winter intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by

  7. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Science.gov (United States)

    Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Olusegun, Christiana; Klein, Cornelia; Hamann, Ilse; Salack, Seyni; Bliefernicht, Jan; Kunstmann, Harald

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ) with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980-2010 and the two future periods 2020-2050 and 2070-2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent in the north and

  8. High resolution crop growth simulation for identification of potential adaptation strategies under climate change

    Science.gov (United States)

    Kim, K. S.; Yoo, B. H.

    2016-12-01

    Impact assessment of climate change on crop production would facilitate planning of adaptation strategies. Because socio-environmental conditions differ by local area, it would be advantageous to assess potential adaptation measures for a specific area. The objectives of this study were to develop a crop growth simulation system at a very high spatial resolution, e.g., 30 m, and to assess different adaptation options, including shifts of planting date and the use of different cultivars. The Decision Support System for Agrotechnology Transfer (DSSAT) model was used to predict yields of soybean and maize in Korea. Gridded data for climate and soil were used to prepare input data for the DSSAT model. Weather input data were prepared at the resolution of 30 m using bilinear interpolation from gridded climate scenario data. Those climate data were obtained from the Korea Meteorological Administration. The spatial resolution of temperature and precipitation was 1 km, whereas that of solar radiation was 12.5 km. Soil series data at the 30 m resolution were obtained from the soil database operated by the Rural Development Administration, Korea. The SOL file, which is a soil input file for the DSSAT model, was prepared using the physical and chemical properties of a given soil series, which were available from the soil database. Crop yields were predicted for potential adaptation options based on planting date and cultivar. For example, 10 planting dates and three cultivars were used to identify ideal management options for climate change adaptation. In the prediction of maize yield, a combination of 20 planting dates and two cultivars was used as management options. Predicted crop yields differed by site even within a relatively small region. For example, the maxima of the average yields for the 2001-2010 seasons differed by site within a county whose area is 520 km² (Fig. 1). There was also spatial variation in the ideal management option in the region (Fig. 2). These results suggested that local
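
    The 30 m weather fields mentioned above come from bilinear interpolation of the surrounding coarse grid nodes; a minimal version of that step is sketched below with made-up coordinates and temperatures.

    ```python
    # Minimal bilinear-interpolation sketch: estimate a weather value at a 30 m
    # target point from the four surrounding 1 km grid nodes (toy numbers only).
    def bilinear(x, y, x0, x1, y0, y1, q00, q10, q01, q11):
        """Bilinear interpolation of corner values q** to the point (x, y)."""
        tx = (x - x0) / (x1 - x0)
        ty = (y - y0) / (y1 - y0)
        return ((1 - tx) * (1 - ty) * q00 + tx * (1 - ty) * q10
                + (1 - tx) * ty * q01 + tx * ty * q11)

    # Example: daily maximum temperature (deg C) at the four surrounding nodes.
    print(bilinear(x=350.0, y=720.0,
                   x0=0.0, x1=1000.0, y0=0.0, y1=1000.0,
                   q00=27.1, q10=27.8, q01=26.5, q11=27.0))
    ```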

  9. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
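
    Hybrid schemes of the kind evaluated above hinge on a shock sensor that decides where to blend in the dissipative flux. The sketch below shows one common choice, a Ducros-type dilatation/vorticity sensor, purely as an illustration; it is not taken from the paper, and the threshold value is an assumption.

    ```python
    # Illustrative Ducros-type shock sensor: flag regions where dilatation
    # dominates vorticity, and use the shock-capturing flux only there.
    import numpy as np

    def ducros_sensor(u, v, dx, dy, eps=1e-30):
        """Return a field in [0, 1]; values near 1 flag shock-dominated regions."""
        dudx = np.gradient(u, dx, axis=0)
        dvdy = np.gradient(v, dy, axis=1)
        dudy = np.gradient(u, dy, axis=1)
        dvdx = np.gradient(v, dx, axis=0)
        div = dudx + dvdy            # dilatation
        vort = dvdx - dudy           # vorticity (2D, z-component)
        return div**2 / (div**2 + vort**2 + eps)

    def hybrid_flag(u, v, dx, dy, threshold=0.65):
        """True where the dissipative (e.g. WENO) flux should be used; the
        threshold here is an assumed tuning parameter."""
        return ducros_sensor(u, v, dx, dy) > threshold
    ```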

  10. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    Science.gov (United States)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.

  11. Aerosol midlatitude cyclone indirect effects in observations and high-resolution simulations

    Directory of Open Access Journals (Sweden)

    D. T. McCoy

    2018-04-01

    Full Text Available Aerosol–cloud interactions are a major source of uncertainty in inferring the climate sensitivity from the observational record of temperature. The adjustment of clouds to aerosol is a poorly constrained aspect of these aerosol–cloud interactions. Here, we examine the response of midlatitude cyclone cloud properties to a change in cloud droplet number concentration (CDNC). Idealized experiments in high-resolution, convection-permitting global aquaplanet simulations with constant CDNC are compared to 13 years of remote-sensing observations. Observations and idealized aquaplanet simulations agree that increased warm conveyor belt (WCB) moisture flux into cyclones is consistent with higher cyclone liquid water path (CLWP). When CDNC is increased a larger LWP is needed to give the same rain rate. The LWP adjusts to allow the rain rate to be equal to the moisture flux into the cyclone along the WCB. This results in an increased CLWP for higher CDNC at a fixed WCB moisture flux in both observations and simulations. If observed cyclones in the top and bottom tercile of CDNC are contrasted it is found that they have not only higher CLWP but also cloud cover and albedo. The difference in cyclone albedo between the cyclones in the top and bottom third of CDNC is observed by CERES to be between 0.018 and 0.032, which is consistent with a 4.6–8.3 Wm−2 in-cyclone enhancement in upwelling shortwave when scaled by annual-mean insolation. Based on a regression model to observed cyclone properties, roughly 60 % of the observed variability in CLWP can be explained by CDNC and WCB moisture flux.

  12. Monte-Carlo simulation of a high-resolution inverse geometry spectrometer on the SNS. Long Wavelength Target Station

    International Nuclear Information System (INIS)

    Bordallo, H.N.; Herwig, K.W.

    2001-01-01

    Using the Monte-Carlo simulation program McStas, we present the design principles of the proposed high-resolution inverse geometry spectrometer on the SNS-Long Wavelength Target Station (LWTS). The LWTS will provide the high flux of long wavelength neutrons at the requisite pulse rate required by the spectrometer design. The resolution of this spectrometer lies between that routinely achieved by spin echo techniques and the design goal of the high power target station backscattering spectrometer. Covering this niche in energy resolution will allow systematic studies over the large dynamic range required by many disciplines, such as protein dynamics. (author)

  13. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to integrating time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  14. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gu Songxiang; Kyprianou, Iacovos [Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD (United States); Gupta, Rajiv, E-mail: songxiang.gu@fda.hhs.gov, E-mail: rgupta1@partners.org, E-mail: iacovos.kyprianou@fda.hhs.gov [Massachusetts General Hospital, Boston, MA (United States)

    2011-09-21

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  15. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    International Nuclear Information System (INIS)

    Gu Songxiang; Kyprianou, Iacovos; Gupta, Rajiv

    2011-01-01

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  16. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report, the creation of the digital elevation model of the Olkiluoto area, incorporating a large area of seabed, is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, like contour lines and irregular elevation measurements, were used as source data in the process. The precision and reliability of the available source data varied largely. A digital elevation model (DEM) comprises a representation of the elevation of the surface of the earth in a particular area in digital format. A DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data. A DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape over a time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
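
    A toy version of the two steps described above (thin plate spline interpolation plus Monte Carlo perturbation of the source points) is sketched below using scipy's radial basis function interpolator; the point counts, uncertainties and grid are placeholders, not the Olkiluoto data.

    ```python
    # Hedged sketch: thin plate spline DEM with a Monte Carlo confidence interval.
    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(0)

    # Toy scattered elevation measurements with assumed 1-sigma vertical errors (m).
    x = rng.uniform(0, 1000, 200)
    y = rng.uniform(0, 1000, 200)
    z = 10.0 + 0.01 * x - 0.005 * y + rng.normal(0, 0.3, 200)
    sigma_z = np.full(200, 0.5)

    # Target grid (coarse here; 2.5 m in the actual model).
    gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))

    realisations = []
    for _ in range(100):                        # 1 000 draws in the report
        z_draw = z + rng.normal(0.0, sigma_z)   # perturb each source point
        tps = Rbf(x, y, z_draw, function="thin_plate")
        realisations.append(tps(gx, gy))

    stack = np.stack(realisations)
    dem_mean = stack.mean(axis=0)
    ci95_half_width = 1.96 * stack.std(axis=0)  # per-cell confidence interval
    print(dem_mean.shape, float(ci95_half_width.max()))
    ```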

  17. Evaluation of global fine-resolution precipitation products and their uncertainty quantification in ensemble discharge simulations

    Science.gov (United States)

    Qi, W.; Zhang, C.; Fu, G.; Sweetapple, C.; Zhou, H.

    2016-02-01

    The applicability of six fine-resolution precipitation products, including precipitation radar, infrared, microwave and gauge-based products, using different precipitation computation recipes, is evaluated using statistical and hydrological methods in northeastern China. In addition, a framework quantifying uncertainty contributions of precipitation products, hydrological models, and their interactions to uncertainties in ensemble discharges is proposed. The investigated precipitation products are Tropical Rainfall Measuring Mission (TRMM) products (TRMM3B42 and TRMM3B42RT), Global Land Data Assimilation System (GLDAS)/Noah, Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and a Global Satellite Mapping of Precipitation (GSMAP-MVK+) product. Two hydrological models of different complexities, i.e. a water and energy budget-based distributed hydrological model and a physically based semi-distributed hydrological model, are employed to investigate the influence of hydrological models on simulated discharges. Results show APHRODITE has high accuracy at a monthly scale compared with other products, and GSMAP-MVK+ shows a clear advantage and outperforms TRMM3B42 in relative bias (RB), Nash-Sutcliffe coefficient of efficiency (NSE), root mean square error (RMSE), correlation coefficient (CC), false alarm ratio, and critical success index. These findings could be very useful for validation, refinement, and future development of satellite-based products (e.g. NASA Global Precipitation Measurement). Although large uncertainty exists in heavy precipitation, hydrological models contribute most of the uncertainty in extreme discharges. Interactions between precipitation products and hydrological models can contribute to discharge uncertainty with a magnitude similar to that of the hydrological models. A
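
    The evaluation statistics named above are standard; for reference, a minimal implementation of RB, NSE, RMSE and CC on paired simulated/observed series might look like the sketch below (toy numbers only).

    ```python
    # Reference implementations of the standard evaluation metrics (toy data).
    import numpy as np

    def relative_bias(sim, obs):
        return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

    def nse(sim, obs):
        """Nash-Sutcliffe coefficient of efficiency."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rmse(sim, obs):
        return float(np.sqrt(np.mean((sim - obs) ** 2)))

    def correlation(sim, obs):
        return float(np.corrcoef(sim, obs)[0, 1])

    obs = np.array([1.2, 3.4, 2.2, 5.6, 4.1])   # observed discharge (placeholder)
    sim = np.array([1.0, 3.9, 2.5, 5.0, 4.4])   # simulated discharge (placeholder)
    print(relative_bias(sim, obs), nse(sim, obs), rmse(sim, obs), correlation(sim, obs))
    ```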

  18. Air-sea exchange over Black Sea estimated from high resolution regional climate simulations

    Science.gov (United States)

    Velea, Liliana; Bojariu, Roxana; Cica, Roxana

    2013-04-01

    The Black Sea is an important influence on the climate of the bordering countries, showing cyclogenetic activity (Trigo et al., 1999) and affecting Mediterranean cyclones passing over. As for other seas, standard observations of the atmosphere are limited in time and space, and available observation-based estimates of the air-sea exchange terms present quite large ranges of uncertainty. The reanalysis datasets (e.g. ERA produced by ECMWF) provide promising validation estimates of climatic characteristics against the ones in available climatic data (Schrum et al., 2001), but cannot reproduce some local features due to their relatively coarse horizontal resolution. Detailed and realistic information on smaller-scale processes is expected to be provided by regional climate models, thanks to continuous improvements in physical parameterizations and numerical solutions that afford simulations at high spatial resolution. The aim of the study is to assess the potential of three regional climate models in reproducing known climatological characteristics of air-sea exchange over the Black Sea, as well as to explore the added value of the models compared to the input (reanalysis) data. We employ results of long-term (1961-2000) simulations performed within the ENSEMBLES project (http://ensemblesrt3.dmi.dk/) using the models ETHZ-CLM, CNRM-ALADIN and METO-HadCM, for which the integration domain covers the whole area of interest. The analysis is performed for the entire basin for several variables entering the heat and water budget terms and available as direct output from the models, at seasonal and annual scales. A comparison with independent data (ERA-INTERIM) and findings from other studies (e.g. Schrum et al., 2001) is also presented. References: Schrum, C., Staneva, J., Stanev, E. and Ozsoy, E., 2001: Air-sea exchange in the Black Sea estimated from atmospheric analysis for the period 1979-1993, J. Marine Systems, 31, 3-19; Trigo, I. F., T. D. Davies, and G. R. Bigg (1999): Objective

  19. Using gaps in N-body tidal streams to probe missing satellites

    International Nuclear Information System (INIS)

    Ngan, W. H. W.; Carlberg, R. G.

    2014-01-01

    We use N-body simulations to model the tidal disruption of a star cluster in a Milky-Way-sized dark matter halo, which results in a narrow stream comparable to (but slightly wider than) Pal-5 or GD-1. The mean Galactic dark matter halo is modeled by a spherical Navarro-Frenk-White potential with subhalos predicted by the ΛCDM cosmological model. The distribution and mass function of the subhalos follow the results from the Aquarius simulation. We use a matched filter approach to look for 'gaps' in tidal streams at 12 length scales from 0.1 kpc to 5 kpc, which appear as characteristic dips in the linear densities along the streams. We find that, in addition to the subhalos' perturbations, the epicyclic overdensities (EOs) due to the coherent epicyclic motions of particles in a stream also produce gap-like signals near the progenitor. We measure the gap spectra—the gap formation rates as functions of gap length—due to both subhalo perturbations and EOs, which have not been accounted for together by previous studies. Finally, we project the simulated streams onto the sky to investigate issues when interpreting gap spectra in observations. In particular, we find that gap spectra from low signal-to-noise observations can be biased by the orbital phase of the stream. This indicates that the study of stream gaps will benefit greatly from high-quality data from future missions.
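
    A toy version of the matched-filter idea described above is sketched below: the 1D linear density along the stream is correlated with a zero-mean, gap-shaped template and strong responses are flagged as candidate gaps. The template shape, width and synthetic density are assumptions for illustration, not the paper's filter.

    ```python
    # Illustrative matched filter for gap-like dips in a 1D linear density.
    import numpy as np

    def gap_template(length_bins, depth=1.0):
        """Inverted-Gaussian dip, made zero-sum so a flat density gives ~0 response."""
        x = np.linspace(-3, 3, length_bins)
        dip = -depth * np.exp(-0.5 * x**2)
        return dip - dip.mean()

    def matched_filter_response(density, length_bins):
        template = gap_template(length_bins)
        # 'same' keeps the response aligned with positions along the stream.
        return np.correlate(density - density.mean(), template, mode="same")

    # Synthetic linear density (counts per bin) with one injected gap.
    rng = np.random.default_rng(1)
    density = 10.0 + rng.normal(0, 0.5, 1000)
    density[400:430] -= 4.0
    response = matched_filter_response(density, length_bins=30)
    print("strongest response near bin", int(np.argmax(response)))
    ```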

  20. Comparison of Explicitly Simulated and Downscaled Tropical Cyclone Activity in a High-Resolution Global Climate Model

    Directory of Open Access Journals (Sweden)

    Hirofumi Tomita

    2010-01-01

    Full Text Available The response of tropical cyclone activity to climate change is a matter of great inherent interest and practical importance. Most current global climate models are not, however, capable of adequately resolving tropical cyclones; this has led to the development of downscaling techniques designed to infer tropical cyclone activity from the large-scale fields produced by climate models. Here we compare the statistics of tropical cyclones simulated explicitly in a very high resolution (~14 km grid mesh) global climate model to the results of one such downscaling technique driven by the same global model. This is done for a simulation of the current climate and also for a simulation of a climate warmed by the addition of carbon dioxide. The explicitly simulated and downscaled storms are similarly distributed in space, but the intensity distribution of the downscaled events has a somewhat longer high-intensity tail, owing to the higher resolution of the downscaling model. Both explicitly simulated and downscaled events show large increases in the frequency of events at the high-intensity ends of their respective intensity distributions, but the downscaled storms also show increases in low-intensity events, whereas the explicitly simulated weaker events decline in number. On the regional scale, there are large differences in the responses of the explicitly simulated and downscaled events to global warming. In particular, the power dissipation of downscaled events shows a 175% increase in the Atlantic, while the power dissipation of explicitly simulated events declines there.

  1. Grand Canonical adaptive resolution simulation for molecules with electrons: A theoretical framework based on physical consistency

    Science.gov (United States)

    Delle Site, Luigi

    2018-01-01

    A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with a corresponding definition of numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.

  2. MOLECULAR DYNAMICS SIMULATION OF KINETIC RESOLUTION OF RACEMIC ALCOHOL USING BURKHOLDERIA CEPACIA LIPASE IN ORGANIC SOLVENTS

    Directory of Open Access Journals (Sweden)

    A. C. Mathpati

    2018-03-01

    Full Text Available Lipases, a subclass of hydrolases, have gained a lot of importance as they can catalyze esterification, transesterification and hydrolysis reactions in non-aqueous media. Lipases are also widely used for the kinetic resolution of racemic alcohols into enantiopure compounds. Lipase activity is affected by organic solvents through changes in the conformational rigidity of the enzyme and its active site, or through altered solvation of the transition state. The activity of lipases strongly depends on the logP value of the solvent. Molecular dynamics (MD) can help to understand the effect of solvents on the lipase conformation as well as on the protein-ligand complex. In this work, MD simulations of Burkholderia cepacia lipase (BCL), and of the complexes between BCL and the R and S conformations of the acetylated form of 1-phenylethanol, have been carried out in various organic solvents using GROMACS. The RMSD values were within the range of 0.15 to 0.20 nm and the radius of gyration was found to be within 1.65 to 1.9 nm. Major changes in the B factor compared to the reference structure were observed between residues 60 to 80, 120 to 150 and 240 to 260. Higher unfolding was observed in toluene and diethyl ether compared to hexane and acetonitrile. The R-acetylated complex was found to bind BCL more favorably than the S form. The predicted enantioselectivities were in good agreement with the experimental data.
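
    The two structural descriptors quoted above (RMSD and radius of gyration) are straightforward to compute from trajectory coordinates; a minimal numpy sketch is given below, with placeholder arrays standing in for frames extracted from the GROMACS runs.

    ```python
    # Simple numpy versions of RMSD and radius of gyration (placeholder data).
    import numpy as np

    def rmsd(coords, ref):
        """Root-mean-square deviation (nm), assuming coordinates are already aligned."""
        return float(np.sqrt(np.mean(np.sum((coords - ref) ** 2, axis=1))))

    def radius_of_gyration(coords, masses):
        """Mass-weighted radius of gyration (nm)."""
        com = np.average(coords, axis=0, weights=masses)
        sq = np.sum((coords - com) ** 2, axis=1)
        return float(np.sqrt(np.average(sq, weights=masses)))

    # Tiny example with fake coordinates (nm) and unit masses.
    ref = np.random.default_rng(2).normal(size=(100, 3))
    frame = ref + 0.05 * np.random.default_rng(3).normal(size=(100, 3))
    print(rmsd(frame, ref), radius_of_gyration(frame, np.ones(100)))
    ```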

  3. Transitioning Resolution Responsibility between the Controller and Automation Team in Simulated NextGen Separation Assurance

    Science.gov (United States)

    Cabrall, C.; Gomez, A.; Homola, J.; Hunt, S.; Martin, L.; Mercer, J.; Prevot, T.

    2013-01-01

    As part of an ongoing research effort on separation assurance and functional allocation in NextGen, a controller-in-the-loop study with ground-based automation was conducted at NASA Ames' Airspace Operations Laboratory in August 2012 to investigate the potential impact of introducing self-separating aircraft in progressively advanced NextGen timeframes. From this larger study, the current exploratory analysis of controller-automation interaction styles focuses on the last and most far-term time frame. Measurements were recorded that firstly verified the continued operational validity of this iteration of the ground-based functional allocation automation concept in forecast traffic densities up to 2x that of current day high altitude en-route sectors. Additionally, with greater levels of fully automated conflict detection and resolution as well as the introduction of intervention functionality, objective and subjective analyses showed a range of passive to active controller-automation interaction styles between the participants. Not only did the controllers work with the automation to meet their safety and capacity goals in the simulated future NextGen timeframe, they did so in different ways and with different attitudes of trust/use of the automation. Taken as a whole, the results showed that the prototyped controller-automation functional allocation framework was very flexible and successful overall.

  4. Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)

    2017-04-15

    It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds and winds that are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In those studies, the surface drag generated by subgrid-scale orography is represented by introducing a sink term in the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill of low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. Increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.
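
    Schematically, the sink term mentioned above adds a quadratic drag to the momentum tendencies, with a coefficient that grows with the subgrid terrain variance. The sketch below shows that general form only; the scaling, cap and coefficient values are assumptions, not the WRF implementation of Mass and Ovens (2010).

    ```python
    # Schematic (assumed) form of a subgrid-orography momentum sink term.
    import numpy as np

    def sso_drag_tendency(u, v, var_sso, c_d=3e-6, var_ref=100.0):
        """Return (du/dt, dv/dt) tendencies (m s^-2) from subgrid-orography drag.

        var_sso : subgrid-scale orography variance (m^2); scaling is assumed.
        """
        speed = np.sqrt(u**2 + v**2)
        coeff = c_d * np.minimum(var_sso / var_ref, 10.0)  # capped scaling, assumed
        return -coeff * speed * u, -coeff * speed * v

    # Example: 10-m wind of (8, 2) m/s over rough subgrid terrain.
    print(sso_drag_tendency(np.array(8.0), np.array(2.0), var_sso=450.0))
    ```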

  5. Forecasting wildland fire behavior using high-resolution large-eddy simulations

    Science.gov (United States)

    Munoz-Esparza, D.; Kosovic, B.; Jimenez, P. A.; Anderson, A.; DeCastro, A.; Brown, B.

    2017-12-01

    Wildland fires are responsible for large socio-economic impacts. Fires affect the environment, damage structures, threaten lives, cause health issues, and involve large suppression costs. These impacts can be mitigated via accurate fire-spread forecasts to inform the incident management team. To this end, the state of Colorado is funding the development of the Colorado Fire Prediction System (CO-FPS). The system is based on the Weather Research and Forecasting (WRF) model enhanced with a fire behavior module (WRF-Fire). Realistic representation of wildland fire behavior requires explicit representation of small-scale weather phenomena to properly account for coupled atmosphere-wildfire interactions. Moreover, transport and dispersion of biomass burning emissions from wildfires are controlled by turbulent processes in the atmospheric boundary layer, which are difficult to parameterize and typically lead to large errors when simplified source estimation and injection height methods are used. Therefore, we utilize turbulence-resolving large-eddy simulations at a resolution of 111 m to forecast fire spread and smoke distribution using a coupled atmosphere-wildfire model. This presentation will describe our improvements to the level-set-based fire-spread algorithm in WRF-Fire and an evaluation of the operational system using 12 wildfire events that occurred in Colorado in 2016, as well as other historical fires. In addition, the benefits of explicit representation of turbulence for smoke transport and dispersion will be demonstrated.

  6. Faster-Than-Real-Time Simulation of Lithium Ion Batteries with Full Spatial and Temporal Resolution

    Directory of Open Access Journals (Sweden)

    Sandip Mazumder

    2013-01-01

    A one-dimensional coupled electrochemical-thermal model of a lithium ion battery with full temporal and normal-to-electrode spatial resolution is presented. Only a single pair of electrodes is considered in the model. It is shown that simulation of a lithium ion battery with the inclusion of detailed transport phenomena and electrochemistry is possible with faster-than-real-time compute times. The governing conservation equations of mass, charge, and energy are discretized using the finite volume method and solved using an iterative procedure. The model is first successfully validated against experimental data for both charge and discharge processes in a LixC6-LiyMn2O4 battery. Finally, it is demonstrated for an arbitrary rapidly changing transient load typical of a hybrid electric vehicle drive cycle. The model is able to predict the cell voltage of a 15-minute drive cycle in less than 12 seconds of compute time on a laptop with a 2.33 GHz Intel Pentium 4 processor.
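
    The record above describes a finite-volume discretization of coupled conservation equations solved iteratively. As rough orientation only, the sketch below shows a backward-Euler finite-volume step for a single 1-D diffusion-type conservation law with zero-flux boundaries; it illustrates the discretization style in general, not the paper's electrochemical-thermal model, and all names and values are assumptions.

      import numpy as np

      def fvm_step(u, D, dx, dt):
          """One implicit (backward-Euler) finite-volume step with zero-flux ends."""
          n = len(u)
          r = D * dt / dx**2
          A = np.zeros((n, n))
          for i in range(n):
              A[i, i] = 1.0 + 2.0 * r
              if i > 0:
                  A[i, i - 1] = -r
              if i < n - 1:
                  A[i, i + 1] = -r
          # Zero-flux (Neumann) boundaries: the missing neighbour cell is mirrored.
          A[0, 0] -= r
          A[-1, -1] -= r
          return np.linalg.solve(A, u)

      u = np.linspace(300.0, 320.0, 50)   # hypothetical initial temperature profile (K)
      for _ in range(100):
          u = fvm_step(u, D=1e-6, dx=1e-5, dt=0.1)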

  7. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high-resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts do exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess e.g., global soil carbon stocks. However, they do not allow for simulation runs with e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: A model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property

  8. Air-Sea Interaction Processes in Low and High-Resolution Coupled Climate Model Simulations for the Southeast Pacific

    Science.gov (United States)

    Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.

    2017-12-01

    The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients and extensive but geometrically thin stratocumulus decks, turns the Southeast Pacific (SEP) into a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1° oceanic and 0.5° atmospheric) and (1.125° oceanic and 0.9° atmospheric). The SST error growth in the first six days of integration (fast errors) and that resulting from model drift (saturated errors) are assessed and compared towards evaluating the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3°C) near the continental borders and negative offshore (-0.1°C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and more spatially concentrated. This suggests that the processes driving the errors become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one due to higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values up to +4°C. These are accompanied by a damping of upwelling and a decrease in cloud cover. The high- and low-resolution models presented notable differences in how SST

  9. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    Science.gov (United States)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system in order to enable global high-resolution weather simulation and forecasting at CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. Full representation of the three-dimensional Coriolis force is considered in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region is set to ensure computation of cross-boundary transport and updating of Dirichlet-type boundary conditions in the solution process of the elliptical equations with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, and development of the Rossby-Haurwitz wave and a baroclinic wave, are carried out, and excellent computational stability and accuracy of the dynamical core have been confirmed. After implementation of the physical processes of long- and short-wave radiation, cumulus convection, micro-physical transformation of water substances and turbulent processes in the planetary boundary layer, including surface-layer vertical flux parameterization, a long-term run of the model is carried out under an idealized aqua-planet configuration to test the model physics and the model's ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like structure of circulation. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical zonal westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The SST and solar insolation being symmetric about the equator enhance the ITCZ and tropical precipitation, which is concentrated in the tropical region. Additional analysis and

  10. Effects of Resolution on the Simulation of Boundary-layer Clouds and the Partition of Kinetic Energy to Subgrid Scales

    Directory of Open Access Journals (Sweden)

    Anning Cheng

    2010-02-01

    Seven boundary-layer cloud cases are simulated with the UCLA-LES (University of California, Los Angeles large-eddy simulation) model with different horizontal and vertical gridspacing to investigate how the results depend on gridspacing. Some variables are more sensitive to horizontal gridspacing, while others are more sensitive to vertical gridspacing, and still others are sensitive to both horizontal and vertical gridspacings with similar or opposite trends. For cloud-related variables having the opposite dependence on horizontal and vertical gridspacings, changing the gridspacing proportionally in both directions gives the appearance of convergence. In this study, we mainly discuss the impact of subgrid-scale (SGS) kinetic energy (KE) on the simulations with coarsening of horizontal and vertical gridspacings. A running-mean operator is used to separate the KE of the high-resolution benchmark simulations into that of the resolved scales of coarse-resolution simulations and that of the SGSs. The diagnosed SGS KE is compared with that parameterized by the Smagorinsky-Lilly SGS scheme at various gridspacings. It is found that the parameterized SGS KE for the coarse-resolution simulations is usually underestimated while the resolved KE is unrealistically large, compared to the benchmark simulations. However, the sum of resolved and SGS KEs is about the same for simulations with various gridspacings. The partitioning of SGS and resolved heat and moisture transports is consistent with that of SGS and resolved KE, which means that the parameterized transports are underestimated but the resolved-scale transports are overestimated. On the whole, energy shifts to large scales as the horizontal gridspacing becomes coarse, hence the size of clouds and the resolved circulation increase, and the clouds become more stratiform-like with an increase in cloud fraction, cloud liquid-water path and surface precipitation; when coarse vertical gridspacing is used, cloud sizes do not
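
    For readers unfamiliar with the scale separation described above, the following minimal sketch applies a running-mean filter to a benchmark velocity sample and splits its kinetic energy into a resolved part and a subgrid residual. The filter width, field names and diagnostics are illustrative assumptions, not the UCLA-LES analysis itself.

      import numpy as np
      from scipy.ndimage import uniform_filter1d

      def split_ke(u, coarse_factor):
          """Return (resolved KE, subgrid KE) per point for a 1-D velocity sample."""
          u_resolved = uniform_filter1d(u, size=coarse_factor, mode="wrap")
          u_sgs = u - u_resolved
          return 0.5 * u_resolved**2, 0.5 * u_sgs**2

      u = np.random.default_rng(0).standard_normal(512)   # hypothetical benchmark velocity
      ke_res, ke_sgs = split_ke(u, coarse_factor=8)
      print(ke_res.mean(), ke_sgs.mean())                  # compare the two contributions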

  11. Eastern equatorial Pacific sea surface temperature annual cycle in the Kiel climate model: simulation benefits from enhancing atmospheric resolution

    Science.gov (United States)

    Wengel, C.; Latif, M.; Park, W.; Harlaß, J.; Bayr, T.

    2018-05-01

    A long-standing difficulty of climate models is to capture the annual cycle (AC) of eastern equatorial Pacific (EEP) sea surface temperature (SST). In this study, we first examine the EEP SST AC in a set of integrations of the coupled Kiel Climate Model, in which only atmosphere model resolution differs. When employing coarse horizontal and vertical atmospheric resolution, significant biases in the EEP SST AC are observed. These are reflected in an erroneous timing of the cold tongue's onset and termination as well as in an underestimation of the boreal spring warming amplitude. A large portion of these biases is linked to a wrong simulation of zonal surface winds, which can be traced back to precipitation biases on both sides of the equator and an erroneous low-level atmospheric circulation over land. Part of the SST biases is also related to shortwave radiation biases associated with cloud cover biases. Both the wind and cloud cover biases are inherent to the atmospheric component, as shown by companion uncoupled atmosphere model integrations forced by observed SSTs. Enhancing atmosphere model resolution, horizontally and vertically, markedly reduces zonal wind and cloud cover biases in coupled as well as uncoupled mode and generally improves simulation of the EEP SST AC. Enhanced atmospheric resolution reduces convection biases and improves the simulation of surface winds over land. Analysis of a subset of models from the Coupled Model Intercomparison Project phase 5 (CMIP5) reveals that very similar mechanisms are at work in driving EEP SST AC biases in these models.

  12. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, 4 of which applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very

  13. Sensitivity of ocean model simulation in the coastal ocean to the resolution of the meteorological forcing

    Science.gov (United States)

    Chen, Feng; Shapiro, Georgy; Thain, Richard

    2013-04-01

    The quality of ocean simulations depends on a number of factors, such as approximations in the governing equations, errors introduced by the numerical scheme, uncertainties in input parameters, and atmospheric forcing. The identification of relations between the uncertainties in input and output data is still a challenge for the development of numerical models. The impacts of ocean variables on ocean models are still not well known (e.g., Kara et al., 2009). Given the considerable importance of the atmospheric forcing to air-sea interaction, it is essential that researchers in ocean modelling have a good understanding of how sensitive the model results are to variations in the atmospheric forcing, which is beneficial to the development of ocean models. It also provides a proper way to choose the atmospheric forcing in ocean modelling applications. Our previous study (Shapiro et al., 2011) has shown that the basin-wide circulation pattern and the temperature structure in the Black Sea produced by the same model are significantly dependent on the source of the meteorological input, giving remarkably different responses. For the purpose of this study we have chosen the Celtic Sea, where high-resolution meteo data have been available from the UK Met Office since 2006. The Celtic Sea is a tidally dominated basin, with the tidal stream amplitude varying from 0.25 m/s in the southwest to 2 m/s in the Bristol Channel. It is also filled with mesoscale eddies which contribute to the formation of the residual (tidally averaged) circulation pattern (Young et al., 2003). The sea is strongly stratified from April to November, which adds to the formation of density-driven currents. In this paper we analyse how sensitive the model output is to variations in the spatial resolution of the meteorological forcing, using low (1.6°) and high (0.11°) resolution meteo forcing, giving the quantitative relation between variations of the forcing and the resulting differences in model results, as well as

  14. High-resolution simulations of the thermophysiological effects of human exposure to 100 MHz RF energy

    International Nuclear Information System (INIS)

    Nelson, David A; Curran, Allen R; Nyberg, Hans A; Marttila, Eric A; Mason, Patrick A; Ziriax, John M

    2013-01-01

    Human exposure to radio frequency (RF) electromagnetic energy is known to result in tissue heating and can raise temperatures substantially in some situations. Standards for safe exposure to RF do not, however, reflect bio-heat transfer considerations. Thermoregulatory function (vasodilation, sweating) may mitigate RF heating effects in some environments and exposure scenarios. Conversely, a combination of an extreme environment (high temperature, high humidity), high activity levels and thermally insulating garments may exacerbate RF exposure and pose a risk of unsafe temperature elevation, even for power densities which might be acceptable in a normothermic environment. A high-resolution thermophysiological model, incorporating a heterogeneous tissue model of a seated adult, has been developed and used to replicate a series of whole-body exposures at a frequency (100 MHz) which approximates that of human whole-body resonance. Exposures were simulated at three power densities (4, 6 and 8 mW cm⁻²) plus a sham exposure and at three different ambient temperatures (24, 28 and 31 °C). The maximum hypothalamic temperature increase over the course of a 45 min exposure was 0.28 °C and occurred in the most extreme conditions (T_amb = 31 °C, PD = 8 mW cm⁻²). Skin temperature increases attributable to RF exposure were modest, with the exception of a 'hot spot' in the vicinity of the ankle where skin temperatures exceeded 39 °C. Temperature increases in internal organs and tissues were small, except for connective tissue and bone in the lower leg and foot. Temperature elevation also was noted in the spinal cord, consistent with a hot spot previously identified in the literature.
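
    Thermophysiological models of this kind typically solve a Pennes-type bioheat equation; the schematic form below is given for orientation only, with symbols assumed here rather than taken from the record:

      \[
      \rho c \,\frac{\partial T}{\partial t}
        = \nabla\!\cdot\!\left(k\,\nabla T\right)
        + \rho_b c_b\,\omega_b\,(T_a - T)
        + Q_m + \rho\,\mathrm{SAR},
      \]

    where \(\rho\), \(c\) and \(k\) are the tissue density, specific heat and thermal conductivity, \(\omega_b\) the blood perfusion rate, \(T_a\) the arterial blood temperature, \(Q_m\) the metabolic heat production, and \(\rho\,\mathrm{SAR}\) the locally deposited RF power.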

  15. Influence of grid resolution in fluid-model simulation of nanosecond dielectric barrier discharge plasma actuator

    Science.gov (United States)

    Hua, Weizhuo; Fukagata, Koji

    2018-04-01

    Two-dimensional numerical simulation of a surface dielectric barrier discharge (SDBD) plasma actuator, driven by a nanosecond voltage pulse, is conducted. A special focus is laid upon the influence of grid resolution on the computational result. It is found that the computational result is not very sensitive to the streamwise grid spacing, whereas the wall-normal grid spacing has a critical influence. In particular, the computed propagation velocity changes discontinuously around a wall-normal grid spacing of about 2 μm due to a qualitative change of the discharge structure. The present result suggests that a computational grid finer than that used in most previous studies is required to correctly capture the structure and dynamics of the streamer: when a positive nanosecond voltage pulse is applied to the upper electrode, a streamer forms in the vicinity of the upper electrode and propagates along the dielectric surface with a maximum propagation velocity of 2 × 10⁸ cm/s, and a gap with low electron and ion density (i.e., a plasma sheath) exists between the streamer and the dielectric surface. The difference between the results obtained using the finer and the coarser grid is discussed in detail in terms of the electron transport at a position near the surface. When the finer grid is used, the low electron density near the surface is caused by the absence of an ionization avalanche: in that region, the electrons generated by ionization are compensated by the drift-diffusion flux. In contrast, when the coarser grid is used, the underestimated drift-diffusion flux cannot compensate the electrons generated by ionization, which leads to an incorrect increase of the electron density.

  16. Simulation study of spatial resolution in phase-contrast X-ray imaging with Takagi-Taupin equation

    International Nuclear Information System (INIS)

    Koyama, Ichiro; Momose, Atsushi

    2003-01-01

    To evaluate the attainable spatial resolution of phase-contrast X-ray imaging using an LLL X-ray interferometer with a thin crystal wafer, a computer simulation study with the Takagi-Taupin equation was performed. The modulation transfer function of the wafer for the X-ray phase was evaluated. For a polyester film whose thickness is 0.1 mm, it was concluded that the spatial resolution can be improved up to 3 μm by thinning the wafer, under our experimental condition

  17. North Atlantic Tropical Cyclones: historical simulations and future changes with the new high-resolution Arpege AGCM.

    Science.gov (United States)

    Pilon, R.; Chauvin, F.; Palany, P.; Belmadani, A.

    2017-12-01

    A new version of the variable high-resolution Meteo-France Arpege atmospheric general circulation model (AGCM) has been developed for tropical cyclone (TC) studies, with a focus on the North Atlantic basin, where the model horizontal resolution is 15 km. Ensemble historical AMIP (Atmospheric Model Intercomparison Project)-type simulations (1965-2014) and future projections (2020-2080) under the IPCC (Intergovernmental Panel on Climate Change) representative concentration pathway (RCP) 8.5 scenario have been produced. A TC-like vortex tracking algorithm is used to investigate TC activity and variability. TC frequency, genesis, geographical distribution and intensity are examined. Historical simulations are compared to best-track and reanalysis datasets. Model TC frequency is generally realistic but tends to be too high during the first decade of the historical simulations. Biases appear to originate from both the tracking algorithm and the model climatology. Nevertheless, the model is able to simulate extremely well intense TCs corresponding to category 5 hurricanes in the North Atlantic, where grid resolution is highest. Interaction between developing TCs and vertical wind shear is shown to be a contributing factor for TC variability. Future changes in TC activity and properties are also discussed.

  18. High-resolution simulations of cylindrical void collapse in energetic materials: Effect of primary and secondary collapse on initiation thresholds

    Science.gov (United States)

    Rai, Nirmal Kumar; Schmidt, Martin J.; Udaykumar, H. S.

    2017-04-01

    Void collapse in energetic materials leads to hot spot formation and enhanced sensitivity. Much recent work has been directed towards simulation of collapse-generated reactive hot spots. The resolution of voids in calculations to date has varied as have the resulting predictions of hot spot intensity. Here we determine the required resolution for reliable cylindrical void collapse calculations leading to initiation of chemical reactions. High-resolution simulations of collapse provide new insights into the mechanism of hot spot generation. It is found that initiation can occur in two different modes depending on the loading intensity: Either the initiation occurs due to jet impact at the first collapse instant or it can occur at secondary lobes at the periphery of the collapsed void. A key observation is that secondary lobe collapse leads to large local temperatures that initiate reactions. This is due to a combination of a strong blast wave from the site of primary void collapse and strong colliding jets and vortical flows generated during the collapse of the secondary lobes. The secondary lobe collapse results in a significant lowering of the predicted threshold for ignition of the energetic material. The results suggest that mesoscale simulations of void fields may suffer from significant uncertainty in threshold predictions because unresolved calculations cannot capture the secondary lobe collapse phenomenon. The implications of this uncertainty for mesoscale simulations are discussed in this paper.

  19. Changes in Moisture Flux Over the Tibetan Plateau During 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-01

    Net precipitation (precipitation minus evapotranspiration, P-E) changes from a high-resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibetan Plateau (TP) and compared to the Global Land Data Assimilation System (GLDAS) product. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. The high-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. Improvement in simulating precipitation changes at high elevations contributes most to the improved P-E changes. The high-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in the P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis reveals the contrasting convergence/divergence changes between the northwestern and southeastern TP, and feedback through latent heat release, as an important mechanism leading to the mean P-E changes in the TP.
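
    For orientation, a column moisture budget decomposition of the kind referred to above is commonly written as follows (a schematic form with assumed notation, not quoted from the study):

      \[
      \overline{P - E} \approx -\nabla\cdot\frac{1}{g}\int_0^{p_s}\overline{q\,\mathbf{u}}\;dp,
      \qquad
      \delta(P - E) \approx
        \underbrace{-\nabla\cdot\frac{1}{g}\int_0^{p_s}\bar{q}\,\delta\bar{\mathbf{u}}\;dp}_{\text{dynamic}}
        \underbrace{-\,\nabla\cdot\frac{1}{g}\int_0^{p_s}\delta\bar{q}\,\bar{\mathbf{u}}\;dp}_{\text{thermodynamic}}
        \underbrace{-\,\nabla\cdot\frac{1}{g}\int_0^{p_s}\delta\!\left(\overline{q'\mathbf{u}'}\right)dp}_{\text{transient eddies}},
      \]

    where overbars denote time means, primes the transient departures, and \(\delta\) the change between the two periods being compared.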

  20. Idealized climate change simulations with a high-resolution physical model: HadGEM3-GC2

    Science.gov (United States)

    Senior, Catherine A.; Andrews, Timothy; Burton, Chantelle; Chadwick, Robin; Copsey, Dan; Graham, Tim; Hyder, Pat; Jackson, Laura; McDonald, Ruth; Ridley, Jeff; Ringer, Mark; Tsushima, Yoko

    2016-06-01

    Idealized climate change simulations with a new physical climate model, HadGEM3-GC2, from the Met Office Hadley Centre (MOHC) are presented and contrasted with the earlier MOHC model, HadGEM2-ES. The role of atmospheric resolution is also investigated. The Transient Climate Response (TCR) is 1.9 K/2.1 K at N216/N96 and the Effective Climate Sensitivity (ECS) is 3.1 K/3.2 K at N216/N96. These are substantially lower than in HadGEM2-ES (TCR: 2.5 K; ECS: 4.6 K), arising from a combination of changes in the size of climate feedbacks. While the change in the net cloud feedback between HadGEM3 and HadGEM2 is relatively small, there is a change in sign of its longwave component and a strengthening of its shortwave component. At a global scale, there is little impact of the increase in atmospheric resolution on the future climate change signal, and even at a broad regional scale many features are robust, including tropical rainfall changes; however, there are some significant exceptions. For the North Atlantic and western Europe, the tripolar pattern of winter storm changes found in most CMIP5 models is little impacted by resolution, but for the most intense storms there is a larger percentage increase in number at higher resolution than at lower resolution. Arctic sea-ice sensitivity shows a larger dependence on resolution than on atmospheric physics.

  1. Impact of model resolution on simulated wind, drifting snow and surface mass balance in Terre Adélie, East Antarctica

    NARCIS (Netherlands)

    Lenaerts, J.T.M.|info:eu-repo/dai/nl/314850163; van den Broeke, M.R.|info:eu-repo/dai/nl/073765643; Scarchilli, C.; Agosta, C.

    2012-01-01

    This paper presents the impact of model resolution on the simulated wind speed, drifting snow climate and surface mass balance (SMB) of Terre Adélie and its surroundings, East Antarctica. We compare regional climate model simulations at 27 and 5.5 km resolution for the year 2009. The wind speed

  2. Local-scale high-resolution atmospheric dispersion model using large-eddy simulation. LOHDIM-LES

    International Nuclear Information System (INIS)

    Nakayama, Hiromasa; Nagai, Haruyasu

    2016-03-01

    We developed the LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation (LOHDIM-LES). This dispersion model is based on LES, which is effective in reproducing the unsteady behaviors of turbulent flows and plume dispersion. The basic equations are the continuity equation, the Navier-Stokes equations, and the scalar conservation equation. Buildings and local terrain variability are resolved by high-resolution grids with spacings of a few meters, and their turbulent effects are represented by an immersed boundary method. In simulating atmospheric turbulence, boundary-layer flows are generated by a recycling turbulent inflow technique in a driver region set up upstream of the main analysis region. These turbulent inflow data are imposed at the inlet of the main analysis region. By this approach, the LOHDIM-LES can provide detailed information on wind velocities and plume concentration in the investigated area.

  3. VAST PLANES OF SATELLITES IN A HIGH-RESOLUTION SIMULATION OF THE LOCAL GROUP: COMPARISON TO ANDROMEDA

    International Nuclear Information System (INIS)

    Gillet, N.; Ocvirk, P.; Aubert, D.; Knebe, A.; Yepes, G.; Libeskind, N.; Gottlöber, S.; Hoffman, Y.

    2015-01-01

    We search for vast planes of satellites (VPoS) in a high-resolution simulation of the Local Group performed by the CLUES project, which significantly improves the resolution of previous similar studies. We use a simple method for detecting planar configurations of satellites, and validate it on the known plane of M31. We implement a range of prescriptions for modeling the satellite populations, roughly reproducing the variety of recipes used in the literature, and investigate the occurrence and properties of planar structures in these populations. The structure of the simulated satellite systems is strongly non-random and contains planes of satellites, predominantly co-rotating, with, in some cases, sizes comparable to the plane observed in M31 by Ibata et al. However, the latter is slightly richer in satellites, slightly thinner, and has stronger co-rotation, which makes it stand out as overall more exceptional than the simulated planes when compared to a random population. Although the simulated planes we find are generally dominated by one real structure forming their backbone, they are also partly fortuitous and are thus not kinematically coherent structures as a whole. Provided that the simulated and observed planes of satellites are indeed of the same nature, our results suggest that the VPoS of M31 is not a coherent disk and that one-third to one-half of its satellites must have large proper motions perpendicular to the plane.

  4. An efficient non-hydrostatic dynamical core for high-resolution simulations down to the urban scale

    International Nuclear Information System (INIS)

    Bonaventura, L.; Cesari, D.

    2005-01-01

    Numerical simulations of idealized stratified flows over obstacles at different spatial scales demonstrate the very general applicability and the parallel efficiency of a new non-hydrostatic dynamical core for the simulation of mesoscale flows over complex terrain

  5. Technique for Simulation of Black Sea Circulation with Increased Resolution in the Area of the IO RAS Polygon

    Science.gov (United States)

    Gusev, A. V.; Zalesny, V. B.; Fomin, V. V.

    2017-11-01

    A numerical technique is presented for simulating the hydrophysical fields of the Black Sea on a variable-step grid with refinement in the area of the IO RAS polygon. The model primitive equations are written in spherical coordinates with an arbitrary arrangement of the poles. In order to increase the horizontal resolution of the coastal zone in the area of the IO RAS polygon in the northeastern part of the sea near Gelendzhik, one of the poles is placed at a land point (38.35° E, 44.75° N). The model horizontal resolution varies from 150 m in the area of the IO RAS polygon to 4.6 km in the southwestern part of the Black Sea. The numerical technique makes it possible to simulate the large-scale structure of Black Sea circulation as well as the meso- and submesoscale dynamics of the coastal zone. In order to compute the atmospheric forcing, the results of the regional climate model WRF with a resolution of about 10 km in space and 1 h in time are used. In order to demonstrate the technique, Black Sea hydrophysical fields for 2011-2012 and a passive tracer transport representing the self-cleaning of Gelendzhik Bay in July 2012 are simulated.
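
    The grid refinement above relies on rotating the coordinate pole to an arbitrary geographic point. A minimal sketch of that idea is given below; the rotation convention and function names are assumptions for illustration, not the model's own code.

      import numpy as np

      def rotate_to_new_pole(lon_deg, lat_deg, pole_lon_deg, pole_lat_deg):
          """Return (lon, lat) in a spherical system whose north pole sits at the
          geographic point (pole_lon_deg, pole_lat_deg)."""
          lon, lat = np.radians(lon_deg), np.radians(lat_deg)
          plon, plat = np.radians(pole_lon_deg), np.radians(pole_lat_deg)
          # Cartesian unit vector of the input point, measured from the pole meridian.
          x = np.cos(lat) * np.cos(lon - plon)
          y = np.cos(lat) * np.sin(lon - plon)
          z = np.sin(lat)
          # Rotate about the y-axis so the chosen point becomes the new pole.
          ang = np.pi / 2 - plat
          xr = x * np.cos(ang) - z * np.sin(ang)
          zr = np.clip(x * np.sin(ang) + z * np.cos(ang), -1.0, 1.0)
          return np.degrees(np.arctan2(y, xr)), np.degrees(np.arcsin(zr))

      # The chosen pole point itself maps to rotated latitude 90 degrees.
      print(rotate_to_new_pole(38.35, 44.75, 38.35, 44.75))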

  6. Recommended aquifer grid resolution for E-Area PA revision transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2018-01-03

    This memorandum addresses portions of Section 3.5.2 of SRNL (2016) by recommending horizontal and vertical grid resolution for aquifer transport, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.

  7. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    OpenAIRE

    C. M. R. Mateo; D. Yamazaki; H. Kim; A. Champathong; J. Vaze; T. Oki

    2017-01-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development...

  8. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    OpenAIRE

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-01-01

    Global-scale River Models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representation of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction,...

  9. Virialization in N-body models of the expanding universe. I. Isolated pairs

    International Nuclear Information System (INIS)

    Evrard, A.E.; Yahil, A. (Institute of Astronomy, University of Cambridge)

    1985-01-01

    The degree of virialization of isolated pairs of galaxies is investigated in the N-body simulations of Efstathiou and Eastwood for open (Ω₀ = 0.1) and critical (Ω₀ = 1.0) universes, utilizing the three-dimensional information available for both position and velocity. Roughly half of the particles in the models form isolated pairs whose dynamics is dominated by their own two-body force. Three-quarters or more of these pairs are bound, and this ensemble of bound isolated pairs is found to yield excellent mass estimates upon application of the virial theorem. Contamination from unbound pairs introduces error factors smaller than 2 in the mass estimates, and these errors can be corrected by simple methods. Orbits of bound pairs are highly eccentric, but this does not lead to serious selection effects in orbital phases, since these are uniformly distributed. The relative velocity of these pairs of mass points shows a Keplerian falloff with separation, contrary to observational evidence for real galaxies. All the above results are independent of the value of Ω₀, but may be sensitive to initial conditions and the point-mass nature of the particles
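
    As a schematic reminder (notation assumed here, not quoted from the record), the two-body virial theorem underlying such mass estimates reads

      \[
      2\langle T\rangle + \langle U\rangle = 0
      \;\Longrightarrow\;
      \langle v^2\rangle = G M \left\langle \tfrac{1}{r} \right\rangle
      \;\Longrightarrow\;
      M \sim \frac{\langle v^2\rangle}{G\,\langle 1/r\rangle} \approx \frac{v^{2} r}{G},
      \]

    where \(v\) and \(r\) are the relative velocity and separation of the pair, \(M\) its total mass, and the last step approximates \(\langle 1/r\rangle\) by the inverse of the observed separation.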

  10. Analysis of a high-resolution regional climate simulation for Alpine temperature. Validation and influence of the NAO

    Energy Technology Data Exchange (ETDEWEB)

    Proemmel, K. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung

    2008-11-06

    To determine whether the increase in resolution of climate models improves the representation of climate is a crucial topic in regional climate modelling. An improvement over coarser-scale models is expected especially in areas with complex orography or along coastlines. However, some studies have shown no clear added value for regional climate models. In this study a high-resolution regional climate model simulation performed with REMO over the period 1958-1998 is analysed for 2 m temperature over the orographically complex European Alps and their surroundings, called the Greater Alpine Region (GAR). The model setup is in hindcast mode, meaning that the simulation is driven with perfect boundary conditions by the ERA40 reanalysis through prescribing the values at the lateral boundaries and spectral nudging of the large-scale wind field inside the model domain. The added value is analysed between the regional climate simulation with a resolution of 1/6° and the driving reanalysis with a resolution of 1.125°. Before analysing the added value, both the REMO simulation and the ERA40 reanalysis are validated against different station datasets of monthly and daily mean 2 m temperature. The largest dataset is the dense, homogenised and quality-controlled HISTALP dataset covering the whole GAR, which gave the opportunity for the validation undertaken in this study. The temporal variability of temperature, as quantified by correlation, is well represented by both REMO and ERA40. However, both show considerable biases. The REMO bias reaches 3 K in summer in regions known to experience a problem with summer drying in a number of regional models. In winter the bias is strongly influenced by the choice of the temperature lapse rate, which is applied to compare grid box and station data at different altitudes, and has the strongest influence on inner Alpine subregions where the altitude differences are largest. By applying a constant lapse rate the REMO bias in winter in the high

  11. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    Science.gov (United States)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  12. Atlantic hurricanes and associated insurance loss potentials in future climate scenarios: limitations of high-resolution AGCM simulations

    Directory of Open Access Journals (Sweden)

    Thomas F. Stocker

    2012-01-01

    Potential future changes in tropical cyclone (TC) characteristics are among the more serious regional threats of global climate change. Therefore, a better understanding is required of how anthropogenic climate change may affect TCs and how these changes translate into socio-economic impacts. Here, we apply a TC detection and tracking method that was developed for ERA-40 data to time-slice experiments of two atmospheric general circulation models, namely the fifth version of the European Centre Hamburg model (MPI, Hamburg, Germany, T213) and the Japan Meteorological Agency/Meteorological Research Institute model (MRI, Tsukuba City, Japan, TL959). For each model, two climate simulations are available: a control simulation for present-day conditions to evaluate the model against observations, and a scenario simulation to assess future changes. The evaluation of the control simulations shows that the number of intense storms is underestimated due to the model resolution. To overcome this deficiency, simulated cyclone intensities are scaled to the best-track data, leading to a better representation of the TC intensities. Both models project an increased number of major hurricanes and modified trajectories in their scenario simulations. These changes have an effect on the projected loss potentials. However, these state-of-the-art models still yield contradicting results, and therefore they are not yet suitable to provide robust estimates of losses due to uncertainties in simulated hurricane intensity, location and frequency.

  13. Feasibility of High-Resolution Soil Erosion Measurements by Means of Rainfall Simulations and SfM Photogrammetry

    Directory of Open Access Journals (Sweden)

    Phoebe Hänsel

    2016-11-01

    The silty soils of the intensively used agricultural landscape of the Saxon loess province, eastern Germany, are very prone to soil erosion, mainly caused by water. Rainfall simulations, and increasingly also structure-from-motion (SfM) photogrammetry, are used as methods in soil erosion research not only to assess soil erosion by water but also to quantify soil loss. This study aims to validate SfM-photogrammetry-based soil loss estimates against rainfall simulation measurements. Rainfall simulations were performed at three agricultural sites in central Saxony. Besides the measured runoff and soil loss by sampling (in mm), terrestrial images were taken of the plots with digital cameras before and after the rainfall simulation. Subsequently, SfM photogrammetry was used to reconstruct soil surface changes due to soil erosion in terms of high-resolution digital elevation models (DEMs) for the pre- and post-event (resolution 1 × 1 mm). By multi-temporal change detection, the digital elevation model of difference (DoD) and an averaged soil loss (in mm) are obtained, which was compared to the soil loss by sampling. Soil loss by DoD was higher than soil loss by sampling. The SfM-based soil loss estimation also included a comparison of three different ground control point (GCP) approaches, revealing that the most complex one delivers the most reliable soil loss by DoD. Additionally, soil bulk density changes and splash erosion beyond the plot were measured during the rainfall simulation experiments in order to separate these processes and their associated surface changes from the soil loss by DoD. Splash was negligibly small, whereas higher soil densities after the rainfall simulations indicated soil compaction. After accounting for the calculated soil surface changes due to soil compaction, the soil loss by DoD reached approximately the same value as the soil loss by rainfall simulation.
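
    The DoD-based soil-loss estimate described above amounts to differencing the pre- and post-event DEMs and averaging over the plot, optionally correcting for the separately measured compaction. The sketch below illustrates that calculation under assumed array names and units; it is not the authors' processing chain.

      import numpy as np

      def mean_soil_loss_mm(dem_pre, dem_post, compaction_mm=0.0):
          """Average surface lowering (mm) over a plot, optionally corrected for
          the bulk-density (compaction) change measured separately."""
          dod = dem_post - dem_pre              # negative values = surface lowering
          return -np.mean(dod) - compaction_mm

      rng = np.random.default_rng(1)
      dem_pre = rng.normal(100.0, 0.5, (200, 200))            # hypothetical pre-event DEM (mm)
      dem_post = dem_pre - rng.uniform(0.0, 0.4, (200, 200))  # hypothetical post-event DEM
      print(mean_soil_loss_mm(dem_pre, dem_post, compaction_mm=0.05))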

  14. The Impact of High-Resolution Sea Surface Temperatures on the Simulated Nocturnal Florida Marine Boundary Layer

    Science.gov (United States)

    LaCasse, Katherine M.; Splitt, Michael E.; Lazarus, Steven M.; Lapenta, William M.

    2008-01-01

    High- and low-resolution sea surface temperature (SST) analysis products are used to initialize the Weather Research and Forecasting (WRF) Model for May 2004 for short-term forecasts over Florida and surrounding waters. Initial and boundary conditions for the simulations were provided by a combination of observations, large-scale model output, and analysis products. The impact of using a 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) SST composite on the subsequent evolution of the marine atmospheric boundary layer (MABL) is assessed through simulation comparisons and limited validation. Model results are presented for individual simulations, as well as for aggregates of easterly- and westerly-dominated low-level flows. The simulation comparisons show that the use of MODIS SST composites results in enhanced convergence zones, earlier and more intense horizontal convective rolls, and an increase in precipitation as well as a change in precipitation location. Validation of 10-m winds with buoys shows a slight improvement in wind speed. The most significant results of this study are that 1) vertical wind stress divergence and pressure gradient accelerations across the Florida Current region vary in importance as a function of flow direction and stability and 2) the warmer Florida Current in the MODIS product transports heat vertically and downwind of this heat source, modifying the thermal structure and the MABL wind field primarily through pressure gradient adjustments.

  15. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    Science.gov (United States)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

    The fastest supercomputer in the world, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, the CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° × 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multiple Program Multiple Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).

  16. The Quantum N-Body Problem and the Auxiliary Field Method

    International Nuclear Information System (INIS)

    Semay, C.; Buisseret, F.; Silvestre-Brac, B.

    2011-01-01

    Approximate analytical energy formulas for N-body semirelativistic Hamiltonians with one- and two-body interactions are obtained within the framework of the auxiliary field method. We first review the method in the case of nonrelativistic two-body problems. A general procedure is then given for N-body systems and applied to the case of baryons in the large-N_c limit.
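
    For context, the class of Hamiltonians treated by such an approach can be written schematically as (a generic form with assumed notation, not quoted from the record)

      \[
      H = \sum_{i=1}^{N}\sqrt{\mathbf{p}_i^{\,2} + m_i^{2}}
        + \sum_{i=1}^{N} U(|\mathbf{r}_i|)
        + \sum_{i<j}^{N} V(|\mathbf{r}_i - \mathbf{r}_j|),
      \]

    the auxiliary field method replacing the square roots and the potentials by more tractable (typically quadratic) forms whose auxiliary parameters are eliminated at the end of the calculation.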

  17. Facial identification in very low-resolution images simulating prosthetic vision.

    Science.gov (United States)

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge images was weighted as 50% (mode 2), 75% (mode 3) and 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that the subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appeared to contribute to intermediate-stage visual prostheses.
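
    A minimal sketch of the mode-2/3/4 processing described above is given below: Sobel edges are computed, both images are block-averaged to mimic a phosphene array, and the weighted edge image is subtracted. Function names, the contrast-enhancement step and the block size are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy import ndimage

      def pixelize(img, block):
          """Block-average an image to mimic a coarse phosphene array."""
          h = (img.shape[0] // block) * block
          w = (img.shape[1] // block) * block
          img = img[:h, :w]
          return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

      def edge_enhanced_phosphenes(gray, block=16, edge_weight=0.75):
          """edge_weight 0.5/0.75/1.0 correspond to the paper's modes 2/3/4 (assumed)."""
          # Contrast enhancement: a simple min-max stretch as a stand-in.
          stretched = (gray - gray.min()) / max(np.ptp(gray), 1e-6)
          # Sobel gradient magnitude as the edge image.
          gx = ndimage.sobel(stretched, axis=1)
          gy = ndimage.sobel(stretched, axis=0)
          edges = np.hypot(gx, gy)
          edges /= max(edges.max(), 1e-6)
          # Block both images, then subtract the weighted blocked edge image.
          out = pixelize(stretched, block) - edge_weight * pixelize(edges, block)
          return np.clip(out, 0.0, 1.0)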

  18. Simulation of high-resolution MFM tip using exchange-spring magnet

    Energy Technology Data Exchange (ETDEWEB)

    Saito, H. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan)]. E-mail: hsaito@ipc.akita-u.ac.jp; Yatsuyanagi, D. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ishio, S. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ito, A. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Kawamura, H. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Ise, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Taguchi, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Takahashi, S. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan)

    2007-03-15

    The transfer function of magnetic force microscope (MFM) tips using an exchange-spring trilayer composed of a centered soft magnetic layer and two hard magnetic layers was calculated, and the resolution was estimated by considering the thermodynamic noise limit of an MFM cantilever. It was found that reducing the thickness of the centered soft magnetic layer and the magnetization of the hard magnetic layers is important for obtaining high resolution. Tips using an exchange-spring trilayer with a very thin FeCo layer and isotropic hard magnetic layers, such as CoPt and FePt, are found to be suitable for obtaining a resolution of less than 10 nm at room temperature.

  19. Validation of high-resolution aerosol optical thickness simulated by a global non-hydrostatic model against remote sensing measurements

    Science.gov (United States)

    Goto, Daisuke; Sato, Yousuke; Yashiro, Hisashi; Suzuki, Kentaroh; Nakajima, Teruyuki

    2017-02-01

    A high-performance computing resource allows us to conduct numerical simulations with a horizontal grid spacing that is sufficiently fine to resolve cloud systems. The cutting-edge computational capability provided by the K computer at RIKEN in Japan enabled the authors to perform long-term, global simulations of air pollution and clouds with unprecedentedly high horizontal resolutions. In this study, a next-generation model capable of simulating global air pollution with O(10 km) grid spacing was developed by coupling an atmospheric chemistry model to the Non-hydrostatic Icosahedral Atmospheric Model (NICAM). Using the newly developed model, month-long simulations for July were conducted with 14 km grid spacing on the K computer. Regarding the global distributions of aerosol optical thickness (AOT), it was found that the correlation coefficient (CC) between the simulation and AERONET measurements was approximately 0.7, and the normalized mean bias was -10%. The simulated AOT was also compared with satellite-retrieved values; the CC was approximately 0.6. The radiative effects due to each chemical species (dust, sea salt, organics, and sulfate) were also calculated and compared with multiple measurements. As a result, the simulated fluxes of upward shortwave radiation at the top of the atmosphere and at the surface compared well with the observed values, whereas those of downward shortwave radiation at the surface were underestimated, even when all aerosol components were considered. However, the aerosol radiative effects on the downward shortwave flux at the surface were found to be as large as 10 W/m² on a global scale; thus, simulated aerosol distributions can strongly affect the simulated air temperature and dynamic circulation.
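
    The two skill scores quoted above (correlation coefficient and normalized mean bias) can be reproduced in a few lines; the sketch below uses hypothetical AOT samples and assumes the common definition NMB = Σ(model − obs)/Σ(obs), expressed as a percentage.

      import numpy as np

      def correlation(sim, obs):
          # Pearson correlation coefficient between model and observed samples.
          return np.corrcoef(sim, obs)[0, 1]

      def normalized_mean_bias(sim, obs):
          # NMB in percent: total model excess (or deficit) relative to the observed total.
          return 100.0 * np.sum(sim - obs) / np.sum(obs)

      aot_sim = np.array([0.21, 0.35, 0.18, 0.40])   # hypothetical model AOT samples
      aot_obs = np.array([0.25, 0.33, 0.22, 0.45])   # hypothetical AERONET AOT samples
      print(correlation(aot_sim, aot_obs), normalized_mean_bias(aot_sim, aot_obs))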

  20. Initialization of high resolution surface wind simulations using NWS gridded data

    Science.gov (United States)

    J. Forthofer; K. Shannon; Bret Butler

    2010-01-01

    WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady state. It is currently being modified to allow the user to initialize the flow calculation using the National Digital Forecast Database. It essentially allows the user to downscale the coarse-scale simulations from mesoscale models to...

  1. Effects of high spatial and temporal resolution Earth observations on simulated hydrometeorological variables in a cropland (southwestern France

    Directory of Open Access Journals (Sweden)

    J. Etchanchu

    2017-11-01

    Agricultural landscapes often consist of a patchwork of crop fields whose seasonal evolution depends on specific crop rotation patterns and phenologies. This temporal and spatial heterogeneity affects surface hydrometeorological processes and must be taken into account in simulations of land surface and distributed hydrological models. The Sentinel-2 mission allows for the monitoring of land cover and vegetation dynamics at unprecedented spatial resolutions and revisit frequencies (20 m and 5 days, respectively) that are fully compatible with such heterogeneous agricultural landscapes. Here, we evaluate the impact of Sentinel-2-like remote sensing data on the simulation of surface water and energy fluxes via the Interactions between the Surface Biosphere Atmosphere (ISBA) land surface model included in the EXternalized SURface (SURFEX) modeling platform. The study focuses on the effect of the leaf area index (LAI) spatial and temporal variability on these fluxes. We compare the use of the LAI climatology from ECOCLIMAP-II, used by default in SURFEX-ISBA, and time series of LAI derived from the high-resolution Formosat-2 satellite data (8 m). The study area is an agricultural zone in southwestern France covering 576 km² (24 km × 24 km). An innovative plot-scale approach is used, in which each computational unit has a homogeneous vegetation type. Evaluation of the simulation quality is done by comparing model outputs with in situ eddy covariance measurements of latent heat flux (LE). Our results show that the use of LAI derived from high-resolution remote sensing significantly improves simulated evapotranspiration with respect to ECOCLIMAP-II, especially when the surface is covered with summer crops. The comparison with in situ measurements shows an improvement of roughly 0.3 in the correlation coefficient and a decrease of around 30% in the root mean square error (RMSE) of the simulated evapotranspiration. This

  2. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    Science.gov (United States)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic region following the resolution increase. More specifically, the well-known negative bias in atmospheric blocking over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  3. Adaptive resolution simulation of an atomistic DNA molecule in MARTINI salt solution

    NARCIS (Netherlands)

    Zavadlav, J.; Podgornik, R.; Melo, M.N.; Marrink, S.J.; Praprotnik, M.

    2016-01-01

    We present a dual-resolution model of a deoxyribonucleic acid (DNA) molecule in a bathing solution, where we concurrently couple atomistic bundled water and ions with the coarse-grained MARTINI model of the solvent. We use our fine-grained salt solution model as a solvent in the inner shell

  4. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  5. Identifying added value in high-resolution climate simulations over Scandinavia

    DEFF Research Database (Denmark)

    Mayer, Stephania; Fox Maule, Cathrine; Sobolowski, Stefan

    2015-01-01

    High-resolution data are needed in order to assess potential impacts of extreme events on infrastructure in the mid-latitudes. Dynamical downscaling offers one way to obtain this information. However, prior to implementation in any impacts assessment scheme, model output must be validated and det...

  6. Using Process Observation to Teach Alternative Dispute Resolution: Alternatives to Simulation.

    Science.gov (United States)

    Bush, Robert A. Barush

    1987-01-01

    A method of teaching alternative dispute resolution (ADR) involves sending students to observe actual ADR sessions, by agreement with the agencies conducting them, and then analyzing the students' observations in focused discussions to improve student insight and understanding of the processes involved. (MSE)

  7. Changes in snow cover over China in the 21st century as simulated by a high resolution regional climate model

    International Nuclear Information System (INIS)

    Shi Ying; Gao Xuejie; Wu Jia; Giorgi, Filippo

    2011-01-01

    On the basis of climate change simulations conducted using a high-resolution regional climate model, the Abdus Salam International Centre for Theoretical Physics (ICTP) Regional Climate Model RegCM3 at 25 km grid spacing, future changes in snow cover over China are analyzed. The simulations are carried out for the period 1951–2100 following the IPCC SRES A1B emission scenario. The results suggest good performance of the model in simulating the number of snow cover days, the snow cover depth, and the starting and ending dates of snow cover for the present day (1981–2000). Their spatial distributions and amounts show fair consistency between the simulation and observations, although with some discrepancies. In general, decreases in the number of snow cover days and the snow cover depth, together with postponed snow starting dates and advanced snow ending dates, are simulated for the future, except in some places where the opposite appears. The most dramatic changes are found over the Tibetan Plateau among the three major snow cover areas of Northeast China, Northwest China and the Tibetan Plateau.

  8. Variability of wet troposphere delays over inland reservoirs as simulated by a high-resolution regional climate model

    Science.gov (United States)

    Clark, E.; Lettenmaier, D. P.

    2014-12-01

    Satellite radar altimetry is widely used for measuring global sea level variations and, increasingly, water height variations of inland water bodies. Existing satellite radar altimeters measure water surfaces directly below the spacecraft (approximately at nadir). Over the ocean, most of these satellites use radiometry to measure the delay of radar signals caused by water vapor in the atmosphere (also known as the wet troposphere delay (WTD)). However, radiometry can only be used to estimate this delay over the largest inland water bodies, such as the Great Lakes, due to spatial resolution issues. As a result, atmospheric models are typically used to simulate and correct for the WTD at the time of observations. The resolutions of these models are quite coarse, at best about 5000 km2 at 30˚N. The upcoming NASA- and CNES-led Surface Water and Ocean Topography (SWOT) mission, on the other hand, will use interferometric synthetic aperture radar (InSAR) techniques to measure a 120-km-wide swath of the Earth's surface. SWOT is expected to make useful measurements of water surface elevation and extent (and storage change) for inland water bodies at spatial scales as small as 250 m, which is much smaller than current altimetry targets and several orders of magnitude smaller than the models used for wet troposphere corrections. Here, we calculate WTD from very high-resolution (4/3-km to 4-km) simulations of the Weather Research and Forecasting (WRF) regional climate model, and use the results to evaluate spatial variations in WTD. We focus on six U.S. reservoirs: Lake Elwell (MT), Lake Pend Oreille (ID), Upper Klamath Lake (OR), Elephant Butte (NM), Ray Hubbard (TX), and Sam Rayburn (TX). The reservoirs vary in climate, shape, use, and size. Because evaporation from open water impacts local water vapor content, we compare time series of WTD over land and water in the vicinity of each reservoir. To account for resolution effects, we examine the difference in WRF-simulated

  9. Proposing New Methods to Enhance the Low-Resolution Simulated GPR Responses in the Frequency and Wavelet Domains

    Directory of Open Access Journals (Sweden)

    Reza Ahmadi

    2014-12-01

    Full Text Available To date, a number of numerical methods, including the popular Finite-Difference Time Domain (FDTD) technique, have been proposed to simulate Ground-Penetrating Radar (GPR) responses. Despite having a number of advantages, the finite-difference method also has pitfalls, such as being very time consuming when simulating the most common case of media with high dielectric permittivity, which makes the forward modelling process very long lasting even with modern high-speed computers. In the present study the well-known hyperbolic pattern response of horizontal cylinders, usually found in GPR B-Scan images, is used as a basic model to examine the possibility of reducing the forward modelling execution time. In general, the simulated GPR traces of common reflected objects are time shifted, as with the Normal Moveout (NMO) traces encountered in seismic reflection responses. This suggests applying the Fourier transform to the GPR traces and employing the time-shifting property of the transformation to interpolate traces between the existing traces in the frequency domain (FD). Therefore, in the present study two post-processing algorithms have been adopted to increase the speed of forward modelling while maintaining the required precision. The first approach is based on linear interpolation in the Fourier domain, allowing the lateral trace-to-trace interval to be increased while keeping an appropriate sampling frequency of the signal and preventing any aliasing. In the second approach, a super-resolution algorithm based on the 2D wavelet transform is developed to increase both the vertical and horizontal resolution of the GPR B-Scan images while preserving the scale and shape of hidden hyperbola features. Through comparing outputs from both methods with the corresponding actual high-resolution forward response, it is shown that both approaches can perform satisfactorily, although the wavelet-based approach outperforms the frequency-domain approach noticeably, both in amplitude and
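
    The frequency-domain approach above exploits the Fourier time-shift property to build traces between the simulated ones. A minimal sketch of that idea, assuming uniformly sampled traces and a shift estimated by cross-correlation (hypothetical helper, not the authors' algorithm):

      import numpy as np

      def interpolate_trace(trace_a, trace_b, dt):
          """Synthesize a trace halfway between two adjacent GPR traces by
          applying half of the estimated time shift as a phase ramp."""
          n = len(trace_a)
          # lag (in samples) of trace_b relative to trace_a
          lag = np.argmax(np.correlate(trace_b, trace_a, mode="full")) - (n - 1)
          shift = 0.5 * lag * dt
          freqs = np.fft.rfftfreq(n, d=dt)
          # delay trace_a and advance trace_b by half the lag, then average
          a_mid = np.fft.irfft(np.fft.rfft(trace_a) * np.exp(-2j * np.pi * freqs * shift), n)
          b_mid = np.fft.irfft(np.fft.rfft(trace_b) * np.exp(+2j * np.pi * freqs * shift), n)
          return 0.5 * (a_mid + b_mid)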

  10. Principles and simulations of high-resolution STM imaging with a flexible tip apex

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Ondřej; Hapala, Prokop; Ondráček, Martin; Jelínek, Pavel

    2017-01-01

    Vol. 95, No. 4 (2017), pp. 1-9, article No. 045407. ISSN 2469-9950 R&D Projects: GA ČR(CZ) GC14-16963J Institutional support: RVO:68378271 Keywords: STM * AFM * high-resolution Subject RIV: BM - Solid Matter Physics; Magnetism OECD field: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 3.836, year: 2016

  11. Intramolecular diffusive motion in alkane monolayers studied by high-resolution quasielastic neutron scattering and molecular dynamics simulations

    DEFF Research Database (Denmark)

    Hansen, Flemming Yssing; Criswell, L.; Fuhrmann, D

    2004-01-01

    Molecular dynamics simulations of a tetracosane (n-C24H50) monolayer adsorbed on a graphite basal-plane surface show that there are diffusive motions associated with the creation and annihilation of gauche defects occurring on a time scale of ~0.1–4 ns. We present evidence ... that these relatively slow motions are observable by high-energy-resolution quasielastic neutron scattering (QNS), thus demonstrating QNS as a technique, complementary to nuclear magnetic resonance, for studying conformational dynamics on a nanosecond time scale in molecular monolayers.

  12. Modeling the Self-assembly and Stability of DHPC Micelles using Atomic Resolution and Coarse Grained MD Simulations

    DEFF Research Database (Denmark)

    Kraft, Johan Frederik; Vestergaard, Mikkel; Schiøtt, Birgit

    2012-01-01

    Membrane mimics such as micelles and bicelles are widely used in experiments involving membrane proteins. With the aim of being able to carry out molecular dynamics simulations in environments comparable to experimental conditions, we set out to test the ability of both coarse-grained and atomistic ... resolution force fields to model the experimentally observed behavior of the lipid 1,2-dihexanoyl-sn-glycero-3-phosphocholine (DHPC), which is a widely used lipid for biophysical characterization of membrane proteins. It becomes clear from our results that a satisfactory modeling of DHPC aggregates ...

  13. Evaluation of a high-resolution regional climate simulation over Greenland

    Energy Technology Data Exchange (ETDEWEB)

    Lefebre, Filip [Universite catholique de Louvain, Institut d' Astronomie et de Geophysique G. Lemaitre, Louvain-la-Neuve (Belgium); Vito - Flemish Institute for Technological Research, Integral Environmental Studies, Mol (Belgium); Fettweis, Xavier; Ypersele, Jean-Pascal van; Marbaix, Philippe [Universite catholique de Louvain, Institut d' Astronomie et de Geophysique G. Lemaitre, Louvain-la-Neuve (Belgium); Gallee, Hubert [Laboratoire de Glaciologie et de Geophysique de l' Environnement, Grenoble (France); Greuell, Wouter [Utrecht University, Institute for Marine and Atmospheric Research, Utrecht (Netherlands); Calanca, Pierluigi [Swiss Federal Research Station for Agroecology and Agriculture, Zurich (Switzerland)

    2005-07-01

    A simulation of the 1991 summer has been performed over south Greenland with a coupled atmosphere-snow regional climate model (RCM) forced by the ECMWF re-analysis. The simulation is evaluated with in-situ coastal and ice-sheet atmospheric and glaciological observations. Modelled air temperature, specific humidity, wind speed and radiative fluxes are in good agreement with the available observations, although uncertainties in the radiative transfer scheme need further investigation to improve the model's performance. In the sub-surface snow-ice model, surface albedo is calculated from the simulated snow grain shape and size, snow depth, meltwater accumulation, cloudiness and ice albedo. The use of snow metamorphism processes allows a realistic modelling of the temporal variations in the surface albedo during both melting periods and accumulation events. Concerning the surface albedo, the main finding is that an accurate albedo simulation during the melting season strongly depends on a proper initialization of the surface conditions which mainly result from winter accumulation processes. Furthermore, in a sensitivity experiment with a constant 0.8 albedo over the whole ice sheet, the average amount of melt decreased by more than 60%, which highlights the importance of a correctly simulated surface albedo. The use of this coupled atmosphere-snow RCM offers new perspectives in the study of the Greenland surface mass balance due to the represented feedback between the surface climate and the surface albedo, which is the most sensitive parameter in energy-balance-based ablation calculations. (orig.)

  14. EGS4CYL a Montecarlo simulation method of a PET or spect equipment at high spatial resolution

    International Nuclear Information System (INIS)

    Ferriani, S.; Galli, M.

    1995-11-01

    This report describes a Monte Carlo method for the simulation of a PET or SPECT system. The method is based on the EGS4CYL code. This work has been done in the framework of the Hirespet collaboration for the development of a high spatial resolution tomograph; the method will be used in the design of the tomograph. The treated geometry consists of a set of coaxial cylinders surrounded by a ring of detectors. The detectors have a box shape, and a collimator in front of each of them can be included by means of geometrical constraints on the incident particles. An isotropic source is placed in the middle of the system. The EGS4 code is used for particle transport, and the CERN packages HIGZ and HBOOK are used for storing and plotting the results.

  15. Simulation of heat and mass transfer in turbulent channel flow using the spectral-element method: effect of spatial resolution

    Science.gov (United States)

    Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.

    2016-10-01

    We use the open-source code nek5000 to assess the accuracy of high-order spectral-element large-eddy simulations (LES) of a turbulent channel flow, as a function of the spatial resolution, against direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered, based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for the subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS grid, provided predictions of the friction velocity within a 2.0% accuracy interval.

  16. Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil

    Science.gov (United States)

    Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo

    2013-04-01

    The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997, and has gone through upgrades over these years. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilian cities of Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. In addition, the region suffers from frequent floods and landslides, so accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high-resolution simulations in this complex area. Verification of the model runs uses observations taken from the nuclear power plant and higher-resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km run is a second nesting: it is driven by the Eta-15km run, which in turn is driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface-layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region, and increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive

  17. Counts-in-Cylinders in the Sloan Digital Sky Survey with Comparisons to N-Body

    Energy Technology Data Exchange (ETDEWEB)

    Berrier, Heather D.; Barton, Elizabeth J.; /UC, Irvine; Berrier, Joel C.; /Arkansas U.; Bullock, James S.; /UC, Irvine; Zentner, Andrew R.; /Pittsburgh U.; Wechsler, Risa H. /KIPAC, Menlo Park /SLAC

    2010-12-16

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments and a vital test of models of galaxy formation within the prevailing, hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey, Data Release 4. We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations, and data from SDSS DR4 to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent, empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3 and 6 h⁻¹ Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h⁻¹ Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h⁻¹ Mpc cylinder than the galaxies in any of the models we use. Simple, phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
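
    A minimal sketch of the counts-in-cylinders statistic as defined above (the number of companions within a projected radius and a line-of-sight window), using hypothetical coordinate arrays and default thresholds; this is illustrative, not the authors' pipeline.

      import numpy as np

      def counts_in_cylinders(x, y, z_los, r_proj=3.0, dz_los=6.0):
          """For each galaxy, count companions inside a cylinder of projected
          radius r_proj and half-depth dz_los (same length units as x, y)."""
          x, y, z_los = (np.asarray(a, float) for a in (x, y, z_los))
          counts = np.zeros(len(x), dtype=int)
          for i in range(len(x)):
              d_proj = np.hypot(x - x[i], y - y[i])
              in_cyl = (d_proj < r_proj) & (np.abs(z_los - z_los[i]) < dz_los)
              counts[i] = in_cyl.sum() - 1   # exclude the galaxy itself
          return counts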

  18. Path integral molecular dynamics within the grand canonical-like adaptive resolution technique: Simulation of liquid water

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Animesh, E-mail: animesh@zedat.fu-berlin.de; Delle Site, Luigi, E-mail: dellesite@fu-berlin.de [Institute for Mathematics, Freie Universität Berlin, Berlin (Germany)

    2015-09-07

    Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.

  19. Online model evaluation of large-eddy simulations covering Germany with a horizontal resolution of 156 m

    Science.gov (United States)

    Hansen, Akio; Ament, Felix; Lammert, Andrea

    2017-04-01

    Large-eddy simulations have been performed for several decades, but due to computational limits most studies were restricted to small domains or idealised initial and boundary conditions. Within the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project, realistic, weather-forecast-like LES simulations were performed with the newly developed ICON LES model for several days. The domain covers central Europe with a horizontal resolution down to 156 m. The setup consists of more than 3 billion grid cells, so that one 3D dump requires roughly 500 GB. A newly developed online evaluation toolbox was created to check instantaneously whether the model simulations are realistic. The toolbox automatically combines model results with observations and generates quicklooks for various variables. So far, temperature and humidity profiles, cloud cover, integrated water vapour, precipitation and many more are included. All kinds of observations, such as aircraft observations, soundings or precipitation radar networks, are used. For each dataset, a specific module is created, which allows for easy handling and enhancement of the toolbox. Most of the observations are automatically downloaded from the Standardized Atmospheric Measurement Database (SAMD). The evaluation tool supports scientists in monitoring computationally costly model simulations and gives a first overview of the model's performance. The structure of the toolbox as well as the SAMD database are presented. Furthermore, the toolbox was applied to an ICON LES sensitivity study, for which example results are shown.

  20. Relativistic n-body wave equations in scalar quantum field theory

    International Nuclear Information System (INIS)

    Emami-Razavi, Mohsen

    2006-01-01

    The variational method in a reformulated Hamiltonian formalism of Quantum Field Theory (QFT) is used to derive relativistic n-body wave equations for scalar particles (bosons) interacting via a massive or massless mediating scalar field (the scalar Yukawa model). Simple Fock-space variational trial states are used to derive relativistic n-body wave equations. The equations are shown to have the Schroedinger non-relativistic limits, with Coulombic interparticle potentials in the case of a massless mediating field and Yukawa interparticle potentials in the case of a massive mediating field. Some examples of approximate ground state solutions of the n-body relativistic equations are obtained for various strengths of coupling, for both massive and massless mediating fields

  1. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    International Nuclear Information System (INIS)

    2015-01-01

    In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures as prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results and measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely, convective mixing, (flow direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thus thermal mixing is limited, due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a separate-effects (computational) test and is recommended as part of any integral analysis. To this effect, poorly mixed streams have a potential impact on the rest of the SFR design and scaling, especially the placement of internal components, such as the IHX, that may see poorly mixed streams

  2. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tokuhiro, Akiro [Univ. of Idaho, Moscow, ID (United States); Ruggles, Art [Univ. of Tennessee, Knoxville, TN (United States); Pointer, David [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-22

    In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures as prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results and measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely, convective mixing, (flow direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thus thermal mixing is limited, due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. To this effect, poorly mixed streams have a potential impact on the rest of the SFR design and scaling, especially the placement of internal components, such as the IHX, that may see poorly mixed

  3. Enhanced simulator software for image validation and interpretation for multimodal localization super-resolution fluorescence microscopy

    Science.gov (United States)

    Erdélyi, Miklós; Sinkó, József; Gajdos, Tamás.; Novák, Tibor

    2017-02-01

    Optical super-resolution techniques such as single-molecule localization have become one of the most dynamically developing areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied to live-cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of point spread functions (PSF) to the measured images of stochastically excited, identical fluorescent molecules. These techniques require controlling the rates between the on, off and bleached states, keeping the number of active fluorescent molecules at an optimum value so that their diffraction-limited images can be detected separately, both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization-dependent PSF shape, and only an averaged distribution - typically estimated by a Gaussian function - is observed. The TestSTORM software was developed to generate image stacks for traditional localization microscopes, where localization means the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectra, etc.) of the emitted photons can be used to further monitor the chemical and physical properties (viscosity, pH, etc.) of the local environment. The image stack generating program was upgraded with several new features, such as multicolour imaging, polarization-dependent PSFs, built-in 3D visualization, and structured background. These features make the program an ideal tool for optimizing the imaging and sample preparation conditions.
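
    As described above, such a simulator renders stochastically switching emitters as Gaussian-approximated PSFs with noise. A minimal, stripped-down sketch of rendering one such frame (all parameter values hypothetical; not TestSTORM itself):

      import numpy as np

      def render_frame(n_px=64, px_nm=100.0, sigma_nm=150.0, photons=1000, bg=10, seed=0):
          """One camera frame of a single emitter: Gaussian PSF plus Poisson noise."""
          rng = np.random.default_rng(seed)
          x0, y0 = rng.uniform(0, n_px * px_nm, size=2)        # true emitter position (nm)
          yy, xx = (np.mgrid[0:n_px, 0:n_px] + 0.5) * px_nm    # pixel-centre coordinates (nm)
          psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma_nm ** 2))
          expected = photons * psf / psf.sum() + bg            # expected counts per pixel
          return rng.poisson(expected), (x0, y0)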

  4. Simulated cosmic microwave background maps at 0.5 deg resolution: Unresolved features

    Science.gov (United States)

    Kogut, A.; Hinshaw, G.; Bennett, C. L.

    1995-01-01

    High-contrast peaks in the cosmic microwave background (CMB) anisotropy can appear as unresolved sources to observers. We fit simulated CMB maps generated with a cold dark matter model to a set of unresolved features at instrumental resolutions of 0.5 deg-1.5 deg to derive the integral number density per steradian n(>|T|) of features brighter than a threshold temperature |T|, and compare the results to recent experiments. A typical medium-scale experiment observing 0.001 sr at 0.5 deg resolution would expect to observe one feature brighter than 85 micro-K after convolution with the beam profile, with less than 5% probability of observing a source brighter than 150 micro-K. Increasing the power-law index of primordial density perturbations n from 1 to 1.5 raises these temperature limits |T| by a factor of 2. The MSAM features are in agreement with standard cold dark matter models and are not necessarily evidence for processes beyond the standard model.
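
    A worked sketch of the cumulative statistic n(>|T|) used above, i.e. the number of beam-convolved features per steradian whose peak amplitude exceeds a threshold |T|; the peak list and survey area below are made up for illustration:

      import numpy as np

      def cumulative_feature_density(peak_temps_uK, solid_angle_sr, thresholds_uK):
          """n(>|T|): features per steradian brighter than each threshold."""
          t = np.abs(np.asarray(peak_temps_uK, float))
          return np.array([(t > thr).sum() / solid_angle_sr for thr in thresholds_uK])

      # e.g. one feature above 85 micro-K in 0.001 sr gives n(>85 micro-K) = 1000 sr^-1
      n_gt = cumulative_feature_density([92.0, 60.0, 41.0], 0.001, [85.0, 150.0])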

  5. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M. [ORNL]; McManamay, Ryan A. [ORNL]; Nagle, Nicholas N. [ORNL]; Piburn, Jesse O. [ORNL]; Stewart, Robert N. [ORNL]; Surendran Nair, Sujithkumar [ORNL]

    2016-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Keywords: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification
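
    A minimal sketch of the Monte-Carlo idea described above: treat the inputs as uncertain, sample them many times, and report a demand distribution rather than a single number. All distributions and parameter values below are hypothetical placeholders, not the ORNL model.

      import numpy as np

      def simulate_block_demand(n_draws=10000, seed=0):
          """Monte-Carlo distribution of annual electricity demand for one
          census block, propagating uncertainty in counts and usage rates."""
          rng = np.random.default_rng(seed)
          households = rng.poisson(lam=120, size=n_draws)                    # count uncertainty
          kwh_per_hh = rng.normal(loc=10800.0, scale=1500.0, size=n_draws)   # rate uncertainty
          demand = households * kwh_per_hh
          return np.percentile(demand, [5, 50, 95])   # kWh/yr credible range

      low, median, high = simulate_block_demand()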

  6. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations

    Energy Technology Data Exchange (ETDEWEB)

    Di Luca, Alejandro; Laprise, Rene [Universite du Quebec a Montreal (UQAM), Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Departement des Sciences de la Terre et de l' Atmosphere, PK-6530, Succ. Centre-ville, B.P. 8888, Montreal, QC (Canada); De Elia, Ramon [Universite du Quebec a Montreal, Ouranos Consortium, Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal (Canada)

    2012-03-15

    Regional Climate Models (RCMs) constitute the most often used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order that the RCM technique can generate some added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of fine-scale precipitation variance required to adequately describe a given climate statistics will then be used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in warm season compared to cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations allowing the evaluation of the sensitivity of changing resolution in the real climate system. The results show that RCMs tend to reproduce relatively well the PAV compared to observations although showing an overestimation of the PAV in warm season and mountainous regions. (orig.)

  7. N-body simulations of stars escaping from the Orion nebula

    NARCIS (Netherlands)

    Gualandris, A.; Portegies Zwart, S.F.; Eggleton, P.P.

    2004-01-01

    We study the dynamical interaction in which the two single runaway stars, AE Aurigæ and mu Columbæ, and the binary iota Orionis acquired their unusually high space velocity. The two single runaways move in almost opposite directions with a velocity greater than 100 km s-1 away from the Trapezium

  8. TreePM: A Code for Cosmological N-Body Simulations

    Indian Academy of Sciences (India)

    Particle summation of the short-range force takes a lot of time in highly clustered ... Our approach is to divide the force into long- and short-range components ... the memory requirement is obviously greater than that for either one code. ...
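
    The long-/short-range split referred to above is commonly implemented with a Gaussian (erf/erfc) cutoff at a scale r_s, so that the short-range part is summed directly or by the tree and the smooth long-range part is handled on the PM grid. A hedged sketch of such a split for a single pair (illustrative, not the TreePM code itself):

      import numpy as np
      from scipy.special import erfc

      def split_pairwise_force(r, r_s, G=1.0, m1=1.0, m2=1.0):
          """Split the Newtonian pair force G*m1*m2/r^2 into short- and
          long-range parts using an erfc cutoff at scale r_s."""
          f_total = G * m1 * m2 / r**2
          x = r / (2.0 * r_s)
          f_short = f_total * (erfc(x) + (2.0 * x / np.sqrt(np.pi)) * np.exp(-x**2))
          f_long = f_total - f_short                    # left to the mesh (PM) part
          return f_short, f_long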

  9. Simulation and resolution of voltage reversal in microbial fuel cell stack.

    Science.gov (United States)

    Sugnaux, Marc; Savy, Cyrille; Cachelin, Christian Pierre; Hugenin, Gérald; Fischer, Fabian

    2017-08-01

    To understand the biotic and non-biotic contributions to voltage reversals in microbial fuel cell (MFC) stacks, they were simulated with an electronic MFC-Stack mimic. The simulation was then compared with results from a real 3 L triple MFC-Stack with a shared anolyte. It showed that voltage reversals originate from the variability of biofilms, but the external load also plays a role. When similar biofilm properties were created on all anodes, the likelihood of voltage reversals was largely reduced. Homogeneous biofilms on all anodes were created by electrical circuit alternation and electrostimulation. Conversely, anolyte recirculation or increased nutrient supply only postponed reversals, and unfavourable voltage asymmetries on the anodes persisted. In conclusion, voltage reversals are often a negative event but also occur close to best MFC-Stack performance. They were manageable, even with a simplified MFC architecture in which multiple anodes share the same anolyte.

  10. A high-resolution code for large eddy simulation of incompressible turbulent boundary layer flows

    KAUST Repository

    Cheng, Wan

    2014-03-01

    We describe a framework for large eddy simulation (LES) of incompressible turbulent boundary layers over a flat plate. This framework uses a fractional-step method with fourth-order finite difference on a staggered mesh. We present several laminar examples to establish the fourth-order accuracy and energy conservation property of the code. Furthermore, we implement a recycling method to generate turbulent inflow. We use the stretched spiral vortex subgrid-scale model and virtual wall model to simulate the turbulent boundary layer flow. We find that the case with Reθ ≈ 2.5 × 10⁵ agrees well with available experimental measurements of wall friction, streamwise velocity profiles and turbulent intensities. We demonstrate that for cases with extremely large Reynolds numbers (Reθ = 10¹²), the present LES can reasonably predict the flow with a coarse mesh. The parallel implementation of the LES code demonstrates reasonable scaling on O(10³) cores.

  11. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X.

    2011-01-01

    This work presents a computational process for converting tomographic and MRI medical images of patients into voxel structures and then into an input file to be used in a Monte Carlo simulation code for radiotherapy treatment of tumors. The scenario inherent to the patient is simulated by this process, using the volume element (voxel) as the unit of computational tracing. The head voxel structure has voxels with volumetric dimensions of around 1 mm³ and a population of millions, which allows a realistic simulation and reduces the need for digital image processing techniques for adjustments and equalizations. With this additional data from the code, a more critical analysis can be developed to determine the volume of the tumor and the required protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed planning. To execute this computational process, the SAPDI computational system is used for digital image processing and data optimization, the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel structures and input files, and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  12. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X., E-mail: lboia@con.ufrj.b, E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear; Salmon Junior, Helio A. [Clinicas Oncologicas Integradas (COI), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    This work presents a computational process for converting tomographic and MRI medical images of patients into voxel structures and then into an input file to be used in a Monte Carlo simulation code for radiotherapy treatment of tumors. The scenario inherent to the patient is simulated by this process, using the volume element (voxel) as the unit of computational tracing. The head voxel structure has voxels with volumetric dimensions of around 1 mm³ and a population of millions, which allows a realistic simulation and reduces the need for digital image processing techniques for adjustments and equalizations. With this additional data from the code, a more critical analysis can be developed to determine the volume of the tumor and the required protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed planning. To execute this computational process, the SAPDI computational system is used for digital image processing and data optimization, the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel structures and input files, and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  13. Temporal resolution criterion for correctly simulating relativistic electron motion in a high-intensity laser field

    Energy Technology Data Exchange (ETDEWEB)

    Arefiev, Alexey V. [Institute for Fusion Studies, The University of Texas, Austin, Texas 78712 (United States); Cochran, Ginevra E.; Schumacher, Douglass W. [Physics Department, The Ohio State University, Columbus, Ohio 43210 (United States); Robinson, Alexander P. L. [Central Laser Facility, STFC Rutherford-Appleton Laboratory, Didcot OX11 0QX (United Kingdom); Chen, Guangye [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2015-01-15

    Particle-in-cell codes are now standard tools for studying ultra-intense laser-plasma interactions. Motivated by direct laser acceleration of electrons in sub-critical plasmas, we examine temporal resolution requirements that must be satisfied to accurately calculate electron dynamics in strong laser fields. Using the motion of a single electron in a perfect plane electromagnetic wave as a test problem, we show surprising deterioration of the numerical accuracy with increasing wave amplitude a₀ for a given time-step. We go on to show analytically that the time-step must be significantly less than λ/(c a₀) to achieve good accuracy. We thus propose adaptive electron sub-cycling as an efficient remedy.
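
    The criterion above translates into a simple sub-cycling rule: push the electrons with enough sub-steps that the effective time-step stays well below λ/(c a₀). A minimal sketch with a hypothetical safety factor (not the authors' implementation):

      import math

      def subcycling_factor(dt, wavelength, a0, c=299792458.0, safety=0.05):
          """Number of electron-push sub-steps per field step so that the
          effective time-step is at most safety * wavelength / (c * a0)."""
          dt_max = safety * wavelength / (c * max(a0, 1.0))
          return max(1, math.ceil(dt / dt_max))

      # e.g. a 0.8 um laser with a0 = 20 and a 0.1 fs field step
      n_sub = subcycling_factor(dt=1e-16, wavelength=0.8e-6, a0=20.0)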

  14. Simulation and Prediction of Weather Radar Clutter Using a Wave Propagator on High Resolution NWP Data

    DEFF Research Database (Denmark)

    Benzon, Hans-Henrik; Bovith, Thomas

    2008-01-01

    Weather radars are essential sensors for observation of precipitation in the troposphere and play a major part in weather forecasting and hydrological modelling. Clutter caused by non-standard wave propagation is a common problem in weather radar applications, and in this paper a method for prediction of this type of weather radar clutter is presented. The method uses a wave propagator to identify areas of potential non-standard propagation. The wave propagator uses a three-dimensional refractivity field derived from the geophysical parameters temperature, humidity, and pressure, obtained from a high-resolution Numerical Weather Prediction (NWP) model. The wave propagator is based on the parabolic equation approximation to the electromagnetic wave equation. The parabolic equation is solved using the well-known Fourier split-step method. Finally, the radar clutter prediction technique is used...
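
    A minimal single-range-step sketch of the Fourier split-step solution of the (narrow-angle) parabolic equation mentioned above, assuming a uniform vertical grid and a given refractivity profile; the operational propagator will differ in detail.

      import numpy as np

      def pe_split_step(u, n_refr, dz, dx, k0):
          """Advance the complex field u(z) one range step dx with the
          split-step Fourier method for the narrow-angle parabolic equation."""
          kz = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dz)    # vertical wavenumbers
          # diffraction half-step, applied in the spectral domain
          u = np.fft.ifft(np.fft.fft(u) * np.exp(-1j * kz**2 * dx / (2.0 * k0)))
          # refraction half-step, applied in the spatial domain
          return u * np.exp(1j * k0 * (n_refr - 1.0) * dx)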

  15. Coating Thickness Measurement of the Simulated TRISO-Coated Fuel Particles using an Image Plate and a High Resolution Scanner

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Kim, Yeon Ku; Jeong, Kyung Chai; Lee, Young Woo; Kim, Bong Goo; Eom, Sung Ho; Kim, Young Min; Yeo, Sung Hwan; Cho, Moon Sung

    2014-01-01

    In this study, the thickness of the coating layers of 196 coated particles was measured using an Image Plate detector, a high-resolution scanner and digital image processing techniques. The experimental results are as follows. - An X-ray image was acquired for 196 simulated TRISO-coated fuel particles with ZrO₂ kernels using an Image Plate with high resolution in a reduced amount of time. - We could observe clear boundaries between the coating layers for all 196 particles. - The geometric distortion error was compensated for in the calculation. - The coating thickness of TRISO-coated fuel particles can be nondestructively measured using X-ray radiography and digital image processing technology. - We can increase the number of TRISO-coated particles to be inspected by increasing the number of Image Plate detectors. A TRISO-coated fuel particle for an HTGR (high temperature gas-cooled reactor) is composed of a nuclear fuel kernel and outer coating layers. The coating layers consist of a buffer PyC (pyrolytic carbon) layer, an inner PyC (I-PyC) layer, a SiC layer, and an outer PyC (O-PyC) layer. The coating thickness is measured to evaluate the soundness of the coating layers. X-ray radiography is one of the nondestructive alternatives for measuring the coating thickness without generating radioactive waste. Several billion particles are to be loaded in a reactor, so as many sample particles as possible should be tested. Previously acquired X-ray images for the measurement of coating thickness included only a small number of particles because of the restricted resolution and size of the X-ray detector. We tried to test many particles per X-ray exposure to reduce the measurement time. In this experiment, an X-ray image was acquired for 196 simulated TRISO-coated fuel particles using an image plate and a high-resolution scanner with a pixel size of 25 × 25 μm². The coating thickness of the particles could be measured on the image

  16. OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution

    Science.gov (United States)

    Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George

    We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code capable of performing an unprecedented in silico experiment - simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton, modeled by 4 million mesoscopic particles, on a single shared-memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.

  17. Generating High-Resolution Lake Bathymetry over Lake Mead using the ICESat-2 Airborne Simulator

    Science.gov (United States)

    Li, Y.; Gao, H.; Jasinski, M. F.; Zhang, S.; Stoll, J.

    2017-12-01

    Precise lake bathymetry (i.e., elevation/contour) mapping is essential for optimal decision making in water resources management. Although the advancement of remote sensing has made it possible to monitor global reservoirs from space, most of the existing studies focus on estimating the elevation, area, and storage of reservoirs—and not on estimating the bathymetry. This limitation is attributed to the low spatial resolution of satellite altimeters. With the significant enhancement of ICESat-2—the Ice, Cloud & Land Elevation Satellite #2, which is scheduled to launch in 2018—producing satellite-based bathymetry becomes feasible. Here we present a pilot study for deriving the bathymetry of Lake Mead by combining Landsat area estimations with airborne elevation data using the prototype of ICESat-2—the Multiple Altimeter Beam Experimental Lidar (MABEL). First, an ISODATA classifier was adopted to extract the lake area from Landsat images during the period from 1982 to 2017. Then the lake area classifications were paired with MABEL elevations to establish an Area-Elevation (AE) relationship, which in turn was applied to the classification contour map to obtain the bathymetry. Finally, the Lake Mead bathymetry image was embedded onto the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM), to replace the existing constant values. Validation against sediment survey data indicates that the bathymetry derived from this study is reliable. This algorithm has the potential for generating global lake bathymetry when ICESat-2 data become available after next year's launch.
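
    A minimal sketch of the Area-Elevation (AE) step described above: fit elevation as a function of lake area from coincident Landsat/altimeter pairs, then assign an elevation to the area enclosed by each inundation contour. The numbers and the helper fit_area_elevation are hypothetical, not the authors' processing chain.

      import numpy as np

      def fit_area_elevation(areas_km2, elevations_m, degree=2):
          """Fit elevation as a polynomial function of lake surface area."""
          return np.poly1d(np.polyfit(areas_km2, elevations_m, degree))

      # made-up coincident pairs and contour areas, for illustration only
      ae = fit_area_elevation([510.0, 560.0, 610.0], [335.0, 355.0, 372.0])
      contour_areas = np.array([500.0, 540.0, 580.0, 620.0])
      contour_elev = ae(contour_areas)   # elevation assigned to each contour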

  18. Appending High-Resolution Elevation Data to GPS Speed Traces for Vehicle Energy Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wood, E.; Burton, E.; Duran, A.; Gonder, J.

    2014-06-01

    Accurate and reliable global positioning system (GPS)-based vehicle use data are highly valuable for many transportation, analysis, and automotive considerations. Model-based design, real-world fuel economy analysis, and the growing field of autonomous and connected technologies (including predictive powertrain control and self-driving cars) all have a vested interest in high-fidelity estimation of powertrain loads and vehicle usage profiles. Unfortunately, road grade can be a difficult property to extract from GPS data with consistency. In this report, we present a methodology for appending high-resolution elevation data to GPS speed traces via a static digital elevation model. Anomalous data points in the digital elevation model are addressed during a filtration/smoothing routine, resulting in an elevation profile that can be used to calculate road grade. This process is evaluated against a large, commercially available height/slope dataset from the Navteq/Nokia/HERE Advanced Driver Assistance Systems product. Results will show good agreement with the Advanced Driver Assistance Systems data in the ability to estimate road grade between any two consecutive points in the contiguous United States.
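
    A minimal sketch of the grade calculation the report describes: smooth the appended elevation profile, then take rise over run between consecutive GPS points. The moving-average window and array names are illustrative, not the published routine.

      import numpy as np

      def road_grade(distance_m, elevation_m, window=5):
          """Percent grade per segment from a smoothed elevation profile
          appended to a GPS speed trace."""
          kernel = np.ones(window) / window
          elev_smooth = np.convolve(elevation_m, kernel, mode="same")  # moving average
          rise = np.diff(elev_smooth)
          run = np.diff(np.asarray(distance_m, float))
          return 100.0 * rise / np.maximum(run, 1e-6)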

  19. Hygromorphic characterization of softwood under high resolution X-ray tomography for hygrothermal simulation

    Science.gov (United States)

    El Hachem, Chady; Abahri, Kamilia; Vicente, Jérôme; Bennacer, Rachid; Belarbi, Rafik

    2018-03-01

    Because of their complex hygromorphic behavior, the microstructural study of wooden materials has recently become a point of interest for researchers. The first part of this study characterizes the microstructural properties of spruce wood by high-resolution X-ray tomography. In the second part, the resulting geometrical parameters are incorporated when evaluating hygrothermal transfer in wood. To do so, reconstructions of three-dimensional (3D) image volumes, obtained with a voxel size of 0.5 μm, were achieved. Post-processing of the corresponding volumes gave access to the averages and standard deviations of lumen diameters and cell wall thicknesses. These results were obtained for both earlywood and latewood. Further, a segmentation approach for individualizing wood lumens was developed, which addresses an important challenge in understanding localized physical properties. In this context, 3D heat and mass transfer simulations within the real reconstructed geometries were performed in order to highlight the effect of wood directions on the equivalent conductivity and moisture diffusion coefficients. The results confirm that the softwood cellular structure has a critical impact on the reliability of the studied physical parameters.

  20. A Sensor Driven Probabilistic Method for Enabling Hyper Resolution Flood Simulations

    Science.gov (United States)

    Fries, K. J.; Salas, F.; Kerkez, B.

    2016-12-01

    A reduction in the cost of sensors and wireless communications is now enabling researchers and local governments to make flow, stage and rain measurements at locations that are not covered by existing USGS or state networks. We ask the question: how should these new sources of densified, street-level sensor measurements be used to make improved forecasts using the National Water Model (NWM)? Assimilating these data "into" the NWM can be challenging due to computational complexity, as well as the heterogeneity of sensor and other input data. Instead, we introduce a machine learning and statistical framework that layers these data "on top" of the NWM outputs to improve high-resolution hydrologic and hydraulic forecasting. By generalizing our approach into a post-processing framework, a rapidly repeatable blueprint is generated for decision makers who want to improve local forecasts by coupling sensor data with the NWM. We present preliminary results based on case studies in highly instrumented watersheds in the US. Through the use of statistical learning tools and hydrologic routing schemes, we demonstrate the ability of our approach to improve forecasts while simultaneously characterizing bias and uncertainty in the NWM.

  1. Absence of positive eigenvalues for hard-core N-body systems

    DEFF Research Database (Denmark)

    Ito, K.; Skibsted, Erik

    We show absence of positive eigenvalues for generalized 2-body hard-core Schrödinger operators under the condition of bounded strictly convex obstacles. A scheme for showing absence of positive eigenvalues for generalized N-body hard-core Schrödinger operators, N≥ 2, is presented. This scheme inv...

  2. On the discrete spectrum of the N-body quantum mechanical Hamiltonian. Pt. 2

    International Nuclear Information System (INIS)

    Iorio, R.J. Jr.

    1981-01-01

    Using the Weinberg-van Winter equations we prove finiteness of the discrete spectrum of the N-body quantum mechanical Hamiltonian with pair potentials satisfying |V(x)| ≤ const (1 + x²)^(−ρ), ρ > 1, in the case where the threshold of the continuous spectrum is negative and determined exclusively by eigenvalues of two-cluster Hamiltonians. (orig.)

  3. Application of quasiexactly solvable potential method to the N-body ...

    Indian Academy of Sciences (India)

    Application of the quasiexactly solvable (QES) potential method to an N-particle quantum model interacting via an ... Now, if we choose the centre of mass R as the origin of the coordinates, ...

  4. Graphs and an exactly solvable N-body problem in one dimension

    Energy Technology Data Exchange (ETDEWEB)

    Barucchi, G [Turin Univ. (Italy). Ist. di Fisica Matematica

    1980-08-21

    The one-dimensional N-body classical problem with inversely quadratic pair potential is considered. A method of explicit construction, by means of graphs, of the constants of the motion is given. It is then shown how to obtain, by means of a computer, the position variables of the particles as numerical functions of time.
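
    The graph-based construction of the constants of motion is specific to the paper, but the underlying dynamical system is easy to integrate numerically. The sketch below (the coupling constant, unit masses and initial data are arbitrary assumptions, not values from the paper) integrates the one-dimensional N-body problem with the inversely quadratic pair potential V = Σ g/(x_i − x_j)² and prints the particle positions as numerical functions of time:

```python
import numpy as np
from scipy.integrate import solve_ivp

G_COUPLING = 1.0   # strength of the 1/(x_i - x_j)^2 pair potential (assumed)

def rhs(t, y, n):
    """Equations of motion for the 1D N-body inverse-square pair potential."""
    x, v = y[:n], y[n:]
    a = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = x[i] - x[j]
                # F_i = -dV/dx_i with V = sum_{i<j} g / (x_i - x_j)^2, masses = 1
                a[i] += 2.0 * G_COUPLING / dx**3
    return np.concatenate([v, a])

n = 3
x0 = np.array([-1.0, 0.1, 1.2])    # initial positions (arbitrary)
v0 = np.array([0.3, 0.0, -0.3])    # initial velocities (arbitrary)
sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate([x0, v0]),
                args=(n,), rtol=1e-9, atol=1e-9)
print(sol.y[:n, -1])               # particle positions at t = 5
```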

  5. Algebraic internal wave solitons and the integrable Calogero-Moser-Sutherland N-body problem

    International Nuclear Information System (INIS)

    Chen, H.H.; Lee, Y.C.; Pereira, N.R.

    1979-01-01

    The Benjamin-Ono equation that describes nonlinear internal waves in a stratified fluid is solved by a pole expansion method. The dynamics of poles which characterize solitons is shown to be identical to the well-known integrable N-body problem of Calogero, Moser, and Sutherland.

  6. Mesoscale spiral vortex embedded within a Lake Michigan snow squall band - High resolution satellite observations and numerical model simulations

    Science.gov (United States)

    Lyons, Walter A.; Keen, Cecil S.; Hjelmfelt, Mark; Pease, Steven R.

    1988-01-01

    It is known that Great Lakes snow squall convection occurs in a variety of different modes depending on various factors such as air-water temperature contrast, boundary-layer wind shear, and geostrophic wind direction. An exceptional and often neglected source of data for mesoscale cloud studies is the ultrahigh resolution multispectral data produced by Landsat satellites. On October 19, 1972, a clearly defined spiral vortex was noted in a Landsat-1 image near the southern end of Lake Michigan during an exceptionally early cold air outbreak over a still very warm lake. In a numerical simulation using a three-dimensional Eulerian hydrostatic primitive equation mesoscale model with an initially uniform wind field, a definite analog to the observed vortex was generated. This suggests that intense surface heating can be a principal cause in the development of a low-level mesoscale vortex.

  7. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher dimensional flood modeling (2D+), as well as correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Similar to photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as Ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed in data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance. The ideal application of this high-resolution topography

  8. High-resolution simulations of the final assembly of Earth-like planets. 2. Water delivery and planetary habitability.

    Science.gov (United States)

    Raymond, Sean N; Quinn, Thomas; Lunine, Jonathan I

    2007-02-01

    The water content and habitability of terrestrial planets are determined during their final assembly, from perhaps 100 1,000-km "planetary embryos" and a swarm of billions of 1-10-km "planetesimals." During this process, we assume that water-rich material is accreted by terrestrial planets via impacts of water-rich bodies that originate in the outer asteroid region. We present analysis of water delivery and planetary habitability in five high-resolution simulations containing about 10 times more particles than in previous simulations. These simulations formed 15 terrestrial planets from 0.4 to 2.6 Earth masses, including five planets in the habitable zone. Every planet from each simulation accreted at least the Earth's current water budget; most accreted several times that amount (assuming no impact depletion). Each planet accreted at least five water-rich embryos and planetesimals from beyond 2.5 astronomical units; most accreted 10-20 water-rich bodies. We present a new model for water delivery to terrestrial planets in dynamically calm systems, with low-eccentricity or low-mass giant planets; such systems may be very common in the Galaxy. We suggest that water is accreted in comparable amounts from a few planetary embryos in a "hit or miss" way and from millions of planetesimals in a statistically robust process. Variations in water content are likely to be caused by fluctuations in the number of water-rich embryos accreted, as well as from systematic effects, such as planetary mass and location, and giant planet properties.

  9. Phase I and phase II reductive metabolism simulation of nitro aromatic xenobiotics with electrochemistry coupled with high resolution mass spectrometry.

    Science.gov (United States)

    Bussy, Ugo; Chung-Davidson, Yu-Wen; Li, Ke; Li, Weiming

    2014-11-01

    Electrochemistry combined with (liquid chromatography) high resolution mass spectrometry was used to simulate the general reductive metabolism of three biologically important nitro aromatic molecules: 3-trifluoromethyl-4-nitrophenol (TFM), niclosamide, and nilutamide. TFM is a pesticide used in the Laurentian Great Lakes while niclosamide and nilutamide are used in cancer therapy. At first, a flow-through electrochemical cell was directly connected to a high resolution mass spectrometer to evaluate the ability of electrochemistry to produce the main reduction metabolites of nitro aromatics, i.e. the nitroso, hydroxylamine, and amine functional groups. Electrochemical experiments were then carried out at a constant potential of -2.5 V before analysis of the reduction products by LC-HRMS, which confirmed the presence of the nitroso, hydroxylamine, and amine species as well as dimers. Dimer identification illustrates the reactivity of the nitroso species with amine and hydroxylamine species. To investigate xenobiotic metabolism, the reactivity of nitroso species to biomolecules was also examined. Binding of the nitroso metabolite to glutathione was demonstrated by the observation of adducts by LC-ESI(+)-HRMS and the characteristics of their MS/MS fragmentation. In conclusion, electrochemistry produces the main reductive metabolites of nitro aromatics and supports the observation of nitroso reactivity through dimer or glutathione adduct formation.

  10. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    Science.gov (United States)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

    Regional climate models (RCMs) are being used at increasingly fine resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacings approaching convection-permitting regimes are representative examples. These climate runs are computationally very demanding and do not always show improvements; the gains depend on the region, variable and object of study. The gain or loss associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to coarser-resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer-resolution simulations in comparison to their forcing data or coarser-resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between the PDFs at different resolutions, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons for precipitation. When the two RCM resolutions are directly compared, the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher-resolution improvement is generally greater than the low resolution for seasons
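
    The paper defines the DAV precisely; the sketch below only illustrates the flavour of a PDF matching score, assuming a Perkins-type histogram-overlap score and a relative difference between the high- and low-resolution runs (both assumptions, evaluated on synthetic data):

```python
import numpy as np

def pdf_score(model, obs, bins):
    """PDF overlap (Perkins-type) score: 1 = identical histograms, 0 = disjoint."""
    fm, _ = np.histogram(model, bins=bins)
    fo, _ = np.histogram(obs, bins=bins)
    fm = fm / fm.sum()
    fo = fo / fo.sum()
    return np.minimum(fm, fo).sum()

def dav(hires, lores, obs, bins):
    """Relative gain of the high-resolution run over the low-resolution run."""
    s_hr = pdf_score(hires, obs, bins)
    s_lr = pdf_score(lores, obs, bins)
    return (s_hr - s_lr) / s_lr

rng   = np.random.default_rng(0)
obs   = rng.gamma(2.0, 3.0, 5000)   # toy daily precipitation (mm)
hires = rng.gamma(2.0, 3.2, 5000)   # fine-grid-like run with a small bias
lores = rng.gamma(1.5, 4.5, 5000)   # coarser run with a larger distribution error
bins  = np.linspace(0, 60, 61)
print(f"DAV = {100 * dav(hires, lores, obs, bins):.1f} %")
```

    In this reading, a positive DAV means the finer run's distribution overlaps the observed one more than the coarser run's does.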

  11. Development of numerical simulation technology for high resolution thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.

    2012-04-01

    A realistic simulation of two-phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi-dimensional analysis of thermal hydraulics in nuclear reactor components is further increasing with advanced design features, such as a direct vessel injection system, a gravity-driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three-dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi-dimensional, two-phase flows in nuclear reactor components. The code was designed for use as a component-scale code, and/or as a three-dimensional component which can be coupled with a system code. This report presents an overview of the CUPID code development and preliminary assessment, mainly focusing on the numerical solution method and its verification and validation. It was shown that the CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach for the validation and improvement of the physical models is still needed.

  12. Flooding Simulation of Extreme Event on Barnegat Bay by High-Resolution Two Dimensional Hydrodynamic Model

    Science.gov (United States)

    Wang, Y.; Ramaswamy, V.; Saleh, F.

    2017-12-01

    Barnegat Bay is located on the east coast of New Jersey, United States, and is separated from the Atlantic Ocean by the narrow Barnegat Peninsula, which acts as a barrier island. The bay is fed by several rivers which empty through small estuaries along the inner shore. In terms of vulnerability to flooding, the Barnegat Peninsula is under the influence of both coastal storm surge and riverine flooding. Barnegat Bay was hit by Hurricane Sandy, causing flood damage with extensive cross-island flow at many streets perpendicular to the shoreline. The objective of this work is to identify and quantify the sources of flooding using a two-dimensional inland hydrodynamic model. The hydrodynamic model was forced by three observed coastal boundary conditions and one hydrologic boundary condition from the United States Geological Survey (USGS). The model reliability was evaluated with both the FEMA spatial flooding extent and USGS high water marks. The simulated flooding extent showed good agreement with the reanalysis spatial inundation extents. Results offered important perspectives on the flow of water into the bay and on the velocity and depth of the inundated areas. Such information can enable emergency managers and decision makers to plan evacuations and deploy flood defenses.

  13. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Directory of Open Access Journals (Sweden)

    D. Heinzeller

    2018-04-01

    Full Text Available Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ), with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980–2010 and the two future periods 2020–2050 and 2070–2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent to the north and almost no change in

  14. The Effect of Model Grid Resolution on the Distributed Hydrologic Simulations for Forecasting Stream Flows and Reservoir Storage

    Science.gov (United States)

    Turnbull, S. J.

    2017-12-01

    Within the US Army Corps of Engineers (USACE), reservoirs are typically operated according to a rule curve that specifies target water levels based on the time of year. The rule curve is intended to maximize flood protection by specifying releases of water before the dominant rainfall period for a region. While some operating allowances are permissible, generally the rule curve elevations must be maintained. While this operational approach provides for the required flood control purpose, it may not result in optimal reservoir operations for multi-use impoundments. In the Russian River Valley of California, a multi-agency research effort called Forecast-Informed Reservoir Operations (FIRO) is assessing the application of forecast weather and streamflow predictions to potentially enhance the operation of reservoirs in the watershed. The focus of the study has been on Lake Mendocino, a USACE project important for flood control, water supply, power generation and ecological flows. As part of this effort the Engineer Research and Development Center is assessing the ability to utilize the physics-based, distributed Gridded Surface Subsurface Hydrologic Analysis (GSSHA) watershed model to simulate stream flows, reservoir stages, and discharges while being driven by weather forecast products. A key question in this application is the effect of watershed model resolution on forecasted stream flows. To help resolve this question, GSSHA models at multiple grid resolutions (30, 50, and 270 m) were developed for the upper Russian River, which includes Lake Mendocino. The models were derived from common inputs: DEM, soils, land use, stream network, reservoir characteristics, and specified inflows and discharges. All the models were calibrated in both event and continuous simulation mode using measured precipitation gages and then driven with the West-WRF atmospheric model in prediction mode to assess the ability of the model to function in the short term, less than one week

  15. Impact of irrigations on simulated convective activity over Central Greece: A high resolution study

    Science.gov (United States)

    Kotsopoulos, S.; Tegoulias, I.; Pytharoulis, I.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    The aim of this research is to investigate the impact of irrigation on the characteristics of convective activity simulated by the non-hydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW, version 3.5.1), under different upper air synoptic conditions in central Greece. To this end, 42 cases equally distributed under the six most frequent upper air synoptic conditions, which are associated with convective activity in the region of interest, were utilized considering two different soil moisture scenarios. In the first scenario, the model was initialized with the surface soil moisture of the ECMWF analysis data, which usually does not take into account the modification of soil moisture due to agricultural activity in the area of interest. In the second scenario, the soil moisture in the upper soil layers of the study area was modified to the field capacity for the irrigated cropland. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km respectively. The model results indicate a strong dependence of the convective spatiotemporal characteristics on the soil moisture difference between the two scenarios. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  16. Energetic frustrations in protein folding at residue resolution: a homologous simulation study of Im9 proteins.

    Directory of Open Access Journals (Sweden)

    Yunxiang Sun

    Full Text Available Energetic frustration is becoming an important topic for understanding the mechanisms of protein folding, a long-standing biological problem usually investigated within the free-energy landscape theory. Despite significant advances in probing the effects of folding frustrations on the overall features of protein folding pathways and folding intermediates, detailed characterizations of folding frustrations at an atomic or residue level are still lacking. In addition, how and to what extent folding frustrations interact with protein topology in determining folding mechanisms remains unclear. In this paper, we tried to understand energetic frustrations in the context of protein topology, or native-contact networks, by comparing the energetic frustrations of five homologous Im9 alpha-helix proteins that share very similar topologies but differ by a single hydrophilic-to-hydrophobic mutation. The folding simulations were performed using a coarse-grained Gō-like model, while non-native hydrophobic interactions were introduced as energetic frustrations using a Lennard-Jones potential function. Energetic frustrations were then examined at residue level based on φ-value analyses of the transition-state ensemble structures and mapped back to the native-contact networks. Our calculations show that energetic frustrations have highly heterogeneous influences on the folding of the four helices of the examined structures, depending on the local environment of the frustration centers. Also, the closer the introduced frustration is to the center of the native-contact network, the larger the changes in protein folding. Our findings add a new dimension to the understanding of protein folding beyond topology determination, in that energetic frustrations work closely with native-contact networks to affect protein folding.
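
    For reference, a generic 12-6 Lennard-Jones pair term of the kind used to introduce non-native "frustration" contacts looks as follows (the ε and σ values and the units are placeholders, not the parameters of the study):

```python
import numpy as np

def lj_12_6(r, epsilon=0.2, sigma=5.0):
    """Generic 12-6 Lennard-Jones pair energy; epsilon sets the well depth and
    sigma the contact distance (placeholder values, arbitrary units)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

r = np.linspace(4.0, 12.0, 5)
print(np.round(lj_12_6(r), 4))   # attractive well near r ~ 1.12 * sigma
```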

  17. Simulation of the Demand Side Management impacts: resolution enhancement of the input parameters at the local scale

    International Nuclear Information System (INIS)

    Imbert, P.

    2011-01-01

    Following the integrated energy planning paradigm of the 1990s and the recent renewal of interest in decentralized energy planning, Demand Side Management (DSM) actions are expected to take a significant role in energy planning activities in the future. Indeed, DSM actions represent a relevant option for achieving environmental and energy commitments or for alleviating some specific problems of electricity supply. DSM actions at the local scale are already being observed today, at least in the French context. There is a need for appropriate methods and tools to assess the impacts of such DSM programs at the local level. The local scale requires taking into account the specificities of the territories (physical, social, geographical, economic, institutional, etc.). The objective of this thesis is to improve the spatial resolution of the input variables used in DSM simulation tools. Based on a case study in France (the PREMIO project: a smart architecture for load management applied to a district) and an existing simulation tool, we study how the impacts of this local experiment can be transferred to several municipalities. (author)

  18. ROLE OF MAGNETIC FIELD STRENGTH AND NUMERICAL RESOLUTION IN SIMULATIONS OF THE HEAT-FLUX-DRIVEN BUOYANCY INSTABILITY

    International Nuclear Information System (INIS)

    Avara, Mark J.; Reynolds, Christopher S.; Bogdanović, Tamara

    2013-01-01

    The role played by magnetic fields in the intracluster medium (ICM) of galaxy clusters is complex. The weakly collisional nature of the ICM leads to thermal conduction that is channeled along field lines. This anisotropic heat conduction profoundly changes the stability properties of the ICM atmosphere, with convective instabilities being driven by temperature gradients of either sign. Here, we employ the Athena magnetohydrodynamic code to investigate the local non-linear behavior of the heat-flux-driven buoyancy instability (HBI) relevant in the cores of cooling-core clusters where the temperature increases with radius. We study a grid of two-dimensional simulations that span a large range of initial magnetic field strengths and numerical resolutions. For very weak initial fields, we recover the previously known result that the HBI wraps the field in the horizontal direction, thereby shutting off the heat flux. However, we find that simulations that begin with intermediate initial field strengths have a qualitatively different behavior, forming HBI-stable filaments that resist field-line wrapping and enable sustained vertical conductive heat flux at a level of 10%-25% of the Spitzer value. While astrophysical conclusions regarding the role of conduction in cooling cores require detailed global models, our local study proves that systems dominated by the HBI do not necessarily quench the conductive heat flux.

  19. Implementation of the n-body Monte-Carlo event generator into the Geant4 toolkit for photonuclear studies

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Wen, E-mail: wenluo-ok@163.com [School of Nuclear Science and Technology, University of South China, Hengyang 421001 (China); Lan, Hao-yang [School of Nuclear Science and Technology, University of South China, Hengyang 421001 (China); Xu, Yi; Balabanski, Dimiter L. [Extreme Light Infrastructure-Nuclear Physics, “Horia Hulubei” National Institute for Physics and Nuclear Engineering (IFIN-HH), 30 Reactorului, 077125 Bucharest-Magurele (Romania)

    2017-03-21

    A data-based Monte Carlo simulation algorithm, Geant4-GENBOD, was developed by coupling the n-body Monte Carlo event generator GENBOD to the Geant4 toolkit, aiming at accurate simulations of specific photonuclear reactions for diverse photonuclear physics studies. Geant4-GENBOD calculations were compared with reported measurements of photo-neutron production cross sections and yields, and with reported energy spectra of the ⁶Li(n,α)t reaction. Good agreement between the calculations and the experimental data was found, thereby validating the developed program. Furthermore, simulations of the ⁹²Mo(γ,p) reaction of astrophysics relevance and of the photo-neutron production of ⁹⁹Mo/⁹⁹ᵐTc and ²²⁵Ra/²²⁵Ac radioisotopes were investigated, demonstrating the applicability of this program. We conclude that Geant4-GENBOD is a reliable tool for the study of the emerging experimental programs at high-intensity γ-beam laboratories, such as the Extreme Light Infrastructure – Nuclear Physics facility and the High Intensity Gamma-Ray Source at Duke University.

  20. The Schroedinger-Poisson equations as the large-N limit of the Newtonian N-body system. Applications to the large scale dark matter dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Briscese, Fabio [Northumbria University, Department of Mathematics, Physics and Electrical Engineering, Newcastle upon Tyne (United Kingdom); Citta Universitaria, Istituto Nazionale di Alta Matematica Francesco Severi, Gruppo Nazionale di Fisica Matematica, Rome (Italy)

    2017-09-15

    In this paper it is argued how the dynamics of the classical Newtonian N-body system can be described in terms of the Schroedinger-Poisson equations in the large-N limit. This result is based on the stochastic quantization introduced by Nelson and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ℎ ∝ M^(5/3) G^(1/2) (N/⟨ρ⟩)^(1/6), where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schroedinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales. (orig.)
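
    Since the abstract gives only a proportionality, the following sketch evaluates the quoted scaling with the unknown prefactor set to 1 and with toy parameter values (both assumptions):

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def h_eff(M, N, rho_avg, prefactor=1.0):
    """Effective Planck 'constant' scaling from the Calogero conjecture,
    h_eff ∝ M^(5/3) G^(1/2) (N / <rho>)^(1/6); the prefactor is unknown
    and set to 1 here purely for illustration."""
    return prefactor * M**(5.0 / 3.0) * np.sqrt(G) * (N / rho_avg)**(1.0 / 6.0)

# Toy numbers: 1e10 'bodies' of 1e35 kg each at a mean density of 1e-21 kg/m^3.
print(f"{h_eff(M=1e35, N=1e10, rho_avg=1e-21):.3e}")
```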

  1. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order schemes' results match well with the Buckley-Leverett (BL) analytical solution without any spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the permeability fields from the proposed method, TBM and USRM revealed detailed subsurface instabilities and their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
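
    A minimal sketch of the permeability-field step is given below, assuming a lognormal field, the standard relation V_DP = 1 − exp(−σ_ln k), and a crude moving-average surrogate for the autocorrelation model; the porosity interpolation via the Carman-Kozeny equation and all parameter values are omitted or assumed, so this is an illustration rather than the authors' workflow:

```python
import numpy as np

def lognormal_perm_field(nx, ny, v_dp, k_median_md=100.0, corr_len=8, seed=0):
    """2D lognormal permeability field (mD) with a target Dykstra-Parsons
    coefficient; spatial correlation is imposed by a brute-force local mean
    of white noise (a simple stand-in for a proper autocorrelation model)."""
    rng = np.random.default_rng(seed)
    sigma = -np.log(1.0 - v_dp)          # from V_DP = 1 - exp(-sigma_ln(k))
    noise = rng.standard_normal((nx + corr_len, ny + corr_len))
    smooth = np.zeros((nx, ny))
    for i in range(nx):
        for j in range(ny):
            smooth[i, j] = noise[i:i + corr_len, j:j + corr_len].mean()
    smooth = (smooth - smooth.mean()) / smooth.std()   # unit-variance field
    ln_k = np.log(k_median_md) + sigma * smooth
    return np.exp(ln_k)

k = lognormal_perm_field(nx=50, ny=50, v_dp=0.7)
print(k.mean(), k.std())
```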

  2. Incorporation of Three-dimensional Radiative Transfer into a Very High Resolution Simulation of Horizontally Inhomogeneous Clouds

    Science.gov (United States)

    Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.

    2016-12-01

    A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in a very high resolution (on the order of 10 m in spatial grid) simulation of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds appears to cause local heating/cooling in the atmosphere at a fine spatial scale. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer requires far more computation than a plane-parallel approximation. This study incorporates a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because this scheme is advantageous for calculations over a sequence of time steps (the scene at one time step differs little from that at the previous one). This scheme is also appropriate for the calculation of radiation with strong absorption, such as in the infrared regions. For efficient computation, the scheme utilizes several techniques, e.g., a multigrid method for the iterative solution and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results elucidate that the horizontal divergences and convergences of the infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating in clouds. In future work, the 3D effects on radiative heating/cooling will be able to be

  3. The Microphysical Properties of Convective Precipitation Over the Tibetan Plateau by a Subkilometer Resolution Cloud-Resolving Simulation

    Science.gov (United States)

    Gao, Wenhua; Liu, Liping; Li, Jian; Lu, Chunsong

    2018-03-01

    The microphysical properties of convective precipitation over the Tibetan Plateau are unique because of the extremely high topography and special atmospheric conditions. In this study, ground-based cloud radar and disdrometer observations as well as high-resolution Weather Research and Forecasting simulations with the Chinese Academy of Meteorological Sciences microphysics and four other microphysical schemes are used to investigate the microphysics and precipitation mechanisms of a convection event on 24 July 2014. The Weather Research and Forecasting-Chinese Academy of Meteorological Sciences simulation reasonably reproduces the spatial distribution of 24-hr accumulated rainfall, yet the temporal evolution of rain rate has a delay of 1-3 hr. The model reflectivity shares common features with the cloud radar observations. The simulated raindrop size distributions show that more small and large raindrops are produced as the rain rate increases, suggesting that a variable shape parameter should be used in the size distribution. Results show that abundant supercooled water exists through condensation of water vapor above the freezing layer. The prevailing ice crystal microphysical processes are depositional growth and autoconversion of ice crystals to snow. The dominant source term of snow/graupel is riming of supercooled water. Sedimentation of graupel can play a vital role in the formation of precipitation, but melting of snow is rather small and quite different from that in other regions. Furthermore, water vapor budgets suggest that surface moisture flux is the principal source of water vapor and that self-circulation of moisture occurs at the beginning of convection, while total moisture flux convergence determines condensation and precipitation during the convective process over the Tibetan Plateau.

  4. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    Science.gov (United States)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of the collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires fine near-wall grid resolutions (in wall units, Δ+); LBM therefore requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
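
    For readers unfamiliar with the SRT terminology, a generic single-node D2Q9 BGK (single-relaxation-time) collision step looks like the following sketch; the lattice weights and equilibrium are the standard textbook ones, and the code is not taken from the study:

```python
import numpy as np

# Standard D2Q9 lattice: weights W and discrete velocities C.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, u):
    """Second-order equilibrium distribution for D2Q9 (c_s^2 = 1/3)."""
    cu = C @ u                       # projections of velocity on lattice directions
    usq = u @ u
    return rho * W * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def srt_collide(f, tau):
    """Single-relaxation-time (BGK) collision: relax f toward equilibrium."""
    rho = f.sum()
    u = (C.T @ f) / rho
    return f - (f - equilibrium(rho, u)) / tau

f0 = equilibrium(1.0, np.array([0.05, 0.0])) \
     + 1e-3 * np.random.default_rng(1).standard_normal(9)
print(np.round(srt_collide(f0, tau=0.8), 5))
```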

  5. Sensitivity to grid resolution in the ability of a chemical transport model to simulate observed oxidant chemistry under high-isoprene conditions

    Directory of Open Access Journals (Sweden)

    K. Yu

    2016-04-01

    Full Text Available Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2. We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25°  ×  0.3125°, 2°  ×  2.5°, 4°  ×  5° to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25°  ×  0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25°  ×  0.3125° resolution (54 % than at coarser resolution (59 %. The cumulative probability distribution functions (CDFs of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy changing little across model resolutions. Model concentrations in the

  6. Implementing O(N) N-Body Algorithms Efficiently in Data-Parallel Languages

    Directory of Open Access Journals (Sweden)

    Yu Hu

    1996-01-01

    Full Text Available The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10–20% of the total execution time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.

  7. Explicit treatment of N-body correlations within a density-matrix formalism

    International Nuclear Information System (INIS)

    Shun-Jin, W.; Cassing, W.

    1985-01-01

    The nuclear many-body problem is reformulated in the density-matrix approach such that n-body correlations are separated out from the reduced density matrix ρ_n. A set of equations for the time evolution of the n-body correlations c_n is derived which allows for physically transparent truncations with respect to the order of correlations. In the stationary limit (c_n = 0) a restriction to two-body correlations yields a generalized Bethe-Goldstone equation, while a restriction to three-body correlations yields generalized Faddeev equations in the density-matrix formulation. Furthermore it can be shown that any truncation of the set of equations (c_n = 0, n > m) is compatible with conservation laws, a quality which in general is not fulfilled if higher-order correlations are treated perturbatively.

  8. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    Science.gov (United States)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  9. Highly eccentric hip-hop solutions of the 2 N-body problem

    Science.gov (United States)

    Barrabés, Esther; Cors, Josep M.; Pinyol, Conxita; Soler, Jaume

    2010-02-01

    We show the existence of families of hip-hop solutions in the equal-mass 2 N-body problem which are close to highly eccentric planar elliptic homographic motions of 2 N bodies plus small perpendicular non-harmonic oscillations. By introducing a parameter ɛ, the homographic motion and the small amplitude oscillations can be uncoupled into a purely Keplerian homographic motion of fixed period and a vertical oscillation described by a Hill type equation. Small changes in the eccentricity induce large variations in the period of the perpendicular oscillation and give rise, via a Bolzano argument, to resonant periodic solutions of the uncoupled system in a rotating frame. For small ɛ≠0, the topological transversality persists and Brouwer’s fixed point theorem shows the existence of this kind of solutions in the full system.

  10. Hip-hop solutions of the 2N-body problem

    Science.gov (United States)

    Barrabés, Esther; Cors, Josep Maria; Pinyol, Conxita; Soler, Jaume

    2006-05-01

    Hip-hop solutions of the 2N-body problem with equal masses are shown to exist using an analytic continuation argument. These solutions are close to planar regular 2N-gon relative equilibria with small vertical oscillations. For fixed N, an infinity of these solutions are three-dimensional choreographies, with all the bodies moving along the same closed curve in the inertial frame.

  11. High-Resolution Biogeochemical Simulation Identifies Practical Opportunities for Bioenergy Landscape Intensification Across Diverse US Agricultural Regions

    Science.gov (United States)

    Field, J.; Adler, P. R.; Evans, S.; Paustian, K.; Marx, E.; Easter, M.

    2015-12-01

    The sustainability of biofuel expansion is strongly dependent on the environmental footprint of feedstock production, including both direct impacts within feedstock-producing areas and potential leakage effects due to disruption of existing food, feed, or fiber production. Assessing and minimizing these impacts requires novel methods compared to traditional supply chain lifecycle assessment. When properly validated and applied at appropriate spatial resolutions, biogeochemical process models are useful for simulating how the productivity and soil greenhouse gas fluxes of cultivating both conventional crops and advanced feedstock crops respond across gradients of land quality and management intensity. In this work we use the DayCent model to assess the biogeochemical impacts of agricultural residue collection, establishment of perennial grasses on marginal cropland or conservation easements, and intensification of existing cropping at high spatial resolution across several real-world case study landscapes in diverse US agricultural regions. We integrate the resulting estimates of productivity, soil carbon changes, and nitrous oxide emissions with crop production budgets and lifecycle inventories, and perform a basic optimization to generate landscape cost/GHG frontiers and determine the most practical opportunities for low-impact feedstock provisioning. The optimization is constrained to assess the minimum combined impacts of residue collection, land use change, and intensification of existing agriculture necessary for the landscape to supply a commercial-scale biorefinery while maintaining existing food, feed, and fiber production levels. These techniques can be used to assess how different feedstock provisioning strategies perform on both economic and environmental criteria, and the sensitivity of performance to environmental and land use factors. The included figure shows an example feedstock cost-GHG mitigation tradeoff frontier for a commercial-scale cellulosic

  12. The quantum n-body problem in dimension d ⩾ n – 1: ground state

    Science.gov (United States)

    Miller, Willard, Jr.; Turbiner, Alexander V.; Escobar-Ruiz, M. A.

    2018-05-01

    We employ generalized Euler coordinates for the n-body system in d-dimensional space, which consist of the centre-of-mass vector, relative (mutual) mass-independent distances r_ij and angles as remaining coordinates. We prove that the kinetic energy of the quantum n-body problem for d ⩾ n − 1 can be written as the sum of three terms: (i) the kinetic energy of the centre-of-mass, (ii) a second-order differential operator which depends on the relative distances alone and (iii) a differential operator which annihilates any angle-independent function. The operator (ii) has a large reflection symmetry group and, in suitable variables, is an algebraic operator which can be written in terms of generators of a hidden algebra. Thus, it makes sense as the Hamiltonian of a quantum Euler–Arnold top in a constant magnetic field. It is conjectured that, for any n, the similarity-transformed operator is the Laplace–Beltrami operator plus an (effective) potential; thus, it describes a quantum particle in curved space. This was verified for small n. After de-quantization the similarity-transformed operator becomes the Hamiltonian of a classical top with a variable tensor of inertia in an external potential. This approach allows a reduction of the dn-dimensional spectral problem to a lower-dimensional spectral problem if the eigenfunctions depend only on relative distances. We prove that the ground state function of the n-body problem depends on relative distances alone.

  13. A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS

    International Nuclear Information System (INIS)

    Bromley, Benjamin C.; Kenyon, Scott J.

    2011-01-01

    We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_☉ disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M_⊕.

  14. AN N-BODY INTEGRATOR FOR GRAVITATING PLANETARY RINGS, AND THE OUTER EDGE OF SATURN'S B RING

    International Nuclear Information System (INIS)

    Hahn, Joseph M.; Spitale, Joseph N.

    2013-01-01

    A new symplectic N-body integrator is introduced, one designed to calculate the global 360° evolution of a self-gravitating planetary ring that is in orbit about an oblate planet. This freely available code is called epi_int, and it is distinct from other such codes in its use of streamlines to calculate the effects of ring self-gravity. The great advantage of this approach is that the perturbing forces arise from smooth wires of ring matter rather than discrete particles, so there is very little gravitational scattering and only a modest number of particles are needed to simulate, say, the scalloped edge of a resonantly confined ring or the propagation of spiral density waves. The code is applied to the outer edge of Saturn's B ring, and a comparison of Cassini measurements of the ring's forced response to simulations of Mimas's resonant perturbations reveals that the B ring's surface density at its outer edge is σ₀ = 195 ± 60 g cm⁻², which, if the same everywhere across the ring, would mean that the B ring's mass is about 90% of Mimas's mass. Cassini observations show that the B ring edge has several free normal modes, which are long-lived disturbances of the ring edge that are not driven by any known satellite resonances. Although the mechanism that excites or sustains these normal modes is unknown, we can plant such a disturbance at a simulated ring's edge and find that these modes persist without any damping for more than ∼10⁵ orbits or ∼100 yr despite the simulated ring's viscosity ν_s = 100 cm² s⁻¹. These simulations also indicate that impulsive disturbances at a ring can excite long-lived normal modes, which suggests that an impact in the recent past by perhaps a cloud of cometary debris might have excited these disturbances, which are quite common to many of Saturn's sharp-edged rings

  15. Spatial resolution of gas hydrate and permeability changes from ERT data in LARS simulating the Mallik gas hydrate production test

    Science.gov (United States)

    Priegnitz, Mike; Thaler, Jan; Spangenberg, Erik; Schicks, Judith M.; Abendroth, Sven

    2014-05-01

    The German gas hydrate project SUGAR studies innovative methods and approaches to be applied in the production of methane from hydrate-bearing reservoirs. To enable laboratory studies in pilot scale, a large reservoir simulator (LARS) was realized allowing for the formation and dissociation of gas hydrates under simulated in-situ conditions. LARS is equipped with a series of sensors. This includes a cylindrical electrical resistance tomography (ERT) array composed of 25 electrode rings featuring 15 electrodes each. The high-resolution ERT array is used to monitor the spatial distribution of the electrical resistivity during hydrate formation and dissociation experiments over time. As the present phases of poorly conducting sediment, well conducting pore fluid, non-conducting hydrates, and isolating free gas cover a wide range of electrical properties, ERT measurements enable us to monitor the spatial distribution of these phases during the experiments. In order to investigate the hydrate dissociation and the resulting fluid flow, we simulated a hydrate production test in LARS that was based on the Mallik gas hydrate production test (see abstract Heeschen et al., this volume). At first, a hydrate phase was produced from methane saturated saline water. During the two months of gas hydrate production we measured the electrical properties within the sediment sample every four hours. These data were used to establish a routine estimating both the local degrees of hydrate saturation and the resulting local permeabilities in the sediment's pore space from the measured resistivity data. The final gas hydrate saturation filled 89.5% of the total pore space. During hydrate dissociation, ERT data do not allow for a quantitative determination of free gas and remaining gas hydrates since both phases are electrically isolating. However, changes are resolved in the spatial distribution of the conducting liquid and the isolating phase with gas being the only mobile isolating phase
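
    The abstract does not give the petrophysical relations used; purely as an illustration, an Archie-type resistivity model and a simple pore-filling permeability-reduction law could be used to convert ERT resistivities into hydrate saturation and relative permeability. The exponents, baseline resistivity and the choice of relations below are assumptions, not the routine developed in LARS:

```python
import numpy as np

def hydrate_saturation(rho_t, rho_0, n=1.9):
    """Archie-type estimate (assumed, not the authors' calibrated relation):
    rho_t / rho_0 = S_w^(-n)  =>  S_h = 1 - (rho_0 / rho_t)^(1/n)."""
    s_w = (rho_0 / rho_t) ** (1.0 / n)
    return np.clip(1.0 - s_w, 0.0, 1.0)

def permeability_reduction(s_h, exponent=3.0):
    """Simple pore-filling model k/k0 = (1 - S_h)^N (illustrative assumption)."""
    return (1.0 - s_h) ** exponent

rho_0 = 2.0                                   # ohm*m, brine-saturated baseline (assumed)
rho_t = np.array([2.0, 5.0, 20.0, 80.0])      # ohm*m, bulk values during hydrate growth
s_h = hydrate_saturation(rho_t, rho_0)
print(np.round(s_h, 2), np.round(permeability_reduction(s_h), 3))
```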

  16. Vegetation and Carbon Cycle Dynamics in the High-Resolution Transient Holocene Simulations Using the MPI Earth System Model

    Science.gov (United States)

    Brovkin, V.; Lorenz, S.; Raddatz, T.; Claussen, M.; Dallmeyer, A.

    2017-12-01

    -scale variability helps to quantify the vegetation and land carbon feedbacks during the past periods when the temporal resolution of the ice-core CO2 record is not sufficient to capture fast CO2 variations. From a set of Holocene simulations with prescribed or interactive atmospheric CO2, we get estimates of climate-carbon feedback useful for future climate studies.

  17. Thermal regulation for APDs in a 1 mm³ resolution clinical PET camera: design, simulation and experimental verification

    International Nuclear Information System (INIS)

    Zhai, Jinjian; Vandenbroucke, Arne; Levin, Craig S

    2014-01-01

    We are developing a 1 mm³ resolution positron emission tomography camera dedicated to breast imaging. The camera collects high energy photons emitted from radioactively labeled agents introduced in the patients in order to detect molecular signatures of breast cancer. The camera comprises many layers of lutetium yttrium oxyorthosilicate (LYSO) scintillation crystals coupled to position sensitive avalanche photodiodes (PSAPDs). The main objectives of the studies presented in this paper are to investigate the temperature profile of the layers of LYSO–PSAPD detectors (a.k.a. ‘fins’) residing in the camera and to use these results to present the design of the thermal regulation system for the front end of the camera. The study was performed using both experimental methods and simulation. We investigated a design with a heat-dissipating fin. Three fin configurations are tested: fin with Al windows (FwW), fin without Al windows (FwoW) and fin with alumina windows (FwAW). A Fluent® simulation was conducted to study the experimentally inaccessible temperature of the PSAPDs. For the best configuration (FwW), the temperature difference from the center to a point near the edge is 1.0 K when 1.5 A current was applied to the Peltier elements. Those of FwoW and FwAW are 2.6 K and 1.7 K, respectively. We conclude that the design of a heat-dissipating fin configuration with ‘aluminum windows’ (FwW) that borders the scintillation crystal arrays of 16 adjacent detector modules has better heat dissipation capabilities than the design without ‘aluminum windows’ (FwoW) and the design with ‘alumina windows’ (FwAW), respectively. (paper)

  18. Development of local-scale high-resolution atmospheric dispersion model using large-eddy simulation. Part 3: turbulent flow and plume dispersion in building arrays

    Czech Academy of Sciences Publication Activity Database

    Nakayama, H.; Jurčáková, Klára; Nagai, H.

    2013-01-01

    Vol. 50, No. 5 (2013), pp. 503-519. ISSN 0022-3131. Institutional support: RVO:61388998. Keywords: local-scale high-resolution dispersion model * nuclear emergency response system * large-eddy simulation * spatially developing turbulent boundary layer flow. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 1.452, year: 2013

  19. Simulation the spatial resolution of an X-ray imager based on zinc oxide nanowires in anodic aluminium oxide membrane by using MCNP and OPTICS Codes

    Science.gov (United States)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for X-ray imaging. A bulk scintillation detector does not have good spatial resolution because of the spreading of light inside the scintillator. Nanowire scintillators, because of their waveguiding behavior, can prevent the spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) nanowire scintillator, with its simple construction by electrochemical deposition into the regular hexagonal structure of an anodic aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated by a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated by a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code for the passage of particles through matter (Geant4). The results verify that this nanowire scintillator structure has a spatial resolution of less than one micrometer.

  20. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    Science.gov (United States)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to combined effects of tides, surges and river discharges. Cork City, on Ireland's southwest coast, is used as the case study. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells, combined with a tailored adaptive interpolation technique, creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. The nested MSN_Flood model, through dynamic downscaling, facilitates significant improvements in accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.
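
    The nested-boundary treatment described above can be illustrated with a minimal example: ghost cells of the child grid are filled by interpolating the coarser parent solution onto the child boundary locations, so the boundary can flood and dry with the parent. The short Python sketch below assumes a 1-D setup and simple linear interpolation for illustration; it is not the MSN_Flood implementation.

      import numpy as np

      # Minimal sketch: fill nested-grid ghost cells from a coarser parent grid.
      # The 1-D setup and linear interpolation are illustrative assumptions only.
      def fill_ghost_cells(parent_eta, parent_dx, child_x_ghost):
          """Interpolate parent water levels (spacing parent_dx) onto the
          ghost-cell locations of a nested child grid."""
          parent_x = np.arange(parent_eta.size) * parent_dx
          return np.interp(child_x_ghost, parent_x, parent_eta)

      # Example: a 90 m parent feeding a 30 m child whose ghost cells sit at 15 m and 45 m.
      parent_eta = np.array([0.10, 0.12, 0.15, 0.11, 0.08])   # water levels (m)
      print(fill_ghost_cells(parent_eta, 90.0, np.array([15.0, 45.0])))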

  1. Immersed boundary methods for high-resolution simulation of atmospheric boundary-layer flow over complex terrain

    Science.gov (United States)

    Lundquist, Katherine Ann

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the
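
    As a rough illustration of the ghost-cell approach by which an immersed boundary method imposes a Dirichlet condition on a grid that cuts through the terrain, the following Python sketch extrapolates a ghost-node value through the prescribed surface value. It is a generic 1-D example under assumed names and a linear reconstruction; it is not the WRF-IBM code.

      # Generic 1-D sketch of a Dirichlet immersed boundary condition via a
      # ghost node. Names and the linear reconstruction are illustrative.
      def ghost_value_dirichlet(u_fluid, z_fluid, z_ghost, z_boundary, u_boundary):
          """Pick the ghost-node value so that the linear profile between the
          ghost node (inside the terrain) and the first fluid node passes
          through u_boundary at the immersed surface z_boundary."""
          slope = (u_fluid - u_boundary) / (z_fluid - z_boundary)
          return u_boundary + slope * (z_ghost - z_boundary)

      # No-slip example: u = 0 at a terrain surface located at z = 12.0 m.
      print(ghost_value_dirichlet(u_fluid=3.0, z_fluid=20.0, z_ghost=5.0,
                                  z_boundary=12.0, u_boundary=0.0))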

  2. Immersed Boundary Methods for High-Resolution Simulation of Atmospheric Boundary-Layer Flow Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, K A [Univ. of California, Berkeley, CA (United States)

    2010-05-12

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  3. High-resolution simulations of unstable cylindrical gravity currents undergoing wandering and splitting motions in a rotating system

    Science.gov (United States)

    Dai, Albert; Wu, Ching-Sen

    2018-02-01

    High-resolution simulations of unstable cylindrical gravity currents when wandering and splitting motions occur in a rotating system are reported. In this study, our attention is focused on the situation of unstable rotating cylindrical gravity currents when the ratio of Coriolis to inertia forces is larger, namely, 0.5 ≤ C ≤ 2.0, in comparison to the stable ones when C ≤ 0.3 as investigated previously by the authors. The simulations reproduce the major features of the unstable rotating cylindrical gravity currents observed in the laboratory, i.e., vortex-wandering or vortex-splitting following the contraction-relaxation motion, and good agreement is found when compared with the experimental results on the outrush radius of the advancing front and on the number of bulges. Furthermore, the simulations provide energy budget information which could not be attained in the laboratory. After the heavy fluid is released, the heavy fluid collapses and a contraction-relaxation motion is at work for approximately 2-3 revolutions of the system. During the contraction-relaxation motion of the heavy fluid, the unstable rotating cylindrical gravity currents behave similar to the stable ones. Towards the end of the contraction-relaxation motion, the dissipation rate in the system reaches a local minimum and a quasi-geostrophic equilibrium state is reached. After the quasi-geostrophic equilibrium state, vortex-wandering or vortex-splitting may occur depending on the ratio of Coriolis to inertia forces. The vortex-splitting process begins with non-axisymmetric bulges and, as the bulges grow, the kinetic energy increases at the expense of decreasing potential energy in the system. The completion of vortex-splitting is accompanied by a local maximum of dissipation rate and a local maximum of kinetic energy in the system. A striking feature of the unstable rotating cylindrical gravity currents is the persistent upwelling and downwelling motions, which are observed for both the

  4. High-Resolution Mesoscale Simulations of the 6-7 May 2000 Missouri Flash Flood: Impact of Model Initialization and Land Surface Treatment

    Science.gov (United States)

    Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.

    2004-01-01

    High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40-km grid spacing and 3-hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (MCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture best reproduces the intensity and location of observed rainfall.

  5. A dual communicator and dual grid-resolution algorithm for petascale simulations of turbulent mixing at high Schmidt number

    Science.gov (United States)

    Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.

    2017-10-01

    A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of N_v³ points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of N_θ³ points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192³ (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384³ (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096³ shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes
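
    The √Sc resolution requirement quoted above translates directly into the relative sizes of the two grids; a toy calculation (illustrative numbers only, not those of the production runs described in the abstract) is:

      import math

      # Toy estimate of the dual-grid sizes in high-Schmidt-number DNS:
      # the scalar grid must be finer than the velocity grid by roughly sqrt(Sc).
      def scalar_grid_points(n_velocity, schmidt):
          """Per-direction scalar grid size, N_theta ~ N_v * sqrt(Sc)."""
          return math.ceil(n_velocity * math.sqrt(schmidt))

      for sc in (1, 4, 16, 64):
          print(f"Sc = {sc:3d}: N_v = 2048, N_theta ~ {scalar_grid_points(2048, sc)}")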

  6. Simulation of synoptic and sub-synoptic phenomena over East Africa and Arabian Peninsula for current and future climate using a high resolution AGCM

    KAUST Repository

    Raj, Jerry

    2015-04-01

    Climate regimes of East Africa and Arabia are complex and poorly understood. East Africa has large-scale tropical controls like major convergence zones and air streams. The region is in the proximity of two monsoons, north-east and south-west, and the humid and thermally unstable Congo air stream. The domain comprises regions with one, two, and three rainfall maxima, and the rainfall pattern over this region has high spatial variability. To explore the synoptic and sub-synoptic phenomena that drive the climate of the region we conducted climate simulations using a high resolution Atmospheric General Circulation Model (AGCM), GFDL's High Resolution Atmospheric Model (HiRAM). Historic simulations (1975-2004) and future projections (2007-2050), with both RCP 4.5 and RCP 8.5 pathways, were performed according to the CORDEX standard. The sea surface temperature (SST) was prescribed from the 2°x2.5° latitude-longitude resolution GFDL Earth System Model runs of IPCC AR5, as bottom boundary condition over the ocean. Our simulations were conducted at a horizontal grid spacing of 25 km, which is adequate resolution for regional climate simulation. In comparison with regional models, global HiRAM has the advantage of accounting for two-way interaction between regional and global scale processes. Our initial results show that HiRAM simulations for the historic period reproduce well the regional climate in East Africa and the Arabian Peninsula with their complex interplay of regional and global processes. Our future projections indicate warming and increased precipitation over the Ethiopian highlands and the Greater Horn of Africa. We found significant regional differences between RCP 4.5 and RCP 8.5 projections; e.g., the west coast of the Arabian Peninsula shows anomalies of opposite signs in these two simulations.

  7. The quantum N-body problem in the mean-field and semiclassical regime.

    Science.gov (United States)

    Golse, François

    2018-04-28

    The present work discusses the mean-field limit for the quantum N-body problem in the semiclassical regime. More precisely, we establish a convergence rate for the mean-field limit which is uniform as the ratio of the Planck constant to the action of the typical single particle tends to zero. This convergence rate is formulated in terms of a quantum analogue of the quadratic Monge-Kantorovich or Wasserstein distance. This paper is an account of some recent collaboration with C. Mouhot, T. Paul and M. Pulvirenti. This article is part of the themed issue 'Hilbert's sixth problem'. © 2018 The Author(s).
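
    For reference, the classical quadratic Monge-Kantorovich (Wasserstein) distance whose quantum analogue the paper employs is defined, in standard form (quoted here only for orientation, not taken from the paper itself), by

      W_2(\mu,\nu)^2 \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int |x-y|^2 \, d\pi(x,y),

    where \Pi(\mu,\nu) denotes the set of couplings (transport plans) with marginals \mu and \nu.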

  8. Factorization properties and spurious solutions in N-body scattering theories

    International Nuclear Information System (INIS)

    Vanzani, V.

    1979-01-01

    The origin of spurious solutions in N-body scattering equations is discussed. It is shown that spurious solutions are expected because of specific factorization properties of the homogeneous equations. The equations proposed by Rosenberg, by Mitra, Gillespie, Sugar and Panchapakesan, by Takahashi and Mishima, by Alessandrini, by Sasakawa, by Sloan, Bencze and Redish, by Weinberg and van Winter and by Avishai are considered. It is explicitly shown that spurious multipliers arise from repeated employment of resolvent equations or, equivalently, from a generalized iteration procedure

  9. S-matrix formulation of thermodynamics with N-body scatterings

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Pok Man [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Extreme Matter Institute EMMI, GSI, Darmstadt (Germany)

    2017-08-15

    We apply a phase space expansion scheme to incorporate the N-body scattering processes in the S-matrix formulation of statistical mechanics. A generalized phase shift function suitable for studying the thermal contribution of N → N processes is motivated and examined in various models. Using the expansion scheme, we revisit how the hadron resonance gas model emerges from the S-matrix framework, and consider an example of structureless scattering in which the phase shift function can be exactly worked out. Finally we analyze the influence of dynamics on the phase shift function in a simple example of 3- and 4-body scattering. (orig.)
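
    For the two-body case, the S-matrix (Beth-Uhlenbeck) formulation expresses the interaction contribution to the thermodynamics through the energy derivative of the scattering phase shift; up to normalization factors the second virial coefficient takes the standard form below (quoted only for orientation, with the generalized phase-shift function of the paper playing the role of δ for N → N processes):

      b_2^{\rm int} \;\propto\; \sum_{B} e^{-\beta E_B} \;+\; \frac{1}{\pi} \int_{0}^{\infty} dE\, e^{-\beta E} \sum_{\ell} (2\ell+1)\, \frac{d\delta_\ell(E)}{dE}.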

  10. Impact of respiratory motion correction and spatial resolution on lesion detection in PET: a simulation study based on real MR dynamic data

    Science.gov (United States)

    Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.

    2014-02-01

    The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion to background ratio ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion compensated image reconstruction. The detectability performance was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer and the area under the ROC curve (AUC) was calculated as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future
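
    The AUC figure of merit can be computed directly from the observer's decision-variable scores on lesion-present and lesion-absent images via the Mann-Whitney relation; the Python sketch below is generic (the channelized Hotelling observer that produces the scores is omitted, and the data are synthetic), not the authors' code.

      import numpy as np

      # Generic sketch: AUC of an observer from its scores on signal-present vs
      # signal-absent images, using the Mann-Whitney pairwise-ranking relation.
      def auc_from_scores(scores_present, scores_absent):
          sp = np.asarray(scores_present)[:, None]
          sa = np.asarray(scores_absent)[None, :]
          # fraction of (present, absent) pairs ranked correctly; ties count 1/2
          return np.mean(sp > sa) + 0.5 * np.mean(sp == sa)

      rng = np.random.default_rng(0)
      present = rng.normal(1.0, 1.0, 200)   # hypothetical lesion-present scores
      absent = rng.normal(0.0, 1.0, 200)    # hypothetical lesion-absent scores
      print(f"AUC ~ {auc_from_scores(present, absent):.3f}")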

  11. Impact of respiratory motion correction and spatial resolution on lesion detection in PET: a simulation study based on real MR dynamic data

    International Nuclear Information System (INIS)

    Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P; Marsden, Paul K

    2014-01-01

    The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion to background ratio ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion compensated image reconstruction. The detectability performance was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer and the area under the ROC curve (AUC) was calculated as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future

  12. Vegetation and land carbon feedbacks in the high-resolution transient Holocene simulations using the MPI Earth system model

    Science.gov (United States)

    Brovkin, Victor; Lorenz, Stephan; Raddatz, Thomas

    2017-04-01

    Plants influence climate through changes in the land surface biophysics (albedo, transpiration) and concentrations of the atmospheric greenhouse gases. One of the interesting periods in which to investigate the climatic role of the terrestrial biosphere is the Holocene when, despite the relatively steady global climate, the atmospheric CO2 grew by about 20 ppm from 7 kyr BP to pre-industrial. We use a new setup of the Max Planck Institute Earth System Model MPI-ESM1 consisting of the latest version of the atmospheric model ECHAM6, including the land surface model JSBACH3 with carbon cycle and vegetation dynamics, coupled to the ocean circulation model MPI-OM, which includes the HAMOCC model of ocean biogeochemistry. The model has been run for several simulations over the Holocene period of the last 8000 years under the forcing data sets of orbital insolation, atmospheric greenhouse gases, volcanic aerosols, solar irradiance and stratospheric ozone, as well as land-use changes. In response to this forcing, the land carbon storage increased by about 60 PgC between 8 and 4 kyr BP, stayed relatively constant until 2 kyr BP, and decreased by about 90 PgC by 1850 AD due to land use changes. Vegetation and soil carbon changes significantly affected atmospheric CO2 during the periods of strong volcanic eruptions. In response to the eruption-caused cooling, the land initially stores more carbon as respiration decreases, but then it releases even more carbon due to the productivity decrease. This decadal-scale variability helps to quantify the vegetation and land carbon feedbacks during past periods when the temporal resolution of the ice-core CO2 record is not sufficient to capture fast CO2 variations. From a set of Holocene simulations with prescribed or interactive atmospheric CO2, we obtain estimates of the climate-carbon feedback useful for future climate studies. Members of the Hamburg Holocene Team: Jürgen Bader1, Sebastian Bathiany2, Victor Brovkin1, Martin Claussen1,3, Traute Cr

  13. Covariability of seasonal temperature and precipitation over the Iberian Peninsula in high-resolution regional climate simulations (1001-2099)

    Science.gov (United States)

    Fernández-Montes, S.; Gómez-Navarro, J. J.; Rodrigo, F. S.; García-Valero, J. A.; Montávez, J. P.

    2017-04-01

    Precipitation and surface temperature are interdependent variables, both as a response to atmospheric dynamics and due to intrinsic thermodynamic relationships and feedbacks between them. This study analyzes the covariability of seasonal temperature (T) and precipitation (P) across the Iberian Peninsula (IP) using regional climate paleosimulations for the period 1001-1990, driven by reconstructions of external forcings. Future climate (1990-2099) was simulated according to SRES scenarios A2 and B2. These simulations enable exploring, at high spatial resolution, robust and physically consistent relationships. In winter, positive P-T correlations dominate west-central IP (Pearson correlation coefficient ρ = + 0.43, for 1001-1990), due to prevalent cold-dry and warm-wet conditions, while this relationship weakens and become negative towards mountainous, northern and eastern regions. In autumn, negative correlations appear in similar regions as in winter, whereas for summer they extend also to the N/NW of the IP. In spring, the whole IP depicts significant negative correlations, strongest for eastern regions (ρ = - 0.51). This is due to prevalent frequency of warm-dry and cold-wet modes in these regions and seasons. At the temporal scale, regional correlation series between seasonal anomalies of temperature and precipitation (assessed in 31 years running windows in 1001-1990) show very large multidecadal variability. For winter and spring, periodicities of about 50-60 years arise. The frequency of warm-dry and cold-wet modes appears correlated with the North Atlantic Oscillation (NAO), explaining mainly co-variability changes in spring. For winter and some regions in autumn, maximum and minimum P-T correlations appear in periods with enhanced meridional or easterly circulation (low or high pressure anomalies in the Mediterranean and Europe). In spring and summer, the Atlantic Multidecadal Oscillation shows some fingerprint on the frequency of warm/cold modes. For
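
    The 31-year running-window correlations used above can be reproduced schematically as follows; the Python sketch uses synthetic anomaly series and a plain Pearson correlation, and is not the authors' analysis code.

      import numpy as np

      # Generic sketch of a 31-year running-window Pearson correlation between
      # seasonal temperature and precipitation anomalies (synthetic data only).
      def running_correlation(t_anom, p_anom, window=31):
          half = window // 2
          out = np.full(len(t_anom), np.nan)
          for i in range(half, len(t_anom) - half):
              t = t_anom[i - half:i + half + 1]
              p = p_anom[i - half:i + half + 1]
              out[i] = np.corrcoef(t, p)[0, 1]
          return out

      rng = np.random.default_rng(1)
      t_anom = rng.normal(size=990)                              # years 1001-1990
      p_anom = -0.4 * t_anom + rng.normal(scale=0.9, size=990)   # anti-correlated P
      rho = running_correlation(t_anom, p_anom)
      print(np.nanmin(rho), np.nanmax(rho))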

  14. Mediterranean Thermohaline Response to Large-Scale Winter Atmospheric Forcing in a High-Resolution Ocean Model Simulation

    Science.gov (United States)

    Cusinato, Eleonora; Zanchettin, Davide; Sannino, Gianmaria; Rubino, Angelo

    2018-04-01

    Large-scale circulation anomalies over the North Atlantic and Euro-Mediterranean regions described by dominant climate modes, such as the North Atlantic Oscillation (NAO), the East Atlantic pattern (EA), the East Atlantic/Western Russian (EAWR) and the Mediterranean Oscillation Index (MOI), significantly affect interannual-to-decadal climatic and hydroclimatic variability in the Euro-Mediterranean region. However, whereas previous studies assessed the impact of such climate modes on air-sea heat and freshwater fluxes in the Mediterranean Sea, the propagation of these atmospheric forcing signals from the surface toward the interior and the abyss of the Mediterranean Sea remains unexplored. Here, we use a high-resolution ocean model simulation covering the 1979-2013 period to investigate spatial patterns and time scales of the Mediterranean thermohaline response to winter forcing from NAO, EA, EAWR and MOI. We find that these modes significantly imprint on the thermohaline properties in key areas of the Mediterranean Sea through a variety of mechanisms. Typically, density anomalies induced by all modes remain confined in the upper 600 m depth and remain significant for up to 18-24 months. One of the clearest propagation signals refers to the EA in the Adriatic and northern Ionian seas: There, negative EA anomalies are associated to an extensive positive density response, with anomalies that sink to the bottom of the South Adriatic Pit within a 2-year time. Other strong responses are the thermally driven responses to the EA in the Gulf of Lions and to the EAWR in the Aegean Sea. MOI and EAWR forcing of thermohaline properties in the Eastern Mediterranean sub-basins seems to be determined by reinforcement processes linked to the persistency of these modes in multiannual anomalous states. Our study also suggests that NAO, EA, EAWR and MOI could critically interfere with internal, deep and abyssal ocean dynamics and variability in the Mediterranean Sea.

  15. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    International Nuclear Information System (INIS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-01-01

    The purpose of this study was to examine dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV)=8.4 cm³] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular
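
    The coverage metric VPTV95 quoted above (the percentage of the PTV receiving at least 95% of the prescribed dose) can be evaluated from a dose grid as in the following generic Python sketch; the arrays are illustrative, not the MBMC output.

      import numpy as np

      # Generic sketch of the V_PTV95 metric: percentage of PTV voxels receiving
      # at least 95% of the prescription dose. Illustrative data only.
      def v_ptv95(dose, ptv_mask, prescription_cgy):
          return 100.0 * np.mean(dose[ptv_mask] >= 0.95 * prescription_cgy)

      rng = np.random.default_rng(2)
      dose = rng.normal(1200.0, 40.0, size=(50, 50, 20))   # cGy, hypothetical grid
      ptv_mask = np.zeros(dose.shape, dtype=bool)
      ptv_mask[20:30, 20:30, 8:12] = True                  # hypothetical PTV voxels
      print(f"V_PTV95 = {v_ptv95(dose, ptv_mask, 1200.0):.1f}%")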

  16. Very high resolution regional climate simulations on the 4 km scale as a basis for carbon balance assessments in northeast European Russia

    Science.gov (United States)

    Stendel, Martin; Hesselbjerg Christensen, Jens; Adalgeirsdottir, Gudfinna; Rinke, Annette; Matthes, Heidrun; Marchenko, Sergej; Daanen, Ronald; Romanovsky, Vladimir

    2010-05-01

    Simulations with global circulation models (GCMs) clearly indicate that major climate changes in polar regions can be expected during the 21st century. Model studies have shown that the area of the Northern Hemisphere underlain by permafrost could be reduced substantially in a warmer climate. However, thawing of permafrost, in particular if it is ice-rich, is subject to a time lag due to the large latent heat of fusion. State-of-the-art GCMs are unable to adequately model these processes because (a) even the most advanced subsurface schemes rarely treat depths below 5 m explicitly, and (b) soil thawing and freezing processes cannot be dealt with directly due to the coarse resolution of present GCMs. Any attempt to model subsurface processes needs information about soil properties, vegetation and snow cover, which are hardly realistic on a typical GCM grid. Furthermore, simulated GCM precipitation is often underestimated and the proportion of rain and snow is incorrect. One possibility to overcome resolution-related problems is to use regional climate models (RCMs). Such an RCM, HIRHAM, has until now been the only one used for the entire circumpolar domain, and its most recent version, HIRHAM5, has also been used in the high resolution study described here. Instead of the traditional approach via a degree-day based frost index from observations or model data, we use the regional model to create boundary conditions for an advanced permafrost model. This approach offers the advantage that the permafrost model can be run on the grid of the regional model, i.e. in a considerably higher resolution than in previous approaches. We here present results from a new time-slice integration with an unprecedented horizontal resolution of only 4 km, covering northeast European Russia. This model simulation has served as basis for an assessment of the carbon balance for a region in northeast European Russia within the EU-funded Carbo-North project.
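
    The 'degree-day based frost index' mentioned above usually refers to a frost number of the Nelson-Outcalt type, computed from freezing and thawing degree-days; it is given here only as background (the study replaces it with an RCM-driven permafrost model):

      F \;=\; \frac{\sqrt{DDF}}{\sqrt{DDF} + \sqrt{DDT}},

    with permafrost considered likely where F exceeds roughly 0.5 (DDF and DDT are the annual freezing and thawing degree-day sums).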

  17. High-resolution numerical simulation of summer wind field comparing WRF boundary-layer parametrizations over complex Arctic topography: case study from central Spitsbergen

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Hošek, Jiří

    2017-01-01

    Roč. 26, č. 4 (2017), s. 391-408 ISSN 0941-2948 Institutional support: RVO:68378289 Keywords: surface wind field * model evaluation * topographic effect * circulation pattern * Svalbard Subject RIV: DG - Atmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 1.989, year: 2016 http://www.schweizerbart.de/papers/metz/detail/prepub/87659/High_resolution_numerical_simulation_of_summer_wind_field_comparing_WRF_boundary_layer_parametrizations_over_complex_Arctic_topography_case_study_from_central_Spitsbergen

  18. Last Glacial Maximum simulations over southern Africa using a variable-resolution global model: synoptic-scale verification

    CSIR Research Space (South Africa)

    Nkoana, R

    2015-09-01

    … developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia. An ensemble of LGM simulations was constructed through the downscaling of PMIP3 coupled model simulations over southern Africa. A multiple nudging...

  19. Parametrized post-Newtonian theory of reference frames, multipolar expansions and equations of motion in the N-body problem

    International Nuclear Information System (INIS)

    Kopeikin, Sergei; Vlasov, Igor

    2004-01-01

    Post-Newtonian relativistic theory of astronomical reference frames based on Einstein's general theory of relativity was adopted by the General Assembly of the International Astronomical Union in 2000. This theory is extended in the present paper by taking into account all relativistic effects caused by the presumable existence of a scalar field and parametrized by two parameters, β and γ, of the parametrized post-Newtonian (PPN) formalism. We use a general class of the scalar-tensor (Brans-Dicke type) theories of gravitation to work out PPN concepts of global and local reference frames for an astronomical N-body system. The global reference frame is a standard PPN coordinate system. A local reference frame is constructed in the vicinity of a weakly self-gravitating body (a sub-system of the bodies) that is a member of the astronomical N-body system. Such a local inertial frame is required for unambiguous derivation of the equations of motion of the body in the field of other members of the N-body system and for construction of adequate algorithms for data analysis of various gravitational experiments conducted in ground-based laboratories and/or on board spacecraft in the solar system. We assume that the bodies comprising the N-body system have weak gravitational fields and move slowly. At the same time we do not impose any specific limitations on the distribution of density, velocity and the equation of state of the body's matter. Scalar-tensor equations of the gravitational field are solved by making use of the post-Newtonian approximations so that the metric tensor and the scalar field are obtained as functions of the global and local coordinates. A correspondence between the local and global coordinate frames is found by making use of the asymptotic expansion matching technique. This technique allows us to find a class of post-Newtonian coordinate transformations between the frames as well as equations of translational motion of the origin of the local frame
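
    For orientation, the β and γ parameters enter the standard PPN metric of a weak-field, slow-motion system in the familiar form (standard expressions up to higher-order terms, not quoted from the paper; U is the Newtonian potential):

      g_{00} = -1 + \frac{2U}{c^2} - \frac{2\beta U^2}{c^4} + O(c^{-6}), \qquad
      g_{ij} = \left(1 + \frac{2\gamma U}{c^2}\right)\delta_{ij} + O(c^{-4}),

    so that general relativity corresponds to β = γ = 1.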

  20. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    Science.gov (United States)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
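
    A minimal sketch of a second-order logarithmic Hamiltonian leapfrog of the Mikkola-Tanikawa / Preto-Tremaine type is shown below, written for the Kepler (two-body) problem only; the variable names and the specific drift-kick-drift arrangement are illustrative and not taken from the paper.

      import numpy as np

      def logh_leapfrog_kepler(x, v, gm, h, nsteps):
          """Sketch of a logarithmic Hamiltonian (LogH) drift-kick-drift leapfrog
          for the Kepler problem: physical time steps dt = ds/(T + b) in the drift
          and dt = ds/U in the kick, where b = U - T is the (conserved) binding
          energy conjugate to time."""
          U = lambda x: gm / np.linalg.norm(x)    # (positive) potential
          T = lambda v: 0.5 * np.dot(v, v)        # kinetic energy per unit mass
          b = U(x) - T(v)
          t = 0.0
          for _ in range(nsteps):
              dt = 0.5 * h / (T(v) + b); x = x + v * dt; t += dt    # half drift
              r = np.linalg.norm(x)
              v = v + (-gm * x / r**3) * (h / U(x))                 # full kick
              dt = 0.5 * h / (T(v) + b); x = x + v * dt; t += dt    # half drift
          return x, v, t

      # Eccentric two-body orbit in units with GM = 1.
      x, v, t = logh_leapfrog_kepler(np.array([1.0, 0.0]), np.array([0.0, 0.5]),
                                     gm=1.0, h=0.01, nsteps=20000)
      print(x, v, t)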

  1. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    Science.gov (United States)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T_±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T_±^ab(A) to the full T_±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  2. Introduction to Hamiltonian dynamical systems and the N-body problem

    CERN Document Server

    Meyer, Kenneth R

    2017-01-01

    This third edition text provides expanded material on the restricted three body problem and celestial mechanics. With each chapter containing new content, readers are provided with new material on reduction, orbifolds, and the regularization of the Kepler problem, all of which are provided with applications. The previous editions grew out of graduate level courses in mathematics, engineering, and physics given at several different universities. The courses took students who had some background in differential equations and lead them through a systematic grounding in the theory of Hamiltonian mechanics from a dynamical systems point of view. This text provides a mathematical structure of celestial mechanics ideal for beginners, and will be useful to graduate students and researchers alike. Reviews of the second edition: "The primary subject here is the basic theory of Hamiltonian differential equations studied from the perspective of differential dynamical systems. The N-body problem is used as the primary exa...

  3. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B.

    2013-01-01

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for the approval of the domestic regulatory authority. Thus, highly detailed modeling efforts are demanded in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and the complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g. a surface source or reflective surfaces. In the framework of the existing research a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  4. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B. [RWTH Aachen Univ. (Germany). Inst. of Nuclear Fuel Cycle; Damm, G. [Research Center Juelich (Germany)

    2013-11-15

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for the approval of the domestic regulatory authority. Thus, highly detailed modeling efforts are demanded in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and the complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g. a surface source or reflective surfaces. In the framework of the existing research a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  5. AMM15: a new high-resolution NEMO configuration for operational simulation of the European north-west shelf

    Science.gov (United States)

    Graham, Jennifer A.; O'Dea, Enda; Holt, Jason; Polton, Jeff; Hewitt, Helene T.; Furner, Rachel; Guihou, Karen; Brereton, Ashley; Arnold, Alex; Wakelin, Sarah; Castillo Sanchez, Juan Manuel; Mayorga Adame, C. Gabriela

    2018-02-01

    This paper describes the next-generation ocean forecast model for the European north-west shelf, which will become the basis of operational forecasts in 2018. This new system will provide a step change in resolution and therefore our ability to represent small-scale processes. The new model has a resolution of 1.5 km compared with a grid spacing of 7 km in the current operational system. AMM15 (Atlantic Margin Model, 1.5 km) is introduced as a new regional configuration of NEMO v3.6. Here we describe the technical details behind this configuration, with modifications appropriate for the new high-resolution domain. Results from a 30-year non-assimilative run using the AMM15 domain demonstrate the ability of this model to represent the mean state and variability of the region. Overall, there is an improvement in the representation of the mean state across the region, suggesting similar improvements may be seen in the future operational system. However, the reduction in seasonal bias is greater off-shelf than on-shelf. In the North Sea, biases are largely unchanged. Since there has been no change to the vertical resolution or parameterization schemes, performance improvements are not expected in regions where stratification is dominated by vertical processes rather than advection. This highlights the fact that increased horizontal resolution will not lead to domain-wide improvements. Further work is needed to target bias reduction across the north-west shelf region.

  6. Impact of Automation Support on the Conflict Resolution Task in a Human-in-the-Loop Air Traffic Control Simulation

    Science.gov (United States)

    Mercer, Joey; Gomez, Ashley; Gabets, Cynthia; Bienert, Nancy; Edwards, Tamsyn; Martin, Lynne; Gujral, Vimmy; Homola, Jeffrey

    2016-01-01

    To determine the capabilities and limitations of human operators and automation in separation assurance roles, the second of three Human-in-the-Loop (HITL) part-task studies investigated air traffic controllers' ability to detect and resolve conflicts under varying task sets, traffic densities, and run lengths. Operations remained within a single sector, staffed by a single controller, and explored, among other things, the controllers' responsibility for conflict resolution with or without their involvement in the conflict detection task. Furthermore, these conditions were examined across two different traffic densities: 1x (current-day traffic) and a 20% increase above current-day traffic levels (1.2x). Analyses herein offer an examination of the conflict resolution strategies employed by controllers. In particular, data in the form of elapsed time between conflict detection and conflict resolution are used to assess if, and how, the controllers' involvement in the conflict detection task affected the way in which they resolved traffic conflicts.

  7. Improving the spatial resolution in CZT detectors using charge sharing effect and transient signal analysis: Simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Xiaoqing; Cheng, Zeng [Department of Electrical and Computer Engineering, McMaster University (Canada); Deen, M. Jamal, E-mail: jamal@mcmaster.ca [Department of Electrical and Computer Engineering, McMaster University (Canada); School of Biomedical Engineering, McMaster University (Canada); Peng, Hao, E-mail: penghao@mcmaster.ca [Department of Electrical and Computer Engineering, McMaster University (Canada); School of Biomedical Engineering, McMaster University (Canada); Department of Medical Physics, McMaster University, Ontario L8S 4K1, Hamilton (Canada)

    2016-02-01

    Cadmium Zinc Telluride (CZT) semiconductor detectors are capable of providing superior energy resolution and three-dimensional position information of gamma ray interactions in a large variety of fields, including nuclear physics, gamma-ray imaging and nuclear medicine. Some dedicated Positron Emission Tomography (PET) systems, for example, for breast cancer detection, require higher contrast recovery and more accurate event location compared with a whole-body PET system. The spatial resolution is currently limited by electrode pitch in CZT detectors. A straightforward approach to increase the spatial resolution is by decreasing the detector electrode pitch, but this leads to higher fabrication cost and a larger number of readout channels. In addition, inter-electrode charge spreading can negate any improvement in spatial resolution. In this work, we studied the feasibility of achieving sub-pitch spatial resolution in CZT detectors using two methods: charge sharing effect and transient signal analysis. We noted that their valid ranges of usage were complementary. The dependences of their corresponding valid ranges on electrode design, depth-of-interaction (DOI), voltage bias and signal triggering threshold were investigated. The implementation of these two methods in both pixelated and cross-strip configuration of CZT detectors were discussed. Our results show that the valid range of charge sharing effect increases as a function of DOI, but decreases with increasing gap width and bias voltage. For a CZT detector of 5 mm thickness, 100 µm gap and biased at 400 V, the valid range of charge sharing effect was found to be about 112.3 µm around the gap center. This result complements the valid range of the transient signal analysis within one electrode pitch. For a signal-to-noise ratio (SNR) of ~17 and preliminary measurements, the sub-pitch spatial resolution is expected to be ~30 µm and ~250 µm for the charge sharing and transient signal analysis methods
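
    The charge-sharing method infers a sub-pitch interaction coordinate from the ratio of the charges collected on the two electrodes adjacent to a gap; the Python sketch below uses a simple linear charge weighting, which is an illustrative assumption rather than the calibrated response of the detector studied in the paper.

      # Generic sketch of sub-pitch position estimation from charge sharing
      # between two neighbouring electrodes (linear weighting assumed).
      def subpitch_position(q_left, q_right, x_left, pitch):
          """Estimate the interaction coordinate between electrodes located at
          x_left and x_left + pitch from the shared charges q_left and q_right."""
          eta = q_right / (q_left + q_right)    # charge-sharing ratio in [0, 1]
          return x_left + eta * pitch

      # Event depositing 60% of the shared charge on the right electrode of a
      # 1000 um pitch pair whose left electrode sits at x = 0 um:
      print(subpitch_position(q_left=0.4, q_right=0.6, x_left=0.0, pitch=1000.0))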

  8. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    Science.gov (United States)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

    Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers wide and often covers areas that include combinations of land, ocean and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of scales of atmospheric CO2 variability and representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition allowing new insight into small scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement
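
    Sampling model output with instrument averaging kernels, as described above, amounts to forming a pressure-weighted column average of the simulated profile; in the standard column-retrieval convention (given here only as background, with pressure weights h_j, normalized averaging kernel a_j and retrieval prior profile x_a) this reads

      X_{\rm CO_2}^{\rm model} \;=\; \sum_j h_j \left[ x_{a,j} + a_j \left( x_j^{\rm model} - x_{a,j} \right) \right].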

  9. Investigating the Effects of Grid Resolution of WRF Model for Simulating the Atmosphere for use in the Study of Wake Turbulence

    Science.gov (United States)

    Prince, Alyssa; Trout, Joseph; di Mercurio, Alexis

    2017-01-01

    The Weather Research and Forecasting (WRF) Model is a nested-grid, mesoscale numerical weather prediction system maintained by the Developmental Testbed Center. The model simulates the atmosphere by integrating partial differential equations, which use the conservation of horizontal momentum, conservation of thermal energy, and conservation of mass along with the ideal gas law. This research investigated the possible use of WRF in investigating the effects of weather on wing tip wake turbulence. This poster shows the results of an investigation into the accuracy of WRF using different grid resolutions. Several atmospheric conditions were modeled using different grid resolutions. In general, the higher the grid resolution, the better the simulation, but the longer the model run time. This research was supported by Dr. Manuel A. Rios, Ph.D. (FAA) and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA'' (13-G-006).

  10. Overview of Proposal on High Resolution Climate Model Simulations of Recent Hurricane and Typhoon Activity: The Impact of SSTs and the Madden Julian Oscillation

    Science.gov (United States)

    Schubert, Siegfried; Kang, In-Sik; Reale, Oreste

    2009-01-01

    This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on inter-annual and longer time scales, the modulation of that activity by the Madden Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser resolution runs to help assess resolution dependence, and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.

  11. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    Science.gov (United States)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport, wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that reason, two horizontal resolutions were deployed and used in the model, a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and finally, the fourth uses the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition for 137Cs from most of the European countries. According to the results, the performance of the model in predicting the transport and deposition of the radioactive tracer was efficient and accurate, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to the Atlas), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39 layers run due to the increase of
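
    Wet and dry removal of 137Cs in such transport models is commonly parameterized with a scavenging coefficient and a dry-deposition velocity; the generic textbook forms (given for orientation, not the specific LMDzORINCA coefficients) are

      \left.\frac{\partial C}{\partial t}\right|_{\rm wet} = -\Lambda\, C, \qquad \Lambda = a\,P^{b}, \qquad F_{\rm dry} = v_d\, C_{\rm surface},

    where C is the air concentration, P the precipitation rate, Λ the scavenging coefficient and v_d the dry-deposition velocity.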

  12. Computer simulation on spatial resolution of X-ray bright-field imaging by dynamical diffraction theory for a Laue-case crystal analyzer

    International Nuclear Information System (INIS)

    Suzuki, Yoshifumi; Chikaura, Yoshinori; Ando, Masami

    2011-01-01

    Recently, dark-field imaging (DFI) and bright-field imaging (BFI) have been proposed and applied to visualize X-ray refraction effects produced in biomedical objects. In order to clarify the spatial resolution due to a crystal analyzer in Laue geometry, a program based on the Takagi-Taupin equations was modified and used to carry out simulations evaluating the spatial resolution of images entering a Laue angular analyzer (LAA). The calculation was done with a perfect plane wave for the diffracted wave-fields, corresponding to BFI, under the conditions of 35 keV and a 440 diffraction index for a 2100 μm thick LAA. As a result, the spatial resolution along the g-vector direction was approximately 37.5 μm. A 126 μm thick LAA showed a spatial resolution better than 3.1 μm under the conditions of 13.7 keV and a 220 diffraction index.

  13. SPMHD simulations of structure formation

    Science.gov (United States)

    Barnes, David J.; On, Alvina Y. L.; Wu, Kinwah; Kawata, Daisuke

    2018-05-01

    The intracluster medium of galaxy clusters is permeated by μG magnetic fields. Observations with current and future facilities have the potential to illuminate the role these magnetic fields play in the astrophysical processes of galaxy clusters. Understanding how the initial seed fields evolve into the magnetic fields observed in the intracluster medium requires magnetohydrodynamic simulations. We critically assess current smoothed particle magnetohydrodynamic (SPMHD) schemes, highlighting in particular the impact of a hyperbolic divergence cleaning scheme and an artificial resistivity switch on the magnetic field evolution in cosmological simulations of the formation of a galaxy cluster using the N-body/SPMHD code GCMHD++. The impact and performance of the cleaning scheme and of two different schemes for the artificial resistivity switch are demonstrated via idealized test cases and cosmological simulations. We demonstrate that the hyperbolic divergence cleaning scheme is effective at suppressing the growth of the numerical divergence error of the magnetic field and should be applied to any SPMHD simulation. Although artificial resistivity is important in the strong-field regime, it can suppress the growth of the magnetic field in the weak-field regime found in galaxy clusters. With sufficient resolution, simulations with divergence cleaning can reproduce observed magnetic fields. We conclude that the cleaning scheme alone is sufficient for galaxy cluster simulations, but our results indicate that the SPMHD scheme must be carefully chosen depending on the regime of the magnetic field.

  14. Evaluation of the spatial resolution and the dose in magnified breast simulation in function of collimation system

    International Nuclear Information System (INIS)

    Policarpo, Erica M.; Alves, Marcos P.S.; Murata, Camila H.; Oliveira, Cassio M.; Farias, Thiago M.B.; Daros, Kellen A.C.

    2017-01-01

    Mammography screening remains the best method for monitoring breast pathologies because of its ability to detect microcalcifications and the need for follow-up of asymptomatic patients. Mammography exams often require a magnification technique of an anatomical region of interest to supplement the examination. These exams demand attention because the proximity to the X-ray tube results in an increased dose to the patient's breast. The purpose of this study was to evaluate the spatial resolution and the kerma-area product in magnified mammography of thicker breasts as a function of the collimation system. Measurements were performed to evaluate high-contrast spatial resolution and to estimate the dose related to each exposure in magnified images. The spatial resolution was evaluated with the spatial resolution pattern model 18-251 by Fluke Biomedical® and polymethylmethacrylate (PMMA) plates. Two mammography units were tested, a Philips-VMI® model Graph Mammo AF and a Hologic® Lorad model MIV-113R. The air kerma for each exposure was measured with a Radcal® model 10X6-6M ionization chamber dedicated to mammography, and the kerma-area product was estimated. Preliminary results demonstrated that the kerma-area product for the Philips-VMI® equipment was significantly higher, about 3 times, than that estimated for the Hologic® Lorad, and that the resolution was reduced when the image was acquired without collimation. This can be explained by the fact that the Philips-VMI® equipment does not have a collimation system. Additionally, the Hologic® Lorad equipment presented better image quality compared to the Philips equipment. (author)
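
    The kerma-area product mentioned above is simple arithmetic: the measured air kerma multiplied by the irradiated field area. The sketch below is a minimal illustration under assumed, hypothetical field dimensions and kerma values; it is not the authors' measurement code.

```python
# Minimal sketch: kerma-area product (KAP) from an air-kerma reading and the
# irradiated field area. All numbers below are illustrative placeholders.

def kerma_area_product(air_kerma_mgy: float, field_width_cm: float,
                       field_height_cm: float) -> float:
    """Return KAP in mGy*cm^2 for a rectangular field."""
    area_cm2 = field_width_cm * field_height_cm
    return air_kerma_mgy * area_cm2

if __name__ == "__main__":
    # e.g. 5.2 mGy air kerma over a hypothetical 18 cm x 24 cm magnification field
    print(f"KAP = {kerma_area_product(5.2, 18.0, 24.0):.1f} mGy*cm^2")
```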

  15. Evaluation of the spatial resolution and the dose in magnified breast simulation in function of collimation system

    Energy Technology Data Exchange (ETDEWEB)

    Policarpo, Erica M.; Alves, Marcos P.S.; Murata, Camila H.; Oliveira, Cassio M.; Farias, Thiago M.B.; Daros, Kellen A.C., E-mail: erica.policarpo@bol.com.br [Universidade Federal de Sao Paulo (DDI/EPM/UNIFESP), Sao Paulo, SP (Brazil). Escola Paulista de Medicina. Departamento de Diagnostico por Imagem

    2017-11-01

    Mammography screening remains the best method for monitoring breast pathologies because of its ability to detect microcalcifications and the need for follow-up of asymptomatic patients. Mammography exams often require a magnification technique of an anatomical region of interest to supplement the examination. These exams demand attention because the proximity to the X-ray tube results in an increased dose to the patient's breast. The purpose of this study was to evaluate the spatial resolution and the kerma-area product in magnified mammography of thicker breasts as a function of the collimation system. Measurements were performed to evaluate high-contrast spatial resolution and to estimate the dose related to each exposure in magnified images. The spatial resolution was evaluated with the spatial resolution pattern model 18-251 by Fluke Biomedical® and polymethylmethacrylate (PMMA) plates. Two mammography units were tested, a Philips-VMI® model Graph Mammo AF and a Hologic® Lorad model MIV-113R. The air kerma for each exposure was measured with a Radcal® model 10X6-6M ionization chamber dedicated to mammography, and the kerma-area product was estimated. Preliminary results demonstrated that the kerma-area product for the Philips-VMI® equipment was significantly higher, about 3 times, than that estimated for the Hologic® Lorad, and that the resolution was reduced when the image was acquired without collimation. This can be explained by the fact that the Philips-VMI® equipment does not have a collimation system. Additionally, the Hologic® Lorad equipment presented better image quality compared to the Philips equipment. (author)

  16. Geometric characterization for the least Lagrangian action of n-body problems

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Shiqing

    2001-01-01

    [1]Manev, G., La gravitation et l'énergie au zéro, Comptes Rendus, 1924, 78: 259.[2]Diacu, F. N., Near-collision dynamics for particle systems with quasihomogeneous potentials, J. of Diff. Equ., 1996, 28: 58.[3]Ambrosetti, A., Coti Zelati, V., Periodic Solutions of Singular Lagrangian Systems, Basel: Birkhäuser, 1993.[4]Arnold, V., Kozlov, V., Neishtadt, A., Dynamical Systems (III): Mathematical Aspects of Classical and Celestial Mechanics, Berlin: Springer-Verlag, 1988.[5]Chenciner, A., Desolneux, N., Minima de l'intégrale d'action et équilibres relatifs de n corps, C. R. Acad. Sci. Paris, Série I, 1998, 326: 209.[6]Coti Zelati, V., The periodic solutions of n-body type problems, Ann. IHP Anal. non linéaire, 1990, 7: 477.[7]Euler, L., De motu rectilineo trium corporum se mutuo attrahentium, Novi Comm. Acad. Sci. Imp. Petropll, 1767: 45.[8]Gordon, W., A minimizing property of Keplerian orbits, Amer. J. Math., 1977, 99: 96.[9]Lagrange, J., Essai sur le problème des trois corps, 1772, Oeuvres, 1783, 3: 229.[10]Long, Y., Zhang, S. Q., Geometric characterization for variational minimization solutions of the 3-body problem, Chinese Science Bulletin, 1999, 44(8): 653.[11]Long, Y., Zhang, S. Q., Geometric characterization for variational minimization solutions of the 3-body problem with fixed energy, J. of Diff. Equ., 2000, 60: 422.[12]Meyer, K., Hall, G., Introduction to Hamiltonian Systems and the N-body Problem, Berlin: Springer-Verlag, 1992.[13]Serra, E., Terracini, S., Collisionless periodic solutions to some three-body problems, Arch. Rational Mech. Anal., 1992, 20: 305.[14]Siegel, C., Moser, J., Lectures on Celestial Mechanics, Berlin: Springer-Verlag, 1971.[15]Wintner, A., Analytical Foundations of Celestial Mechanics, Princeton: Princeton University Press, 1941.[16]Hardy, G., Littlewood, J., Pólya, G., Inequalities, 2nd ed., Cambridge: Cambridge University Press, 1952.

  17. Near-global climate simulation at 1 km resolution: establishing a performance baseline on 4888 GPUs with COSMO 5.0

    Directory of Open Access Journals (Sweden)

    O. Fuhrer

    2018-05-01

    The best hope for reducing long-standing global climate model biases is by increasing resolution to the kilometer scale. Here we present results from an ultrahigh-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs (graphics processing units). The dynamical core of the model has been completely rewritten using a domain-specific language (DSL) for performance portability across different hardware architectures. Physical parameterizations and diagnostics have been ported using compiler directives. To our knowledge this represents the first complete atmospheric model being run entirely on accelerators on this scale. At a grid spacing of 930 m (1.9 km), we achieve a simulation throughput of 0.043 (0.23) simulated years per day and an energy consumption of 596 MWh per simulated year. Furthermore, we propose a new memory usage efficiency (MUE) metric that considers how efficiently the memory bandwidth, the dominant bottleneck of climate codes, is being used.

  18. Near-global climate simulation at 1 km resolution: establishing a performance baseline on 4888 GPUs with COSMO 5.0

    Science.gov (United States)

    Fuhrer, Oliver; Chadha, Tarun; Hoefler, Torsten; Kwasniewski, Grzegorz; Lapillonne, Xavier; Leutwyler, David; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph; Schulthess, Thomas C.; Vogt, Hannes

    2018-05-01

    The best hope for reducing long-standing global climate model biases is by increasing resolution to the kilometer scale. Here we present results from an ultrahigh-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs (graphics processing units). The dynamical core of the model has been completely rewritten using a domain-specific language (DSL) for performance portability across different hardware architectures. Physical parameterizations and diagnostics have been ported using compiler directives. To our knowledge this represents the first complete atmospheric model being run entirely on accelerators on this scale. At a grid spacing of 930 m (1.9 km), we achieve a simulation throughput of 0.043 (0.23) simulated years per day and an energy consumption of 596 MWh per simulated year. Furthermore, we propose a new memory usage efficiency (MUE) metric that considers how efficiently the memory bandwidth - the dominant bottleneck of climate codes - is being used.

  19. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using High Resolution Lyon-Feddor-Mobarry (LFM) Code

    Science.gov (United States)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

    Test particle simulations of electron injection by the bursty bulk flows (BBFs) have been performed using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run at high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward-propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch-angle scattering of electrons resulting in violation of the conservation of the first adiabatic invariant has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of >10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) Global MHD Magnetospheric Simulation Code, J. Atmos. Solar-Terr. Phys., 66, Issue 15-16, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V. G., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.

  20. A simulation study of high-resolution x-ray computed tomography imaging using irregular sampling with a photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2013-01-01

    The purpose of this study was to improve the spatial resolution of x-ray computed tomography (CT) imaging with a photon-counting detector using an irregular sampling method. A geometric shift-model of the detector was proposed to produce the irregular sampling pattern and increase the number of samplings in the radial direction. A conventional micro-x-ray CT system and the novel system with the geometric shift-model of the detector were simulated using analytic and Monte Carlo simulations. The projections were reconstructed using filtered back-projection (FBP), algebraic reconstruction technique (ART), and total variation (TV) minimization algorithms, and the reconstructed images were compared in terms of normalized root-mean-square error (NRMSE), full-width at half-maximum (FWHM), and coefficient-of-variation (COV). The results showed that image quality improved in the novel system with the geometric shift-model of the detector, and that the NRMSE, FWHM, and COV were lower for the images reconstructed using the TV minimization technique in that system. The irregular sampling method produced by the geometric shift-model of the detector can improve the spatial resolution and reduce artifacts and noise in reconstructed images obtained from an x-ray CT system with a photon-counting detector. -- Highlights: • We proposed a novel sampling method based on a spiral pattern to improve the spatial resolution. • The novel sampling method increased the number of samplings in the radial direction. • The spatial resolution was improved by the novel sampling method
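
    The record above compares reconstructions using NRMSE and COV as figures of merit. The sketch below shows one common way to compute these two metrics; it is an assumption-laden illustration (normalization conventions for NRMSE vary), not the authors' code, and the synthetic arrays stand in for the FBP/ART/TV reconstructions.

```python
# Minimal sketch of two image-quality metrics named above: NRMSE against a
# reference image, and COV inside a uniform region of interest.
import numpy as np

def nrmse(recon: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((recon - reference) ** 2))
    return rmse / (reference.max() - reference.min())

def cov(roi: np.ndarray) -> float:
    """Coefficient of variation (a noise measure) inside a uniform ROI."""
    return roi.std() / roi.mean()

# Synthetic example standing in for a reconstruction of a smooth phantom:
reference = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
recon = reference + 0.02 * np.random.default_rng(0).standard_normal((64, 64))
print(f"NRMSE = {nrmse(recon, reference):.4f}, COV = {cov(recon[16:48, 40:60]):.4f}")
```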

  1. High-resolution simulation of link-level vehicle emissions and concentrations for air pollutants in a traffic-populated eastern Asian city

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-08-01

    Vehicle emissions containing air pollutants create substantial environmental impacts on air quality in many traffic-populated cities in eastern Asia. A high-resolution emission inventory is a useful tool compared with traditional tools (e.g. the registration data-based approach) to accurately evaluate real-world traffic dynamics and their environmental burden. In this study, Macau, one of the most populated cities in the world, is selected to demonstrate a high-resolution simulation of vehicular emissions and their contribution to air pollutant concentrations by coupling multiple models. First, traffic volumes by vehicle category on 47 typical roads were investigated during weekdays in 2010 and further applied in a network demand simulation with the TransCAD model to establish hourly profiles of link-level vehicle counts. Local vehicle driving speed and vehicle age distribution data were also collected in Macau. Second, based on a localized vehicle emission model (e.g. the emission factor model for the Beijing vehicle fleet – Macau, EMBEV–Macau), this study established a link-based vehicle emission inventory in Macau with high resolution meshed in a temporal and spatial framework. Furthermore, we employed the AERMOD (AMS/EPA Regulatory Model) model to map concentrations of CO and primary PM2.5 contributed by local vehicle emissions during weekdays in November 2010. This study has discerned the strong impact of traffic flow dynamics on the temporal and spatial patterns of vehicle emissions, such as a geographic discrepancy of spatial allocation up to 26 % between THC and PM2.5 emissions owing to spatially heterogeneous vehicle-use intensity between motorcycle and diesel fleets. We also identified that the estimated CO2 emissions from gasoline vehicles agreed well with the statistical fuel consumption in Macau. Therefore, this paper provides a case study and a solid framework for developing high-resolution environment assessment tools for other

  2. Microcanonical thermodynamics and statistical fragmentation of dissipative systems. The topological structure of the N-body phase space

    Science.gov (United States)

    Gross, D. H. E.

    1997-01-01

    configurations. It is shown that the three basic quantities which specify a first-order phase transition - transition temperature, latent heat, and interphase surface entropy - can be well determined for finite systems from the caloric equation of state T(E) in the coexistence region. Their values are, already for a lattice of only ~30 × 30 spins, close to those of the corresponding infinite system. The significance of the backbending of the caloric equation of state T(E) is clarified: it is the signal for a first-order phase transition in a finite isolated system. (II) Fragmentation is shown to be a specific and generic phase transition of finite systems. The caloric equation of state T(E) for hot nuclei is calculated. The phase transition towards fragmentation can be unambiguously identified by the anomalies in T(E). As microcanonical thermodynamics is a full N-body theory, it determines all many-body correlations as well. Consequently, various statistical multi-fragment correlations are investigated which give insight into the details of the equilibration mechanism. (III) Fragmentation of neutral and multiply charged atomic clusters is the next example of a realistic application of microcanonical thermodynamics. Our simulation method, microcanonical Metropolis Monte Carlo, combines the explicit microscopic treatment of the fragmentational degrees of freedom with an implicit treatment of the internal degrees of freedom of the fragments, described by the experimental bulk specific heat. This micro-macro approach allows us to study the fragmentation of larger fragments as well. Characteristic details of the fission of multiply charged metal clusters are explained by the different bulk properties. (IV) Finally, the fragmentation of strongly rotating nuclei is discussed as an example of a microcanonical ensemble under the action of a two-dimensional repulsive force.

  3. Simulating single-phase and two-phase non-Newtonian fluid flow of a digital rock scanned at high resolution

    Science.gov (United States)

    Tembely, Moussa; Alsumaiti, Ali M.; Jouini, Mohamed S.; Rahimov, Khurshed; Dolatabadi, Ali

    2017-11-01

    Most digital rock physics (DRP) simulations focus on Newtonian fluids and overlook the detailed description of rock-fluid interaction. A better understanding of multiphase non-Newtonian fluid flow at the pore scale is crucial for optimizing enhanced oil recovery (EOR). The Darcy-scale properties of reservoir rocks, such as the capillary pressure curves and the relative permeability, are controlled by the pore-scale behavior of the multiphase flow. In the present work, a volume of fluid (VOF) method coupled with an adaptive meshing technique is used to perform pore-scale simulations on 3D X-ray micro-tomography (CT) images of rock samples. The numerical model is based on the resolution of the Navier-Stokes equations along with a phase fraction equation incorporating a dynamic contact model. The single-phase flow simulations for the absolute permeability showed good agreement with the literature benchmark. Subsequently, the code is used to simulate a two-phase flow consisting of a polymer solution displaying a shear-thinning power-law viscosity. The simulations enable assessing the impact of the consistency factor (K) and the behavior index (n), along with the two contact angles (advancing and receding), on the relative permeability.
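
    The power-law rheology referred to above has a simple closed form: the apparent viscosity is μ = K·γ̇^(n−1), with n < 1 for shear thinning. The sketch below is a hedged illustration of that constitutive relation with a cutoff at low shear rate; the parameter values and cutoff bounds are placeholders, not values from the paper.

```python
# Minimal sketch of a shear-thinning power-law viscosity: mu = K * gamma_dot**(n - 1).
# K (consistency factor), n (behavior index) and the clipping bounds are illustrative.
import numpy as np

def power_law_viscosity(shear_rate, K=0.5, n=0.6, mu_min=1e-3, mu_max=10.0):
    """Apparent viscosity (Pa*s) of a power-law fluid, clipped to avoid the
    singularity at vanishing shear rate."""
    shear_rate = np.maximum(np.asarray(shear_rate, dtype=float), 1e-12)
    mu = K * shear_rate ** (n - 1.0)
    return np.clip(mu, mu_min, mu_max)

print(power_law_viscosity([0.1, 1.0, 10.0, 100.0]))  # viscosity drops as shear rate grows
```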

  4. Simulation of an extreme heavy rainfall event over Chennai, India using WRF: Sensitivity to grid resolution and boundary layer physics

    KAUST Repository

    Srinivas, C.V.

    2018-05-04

    In this study, the heavy precipitation event of 01 December 2015 over Chennai, located on the southeast coast of India, was simulated using the Weather Research and Forecasting (WRF) model. A series of simulations was conducted using explicit convection and varying the planetary boundary layer (PBL) parameterization schemes. The model results were compared with available surface, satellite and Doppler Weather Radar observations. Simulations indicate that strong, sustained moist convection associated with the development of a mesoscale upper-air cyclonic circulation during the passage of a synoptic-scale low-pressure trough caused heavy rainfall over Chennai and its surroundings. Results suggest that veering of wind with height, associated with strong wind shear in the 800–400 hPa layer, together with dry-air advection facilitated the development of instability and the initiation of convection. The 1-km domain using explicit convection improved the prediction of the rainfall intensity of about 450 mm and its distribution. The PBL physics strongly influenced the rainfall prediction by changing the location of the upper-air circulation, energy transport, moisture convergence and intensity of convection in the YSU, MYJ and MYNN schemes. All the simulations underestimated the first spell of the heavy rainfall. While the YSU and MYJ schemes grossly underestimated the rainfall and dislocated the area of maximum rainfall, the higher-order MYNN scheme simulated the rainfall pattern in better agreement with observations. The MYNN scheme showed less mixing and simulated a more humid boundary layer, higher convective available potential energy (CAPE) and stronger winds at mid-troposphere than the other schemes. The MYNN also realistically simulated the location of the upper-air cyclonic flow and various dynamic and thermodynamic features. Consequently, it simulated stronger moisture convergence and higher precipitation.

  5. Simulation of an extreme heavy rainfall event over Chennai, India using WRF: Sensitivity to grid resolution and boundary layer physics

    KAUST Repository

    Srinivas, C.V.; Yesubabu, V.; Hari Prasad, D.; Hari Prasad, K.B.R.R.; Greeshma, M.M.; Baskaran, R.; Venkatraman, B.

    2018-01-01

    In this study, the heavy precipitation event of 01 December 2015 over Chennai, located on the southeast coast of India, was simulated using the Weather Research and Forecasting (WRF) model. A series of simulations was conducted using explicit convection and varying the planetary boundary layer (PBL) parameterization schemes. The model results were compared with available surface, satellite and Doppler Weather Radar observations. Simulations indicate that strong, sustained moist convection associated with the development of a mesoscale upper-air cyclonic circulation during the passage of a synoptic-scale low-pressure trough caused heavy rainfall over Chennai and its surroundings. Results suggest that veering of wind with height, associated with strong wind shear in the 800–400 hPa layer, together with dry-air advection facilitated the development of instability and the initiation of convection. The 1-km domain using explicit convection improved the prediction of the rainfall intensity of about 450 mm and its distribution. The PBL physics strongly influenced the rainfall prediction by changing the location of the upper-air circulation, energy transport, moisture convergence and intensity of convection in the YSU, MYJ and MYNN schemes. All the simulations underestimated the first spell of the heavy rainfall. While the YSU and MYJ schemes grossly underestimated the rainfall and dislocated the area of maximum rainfall, the higher-order MYNN scheme simulated the rainfall pattern in better agreement with observations. The MYNN scheme showed less mixing and simulated a more humid boundary layer, higher convective available potential energy (CAPE) and stronger winds at mid-troposphere than the other schemes. The MYNN also realistically simulated the location of the upper-air cyclonic flow and various dynamic and thermodynamic features. Consequently, it simulated stronger moisture convergence and higher precipitation.

  6. A Last Glacial Maximum world-ocean simulation at eddy-permitting resolution - Part 1: Experimental design and basic evaluation

    Science.gov (United States)

    Ballarotta, M.; Brodeau, L.; Brandefelt, J.; Lundberg, P.; Döös, K.

    2013-01-01

    Most state-of-the-art climate models include a coarsely resolved oceanic component, which has difficulties in capturing detailed dynamics; therefore eddy-permitting/eddy-resolving simulations have been developed to reproduce the observed World Ocean. In this study, an eddy-permitting numerical experiment is conducted to simulate the global ocean state for a period of the Last Glacial Maximum (LGM, ~ 26 500 to 19 000 yr ago) and to investigate the improvements due to taking into account these higher spatial scales. The ocean general circulation model is forced by a 49-yr sample of LGM atmospheric fields constructed from a quasi-equilibrated climate-model simulation. The initial state and the bottom boundary condition conform to the Paleoclimate Modelling Intercomparison Project (PMIP) recommendations. Before evaluating the model efficiency in representing the paleo-proxy reconstruction of the surface state, the LGM experiment is, in this first part of the investigation, compared with a present-day eddy-permitting hindcast simulation as well as with the available PMIP results. It is shown that the LGM eddy-permitting simulation is consistent with the quasi-equilibrated climate-model simulation, but large discrepancies are found with the PMIP model analyses, probably due to the different equilibration states. The strongest meridional gradients of the sea-surface temperature are located near 40° N and S, owing to the particularly large North Atlantic and Southern Ocean sea-ice covers. These also modify the locations of the convection sites (where deep water forms), and most of the LGM Conveyor Belt circulation consequently takes place in a thinner layer than today. Despite some discrepancies with other LGM simulations, a glacial state is captured, and the eddy-permitting simulation undertaken here yielded a useful set of data for comparisons with paleo-proxy reconstructions.

  7. Comparison of HYSPLIT-4 model simulations of the ETEX data, using meteorological input data of differing spatial and temporal resolution

    International Nuclear Information System (INIS)

    Hess, G.D.; Mills, G.A.; Draxler, R.R.

    1997-01-01

    Model simulations of air concentrations during ETEX-1 using the HYSPLIT-4 (HYbrid Single-Particle Lagrangian Integrated Trajectories, version 4) code and analysed meteorological data fields provided by ECMWF and the Australian Bureau of Meteorology are presented here. The HYSPLIT-4 model is a complete system for computing everything from simple trajectories to complex dispersion and deposition simulations, using either puff or particle approaches. A mixed dispersion algorithm is employed in this study: puffs in the horizontal and particles in the vertical.

  8. N-body quantum scattering theory in two Hilbert spaces. VII. Real-energy limits

    International Nuclear Information System (INIS)

    Chandler, C.; Gibson, A.G.

    1994-01-01

    A study is made of the real-energy limits of approximate solutions of the Chandler-Gibson equations, as well as the real-energy limits of the approximate equations themselves. It is proved that (1) the approximate time-independent transition operator T_π(z) and an auxiliary operator M_π(z), when restricted to finite energy intervals, are trace class operators and have limits in trace norm for almost all values of the real energy; (2) the basic dynamical equation that determines the operator M_π(z), when restricted to the space of trace class operators, has a real-energy limit in trace norm for almost all values of the real energy; (3) the real-energy limit of M_π(z) is a solution of the real-energy limit equation; (4) the diagonal (on-shell) elements of the kernels of the real-energy limit of T_π(z) and of all solutions of the real-energy limit equation exactly equal the on-shell transition operator, implying that the real-energy limit equation uniquely determines the physical transition amplitude; and (5) a sequence of approximate on-shell transition operators converges strongly to the exact on-shell transition operator. These mathematically rigorous results are believed to be the most general of their type for nonrelativistic N-body quantum scattering theories.

  9. Communication Reducing Algorithms for Distributed Hierarchical N-Body Problems with Boundary Distributions

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2017-05-11

    Reduction of communication and efficient partitioning are key issues for achieving scalability in hierarchical N-Body algorithms like Fast Multipole Method (FMM). In the present work, we propose three independent strategies to improve partitioning and reduce communication. First, we show that the conventional wisdom of using space-filling curve partitioning may not work well for boundary integral problems, which constitute a significant portion of FMM’s application user base. We propose an alternative method that modifies orthogonal recursive bisection to relieve the cell-partition misalignment that has kept it from scaling previously. Secondly, we optimize the granularity of communication to find the optimal balance between a bulk-synchronous collective communication of the local essential tree and an RDMA per task per cell. Finally, we take the dynamic sparse data exchange proposed by Hoefler et al. [1] and extend it to a hierarchical sparse data exchange, which is demonstrated at scale to be faster than the MPI library’s MPI_Alltoallv that is commonly used.
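
    For orientation, the record above contrasts space-filling-curve partitioning with a modified orthogonal recursive bisection (ORB). The sketch below shows only plain, unmodified ORB, recursively splitting the bodies at the median of the longest coordinate axis; it is an assumption-based illustration, not the paper's cell-aligned variant.

```python
# Minimal sketch of plain orthogonal recursive bisection (ORB): recursively split
# a set of body positions at the median of the longest axis until n_parts groups
# (a power of two) remain. This is the baseline the paper modifies, not its method.
import numpy as np

def orb_partition(points: np.ndarray, n_parts: int) -> list[np.ndarray]:
    """Split an (N, 3) array of positions into n_parts groups."""
    if n_parts == 1:
        return [points]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))  # longest extent
    order = np.argsort(points[:, axis])                             # sort along that axis
    half = len(points) // 2                                         # median split
    left, right = points[order[:half]], points[order[half:]]
    return orb_partition(left, n_parts // 2) + orb_partition(right, n_parts // 2)

parts = orb_partition(np.random.default_rng(1).random((1000, 3)), 8)
print([len(p) for p in parts])  # roughly equal-sized partitions
```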

  10. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
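
    As a concrete reference for the scheme and time-step criterion discussed above, the sketch below shows the Hermite predictor step and the standard Aarseth time-step formula built from the acceleration and its time derivatives. It is a hedged illustration of these well-known formulas, not Aarseth's or Makino's code, and the derivative values in the usage example are made up.

```python
# Sketch: Hermite predictor and the standard Aarseth time-step criterion.
import numpy as np

def hermite_predict(x, v, a, jerk, dt):
    """Predict position (to dt^3) and velocity (to dt^2) from acceleration and jerk."""
    x_p = x + v * dt + a * dt**2 / 2.0 + jerk * dt**3 / 6.0
    v_p = v + a * dt + jerk * dt**2 / 2.0
    return x_p, v_p

def aarseth_timestep(a, a1, a2, a3, eta=0.02):
    """dt = sqrt(eta * (|a||a2| + |a1|^2) / (|a1||a3| + |a2|^2)), where a1, a2, a3
    are the first three time derivatives of the acceleration a; eta sets accuracy."""
    n = np.linalg.norm
    return np.sqrt(eta * (n(a) * n(a2) + n(a1)**2) / (n(a1) * n(a3) + n(a2)**2))

# Toy usage for a single particle with made-up force derivatives:
x, v = np.zeros(3), np.array([0.0, 1.0, 0.0])
a, a1 = np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
a2, a3 = np.array([0.01, 0.0, 0.0]), np.array([1e-3, 0.0, 0.0])
dt = aarseth_timestep(a, a1, a2, a3)
print(dt, hermite_predict(x, v, a, a1, dt))
```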

  11. A complete basis for a perturbation expansion of the general N-body problem

    International Nuclear Information System (INIS)

    Laing, W Blake; Kelle, David W; Dunn, Martin; Watson, Deborah K

    2009-01-01

    We discuss a basis set developed to calculate perturbation coefficients in an expansion of the general N-body problem. This basis has two advantages. First, the basis is complete order-by-order for the perturbation series. Second, the number of independent basis tensors spanning the space for a given order does not scale with N, the number of particles, despite the generality of the problem. At first order, the number of basis tensors is 25 for all N, i.e. the problem scales as N^0, although one would initially expect an N^6 scaling at first order. The perturbation series is expanded in inverse powers of the spatial dimension. This results in a maximally symmetric configuration at lowest order which has a point group isomorphic with the symmetric group, S_N. The resulting perturbation series is order-by-order invariant under the N! operations of the S_N point group, which is responsible for the slower-than-exponential growth of the basis. In this paper, we demonstrate the completeness of the basis and perform the first test of this formalism through first order by comparing to an exactly solvable fully interacting problem of N particles with a two-body harmonic interaction potential.

  12. Studies of Planet Formation using a Hybrid N-body + Planetesimal Code

    Science.gov (United States)

    Kenyon, Scott J.; Bromley, Benjamin C.; Salamon, Michael (Technical Monitor)

    2005-01-01

    The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: 1) icy planets - models for icy planet formation will demonstrate how the physical properties of debris disks, including the Kuiper Belt in our solar system, depend on initial conditions and input physics; and 2) terrestrial planets - calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment. During the past year, we made progress on each issue. Papers published in 2004 are summarized. Summaries of work to be completed during the first half of 2005 and work planned for the second half of 2005 are included.

  13. The program of a fast calorimeter simulation and some its application to investigate Higgs and Z0-boson effective mass resolution at LHC energies

    International Nuclear Information System (INIS)

    Bumazhnov, V.A.

    1994-01-01

    A fast program simulating the response of electromagnetic and hadronic calorimeters with projective geometry to a hard event produced at LHC energies has been written. The program takes into account the transverse size of a shower in a calorimeter and uses a lateral shower profile parametrization. It is shown that a realistic jet-finding algorithm gives the main contribution to the effective mass resolution of a Z boson decaying into hadron jets detected with electromagnetic and hadronic calorimeters. The dependence of the Higgs and Z0-boson mass and width on calorimeter granularity has been obtained. 19 refs., 15 figs., 3 tabs

  14. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  15. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  16. Simulating high spatial resolution high severity burned area in Sierra Nevada forests for California Spotted Owl habitat climate change risk assessment and management.

    Science.gov (United States)

    Keyser, A.; Westerling, A. L.; Jones, G.; Peery, M. Z.

    2017-12-01

    Sierra Nevada forests have experienced an increase in very large fires with significant areas of high burn severity, such as the Rim (2013) and King (2014) fires, that have impacted the habitat of endangered species such as the California spotted owl. In order to support land managers' forest planning and risk assessment activities, we used historical wildfire records from the Monitoring Trends in Burn Severity project and gridded hydroclimate and land surface characteristics data to develop statistical models to simulate the frequency, location and extent of high-severity burned area in Sierra Nevada forest wildfires as functions of climate and land surface characteristics. We define high severity here as BA90 area: the area comprising patches with ninety percent or more basal area killed within a larger fire. We developed a system of statistical models to characterize the probability of large fire occurrence, the probability of significant BA90 area present given a large fire, and the total extent of BA90 area in a fire on a 1/16 degree lat/lon grid over the Sierra Nevada. Repeated draws from binomial and generalized Pareto distributions using these probabilities generated a library of simulated histories of high-severity fire for a range of near-term (50 yr) future climate and fuels management scenarios. Fuels management scenarios were provided by USFS Region 5. Simulated BA90 area was then downscaled to 30 m resolution using a statistical model we developed with Random Forest techniques to estimate the probability of adjacent 30 m pixels burning with ninety percent basal kill as a function of fire size and vegetation and topographic features. The result is a library of simulated high-resolution maps of BA90 burned areas for a range of climate and fuels management scenarios, with which we estimated conditional probabilities of owl nesting sites being impacted by high-severity wildfire.
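
    The two-stage draw described above (binomial occurrence, then a generalized Pareto draw for BA90 extent) can be written down compactly. The sketch below is schematic only: the probabilities, shape and scale values are illustrative placeholders, not parameters fitted in the study.

```python
# Schematic sketch of the stochastic generator described above: per-cell large-fire
# occurrence from a binomial draw, then BA90 extent from a generalized Pareto
# distribution when a fire with significant high-severity area occurs.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)

def simulate_ba90(p_fire, p_ba90_given_fire, gpd_shape, gpd_scale, n_cells, n_years):
    """Return an (n_years, n_cells) array of simulated BA90 area (arbitrary units)."""
    fire = rng.binomial(1, p_fire, size=(n_years, n_cells))
    has_ba90 = fire * rng.binomial(1, p_ba90_given_fire, size=(n_years, n_cells))
    extent = genpareto.rvs(gpd_shape, scale=gpd_scale,
                           size=(n_years, n_cells), random_state=rng)
    return has_ba90 * extent

annual_totals = simulate_ba90(0.05, 0.6, 0.3, 50.0, n_cells=100, n_years=50).sum(axis=1)
print(annual_totals[:5])  # one simulated 50-year history of total BA90 area
```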

  17. White-light full-field OCT resolution improvement by image sensor colour balance adjustment: numerical simulation

    International Nuclear Information System (INIS)

    Kalyanov, A L; Lychagov, V V; Ryabukho, V P; Smirnov, I V

    2012-01-01

    The possibility of improving white-light full-field optical coherence tomography (OCT) resolution by image sensor colour balance tuning is shown numerically. We calculated the full-width at half-maximum (FWHM) of a coherence pulse registered by a silicon colour image sensor under various colour balance settings. The calculations were made for both a halogen lamp and white LED sources. The results show that the interference pulse width can be reduced by the proper choice of colour balance coefficients. The reduction is up to 18%, as compared with a colour image sensor with regular settings, and up to 20%, as compared with a monochrome sensor. (paper)

  18. GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening

    Science.gov (United States)

    Maureira-Fredes, Cristián; Amaro-Seoane, Pau

    2018-01-01

    A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct summation of N gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations, but these approaches tend to be computationally slow and cumbersome to work with. We present GRAVIDY, a new graphics processing unit (GPU) direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY is highly modular, allowing users to readily introduce new physics; it exploits available computational resources and will be maintained through regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release are aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
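
    The core operation such direct-summation codes evaluate is the softened pairwise acceleration sum. The sketch below is a plain O(N^2) CPU illustration of that sum in NumPy; it is not GRAVIDY's GPU implementation and omits the Hermite corrector, jerk evaluation and relativistic terms.

```python
# Minimal CPU sketch of the softened direct-summation acceleration:
# a_i = G * sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
import numpy as np

def accelerations(pos: np.ndarray, mass: np.ndarray, eps: float, G: float = 1.0):
    """pos: (N, 3) positions, mass: (N,) masses, eps: Plummer softening length."""
    dx = pos[None, :, :] - pos[:, None, :]          # r_j - r_i, shape (N, N, 3)
    r2 = (dx ** 2).sum(axis=-1) + eps ** 2          # softened squared separations
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # exclude self-interaction
    return G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

pos = np.random.default_rng(0).standard_normal((256, 3))
mass = np.full(256, 1.0 / 256)
print(accelerations(pos, mass, eps=0.01).shape)     # (256, 3)
```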

  19. Coupled atmosphere ocean climate model simulations in the Mediterranean region: effect of a high-resolution marine model on cyclones and precipitation

    Directory of Open Access Journals (Sweden)

    A. Sanna

    2013-06-01

    In this study we investigate the importance of an eddy-permitting Mediterranean Sea circulation model for the simulation of atmospheric cyclones and precipitation in a climate model. This is done by analyzing the results of two fully coupled GCM (general circulation model) simulations, differing only in the presence/absence of an interactive marine module at very high resolution (~1/16°) for the simulation of the 3-D circulation of the Mediterranean Sea. Cyclones are tracked by applying an objective Lagrangian algorithm to the MSLP (mean sea level pressure) field. On an annual basis, we find a statistically significant difference in vast cyclogenesis regions (northern Adriatic, Sirte Gulf, Aegean Sea and southern Turkey) and in lifetime, giving evidence of the effect of both land–sea contrast and surface heat flux intensity and spatial distribution on cyclone characteristics. Moreover, annual mean convective precipitation changes significantly between the two model climatologies as a consequence of differences in both air–sea interaction strength and frequency of cyclogenesis in the two analyzed simulations.

  20. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  1. Data for Figures and Tables in Journal Article Assessment of the Effects of Horizontal Grid Resolution on Long-Term Air Quality Trends using Coupled WRF-CMAQ Simulations, doi:10.1016/j.atmosenv.2016.02.036

    Science.gov (United States)

    The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trend analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. This dataset is associated with the following publication

  2. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)

    2014-06-15

    Purpose: Our purpose is to evaluate the absorbed dose administered in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into the GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals commonly used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
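
    The organ doses quoted above follow the usual MIRD-style bookkeeping: the dose to a target organ is the sum over source organs of cumulated activity times the corresponding S-factor, D(target) = Σ_source Ã(source)·S(target←source). The sketch below illustrates only that bookkeeping; the S-factor and cumulated-activity numbers are hypothetical placeholders, not values computed in this study.

```python
# Sketch of MIRD-style dose bookkeeping: D(target) = sum over sources of
# cumulated activity (MBq*s) times S(target <- source) in Gy/(MBq*s).
# All numeric values below are illustrative placeholders.
s_factors = {                                   # Gy/(MBq*s), hypothetical
    ("kidneys", "kidneys"): 9.3e-10,
    ("kidneys", "liver"):   1.1e-11,
}
cumulated_activity = {"kidneys": 4.0e7, "liver": 8.0e7}   # MBq*s, hypothetical

def organ_dose(target: str) -> float:
    """Absorbed dose (Gy) to a target organ from all tabulated source organs."""
    return sum(a_tilde * s_factors.get((target, source), 0.0)
               for source, a_tilde in cumulated_activity.items())

print(f"kidney dose = {organ_dose('kidneys') * 1e3:.2f} mGy")
```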

  3. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    International Nuclear Information System (INIS)

    Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-01-01

    Purpose: Our purpose is to evaluate the absorbed dose administered in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into the GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals commonly used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the

  4. Allocating emissions to 4 km and 1 km horizontal spatial resolutions and its impact on simulated NOx and O3 in Houston, TX

    Science.gov (United States)

    Pan, Shuai; Choi, Yunsoo; Roy, Anirban; Jeon, Wonbae

    2017-09-01

    A WRF-SMOKE-CMAQ air quality modeling system was used to investigate the impact of horizontal spatial resolution on simulated nitrogen oxides (NOx) and ozone (O3) in the Greater Houston area (a non-attainment area for O3). We employed an approach recommended by the United States Environmental Protection Agency to allocate county-based emissions to model grid cells at 1 km and 4 km horizontal grid resolutions. The CMAQ Integrated Process Rate analyses showed a substantial difference in emission contributions between the 1 and 4 km grids but similar NOx and O3 concentrations over urban and industrial locations. For example, the peak NOx emissions at an industrial and urban site differed by a factor of 20 for the 1 km grid and 8 for the 4 km grid, but simulated NOx concentrations changed only by a factor of 1.2 in both cases. Hence, due to the interplay of the atmospheric processes, we cannot expect the gas-phase air pollutants to drop by the same amount as the emissions are reduced. Both simulations reproduced the variability of NASA P-3B aircraft measurements of NOy and O3 in the lower atmosphere (from 90 m to 4.5 km). Both simulations provided similarly reasonable predictions at the surface, while the 1 km case depicted more detailed features of emissions and concentrations in heavily polluted areas, such as highways, airports, and industrial regions, which are useful for understanding the major causes of O3 pollution in such regions and for quantifying transport of O3 to populated communities in urban areas. The Integrated Reaction Rate analyses indicated a distinctive difference in chemistry processes between the model surface layer and the upper layers, implying that correcting the meteorological conditions at the surface may not help to enhance the O3 predictions. The model-observation O3 biases in our studies (e.g., large over-prediction during the nighttime or along the Gulf of Mexico coastline) were due to uncertainties in meteorology, chemistry or other processes. Horizontal grid
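
    Allocating county-total emissions onto 1 km or 4 km grid cells, as described above, is commonly done with spatial surrogates: each grid cell receives a share of the county total proportional to a surrogate weight such as road length or population. The sketch below is a simplified, assumption-based illustration of that surrogate weighting, not the SMOKE tool or the exact EPA procedure used in the study.

```python
# Simplified sketch of surrogate-based emission allocation: spread a county's total
# over its grid cells in proportion to a per-cell surrogate weight (e.g. road length).
import numpy as np

def allocate_emissions(county_total: float, surrogate_weights: np.ndarray) -> np.ndarray:
    """Return per-cell emissions that sum to county_total."""
    total = surrogate_weights.sum()
    if total <= 0:
        raise ValueError("surrogate weights must contain positive values")
    return county_total * surrogate_weights / total

weights = np.array([0.0, 3.2, 1.1, 0.7])   # e.g. km of roads per grid cell (hypothetical)
print(allocate_emissions(100.0, weights))   # per-cell emissions, summing to 100
```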

  5. Simulations of radiocarbon in a coarse-resolution world ocean model 2. Distributions of bomb-produced Carbon 14

    International Nuclear Information System (INIS)

    Toggweiler, J.R.; Dixon, K.; Bryan, K.

    1989-01-01

    Part 1 of this study examined the ability of the Geophysical Fluid Dynamics Laboratory (GFDL) primitive equation ocean general circulation model to simulate the steady-state distribution of naturally produced 14C in the ocean prior to the nuclear bomb tests of the 1950s and early 1960s. In Part 2 we begin with the steady-state distributions of Part 1 and subject the model to the pulse of elevated atmospheric 14C concentrations observed since the 1950s.

  6. Monte Carlo simulation of the X-ray response of a germanium microstrip detector with energy and position resolution

    CERN Document Server

    Rossi, G; Fajardo, P; Morse, J

    1999-01-01

    We present Monte Carlo computer simulations of the X-ray response of a micro-strip germanium detector over the energy range 30-100 keV. The detector consists of a linear array of lithographically defined 150 μm wide strips on a high-purity monolithic germanium crystal of 6 mm thickness. The simulation code is divided into two parts. We first consider a 10 μm wide X-ray beam striking the detector surface at normal incidence and compute the interaction processes possible for each photon. Photon scattering and absorption inside the detector crystal are simulated using the EGS4 code with the LSCAT extension for low energies. A history of the deposited energies is created, which is read by the second part of the code to compute the energy histogram for each detector strip. Appropriate algorithms are introduced to account for lateral charge spreading occurring during charge carrier drift to the detector surface, and for Fano and preamplifier electronic noise contributions. Computed spectra for differen...
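
    The second stage described above, turning deposited energies into per-strip spectra with Fano and electronic noise, can be sketched compactly. The snippet below is an illustrative assumption, not the authors' code: the Fano factor and electron-hole pair creation energy are standard germanium values taken from general references, and the electronic noise level and binning are placeholders.

```python
# Sketch: smear simulated energy deposits with Fano statistics plus electronic noise,
# then histogram them per detector strip. Material constants are generic Ge values.
import numpy as np

FANO, W_EV = 0.11, 2.96          # Fano factor and eV per electron-hole pair in Ge
rng = np.random.default_rng(0)

def smear(e_kev: np.ndarray, electronic_fwhm_kev: float = 0.5) -> np.ndarray:
    """Add Fano and electronic-noise broadening to deposited energies (keV)."""
    sigma_fano_kev = np.sqrt(FANO * e_kev * 1e3 * W_EV) * 1e-3
    sigma_elec_kev = electronic_fwhm_kev / 2.355
    sigma = np.sqrt(sigma_fano_kev ** 2 + sigma_elec_kev ** 2)
    return rng.normal(e_kev, sigma)

def strip_spectra(strip_ids, e_kev, n_strips=128, bins=np.arange(0.0, 101.0, 0.25)):
    """Histogram smeared deposits per strip; returns an (n_strips, n_bins) array."""
    spectra = np.zeros((n_strips, len(bins) - 1))
    smeared = smear(np.asarray(e_kev, dtype=float))
    strip_ids = np.asarray(strip_ids)
    for s in range(n_strips):
        spectra[s], _ = np.histogram(smeared[strip_ids == s], bins=bins)
    return spectra

print(strip_spectra([3, 3, 7], [59.5, 60.0, 30.1]).sum())  # 3 events total
```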

  7. COINCIDENCES BETWEEN O VI AND O VII LINES: INSIGHTS FROM HIGH-RESOLUTION SIMULATIONS OF THE WARM-HOT INTERGALACTIC MEDIUM

    International Nuclear Information System (INIS)

    Cen Renyue

    2012-01-01

    With high-resolution (0.46 h^-1 kpc), large-scale, adaptive mesh-refinement Eulerian cosmological hydrodynamic simulations we compute properties of O VI and O VII absorbers from the warm-hot intergalactic medium (WHIM) at z = 0. Our new simulations are in broad agreement with previous simulations, with ∼40% of the intergalactic medium being in the WHIM. Our simulations are in agreement with observed properties of O VI absorbers with respect to the line incidence rate and the Doppler-width-column-density relation. It is found that the amount of gas in the WHIM below and above 10^6 K is roughly equal. Strong O VI absorbers are found to be predominantly collisionally ionized. It is found that (61%, 57%, 39%) of O VI absorbers with log N(O VI) (cm^-2) = (12.5-13, 13-14, >14) have T < 10^5 K. Cross correlations between galaxies and strong [N(O VI) > 10^14 cm^-2] O VI absorbers on ∼100-300 kpc scales are suggested as a potential differentiator between collisional ionization and photoionization models. A quantitative prediction is made for the presence of broad and shallow O VI lines that are largely missed by current observations but will be detectable by Cosmic Origins Spectrograph observations. The reported 3σ upper limit on the mean column density of coincidental O VII lines at the location of detected O VI lines by Yao et al. is above our predicted value by a factor of 2.5-4. The claimed observational detection of O VII lines by Nicastro et al., if true, is 2σ above what our simulations predict.
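
    A hedged sketch of the kind of bookkeeping behind the quoted fractions, using a synthetic absorber catalogue in place of the actual simulation outputs:

        # Given catalogued absorbers with column density N(O VI) and gas
        # temperature T, compute the fraction below 10^5 K in column-density
        # bins. The catalogue here is synthetic, for illustration only.
        import numpy as np

        rng = np.random.default_rng(1)
        log_n = rng.uniform(12.5, 15.0, 5000)          # log10 N(O VI) [cm^-2]
        log_t = rng.normal(5.0, 0.6, 5000)             # log10 T [K]

        bins = [(12.5, 13.0), (13.0, 14.0), (14.0, np.inf)]
        for lo, hi in bins:
            sel = (log_n >= lo) & (log_n < hi)
            frac_cool = np.mean(log_t[sel] < 5.0)      # fraction with T < 10^5 K
            print(f"log N in [{lo}, {hi}): {frac_cool:.2f} below 1e5 K")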

  8. A High-resolution Simulation of the Transport of Gaseous Pollutants from a Severe Effusive Volcanic Eruption

    Science.gov (United States)

    Durand, J.; Tulet, P.; Filippi, J. B.; Leriche, M.

    2014-12-01

    Reunion Island experienced the biggest eruption of the Piton de la Fournaise volcano during April 2007. Known as "the eruption of the century", this event degassed more than 230 kt of SO2. These emissions led to important health issues, accompanied by environmental and infrastructure degradation. We present a modeling study that uses the mesoscale chemical model MesoNH to simulate the transport of gaseous SO2 between April 2nd and 7th, with a focus on the influence of heat fluxes from the lava. Three domains are nested from 2 km to 100 m horizontal grid spacing, allowing us to better represent the phenomenology of this eruption. This modelling study couples on-line (i) the MesoNH mesoscale dynamics, (ii) a gas and aqueous chemical scheme, and (iii) a surface scheme that integrates a new scheme for the lava heat flux and its surface propagation. All fluxes (sensible and latent heat, water vapor, SO2, CO2, CO) are thus triggered depending on the lava dynamics. Our simulations reproduce the observed surface SO2 field quite faithfully. Various sensitivity analyses show that the volcanic sulfur distribution was mainly controlled by the lava heat flow. Without the heat flow parameterization, the surface concentrations are multiplied by a factor of 30 compared to the reference simulation. Numerical modeling allows us to distinguish the acid rain produced by the emission of water vapor and chloride when the lava flows into the seawater from the acid rain formed by the mixing of the volcanic SO2 into the raindrops of convective clouds.

  9. High resolution temperature mapping of gas turbine combustor simulator exhaust with femtosecond laser induced fiber Bragg gratings

    Science.gov (United States)

    Walker, Robert B.; Yun, Sangsig; Ding, Huimin; Charbonneau, Michel; Coulas, David; Lu, Ping; Mihailov, Stephen J.; Ramachandran, Nanthan

    2017-04-01

    Femtosecond infrared (fs-IR) laser written fiber Bragg gratings (FBGs) have demonstrated great potential for extreme sensing. Such conditions are inherent in the advanced gas turbine engines under development to reduce greenhouse gas emissions, and the ability to measure temperature gradients in these harsh environments is currently limited by the lack of sensors and controls capable of withstanding the high temperature, pressure and corrosive conditions present. This paper discusses the fabrication and deployment of several fs-IR written FBG arrays for monitoring the exhaust temperature gradients of a gas turbine combustor simulator. Results include contour plots of measured temperature gradients contrasted with thermocouple data.
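
    For orientation, the temperature readout from such gratings rests on the standard linear Bragg-wavelength relation; the sketch below uses illustrative calibration constants, and real high-temperature FBGs are usually calibrated with a polynomial rather than a single linear coefficient:

        # Convert a measured Bragg-wavelength shift into a temperature reading
        # using dLambda / Lambda0 = k_T * dT. The sensitivity k_T and reference
        # values are illustrative, not those of the gratings in the paper.
        def fbg_temperature(lambda_nm, lambda_ref_nm=1550.0, t_ref_c=20.0,
                            k_t_per_c=6.7e-6):
            """Return temperature (deg C) from a measured Bragg wavelength (nm)."""
            d_lambda = lambda_nm - lambda_ref_nm
            return t_ref_c + d_lambda / (lambda_ref_nm * k_t_per_c)

        # Example: a 1.0 nm shift on a 1550 nm grating with ~10 pm/K sensitivity
        print(fbg_temperature(1551.0))   # roughly 116 deg C for these constants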

  10. Chirality in MoS2 nano tubes studied by molecular dynamics simulation and images of high resolution microscopy

    International Nuclear Information System (INIS)

    Perez A, M.

    2003-01-01

    Nanotubes are a new material that has been intensely studied since 1991, owing to characteristics that result from their nanometric size and the associated quantum effects. A large part of these investigations has focused on characterization, modelling and computer simulation, in order to study their properties and possible behavior without the need for real manipulation of the material. Obtaining the structural properties of the different forms of particles of nanometric dimensions observed in the transmission electron microscope is of great help in studying the mesoscopic characteristics of the material. (Author)

  11. Dark matter substructure in numerical simulations: a tale of discreteness noise, runaway instabilities, and artificial disruption

    Science.gov (United States)

    van den Bosch, Frank C.; Ogiya, Go

    2018-04-01

    To gain understanding of the complicated, non-linear, and numerical processes associated with the tidal evolution of dark matter subhaloes in numerical simulation, we perform a large suite of idealized simulations that follow individual N-body subhaloes in a fixed, analytical host halo potential. By varying both physical and numerical parameters, we investigate under what conditions the subhaloes undergo disruption. We confirm the conclusions from our more analytical assessment in van den Bosch et al. that most disruption is numerical in origin; as long as a subhalo is resolved with sufficient mass and force resolution, a bound remnant survives. This implies that state-of-the-art cosmological simulations still suffer from significant overmerging. We demonstrate that this is mainly due to inadequate force softening, which causes excessive mass loss and artificial tidal disruption. In addition, we show that subhaloes in N-body simulations are susceptible to a runaway instability triggered by the amplification of discreteness noise in the presence of a tidal field. These two processes conspire to put serious limitations on the reliability of dark matter substructure in state-of-the-art cosmological simulations. We present two criteria that can be used to assess whether individual subhaloes in cosmological simulations are reliable or not, and advocate that subhaloes that satisfy either of these two criteria be discarded from further analysis. We discuss the potential implications of this work for several areas in astrophysics.
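
    A sketch of how such reliability criteria could be applied as a catalogue filter; the thresholds and functional form below are placeholders, not the two criteria actually derived in the paper:

        # Illustrative post-processing filter: discard subhaloes whose particle
        # number or force softening (relative to their size) suggests numerical
        # disruption may dominate. Threshold values here are placeholders.
        def subhalo_is_reliable(n_particles, softening_kpc, half_mass_radius_kpc,
                                min_particles=1000, max_soft_ratio=0.1):
            """Return True if the subhalo passes both illustrative resolution checks."""
            enough_particles = n_particles >= min_particles
            adequate_forces = softening_kpc <= max_soft_ratio * half_mass_radius_kpc
            return enough_particles and adequate_forces

        catalogue = [
            {"id": 1, "n": 250_000, "eps": 0.1, "rh": 5.0},
            {"id": 2, "n": 400, "eps": 0.1, "rh": 0.8},   # too few particles
        ]
        kept = [s for s in catalogue
                if subhalo_is_reliable(s["n"], s["eps"], s["rh"])]
        print([s["id"] for s in kept])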

  12. Simulations of Cyclone Sidr in the Bay of Bengal with a High-Resolution Model: Sensitivity to Large-Scale Boundary Forcing

    Science.gov (United States)

    Kumar, Anil; Done, James; Dudhia, Jimy; Niyogi, Dev

    2011-01-01

    The predictability of Cyclone Sidr in the Bay of Bengal was explored in terms of track and intensity using the Advanced Research Hurricane Weather Research Forecast (AHW) model. This constitutes the first application of the AHW over an area that lies outside the region of the North Atlantic for which this model was developed and tested. Several experiments were conducted to understand the possible contributing factors that affected Sidr's intensity and track simulation by varying the initial start time and domain size. Results show that Sidr's track was strongly controlled by the synoptic flow at the 500-hPa level, especially by the strong mid-latitude westerly over north-central India. A 96-h forecast produced westerly winds over north-central India at the 500-hPa level that were notably weaker; this likely caused the modeled cyclone track to drift from the observed track. Reducing the model domain size reduced the model error in the synoptic-scale winds at 500 hPa and produced an improved cyclone track. Specifically, the cyclone track appeared to be sensitive to the upstream synoptic flow and was, therefore, sensitive to the location of the western boundary of the domain. However, cyclone intensity remained largely unaffected by this synoptic wind error at the 500-hPa level. Comparison of the high-resolution, moving nested domain with a single coarser-resolution domain showed little difference in tracks, but resulted in significantly different intensities. Experiments on the domain size with regard to the total precipitation simulated by the model showed that precipitation patterns and 10-m surface winds were also different. This was mainly due to the mid-latitude westerly flow across the west side of the model domain. The analysis also suggested that the total precipitation pattern and track were unchanged when the domain was extended toward the east, north, and south. Furthermore, this highlights our conclusion that Sidr was influenced from the west

  13. Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model

    International Nuclear Information System (INIS)

    Reed, K. A.

    2015-01-01

    Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid flow component of any general circulation model (GCM), is often overlooked in the analysis of a model's ability to simulate TCs compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages, despite the latter having a slightly more favorable TC environment. Furthermore, these results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key Points: (1) the impact of the GCM dynamical core is often overlooked in TC assessments; (2) the CAM5 dynamical core has a significant impact on TC frequency and intensity; (3) a larger effort is needed to better understand this uncertainty.

  14. High-resolution fast temperature mapping of a gas turbine combustor simulator with femtosecond infrared laser written fiber Bragg gratings

    Science.gov (United States)

    Walker, Robert B.; Yun, Sangsig; Ding, Huimin; Charbonneau, Michel; Coulas, David; Ramachandran, Nanthan; Mihailov, Stephen J.

    2017-02-01

    Femtosecond infrared (fs-IR) written fiber Bragg gratings (FBGs) have demonstrated great potential for extreme sensing. Such conditions are inherent to the advanced gas turbine engines under development to reduce greenhouse gas emissions, and the ability to measure temperature gradients in these harsh environments is currently limited by the lack of sensors and controls capable of withstanding the high temperature, pressure and corrosive conditions present. This paper discusses the fabrication and deployment of several fs-IR written FBG arrays for monitoring the sidewall and exhaust temperature gradients of a gas turbine combustor simulator. Results include contour plots of measured temperature gradients contrasted with thermocouple data, discussion of deployment strategies and comments on reliability.

  15. Multi Resolution In-Situ Testing and Multiscale Simulation for Creep Fatigue Damage Analysis of Alloy 617

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yongming [Arizona State Univ., Tempe, AZ (United States). School for Engineering of Matter, Transport and Energy; Oskay, Caglar [Vanderbilt Univ., Nashville, TN (United States). Dept. of Civil and Environmental Engineering

    2017-04-30

    This report outlines the research activities that were carried out for the integrated experimental and simulation investigation of the creep-fatigue damage mechanism and life prediction of the Nickel-based alloy Inconel 617 at high temperatures (950 °C and 850 °C). First, a novel experimental design using a hybrid control technique is proposed. The newly developed experimental technique can generate different combinations of creep and fatigue damage by changing the experimental design parameters. Next, detailed imaging analysis and statistical data analysis are performed to quantify the failure mechanisms of the creep fatigue of alloy 617 at high temperatures. It is observed that the creep damage is directly associated with internal voids at the grain boundaries and the fatigue damage is directly related to surface cracking. It is also observed that the classical time fraction approach does not correlate well with the experimentally observed damage features, whereas an effective time fraction parameter shows an excellent correlation with the material microstructural damage. Thus, a new empirical damage interaction diagram is proposed based on the experimental observations. Following this, a macro-level viscoplastic model coupled with damage is developed to simulate the stress/strain response under creep-fatigue loadings. A damage rate function based on the hysteresis energy and creep energy is proposed to capture the softening behavior of the material, and a good correlation with life prediction and material hysteresis behavior is observed. The simulation work is extended to include the microstructural heterogeneity. A crystal plasticity finite element model considering isothermal and large deformation conditions at the microstructural scale has been developed for fatigue, creep-fatigue as well as creep deformation and rupture at high temperature. The model considers collective dislocation glide and climb of the grains and progressive damage accumulation of
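
    For reference, the classical time-fraction bookkeeping that the report tests against combines a Miner sum of cycle fractions with a sum of creep hold-time fractions; a minimal sketch with illustrative inputs (not Alloy 617 data from the report) follows:

        # Classical linear damage summation: creep damage as a sum of hold-time
        # fractions and fatigue damage as a Miner sum of cycle fractions.
        # Rupture times and cycle lives below are illustrative only.
        def creep_fatigue_damage(hold_times_h, rupture_times_h, cycles, cycles_to_fail):
            """Return (creep damage, fatigue damage) as dimensionless fractions."""
            d_creep = sum(t / tr for t, tr in zip(hold_times_h, rupture_times_h))
            d_fatigue = sum(n / nf for n, nf in zip(cycles, cycles_to_fail))
            return d_creep, d_fatigue

        # Example: two stress/temperature blocks
        dc, df = creep_fatigue_damage(
            hold_times_h=[500.0, 200.0], rupture_times_h=[8000.0, 2500.0],
            cycles=[2000, 500], cycles_to_fail=[12000, 4000])
        print(dc, df, "fail" if dc + df >= 1.0 else "ok")  # simple linear interaction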

  16. Vertical Rise Velocity of Equatorial Plasma Bubbles Estimated from Equatorial Atmosphere Radar Observations and High-Resolution Bubble Model Simulations

    Science.gov (United States)

    Yokoyama, T.; Ajith, K. K.; Yamamoto, M.; Niranjan, K.

    2017-12-01

    Equatorial plasma bubbles (EPBs) are a well-known phenomenon in the equatorial ionospheric F region. As they cause severe scintillation in the amplitude and phase of radio signals, it is important to understand and forecast the occurrence of EPBs from a space weather point of view. The development of EPBs is presently understood as an evolution of the generalized Rayleigh-Taylor instability. We have already developed a 3D high-resolution bubble (HIRB) model with a grid spacing as small as 1 km and presented the nonlinear growth of EPBs, which shows very turbulent internal structures such as bifurcation and pinching. As EPBs have field-aligned structures, the latitude range that is affected by EPBs depends on the apex altitude of the EPBs over the dip equator. However, it has not been easy to observe the apex altitude and vertical rise velocity of EPBs. The Equatorial Atmosphere Radar (EAR) in Indonesia is capable of steering radar beams quickly so that the growth phase of EPBs can be captured clearly. The vertical rise velocities of the EPBs observed around the midnight hours are significantly smaller than those observed in postsunset hours. Further, the vertical growth of the EPBs around midnight hours ceases at relatively lower altitudes, whereas the majority of EPBs at postsunset hours were found to have grown beyond the maximum detectable altitude of the EAR. The HIRB model with varying background conditions is employed to investigate the possible factors that control the vertical rise velocity and maximum attainable altitudes of EPBs. The rise velocities estimated from EAR observations at both postsunset and midnight hours are, in general, consistent with the nonlinear evolution of EPBs from the HIRB model.
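
    A minimal sketch of how a vertical rise velocity can be estimated from a time series of apex altitudes (synthetic numbers, not EAR data):

        # Finite-difference rise velocity from apex-altitude samples, as would
        # be derived from radar scans or model output. Values are synthetic.
        import numpy as np

        t_min = np.array([0, 5, 10, 15, 20, 25])             # minutes after onset
        apex_km = np.array([350, 380, 430, 490, 540, 570])   # apex altitude (km)

        v_ms = np.gradient(apex_km * 1e3, t_min * 60.0)      # central differences, m/s
        print(np.round(v_ms, 1))   # peaks near ~180 m/s for this toy profile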

  17. Interfaces and strain in InGaAsP/InP heterostructures assessed with dynamical simulations of high-resolution x-ray diffraction curves

    International Nuclear Information System (INIS)

    Vandenberg, J.M.

    1995-01-01

    The interfacial structure of a lattice-matched InGaAs/InP/(100)InP superlattice with a long period of ∼630 Angstrom has been studied by fully dynamical simulations of high-resolution x-ray diffraction curves. This structure exhibits a very symmetrical x-ray pattern enveloping a large number of closely spaced satellite intensities with pronounced maxima and minima. The dynamical analysis shows that the position and shape of these maxima and minima are extremely sensitive to the number N of molecular layers and the atomic spacing d of the InGaAs and InP layers, and in particular to the presence of strained interfacial layers. The structural model of strained interfaces was also applied to an epitaxial lattice-matched 700 Angstrom InP/400 Angstrom InGaAsP/(100)InP heterostructure. 9 refs., 3 figs
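
    For orientation, the superlattice period can be read off the satellite spacing with the standard kinematic relation 2Λ(sin θ_{m+1} - sin θ_m) = λ; the wavelength and satellite angles in the sketch below are illustrative numbers only:

        # Superlattice period from two adjacent satellite angles.
        import numpy as np

        def superlattice_period(theta_deg_a, theta_deg_b, wavelength_a=1.5406):
            """Period (Angstrom) from adjacent satellite angles (degrees),
            for Cu K-alpha1 radiation by default."""
            s = np.sin(np.radians(theta_deg_b)) - np.sin(np.radians(theta_deg_a))
            return wavelength_a / (2.0 * abs(s))

        # Two hypothetical neighbouring satellites
        print(round(superlattice_period(31.600, 31.682), 1))  # of order ~630 Angstrom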

  18. Simulation, optimization and testing of a novel high spatial resolution X-ray imager based on Zinc Oxide nanowires in Anodic Aluminium Oxide membrane using Geant4

    Science.gov (United States)

    Esfandi, F.; Saramad, S.

    2015-07-01

    In this work, a new generation of scintillator-based X-ray imagers built from ZnO nanowires in an Anodized Aluminum Oxide (AAO) nanoporous template is characterized. The optical response of ordered ZnO nanowire arrays in the porous AAO template under low energy X-ray illumination is simulated with the Geant4 Monte Carlo code and compared with experimental results. The results show that for 10 keV X-ray photons, by considering the light guiding properties of zinc oxide inside the AAO template and a suitable selection of detector thickness and pore diameter, a spatial resolution of less than one micrometer and a detection efficiency of 66% are achievable. This novel nano scintillator detector can have many advantages for medical applications in the future.
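
    A back-of-the-envelope sketch of the thickness/efficiency trade-off mentioned above; the attenuation coefficient is a placeholder and would in practice be taken from tabulated data for the actual ZnO/AAO composite or from the Geant4 simulation itself:

        # Absorbed fraction of a normally incident beam in a layer of
        # thickness t: 1 - exp(-mu * t).
        import numpy as np

        def absorbed_fraction(thickness_um, mu_per_cm):
            return 1.0 - np.exp(-mu_per_cm * thickness_um * 1e-4)

        mu_10kev = 500.0   # placeholder linear attenuation coefficient [1/cm]
        for t_um in (10, 30, 50, 100):
            print(t_um, "um ->", round(absorbed_fraction(t_um, mu_10kev), 2))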

  19. Simulation, optimization and testing of a novel high spatial resolution X-ray imager based on Zinc Oxide nanowires in Anodic Aluminium Oxide membrane using Geant4

    International Nuclear Information System (INIS)

    Esfandi, F.; Saramad, S.

    2015-01-01

    In this work, a new generation of scintillator-based X-ray imagers built from ZnO nanowires in an Anodized Aluminum Oxide (AAO) nanoporous template is characterized. The optical response of ordered ZnO nanowire arrays in the porous AAO template under low energy X-ray illumination is simulated with the Geant4 Monte Carlo code and compared with experimental results. The results show that for 10 keV X-ray photons, by considering the light guiding properties of zinc oxide inside the AAO template and a suitable selection of detector thickness and pore diameter, a spatial resolution of less than one micrometer and a detection efficiency of 66% are achievable. This novel nano scintillator detector can have many advantages for medical applications in the future

  20. Can small island mountains provide relief from the Subtropical Precipitation Decline? Simulating future precipitation regimes for small island nations using high resolution Regional Climate Models.

    Science.gov (United States)

    Bowden, J.; Terando, A. J.; Misra, V.; Wootten, A.

    2017-12-01

    Small island nations are vulnerable to changes in the hydrologic cycle because of their limited water resources. This risk to water security is likely even higher in sub-tropical regions where anthropogenic forcing of the climate system is expected to lead to a drier future (the so-called `dry-get-drier' pattern). However, high-resolution numerical modeling experiments have also shown an enhancement of existing orographically-influenced precipitation patterns on islands with steep topography, potentially mitigating subtropical drying on windward mountain sides. Here we explore the robustness of the near-term (25-45 years) subtropical precipitation decline (SPD) across two island groupings in the Caribbean, Puerto Rico and the U.S. Virgin Islands. These islands, forming the boundary between the Greater and Lesser Antilles, significantly differ in size, topographic relief, and orientation to prevailing winds. Two 2-km horizontal resolution regional climate model simulations are used to downscale a total of three different GCMs under the RCP8.5 emissions scenario. Results indicate some possibility for modest increases in precipitation at the leading edge of the Luquillo Mountains in Puerto Rico, but consistent declines elsewhere. We conclude with a discussion of potential explanations for these patterns and the attendant risks to water security that subtropical small island nations could face as the climate warms.

  1. High resolution numerical simulation (WRF V3) of an extreme rainy event over the Guadeloupe archipelago: Case of 3-5 January 2011.

    Science.gov (United States)

    Bernard, Didier C.; Cécé, Raphaël; Dorville, Jean-François

    2013-04-01

    During the dry season, the Guadeloupe archipelago may be affected by extreme rainy disturbances which may induce floods in a very short time. C. Brévignon (2003) considered a heavy rain event to be rainfall above 100 mm per day (outside mountainous areas) for this tropical region. During a cold front passage (3-5 January 2011), torrential rainfall caused floods, major damage, landslides and five deaths. This phenomenon has put into question the current warning system based on large-scale numerical models. This low-resolution forecasting (around 50-km scale) has proved unsuitable for a small tropical island like Guadeloupe (1600 km2). The most affected area was the middle of Grande-Terre island, which is the main flat island of the archipelago (area of 587 km2, peak at 136 m) and the most populated sector of Guadeloupe. In this area, observed rainfall reached 100-160 mm in 24 hours (an amount equivalent to two months of rain for January (C. Brévignon, 2003)), drainage systems were saturated in less than 2 hours, and five people died in a ravine. For two years, the atmospheric model WRF ARW V3 (Skamarock et al., 2008) has been used to model the meteorological variable fields observed over the Guadeloupe archipelago at a high-resolution 1-km scale (Cécé et al., 2011). The model error estimators show that meteorological variables seem to be properly simulated for standard types of weather: undisturbed, strong or weak trade winds. These simulations indicate that for weak to moderate synoptic winds, a small island like Grande-Terre is able to generate inland convergence zones during daytime. In this presentation, we apply this high-resolution model to simulate the extreme rainy disturbance of 3-5 January 2011. The evolution of the modeled meteorological variable fields is analyzed in the most affected area of Grande-Terre (city of Les Abymes). The main goal is to examine local quasi-stationary updraft systems and highlight their convective mechanisms. The

  2. Multi-resolution and multi-scale simulation of the thermal hydraulics in fast neutron reactor assemblies

    International Nuclear Information System (INIS)

    Angeli, P.-E.

    2011-01-01

    The present work is devoted to the multi-scale numerical simulation of a fast neutron reactor assembly. In spite of the rapid growth of computer power, a complete fine-scale CFD of such a system remains out of reach in a research and development context. After determining the thermal-hydraulic behaviour of the assembly at the macroscopic scale, we propose to carry out a local reconstruction of the fine-scale information. The complete approach requires a much lower CPU time than CFD of the entire structure. The macro-scale description is obtained using either the volume averaging formalism for porous media, or an alternative modeling historically developed for the study of fast neutron reactor assemblies. It provides information used as constraints for a downscaling problem, through a penalization technique applied to the local conservation equations. This problem leans on the periodic nature of the structure by imposing periodic boundary conditions on the required microscale fields or their spatial deviations. After validating the methodologies on model applications, we apply them to 'industrial' configurations, which demonstrates the viability of this multi-scale approach. (author) [fr

  3. Method for Modeling High-Temporal-Resolution Stream Inflows in a Long-Term ParFlow.CLM Simulation

    Science.gov (United States)

    Miller, G. R.; Merket, C.

    2017-12-01

    Traditional hydrologic modeling has compartmentalized the water cycle into distinct components (e.g. rainfall-runoff, river routing, or groundwater flow models). An integrated, process-based modeling framework assesses two or more of these components simultaneously, reducing the error associated with approximated boundary conditions. One integrated model, ParFlow.CLM, offers the advantage of parallel computing, but it lacks any mechanism for incorporating time-varying streamflow as an upstream boundary condition. Here, we present a generalized method for applying transient streamflow at an upstream boundary in ParFlow.CLM. Downstream flow values are compared to predictions by traditional runoff and routing methods as implemented in HEC-HMS. Additionally, we define a model spin-up process which includes initialization of steady-state streamflow. The upstream inflow method was successfully tested on two domains - one synthetic tilted-V catchment and an idealized small stream catchment in the Brazos River Basin. The stream in the idealized domain is gaged at the upstream and downstream boundaries. Both tests assumed a homogeneous subsurface so that the efficacy of the transient streamflow method could be evaluated with minimal complication from groundwater interactions. In the tilted-V catchment, spin-up criteria were achieved within 6 model years. A 25 x 25 x 66 cell model grid was run at a computational efficiency of values early in the simulation.
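
    A minimal sketch of the bookkeeping such an inflow method needs - interpolating a gauge hydrograph to the model time step and converting discharge to a per-cell rate; how that source is actually injected into ParFlow.CLM is specific to the authors' method and is not shown here:

        # Interpolate observed discharge onto model time steps and express it as
        # a depth-equivalent rate over a hypothetical inlet cell. All numbers
        # are illustrative.
        import numpy as np

        gauge_t_hr = np.array([0.0, 6.0, 12.0, 18.0, 24.0])
        gauge_q_cms = np.array([1.2, 3.5, 9.8, 4.1, 1.5])      # discharge (m3/s)

        dt_hr = 1.0
        model_t = np.arange(0.0, 24.0 + dt_hr, dt_hr)
        q_model = np.interp(model_t, gauge_t_hr, gauge_q_cms)  # m3/s per model step

        cell_dx, cell_dy = 25.0, 25.0                          # inlet cell size (m)
        flux_m_per_hr = q_model * 3600.0 / (cell_dx * cell_dy) # depth-equivalent rate
        print(np.round(flux_m_per_hr, 2))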

  4. Optimization of the resolution of remotely sensed digital elevation model to facilitate the simulation and spatial propagation of flood events in flat areas

    Science.gov (United States)

    Karapetsas, Nikolaos; Skoulikaris, Charalampos; Katsogiannos, Fotis; Zalidis, George; Alexandridis, Thomas

    2013-04-01

    The use of satellite remote sensing products, such as Digital Elevation Models (DEMs), within dedicated computational interfaces of Geographic Information Systems (GIS) has fostered and facilitated the acquisition of data on specific hydrologic features, such as slope, flow direction and flow accumulation, which are crucial inputs to hydrologic or hydraulic models at the river basin scale. However, even though DEMs of different resolutions, varying from a few km down to 20 m, are freely available for the European continent, these remotely sensed elevation data are rather coarse in cases where large flat areas dominate a watershed, resulting in an unsatisfactory representation of the terrain characteristics. This work aims at implementing a combined interpolation technique to improve the resolution of a DEM so that it can be used as the input ground model to a hydraulic model for the assessment of the propagation of potential flood events in plains. More specifically, the second version of the ASTER Global Digital Elevation Model (GDEM2), which has an overall accuracy of around 20 meters, was interpolated with a large number of aerial control points available from the Hellenic Mapping and Cadastral Organization (HMCO). The uncertainty inherent in both available datasets (ASTER & HMCO) and the appearance of uncorrelated errors and artifacts were minimized by incorporating geostatistical filtering. The resolution of the produced DEM was approximately 10 meters and its validation was conducted with an external dataset of 220 geodetic survey points. The derived DEM was then used as an input to the hydraulic model InfoWorks RS, whose operation is based on the relief characteristics contained in the ground model, for defining, in an automated way, the cross-section parameters and simulating the spatial distribution of the flood. The plain of Serres, which is located in the downstream part of the Struma/Strymon transboundary river basin shared
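
    A sketch of the kind of external validation described above, sampling a DEM at surveyed points and reporting bias and RMSE; the DEM array, cell size, and survey points are synthetic stand-ins:

        # Compare DEM elevations against independent geodetic survey points.
        import numpy as np

        def sample_dem(dem, x, y, cell=10.0):
            """Nearest-neighbour sample of a DEM given coordinates in metres
            from the upper-left corner (row 0 at the top)."""
            col = int(round(x / cell))
            row = int(round(y / cell))
            return dem[row, col]

        rng = np.random.default_rng(2)
        dem = rng.normal(40.0, 5.0, (200, 200))                # synthetic 10 m DEM
        survey = [(x, y, sample_dem(dem, x, y) + rng.normal(0, 1.5))
                  for x, y in rng.uniform(0, 1990, (220, 2))]  # 220 control points

        residuals = np.array([sample_dem(dem, x, y) - z for x, y, z in survey])
        print("bias =", round(residuals.mean(), 2), "m, RMSE =",
              round(np.sqrt((residuals ** 2).mean()), 2), "m")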