WorldWideScience

Sample records for high-resolution cosmological simulations

  1. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    Science.gov (United States)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We explore the evolution of gas distributions in cosmological simulations carried out with the RAMSES adaptive mesh refinement (AMR) code, in order to quantify the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures, which strongly affects the assembly history of objects and has the same effect as simulating a different cosmology. The resolution of the ICs is therefore an important factor, even at fixed maximum spatial resolution. The power spectrum of gas in AMR simulations diverges strongly from the fixed-grid approach, with more power on small scales in the AMR runs even at fixed physical resolution, and the AMR approach also produces offsets in the star formation at specific epochs. This is because, before certain times, the upper grid levels are held back to maintain approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolution, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.
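
    The divergence between the AMR and fixed-grid runs described above is quantified through the gas power spectrum. Below is a minimal sketch of how such a spectrum can be measured from an overdensity field deposited on a uniform grid; the grid size, box size, and binning are illustrative assumptions, not the analysis pipeline used by the authors.

```python
import numpy as np

def gas_power_spectrum(delta, box_size, nbins=32):
    """Spherically averaged P(k) of an overdensity cube `delta` in a periodic box."""
    n = delta.shape[0]
    dk = np.fft.rfftn(delta) / delta.size                  # normalised Fourier modes
    p3d = (np.abs(dk) ** 2) * box_size ** 3                # 3-D power of each mode

    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kfreq_r = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq_r, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()

    edges = np.linspace(0.0, kmag.max(), nbins + 1)        # linear k bins
    idx = np.clip(np.digitize(kmag, edges) - 1, 0, nbins - 1)
    counts = np.bincount(idx, minlength=nbins)
    power = np.bincount(idx, weights=p3d.ravel(), minlength=nbins)
    return 0.5 * (edges[1:] + edges[:-1]), power / np.maximum(counts, 1)

# usage on a mock 64^3 density cube standing in for gridded AMR or fixed-grid gas
rho = np.random.default_rng(0).lognormal(size=(64, 64, 64))
k, pk = gas_power_spectrum(rho / rho.mean() - 1.0, box_size=10.0)
```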

  2. The AGORA High-resolution Galaxy Simulations Comparison Project

    OpenAIRE

    Kim Ji-hoon; Abel Tom; Agertz Oscar; Bryan Greg L.; Ceverino Daniel; Christensen Charlotte; Conroy Charlie; Dekel Avishai; Gnedin Nickolay Y.; Goldbaum Nathan J.; Guedes Javiera; Hahn Oliver; Hobbs Alexander; Hopkins Philip F.; Hummels Cameron B.

    2014-01-01

    The Astrophysical Journal Supplement Series 210.1 (2014): 14, reproduced by permission of the AAS. We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle o...

  3. Precision cosmology with time delay lenses: high resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao-Lei; Liao, Kai [Department of Astronomy, Beijing Normal University, 19 Xinjiekouwai Street, Beijing, 100875 (China); Treu, Tommaso; Agnello, Adriano [Department of Physics, University of California, Broida Hall, Santa Barbara, CA 93106 (United States); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Marshall, Philip J., E-mail: xlmeng919@gmail.com, E-mail: tt@astro.ucla.edu, E-mail: aagnello@physics.ucsb.edu, E-mail: mauger@ast.cam.ac.uk, E-mail: liaokai@mail.bnu.edu.cn, E-mail: dr.phil.marshall@gmail.com [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305 (United States)

    2015-09-01

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ''Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems, which will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation

  4. Precision cosmology with time delay lenses: High resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao -Lei [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Treu, Tommaso [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Agnello, Adriano [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Auger, Matthew W. [Univ. of Cambridge, Cambridge (United Kingdom); Liao, Kai [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Marshall, Philip J. [Stanford Univ., Stanford, CA (United States)

    2015-09-28

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ``Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems, which will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive

  5. Kinetic Energy from Supernova Feedback in High-resolution Galaxy Simulations

    Science.gov (United States)

    Simpson, Christine M.; Bryan, Greg L.; Hummels, Cameron; Ostriker, Jeremiah P.

    2015-08-01

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells' mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10^9 M_⊙ dwarf halo. We find that in high-density media (≳50 cm⁻³) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
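
    As a rough illustration of the idea of splitting a fixed supernova energy budget between velocity kicks and heat on a grid, a minimal sketch follows. The 3×3×3 deposition patch, the unit conventions, and the fixed kinetic fraction f_kin are assumptions for illustration, not the resolution-dependent scaling derived in the paper.

```python
import numpy as np

def inject_supernova(rho, vel, e_therm, center, dx, e_sn=1.0e51, f_kin=0.01):
    """Minimal sketch: deposit one SN energy into the 3x3x3 cell patch around
    `center`, a fraction f_kin as radial velocity kicks and (1 - f_kin) as heat.
    Cross terms with pre-existing velocities are ignored for brevity."""
    i, j, k = center
    sl = np.s_[i - 1:i + 2, j - 1:j + 2, k - 1:k + 2]
    mass = rho[sl] * dx ** 3                              # mass per cell in the patch

    # unit vectors pointing radially away from the central cell
    off = np.stack(np.meshgrid([-1, 0, 1], [-1, 0, 1], [-1, 0, 1],
                               indexing="ij"), axis=-1).astype(float)
    rnorm = np.linalg.norm(off, axis=-1, keepdims=True)
    rhat = np.divide(off, rnorm, out=np.zeros_like(off), where=rnorm > 0)

    # one common speed kick so the kicks sum to f_kin * e_sn of kinetic energy
    kicked_mass = (mass * (rnorm[..., 0] > 0)).sum()      # exclude the central cell
    dv = np.sqrt(2.0 * f_kin * e_sn / kicked_mass)
    vel[sl] += dv * rhat

    # the remaining energy goes in as thermal energy density, spread over the patch
    e_therm[sl] += (1.0 - f_kin) * e_sn / (27 * dx ** 3)

# toy usage on a small uniform grid (cgs-like numbers purely for illustration)
rho = np.full((16, 16, 16), 1.0e-24)
vel = np.zeros((16, 16, 16, 3))
eth = np.full((16, 16, 16), 1.0e-12)
inject_supernova(rho, vel, eth, center=(8, 8, 8), dx=3.0e19)
```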

  6. Evaluating Galactic Habitability Using High Resolution Cosmological Simulations of Galaxy Formation

    OpenAIRE

    Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam

    2015-01-01

    D. F. acknowledges support from STFC consolidated grant ST/J001422/1, and the ‘ECOGAL’ ERC Advanced Grant. P. D. acknowledges the support of the Addison Wheeler Fellowship awarded by the Institute of Advanced Study at Durham University. N. I. L. is supported by the Deutsche Forschungs Gemeinschaft (DFG). We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which...

  7. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT

    International Nuclear Information System (INIS)

    Kim, Ji-hoon; Conroy, Charlie; Goldbaum, Nathan J.; Krumholz, Mark R.; Abel, Tom; Agertz, Oscar; Gnedin, Nickolay Y.; Kravtsov, Andrey V.; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte; Hummels, Cameron B.; Dekel, Avishai; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly

    2014-01-01

    We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of eight galaxies with halo masses M_vir ≅ 10^10, 10^11, 10^12, and 10^13 M_⊙ at z = 0 and two different ('violent' and 'quiescent') assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback prescriptions will be tuned to provide a realistic interstellar and circumgalactic medium using a non-cosmological disk galaxy simulation. Cosmological runs will be systematically compared with each other using a common analysis toolkit and validated against observations to verify that the solutions are robust, i.e., that the astrophysical assumptions are responsible for any success, rather than artifacts of particular implementations. The goals of the AGORA project are, broadly speaking, to raise the realism and predictive power of galaxy simulations and the understanding of the feedback processes that regulate galaxy 'metabolism'. The initial conditions for the AGORA galaxies as well as simulation outputs at various epochs will be made publicly available to the community. The proof-of-concept dark-matter-only test of the formation of a galactic halo with a z = 0 mass of M

  8. GERLUMPH DATA RELEASE 1: HIGH-RESOLUTION COSMOLOGICAL MICROLENSING MAGNIFICATION MAPS AND eResearch TOOLS

    International Nuclear Information System (INIS)

    Vernardos, G.; Fluke, C. J.; Croton, D.; Bate, N. F.

    2014-01-01

    As synoptic all-sky surveys begin to discover new multiply lensed quasars, the flow of data will enable statistical cosmological microlensing studies of sufficient size to constrain quasar accretion disk and supermassive black hole properties. In preparation for this new era, we are undertaking the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). We present here the GERLUMPH Data Release 1, which consists of 12,342 high resolution cosmological microlensing magnification maps and provides the first uniform coverage of the convergence, shear, and smooth matter fraction parameter space. We use these maps to perform a comprehensive numerical investigation of the mass-sheet degeneracy, finding excellent agreement with its predictions. We study the effect of smooth matter on microlensing induced magnification fluctuations. In particular, in the minima and saddle-point regions, fluctuations are enhanced only along the critical line, while in the maxima region they are always enhanced for high smooth matter fractions (≈0.9). We describe our approach to data management, including the use of an SQL database with a Web interface for data access and online analysis, obviating the need for individuals to download large volumes of data. In combination with existing observational databases and online applications, the GERLUMPH archive represents a fundamental component of a new microlensing eResearch cloud. Our maps and tools are publicly available at http://gerlumph.swin.edu.au/
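
    For readers unfamiliar with how magnification maps of this kind are produced, the standard technique is inverse ray-shooting: rays are propagated from the image plane through a field of point-mass microlenses plus smooth matter and external shear, and then binned on the source plane. The sketch below is a serial toy version with illustrative parameter values; it is not the GPU code used by GERLUMPH.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters (lengths in Einstein radii of the mean stellar mass)
kappa_star, kappa_smooth, gamma = 0.3, 0.1, 0.3
src_half, n_pix = 10.0, 200            # source-plane half-width and map size
shoot_half = 25.0                      # image-plane shooting region half-width
n_rays = 2_000_000

# random star field whose mean convergence is kappa_star (each star adds pi / area)
area = (2.0 * shoot_half) ** 2
stars = rng.uniform(-shoot_half, shoot_half, size=(int(kappa_star * area / np.pi), 2))

# map rays from the image plane to the source plane: smooth matter + shear, then stars
x = rng.uniform(-shoot_half, shoot_half, size=(n_rays, 2))
y = np.empty_like(x)
y[:, 0] = (1.0 - kappa_smooth - gamma) * x[:, 0]
y[:, 1] = (1.0 - kappa_smooth + gamma) * x[:, 1]
for sx, sy in stars:
    dx, dy = x[:, 0] - sx, x[:, 1] - sy
    r2 = dx * dx + dy * dy + 1e-12
    y[:, 0] -= dx / r2
    y[:, 1] -= dy / r2

# magnification map = collected ray density / density expected without lensing
hist, _, _ = np.histogram2d(y[:, 0], y[:, 1], bins=n_pix,
                            range=[[-src_half, src_half], [-src_half, src_half]])
expected = n_rays / area * (2.0 * src_half / n_pix) ** 2
mag_map = hist / expected
```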

  9. GERLUMPH DATA RELEASE 1: HIGH-RESOLUTION COSMOLOGICAL MICROLENSING MAGNIFICATION MAPS AND eResearch TOOLS

    Energy Technology Data Exchange (ETDEWEB)

    Vernardos, G.; Fluke, C. J.; Croton, D. [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria, 3122 (Australia); Bate, N. F. [Sydney Institute for Astronomy, School of Physics, A28, University of Sydney, NSW, 2006 (Australia)

    2014-03-01

    As synoptic all-sky surveys begin to discover new multiply lensed quasars, the flow of data will enable statistical cosmological microlensing studies of sufficient size to constrain quasar accretion disk and supermassive black hole properties. In preparation for this new era, we are undertaking the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). We present here the GERLUMPH Data Release 1, which consists of 12,342 high resolution cosmological microlensing magnification maps and provides the first uniform coverage of the convergence, shear, and smooth matter fraction parameter space. We use these maps to perform a comprehensive numerical investigation of the mass-sheet degeneracy, finding excellent agreement with its predictions. We study the effect of smooth matter on microlensing induced magnification fluctuations. In particular, in the minima and saddle-point regions, fluctuations are enhanced only along the critical line, while in the maxima region they are always enhanced for high smooth matter fractions (≈0.9). We describe our approach to data management, including the use of an SQL database with a Web interface for data access and online analysis, obviating the need for individuals to download large volumes of data. In combination with existing observational databases and online applications, the GERLUMPH archive represents a fundamental component of a new microlensing eResearch cloud. Our maps and tools are publicly available at http://gerlumph.swin.edu.au/.

  10. Achieving Extreme Resolution in Numerical Cosmology Using Adaptive Mesh Refinement: Resolving Primordial Star Formation

    Directory of Open Access Journals (Sweden)

    Greg L. Bryan

    2002-01-01

    As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10^12 in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.

  11. The Abacus Cosmos: A Suite of Cosmological N-body Simulations

    Science.gov (United States)

    Garrison, Lehman H.; Eisenstein, Daniel J.; Ferrer, Douglas; Tinker, Jeremy L.; Pinto, Philip A.; Weinberg, David H.

    2018-06-01

    We present a public data release of halo catalogs from a suite of 125 cosmological N-body simulations from the ABACUS project. The simulations span 40 wCDM cosmologies centered on the Planck 2015 cosmology at two mass resolutions, 4 × 10^10 h⁻¹ M_⊙ and 1 × 10^10 h⁻¹ M_⊙, in 1.1 h⁻¹ Gpc and 720 h⁻¹ Mpc boxes, respectively. The boxes are phase-matched to suppress sample variance and isolate cosmology dependence. Additional volume is available via 16 boxes of fixed cosmology and varied phase; a few boxes of single-parameter excursions from Planck 2015 are also provided. Catalogs spanning z = 1.5 to 0.1 are available for friends-of-friends and ROCKSTAR halo finders and include particle subsamples. All data products are available at https://lgarrison.github.io/AbacusCosmos.
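
    A typical first analysis of such halo catalogues is the halo mass function. The short sketch below computes dn/dlnM from an array of halo masses in a periodic box; the mock input and the bin choices are placeholders, since the actual catalogue formats and readers are documented on the project page linked above.

```python
import numpy as np

def mass_function(halo_masses, box_size, bins):
    """dn/dlnM (in (box_size units)^-3) from an array of halo masses."""
    counts, edges = np.histogram(halo_masses, bins=bins)
    dlnm = np.diff(np.log(edges))
    centres = np.sqrt(edges[1:] * edges[:-1])           # geometric bin centres
    return centres, counts / dlnm / box_size ** 3

# mock masses standing in for a catalogue read (Msun/h); 1.1 Gpc/h box as in the text
masses = 10 ** np.random.default_rng(0).uniform(12.5, 15.0, size=20000)
m, dndlnm = mass_function(masses, box_size=1100.0, bins=np.logspace(12.5, 15.0, 25))
```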

  12. Star Formation History of Dwarf Galaxies in Cosmological Hydrodynamic Simulations

    Directory of Open Access Journals (Sweden)

    Kentaro Nagamine

    2010-01-01

    We examine the past and current work on the star formation (SF) histories of dwarf galaxies in cosmological hydrodynamic simulations. The results obtained from different numerical methods are still somewhat mixed, but the differences are understandable if we consider the numerical and resolution effects. It remains a challenge to simulate the episodic nature of SF history in dwarf galaxies at late times within the cosmological context of a cold dark matter model. More work is needed to solve the mysteries of SF history of dwarf galaxies employing large-scale hydrodynamic simulations on the next generation of supercomputers.

  13. Pushing down the low-mass halo concentration frontier with the Lomonosov cosmological simulations

    Science.gov (United States)

    Pilipenko, Sergey V.; Sánchez-Conde, Miguel A.; Prada, Francisco; Yepes, Gustavo

    2017-12-01

    We introduce the Lomonosov suite of high-resolution N-body cosmological simulations covering a full box of size 32 h⁻¹ Mpc with low-mass resolution particles (2 × 10^7 h⁻¹ M_⊙) and three zoom-in simulations of overdense, underdense and mean density regions at much higher particle resolution (4 × 10^4 h⁻¹ M_⊙). The main purpose of this simulation suite is to extend the concentration-mass relation of dark matter haloes down to masses below those typically available in large cosmological simulations. The three different density regions available at higher resolution provide a better understanding of the effect of the local environment on halo concentration, known to be potentially important for small simulation boxes and small halo masses. Yet, we find the correction to be small in comparison with the scatter of halo concentrations. We conclude that zoom simulations, despite their limited representativity of the volume of the Universe, can be effectively used for the measurement of halo concentrations, at least at the halo masses probed by our simulations. In any case, after a precise characterization of this effect, we develop a robust technique to extrapolate the concentration values found in zoom simulations to larger volumes with greater accuracy. Altogether, Lomonosov provides a measure of the concentration-mass relation in the halo mass range 10^7-10^10 h⁻¹ M_⊙ with superb halo statistics. This work represents a first important step to measure halo concentrations at intermediate, yet vastly unexplored halo mass scales, down to the smallest ones. All Lomonosov data and files are public for the community's use.
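
    The concentration-mass relation discussed above is usually summarised by a power-law fit in log-log space. A minimal sketch of such a fit follows; the pivot mass, slope, and mock scatter are illustrative assumptions, not the Lomonosov measurement.

```python
import numpy as np

def fit_c_m_relation(masses, concentrations, m_pivot=1.0e8):
    """Least-squares power-law fit c(M) = c0 * (M / m_pivot)^alpha in log-log space."""
    alpha, logc0 = np.polyfit(np.log10(masses / m_pivot), np.log10(concentrations), deg=1)
    return 10.0 ** logc0, alpha

# mock usage: a shallow declining c(M) with lognormal scatter, in the spirit of the text
rng = np.random.default_rng(0)
m = 10.0 ** rng.uniform(7, 10, size=5000)                      # Msun/h
c = 30.0 * (m / 1.0e8) ** -0.08 * rng.lognormal(0.0, 0.3, m.size)
c0, alpha = fit_c_m_relation(m, c)
```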

  14. Selecting ultra-faint dwarf candidate progenitors in cosmological N-body simulations at high redshifts

    Science.gov (United States)

    Safarzadeh, Mohammadtaher; Ji, Alexander P.; Dooley, Gregory A.; Frebel, Anna; Scannapieco, Evan; Gómez, Facundo A.; O'Shea, Brian W.

    2018-06-01

    The smallest satellites of the Milky Way ceased forming stars during the epoch of reionization and thus provide archaeological access to galaxy formation at z > 6. Numerical studies of these ultrafaint dwarf galaxies (UFDs) require expensive cosmological simulations with high mass resolution that are carried out down to z = 0. However, if we are able to statistically identify UFD host progenitors at high redshifts with relatively high probabilities, we can avoid this high computational cost. To find such candidates, we analyse the merger trees of Milky Way type haloes from the high-resolution Caterpillar suite of dark matter only simulations. Satellite UFD hosts at z = 0 are identified based on four different abundance matching (AM) techniques. All the haloes at high redshifts are traced forward in time in order to compute the probability of surviving as satellite UFDs today. Our results show that selecting potential UFD progenitors based solely on their mass at z = 12 (8) results in a 10 per cent (20 per cent) chance of obtaining a surviving UFD at z = 0 in three of the AM techniques we adopted. We find that the progenitors of surviving satellite UFDs have lower virial ratios (η), and are preferentially located at large distances from the main MW progenitor, while they show no correlation with concentration parameter. Haloes with favorable locations and virial ratios are ≈3 times more likely to survive as satellite UFD candidates at z = 0.
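
    The survival probabilities quoted above amount to tracing each high-redshift candidate forward through the merger trees and asking whether its z = 0 descendant is a surviving satellite UFD. A toy sketch of that bookkeeping is given below; the data structures are hypothetical stand-ins, not the Caterpillar tree format.

```python
def ufd_survival_fraction(progenitors, survivors):
    """Fraction of candidate progenitor haloes (e.g. selected by a mass cut at
    z = 12 or z = 8) whose z = 0 descendant is a surviving satellite UFD.
    `progenitors` maps a high-z halo id to its z = 0 descendant id (None if the
    halo is disrupted); `survivors` is the set of z = 0 satellite UFD ids."""
    hits = sum(1 for descendant in progenitors.values() if descendant in survivors)
    return hits / max(len(progenitors), 1)

# toy usage: 5 candidates selected at z = 12, 2 of which survive as UFDs today
progenitors = {1: 101, 2: None, 3: 103, 4: 104, 5: None}
survivors = {101, 104}
print(ufd_survival_fraction(progenitors, survivors))   # -> 0.4
```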

  15. Compactified cosmological simulations of the infinite universe

    Science.gov (United States)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-06-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range, that it is balanced in following a comparable number of high and low k modes, and that its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically projected cosmological simulations. It uses stereographic projection for space compactification and a naive O(N²) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
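
    The naive O(N²) force calculation mentioned above simply sums softened Newtonian pairwise accelerations over all particle pairs. A minimal, unoptimised CPU sketch is shown below; it stands in for the idea only and is not the StePS implementation, which the authors note is easy to port to graphics cards.

```python
import numpy as np

def direct_forces(pos, mass, soft=0.05):
    """Naive O(N^2) pairwise Newtonian accelerations (G = 1) with Plummer softening."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                             # separation vectors to all particles
        r2 = (d ** 2).sum(axis=1) + soft ** 2
        r2[i] = np.inf                               # exclude the self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(42)
pos = rng.standard_normal((1000, 3))                 # toy particle distribution
mass = np.full(1000, 1.0 / 1000)
a = direct_forces(pos, mass)
```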

  16. Compactified Cosmological Simulations of the Infinite Universe

    Science.gov (United States)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-03-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range, that it is balanced in following a comparable number of high and low k modes, and that its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically Projected Cosmological Simulations. It uses stereographic projection for space compactification and a naive O(N²) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.

  17. Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations

    Science.gov (United States)

    O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.

    2005-09-01

    We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed ``entropy conserving'' formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative (``adiabatic'') hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution with identical initial conditions. The dark matter-only runs agree generally quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement, otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree-algorithm of GADGET at early times before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in

  18. Simulations of structure formation in interacting dark energy cosmologies

    International Nuclear Information System (INIS)

    Baldi, M.

    2009-01-01

    The evidence in favor of a dark energy component dominating the Universe, and driving its presently accelerated expansion, has progressively grown during the last decade of cosmological observations. If this dark energy is given by a dynamic scalar field, it may also have a direct interaction with other matter fields in the Universe, in particular with cold dark matter. Such interaction would imprint new features on the cosmological background evolution as well as on the growth of cosmic structure, like an additional long-range fifth-force between massive particles, or a variation in time of the dark matter particle mass. We present here the implementation of these new physical effects in the N-body code GADGET-2, and we discuss the outcomes of a series of high-resolution N-body simulations for a selected family of interacting dark energy models. We interestingly find, in contrast with previous claims, that the inner overdensity of dark matter halos decreases in these models with respect to ΛCDM, and consistently halo concentrations show a progressive reduction for increasing couplings. Furthermore, the coupling induces a bias in the overdensities of cold dark matter and baryons that determines a decrease of the halo baryon fraction below its cosmological value. These results go in the direction of alleviating tensions between astrophysical observations and the predictions of the ΛCDM model on small scales, thereby opening new room for coupled dark energy models as an alternative to the cosmological constant.
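
    One of the new physical effects mentioned above, the long-range fifth force between dark matter particles, can be illustrated schematically: in the simplest constant-coupling models the dark matter-dark matter attraction is enhanced by a factor (1 + 2β²), while pairs involving baryons feel ordinary gravity. The sketch below shows only that pairwise enhancement; it is not the modified GADGET-2 code, which also implements the time variation of the dark matter particle mass described in the abstract.

```python
import numpy as np

def pairwise_force(m1, m2, r_vec, is_dm1, is_dm2, beta=0.1, G=1.0, soft=0.05):
    """Softened Newtonian pair force with a constant-coupling enhancement:
    DM-DM pairs attract with G_eff = G * (1 + 2 * beta**2); other pairs feel plain G."""
    g_eff = G * (1.0 + 2.0 * beta ** 2) if (is_dm1 and is_dm2) else G
    r2 = float(np.dot(r_vec, r_vec)) + soft ** 2
    return g_eff * m1 * m2 * r_vec / r2 ** 1.5

# force on a DM particle from another DM particle one unit away along x
print(pairwise_force(1.0, 1.0, np.array([1.0, 0.0, 0.0]), True, True))
```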

  19. Seeding black holes in cosmological simulations

    Science.gov (United States)

    Taylor, P.; Kobayashi, C.

    2014-08-01

    We present a new model for the formation of black holes in cosmological simulations, motivated by the first star formation. Black holes form from high density peaks of primordial gas, and grow via both gas accretion and mergers. Massive black holes heat the surrounding material, suppressing star formation at the centres of galaxies, and driving galactic winds. We perform an investigation into the physical effects of the model parameters, and obtain a `best' set of these parameters by comparing the outcome of simulations to observations. With this best set, we successfully reproduce the cosmic star formation rate history, black hole mass-velocity dispersion relation, and the size-velocity dispersion relation of galaxies. The black hole seed mass is ∼10^3 M_⊙, which is orders of magnitude smaller than that which has been used in previous cosmological simulations with active galactic nuclei, but suggests that the origin of the seed black holes is the death of Population III stars.
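
    Since the seeds are motivated by the deaths of Population III stars, the seeding rule amounts to placing a ∼10^3 M_⊙ black hole wherever dense, still nearly primordial (metal-poor) gas is found in a halo that does not yet host one. The sketch below is a toy version of such a criterion with made-up thresholds; it is not the authors' implementation.

```python
import numpy as np

def seed_black_hole(gas_density, metallicity, halo_has_bh,
                    rho_thresh=1.0e4, z_thresh=1.0e-4, m_seed=1.0e3):
    """Toy seeding rule: in a halo without a black hole, place an m_seed (Msun)
    seed at the densest cell whose gas is both above a density threshold and
    below a metallicity threshold. Thresholds and inputs are illustrative."""
    if halo_has_bh:
        return None
    candidates = (gas_density > rho_thresh) & (metallicity < z_thresh)
    if not np.any(candidates):
        return None
    cell = np.unravel_index(np.argmax(np.where(candidates, gas_density, -np.inf)),
                            gas_density.shape)
    return {"cell": cell, "mass": m_seed}

# toy usage on a 32^3 grid with one artificially dense, metal-free cell
rho = np.ones((32, 32, 32)); rho[16, 16, 16] = 1.0e5
met = np.full_like(rho, 1.0e-6)
print(seed_black_hole(rho, met, halo_has_bh=False))
```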

  20. Cosmological N -body simulations including radiation perturbations

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Rampf, Cornelius; Tram, Thomas

    2017-01-01

    Cosmological N-body simulations are the standard tools to study the emergence of the observed large-scale structure of the Universe. Such simulations usually solve for the gravitational dynamics of matter within the Newtonian approximation, thus discarding general relativistic effects such as the …

  1. A small-scale dynamo in feedback-dominated galaxies - III. Cosmological simulations

    Science.gov (United States)

    Rieder, Michael; Teyssier, Romain

    2017-12-01

    Magnetic fields are widely observed in the Universe in virtually all astrophysical objects, from individual stars to entire galaxies, even in the intergalactic medium, but their specific genesis has long been debated. Due to the development of more realistic models of galaxy formation, viable scenarios are emerging to explain cosmic magnetism, thanks to both deeper observations and more efficient and accurate computer simulations. We present here a new cosmological high-resolution zoom-in magnetohydrodynamic (MHD) simulation, using the adaptive mesh refinement technique, of a dwarf galaxy with an initially weak and uniform magnetic seed field that is amplified by a small-scale dynamo (SSD) driven by supernova-induced turbulence. As first structures form from the gravitational collapse of small density fluctuations, the frozen-in magnetic field separates from the cosmic expansion and grows through compression. In a second step, star formation sets in and establishes a strong galactic fountain, self-regulated by supernova explosions. Inside the galaxy, the interstellar medium becomes highly turbulent, dominated by strong supersonic shocks, as demonstrated by the spectral analysis of the gas kinetic energy. In this turbulent environment, the magnetic field is quickly amplified via a SSD process and is finally carried out into the circumgalactic medium by a galactic wind. This realistic cosmological simulation explains how initially weak magnetic seed fields can be amplified quickly in early, feedback-dominated galaxies, and predicts, as a consequence of the SSD process, that high-redshift magnetic fields are likely to be dominated by their small-scale components.

  2. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.
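
    The core of the test described above is the conversion of a redshift histogram into a curve of galactic density against light-travel time. A schematic sketch follows; the toy lookback-time function and the bin edges are placeholders for whichever cosmological model is being tested.

```python
import numpy as np

def galactic_density_curve(redshifts, lookback_time, z_edges):
    """Counts of survey galaxies per unit light-travel time: slice the redshift
    list into buckets and convert the bucket edges to lookback times with a
    user-supplied cosmology function. A schematic of the test, not the authors' code."""
    counts, _ = np.histogram(redshifts, bins=z_edges)
    t_edges = lookback_time(np.asarray(z_edges))        # e.g. in Gyr
    dt = np.abs(np.diff(t_edges))
    return 0.5 * (t_edges[1:] + t_edges[:-1]), counts / dt

# toy conversion and mock redshifts, purely to make the sketch self-contained
toy_lookback = lambda z: 13.7 * z / (1.0 + z)
z = np.random.default_rng(3).uniform(0.1, 1.2, size=10000)
t, dens = galactic_density_curve(z, toy_lookback, np.linspace(0.1, 1.2, 23))
```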

  3. Simulating cosmologies beyond ΛCDM with PINOCCHIO

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, Luca A. [Institut de Physique Theorique, Universite Paris-Saclay CEA, CNRS, F-91191 Gif-sur-Yvette, Cedex (France); Villaescusa-Navarro, Francisco [Center for Computational Astrophysics, 160 5th Ave, New York, NY, 10010 (United States); Monaco, Pierluigi [Sezione di Astronomia, Dipartimento di Fisica, Università di Trieste, via G.B. Tiepolo 11, I-34143 Trieste (Italy); Munari, Emiliano [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Borgani, Stefano [INAF – Astronomical Observatory of Trieste, via G.B. Tiepolo 11, I-34143 Trieste (Italy); Castorina, Emanuele [Berkeley Center for Cosmological Physics, University of California, Berkeley, CA 94720 (United States); Sefusatti, Emiliano, E-mail: luca.rizzo@cea.fr, E-mail: fvillaescusa@simonsfoundation.org, E-mail: monaco@oats.inaf.it, E-mail: munari@dark-cosmology.dk, E-mail: borgani@oats.inaf.it, E-mail: ecastorina@berkeley.edu, E-mail: emiliano.sefusatti@brera.inaf.it [INAF, Osservatorio Astronomico di Brera, Via Bianchi 46, I-23807 Merate (Italy)

    2017-01-01

    We present a method that extends the capabilities of the PINpointing Orbit-Crossing Collapsed HIerarchical Objects (PINOCCHIO) code, allowing it to generate accurate dark matter halo mock catalogues in cosmological models where the linear growth factor and the growth rate depend on scale. Such cosmologies comprise, among others, models with massive neutrinos and some classes of modified gravity theories. We validate the code by comparing the halo properties from PINOCCHIO against N-body simulations, focusing on cosmologies with massive neutrinos: νΛCDM. We analyse the halo mass function, halo two-point correlation function and halo power spectrum, showing that PINOCCHIO reproduces the results from simulations with the same level of precision as the original code (∼ 5-10%). We demonstrate that the abundance of halos in cosmologies with massless and massive neutrinos from PINOCCHIO matches very well the outcome of simulations, and point out that PINOCCHIO can reproduce the Ω_ν-σ_8 degeneracy that affects the halo mass function. We finally show that the clustering properties of the halos from PINOCCHIO match accurately those from simulations both in real and redshift space, in the latter case up to k = 0.3 h Mpc⁻¹. We emphasize that the computational time required by PINOCCHIO to generate mock halo catalogues is orders of magnitude lower than the one needed for N-body simulations. This makes this tool ideal for applications like covariance matrix studies within the standard ΛCDM model but also in cosmologies with massive neutrinos or some modified gravity theories.

  4. Towards Forming a Primordial Protostar in a Cosmological AMR Simulation

    Science.gov (United States)

    Turk, Matthew J.; Abel, Tom; O'Shea, Brian W.

    2008-03-01

    Modeling the formation of the first stars in the universe is a well-posed problem and ideally suited for computational investigation. We have conducted high-resolution numerical studies of the formation of primordial stars. Beginning with primordial initial conditions appropriate for a ΛCDM model, we used the Eulerian adaptive mesh refinement code (Enzo) to achieve unprecedented numerical resolution, resolving cosmological scales as well as sub-stellar scales simultaneously. Building on the work of Abel, Bryan and Norman (2002), we followed the evolution of the first collapsing cloud until molecular hydrogen is optically thick to cooling radiation. In addition, the calculations account for the process of collision-induced emission (CIE) and add approximations to the optical depth in both molecular hydrogen roto-vibrational cooling and CIE. Also considered are the effects of chemical heating/cooling from the formation/destruction of molecular hydrogen. We present the results of these simulations, showing the formation of a 10 Jupiter-mass protostellar core bounded by a strongly aspherical accretion shock. Accretion rates are found to be as high as one solar mass per year.

  5. Very high-resolution regional climate simulations over Scandinavia-present climate

    DEFF Research Database (Denmark)

    Christensen, Ole B.; Christensen, Jens H.; Machenhauer, Bennert

    1998-01-01

    Model verification of near-surface temperature and precipitation is made using a new gridded climatology based on a high-density station network for the Scandinavian countries compiled for the present study. The simulated runoff is compared with observed data from Sweden extracted from a Swedish climatological atlas; these runoff data indicate that the precipitation analyses are underestimating the true precipitation. It is found in particular that in mountainous regions the high-resolution simulation shows improvements in the simulation of hydrologically relevant fields such as runoff and snow cover. Also, the distribution of precipitation on different intensity classes is most realistically simulated in the high-resolution simulation. It does, however, inherit certain large-scale systematic errors from the driving GCM. In many cases these errors increase with increasing resolution.

  6. Validation of High-resolution Climate Simulations over Northern Europe.

    Science.gov (United States)

    Muna, R. A.

    2005-12-01

    Two AMIP2-type (Gates 1992) experiments have been performed with climate versions of the ARPEGE/IFS model for the North Atlantic, northern Europe, and the Norwegian region, to examine the effect of increasing resolution on the simulated biases. The ECMWF reanalysis ERA-15 has been used to validate the simulations. Each of the simulations is an integration of the period 1979 to 1996. The global simulations used observed monthly mean sea surface temperatures (SST) as the lower boundary condition. All aspects but the horizontal resolution are identical in the two simulations. The first simulation has a uniform horizontal resolution of T63L. The second one has a variable resolution (T106Lc3) with the highest resolution in the Norwegian Sea. Both simulations have 31 vertical layers in the same locations. For each simulation the results were divided into two seasons: winter (DJF) and summer (JJA). The parameters investigated were mean sea level pressure, geopotential, and temperature at 850 hPa and 500 hPa. To find out the causes of the temperature bias during summer, latent and sensible heat flux, total cloud cover, and total precipitation were analyzed. The high-resolution simulation exhibits a more or less realistic climate over the Nordic, Arctic, and European regions. The overall performance of the simulations shows improvements in generally all investigated fields with increasing resolution over the target area, both in winter (DJF) and summer (JJA).

  7. Cosmological implications of the MAXIMA-1 high-resolution cosmic microwave background anisotropy measurement

    International Nuclear Information System (INIS)

    Stompor, R.; Abroe, M.; Ade, P.; Balbi, A.; Barbosa, D.; Bock, J.; Borrill, J.; Boscaleri, A.; de Bernardis, P.; Ferreira, P.G.; Hanany, S.; Hristov, V.; Jaffe, A.H.; Lee, A.T.; Pascale, E.; Rabii, B.; Richards, P.L.; Smoot, G.F.; Winant, C.D.; Wu, J.H.P.

    2001-01-01

    We discuss the cosmological implications of the new constraints on the power spectrum of the cosmic microwave background (CMB) anisotropy derived from a new high-resolution analysis of the MAXIMA-1 measurement. The power spectrum indicates excess power at ℓ ∼ 860 over the average level of power at 411 ≤ ℓ ≤ 785. This excess is statistically significant at the ∼95 per cent confidence level. Its position coincides with that of the third acoustic peak, as predicted by generic inflationary models selected to fit the first acoustic peak as observed in the data. The height of the excess power agrees with the predictions of a family of inflationary models with cosmological parameters that are fixed to fit the CMB data previously provided by the BOOMERANG-LDB and MAXIMA-1 experiments. Our results therefore lend support to inflationary models and more generally to the dominance of adiabatic coherent perturbations in the structure formation of the universe. At the same time, they seem to disfavor a large variety of the nonstandard (but inflation-based) models that have been proposed to improve the quality of fits to the CMB data and the consistency with other cosmological observables. Within standard inflationary models, our results combined with the COBE/Differential Microwave Radiometer data give best-fit values and 95 per cent confidence limits for the baryon density, Ω_b h² ≃ 0.033 ± 0.013, and the total density, Ω = 0.90 (+0.18, −0.16). The primordial spectrum slope (n_s) and the optical depth to the last scattering surface (τ_c) are found to be degenerate and to obey the relation n_s ≃ (0.99 ± 0.14) + 0.46 τ_c, for τ_c ≤ 0.5 (all at 95 per cent confidence levels).

  8. Effects of the initial conditions on cosmological $N$-body simulations

    OpenAIRE

    L'Huillier, Benjamin; Park, Changbom; Kim, Juhan

    2014-01-01

    Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the infl...

  9. Cosmological simulations of multicomponent cold dark matter.

    Science.gov (United States)

    Medvedev, Mikhail V

    2014-08-15

    The nature of dark matter is unknown. A number of dark matter candidates are quantum flavor-mixed particles but this property has never been accounted for in cosmology. Here we explore this possibility from the first principles via extensive N-body cosmological simulations and demonstrate that the two-component dark matter model agrees with observational data at all scales. Substantial reduction of substructure and flattening of density profiles in the centers of dark matter halos found in simulations can simultaneously resolve several outstanding puzzles of modern cosmology. The model shares the "why now?" fine-tuning caveat pertinent to all self-interacting models. Predictions for direct and indirect detection dark matter experiments are made.

  10. GALAXY CLUSTER RADIO RELICS IN ADAPTIVE MESH REFINEMENT COSMOLOGICAL SIMULATIONS: RELIC PROPERTIES AND SCALING RELATIONSHIPS

    International Nuclear Information System (INIS)

    Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.

    2011-01-01

    Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.

  11. Remapping dark matter halo catalogues between cosmological simulations

    Science.gov (United States)

    Mead, A. J.; Peacock, J. A.

    2014-05-01

    We present and test a method for modifying the catalogue of dark matter haloes produced from a given cosmological simulation, so that it resembles the result of a simulation with an entirely different set of parameters. This extends the method of Angulo & White, which rescales the full particle distribution from a simulation. Working directly with the halo catalogue offers an advantage in speed, and also allows modifications of the internal structure of the haloes to account for non-linear differences between cosmologies. Our method can be used directly on a halo catalogue in a self-contained manner without any additional information about the overall density field; although the large-scale displacement field is required by the method, this can be inferred from the halo catalogue alone. We show proof of concept of our method by rescaling a matter-only simulation with no baryon acoustic oscillation (BAO) features to a more standard Λ cold dark matter model containing a cosmological constant and a BAO signal. In conjunction with the halo occupation approach, this method provides a basis for the rapid generation of mock galaxy samples spanning a wide range of cosmological parameters.
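
    At its simplest, rescaling of the kind described above stretches comoving lengths by a factor s and adjusts masses so that the mean matter density matches the target cosmology; the full method additionally relabels redshifts, perturbs large-scale displacements, and modifies halo internal structure. The sketch below shows only the length and mass scaling step (in h⁻¹ units) as an illustration, not the authors' algorithm.

```python
import numpy as np

def rescale_catalogue(pos, mass, box, s, omega_m_orig, omega_m_target):
    """Crudest ingredient of halo-catalogue rescaling: stretch lengths by s and
    rescale masses so the mean comoving matter density matches the target
    cosmology, M -> s^3 * (Omega_m'/Omega_m) * M. Redshift relabelling and
    large-scale mode corrections are deliberately omitted."""
    return pos * s, mass * s ** 3 * (omega_m_target / omega_m_orig), box * s

# toy usage: shrink a 500 Mpc/h box by 10% while moving from Omega_m = 0.25 to 0.31
pos = np.random.default_rng(5).uniform(0.0, 500.0, size=(1000, 3))
mass = np.full(1000, 1.0e13)
new_pos, new_mass, new_box = rescale_catalogue(pos, mass, 500.0, 0.9, 0.25, 0.31)
```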

  12. HOT GAS HALOS AROUND DISK GALAXIES: CONFRONTING COSMOLOGICAL SIMULATIONS WITH OBSERVATIONS

    International Nuclear Information System (INIS)

    Rasmussen, Jesper; Sommer-Larsen, Jesper; Pedersen, Kristian; Toft, Sune; Grove, Lisbeth F.; Benson, Andrew; Bower, Richard G.

    2009-01-01

    Models of disk galaxy formation commonly predict the existence of an extended reservoir of accreted hot gas surrounding massive spirals at low redshift. As a test of these models, we use X-ray and Hα data of the two massive, quiescent edge-on spirals NGC 5746 and NGC 5170 to investigate the amount and origin of any hot gas in their halos. Contrary to our earlier claim, the Chandra analysis of NGC 5746, employing more recent calibration data, does not reveal any significant evidence for diffuse X-ray emission outside the optical disk, with a 3σ upper limit to the halo X-ray luminosity of 4 × 10^39 erg s⁻¹. An identical study of the less massive NGC 5170 also fails to detect any extraplanar X-ray emission. By extracting hot halo properties of disk galaxies formed in cosmological hydrodynamical simulations, we compare these results to expectations for cosmological accretion of hot gas by spirals. For Milky-Way-sized galaxies, these high-resolution simulations predict hot halo X-ray luminosities which are lower by a factor of ∼2 compared to our earlier results reported by Toft et al. We find the new simulation predictions to be consistent with our observational constraints for both NGC 5746 and NGC 5170, while also confirming that the hot gas detected so far around more actively star-forming spirals is in general probably associated with stellar activity in the disk. Observational results on quiescent disk galaxies at the high-mass end are nevertheless providing powerful constraints on theoretical predictions, and hence on the assumed input physics in numerical studies of disk galaxy formation and evolution.

  13. ANALYZING AND VISUALIZING COSMOLOGICAL SIMULATIONS WITH ParaView

    International Nuclear Information System (INIS)

    Woodring, Jonathan; Ahrens, James; Heitmann, Katrin; Pope, Adrian; Fasel, Patricia; Hsu, Chung-Hsing; Habib, Salman

    2011-01-01

    The advent of large cosmological sky surveys, ushering in the era of precision cosmology, has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
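
    The friends-of-friends halo finder mentioned above links particles separated by less than a fixed fraction (commonly b ≈ 0.2) of the mean interparticle spacing and identifies the connected components as haloes. A compact serial sketch using SciPy follows; it illustrates the algorithm only and is not the parallel finder implemented in ParaView.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def fof_halos(pos, box, b=0.2):
    """Friends-of-friends group id per particle, with periodic boundaries."""
    n = len(pos)
    link = b * box / n ** (1.0 / 3.0)                   # linking length
    pairs = cKDTree(pos, boxsize=box).query_pairs(link, output_type="ndarray")
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return labels

# toy usage: 5000 uniformly placed particles in a 100 (length unit) periodic box
rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 100.0, size=(5000, 3))
labels = fof_halos(pos, box=100.0)
```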

  14. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
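
    For concreteness, the multiplicative idea behind SPPT can be sketched in a few lines: the net parametrised tendency is multiplied by (1 + r), where r is a random pattern with prescribed spatial and temporal correlations. The toy implementation below uses a crudely smoothed AR(1) field with made-up amplitude and correlation scales; the operational ECMWF scheme instead builds r with a spectral pattern generator.

```python
import numpy as np

def sppt_step(tendency, r_prev, sigma=0.3, tau_steps=20, rng=None):
    """One time step of a toy SPPT-like perturbation: multiply the tendency by
    (1 + r), where r is an AR(1)-in-time, spatially smoothed noise field
    clipped to [-1, 1]."""
    rng = rng if rng is not None else np.random.default_rng()
    phi = np.exp(-1.0 / tau_steps)                       # temporal autocorrelation
    noise = rng.standard_normal(r_prev.shape)
    for axis in (0, 1):                                  # crude spatial smoothing
        noise = 0.25 * (np.roll(noise, 1, axis) + 2.0 * noise + np.roll(noise, -1, axis))
    r_new = np.clip(phi * r_prev + sigma * np.sqrt(1.0 - phi ** 2) * noise, -1.0, 1.0)
    return tendency * (1.0 + r_new), r_new

# usage: evolve the pattern alongside the model and perturb each step's tendencies
r = np.zeros((64, 64))
tendency = np.full((64, 64), 1.0e-5)                     # placeholder parametrised tendency
perturbed, r = sppt_step(tendency, r)
```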

  15. THE PRESSURE OF THE STAR-FORMING INTERSTELLAR MEDIUM IN COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Munshi, Ferah; Quinn, Thomas R.; Governato, Fabio; Christensen, Charlotte; Wadsley, James; Loebman, Sarah; Shen, Sijing

    2014-01-01

    We examine the pressure of the star-forming interstellar medium (ISM) of Milky-Way-sized disk galaxies using fully cosmological SPH+N-body, high-resolution simulations. These simulations include explicit treatment of metal-line cooling in addition to dust and self-shielding, H₂-based star formation. The four simulated halos have masses ranging from a few times 10^10 to nearly 10^12 solar masses. Using a kinematic decomposition of these galaxies into present-day bulge and disk components, we find that the typical pressure of the star-forming ISM in the present-day bulge is higher than that in the present-day disk by an order of magnitude. We also find that the pressure of the star-forming ISM at high redshift is, on average, higher than ISM pressures at low redshift. This explains why the bulge forms at higher pressures: the disk assembles at lower redshift when the ISM exhibits lower pressure and the bulge forms at high redshift when the ISM has higher pressure. If ISM pressure and IMF variation are tied together, these results could indicate a time-dependent IMF in Milky-Way-like systems as well as a different IMF in the bulge and the disk.

  16. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short hindcasts and longer simulations. Such short tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in

  17. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  18. Analyzing and Visualizing Cosmological Simulations with ParaView

    Science.gov (United States)

    Woodring, Jonathan; Heitmann, Katrin; Ahrens, James; Fasel, Patricia; Hsu, Chung-Hsing; Habib, Salman; Pope, Adrian

    2011-07-01

    The advent of large cosmological sky surveys, ushering in the era of precision cosmology, has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
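
    As an illustration of the friends-of-friends criterion used by such halo finders, the sketch below links particles closer than a chosen linking length with a KD-tree neighbour search and a union-find pass. It is a minimal serial version with toy data: it ignores periodic boundaries and the distributed-memory machinery that an in situ ParaView filter actually requires.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def friends_of_friends(pos, linking_length):
        """Group particles into FOF halos: any two particles closer than the
        linking length belong to the same group (union-find over KD-tree pairs)."""
        n = len(pos)
        parent = np.arange(n)

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i

        tree = cKDTree(pos)
        for i, j in tree.query_pairs(r=linking_length):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[rj] = ri

        return np.array([find(i) for i in range(n)])

    # usage: linking length ~0.2 times the mean interparticle spacing is typical
    pos = np.random.rand(10000, 3)               # toy particle positions in a unit box
    mean_sep = (1.0 / 10000) ** (1.0 / 3.0)
    labels = friends_of_friends(pos, 0.2 * mean_sep)
    ```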

  19. The origin of kinematically distinct cores and misaligned gas discs in galaxies from cosmological simulations

    Science.gov (United States)

    Taylor, Philip; Federrath, Christoph; Kobayashi, Chiaki

    2018-06-01

    Integral field spectroscopy surveys provide spatially resolved gas and stellar kinematics of galaxies. They have unveiled a range of atypical kinematic phenomena, which require detailed modelling to understand. We present results from a cosmological simulation that includes stellar and AGN feedback. We find that the distribution of angles between the gas and stellar angular momenta of galaxies is not affected by projection effects. We examine five galaxies (≈6 per cent of well-resolved galaxies) that display atypical kinematics; two of the galaxies have kinematically distinct cores (KDC), while the other three have counter-rotating gas and stars. All five form the majority of their stars in the field, subsequently falling into cosmological filaments where the relative orientation of the stellar angular momentum and the bulk gas flow leads to the formation of a counter-rotating gas disc. The accreted gas exchanges angular momentum with pre-existing co-rotating gas, causing it to fall to the centre of the galaxy. This triggers low-level AGN feedback, which reduces star formation. Later, two of the galaxies experience a minor merger (stellar mass ratio ~1/10) with a galaxy on a retrograde orbit compared to the spin of the stellar component of the primary. This produces the KDCs, and is a different mechanism than suggested by other works. The role of minor mergers in the kinematic evolution of galaxies may have been under-appreciated in the past, and large, high-resolution cosmological simulations will be necessary to gain a better understanding in this area.

  20. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations

    Science.gov (United States)

    Rino, C. L.; Carrano, C. S.; Yokoyama, T.

    2017-12-01

    In a recent paper, under review, equatorial-plasma-bubble (EPB) simulations were used to conduct a comparative analysis of the EPB spectral characteristics with high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics as the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional index by one. The relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total-electron-content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently
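
    For reference, a two-component inverse-power-law irregularity spectrum of the kind fitted by the IPE diagnostics can be written schematically as a broken power law; the break wavenumber and the two spectral indices below are generic symbols, not values from the study.

    ```latex
    % Schematic one-dimensional two-component inverse-power-law spectrum with a
    % break at k_b; p_1 and p_2 are the shallow and steep spectral indices
    \Phi(k) \propto
    \begin{cases}
      k^{-p_1}, & k \le k_b,\\[4pt]
      k_b^{\,p_2 - p_1}\, k^{-p_2}, & k > k_b .
    \end{cases}
    ```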

  1. Quantitative and comparative visualization applied to cosmological simulations

    International Nuclear Information System (INIS)

    Ahrens, James; Heitmann, Katrin; Habib, Salman; Ankeny, Lee; McCormick, Patrick; Inman, Jeff; Armstrong, Ryan; Ma, Kwan-Liu

    2006-01-01

    Cosmological simulations follow the formation of nonlinear structure in dark and luminous matter. The associated simulation volumes and dynamic range are very large, making visualization both a necessary and challenging aspect of the analysis of these datasets. Our goal is to understand sources of inconsistency between different simulation codes that are started from the same initial conditions. Quantitative visualization supports the definition and reasoning about analytically defined features of interest. Comparative visualization supports the ability to visually study, side by side, multiple related visualizations of these simulations. For instance, a scientist can visually distinguish that there are fewer halos (localized lumps of tracer particles) in low-density regions for one simulation code out of a collection. This qualitative result will enable the scientist to develop a hypothesis, such as loss of halos in low-density regions due to limited resolution, to explain the inconsistency between the different simulations. Quantitative support then allows one to confirm or reject the hypothesis. If the hypothesis is rejected, this step may lead to new insights and a new hypothesis, not available from the purely qualitative analysis. We will present methods to significantly improve the scientific analysis process by incorporating quantitative analysis as the driver for visualization. Aspects of this work are included as part of two visualization tools, ParaView, an open-source large data visualization tool, and Scout, an analysis-language based, hardware-accelerated visualization tool.

  2. Modern Cosmology

    CERN Document Server

    Zhang Yuan Zhong

    2002-01-01

    This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The ...

  3. Cosmology for high energy physicists

    International Nuclear Information System (INIS)

    Albrecht, A.

    1987-11-01

    The standard big bang model of cosmology is presented. Although not perfect, its many successes make it a good starting point for most discussions of cosmology. Places are indicated where well understood laboratory physics is incorporated into the big bang, leading to successful predictions. Much less established aspects of high energy physics and some of the new ideas they have introduced into the field of cosmology are discussed, such as string theory, inflation and monopoles. 49 refs., 5 figs

  4. A numerical relativity scheme for cosmological simulations

    Science.gov (United States)

    Daverio, David; Dirian, Yves; Mitsou, Ermis

    2017-12-01

    Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large scale structure survey data. We propose a new 3+1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.

  5. Galaxy Formation Efficiency and the Multiverse Explanation of the Cosmological Constant with EAGLE Simulations

    Science.gov (United States)

    Barnes, Luke A.; Elahi, Pascal J.; Salcido, Jaime; Bower, Richard G.; Lewis, Geraint F.; Theuns, Tom; Schaller, Matthieu; Crain, Robert A.; Schaye, Joop

    2018-04-01

    Models of the very early universe, including inflationary models, are argued to produce varying universe domains with different values of fundamental constants and cosmic parameters. Using the cosmological hydrodynamical simulation code from the EAGLE collaboration, we investigate the effect of the cosmological constant on the formation of galaxies and stars. We simulate universes with values of the cosmological constant ranging from Λ = 0 to Λ0 × 300, where Λ0 is the value of the cosmological constant in our Universe. Because the global star formation rate in our Universe peaks at t = 3.5 Gyr, before the onset of accelerating expansion, increases in Λ of even an order of magnitude have only a small effect on the star formation history and efficiency of the universe. We use our simulations to predict the observed value of the cosmological constant, given a measure of the multiverse. Whether the cosmological constant is successfully predicted depends crucially on the measure. The impact of the cosmological constant on the formation of structure in the universe does not seem to be a sharp enough function of Λ to explain its observed value alone.

  6. Origin of chemically distinct discs in the Auriga cosmological simulations

    Science.gov (United States)

    Grand, Robert J. J.; Bustamante, Sebastián; Gómez, Facundo A.; Kawata, Daisuke; Marinacci, Federico; Pakmor, Rüdiger; Rix, Hans-Walter; Simpson, Christine M.; Sparre, Martin; Springel, Volker

    2018-03-01

    The stellar disc of the Milky Way shows complex spatial and abundance structure that is central to understanding the key physical mechanisms responsible for shaping our Galaxy. In this study, we use six very high resolution cosmological zoom-in simulations of Milky Way-sized haloes to study the prevalence and formation of chemically distinct disc components. We find that our simulations develop a clearly bimodal distribution in the [α/Fe]-[Fe/H] plane. We find two main pathways to creating this dichotomy, which operate in different regions of the galaxies: (a) an early (z > 1) and intense high-[α/Fe] star formation phase in the inner region (R ≲ 5 kpc) induced by gas-rich mergers, followed by more quiescent low-[α/Fe] star formation; and (b) an early phase of high-[α/Fe] star formation in the outer disc followed by a shrinking of the gas disc owing to a temporarily lowered gas accretion rate, after which disc growth resumes. In process (b), a double-peaked star formation history around the time and radius of disc shrinking accentuates the dichotomy. If the early star formation phase is prolonged (rather than short and intense), chemical evolution proceeds as per process (a) in the inner region, but the dichotomy is less clear. In the outer region, the dichotomy is only evident if the first intense phase of star formation covers a large enough radial range before disc shrinking occurs; otherwise, the outer disc consists of only low-[α/Fe] sequence stars. We discuss the implication that both processes occurred in the Milky Way.

  7. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    Science.gov (United States)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

    Flooding is a significant threat that causes major disruption in many societies worldwide, and ongoing climate change further increases the flood risk to these societies and their economies. The improvement in spatial resolution and accuracy of topography and land-use data made possible by remote sensing techniques supports integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) takes place. Satellite images of high resolution are elaborated. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of Land Use/Land Cover maps. In addition, soil maps are updated by means of radar satellite images. These high-resolution data are used, in a novel way, to simulate and validate several historical flood events in Mediterranean watersheds that have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resource Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data provides detailed validation results of correspondingly high precision. Furthermore, the meteorological forecasting data, which are also combined with the simulation model results, support the development of an integrated flood forecasting and early warning system tool capable of confronting or even preventing this imminent risk. The research reported in this paper was fully supported by the
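
    As a pointer to how the NRCS curve number step works, the following sketch computes event runoff from rainfall depth with the standard CN equations; the CN value and rainfall depth in the usage line are made-up illustrative numbers, and a real HEC-HMS setup distributes this calculation over sub-basins and time.

    ```python
    def scs_cn_runoff(precip_mm, cn, lambda_ia=0.2):
        """SCS/NRCS Curve Number direct runoff (mm) for an event rainfall depth.
        S is the potential maximum retention; Ia = lambda_ia * S is the initial
        abstraction (0.2 is the classical value)."""
        s = 25400.0 / cn - 254.0          # retention in mm
        ia = lambda_ia * s
        if precip_mm <= ia:
            return 0.0
        return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

    # usage: 60 mm storm over a catchment with CN = 75 (illustrative values)
    q = scs_cn_runoff(60.0, 75.0)
    ```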

  8. Numerical techniques for large cosmological N-body simulations

    International Nuclear Information System (INIS)

    Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

    1985-01-01

    We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle mesh (PM) codes and P³M codes in which a higher resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r² particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P³M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important
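
    To illustrate the particle-mesh part of the force calculation discussed above, the sketch below assigns particles to a grid and solves the Poisson equation with FFTs. It is a bare-bones, unit-free illustration (nearest-grid-point assignment, no force interpolation back to the particles, no short-range particle-particle correction), so it shows the PM idea rather than a usable P³M code.

    ```python
    import numpy as np

    def pm_potential(pos, box, ngrid):
        """Minimal particle-mesh gravity sketch: NGP mass assignment, then the
        Poisson equation solved in Fourier space (phi_k ~ -delta_k / k^2, with
        G and the mean density absorbed into the units)."""
        # nearest-grid-point density assignment
        idx = np.floor(pos / box * ngrid).astype(int) % ngrid
        rho = np.zeros((ngrid,) * 3)
        np.add.at(rho, tuple(idx.T), 1.0)
        delta = rho / rho.mean() - 1.0

        # Poisson equation in k-space
        k = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                      # avoid division by zero (DC term)
        phi_k = -np.fft.fftn(delta) / k2
        phi_k[0, 0, 0] = 0.0
        return np.fft.ifftn(phi_k).real        # potential on the mesh

    # forces would follow from finite differences of the potential,
    # interpolated back to the particle positions
    pos = np.random.rand(50000, 3)             # toy particles in a unit box
    phi = pm_potential(pos, box=1.0, ngrid=64)
    ```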

  9. Modeling Supermassive Black Holes in Cosmological Simulations

    Science.gov (United States)

    Tremmel, Michael

    My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component to massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.

  10. Quantification of discreteness effects in cosmological N-body simulations: Initial conditions

    International Nuclear Information System (INIS)

    Joyce, M.; Marcos, B.

    2007-01-01

    The relation between the results of cosmological N-body simulations, and the continuum theoretical models they simulate, is currently not understood in a way which allows a quantification of N dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004).] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e. theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological type power spectra, one obtains as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological type spectra the theoretical mass variance in spheres and two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude (or, equivalently, the initial redshift of the cosmological simulation) goes to zero, at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular, with respect to the definition of the continuum limit of N-body simulations

  11. MassiveNuS: cosmological massive neutrino simulations

    Science.gov (United States)

    Liu, Jia; Bird, Simeon; Zorrilla Matilla, José Manuel; Hill, J. Colin; Haiman, Zoltán; Madhavacheril, Mathew S.; Petri, Andrea; Spergel, David N.

    2018-03-01

    The non-zero mass of neutrinos suppresses the growth of cosmic structure on small scales. Since the level of suppression depends on the sum of the masses of the three active neutrino species, the evolution of large-scale structure is a promising tool to constrain the total mass of neutrinos and possibly shed light on the mass hierarchy. In this work, we investigate these effects via a large suite of N-body simulations that include massive neutrinos using an analytic linear-response approximation: the Cosmological Massive Neutrino Simulations (MassiveNuS). The simulations include the effects of radiation on the background expansion, as well as the clustering of neutrinos in response to the nonlinear dark matter evolution. We allow three cosmological parameters to vary: the neutrino mass sum Mν in the range of 0–0.6 eV, the total matter density Ωm, and the primordial power spectrum amplitude As. The rms density fluctuation in spheres of 8 comoving Mpc/h (σ8) is a derived parameter as a result. Our data products include N-body snapshots, halo catalogues, merger trees, ray-traced galaxy lensing convergence maps for four source redshift planes between zs=1–2.5, and ray-traced cosmic microwave background lensing convergence maps. We describe the simulation procedures and code validation in this paper. The data are publicly available at http://columbialensing.org.

  12. Montecarlo simulation for a new high resolution elemental analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion

    1996-12-31

    Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy is just above the absorption edge associated with each element, because of the emission of characteristic X rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample into a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, for which a synchrotron would be an ideal source, due to its high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  13. Montecarlo simulation for a new high resolution elemental analysis methodology

    International Nuclear Information System (INIS)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy is just above the absorption edge associated with each element, because of the emission of characteristic X rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample into a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, for which a synchrotron would be an ideal source, due to its high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  14. Achieving accurate simulations of urban impacts on ozone at high resolution

    International Nuclear Information System (INIS)

    Li, J; Georgescu, M; Mahalov, A; Moustaoui, M; Hyde, P

    2014-01-01

    The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O₃] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape-preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and, (2) the urban land cover is replaced with the region's native shrubland. Impacts due to the presence of the built environment on [O₃] are highly heterogeneous across the metropolitan area. Increased near-surface [O₃] due to urbanization of 10–20 ppb is predominantly a nighttime phenomenon, while simulated impacts during daytime are negligible. Urbanization narrows the daily [O₃] range (by virtue of increasing nighttime minima), an impact largely due to the region's urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas. (letter)
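
    The MCI step can be approximated with any shape-preserving monotone cubic interpolant; the sketch below uses SciPy's PCHIP interpolator on a one-dimensional toy emission profile as a stand-in for the paper's two-dimensional 4 km to 1 km downscaling. The grid spacing and emission values are invented for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # coarse 4 km emission profile along one row of the inventory grid (toy values)
    x_coarse = np.arange(0.0, 40.0, 4.0)           # km
    e_coarse = np.array([1.0, 1.2, 3.5, 8.0, 7.5, 4.0, 2.0, 1.5, 1.1, 1.0])

    # shape-preserving (monotone) cubic interpolation avoids the over/undershoots
    # that ordinary cubic splines introduce near sharp urban-rural gradients
    mci = PchipInterpolator(x_coarse, e_coarse)
    x_fine = np.arange(0.0, 36.0 + 1e-9, 1.0)      # 1 km target grid
    e_fine = mci(x_fine)
    ```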

  15. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    Science.gov (United States)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, we also explored the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from 2.5 deg resolution NCEP reanalysis at 6-hourly intervals, with SSTs from NCEP reanalysis data used as lower boundary forcing. The 60 km CCAM run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was then downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high-resolution model runs was to obtain a deeper understanding of linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  16. MODELING AND SIMULATION OF HIGH RESOLUTION OPTICAL REMOTE SENSING SATELLITE GEOMETRIC CHAIN

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2018-04-01

    High resolution satellites with longer focal lengths and larger apertures have been widely used in recent years for georeferencing the observed scene. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform including attitude and position information, the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for the geolocation accuracy is put forward by introducing a new index, the angle between the camera and the star tracker. The model is rigorously validated by simulating the geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating the geolocation accuracy before satellite launch.

  17. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic, variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, means that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  18. Modern Cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuanzhong

    2002-06-21

    This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The chapters on the early universe involve inflationary theories, particle physics in the early universe, and the creation of matter in the universe. The chapters on dark matter (DM) deal with experimental evidence of DM, neutrino oscillations, DM candidates in supersymmetry models and supergravity, structure formation in the universe, dark-matter search with innovative techniques, and dark energy (cosmological constant), etc. The chapters about structure in the universe consist of the basis for structure formation, quantifying large-scale structure, cosmic background fluctuation, galaxy space distribution, and the clustering of galaxies. In the field of modern observational cosmology, galaxy surveys and cluster surveys are given. The chapter on gravitational lensing describes the lens basics and models, galactic microlensing and galaxy clusters as lenses. The last chapter, 'Numerical simulations in cosmology', deals with spatial and

  19. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.

  20. IMPLEMENTING THE DC MODE IN COSMOLOGICAL SIMULATIONS WITH SUPERCOMOVING VARIABLES

    International Nuclear Information System (INIS)

    Gnedin, Nickolay Y.; Kravtsov, Andrey V.; Rudd, Douglas H.

    2011-01-01

    As emphasized by previous studies, proper treatment of the density fluctuation on the fundamental scale of a cosmological simulation volume (the DC mode) is critical for accurate modeling of spatial correlations on scales ≳ 10% of the simulation box size. We provide further illustration of the effects of the DC mode on the abundance of halos in small boxes and show that it is straightforward to incorporate this mode in cosmological codes that use the 'supercomoving' variables. The equations governing the evolution of dark matter and baryons recast with these variables are particularly simple and include the expansion factor, and hence the effect of the DC mode, explicitly only in the Poisson equation.
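
    Schematically, and without reproducing the authors' supercomoving change of variables, the DC mode enters as a spatially constant offset to the density contrast that sources the Poisson equation; the expression below is this generic statement, with Δ_DC the linearly evolving mean overdensity of the box.

    ```latex
    % Generic comoving Poisson equation with the box-scale (DC) overdensity added
    % to the resolved density contrast; in the supercomoving formulation this is
    % the only place the DC mode appears explicitly
    \nabla^2 \phi = 4\pi G\, a^2\, \bar{\rho}_m
        \left[\, \delta(\mathbf{x}) + \Delta_{\rm DC}(a) \,\right]
    ```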

  1. Clues to the 'Magellanic Galaxy' from cosmological simulations

    NARCIS (Netherlands)

    Sales, Laura V.; Navarro, Julio F.; Cooper, Andrew P.; White, Simon D. M.; Frenk, Carlos S.; Helmi, Amina

    2011-01-01

    We use cosmological simulations from the Aquarius Project to study the orbital history of the Large Magellanic Cloud (LMC) and its potential association with other satellites of the Milky Way (MW). We search for dynamical analogues to the LMC and find a subhalo that matches the LMC position and

  2. Microphysics, cosmology, and high energy astrophysics

    International Nuclear Information System (INIS)

    Hoyle, F.

    1974-01-01

    The discussion of microphysics, cosmology, and high energy astrophysics includes particle motion in an electromagnetic field, conformal transformations, conformally invariant theory of gravitation, particle orbits, Friedman models with k = 0, +-1, the history and present status of steady-state cosmology, and the nature of mass. (U.S.)

  3. Dark matter direct detection signals inferred from a cosmological N-body simulation with baryons

    International Nuclear Information System (INIS)

    Ling, F.-S.; Nezri, E.; Athanassoula, E.; Teyssier, R.

    2010-01-01

    We extract at redshift z = 0 a Milky Way sized object including gas, stars and dark matter (DM) from a recent, high-resolution cosmological N-body simulation with baryons. Its resolution is sufficient to witness the formation of a rotating disk and bulge at the center of the halo potential, therefore providing a realistic description of the birth and the evolution of galactic structures in the ΛCDM cosmology paradigm. The phase-space structure of the central galaxy reveals that, throughout a thick region, the dark halo is co-rotating on average with the stellar disk. At the Earth's location, the rotating component, sometimes called dark disk in the literature, is characterized by a minimum lag velocity v_lag ≅ 75 km/s, in which case it contributes to around 25% of the total DM local density, whose value is ρ_DM ≅ 0.37 GeV/cm³. The velocity distributions also show strong deviations from pure Gaussian and Maxwellian distributions, with a sharper drop of the high velocity tail. We give a detailed study of the impact of these features on the predictions for DM signals in direct detection experiments. In particular, the question of whether the modulation signal observed by DAMA is or is not excluded by limits set by other experiments (CDMS, XENON and CRESST...) is re-analyzed and compared to the case of a standard Maxwellian halo. We consider spin-independent interactions for both the elastic and the inelastic scattering scenarios. For the first time, we calculate the allowed regions for DAMA and the exclusion limits of other null experiments directly from the velocity distributions found in the simulation. We then compare these results with the predictions of various analytical distributions. We find that the compatibility between DAMA and the other experiments is improved. In the elastic scenario, the DAMA modulation signal is slightly enhanced in the so-called channeling region, as a result of several effects that include a departure from a Maxwellian

  4. Simulation study for high resolution alpha particle spectrometry with mesh type collimator

    International Nuclear Information System (INIS)

    Park, Seunghoon; Kwak, Sungwoo; Kang, Hanbyeol; Shin, Jungki; Park, Iljin

    2014-01-01

    Alpha particle spectrometry with a mesh type collimator plays a crucial role in identifying specific radionuclides in a radioactive source collected from the atmosphere or environment. Without collimation the energy resolution is degraded because particles emitted at high angles have a longer path to travel in the air, and collisions with the background gas therefore increase. The collimator can cut out particles travelling at high angles, so an energy distribution with high resolution can be obtained. Therefore, the mesh type collimator is simulated for high resolution alpha particle spectrometry. In conclusion, the collimator can improve resolution: by cutting out particles emitted at high angles, it reduces the low-energy tail and the broadening of the energy distribution. The mesh diameter is found to be an important factor controlling resolution and counting efficiency. Therefore, a target nuclide, for example ²³⁵U, can be distinguished by a detector with a collimator in a mixture of various nuclides, for example ²³²U, ²³⁸U, and ²³²Th.

  5. Cosmological simulations of isotropic conduction in galaxy clusters

    International Nuclear Information System (INIS)

    Smith, Britton; O'Shea, Brian W.; Voit, G. Mark; Ventimiglia, David; Skillman, Samuel W.

    2013-01-01

    Simulations of galaxy clusters have a difficult time reproducing the radial gas-property gradients and red central galaxies observed to exist in the cores of galaxy clusters. Thermal conduction has been suggested as a mechanism that can help bring simulations of cluster cores into better alignment with observations by stabilizing the feedback processes that regulate gas cooling, but this idea has not yet been well tested with cosmological numerical simulations. Here we present cosmological simulations of 10 galaxy clusters performed with five different levels of isotropic Spitzer conduction, which alters both the cores and outskirts of clusters, though not dramatically. In the cores, conduction flattens central temperature gradients, making them nearly isothermal and slightly lowering the central density, but failing to prevent a cooling catastrophe there. Conduction has little effect on temperature gradients outside of cluster cores because outward conductive heat flow tends to inflate the outer parts of the intracluster medium (ICM), instead of raising its temperature. In general, conduction tends to reduce temperature inhomogeneity in the ICM, but our simulations indicate that those homogenizing effects would be extremely difficult to observe in ∼5 keV clusters. Outside the virial radius, our conduction implementation lowers the gas densities and temperatures because it reduces the Mach numbers of accretion shocks. We conclude that, despite the numerous small ways in which conduction alters the structure of galaxy clusters, none of these effects are significant enough to make the efficiency of conduction easily measurable, unless its effects are more pronounced in clusters hotter than those we have simulated.
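
    For context, isotropic Spitzer conduction transports heat down the local temperature gradient with a conductivity that scales steeply with temperature; the coefficient below is the commonly quoted value for a Coulomb logarithm of order 40, and the "five different levels" in the abstract presumably correspond to different fractions of this full Spitzer rate.

    ```latex
    % Isotropic Spitzer heat flux; kappa_Sp rises as T^{5/2}, so conduction
    % matters most in the hottest clusters
    \mathbf{Q} = -\,\kappa_{\rm Sp}\,\nabla T, \qquad
    \kappa_{\rm Sp} \approx 5\times10^{-7}\; T^{5/2}\ \mathrm{erg\,s^{-1}\,cm^{-1}\,K^{-1}}
    ```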

  6. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The high resolution simulation better resolves precipitation changes than its coarse resolution forcing, which contributes dominantly to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights about the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.
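
    The decomposition referred to above follows the standard moisture-budget split of changes in column-integrated moisture flux convergence into a thermodynamic term (changes in humidity with fixed circulation), a dynamic term (changes in circulation with fixed humidity), and a transient-eddy term. The expression below is the schematic form of that split, with overbars for monthly means, primes for sub-monthly transients, and δ for the 1979-2011 change; it is a generic statement of the method, not the authors' exact notation.

    ```latex
    % Schematic decomposition of the change in net precipitation (P - E)
    \delta(P - E) \;\approx\;
      -\nabla\cdot\!\int_0^{p_s}\overline{\mathbf{u}}\;\delta\overline{q}\,\frac{dp}{g}
      \;-\;\nabla\cdot\!\int_0^{p_s}\delta\overline{\mathbf{u}}\;\overline{q}\,\frac{dp}{g}
      \;-\;\nabla\cdot\!\int_0^{p_s}\delta\!\left(\overline{\mathbf{u}'q'}\right)\frac{dp}{g}
    ```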

  7. On estimating cosmology-dependent covariance matrices

    International Nuclear Information System (INIS)

    Morrison, Christopher B.; Schneider, Michael D.

    2013-01-01

    We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulation campaign for analyzing the next generation of cosmological surveys
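
    A "statistical emulator" of the covariance over cosmological parameters can be sketched with an off-the-shelf Gaussian process, as below; the training cosmologies, covariance values, and kernel length scales are invented placeholders, and the paper's framework models the full matrix jointly rather than one element at a time as done here.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # toy training set: each row of `theta` is a cosmology (e.g. Omega_m, sigma_8)
    # and `cov_elem` holds one element of the two-point-function covariance
    # estimated from the simulations run at that cosmology
    theta = np.array([[0.26, 0.75], [0.28, 0.80], [0.30, 0.85],
                      [0.32, 0.78], [0.34, 0.82]])
    cov_elem = np.array([1.1e-6, 1.4e-6, 1.9e-6, 1.6e-6, 2.0e-6])

    # Gaussian-process emulator of that covariance element over parameter space
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=[0.05, 0.05]),
        normalize_y=True)
    gp.fit(theta, cov_elem)

    # predict the covariance element (and its uncertainty) at a new cosmology
    pred, err = gp.predict(np.array([[0.31, 0.81]]), return_std=True)
    ```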

  8. Classical resolution of singularities in dilaton cosmologies

    NARCIS (Netherlands)

    Bergshoeff, EA; Collinucci, A; Roest, D; Russo, JG; Townsend, PK

    2005-01-01

    For models of dilaton gravity with a possible exponential potential, such as the tensor-scalar sector of IIA supergravity, we show how cosmological solutions correspond to trajectories in a 2D Milne space (parametrized by the dilaton and the scale factor). Cosmological singularities correspond to

  9. The fusion of satellite and UAV data: simulation of high spatial resolution band

    Science.gov (United States)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques for precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained from a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the simulation of panchromatic bands from RGB data, based on a linear combination of the spectral channels, was conducted. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
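
    The simulated panchromatic band described here is essentially a weighted sum of the UAV RGB channels; the sketch below shows that step with illustrative weights, not the ones used in the paper. The resulting band would then be fed, together with the co-registered Landsat 8 OLI bands, to a Gram-Schmidt pansharpening routine (ENVI's Gram-Schmidt tool is one common implementation).

    ```python
    import numpy as np

    def simulate_pan(rgb, weights=(0.3, 0.4, 0.3)):
        """Simulate a panchromatic band as a linear combination of the UAV RGB
        channels; the weights are normalised so the result stays in range."""
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        return rgb @ w                      # (rows, cols, 3) @ (3,) -> (rows, cols)

    # usage: rgb is an (rows, cols, 3) array from the UAV orthomosaic; the
    # simulated pan band then drives Gram-Schmidt pansharpening of the
    # resampled multispectral satellite bands
    rgb = np.random.rand(512, 512, 3)       # placeholder image
    pan = simulate_pan(rgb)
    ```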

  10. Formation of globular cluster candidates in merging proto-galaxies at high redshift: a view from the FIRE cosmological simulations

    Science.gov (United States)

    Kim, Ji-hoon; Ma, Xiangcheng; Grudić, Michael Y.; Hopkins, Philip F.; Hayward, Christopher C.; Wetzel, Andrew; Faucher-Giguère, Claude-André; Kereš, Dušan; Garrison-Kimmel, Shea; Murray, Norman

    2018-03-01

    Using a state-of-the-art cosmological simulation of merging proto-galaxies at high redshift from the FIRE project, with explicit treatments of star formation and stellar feedback in the interstellar medium, we investigate the formation of star clusters and examine one of the formation hypotheses of present-day metal-poor globular clusters. We find that frequent mergers in high-redshift proto-galaxies could provide a fertile environment to produce long-lasting bound star clusters. The violent merger event disturbs the gravitational potential and pushes a large gas mass of ≳ 10⁵-10⁶ M⊙ collectively to high density, at which point it rapidly turns into stars before stellar feedback can stop star formation. The high dynamic range of the reported simulation is critical in realizing such dense star-forming clouds with a small dynamical time-scale, t_ff ≲ 3 Myr, shorter than most stellar feedback time-scales. Our simulation then allows us to trace how clusters could become virialized and tightly bound to survive for up to ~420 Myr till the end of the simulation. Because the cluster's tightly bound core was formed in one short burst, and the nearby older stars originally grouped with the cluster tend to be preferentially removed, at the end of the simulation the cluster has a small age spread.
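
    The dynamical time quoted here is the free-fall time of the dense gas; for reference, with the standard expression below, t_ff ≲ 3 Myr corresponds to mean densities of roughly a few hundred hydrogen atoms per cm³ or more.

    ```latex
    % Free-fall time of a self-gravitating cloud of mean mass density rho
    t_{\rm ff} \;=\; \sqrt{\frac{3\pi}{32\,G\,\rho}}
    ```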

  11. Verification of high resolution simulation of precipitation and wind in Portugal

    Science.gov (United States)

    Menezes, Isilda; Pereira, Mário; Moreira, Demerval; Carvalheiro, Luís; Bugalho, Lourdes; Corte-Real, João

    2017-04-01

    Demand for energy and freshwater continues to grow as the global population and its demands increase. Precipitation feeds the freshwater ecosystems that provide a wealth of goods and services for society, and sustains the river flow needed by native species and natural ecosystem functions. The adoption of wind and hydroelectric power supplies can help meet energy demand without restricting economic growth under accelerated-policy scenarios. However, the international meteorological observation network is not sufficiently dense to directly support high-resolution climatic research. In this sense, coupled global and regional atmospheric models constitute the most appropriate physical and numerical tool for weather forecasting and downscaling on high-resolution grids, with the capacity to address problems resulting from the lack of observed data and from measurement errors. Thus, this study aims to calibrate and validate the WRF regional model for the simulation of precipitation and wind fields on a high-spatial-resolution grid covering Portugal. The simulations were performed in two-way nesting with three grids of increasing resolution (60 km, 20 km and 5 km), and the model performance was assessed for the summer and winter months (January and July), using input variables from two different reanalysis and forecast databases (ERA-Interim and NCEP-FNL) and different forcing schemes. The verification procedure included: (i) the use of several statistical error estimators, correlation-based measures and relative error descriptors; and (ii) an observed dataset composed of time series of hourly precipitation, wind speed and direction provided by the Portuguese meteorological institute for a comprehensive set of weather stations. Main results suggest the good ability of the WRF to: (i) reproduce the spatial patterns of the mean and total observed fields; (ii) do so with relatively small values of bias and other errors; and (iii) with good temporal correlation. These findings are in good
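
    The verification step combines standard error and correlation measures over station time series; the short sketch below computes three of the usual scores (mean bias, RMSE, Pearson correlation) for one station, with made-up hourly precipitation values as the example input.

    ```python
    import numpy as np

    def verification_stats(obs, sim):
        """Common verification scores for an hourly station time series:
        mean bias, root-mean-square error and Pearson correlation."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        bias = np.mean(sim - obs)
        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        corr = np.corrcoef(sim, obs)[0, 1]
        return bias, rmse, corr

    # usage with hourly precipitation (mm) from one station and the co-located model cell
    obs = np.array([0.0, 0.2, 1.5, 3.1, 0.8, 0.0])
    sim = np.array([0.1, 0.0, 1.2, 2.6, 1.1, 0.2])
    print(verification_stats(obs, sim))
    ```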

  12. Cosmological N-body simulations with generic hot dark matter

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Hannestad, Steen

    2017-01-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses...

  13. A web portal for hydrodynamical, cosmological simulations

    Science.gov (United States)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing complex, compound queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on the chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  14. Deconstructing cosmology

    CERN Document Server

    Sanders, Robert H

    2016-01-01

    The advent of sensitive high-resolution observations of the cosmic microwave background radiation and their successful interpretation in terms of the standard cosmological model has led to great confidence in this model's reality. The prevailing attitude is that we now understand the Universe and need only work out the details. In this book, Sanders traces the development and successes of Lambda-CDM, and argues that this triumphalism may be premature. The model's two major components, dark energy and dark matter, have the character of the pre-twentieth-century luminiferous aether. While there is astronomical evidence for these hypothetical fluids, their enigmatic properties call into question our assumptions of the universality of locally determined physical law. Sanders explains how modified Newtonian dynamics (MOND) is a significant challenge for cold dark matter. Overall, the message is hopeful: the field of cosmology has not become frozen, and there is much fundamental work ahead for tomorrow's cosmologis...

  15. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher-resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), but only as individual case studies. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events simulated, in addition to extending the integration time and the horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  16. Operational High Resolution Chemical Kinetics Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are critical to addressing urgent issues in both the developed and developing world. Ongoing demand for higher resolution...

  17. Can High-resolution WRF Simulations Be Used for Short-term Forecasting of Lightning?

    Science.gov (United States)

    Goodman, S. J.; Lapenta, W.; McCaul, E. W., Jr.; LaCasse, K.; Petersen, W.

    2006-01-01

    A number of research teams have begun to make quasi-operational forecast simulations at high resolution with models such as the Weather Research and Forecast (WRF) model. These model runs have used horizontal meshes of 2-4 km grid spacing, and thus resolved convective storms explicitly. In the light of recent global satellite-based observational studies that reveal robust relationships between total lightning flash rates and integrated amounts of precipitation-size ice hydrometeors in storms, it is natural to inquire about the capabilities of these convection-resolving models in representing the ice hydrometeor fields faithfully. If they do, this might make operational short-term forecasts of lightning activity feasible. We examine high-resolution WRF simulations from several Southeastern U.S. cases for which either NLDN or LMA lightning data were available. All the WRF runs use a standard microphysics package that depicts only three ice species: cloud ice, snow and graupel. The realism of the WRF simulations is examined by comparisons with both lightning and radar observations and with additional, even higher-resolution cloud-resolving model runs. Preliminary findings are encouraging in that they suggest that WRF often makes convective storms of the proper size in approximately the right location, but they also indicate that higher resolution and better hydrometeor microphysics would be helpful in improving the realism of the updraft strengths, reflectivity and ice hydrometeor fields.
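    The satellite-derived link between flash rate and precipitation-size ice noted above is the basis of simple lightning proxies. The toy sketch below assumes a linear scaling of flash rate with column-integrated graupel mass; the calibration constant and the model column are purely illustrative and are not the relationship used by the authors.

```python
import numpy as np

def flash_rate_proxy(graupel_mixing_ratio, air_density, cell_volume, k=1e-7):
    """Toy lightning proxy: assume flash rate scales linearly with the
    integrated precipitation-ice (graupel) mass in a storm column.
    k (flashes per minute per kg) is a hypothetical calibration constant."""
    graupel_mass = np.sum(graupel_mixing_ratio * air_density * cell_volume)  # kg
    return k * graupel_mass  # flashes per minute

# Hypothetical model column: 2-km cells, graupel confined to mid-levels
nz = 20
rho = np.linspace(1.1, 0.3, nz)            # air density, kg m^-3
qg = np.zeros(nz); qg[8:14] = 2e-3         # graupel mixing ratio, kg/kg
vol = 2000.0 ** 3                          # m^3 per grid cell
print(f"proxy flash rate: {flash_rate_proxy(qg, rho, vol):.1f} flashes/min")
```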

  18. High-resolution simulations of galaxy formation in a cold dark matter scenario

    International Nuclear Information System (INIS)

    Kates, R.E.; Klypin, A.A.

    1990-01-01

    We present the results of our numerical simulations of galaxy clustering in a two-dimensional model. Our simulations allowed better resolution than could be obtained in three-dimensional simulations. We used a spectrum of initial perturbations corresponding to a cold dark matter (CDM) model and followed the history of each particle by modelling the shocking and subsequent cooling of matter. We took into account cooling processes in a hot plasma with primeval cosmic abundances of H and He as well as Compton cooling. (However, the influence of these processes on the trajectories of ordinary matter particles was not simulated in the present code.) As a result of the high resolution, we were able to observe a network of chains on all scales down to the limits of resolution. This network extends out from dense clusters and superclusters and penetrates into voids (with decreasing density). In addition to the dark matter network structure, a definite prediction of our simulations is the existence of a connected filamentary structure consisting of hot gas with a temperature of 10^6 K and extending over 100-150 Mpc. (Throughout this paper, we assume the Hubble constant H_0 = 50 km/s/Mpc.) These structures trace high-density filaments of the dark matter distribution and should be searched for in soft X-ray observations. In contrast to common assumptions, we found that peaks of the linearized density distribution were not reliable tracers of the eventual galaxy distribution. We were also able to demonstrate that the influence of small-scale fluctuations on the structure at larger scales is always small, even at the late nonlinear stage. (orig.)

  19. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ≈8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
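    The core idea, replacing the large-scale Fourier modes of a periodic box with fresh Gaussian draws of the desired power, can be sketched in a few lines. The toy version below works on a Gaussian test field, uses an unspecified FFT-unit power convention, and omits the treatment of nonlinear mode-coupling that the actual algorithm provides.

```python
import numpy as np

def resample_large_scale_modes(delta, box_size, k_split, mode_power, rng):
    """Toy mode-resampling: replace all Fourier modes with |k| < k_split by a
    fresh Gaussian draw whose expected squared FFT amplitude is mode_power(|k|).
    Mapping mode_power to a physical P(k) convention is deliberately left out."""
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta)

    # |k| of every grid mode in the periodic box
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)

    # The FFT of real white noise is Hermitian by construction and has
    # E|W(k)|^2 = n^3 for unit-variance input; rescale it to the target power.
    w_k = np.fft.fftn(rng.standard_normal(delta.shape))
    new_k = w_k * np.sqrt(mode_power(kmag) / n ** 3)

    swap = kmag < k_split
    swap[0, 0, 0] = False                     # keep the box mean unchanged
    delta_k[swap] = new_k[swap]
    return np.fft.ifftn(delta_k).real

# Hypothetical usage on a 64^3 Gaussian toy field in a 500 Mpc/h box
rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64, 64))
resampled = resample_large_scale_modes(field, 500.0, k_split=0.05,
                                       mode_power=lambda k: np.full_like(k, 64.0 ** 3),
                                       rng=rng)
```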

  20. Quadratic genetic modifications: a streamlined route to cosmological simulations with controlled merger history

    Science.gov (United States)

    Rey, Martin P.; Pontzen, Andrew

    2018-02-01

    Recent work has studied the interplay between a galaxy's history and its observable properties using `genetically modified' cosmological zoom simulations. The approach systematically generates alternative histories for a halo, while keeping its cosmological environment fixed. Applications to date altered linear properties of the initial conditions, such as the mean overdensity of specified regions; we extend the formulation to include quadratic features, such as the local variance, which determines the overall importance of smooth accretion relative to mergers in a galaxy's history. We introduce an efficient algorithm for this new class of modification and demonstrate its ability to control the variance of a region in a one-dimensional toy model. Outcomes of this work are twofold: (i) a clarification of the formulation of genetic modifications and (ii) a proof of concept for quadratic modifications leading the way to a forthcoming implementation in cosmological simulations.
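    For orientation, the linear modifications mentioned above (e.g., fixing the mean overdensity of a region while preserving the field's correlations) reduce to a standard constrained-realization update. The one-dimensional toy below, with an assumed squared-exponential covariance, illustrates only that linear step; the quadratic (variance) modifications introduced in the paper require additional machinery not shown here.

```python
import numpy as np

def modify_mean_overdensity(field, cov, region, target_mean):
    """Return a 'genetically modified' field whose mean over `region`
    equals target_mean, via the constrained-realization update
    x' = x + C a (a^T C a)^-1 (d - a^T x) for a single linear constraint."""
    a = np.zeros(field.size)
    a[region] = 1.0 / region.size              # averaging operator over the region
    ca = cov @ a                               # C a
    correction = (target_mean - a @ field) / (a @ ca)
    return field + ca * correction

# Toy 1D Gaussian field with an assumed squared-exponential covariance
n, ell = 200, 10.0
x = np.arange(n)
cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
rng = np.random.default_rng(2)
field = np.linalg.cholesky(cov + 1e-6 * np.eye(n)) @ rng.standard_normal(n)

region = np.arange(90, 110)                    # cells whose mean we want to control
modified = modify_mean_overdensity(field, cov, region, target_mean=1.5)
print(modified[region].mean())                 # -> 1.5 by construction
```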

  1. The 2010 Pakistan floods: high-resolution simulations with the WRF model

    Science.gov (United States)

    Viterbo, Francesca; Parodi, Antonio; Molini, Luca; Provenzale, Antonello; von Hardenberg, Jost; Palazzi, Elisa

    2013-04-01

    Estimating current and future water resources in high mountain regions with complex orography is a difficult but crucial task. In particular, the French-Italian project PAPRIKA is focused on two specific regions in the Hindu Kush-Himalaya-Karakorum (HKKH) region: the Shigar basin in Pakistan, at the foot of K2, and the Khumbu valley in Nepal, at the foot of Mount Everest. In this framework, we use the WRF model to simulate precipitation and meteorological conditions at high resolution in areas with extreme orographic slopes, comparing the model output with station and satellite data. Once the model is validated, we shall run a set of three future time slices at very high spatial resolution, for the periods 2046-2050, 2071-2075 and 2096-2100, nested in different climate change scenarios (EXtreme PREcipitation and Hydrological climate Scenario Simulations - EXPRESS-Hydro project). As a prelude to this study, here we discuss the simulation of specific, high-intensity rainfall events in this area. In this paper we focus on the 2010 Pakistan floods, which began in late July 2010, produced heavy monsoon rains in the Khyber Pakhtunkhwa, Sindh, Punjab and Balochistan regions of Pakistan and affected the Indus River basin. Approximately one-fifth of Pakistan's total land area was underwater, with a death toll of about 2000 people. This event has been simulated with the WRF model (version 3.3) in cloud-permitting mode (d01 at 14 km and d02 at 3.5 km); different convective closures and microphysics parameterizations have been used. A deeper understanding of the processes responsible for this event has been gained through comparison with rainfall depth observations, radiosounding data and geostationary/polar satellite images.

  2. High energy physics and cosmology

    International Nuclear Information System (INIS)

    Silk, J.I.

    1991-01-01

    This research will focus on the implications of recent theories and experiments in high energy physics for the evolution of the early universe, and on the constraints that cosmological considerations can place on such theories. Several problems are under investigation, including studies of the nature of dark matter and the signature of annihilations in the galactic halo, where the resulting γ-ray fluxes are potentially observable, and in stars, where stellar evolution may be affected. We will develop constraints on the inflationary predictions of scale-free primordial fluctuations in a universe at critical closure density by studying their linear and non-linear evolution after they re-enter the particle horizon, examining the observable imprint of primordial density fluctuations on the cosmic microwave background radiation in both flat and curved cosmological models, and the implications for observations of large-scale galaxy clustering and for structure formation theories. We will also study spectral distortions in the microwave background radiation that are produced by exotic particle decays in the very early universe. We expect such astrophysical considerations to provide fruitful insights both into high-energy particle physics and into possible cosmological scenarios for the early universe.

  3. N-body simulations for coupled scalar-field cosmology

    International Nuclear Information System (INIS)

    Li Baojiu; Barrow, John D.

    2011-01-01

    We describe in detail the general methodology and numerical implementation of consistent N-body simulations for coupled-scalar-field models, including background cosmology and the generation of initial conditions (with the different couplings to different matter species taken into account). We perform fully consistent simulations for a class of coupled-scalar-field models with an inverse power-law potential and negative coupling constant, for which the chameleon mechanism does not work. We find that in such cosmological models the scalar-field potential plays a negligible role except in the background expansion, and the fifth force that is produced is proportional to gravity in magnitude, justifying the use of a rescaled gravitational constant G in some earlier N-body simulation works for similar models. We then study the effects of the scalar coupling on the nonlinear matter power spectra and compare with linear perturbation calculations to see the agreement and places where the nonlinear treatment deviates from the linear approximation. We also propose an algorithm to identify gravitationally virialized matter halos, trying to take account of the fact that the virialization itself is also modified by the scalar-field coupling. We use the algorithm to measure the mass function and study the properties of dark-matter halos. We find that the net effect of the scalar coupling helps produce more heavy halos in our simulation boxes and suppresses the inner (but not the outer) density profile of halos compared with the ΛCDM prediction, while the suppression weakens as the coupling between the scalar field and dark-matter particles increases in strength.

  4. The Higgs field and the resolution of the Cosmological Constant Paradox in the Weyl-geometrical Universe

    Science.gov (United States)

    De Martini, Francesco

    2017-10-01

    The nature of the scalar field responsible for the cosmological inflation is found to be rooted in the most fundamental concept of Weyl's differential geometry: the parallel displacement of vectors in curved space-time. Within this novel geometrical scenario, the standard electroweak theory of leptons based on the SU(2)_L ⊗ U(1)_Y as well as on the conformal groups of space-time Weyl's transformations is analysed within the framework of a general-relativistic, conformally covariant scalar-tensor theory that includes the electromagnetic and the Yang-Mills fields. A Higgs mechanism within a spontaneous symmetry breaking process is identified and this offers formal connections between some relevant properties of the elementary particles and the dark energy content of the Universe. An 'effective cosmological potential' V_eff is expressed in terms of the dark energy potential V_Λ via the 'mass reduction parameter' ζ ≡ √(|V_eff|/|V_Λ|), a general property of the Universe. The mass of the Higgs boson, which is considered a 'free parameter' by the standard electroweak theory, is found by our theory to be proportional to the mass which accounts for the measured cosmological constant, i.e. the measured content of vacuum energy in the Universe. The non-integrable application of Weyl's geometry leads to a Proca equation accounting for the dynamics of a φ_ρ-particle, a vector meson proposed as an optimum candidate for dark matter. On the basis of previous cosmic microwave background results our theory leads, in the condition of cosmological 'critical density', to the assessment of the average energy content of the φ_ρ-excitation. The peculiar mathematical structure of V_eff offers a clue towards a very general resolution of a most intriguing puzzle of modern quantum field theory, the 'Cosmological Constant Paradox' (here referred to as the 'Λ-Paradox'). Indeed, our 'universal' theory offers a resolution of the Λ-Paradox for all exponential inflationary potentials V_Λ(T,φ) ∝ e^(-nφ), and for all...

  5. Very high resolution regional climate model simulations over Greenland: Identifying added value

    DEFF Research Database (Denmark)

    Lucas-Picher, P.; Wulff-Nielsen, M.; Christensen, J.H.

    2012-01-01

    This study presents two simulations of the climate over Greenland with the regional climate model (RCM) HIRHAM5, at 0.05° and 0.25° resolution, driven at the lateral boundaries by the ERA-Interim reanalysis for the period 1989–2009. These simulations are validated against observations from... However, the bias between the simulations and the few available observations does not reduce with higher resolution. This is partly explained by the lack of observations in regions where the higher resolution is expected to improve the simulated climate. The RCM simulations show that the temperature has increased the most in the northern part of Greenland and at lower elevations over the period 1989–2009. Higher resolution increases the relief variability in the model topography and causes the simulated precipitation to be larger on the coast and smaller over the main ice sheet compared...

  6. Changes in Moisture Flux Over the Tibetan Plateau During 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-01

    Net precipitation (precipitation minus evapotranspiration, P-E) changes from a high-resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibetan Plateau (TP) and compared to the Global Land Data Assimilation System (GLDAS) product. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. The high-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis, with the improvement in simulating precipitation changes at high elevations contributing most to the improved P-E changes. The high-resolution climate simulation also yields new and substantial findings regarding the role of thermodynamics and transient eddies in the P-E changes reflected in observed changes in the major river basins fed by runoff from the TP. The analysis reveals the contrasting convergence/divergence changes between the northwestern and southeastern TP, and feedback through latent heat release, as important mechanisms leading to the mean P-E changes in the TP.
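    A schematic version of the two leading terms of the decomposition referred to above (thermodynamic: humidity change with fixed circulation; dynamic: circulation change with fixed humidity) is sketched below for a single level; the arrays are hypothetical and the study's full treatment includes vertical integration and an explicit transient-eddy term.

```python
import numpy as np

def divergence(fx, fy, dx, dy):
    """Horizontal divergence of a vector field on a regular grid."""
    return np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)

def mfc_change_decomposition(u1, v1, q1, u2, v2, q2, dx, dy):
    """Split the change in moisture flux convergence, MFC = -div(q*u), between
    two periods into thermodynamic (humidity change, fixed circulation) and
    dynamic (circulation change, fixed humidity) terms plus a residual."""
    dq, du, dv = q2 - q1, u2 - u1, v2 - v1
    thermo = -divergence(u1 * dq, v1 * dq, dx, dy)
    dynamic = -divergence(du * q1, dv * q1, dx, dy)
    total = -divergence(u2 * q2, v2 * q2, dx, dy) + divergence(u1 * q1, v1 * q1, dx, dy)
    residual = total - thermo - dynamic        # covariance / eddy-like remainder
    return thermo, dynamic, residual

# Hypothetical 2-D fields (ny, nx) on a 25-km grid for two climatological periods
ny, nx, dx = 40, 60, 25e3
rng = np.random.default_rng(3)
u1, v1, q1 = rng.normal(5, 1, (ny, nx)), rng.normal(0, 1, (ny, nx)), np.full((ny, nx), 8e-3)
u2, v2, q2 = u1 + 0.5, v1, q1 * 1.07
thermo, dyn, res = mfc_change_decomposition(u1, v1, q1, u2, v2, q2, dx, dx)
print(np.mean(thermo), np.mean(dyn))
```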

  7. Cosmological simulations using a static scalar-tensor theory

    Energy Technology Data Exchange (ETDEWEB)

    RodrIguez-Meza, M A [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico); Gonzalez-Morales, A X [Departamento Ingenierias, Universidad Iberoamericana, Prol. Paseo de la Reforma 880 Lomas de Santa Fe, Mexico D.F. Mexico (Mexico); Gabbasov, R F [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico); Cervantes-Cota, Jorge L [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico)

    2007-11-15

    We present ΛCDM N-body cosmological simulations in the framework of a static general scalar-tensor theory of gravity. Due to the influence of the non-minimally coupled scalar field, the gravitational potential is modified by a Yukawa-type term, yielding a new structure formation dynamics. We present some preliminary results and, in particular, we compute the density and velocity profiles of the most massive group.
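    The Yukawa-type modification of the potential mentioned above can be written down explicitly. The sketch below uses illustrative values of the amplitude α and range λ rather than the parameters of the simulated models.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun, a convenient astrophysical unit choice

def yukawa_potential(r, mass, alpha=0.3, lam=50.0):
    """Newtonian potential plus a Yukawa correction:
    Phi(r) = -(G M / r) * (1 + alpha * exp(-r/lam)), with r and lam in kpc."""
    return -G * mass / r * (1.0 + alpha * np.exp(-r / lam))

def yukawa_force(r, mass, alpha=0.3, lam=50.0):
    """Radial force magnitude, -dPhi/dr, for the potential above:
    (G M / r^2) * [1 + alpha * (1 + r/lam) * exp(-r/lam)]."""
    newton = G * mass / r ** 2
    corr = alpha * np.exp(-r / lam) * (1.0 + r / lam)
    return newton * (1.0 + corr)

r = np.logspace(0, 3, 4)             # 1 kpc .. 1000 kpc
print(yukawa_force(r, mass=1e12))    # modified force around a 1e12 Msun halo
```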

  8. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope beamline U7A of the National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for the transmission X-ray microscope (TXM) with optical elements and may also be useful for the alignment of optical components in other modes of microscopy.
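    One common way to evaluate Rayleigh-Sommerfeld-type scalar propagation numerically is the FFT-based angular spectrum method; the sketch below illustrates that general technique on a toy aperture and is not the authors' beamline code.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, distance):
    """Propagate a complex scalar field over `distance` using the angular
    spectrum method (FFT-based evaluation of scalar free-space diffraction)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)   # evanescent modes dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Toy example: 2.4 nm soft X-rays through a 5-micron square aperture
n, pixel, wavelength = 512, 20e-9, 2.4e-9
aperture = np.zeros((n, n), dtype=complex)
aperture[n//2 - 125:n//2 + 125, n//2 - 125:n//2 + 125] = 1.0
image_plane = angular_spectrum_propagate(aperture, wavelength, pixel, distance=50e-6)
intensity = np.abs(image_plane) ** 2
```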

  9. External versus internal triggers of bar formation in cosmological zoom-in simulations

    Science.gov (United States)

    Zana, Tommaso; Dotti, Massimo; Capelo, Pedro R.; Bonoli, Silvia; Haardt, Francesco; Mayer, Lucio; Spinoso, Daniele

    2018-01-01

    The emergence of a large-scale stellar bar is one of the most striking features in disc galaxies. By means of state-of-the-art cosmological zoom-in simulations, we study the formation and evolution of bars in Milky Way-like galaxies in a fully cosmological context, including the physics of gas dissipation, star formation and supernova feedback. Our goal is to characterize the actual trigger of the non-axisymmetric perturbation that leads to the strong bar observable in the simulations at z = 0, discriminating between an internal/secular and an external/tidal origin. To this aim, we run a suite of cosmological zoom-in simulations altering the original history of galaxy-satellite interactions at a time when the main galaxy, though already bar-unstable, does not yet feature any non-axisymmetric structure. We find that the main effect of a late minor merger and of a close fly-by is to delay the time of bar formation, and that these two dynamical events are not directly responsible for the development of the bar and do not significantly alter its global properties (e.g. its final extension). We conclude that, once the disc has grown to a mass large enough to sustain global non-axisymmetric modes, bar formation is inevitable.

  10. COINCIDENCES BETWEEN O VI AND O VII LINES: INSIGHTS FROM HIGH-RESOLUTION SIMULATIONS OF THE WARM-HOT INTERGALACTIC MEDIUM

    International Nuclear Information System (INIS)

    Cen Renyue

    2012-01-01

    With high-resolution (0.46 h^-1 kpc), large-scale, adaptive mesh-refinement Eulerian cosmological hydrodynamic simulations we compute properties of O VI and O VII absorbers from the warm-hot intergalactic medium (WHIM) at z = 0. Our new simulations are in broad agreement with previous simulations, with ~40% of the intergalactic medium being in the WHIM. Our simulations are in agreement with observed properties of O VI absorbers with respect to the line incidence rate and the Doppler-width-column-density relation. It is found that the amount of gas in the WHIM below and above 10^6 K is roughly equal. Strong O VI absorbers are found to be predominantly collisionally ionized. It is found that (61%, 57%, 39%) of O VI absorbers with log N(O VI)/cm^-2 = (12.5-13, 13-14, >14) have T < 10^5 K. Cross-correlations between galaxies and strong [N(O VI) > 10^14 cm^-2] O VI absorbers on ~100-300 kpc scales are suggested as a potential differentiator between collisional ionization and photoionization models. A quantitative prediction is made for the presence of broad and shallow O VI lines that are largely missed by current observations but will be detectable by Cosmic Origins Spectrograph observations. The reported 3σ upper limit on the mean column density of coincidental O VII lines at the location of detected O VI lines by Yao et al. is above our predicted value by a factor of 2.5-4. The claimed observational detection of O VII lines by Nicastro et al., if true, is 2σ above what our simulations predict.

  11. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    Science.gov (United States)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic sector following the resolution increase. More specifically, the well-known negative bias in atmospheric blocking over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  12. Local-scale high-resolution atmospheric dispersion model using large-eddy simulation. LOHDIM-LES

    International Nuclear Information System (INIS)

    Nakayama, Hiromasa; Nagai, Haruyasu

    2016-03-01

    We developed the LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation (LOHDIM-LES). This dispersion model is designed based on LES, which is effective in reproducing the unsteady behavior of turbulent flows and plume dispersion. The basic equations are the continuity equation, the Navier-Stokes equations, and the scalar conservation equation. Buildings and local terrain variability are resolved by high-resolution grids with a spacing of a few meters, and their turbulent effects are represented by an immersed boundary method. In simulating atmospheric turbulence, boundary layer flows are generated by a recycling turbulent inflow technique in a driver region set up upstream of the main analysis region. These turbulent inflow data are imposed at the inlet of the main analysis region. By this approach, LOHDIM-LES can provide detailed information on wind velocities and plume concentration in the investigated area. (author)

  13. Large angle cosmic microwave background fluctuations from cosmic strings with a cosmological constant

    International Nuclear Information System (INIS)

    Landriau, M.; Shellard, E.P.S.

    2004-01-01

    In this paper, we present results for large-angle cosmic microwave background anisotropies generated from high resolution simulations of cosmic string networks in a range of flat Friedmann-Robertson-Walker universes with a cosmological constant. Using an ensemble of all-sky maps, we compare with the Cosmic Background Explorer data to infer a normalization (or upper bound) on the string linear energy density μ. For a flat matter-dominated model (Ω_M = 1) we find Gμ/c^2 ≅ 0.7×10^-6, which is lower than previous constraints probably because of the more accurate inclusion of string small-scale structure. For a cosmological constant within an observationally acceptable range, we find a relatively weak dependence with Gμ/c^2 less than 10% higher

  14. Cosmological simulation with dust formation and destruction

    Science.gov (United States)

    Aoyama, Shohei; Hou, Kuan-Chou; Hirashita, Hiroyuki; Nagamine, Kentaro; Shimizu, Ikkoh

    2018-06-01

    To investigate the evolution of dust in a cosmological volume, we perform hydrodynamic simulations, in which the enrichment of metals and dust is treated self-consistently with star formation and stellar feedback. We consider dust evolution driven by dust production in stellar ejecta, dust destruction by sputtering, grain growth by accretion and coagulation, and grain disruption by shattering, and treat small and large grains separately to trace the grain size distribution. After confirming that our model nicely reproduces the observed relation between dust-to-gas ratio and metallicity for nearby galaxies, we concentrate on the dust abundance over the cosmological volume in this paper. The comoving dust mass density has a peak at redshift z ˜ 1-2, coincident with the observationally suggested dustiest epoch in the Universe. In the local Universe, roughly 10 per cent of the dust is contained in the intergalactic medium (IGM), where only 1/3-1/4 of the dust survives against dust destruction by sputtering. We also show that the dust mass function is roughly reproduced at ≲ 10^8 M⊙, while the massive end still has a discrepancy, which indicates the necessity of stronger feedback in massive galaxies. In addition, our model broadly reproduces the observed radial profile of dust surface density in the circum-galactic medium (CGM). While our model satisfies the observational constraints for the dust extinction on cosmological scales, it predicts that the dust in the CGM and IGM is dominated by large (>0.03 μm) grains, which is in tension with the steep reddening curves observed in the CGM.
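    As a schematic of the processes listed above acting on separate small- and large-grain reservoirs, the toy two-bin model below integrates production, sputtering, accretion, coagulation and shattering terms with illustrative timescales; the values and functional forms are placeholders, not the simulation's calibrated model.

```python
import numpy as np

def evolve_dust(t_end_myr, dt_myr=1.0,
                production_rate=1e-4,     # Msun/Myr of large grains from stellar ejecta
                t_sputter=500.0,          # destruction timescale (Myr), both bins
                t_accrete=200.0,          # small-grain growth by gas accretion (Myr)
                t_coagulate=300.0,        # small -> large via coagulation (Myr)
                t_shatter=400.0):         # large -> small via shattering (Myr)
    """Toy two-bin (small/large grain) dust mass evolution with production,
    sputtering, accretion, coagulation and shattering (explicit Euler steps)."""
    m_small, m_large = 0.0, 0.0
    history = []
    for t in np.arange(0.0, t_end_myr, dt_myr):
        dm_small = (-m_small / t_sputter           # sputtering
                    + m_small / t_accrete          # growth by accretion of gas-phase metals
                    - m_small / t_coagulate        # coagulation into large grains
                    + m_large / t_shatter) * dt_myr
        dm_large = (production_rate                # stellar production (mostly large grains)
                    - m_large / t_sputter
                    + m_small / t_coagulate
                    - m_large / t_shatter) * dt_myr
        m_small += dm_small
        m_large += dm_large
        history.append((t, m_small, m_large))
    return np.array(history)

print(evolve_dust(1000.0)[-1])   # (t, M_small, M_large) after 1 Gyr
```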

  15. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  16. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    Science.gov (United States)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computing (HPC) systems together with threads-oriented programming. PHoToNs adopts a hybrid scheme to compute the gravitational force, with the conventional Particle-Mesh (PM) algorithm for the long-range force, the Tree algorithm for the short-range force, and the direct-summation Particle-Particle (PP) algorithm for gravity from very close particles. A self-similar, space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to manage more flexibly the domain communication, the PM calculation and synchronization, as well as the Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 and find excellent agreement.
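    Of the three force components, the long-range Particle-Mesh part is the simplest to sketch: deposit the particle mass on a mesh, solve Poisson's equation in Fourier space, and take gradients of the potential. The minimal periodic-box example below illustrates that generic algorithm and is not PHoToNs code.

```python
import numpy as np

def pm_potential(positions, masses, ngrid, box, G=1.0):
    """Long-range (Particle-Mesh) gravitational potential on a periodic mesh:
    nearest-grid-point mass deposit + FFT solution of nabla^2 phi = 4 pi G rho."""
    cell = box / ngrid
    idx = np.floor(positions / cell).astype(int) % ngrid     # NGP deposit
    rho = np.zeros((ngrid,) * 3)
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), masses)
    rho /= cell ** 3

    k1d = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=cell)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    rho_k = np.fft.fftn(rho)
    phi_k = np.zeros_like(rho_k)
    nonzero = k2 > 0
    phi_k[nonzero] = -4.0 * np.pi * G * rho_k[nonzero] / k2[nonzero]  # Poisson in k-space
    return np.fft.ifftn(phi_k).real          # potential; forces follow from -grad(phi)

# Hypothetical usage: 10^4 equal-mass particles in a unit periodic box, 64^3 mesh
rng = np.random.default_rng(4)
pos = rng.random((10_000, 3))
phi = pm_potential(pos, np.ones(10_000), ngrid=64, box=1.0)
```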

  17. The Higgs field and the resolution of the Cosmological Constant Paradox in the Weyl-geometrical Universe.

    Science.gov (United States)

    De Martini, Francesco

    2017-11-13

    The nature of the scalar field responsible for the cosmological inflation is found to be rooted in the most fundamental concept of Weyl's differential geometry: the parallel displacement of vectors in curved space-time. Within this novel geometrical scenario, the standard electroweak theory of leptons based on the SU(2)_L ⊗ U(1)_Y as well as on the conformal groups of space-time Weyl's transformations is analysed within the framework of a general-relativistic, conformally covariant scalar-tensor theory that includes the electromagnetic and the Yang-Mills fields. A Higgs mechanism within a spontaneous symmetry breaking process is identified and this offers formal connections between some relevant properties of the elementary particles and the dark energy content of the Universe. An 'effective cosmological potential' V_eff is expressed in terms of the dark energy potential V_Λ via the 'mass reduction parameter' ζ ≡ √(|V_eff|/|V_Λ|), a general property of the Universe. The mass of the Higgs boson, which is considered a 'free parameter' by the standard electroweak theory, is found by our theory to be proportional to the mass M_eff which accounts for the measured cosmological constant, i.e. the measured content of vacuum energy in the Universe. The non-integrable application of Weyl's geometry leads to a Proca equation accounting for the dynamics of a φ_ρ-particle, a vector meson proposed as an optimum candidate for dark matter. On the basis of previous cosmic microwave background results our theory leads, in the condition of cosmological 'critical density', to the assessment of the average energy content of the φ_ρ-excitation. The peculiar mathematical structure of V_eff offers a clue towards a very general resolution of a most intriguing puzzle of modern quantum field theory, the 'Cosmological Constant Paradox' (here referred to as the 'Λ-Paradox'). Indeed, our 'universal' theory offers a resolution of the Λ-Paradox...

  18. An Investigation of Intracluster Light Evolution Using Cosmological Hydrodynamical Simulations

    Science.gov (United States)

    Tang, Lin; Lin, Weipeng; Cui, Weiguang; Kang, Xi; Wang, Yang; Contini, E.; Yu, Yu

    2018-06-01

    Intracluster light (ICL) in observations is usually identified through the surface brightness limit (SBL) method. In this paper, for the first time we produce mock images of galaxy groups and clusters, using a cosmological hydrodynamical simulation to investigate the ICL fraction and focus on its dependence on observational parameters, e.g., the SBL, the effects of cosmological redshift-dimming, point-spread function (PSF), and CCD pixel size. Detailed analyses suggest that the width of the PSF has a significant effect on the measured ICL fraction, while the relatively small pixel size shows almost no influence. It is found that the measured ICL fraction depends strongly on the SBL. At a fixed SBL and redshift, the measured ICL fraction decreases with increasing halo mass, while with a much fainter SBL, it does not depend on halo mass at low redshifts. In our work, the measured ICL fraction shows a clear dependence on the cosmological redshift-dimming effect. It is found that there is more mass locked in the ICL component than light, suggesting that the use of a constant mass-to-light ratio at high surface brightness levels will lead to an underestimate of ICL mass. Furthermore, it is found that the radial profile of ICL shows a characteristic radius that is almost independent of halo mass. The current measurement of ICL from observations has a large dispersion due to different methods, and we emphasize the importance of using the same definition when observational results are compared with theoretical predictions.
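    The surface-brightness-limit measurement described above can be mimicked on a toy mock image: apply the (1+z)^4 cosmological dimming, smooth with a Gaussian PSF, and count the light fainter than the chosen SBL as ICL. The sketch below assumes hypothetical image units and a simple radial light profile; it does not reproduce the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def icl_fraction(image_sb, sbl, z=0.0, psf_sigma_pix=2.0):
    """Toy ICL fraction via the surface-brightness-limit (SBL) method:
    image_sb : mock surface-brightness map in mag/arcsec^2 at z = 0
    sbl      : surface-brightness limit in mag/arcsec^2
    z        : redshift, applied as (1+z)^4 cosmological dimming
    psf_sigma_pix : Gaussian PSF width in pixels."""
    flux = 10.0 ** (-0.4 * image_sb)                 # mag/arcsec^2 -> linear flux
    flux = flux / (1.0 + z) ** 4                     # cosmological dimming
    flux = gaussian_filter(flux, psf_sigma_pix)      # instrument PSF
    dimmed_sb = -2.5 * np.log10(flux)
    icl_pixels = dimmed_sb > sbl                     # fainter than the limit -> ICL
    return flux[icl_pixels].sum() / flux.sum()

# Hypothetical mock: a bright central galaxy plus a faint extended halo
n = 256
y, x = np.indices((n, n)) - n // 2
r = np.hypot(x, y) + 1.0
sb_map = 18.0 + 5.0 * np.log10(r)                    # brightness falls off with radius
print(icl_fraction(sb_map, sbl=26.5, z=0.3))
```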

  19. INTELLIGENT DESIGN: ON THE EMULATION OF COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Holm, Oskar; Knox, Lloyd

    2011-01-01

    Simulation design is the choice of locations in parameter space at which simulations are to be run and is the first step in building an emulator capable of quickly providing estimates of simulation results for arbitrary locations in the parameter space. We introduce an alteration to the 'OALHS' design used by Heitmann et al. that reduces the number of simulation runs required to achieve a fixed accuracy in our case study by a factor of two. We also compare interpolation procedures for emulators and find that interpolation via Gaussian process models and via the much-easier-to-implement polynomial interpolation have comparable accuracy. A very simple emulation-building procedure consisting of a design sampled from the parameter prior distribution, combined with interpolation via polynomials also performs well. Although our primary motivation is efficient emulators of nonlinear cosmological N-body simulations, in an appendix we describe an emulator for the cosmic microwave background temperature power spectrum publicly available as a computer code.
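    The design-plus-interpolation idea can be illustrated in one dimension: sample design points from the parameter prior, evaluate the (expensive) simulation there, and fit a polynomial emulator for fast predictions elsewhere. The 'simulation' below is a stand-in analytic function, not an N-body code.

```python
import numpy as np

def expensive_simulation(theta):
    """Stand-in for a costly simulation returning a summary statistic."""
    return np.sin(3.0 * theta) + 0.5 * theta ** 2

def build_polynomial_emulator(design, outputs, degree=4):
    """Fit a polynomial interpolator/regressor to the design runs."""
    coeffs = np.polyfit(design, outputs, degree)
    return lambda theta: np.polyval(coeffs, theta)

# Design: sample points from the parameter prior (here uniform on [0, 2])
rng = np.random.default_rng(5)
design = np.sort(rng.uniform(0.0, 2.0, size=12))
outputs = expensive_simulation(design)

emulator = build_polynomial_emulator(design, outputs)

# Fast predictions at arbitrary parameter values, with a quick accuracy check
test = np.linspace(0.0, 2.0, 50)
max_err = np.max(np.abs(emulator(test) - expensive_simulation(test)))
print(f"maximum emulator error over the test grid: {max_err:.3e}")
```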

  20. Planck 2013 results. XVI. Cosmological parameters

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cappellini, B.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.R.; Chen, X.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Haissinski, J.; Hamann, J.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Melin, J.B.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Pearson, T.J.; Peiris, H.V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Platania, P.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reach, W.T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; 
Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-10-29

    We present the first results based on Planck measurements of the CMB temperature and lensing-potential power spectra. The Planck spectra at high multipoles are extremely well described by the standard spatially-flat six-parameter LCDM cosmology. In this model Planck data determine the cosmological parameters to high precision. We find a low value of the Hubble constant, H0=67.3+/-1.2 km/s/Mpc and a high value of the matter density parameter, Omega_m=0.315+/-0.017 (+/-1 sigma errors) in excellent agreement with constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent-level precision using Planck CMB data alone. We present results from an analysis of extensions to the standard cosmology, using astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured significantly over standard LCDM. The deviation of the scalar spectral index from unity is insensitive to the additi...

  1. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high-resolution Direct Numerical Simulations of isotropic, weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data...

  2. Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM

    Science.gov (United States)

    Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou

    2017-04-01

    The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on the provisioning of future climate change scenario data at high spatial and temporal resolution and quality in West Africa. Such information is highly required for impact studies in water resources and agriculture and for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO in CLimate Mode (COSMO-CLM). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS), including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics, including the West African Monsoon, within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.
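    Agro-climatological indices such as the onset of the rainy season admit many definitions. Purely as an illustration, the sketch below implements a simple threshold-based variant (a wet spell not followed by a long dry spell); the thresholds are hypothetical and are not those used in the study.

```python
import numpy as np

def rainy_season_onset(daily_precip, wet_sum=20.0, wet_window=3,
                       dry_len=10, dry_thresh=1.0, lookahead=30):
    """Toy onset index: first day d on which precipitation over
    [d, d+wet_window) totals at least wet_sum mm and no dry spell of
    dry_len consecutive days (< dry_thresh mm) occurs in the following
    lookahead days.  Returns the day index or None."""
    p = np.asarray(daily_precip, float)
    for d in range(len(p) - lookahead):
        if p[d:d + wet_window].sum() < wet_sum:
            continue
        longest, current = 0, 0
        for rain in p[d:d + lookahead]:
            current = current + 1 if rain < dry_thresh else 0
            longest = max(longest, current)
        if longest < dry_len:
            return d
    return None

# Hypothetical daily precipitation series (mm) for one station-year
rng = np.random.default_rng(6)
precip = np.where(rng.random(365) < 0.25, rng.gamma(2.0, 6.0, 365), 0.0)
print(rainy_season_onset(precip))
```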

  3. Inflation, the Higgs field and the resolution of the Cosmological Constant Paradox

    Science.gov (United States)

    De Martini, Francesco

    2017-08-01

    The nature of the scalar field responsible for the cosmological inflation, the 'inflaton', is found to be rooted in the most fundamental concept of Weyl's differential geometry: the parallel displacement of vectors in curved space-time. Within this novel dynamical scenario, the standard electroweak theory of leptons based on the SU(2)_L ⊗ U(1)_Y as well as on the conformal groups of space-time Weyl's transformations is analyzed within the framework of a general-relativistic, conformally covariant scalar-tensor theory that includes the electromagnetic and the Yang-Mills fields. A Higgs mechanism within a spontaneous symmetry breaking process is identified and this offers formal connections between some relevant properties of the elementary particles and the dark energy content of the Universe. An 'Effective Cosmological Potential' V_eff is expressed in terms of the dark energy potential V_Λ ≡ M_Λ^2 via the 'mass reduction parameter' ζ ≡ √(|V_eff|/|V_Λ|), a general property of the Universe. The mass of the Higgs boson, which is considered a 'free parameter' by the standard electroweak theory, is found by our theory to be proportional to the geometrical mean M_H ∝ √(M_eff × M_P) of the Planck mass M_P and of the mass M_eff ≡ √(|V_eff|), which accounts for the measured Cosmological Constant, i.e. the measured content of vacuum energy in the Universe. The experimental result obtained by the ATLAS and CMS Collaborations at CERN in the year 2012, M_H = 125.09 GeV/c^2, leads by our theory to a value M_eff ≈ 3.19 × 10^-6 eV/c^2. The peculiar mathematical structure of V_eff offers a clue towards the resolution of a most intriguing puzzle of modern quantum field theory, the 'Cosmological Constant Paradox'.

  4. High energy physics and cosmology

    International Nuclear Information System (INIS)

    Silk, J.I.; Davis, M.

    1989-01-01

    This research will focus on the implications of recent theories and experiments in high energy physics for the evolution of the early Universe, and on the constraints that cosmological considerations can place on such theories. Several problems are under investigation, including the development of constraints on the inflationary predictions of scale-free primordial fluctuations in a universe at critical closure density by studying their linear and non-linear evolution after they re-enter the particle horizon. We will examine the observable imprint of primordial density fluctuations on the cosmic microwave background radiation in curved cosmological models. Most astronomical evidence points to an open universe: one of our goals is to reconcile this conclusion with the particle physics input. We will investigate the response of the matter distribution to a network of cosmic strings produced during an early symmetry-breaking transition, and compute the resulting cosmic microwave background anisotropies. We will simulate the formation of large-scale structures whose dynamics are dominated by weakly interacting particles such as axions, massive neutrinos or photinos in order to model the formation of galaxies, galaxy clusters and superclusters. We will study the distortions in the microwave background radiation, both spectral and angular, that are produced by ionized gas associated with forming clusters and groups of galaxies. We will also study constraints on exotic cooling mechanisms involving axions and majorons set by stellar evolution, and the energy input into low mass stars by cold dark matter annihilation in galactic nuclei. We will compute the detailed gamma-ray spectrum predicted by various cold dark matter candidates undergoing annihilation in the galactic halo and bulge

  5. [High energy physics and cosmology

    International Nuclear Information System (INIS)

    Silk, J.I.; Davis, M.

    1988-01-01

    This research will focus on the implications of recent theories and experiments in high energy physics for the evolution of the early Universe, and on the constraints that cosmological considerations can place on such theories. Several problems are under investigation, including the development of constraints on the inflationary predictions of scale-free primordial fluctuations in a universe at critical closure density by studying their linear and non-linear evolution after they re-enter the particle horizon. We will examine the observable imprint of primordial density fluctuations on the cosmic microwave background radiation in curved cosmological models. Most astronomical evidence points to an open universe: one of our goals is to reconcile this conclusion with the particle physics input. We will investigate the response of the matter distribution to a network of cosmic strings produced during an early symmetry-breaking transition, and compute the resulting cosmic microwave background anisotropies. We will simulate the formation of large-scale structures whose dynamics are dominated by weakly interacting particles such as axions, massive neutrinos or photinos in order to model the formation of galaxies, galaxy clusters and superclusters. We will study the distortions in the microwave background radiation, both spectral and angular, that are produced by ionized gas associated with forming clusters and groups of galaxies. We will also study constraints on exotic cooling mechanisms involving axions and majorons set by stellar evolution, and the energy input into low mass stars by cold dark matter annihilation in galactic nuclei. We will compute the detailed gamma-ray spectrum predicted by various cold dark matter candidates undergoing annihilation in the galactic halo and bulge

  6. Geant4 simulation of a 3D high resolution gamma camera

    International Nuclear Information System (INIS)

    Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.

    2015-01-01

    The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity, relying on both distance/absorption and Compton scattering techniques, without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance, taking full advantage of Geant4 features that allow construction of the needed detector geometry, full control of the incident gamma particles, and study of the detector response in order to test the suggested geometries. Three different geometries are simulated, and each configuration is tested with three different scintillation materials (LaBr3, LYSO and CeBr3)

  7. Modified Baryonic Dynamics: two-component cosmological simulations with light sterile neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Angus, G.W.; Gentile, G. [Department of Physics and Astrophysics, Vrije Universiteit Brussel, Pleinlaan 2, Brussels, 1050 Belgium (Belgium); Diaferio, A. [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, Torino, I-10125 Italy (Italy); Famaey, B. [Observatoire astronomique de Strasbourg, CNRS UMR 7550, Université de Strasbourg, 11 rue de l' Université, Strasbourg, F-67000 France (France); Heyden, K.J. van der, E-mail: garry.angus@vub.ac.be, E-mail: diaferio@ph.unito.it, E-mail: benoit.famaey@astro.unistra.fr, E-mail: gianfranco.gentile@ugent.be, E-mail: heyden@ast.uct.ac.za [Astrophysics, Cosmology and Gravity Centre, Dept. of Astronomy, University of Cape Town, Private Bag X3, Rondebosch, 7701 South Africa (South Africa)

    2014-10-01

    In this article we continue to test cosmological models centred on Modified Newtonian Dynamics (MOND) with light sterile neutrinos, which could in principle be a way to solve the fine-tuning problems of the standard model on galaxy scales while preserving successful predictions on larger scales. Due to previous failures of the simple MOND cosmological model, here we test a speculative model where the modified gravitational field is produced only by the baryons and the sterile neutrinos produce a purely Newtonian field (hence Modified Baryonic Dynamics). We use two-component cosmological simulations to separate the baryonic N-body particles from the sterile neutrino ones. The premise is to attenuate the over-production of massive galaxy cluster halos which were prevalent in the original MOND plus light sterile neutrinos scenario. Theoretical issues with such a formulation notwithstanding, the Modified Baryonic Dynamics model fails to produce the correct amplitude for the galaxy cluster mass function for any reasonable value of the primordial power spectrum normalisation.

  8. Simulating the Growth of a Disk Galaxy and its Supermassive Black Hole in a Cosmological Context

    International Nuclear Information System (INIS)

    Levine, Robyn Deborah; JILA, Boulder

    2008-01-01

    Supermassive black holes (SMBHs) are ubiquitous in the centers of galaxies. Their formation and subsequent evolution is inextricably linked to that of their host galaxies, and the study of galaxy formation is incomplete without the inclusion of SMBHs. The present work seeks to understand the growth and evolution of SMBHs through their interaction with the host galaxy and its environment. In the first part of the thesis (Chap. 2 and 3), we combine a simple semi-analytic model of outflows from active galactic nuclei (AGN) with a simulated dark matter density distribution to study the impact of SMBH feedback on cosmological scales. We find that constraints can be placed on the kinetic efficiency of such feedback using observations of the filling fraction of the Lyα forest. We also find that AGN feedback is energetic enough to redistribute baryons over cosmological distances, having potentially significant effects on the interpretation of cosmological data which are sensitive to the total matter density distribution (e.g. weak lensing). However, truly assessing the impact of AGN feedback in the universe necessitates large-dynamic range simulations with extensive treatment of baryonic physics to first model the fueling of SMBHs. In the second part of the thesis (Chap. 4-6) we use a hydrodynamic adaptive mesh refinement simulation to follow the growth and evolution of a typical disk galaxy hosting a SMBH, in a cosmological context. The simulation covers a dynamical range of 10 million allowing us to study the transport of matter and angular momentum from super-galactic scales all the way down to the outer edge of the accretion disk around the SMBH. Focusing our attention on the central few hundred parsecs of the galaxy, we find the presence of a cold, self-gravitating, molecular gas disk which is globally unstable. The global instabilities drive super-sonic turbulence, which maintains local stability and allows gas to fuel a SMBH without first fragmenting completely

  9. The Atacama Cosmology Telescope: Cosmology from Galaxy Clusters Detected Via the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sehgal, Neelima; Trac, Hy; Acquaviva, Viviana; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe; Battistelli, Elia S.; Bond, J. Richard

    2010-01-01

    We present constraints on cosmological parameters based on a sample of Sunyaev-Zel'dovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically-confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 ± 0.115 and w = -1.14 ± 0.35 for a spatially-flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 ± 0.044 and w = -1.05 ± 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus type Ia supernovae, which give σ8 = 0.802 ± 0.038 and w = -0.98 ± 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.
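    The abstract does not spell out the functional form of the 4-parameter mass-SZ scaling relation. A commonly used parameterization of this kind (an illustrative assumption here, not necessarily the exact form fitted in the paper) relates the SZ observable Y to the cluster mass M through a power law with log-normal scatter,

        \ln Y = \ln A + B \,\ln\!\left(\frac{M}{M_{\rm pivot}}\right) + C \,\ln E(z), \qquad E(z) \equiv H(z)/H_0,

    so that the four fitted quantities are the normalization A, the mass slope B, the redshift-evolution index C, and the intrinsic scatter \sigma_{\ln Y}. Marginalizing over these with conservative priors is what relaxes the constraints quoted above relative to the fixed, X-ray-calibrated relation.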

  10. The simulation of medicanes in a high-resolution regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Cavicchia, Leone [Centro Euro-Mediterraneo per i Cambiamenti Climatici, Bologna (Italy); Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); Ca' Foscari University, Venice (Italy); Storch, Hans von [Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); University of Hamburg, Meteorological Institute, Hamburg (Germany)

    2012-11-15

    Medicanes, strong mesoscale cyclones with tropical-like features, develop occasionally over the Mediterranean Sea. Due to the scarcity of observations over sea and the coarse resolution of the long-term reanalysis datasets, it is difficult to study systematically the multidecadal statistics of sub-synoptic medicanes. Our goal is to assess the long-term variability and trends of medicanes, obtaining a long-term climatology through dynamical downscaling of the NCEP/NCAR reanalysis data. In this paper, we examine the robustness of this method and investigate the value added for the study of medicanes. To do so, we performed several climate mode simulations with a high resolution regional atmospheric model (CCLM) for a number of test cases described in the literature. We find that the medicanes are formed in the simulations, with deeper pressures and stronger winds than in the driving global NCEP reanalysis. The tracks are adequately reproduced. We conclude that our methodology is suitable for constructing multi-decadal statistics and scenarios of current and possible future medicane activities. (orig.)

  11. The Atacama Cosmology Telescope: Cosmology from Galaxy Clusters Detected via the Sunyaev-Zeldovich Effect

    International Nuclear Information System (INIS)

    Sehgal, N.

    2011-01-01

    We present constraints on cosmological parameters based on a sample of Sunyaev-Zeldovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically-confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 ± 0.115 and w = -1.14 ± 0.35 for a spatially-flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 ± 0.044 and w = -1.05 ± 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus type Ia supernovae, which give σ8 = 0.802 ± 0.038 and w = -0.98 ± 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.

  12. Kinetic energy spectra, vertical resolution and dissipation in high-resolution atmospheric simulations.

    Science.gov (United States)

    Skamarock, W. C.

    2017-12-01

    We have performed week-long full-physics simulations with the MPAS global model at 15 km cell spacing using vertical mesh spacings of 800, 400, 200 and 100 meters in the mid-troposphere through the mid-stratosphere. We find that the horizontal kinetic energy spectra in the upper troposphere and stratosphere do not converge with increasing vertical resolution until we reach 200 meter level spacing. Examination of the solutions indicates that significant inertia-gravity waves are not vertically resolved at the lower vertical resolutions. Diagnostics from the simulations indicate that the primary kinetic energy dissipation results from the vertical mixing within the PBL parameterization and from the gravity-wave drag parameterization, with smaller but significant contributions from damping in the vertical transport scheme and from the horizontal filters in the dynamical core. Most of the kinetic energy dissipation in the free atmosphere occurs within breaking mid-latitude baroclinic waves. We will briefly review these results and their implications for atmospheric model configuration and for atmospheric dynamics, specifically that related to the mesoscale kinetic energy spectrum.
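    As a concrete illustration of the diagnostic discussed above, the following minimal sketch estimates a horizontal kinetic energy spectrum from gridded wind components on a doubly periodic plane by summing the 2D Fourier energy density over wavenumber shells. It is a generic sketch under stated assumptions (a synthetic random field and uniform 15 km spacing stand in for model-level winds); it is not code from the MPAS study.

        # Hedged sketch: horizontal kinetic energy spectrum of gridded winds.
        # The synthetic input field and grid spacing are illustrative assumptions.
        import numpy as np

        def kinetic_energy_spectrum(u, v, dx):
            """Return shell-binned wavenumbers and the 1D kinetic energy spectrum."""
            ny, nx = u.shape
            ke_hat = 0.5 * (np.abs(np.fft.fft2(u))**2 + np.abs(np.fft.fft2(v))**2)
            ke_hat /= (nx * ny)**2                      # normalise the discrete transform

            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
            kmag = np.sqrt(kx[np.newaxis, :]**2 + ky[:, np.newaxis]**2)

            dk = 2.0 * np.pi / (nx * dx)                # shell width = lowest resolved wavenumber
            bins = np.arange(dk / 2, kmag.max() + dk, dk)
            shell = np.digitize(kmag.ravel(), bins)
            power = np.bincount(shell, weights=ke_hat.ravel(), minlength=bins.size + 1)
            k_centres = 0.5 * (bins[:-1] + bins[1:])
            return k_centres, power[1:bins.size] / dk   # spectral density per unit wavenumber

        rng = np.random.default_rng(0)
        u = rng.standard_normal((256, 256))             # stand-in for a model-level wind field
        v = rng.standard_normal((256, 256))
        k, E = kinetic_energy_spectrum(u, v, dx=15e3)   # 15 km cell spacing, as in the abstract

    A log-log plot of E against k is the quantity whose convergence with vertical resolution is discussed above; on real model output the same binning would typically be applied level by level and averaged over output times.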

  13. North Atlantic Tropical Cyclones: historical simulations and future changes with the new high-resolution Arpege AGCM.

    Science.gov (United States)

    Pilon, R.; Chauvin, F.; Palany, P.; Belmadani, A.

    2017-12-01

    A new version of the variable high-resolution Meteo-France Arpege atmospheric general circulation model (AGCM) has been developed for tropical cyclone (TC) studies, with a focus on the North Atlantic basin, where the model horizontal resolution is 15 km. Ensemble historical AMIP (Atmospheric Model Intercomparison Project)-type simulations (1965-2014) and future projections (2020-2080) under the IPCC (Intergovernmental Panel on Climate Change) representative concentration pathway (RCP) 8.5 scenario have been produced. A TC-like vortex tracking algorithm is used to investigate TC activity and variability. TC frequency, genesis, geographical distribution and intensity are examined. Historical simulations are compared to best-track and reanalysis datasets. Model TC frequency is generally realistic but tends to be too high during the first decade of the historical simulations. Biases appear to originate from both the tracking algorithm and the model climatology. Nevertheless, the model is able to simulate extremely well intense TCs corresponding to category 5 hurricanes in the North Atlantic, where grid resolution is highest. Interaction between developing TCs and vertical wind shear is shown to be a contributing factor to TC variability. Future changes in TC activity and properties are also discussed.

  14. High resolution real time capable combustion chamber simulation; Zeitlich hochaufloesende echtzeitfaehige Brennraumsimulation

    Energy Technology Data Exchange (ETDEWEB)

    Piewek, J. [Volkswagen AG, Wolfsburg (Germany)

    2008-07-01

    The article describes a zero-dimensional model for real-time-capable combustion chamber pressure calculation with analogue pressure sensor output. Closed-loop operation of an Engine Control Unit is demonstrated on a hardware-in-the-loop simulator (HiL simulator) for a 4-cylinder common rail diesel engine. The presentation of the model focuses on the simulation of the load variation, which does not depend on the injection system and thus on the simulated heat release rate. Particular attention is paid to the simulation and the resulting test possibilities with regard to fully variable valve trains. It is shown that black box models contained in the HiL mean value model for the aspirated gas mass, the exhaust gas temperature after the outlet valve and the mean indicated pressure can be replaced by calculations from the high-resolution combustion chamber model. (orig.)

  15. Air-Sea Interaction Processes in Low and High-Resolution Coupled Climate Model Simulations for the Southeast Pacific

    Science.gov (United States)

    Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.

    2017-12-01

    The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients and extensive but geometrically thin stratocumulus decks, makes the Southeast Pacific (SEP) a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1° oceanic and 0.5° atmospheric) and (1.125° oceanic and 0.9° atmospheric). The SST error growth in the first six days of integration (fast errors) and that resulting from model drift (saturated errors) are assessed and compared in order to evaluate the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3°C) near the continental borders and negative offshore (-0.1°C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and more spatially concentrated. This suggests that the processes driving the errors become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one due to higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values up to +4°C. These are accompanied by a damping of upwelling and a decrease in cloud cover. High- and low-resolution models presented notable differences in how SST

  16. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    Science.gov (United States)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
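    A minimal sketch of the "learning from statistics" idea described above, using a toy AR(1) process in place of a parameterized process: a single parameter of the toy model is chosen so that low-order statistics (mean and variance) of its output match assumed observational targets. The toy model, the target values and the brute-force search are illustrative assumptions; the paper itself discusses data-assimilation and machine-learning methods for this task.

        # Hedged sketch: calibrate a toy parameterization by matching low-order statistics.
        import numpy as np

        def simulate_ar1(damping, n_steps=20000, forcing_std=1.0, seed=0):
            """Toy 'climate model': x[t+1] = (1 - damping) * x[t] + noise."""
            rng = np.random.default_rng(seed)
            x = np.zeros(n_steps)
            for t in range(n_steps - 1):
                x[t + 1] = (1.0 - damping) * x[t] + forcing_std * rng.standard_normal()
            return x

        def low_order_statistics(x):
            """Statistics used for matching: time mean and variance."""
            return np.array([x.mean(), x.var()])

        observed_stats = np.array([0.0, 4.0])          # assumed 'observational' targets

        # Brute-force search over the parameter; a production system would use
        # ensemble Kalman inversion, MCMC or emulators instead, as discussed above.
        candidates = np.linspace(0.05, 0.95, 19)
        mismatch = [np.sum((low_order_statistics(simulate_ar1(a)) - observed_stats)**2)
                    for a in candidates]
        best = candidates[int(np.argmin(mismatch))]
        print(f"calibrated damping parameter: {best:.2f}")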

  17. High resolution simulations of orographic flow over a complex terrain on the Southeast coast of Brazil

    Science.gov (United States)

    Chou, S. C.; Zolino, M. M.; Gomes, J. L.; Bustamante, J. F.; Lima-e-Silva, P. P.

    2012-04-01

    The Eta Model has been used operationally by CPTEC to produce weather forecasts over South America since 1997 and has gone through several upgrades. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The Eta Model was configured with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by the Eta-15km, which in turn is driven by Era-Interim reanalyses. The model domain includes two Brazilian cities, Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain and coastline. Mountains rise up to about 700 m. The region suffers frequent events of floods and landslides. The objective of this work is to evaluate high-resolution simulations of wind and temperature in this complex area. Verification of the model runs uses observations taken at a nuclear power plant. Accurate near-surface wind direction and magnitude are needed for the plant emergency plan, and winds are highly sensitive to model spatial resolution and atmospheric stability. Verification of two cases during summer shows that the model has a clear diurnal cycle signal for wind in that region. The area is characterized by weak winds, which makes the simulation more difficult. The simulated wind magnitude is about 1.5 m/s, close to the observed value of about 2 m/s; however, the observed change of wind direction with the sea breeze is fast whereas it is slow in the simulations. Nighttime katabatic flow is captured by the simulations. Comparison against Eta-5km runs shows that the valley circulation is better described in the 2-km resolution run. Simulated temperatures follow closely the observed diurnal cycle. Experiments improving some surface conditions, such as the surface temperature and land cover, show reduced simulation errors and an improved diurnal cycle.

  18. Experimental Investigation and High Resolution Simulation of In-Situ Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen; Tony Kovscek

    2008-04-30

    This final technical report describes work performed for the project 'Experimental Investigation and High Resolution Numerical Simulator of In-Situ Combustion Processes', DE-FC26-03NT15405. In summary, this work improved our understanding of in-situ combustion (ISC) process physics and oil recovery. This understanding was translated into improved conceptual models and a suite of software algorithms that extended predictive capabilities. We pursued experimental, theoretical, and numerical tasks during the performance period. The specific project objectives were (i) to identify, experimentally, chemical additives/injectants that improve combustion performance and to delineate the physics of the improved performance, (ii) to establish a benchmark one-dimensional experimental data set for verification of in-situ combustion dynamics computed by simulators, (iii) to develop improved numerical methods that can be used to describe in-situ combustion more accurately, and (iv) to lay the underpinnings of a highly efficient, 3D, in-situ combustion simulator using adaptive mesh refinement techniques and parallelization. We believe that the project goals were met and exceeded, as discussed.

  19. Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations

    Science.gov (United States)

    Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.

    2018-05-01

    We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are more accurate than ~5% for haloes with masses of 10^12.0 - 10^14.5 h^-1 M⊙ for radii 0.05 baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.
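    Because the extension described above hinges on a concentration-mass-redshift relation for intra-halo scales, the following hedged sketch shows how such a relation enters a halo density profile. The NFW form is standard, but the power-law c(M, z) normalization and slope, the unit conventions, and the neglect of the redshift evolution of the critical density are illustrative assumptions, not the excursion-set model used in the paper.

        # Hedged sketch: NFW density profile with an assumed c(M, z) power law.
        import numpy as np

        RHO_CRIT0 = 2.775e11   # critical density today in h^2 Msun / Mpc^3

        def concentration(m200c, z, c0=9.0, alpha=-0.1):
            """Illustrative c(M, z) power law around a pivot mass of 1e13 Msun/h."""
            return c0 * (m200c / 1e13)**alpha / (1.0 + z)

        def nfw_density(r, m200c, z):
            """NFW rho(r) in h^2 Msun / Mpc^3 for r in Mpc/h and M200c in Msun/h."""
            c = concentration(m200c, z)
            r200 = (3.0 * m200c / (4.0 * np.pi * 200.0 * RHO_CRIT0))**(1.0 / 3.0)
            rs = r200 / c
            delta_c = (200.0 / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
            x = r / rs
            return delta_c * RHO_CRIT0 / (x * (1.0 + x)**2)

        r = np.logspace(-2, 0.5, 50)                    # 0.01 to ~3 Mpc/h
        rho = nfw_density(r, m200c=1e14, z=0.3)

    The corresponding weak galaxy-galaxy lensing profile would then follow by projecting rho(r) along the line of sight and differencing the mean enclosed and local surface densities.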

  20. High-Resolution Sonars: What Resolution Do We Need for Target Recognition?

    Directory of Open Access Journals (Sweden)

    Pailhas Yan

    2010-01-01

    Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine-countermeasure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels decreased. Sonar images are distance images, but at high resolution they tend to appear visually as optical images. Traditionally, algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling development and evaluation of such algorithms. We develop a classifier and then analyse its performance using our simulated synthetic sonar images. Finally, we discuss sensor resolution requirements to achieve effective classification of various targets and demonstrate that with high-resolution sonars target highlight analysis is the key for target recognition.

  1. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

    Directory of Open Access Journals (Sweden)

    J. Tao

    2012-09-01

    Due to its all-weather data acquisition capabilities, high resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications like change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. An application of the concept is presented for the city centre of Munich, where the comparison of the simulation to the TerraSAR-X data shows a good similarity. Based on the result of simulation and matching, special features (e.g. double-bounce lines, shadow areas, etc.) can be automatically indicated in the SAR image.

  2. SPECTRA OF STRONG MAGNETOHYDRODYNAMIC TURBULENCE FROM HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Beresnyak, Andrey

    2014-01-01

    Magnetohydrodynamic (MHD) turbulence is present in a variety of solar and astrophysical environments. Solar wind fluctuations with frequencies lower than 0.1 Hz are believed to be mostly governed by Alfvénic turbulence, with particle transport depending on the power spectrum and the anisotropy of such turbulence. Recently, conflicting spectral slopes for the inertial range of MHD turbulence have been reported by different groups. Spectral shapes from earlier simulations showed that MHD turbulence is less scale-local compared with hydrodynamic turbulence. This is why higher-resolution simulations, and careful and rigorous numerical analysis, are especially needed for the MHD case. In this Letter, we present two groups of simulations with resolution up to 4096^3, which are numerically well-resolved and have been analyzed with an exact and well-tested method of scaling study. Our results from both simulation groups indicate that the asymptotic power spectral slope for all energy-related quantities, such as total energy and residual energy, is around -1.7, close to Kolmogorov's -5/3. This suggests that residual energy is a constant fraction of the total energy and that in the asymptotic regime of Alfvénic turbulence magnetic and kinetic spectra have the same scaling. The -1.5 slope for energy and the -2 slope for residual energy, which have been suggested earlier, are incompatible with our numerics.
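    For reference, the quantities whose slopes are quoted above can be written with the standard definitions (not specific to this Letter)

        E(k) = E_v(k) + E_b(k), \qquad E_r(k) = E_v(k) - E_b(k),

    where E_v and E_b are the kinetic and magnetic energy spectra. The competing inertial-range scalings are E(k) \propto k^{-5/3} (Kolmogorov-like, consistent with the slope of about -1.7 found here) versus E(k) \propto k^{-3/2}, and E_r(k) \propto k^{-2} is the residual-energy scaling that the authors find incompatible with their simulations.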

  3. Simulating nonlinear cosmological structure formation with massive neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Arka; Dalal, Neal, E-mail: abanerj6@illinois.edu, E-mail: dalaln@illinois.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080 (United States)

    2016-11-01

    We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.

  4. Simulating nonlinear cosmological structure formation with massive neutrinos

    International Nuclear Information System (INIS)

    Banerjee, Arka; Dalal, Neal

    2016-01-01

    We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.

  5. Simulation studies for a high resolution time projection chamber at the international linear collider

    Energy Technology Data Exchange (ETDEWEB)

    Muennich, A.

    2007-03-26

    The International Linear Collider (ILC) is planned to be the next large accelerator. The ILC will be able to perform high precision measurements only possible in the clean environment of electron-positron collisions. In order to reach this high accuracy, the requirements for the detector performance are challenging. Several detector concepts are currently under study. The understanding of the detector and its performance will be crucial to extract the desired physics results from the data. To optimise the detector design, simulation studies are needed. Simulation packages like GEANT4 make it possible to model the detector geometry and simulate the energy deposit in the different materials. However, the detector response, taking into account the transport of the produced charge to the readout devices and the effects of the readout electronics, cannot be described in detail. These processes in the detector will change the measured position of the energy deposit relative to the point of origin. The determination of this detector response is the task of detailed simulation studies, which have to be carried out for each subdetector. A high resolution Time Projection Chamber (TPC) with gas amplification based on micro-pattern gas detectors is one of the options for the main tracking system at the ILC. In the present thesis a detailed simulation tool to study the performance of a TPC was developed. Its goal is to find the optimal settings to reach an excellent momentum and spatial resolution. After an introduction to the present status of particle physics and the ILC project, with special focus on the TPC as central tracker, the simulation framework is presented. The basic simulation methods and implemented processes are introduced. Within this stand-alone simulation framework each electron produced by primary ionisation is transported through the gas volume and amplified using Gas Electron Multipliers (GEMs). The output format of the simulation is identical to the raw data from a

  6. Statistics of Deep Convection in the Congo Basin Derived From High-Resolution Simulations.

    Science.gov (United States)

    White, B.; Stier, P.; Kipling, Z.; Gryspeerdt, E.; Taylor, S.

    2016-12-01

    Convection transports moisture, momentum, heat and aerosols through the troposphere, and so the temporal variability of convection is a major driver of global weather and climate. The Congo basin is home to some of the most intense convective activity on the planet and is under strong seasonal influence of biomass burning aerosol. However, deep convection in the Congo basin remains understudied compared to other regions of tropical storm systems, especially when compared to the neighbouring, relatively well-understood West African climate system. We use the WRF model to perform a high-resolution, cloud-system-resolving simulation to investigate convective storm systems in the Congo. Our setup pushes the boundaries of current computational resources, using a 1 km grid length over a domain covering millions of square kilometres and for a time period of one month. This allows us to draw statistical conclusions on the nature of the simulated storm systems. Comparing data from satellite observations and the model enables us to quantify the diurnal variability of deep convection in the Congo basin. This approach allows us to evaluate our simulations despite the lack of in-situ observational data, and provides a more comprehensive analysis of the diurnal cycle than has previously been shown. Further, we show that high-resolution convection-permitting simulations performed over near-seasonal timescales can be used in conjunction with satellite observations as an effective tool to evaluate new convection parameterisations.

  7. The acceptance of surface detector arrays for high energy cosmological muon neutrinos

    International Nuclear Information System (INIS)

    Vo Van Thuan; Hoang Van Khanh

    2011-01-01

    In order to search for ultra-high energy cosmological earth-skimming muon neutrinos with a surface detector array (SD) similar to that of the Pierre Auger Observatory (PAO), we propose to use the transition electromagnetic radiation induced at the medium interface by earth-skimming muons to trigger a few aligned neighboring Cherenkov SD stations. Simulations of the acceptance of a model SD array have been performed to estimate the detection probability of earth-skimming muon neutrinos.

  8. Monte Carlo Simulations of Ultra-High Energy Resolution Gamma Detectors for Nuclear Safeguards

    International Nuclear Information System (INIS)

    Robles, A.; Drury, O.B.; Friedrich, S.

    2009-01-01

    Ultra-high energy resolution superconducting gamma-ray detectors can improve the accuracy of non-destructive analysis for unknown radioactive materials. These detectors offer an order of magnitude improvement in resolution over conventional high-purity germanium detectors. The increase in resolution reduces errors from line overlap and allows for the identification of weaker gamma-rays by increasing the magnitude of the peaks above the background. In order to optimize the detector geometry and to understand the spectral response function, Geant4, a Monte Carlo simulation package coded in C++, was used to model the detectors. Using a 1 mm^3 Sn absorber and a monochromatic gamma source, different absorber geometries were tested. The simulation was expanded to include the Cu block behind the absorber and four layers of shielding required for detector operation at 0.1 K. The energy spectrum was modeled for an Am-241 and a Cs-137 source, including scattering events in the shielding, and the results were compared to experimental data. For both sources the main spectral features such as the photopeak, the Compton continuum, the escape x-rays and the backscatter peak were identified. Finally, the low energy response of a Pu-239 source was modeled to assess the feasibility of Pu-239 detection in spent fuel. This modeling of superconducting detectors can serve as a guide to optimize the configuration in future spectrometer designs.

  9. The Atacama Cosmology Telescope: Cosmology from Galaxy Clusters Detected via the Sunyaev-Zel'dovich Effect

    Energy Technology Data Exchange (ETDEWEB)

    Sehgal, Neelima; Trac, Hy; Acquaviva, Viviana; Ade, Peter A.R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L.Felipe; Battistelli, Elia S.; Bond, J.Richard; Brown, Ben; Burger, Bryce; Chervenak, Jay; Das, Sudeep; Devlin, Mark J.; Dicker, Simon R.; Doriese, W.Bertrand; Dunkley, Joanna; Dunner, Rolando; Essinger-Hileman, Thomas; Fisher, Ryan P.

    2011-08-18

    We present constraints on cosmological parameters based on a sample of Sunyaev-Zeldovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically-confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 ± 0.115 and w = -1.14 ± 0.35 for a spatially-flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 ± 0.044 and w = -1.05 ± 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus type Ia supernovae, which give σ8 = 0.802 ± 0.038 and w = -0.98 ± 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.

  10. Comparison of Explicitly Simulated and Downscaled Tropical Cyclone Activity in a High-Resolution Global Climate Model

    Directory of Open Access Journals (Sweden)

    Hirofumi Tomita

    2010-01-01

    The response of tropical cyclone activity to climate change is a matter of great inherent interest and practical importance. Most current global climate models are not, however, capable of adequately resolving tropical cyclones; this has led to the development of downscaling techniques designed to infer tropical cyclone activity from the large-scale fields produced by climate models. Here we compare the statistics of tropical cyclones simulated explicitly in a very high resolution (~14 km grid mesh) global climate model to the results of one such downscaling technique driven by the same global model. This is done for a simulation of the current climate and also for a simulation of a climate warmed by the addition of carbon dioxide. The explicitly simulated and downscaled storms are similarly distributed in space, but the intensity distribution of the downscaled events has a somewhat longer high-intensity tail, owing to the higher resolution of the downscaling model. Both explicitly simulated and downscaled events show large increases in the frequency of events at the high-intensity ends of their respective intensity distributions, but the downscaled storms also show increases in low-intensity events, whereas the explicitly simulated weaker events decline in number. On the regional scale, there are large differences in the responses of the explicitly simulated and downscaled events to global warming. In particular, the power dissipation of downscaled events shows a 175% increase in the Atlantic, while the power dissipation of explicitly simulated events declines there.
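    The abstract does not define the power dissipation metric; the standard quantity in this literature (an assumption here, not necessarily the exact diagnostic used by the authors) is the power dissipation index, obtained by integrating the cube of the maximum surface wind speed over each storm's lifetime and summing over storms,

        \mathrm{PDI} = \sum_{\text{storms}} \int_0^{\tau} v_{\max}^{3}(t)\, dt ,

    which serves as a proxy for the total dissipated power \mathrm{PD} \approx \int\!\!\int C_D\, \rho\, |\mathbf{V}|^{3}\, dA\, dt, so the quoted 175% change reflects changes in storm frequency, intensity and duration together.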

  11. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David; Ott, Christian D. [TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Abdikamalov, Ernazar [Department of Physics, School of Science and Technology, Nazarbayev University, Astana 010000 (Kazakhstan); Couch, Sean M. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Haas, Roland [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, D-14476 Golm (Germany); Schnetter, Erik, E-mail: dradice@caltech.edu [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada)

    2016-03-20

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased.

  12. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Radice, David; Ott, Christian D.; Abdikamalov, Ernazar; Couch, Sean M.; Haas, Roland; Schnetter, Erik

    2016-01-01

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased

  13. Reconstructing the distribution of haloes and mock galaxies below the resolution limit in cosmological simulations

    OpenAIRE

    de la Torre, Sylvain; Peacock, John A.

    2012-01-01

    We present a method for populating dark matter simulations with haloes of mass below the resolution limit. It is based on stochastically sampling a field derived from the density field of the halo catalogue, using constraints from the conditional halo mass function n(m|δ). We test the accuracy of the method and show its application in the context of building mock galaxy samples. We find that this technique allows precise reproduction of the two-point statistics of galaxies in mock samp...
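    A minimal sketch of the sampling step described above (not the authors' code): cells of a coarse density field are populated with sub-resolution haloes by Poisson-sampling a conditional mass function n(m|δ). The toy power-law mass function whose amplitude scales with (1 + δ), the grid size, and the mass range are illustrative assumptions; the real method constrains n(m|δ) from the resolved halo catalogue itself.

        # Hedged sketch: stochastically add haloes below the resolution limit.
        import numpy as np

        rng = np.random.default_rng(42)

        def conditional_mass_function(m_edges, delta, n0=1e-3, slope=-1.9, cell_volume=1.0):
            """Expected halo counts per mass bin in one cell, given the local overdensity."""
            dn_dlnm = n0 * (m_edges[:-1] / 1e10)**slope      # toy mean mass function
            return np.clip(1.0 + delta, 0.0, None) * dn_dlnm * np.diff(np.log(m_edges)) * cell_volume

        def sample_subresolution_haloes(delta_field, m_edges):
            """Return cell indices and halo masses drawn below the resolution limit."""
            cells, masses = [], []
            for idx, delta in np.ndenumerate(delta_field):
                counts = rng.poisson(conditional_mass_function(m_edges, delta))
                for m_lo, m_hi, n in zip(m_edges[:-1], m_edges[1:], counts):
                    if n > 0:
                        # place haloes uniformly in log mass within the bin
                        masses.append(np.exp(rng.uniform(np.log(m_lo), np.log(m_hi), n)))
                        cells.extend([idx] * n)
            return cells, np.concatenate(masses) if masses else np.array([])

        delta_field = rng.normal(0.0, 0.8, size=(8, 8, 8))   # stand-in coarse density field
        m_edges = np.logspace(9, 11, 6)                      # assumed sub-resolution mass range
        cells, halo_masses = sample_subresolution_haloes(delta_field, m_edges)

    In the mock-building application mentioned above, the sampled haloes would additionally be assigned positions within each cell and then populated with mock galaxies.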

  14. Monte-Carlo simulation of a high-resolution inverse geometry spectrometer on the SNS. Long Wavelength Target Station

    International Nuclear Information System (INIS)

    Bordallo, H.N.; Herwig, K.W.

    2001-01-01

    Using the Monte-Carlo simulation program McStas, we present the design principles of the proposed high-resolution inverse geometry spectrometer on the SNS-Long Wavelength Target Station (LWTS). The LWTS will provide the high flux of long wavelength neutrons at the requisite pulse rate required by the spectrometer design. The resolution of this spectrometer lies between that routinely achieved by spin echo techniques and the design goal of the high power target station backscattering spectrometer. Covering this niche in energy resolution will allow systematic studies over the large dynamic range required by many disciplines, such as protein dynamics. (author)

  15. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km), as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present-climate simulations, a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  16. Constraints on cosmological parameters in power-law cosmology

    International Nuclear Information System (INIS)

    Rani, Sarita; Singh, J.K.; Altaibayeva, A.; Myrzakulov, R.; Shahalam, M.

    2015-01-01

    In this paper, we examine observational constraints on the power-law cosmology, which depends essentially on two parameters: H_0 (the Hubble constant) and q (the deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and 580 points of the Union2.1 compilation data, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it fits well with the Union2.1 compilation data but not with the H(z) data. However, the constraints obtained on the averaged parameters H_0 and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → -1. Finally, we observe that although the power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
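    In the usual power-law parameterization behind the two-parameter description above, the scale factor grows as a power of cosmic time, so that

        a(t) \propto t^{\beta}, \qquad H(z) = H_0\,(1+z)^{1/\beta}, \qquad q = \frac{1}{\beta} - 1 ,

    and the entire expansion history, and hence the fits to the H(z) and Union2.1 data, is fixed by H_0 and q alone.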

  17. Impact of ocean model resolution on CCSM climate simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kirtman, Ben P.; Rousset, Clement; Siqueira, Leo [University of Miami, Rosenstiel School for Marine and Atmospheric Science, Coral Gables, FL (United States); Bitz, Cecilia [University of Washington, Department of Atmospheric Science, Seattle, WA (United States); Bryan, Frank; Dennis, John; Hearn, Nathan; Loft, Richard; Tomas, Robert; Vertenstein, Mariana [National Center for Atmospheric Research, Boulder, CO (United States); Collins, William [University of California, Berkeley, Berkeley, CA (United States); Kinter, James L.; Stan, Cristiana [Center for Ocean-Land-Atmosphere Studies, Calverton, MD (United States); George Mason University, Fairfax, VA (United States)

    2012-09-15

    The current literature provides compelling evidence suggesting that an eddy-resolving (as opposed to eddy-permitting or eddy-parameterized) ocean component model will significantly impact the simulation of the large-scale climate, although this has not been fully tested to date in multi-decadal global coupled climate simulations. The purpose of this paper is to examine how resolved ocean fronts and eddies impact the simulation of large-scale climate. The model used for this study is the NCAR Community Climate System Model version 3.5 (CCSM3.5) - the forerunner to CCSM4. Two experiments are reported here. The control experiment is a 155-year present-day climate simulation using a 0.5° atmosphere component (zonal resolution 0.625°, meridional resolution 0.5°; land surface component at the same resolution) coupled to ocean and sea-ice components with zonal resolution of 1.2° and meridional resolution varying from 0.27° at the equator to 0.54° in the mid-latitudes. The second simulation uses the same atmospheric and land-surface models coupled to eddy-resolving 0.1° ocean and sea-ice component models. The simulations are compared in terms of how the representation of smaller scale features in the time mean ocean circulation and ocean eddies impact the mean and variable climate. In terms of the global mean surface temperature, the enhanced ocean resolution leads to a ubiquitous surface warming with a global mean surface temperature increase of about 0.2 °C relative to the control. The warming is largest in the Arctic and regions of strong ocean fronts and ocean eddy activity (i.e., Southern Ocean, western boundary currents). The Arctic warming is associated with significant losses of sea-ice in the high-resolution simulation. The sea surface temperature gradients in the North Atlantic, in particular, are better resolved in the high-resolution model leading to significantly sharper temperature gradients and associated large-scale shifts in the rainfall. In the extra-tropics, the

  18. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping

    2013-12-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water resource related issues and also emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered from a number of flash floods over the last decade, following short-intense rainfall events. The research presented here focuses on examining four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution inner domain (1-km) around Jeddah. A range of different convective closure and microphysics parameterization, together with high-resolution (4-km) sea surface temperature data are employed. Through examining comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanism producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  19. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping; McCabe, Matthew; Stenchikov, Georgiy L.; Evans, Jason; Kucera, Paul

    2013-01-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water resource related issues and also emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered from a number of flash floods over the last decade, following short-intense rainfall events. The research presented here focuses on examining four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution inner domain (1-km) around Jeddah. A range of different convective closure and microphysics parameterization, together with high-resolution (4-km) sea surface temperature data are employed. Through examining comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanism producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  20. Cosmological N-body simulations with generic hot dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK–8000 Aarhus C (Denmark)

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
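    One ingredient of such simulations is the assignment of thermal streaming velocities to the N-body particles of the hot species. The hedged sketch below draws dimensionless momenta from a relativistic Fermi-Dirac (+1) or Bose-Einstein (-1) distribution, f(p) dp ∝ p^2 / (exp(p/T) ± 1) dp, by rejection sampling and converts them to velocities in the non-relativistic limit. The temperature, particle mass and unit conventions are placeholders, and this is not the authors' implementation.

        # Hedged sketch: thermal velocities for a generic fermionic or bosonic HDM species.
        import numpy as np

        def sample_momenta(n, statistics="fermi", t_eff=1.0, seed=1):
            """Draw n momenta (in units of the temperature) from p^2/(exp(p) +/- 1)."""
            sign = +1.0 if statistics == "fermi" else -1.0
            rng = np.random.default_rng(seed)
            samples = []
            p_max, f_max = 20.0, 1.0        # f(p) < 1 everywhere on (0, p_max] for both cases
            while len(samples) < n:
                p = rng.uniform(1e-6, p_max, size=2 * n)
                f = p**2 / (np.exp(p) + sign)
                keep = rng.uniform(0.0, f_max, size=p.size) < f
                samples.extend(p[keep][: n - len(samples)])
            return t_eff * np.array(samples)

        def thermal_velocities(n, particle_mass_ev, t_eff_ev, statistics="fermi"):
            """Random thermal velocity vectors (in units of c) in the non-relativistic limit."""
            speed = sample_momenta(n, statistics, t_eff=t_eff_ev) / particle_mass_ev   # v/c ~ p/(m c)
            direction = np.random.default_rng(2).normal(size=(n, 3))
            direction /= np.linalg.norm(direction, axis=1, keepdims=True)
            return speed[:, None] * direction

        # Example: a 1 eV boson with an assumed neutrino-like temperature today (~1.7e-4 eV).
        v = thermal_velocities(10000, particle_mass_ev=1.0, t_eff_ev=1.7e-4, statistics="bose")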

  1. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    Science.gov (United States)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system which can facilitate modeling studies of the Arctic Ocean and of high- to midlatitude linkages, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both the Atlantic Water (AW) mean state and its variability. The deepening and thickening bias of the AW layer, a common issue in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger, and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced, in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of the CAA throughflow influences not only the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we stress that other efforts, including the improvement of parameterizations, are still required.

  2. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-hoon [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Agertz, Oscar [Department of Physics, University of Surrey, Guildford, Surrey, GU2 7XH (United Kingdom); Teyssier, Romain; Feldmann, Robert [Centre for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Zurich, 8057 (Switzerland); Butler, Michael J. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, D-69120 Heidelberg (Germany); Choi, Jun-Hwan [Department of Astronomy, University of Texas, Austin, TX 78712 (United States); Keller, Ben W. [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada); Lupi, Alessandro [Institut d’Astrophysique de Paris, Sorbonne Universites, UPMC Univ Paris 6 et CNRS, F-75014 Paris (France); Quinn, Thomas; Wallace, Spencer [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Revaz, Yves [Institute of Physics, Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leitner, Samuel N. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Shen, Sijing [Kavli Institute for Cosmology, University of Cambridge, Cambridge, CB3 0HA (United Kingdom); Smith, Britton D., E-mail: me@jihoonkim.org [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Collaboration: AGORA Collaboration; and others

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions, including gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures us that, if adequately designed in accordance with our proposed common parameters, the results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
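
    As a concrete illustration of the kind of common analysis described above (an editorial sketch, not taken from the paper), the snippet below computes a mass-weighted gas density PDF with yt, one of the diagnostics compared across codes. The snapshot path and the grid-code field names are placeholder assumptions; any AGORA-style output readable by yt would serve.

        # Hypothetical sketch: mass-weighted gas density PDF with yt.
        import numpy as np
        import yt

        ds = yt.load("output_00080/info_00080.txt")    # placeholder snapshot path
        ad = ds.all_data()

        rho  = ad[("gas", "density")].to("g/cm**3").value
        mass = ad[("gas", "cell_mass")].to("Msun").value   # grid-code field name assumed

        bins = np.logspace(-27, -20, 71)               # density bins in g/cm^3
        hist, _ = np.histogram(rho, bins=bins, weights=mass)
        pdf = hist / hist.sum()                        # mass fraction per bin
        print("peak of the mass-weighted density PDF near",
              bins[np.argmax(pdf)], "g/cm^3")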

  3. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    Science.gov (United States)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full physics configurations to evaluate variable mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full physics model produces a climate comparable to the global low resolution mesh and similar high-frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high resolution results, making them an effective tool for regional climate studies; these configurations are available in CESM2.

  4. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    Science.gov (United States)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a helium (He) white dwarf (WD) with 0.45 M ⊙ by an intermediate mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonations reported in previous studies resulted from unphysical heating due to low resolution, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, and 1D mesh simulations using the flow structure of the 3D SPH simulations as 1D initial conditions. The 1D mesh simulations have higher resolution than the 3D SPH simulations. We show that tidal detonation occurs and confirm that this result is well converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves independently arise in the leading parts of the WD and yield large amounts of 56Ni. Although detonation waves are not generated in the trailing parts of the WD, the trailing parts would receive the detonation waves generated in the leading parts and would leave large amounts of Si-group elements. Eventually, this He WD TDE would synthesize 56Ni of 0.30 M ⊙ and Si-group elements of 0.08 M ⊙, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  5. How does pressure gravitate? Cosmological constant problem confronts observational cosmology

    Science.gov (United States)

    Narimani, Ali; Afshordi, Niayesh; Scott, Douglas

    2014-08-01

    An important and long-standing puzzle in the history of modern physics is the gross inconsistency between theoretical expectations and cosmological observations of the vacuum energy density, by at least 60 orders of magnitude, otherwise known as the cosmological constant problem. A characteristic feature of vacuum energy is that it has a pressure with the same amplitude, but opposite sign to its energy density, while all the precision tests of General Relativity are either in vacuum, or for media with negligible pressure. Therefore, one may wonder whether an anomalous coupling to pressure might be responsible for decoupling vacuum from gravity. We test this possibility in the context of the Gravitational Aether proposal, using current cosmological observations, which probe the gravity of relativistic pressure in the radiation era. Interestingly, we find that the best fit for anomalous pressure coupling is about half-way between General Relativity (GR) and Gravitational Aether (GA), if we include Planck together with WMAP and BICEP2 polarization cosmic microwave background (CMB) observations. Taken at face value, this data combination excludes both GR and GA at around the 3σ level. However, including higher resolution CMB observations ("highL") or baryonic acoustic oscillations (BAO) pushes the best fit closer to GR, excluding the Gravitational Aether solution to the cosmological constant problem at the 4-5σ level. This constraint effectively places a limit on the anomalous coupling to pressure in the parametrized post-Newtonian (PPN) expansion, ζ4 = 0.105 ± 0.049 (+highL CMB), or ζ4 = 0.066 ± 0.039 (+BAO). These represent the most precise measurements of this parameter to date, indicating a mild tension with GR (for ΛCDM including tensors, with ζ4 = 0), and also among different data sets.
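
    As a quick back-of-the-envelope reading of the quoted numbers (an editorial note, not a statement from the paper), the "mild tension with GR" follows directly from dividing the best-fit value by its uncertainty, since GR corresponds to ζ4 = 0:

        \[
        \frac{\zeta_4}{\sigma_{\zeta_4}} \simeq \frac{0.105}{0.049} \approx 2.1
        \ \ (+\text{highL CMB}), \qquad
        \frac{0.066}{0.039} \approx 1.7 \ \ (+\text{BAO}).
        \]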

  6. How does pressure gravitate? Cosmological constant problem confronts observational cosmology

    International Nuclear Information System (INIS)

    Narimani, Ali; Scott, Douglas; Afshordi, Niayesh

    2014-01-01

    An important and long-standing puzzle in the history of modern physics is the gross inconsistency between theoretical expectations and cosmological observations of the vacuum energy density, by at least 60 orders of magnitude, otherwise known as the cosmological constant problem. A characteristic feature of vacuum energy is that it has a pressure with the same amplitude, but opposite sign to its energy density, while all the precision tests of General Relativity are either in vacuum, or for media with negligible pressure. Therefore, one may wonder whether an anomalous coupling to pressure might be responsible for decoupling vacuum from gravity. We test this possibility in the context of the Gravitational Aether proposal, using current cosmological observations, which probe the gravity of relativistic pressure in the radiation era. Interestingly, we find that the best fit for anomalous pressure coupling is about half-way between General Relativity (GR) and Gravitational Aether (GA), if we include Planck together with WMAP and BICEP2 polarization cosmic microwave background (CMB) observations. Taken at face value, this data combination excludes both GR and GA at around the 3σ level. However, including higher resolution CMB observations ("highL") or baryonic acoustic oscillations (BAO) pushes the best fit closer to GR, excluding the Gravitational Aether solution to the cosmological constant problem at the 4-5σ level. This constraint effectively places a limit on the anomalous coupling to pressure in the parametrized post-Newtonian (PPN) expansion, ζ4 = 0.105 ± 0.049 (+highL CMB), or ζ4 = 0.066 ± 0.039 (+BAO). These represent the most precise measurements of this parameter to date, indicating a mild tension with GR (for ΛCDM including tensors, with ζ4 = 0), and also among different data sets

  7. Understanding big bang in loop quantum cosmology: Recent advances

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Parampreet [Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5 (Canada)], E-mail: psingh@perimeterinstitute.ca

    2008-11-01

    We discuss the way non-perturbative quantization of cosmological spacetimes in loop quantum cosmology provides insights into the physics of the Planck scale and the resolution of the big bang singularity. In recent years, rigorous examination of the mathematical and physical aspects of the quantum theory has singled out a consistent quantization which is physically viable, and various early ideas have been shown to be inconsistent. These include 'physical effects' originating from modifications to inverse scale factors in the flat models. The singularity resolution is understood to originate from the non-local nature of curvature in the quantum theory and the underlying polymer representation. Based on insights from extensive numerical simulations, an exactly solvable model involving a small approximation at the quantum level can be developed. The model predicts the occurrence of a bounce for a dense subspace of the Hilbert space and a supremum for the value of the energy density. It also provides answers to the growth of fluctuations, showing that semi-classicality is preserved to an amazing degree across the bounce.
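
    For orientation, the "supremum for the value of the energy density" mentioned above is usually phrased through the effective Friedmann equation of the exactly solvable model (a standard result of loop quantum cosmology quoted here for context, not text from this abstract):

        \[
        H^{2} \;=\; \frac{8\pi G}{3}\,\rho\left(1-\frac{\rho}{\rho_{c}}\right),
        \qquad \rho_{c}\sim 0.41\,\rho_{\mathrm{Pl}},
        \]

    so the expansion rate vanishes and the bounce occurs when the energy density reaches the critical value ρ_c.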

  8. Understanding big bang in loop quantum cosmology: Recent advances

    Science.gov (United States)

    Singh, Parampreet

    2008-11-01

    We discuss the way non-perturbative quantization of cosmological spacetimes in loop quantum cosmology provides insights into the physics of the Planck scale and the resolution of the big bang singularity. In recent years, rigorous examination of the mathematical and physical aspects of the quantum theory has singled out a consistent quantization which is physically viable, and various early ideas have been shown to be inconsistent. These include 'physical effects' originating from modifications to inverse scale factors in the flat models. The singularity resolution is understood to originate from the non-local nature of curvature in the quantum theory and the underlying polymer representation. Based on insights from extensive numerical simulations, an exactly solvable model involving a small approximation at the quantum level can be developed. The model predicts the occurrence of a bounce for a dense subspace of the Hilbert space and a supremum for the value of the energy density. It also provides answers to the growth of fluctuations, showing that semi-classicality is preserved to an amazing degree across the bounce.

  9. Understanding big bang in loop quantum cosmology: Recent advances

    International Nuclear Information System (INIS)

    Singh, Parampreet

    2008-01-01

    We discuss the way non-perturbative quantization of cosmological spacetimes in loop quantum cosmology provides insights into the physics of the Planck scale and the resolution of the big bang singularity. In recent years, rigorous examination of the mathematical and physical aspects of the quantum theory has singled out a consistent quantization which is physically viable, and various early ideas have been shown to be inconsistent. These include 'physical effects' originating from modifications to inverse scale factors in the flat models. The singularity resolution is understood to originate from the non-local nature of curvature in the quantum theory and the underlying polymer representation. Based on insights from extensive numerical simulations, an exactly solvable model involving a small approximation at the quantum level can be developed. The model predicts the occurrence of a bounce for a dense subspace of the Hilbert space and a supremum for the value of the energy density. It also provides answers to the growth of fluctuations, showing that semi-classicality is preserved to an amazing degree across the bounce.

  10. The shape of dark matter haloes in the Aquarius simulations : Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C.A.; Sales, L. V.; Helmi, A.; Reyle, C; Robin, A; Schultheis, M

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  11. The shape of dark matter haloes in the Aquarius simulations: Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C. A.; Sales, L. V.; Helmi, A.

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  12. Weak gravitational lensing towards high-precision cosmology

    International Nuclear Information System (INIS)

    Berge, Joel

    2007-01-01

    This thesis aims at studying weak gravitational lensing as a tool for high-precision cosmology. We first present the development and validation of a precise and accurate tool for measuring gravitational shear, based on the shapelets formalism. We then use shapelets on real images for the first time: we analyze CFHTLS images and combine them with XMM-LSS data. We measure the normalisation of the density fluctuations power spectrum, σ 8 , and that of the mass-temperature relation for galaxy clusters. The analysis of the Hubble Space Telescope COSMOS field confirms our σ 8 measurement and introduces tomography. Finally, aiming at optimizing future surveys, we compare the individual and combined merits of cluster counts and power spectrum tomography. Our results demonstrate that next generation surveys will allow weak lensing to yield its full potential in the high-precision cosmology era. (author) [fr]

  13. The simulation of a data acquisition system for a proposed high resolution PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Rotolo, C.; Larwill, M.; Chappa, S. [Fermi National Accelerator Lab., Batavia, IL (United States); Ordonez, C. [Chicago Univ., IL (United States)

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs.

  14. The simulation of a data acquisition system for a proposed high resolution PET scanner

    International Nuclear Information System (INIS)

    Rotolo, C.; Larwill, M.; Chappa, S.; Ordonez, C.

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs

  15. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    Science.gov (United States)

    Li, Dan; Bou-Zeid, Elie

    2014-05-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup and the quality of the simulated fields as representations of the real world are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias in the simulated surface temperatures. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).

  16. Cosmological Evolution of the Central Engine in High-Luminosity, High-Accretion Rate AGN

    Directory of Open Access Journals (Sweden)

    Matteo Guainazzi

    2014-12-01

    In this paper I discuss the status of observational studies aiming at probing the cosmological evolution of the central engine in high-luminosity, high-accretion rate Active Galactic Nuclei (AGN). X-ray spectroscopic surveys, supported by extensive multi-wavelength coverage, indicate a remarkable invariance of the accretion disk plus corona system, and of their coupling, up to redshifts z≈6. Furthermore, hard X-ray (E > 10 keV) surveys show that nearby Seyfert galaxies share the same central engine notwithstanding their optical classification. These results suggest that the high-luminosity, high accretion rate quasar phase of AGN evolution is homogeneous over cosmological times.

  17. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    International Nuclear Information System (INIS)

    Baushev, A.N.; Valle, L. del; Campusano, L.E.; Escala, A.; Muñoz, R.R.; Palma, G.A.

    2017-01-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergence of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubt on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the convergence of N-body simulations is necessary before the 'core-cusp problem' can properly be used to question the validity of the CDM model.
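
    For reference (the standard definition, not text from the abstract), the Hernquist model simulated here has the density profile

        \[
        \rho(r) \;=\; \frac{M}{2\pi}\,\frac{a}{r\,(r+a)^{3}},
        \]

    with total mass M and scale length a; it diverges as r^{-1} at the centre, so the cusp whose numerical stability is being tested is present by construction.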

  18. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    Energy Technology Data Exchange (ETDEWEB)

    Baushev, A.N.; Valle, L. del; Campusano, L.E.; Escala, A.; Muñoz, R.R. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Correo Central, Santiago (Chile); Palma, G.A., E-mail: baushev@gmail.com, E-mail: ldelvalleb@gmail.com, E-mail: luis@das.uchile.cl, E-mail: aescala@das.uchile.cl, E-mail: rmunoz@das.uchile.cl, E-mail: gpalmaquilod@ing.uchile.cl [Departamento de Física, FCFM, Universidad de Chile, Blanco Encalada 2008, Santiago (Chile)

    2017-05-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergence of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubt on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the convergence of N-body simulations is necessary before the 'core-cusp problem' can properly be used to question the validity of the CDM model.

  19. Cosmological CP Violation

    CERN Document Server

    Tomaschitz, R

    1994-01-01

    Spinor fields are studied in infinite, topologically multiply connected Robertson-Walker cosmologies. Unitary spinor representations for the discrete covering groups of the spacelike slices are constructed. The spectral resolution of Dirac's equation is given in terms of horospherical elementary waves, on which the treatment of spin and energy is based in these cosmologies. The meaning of the energy and the particle-antiparticle concept is explained in the context of this varying cosmic background. Discrete symmetries, in particular inversions of the multiply connected spacelike slices, are studied. The violation of the unitarity of the parity operator, due to self-interference of P-reflected wave packets, is discussed. The violation of the CP and CPT invariance - already on the level of the free Dirac equation on this cosmological background - is pointed out.

  20. Updated vegetation information in high resolution regional climate simulations using WRF

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.

    Climate studies show that the frequency of heat wave events and above-average high temperatures during the summer months over Europe will increase in the coming decades. Such climatic changes and long-term meteorological conditions will impact the seasonal development of vegetation and ultimately modify the energy distribution at the land surface. In weather and climate models it is important to represent the vegetation variability accurately to obtain reliable results. The weather research and forecasting (WRF) model uses a green vegetation fraction (GVF) climatology to represent the seasonal vegetation variability; such a climatology cannot capture anomalous years or changes in management practice, since it was derived more than twenty years ago. In this study, a new high resolution, high quality GVF product is applied in a WRF climate simulation over Denmark during the 2006 heat wave year. The new GVF product reflects the year 2006 and it was previously tested…

  1. HALO EXPANSION IN COSMOLOGICAL HYDRO SIMULATIONS: TOWARD A BARYONIC SOLUTION OF THE CUSP/CORE PROBLEM IN MASSIVE SPIRALS

    Energy Technology Data Exchange (ETDEWEB)

    Maccio, A. V.; Stinson, G. [Max-Planck-Institut fuer Astronomie, 69117 Heidelberg (Germany); Brook, C. B.; Gibson, B. K. [University of Central Lancashire, Jeremiah Horrocks Institute for Astrophysics and Supercomputing, Preston PR1 2HE (United Kingdom); Wadsley, J.; Couchman, H. M. P. [Department of Physics and Astronomy, McMaster University, Hamilton, Ontario, L8S 4M1 (Canada); Shen, S. [Department of Astronomy and Astrophysics, University of California Santa Cruz, Santa Cruz, CA 95064 (United States); Quinn, T., E-mail: maccio@mpia.de, E-mail: stinson@mpia.de [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2012-01-15

    A clear prediction of the cold dark matter (CDM) model is the existence of cuspy dark matter halo density profiles on all mass scales. This is not in agreement with the observed rotation curves of spiral galaxies, challenging on small scales the otherwise successful CDM paradigm. In this work we employ high-resolution cosmological hydrodynamical simulations to study the effects of dissipative processes on the inner distribution of dark matter in Milky-Way-like objects (M ≈ 10^{12} M_⊙). Our simulations include supernova feedback, and the effects of the radiation pressure of massive stars before they explode as supernovae. The increased stellar feedback results in the expansion of the dark matter halo instead of contraction with respect to N-body simulations. Baryons are able to erase the dark matter cuspy distribution, creating a flat, cored, dark matter density profile in the central several kiloparsecs of a massive Milky-Way-like halo. The profile is well fit by a Burkert profile, with fitting parameters consistent with the observations. In addition, we obtain flat rotation curves as well as extended, exponential stellar disk profiles. While the stellar disk we obtain is still partially too thick to resemble the Milky Way thin disk, this pilot study shows that there is enough energy available in the baryonic component to alter the dark matter distribution even in massive disk galaxies, providing a possible solution to the long-standing problem of cusps versus cores.
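
    For reference (the standard form, not text from the abstract), the cored Burkert profile used for the fits is

        \[
        \rho(r) \;=\; \frac{\rho_{0}\,r_{0}^{3}}{(r+r_{0})\,(r^{2}+r_{0}^{2})},
        \]

    with central density ρ0 and core radius r0; it is flat for r ≪ r0 and falls off as r^{-3} at large radii, in contrast to the cuspy profiles of dark-matter-only runs.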

  2. Use of High-Resolution WRF Simulations to Forecast Lightning Threat

    Science.gov (United States)

    McCaul, E. W., Jr.; LaCasse, K.; Goodman, S. J.; Cecil, D. J.

    2008-01-01

    Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors aloft in storms. This relationship is exploited, in conjunction with the capabilities of cloud-resolving forecast models such as WRF, to forecast explicitly the threat of lightning from convective storms using selected output fields from the model forecasts. The simulated vertical flux of graupel at -15 °C and the shape of the simulated reflectivity profile are tested in this study as proxies for charge separation processes and their associated lightning risk. Our lightning forecast method differs from others in that it is entirely based on high-resolution simulation output, without reliance on any climatological data. Short (6-8 h) simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity fields, and METAR and ACARS data yields satisfactory simulations. Analyses of the lightning threat fields suggest that both the graupel flux and reflectivity profile approaches, when properly calibrated, can yield reasonable lightning threat forecasts, although an ensemble approach is probably desirable in order to reduce the tendency for misplacement of modeled storms to hurt the accuracy of the forecasts. Our lightning threat forecasts are also compared to other more traditional means of forecasting thunderstorms, such as those based on inspection of the convective available potential energy field.

  3. PDF added value of a high resolution climate simulation for precipitation

    Science.gov (United States)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high resolution simulation with the WRF model. Results are taken from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used range from rain gauge precipitation records to regular grids of observed daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese precipitation grid developed at 0.2° × 0.2° from observed rain gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher resolution simulation, regarding both the full PDF and the extremes. The method has clear potential to be applied to other simulation exercises and to evaluate other variables.
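
    One simple PDF skill score of the kind referred to above is the Perkins score, the overlap of two normalized histograms. The sketch below is illustrative only (the bin choice and the synthetic data are assumptions, not the authors' exact metric), but it shows how such a score rewards a simulation whose precipitation PDF, including the tail, matches the observed one.

        # Illustrative PDF overlap (Perkins) score for daily precipitation.
        import numpy as np

        def pdf_score(model, obs, bins):
            """Overlap of two normalized histograms (1 = identical PDFs)."""
            pm, _ = np.histogram(model, bins=bins)
            po, _ = np.histogram(obs, bins=bins)
            pm = pm / pm.sum()
            po = po / po.sum()
            return np.minimum(pm, po).sum()

        rng = np.random.default_rng(0)
        obs      = rng.gamma(shape=0.6, scale=8.0, size=20000)   # mock daily precip (mm)
        high_res = rng.gamma(shape=0.6, scale=7.5, size=20000)   # stand-in for a 9 km run
        low_res  = rng.gamma(shape=0.9, scale=4.0, size=20000)   # stand-in for a 27 km run

        bins = np.arange(0.0, 120.0, 2.0)
        print("9 km score :", round(pdf_score(high_res, obs, bins), 3))
        print("27 km score:", round(pdf_score(low_res, obs, bins), 3))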

  4. Low-resolution simulations of vesicle suspensions in 2D

    Science.gov (United States)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in the vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of the vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while being 10× to 100× faster.
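
    To make the "correction of the vesicles' area" fix concrete, the sketch below rescales a discretized 2D contour back to a target enclosed area after a coarse step. This is an illustration of the general idea only (the contour and target values are made up); the paper's LRCA also corrects arc-length and includes the other ingredients listed above.

        # Restore the enclosed area of a closed 2D contour (shoelace formula).
        import numpy as np

        def polygon_area(x, y):
            """Area of a closed polygon with vertices listed in order."""
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        def correct_area(x, y, target_area):
            """Uniformly scale the contour about its centroid to restore the area."""
            cx, cy = x.mean(), y.mean()
            s = np.sqrt(target_area / polygon_area(x, y))
            return cx + s * (x - cx), cy + s * (y - cy)

        # a slightly "drifted" ellipse standing in for a vesicle membrane
        t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        x, y = 1.02 * np.cos(t), 0.49 * np.sin(t)

        target = np.pi * 1.0 * 0.5            # area the vesicle should conserve
        x2, y2 = correct_area(x, y, target)
        print("area before:", round(polygon_area(x, y), 4))
        print("area after :", round(polygon_area(x2, y2), 4))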

  5. Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities

    Science.gov (United States)

    Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.

    2015-12-01

    In this rapidly urbanizing world, an unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. The primary limitations of past attempts have been the availability of input data, useful "energy and behavior focused" models, validation data, and adequate computational capability for understanding the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap roads network and integrated high resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScanTM population, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. The consistent data and simulation platform allows quick adaptation to various geographic areas, as has been demonstrated for multiple cities across the world. We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation

  6. Astrophysics, cosmology and high energy physics

    International Nuclear Information System (INIS)

    Rees, M.J.

    1983-01-01

    A brief survey is given of some topics in astrophysics and cosmology, with special emphasis on the inter-relation between the properties of the early Universe and recent ideas in high energy physics, and on simple order-of-magnitude arguments showing how the scales and dimensions of cosmic phenomena are related to basic physical constants. (orig.)

  7. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    Science.gov (United States)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

    Over the past few decades, there have been many process-based crop models developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale, climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional-scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and
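
    A minimal, hypothetical sketch of one step of such a Python pipeline is given below: extracting the daily weather series for a single grid cell from gridded RCM output so that a crop-model run script can ingest it. The NetCDF variable names, file names and output format are illustrative assumptions, not the study's actual interfaces.

        # Extract a per-grid-cell daily weather table from gridded RCM output.
        import csv
        from netCDF4 import Dataset   # assumes the RCM output is NetCDF

        def extract_cell(nc_path, i, j, out_csv):
            """Write daily tmax/tmin/precip for grid cell (i, j) to a CSV table."""
            with Dataset(nc_path) as nc:
                tmax = nc.variables["tmax"][:, i, j]     # assumed variable names
                tmin = nc.variables["tmin"][:, i, j]
                rain = nc.variables["precip"][:, i, j]
            with open(out_csv, "w", newline="") as f:
                w = csv.writer(f)
                w.writerow(["day", "tmax_C", "tmin_C", "rain_mm"])
                for d, row in enumerate(zip(tmax, tmin, rain), start=1):
                    w.writerow([d] + [float(v) for v in row])

        # extract_cell("rcm_daily_2000.nc", 10, 22, "cell_10_22.csv")  # example call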

  8. Cosmology

    International Nuclear Information System (INIS)

    Contopoulos, G.; Kotsakis, D.

    1987-01-01

    An extensive first part on a wealth of observational results relevant to cosmology lays the foundation for the second and central part of the book; the chapters on general relativity, the various cosmological theories, and the early universe. The authors present in a complete and almost non-mathematical way the ideas and theoretical concepts of modern cosmology including the exciting impact of high-energy particle physics, e.g. in the concept of the ''inflationary universe''. The final part addresses the deeper implications of cosmology, the arrow of time, the universality of physical laws, inflation and causality, and the anthropic principle

  9. The quantum Higgs field and the resolution of the cosmological constant paradox in the Weyl-geometrical Universe

    Science.gov (United States)

    de Martini, Francesco

    The nature of the scalar field responsible for the cosmological inflation is found to be rooted in the most fundamental concept of the Weyl's differential geometry: the parallel displacement of vectors in curved spacetime. Within this novel geometrical scenario, the standard electroweak theory of leptons based on the SU(2)_L ⊗ U(1)_Y as well as on the conformal groups of spacetime Weyl's transformations is analyzed within the framework of a general-relativistic, conformally-covariant scalar-tensor theory that includes the electromagnetic and the Yang-Mills fields. A Higgs mechanism within a spontaneous symmetry breaking process is identified and this offers formal connections between some relevant properties of the elementary particles and the dark energy content of the Universe. An "effective cosmological potential": Veff is expressed in terms of the dark energy potential: |VΛ| via the "mass reduction parameter": |ζ| ≡ |Veff|/|VΛ|, a general property of the Universe. The mass of the Higgs boson, which is considered a "free parameter" by the standard electroweak theory, by our theory is found to be proportional to the mass MU ≡ |Veff| which contributes to the measured Cosmological Constant, i.e. the measured content of vacuum-energy in the Universe. The nonintegrable application of the Weyl's geometry leads to a Proca equation accounting for the dynamics of a ϕρ-particle, a vector-meson proposed as an optimum candidate for Dark Matter. The peculiar mathematical structure of Veff offers a clue towards a very general resolution in 4-D of a most intriguing puzzle of modern quantum field theory, the "cosmological constant paradox" (here referred to as the "Λ-paradox"). Indeed, our "universal" theory offers a resolution of the "Λ-paradox" for all exponential inflationary potentials VΛ(ϕ) ∝ e^{-nϕ}, and for all linear superpositions of these potentials, where n belongs to the mathematical set of the "real numbers". An explicit

  10. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from the first halo formation until z = 0). We study the mean major merger rate as a function of time, where for time we consider both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well-known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared both with analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of the halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. Visualization of the simulation is portrayed as well. At redshifts z = 0-7, halos from the simulation have good statistics for further analysis, especially in the mass range of 10^{11} - 10^{14} M_⊙/h. [176021 'Visible and invisible matter in nearby galaxies: theory and observations']
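
    As a small worked example of the quoted scaling (with illustrative numbers, not the paper's data), the exponent n in the (1 + z)^n law can be recovered by a straight-line fit in log space:

        # Fit rate = A * (1 + z)^n by linear regression in log-log space.
        import numpy as np

        z    = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 5.0])
        rate = np.array([0.04, 0.09, 0.19, 0.53, 1.05, 2.90])   # mock mergers/halo/Gyr

        n, logA = np.polyfit(np.log10(1.0 + z), np.log10(rate), 1)
        print(f"best-fit exponent n = {n:.2f}  (the paper reports n = 2.4)")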

  11. Quantum Gravity and Cosmology: an intimate interplay

    Science.gov (United States)

    Sakellariadou, Mairi

    2017-08-01

    I will briefly discuss three cosmological models built upon three distinct quantum gravity proposals. I will first highlight the cosmological rôle of a vector field in the framework of a string/brane cosmological model. I will then present the resolution of the big bang singularity and the occurrence of an early era of accelerated expansion of a geometric origin, in the framework of group field theory condensate cosmology. I will then summarise results from an extended gravitational model based on non-commutative spectral geometry, a model that offers a purely geometric explanation for the standard model of particle physics.

  12. The metallicity distribution of H I systems in the EAGLE cosmological simulations

    Science.gov (United States)

    Rahmati, Alireza; Oppenheimer, Benjamin D.

    2018-06-01

    The metallicity of strong H I systems, spanning from damped Lyman α absorbers (DLAs) to Lyman-limit systems (LLSs), is explored between z = 5 → 0 using the EAGLE high-resolution cosmological hydrodynamic simulation of galaxy formation. The metallicities of LLSs and DLAs steadily increase with time in agreement with observations. DLAs are more metal rich than LLSs, although the metallicities in the LLS column density range (N_{H I} ≈ 10^{17}-10^{20} cm^{-2}) are relatively flat, evolving from a median H I-weighted metallicity of Z ≲ 10^{-2} Z_⊙ at z = 3 to ≈ 10^{-0.5} Z_⊙ by z = 0. The metal content of H I systems tracks the increasing stellar content of the Universe, holding ≈ 5 per cent of the integrated total metals released from stars at z = 0. We also consider partial LLS (pLLS, N_{H I} ≈ 10^{16}-10^{17} cm^{-2}) metallicities, and find good agreement with Wotta et al. for the fraction of systems above (37 per cent) and below (63 per cent) 0.1 Z_⊙. We also find a large dispersion of pLLS metallicities, although we do not reproduce the observed metallicity bimodality and instead we make the prediction that a larger sample will yield more pLLSs around 0.1 Z_⊙. We underpredict the median metallicity of strong LLSs, and predict a population of Z 3 that are not observed, which may indicate more widespread early enrichment in the real Universe compared to EAGLE.
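
    The "H I-weighted" median used above weights each absorber by its H I column, i.e. it is the metallicity below which half of the total H I column density of the sample lies. A minimal sketch with placeholder inputs:

        # H I column-density-weighted median metallicity of an absorber sample.
        import numpy as np

        def hi_weighted_median(metallicity, n_hi):
            order = np.argsort(metallicity)
            z, w = metallicity[order], n_hi[order]
            cum = np.cumsum(w) / w.sum()
            return z[np.searchsorted(cum, 0.5)]

        rng = np.random.default_rng(1)
        logZ = rng.normal(-1.5, 0.6, 500)            # mock log10(Z/Zsun) of LLSs
        n_hi = 10.0 ** rng.uniform(17.0, 20.0, 500)  # mock H I columns (cm^-2)
        print("H I-weighted median log10(Z/Zsun):",
              round(float(hi_weighted_median(logZ, n_hi)), 2))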

  13. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    Science.gov (United States)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

    The fastest supercomputer in the world, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in the global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multi Program Multi Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).

  14. High-Resolution Mesoscale Simulations of the 6-7 May 2000 Missouri Flash Flood: Impact of Model Initialization and Land Surface Treatment

    Science.gov (United States)

    Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.

    2004-01-01

    High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on the timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40-km grid spacing and 3-hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (MCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture best reproduces the intensity and location of the observed rainfall.

  15. Is the cosmological singularity compulsory

    International Nuclear Information System (INIS)

    Bekenstein, J.D.; Meisels, A.

    1980-01-01

    The cosmological singularity is inherent in all conventional general relativistic cosmological models. There can be no question that it is an unphysical feature; yet there does not seem to be any conservative way of eliminating it. Here we present singularity-free isotropic cosmological models which are indistinguishable from general relativistic ones at late times. They are based on the general theory of variable rest masses that we developed recently. Outside cosmology this theory simulates general relativity well. Thus it provides a framework incorporating those features which have made general relativity so successful while providing a way out of the singularity dilemma. The cosmological models can be made to incorporate Dirac's large numbers hypothesis: G(now)/G(0) ≈ 10^{-38}

  16. IXO and the Missing Baryons: The Need High Resolution Spectroscopy

    Science.gov (United States)

    Nicastro, Fabrizio

    2009-01-01

    About half of the baryons in the Universe are currently eluding detection. Hydrodynamical simulations for the formation of Large Scale Structures (LSSs) predict that these baryons, at zmatter: the Warm-Hot Intergalactic Medium (WHIM). The WHIM has probably been progressively enriched with metals, during phases of intense starburst and AGN activity, up to possibly solar metallicity (Cen & Ostriker, 2006), and should therefore shine and/or absorb in the soft X-ray band, via electronic transitions from the most abundant metals. The importance of detecting and studying the WHIM lies not only in the possibility of finally making a complete census of all baryons in the Universe, but also in the possibility of (a) directly measuring the metallicity history of the Universe, and so investigating metal transport in the Universe and galaxy-IGM, AGN-IGM feedback mechanisms, (b) directly measuring the heating history of the Universe, and so understanding the process of LSS formation and shocks, and (c) performing cosmological parameter measurements through a 3D 2-point angular correlation function analysis of the WHIM filaments. Detecting and studying the WHIM with the current X-ray instrumentation, however, is extremely challenging, because of the low sensitivity and resolution of the Chandra and XMM-Newton gratings, and the very low 'grasp' of all currently available imaging-spectrometers. IXO, instead, thanks to its large grating effective area (> 1000 cm^2 at 0.5 keV) and high spectral resolution (R > 2500 at 0.5 keV), will be perfectly suited to attack the problem in a systematic way. Here we demonstrate that high resolution gratings are crucial for these kinds of studies and show that the IXO gratings will be able to detect more than 300-700 OVII WHIM filaments along about 70 lines of sight, in less than 0.7.

  17. Spectroscopic failures in photometric redshift calibration: cosmological biases and survey requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, Carlos E. [KIPAC, Menlo Park; Huterer, Dragan [Michigan U.; Lin, Huan [Fermilab; Busha, Michael T. [Zurich U.; Wechsler, Risa H. [SLAC

    2014-10-11

    We use N-body-spectro-photometric simulations to investigate the impact of incompleteness and incorrect redshifts in spectroscopic surveys on photometric redshift training and calibration, and the resulting effects on cosmological parameter estimation from weak lensing shear-shear correlations. The photometry of the simulations is modeled after the upcoming Dark Energy Survey and the spectroscopy is based on a low/intermediate resolution spectrograph with wavelength coverage of 5500 Å < λ < 9500 Å. The principal systematic errors that such a spectroscopic follow-up encounters are incompleteness (inability to obtain spectroscopic redshifts for certain galaxies) and wrong redshifts. Encouragingly, we find that a neural network-based approach can effectively describe the spectroscopic incompleteness in terms of the galaxies' colors, so that the spectroscopic selection can be applied to the photometric sample. Hence, we find that spectroscopic incompleteness yields no appreciable biases to cosmology, although the statistical constraints degrade somewhat because the photometric survey has to be culled to match the spectroscopic selection. Unfortunately, wrong redshifts have a more severe impact: the cosmological biases are intolerable if more than a percent of the spectroscopic redshifts are incorrect. Moreover, we find that incorrect redshifts can also substantially degrade the accuracy of training set based photo-z estimators. The main problem is the difficulty of obtaining redshifts, either spectroscopically or photometrically, for objects at z > 1.3. We discuss several approaches for reducing the cosmological biases, in particular finding that photo-z error estimators can reduce biases appreciably.
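
    The neural-network idea described above amounts to learning the probability of spectroscopic success as a function of the galaxies' colors and then applying that selection to the photometric sample. A minimal sketch with synthetic data (the features, architecture and mock selection function are assumptions for illustration only):

        # Model spectroscopic-redshift success probability from colors.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)
        colors = rng.normal(size=(5000, 4))          # mock g-r, r-i, i-z, z-y colors
        # mock "got a redshift" outcome that depends smoothly on the colors
        p_success = 1.0 / (1.0 + np.exp(-(1.5 - colors[:, 0] - 0.5 * colors[:, 1])))
        got_specz = rng.random(5000) < p_success

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
        clf.fit(colors, got_specz)

        # predicted completeness for a photometric-only sample
        phot_colors = rng.normal(size=(2000, 4))
        completeness = clf.predict_proba(phot_colors)[:, 1]
        print("mean predicted spec-z completeness:", round(float(completeness.mean()), 3))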

  18. A scalar-tensor bimetric brane world cosmology

    International Nuclear Information System (INIS)

    Youm, Donam

    2001-08-01

    We study a scalar-tensor bimetric cosmology in the Randall-Sundrum model with one positive tension brane, where the biscalar field is assumed to be confined on the brane. The effective Friedmann equations on the brane are obtained and analyzed. We comment on resolution of cosmological problems in this bimetric model. (author)

  19. Sensitivity to grid resolution in the ability of a chemical transport model to simulate observed oxidant chemistry under high-isoprene conditions

    Directory of Open Access Journals (Sweden)

    K. Yu

    2016-04-01

    Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with its high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the
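
    The resolution dependence described above is a segregation effect: when anti-correlated isoprene and NOx fields are averaged onto a coarse grid, the grid-mean product that drives the high-NOx oxidation pathway is inflated. The toy example below (made-up concentrations, not GEOS-Chem output) illustrates the mechanism:

        # Coarse-graining artificially co-locates isoprene and NOx emissions.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 256                                   # fine cells per side
        urban = rng.random((n, n)) < 0.2          # 20% of cells hold the NOx sources
        nox  = np.where(urban, 2.0, 0.1)          # ppb
        isop = np.where(urban, 0.5, 4.0)          # ppb

        def coarsen(a, f):
            return a.reshape(n // f, f, n // f, f).mean(axis=(1, 3))

        true_mean_product = (isop * nox).mean()
        for factor in (1, 8, 16):                 # 1 = native fine grid
            i_c, n_c = coarsen(isop, factor), coarsen(nox, factor)
            print(f"coarsening x{factor:2d}: <I>*<N> = {(i_c * n_c).mean():.2f} "
                  f"(true <I*N> = {true_mean_product:.2f})")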

  20. Peculiar velocity effects in high-resolution microwave background experiments

    International Nuclear Information System (INIS)

    Challinor, Anthony; Leeuwen, Floor van

    2002-01-01

    We investigate the impact of peculiar velocity effects due to the motion of the solar system relative to the cosmic microwave background (CMB) on high resolution CMB experiments. It is well known that on the largest angular scales the combined effects of Doppler shifts and aberration are important; the lowest Legendre multipoles of total intensity receive power from the large CMB monopole in transforming from the CMB frame. On small angular scales aberration dominates and is shown here to lead to significant distortions of the total intensity and polarization multipoles in transforming from the rest frame of the CMB to the frame of the solar system. We provide convenient analytic results for the distortions as series expansions in the relative velocity of the two frames, but at the highest resolutions a numerical quadrature is required. Although many of the high resolution multipoles themselves are severely distorted by the frame transformations, we show that their statistical properties distort by only an insignificant amount. Therefore, the cosmological parameter estimation is insensitive to the transformation from the CMB frame (where theoretical predictions are calculated) to the rest frame of the experiment
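
    For reference, the frame transformation discussed here is built from the standard special-relativistic Doppler and aberration formulas: a boost of velocity β toward a given axis maps the arrival direction of a photon and the observed blackbody temperature as (a textbook statement of the kinematics, not the paper's specific multipole expansion)

```latex
\cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta}, \qquad
T'(\theta') = \frac{T_0}{\gamma\,\bigl(1 - \beta\cos\theta'\bigr)}, \qquad
\gamma = (1 - \beta^2)^{-1/2},
```

    where θ is measured from the boost direction. To first order in β ≈ 1.23 × 10⁻³ this gives the familiar CMB dipole ΔT ≈ T₀ β cos θ'; the multipole-space distortions described in the abstract follow from expanding this mapping order by order in β.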

  1. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations will focus on relatively high resolutions for climate assessment, namely 28 km and 14 km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model will be used for simulations at 27 km and 9 km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine

  2. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

    There is an increasing demand in many applications for satellite remote sensing data with both high spatial and high temporal resolution. It remains a challenge, however, to improve spatial resolution and temporal frequency simultaneously, owing to the technical limits of current satellite observation systems. R&D efforts over the years have led to successes in roughly two directions: super-resolution, pan-sharpening and similar methods can effectively enhance spatial resolution and produce good visual results, but they hardly preserve spectral signatures and therefore have limited analytical value; time interpolation, on the other hand, is a straightforward way to increase temporal frequency, but it adds little new information. In this paper we present a method to simulate high resolution time series data by combining a low resolution time series with only a very small number of high resolution acquisitions. The method starts from a pair of high and low resolution datasets, and a spatial registration is performed by introducing an LDA model to map high and low resolution pixels to each other. Temporal change information is then captured by comparing the low resolution time series, projected onto the high resolution data plane, and assigned to each high resolution pixel according to predefined temporal change patterns for each type of ground object. Finally, the simulated high resolution data are generated. A preliminary experiment shows that the method can simulate high resolution data with reasonable accuracy. The contribution of the method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; this offers an economically viable way to build an operational monitoring solution for agriculture, forestry and land use investigation

  3. Air quality high resolution simulations of Italian urban areas with WRF-CHIMERE

    Science.gov (United States)

    Falasca, Serena; Curci, Gabriele

    2017-04-01

    The new European Directive on ambient air quality and cleaner air for Europe (2008/50/EC) encourages the use of modeling techniques to support the observations in the assessment and forecasting of air quality. The modelling system based on the combination of the WRF meteorological model and the CHIMERE chemistry-transport model is used to perform simulations at high resolution over the main Italian cities (e.g. Milan, Rome). Three domains covering Europe, Italy and the urban areas are nested with a grid size decreasing down to 1 km. Numerical results are produced for a winter month and a summer month of the year 2010 and are validated using ground-based observations (e.g. from the European air quality database AirBase). A sensitivity study is performed using different physics options, domain resolution and grid ratio; different urban parameterization schemes are tested, also using characteristic morphology parameters for the cities considered. A spatial reallocation of anthropogenic emissions derived from international (e.g. EMEP, TNO, HTAP) and national (e.g. CTN-ACE) emission inventories, based on land cover datasets (Global Land Cover Facility and GlobCover) and the OpenStreetMap tool, is also included. Preliminary results indicate that the introduction of the high-resolution spatial redistribution allows a more realistic reproduction of the distribution of emission fluxes, and thus of pollutant concentrations, with significant advantages especially for urban environments.
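
    The spatial reallocation step described above amounts to redistributing each coarse-cell emission total over its fine cells in proportion to a land-cover (or road-network) weight map. A minimal sketch of that proportional downscaling, assuming a purely illustrative weight array; the actual WRF-CHIMERE preprocessing and proxy datasets are considerably more involved:

```python
import numpy as np

def downscale_emissions(coarse, weights, f):
    """Distribute coarse-grid emission totals (N x N) onto a fine grid
    (N*f x N*f) proportionally to `weights`, conserving coarse-cell totals."""
    N = coarse.shape[0]
    w = weights.reshape(N, f, N, f)
    wsum = w.sum(axis=(1, 3), keepdims=True)
    # Fall back to a uniform split where a coarse cell has zero total weight.
    frac = np.where(wsum > 0, w / np.where(wsum == 0, 1, wsum), 1.0 / f**2)
    fine = frac * coarse[:, None, :, None]
    return fine.reshape(N * f, N * f)

coarse = np.random.rand(10, 10)       # e.g. tonnes/yr per coarse cell (toy values)
weights = np.random.rand(50, 50)      # e.g. urban land-cover fraction per fine cell
fine = downscale_emissions(coarse, weights, f=5)
assert np.allclose(fine.sum(), coarse.sum())   # emitted mass is conserved
```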

  4. Analysis of a high-resolution regional climate simulation for Alpine temperature. Validation and influence of the NAO

    Energy Technology Data Exchange (ETDEWEB)

    Proemmel, K. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung

    2008-11-06

    Determining whether the increase in resolution of climate models improves the representation of climate is a crucial topic in regional climate modelling. An improvement over coarser-scale models is expected especially in areas with complex orography or along coastlines. However, some studies have shown no clear added value for regional climate models. In this study a high-resolution regional climate model simulation performed with REMO over the period 1958-1998 is analysed for 2m temperature over the orographically complex European Alps and their surroundings, called the Greater Alpine Region (GAR). The model setup is in hindcast mode, meaning that the simulation is driven with perfect boundary conditions by the ERA40 reanalysis through prescribing the values at the lateral boundaries and spectral nudging of the large-scale wind field inside the model domain. The added value is analysed between the regional climate simulation with a resolution of 1/6° and the driving reanalysis with a resolution of 1.125°. Before analysing the added value, both the REMO simulation and the ERA40 reanalysis are validated against different station datasets of monthly and daily mean 2m temperature. The largest dataset is the dense, homogenised and quality controlled HISTALP dataset covering the whole GAR, which gave the opportunity for the validation undertaken in this study. The temporal variability of temperature, as quantified by correlation, is well represented by both REMO and ERA40. However, both show considerable biases. The REMO bias reaches 3 K in summer in regions known to experience a problem with summer drying in a number of regional models. In winter the bias is strongly influenced by the choice of the temperature lapse rate, which is applied to compare grid box and station data at different altitudes, and has the strongest influence on inner Alpine subregions where the altitude differences are largest. By applying a constant lapse rate the REMO bias in winter in the high
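
    The constant-lapse-rate altitude correction mentioned above is a one-line adjustment. A minimal sketch, assuming the standard 6.5 K/km value (the study also discusses the sensitivity to this choice):

```python
def adjust_to_station_altitude(t_gridbox_k, z_gridbox_m, z_station_m,
                               lapse_rate_k_per_m=0.0065):
    """Shift a model grid-box 2m temperature to station altitude assuming a
    constant lapse rate (temperature decreases with height)."""
    return t_gridbox_k + lapse_rate_k_per_m * (z_gridbox_m - z_station_m)

# Example: grid box at 1500 m, station at 900 m -> station estimate is warmer.
print(adjust_to_station_altitude(270.0, 1500.0, 900.0))  # 273.9 K
```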

  5. Rotation curves of high-resolution LSB and SPARC galaxies with fuzzy and multistate (ultralight boson) scalar field dark matter

    Science.gov (United States)

    Bernal, T.; Fernández-Hernández, L. M.; Matos, T.; Rodríguez-Meza, M. A.

    2018-04-01

    Cold dark matter (CDM) has shown to be an excellent candidate for the dark matter (DM) of the Universe at large scales; however, it presents some challenges at the galactic level. The scalar field dark matter (SFDM), also called fuzzy, wave, Bose-Einstein condensate, or ultralight axion DM, is identical to CDM at cosmological scales but different at the galactic ones. SFDM forms core haloes, it has a natural cut-off in its matter power spectrum, and it predicts well-formed galaxies at high redshifts. In this work we reproduce the rotation curves of high-resolution low surface brightness (LSB) and SPARC galaxies with two SFDM profiles: (1) the soliton+NFW profile in the fuzzy DM (FDM) model, arising empirically from cosmological simulations of real, non-interacting scalar field (SF) at zero temperature, and (2) the multistate SFDM (mSFDM) profile, an exact solution to the Einstein-Klein-Gordon equations for a real, self-interacting SF, with finite temperature into the SF potential, introducing several quantum states as a realistic model for an SFDM halo. From the fits with the soliton+NFW profile, we obtained for the boson mass 0.212 motivated framework additional or alternative to the FDM profile.
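
    A minimal sketch of a soliton+NFW rotation curve of the kind fitted above, assuming the widely used soliton fitting formula ρ(r) = ρ_c [1 + 0.091 (r/r_c)²]⁻⁸ and purely illustrative parameter values; the paper's actual fitting procedure, priors and best-fit values are not reproduced here:

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def rho_soliton(r, rho_c, r_c):
    return rho_c / (1.0 + 0.091 * (r / r_c) ** 2) ** 8

def m_soliton(r, rho_c, r_c):
    # Enclosed soliton mass by direct numerical integration of 4*pi*r^2*rho.
    return quad(lambda x: 4.0 * np.pi * x**2 * rho_soliton(x, rho_c, r_c), 0.0, r)[0]

def m_nfw(r, rho_s, r_s):
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

def v_circ(r, rho_c, r_c, rho_s, r_s):
    return np.sqrt(G * (m_soliton(r, rho_c, r_c) + m_nfw(r, rho_s, r_s)) / r)

# Illustrative (not fitted) parameters in Msun/kpc^3 and kpc.
radii = np.linspace(0.1, 30.0, 50)
curve = [v_circ(r, rho_c=5e7, r_c=1.0, rho_s=1e7, r_s=10.0) for r in radii]
```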

  6. Galactic r-process enrichment by neutron star mergers in cosmological simulations of a Milky Way-mass galaxy

    Science.gov (United States)

    van de Voort, Freeke; Quataert, Eliot; Hopkins, Philip F.; Kereš, Dušan; Faucher-Giguère, Claude-André

    2015-02-01

    We quantify the stellar abundances of neutron-rich r-process nuclei in cosmological zoom-in simulations of a Milky Way-mass galaxy from the Feedback In Realistic Environments project. The galaxy is enriched with r-process elements by binary neutron star (NS) mergers and with iron and other metals by supernovae. These calculations include key hydrodynamic mixing processes not present in standard semi-analytic chemical evolution models, such as galactic winds and hydrodynamic flows associated with structure formation. We explore a range of models for the rate and delay time of NS mergers, intended to roughly bracket the wide range of models consistent with current observational constraints. We show that NS mergers can produce [r-process/Fe] abundance ratios and scatter that appear reasonably consistent with observational constraints. At low metallicity, [Fe/H] ≲ -2, we predict there is a wide range of stellar r-process abundance ratios, with both supersolar and subsolar abundances. Low-metallicity stars or stars that are outliers in their r-process abundance ratios are, on average, formed at high redshift and located at large galactocentric radius. Because NS mergers are rare, our results are not fully converged with respect to resolution, particularly at low metallicity. However, the uncertain rate and delay time distribution of NS mergers introduce an uncertainty in the r-process abundances comparable to that due to finite numerical resolution. Overall, our results are consistent with NS mergers being the source of most of the r-process nuclei in the Universe.
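
    A minimal sketch of drawing neutron-star-merger delay times from a dN/dt ∝ t⁻¹ delay-time distribution, one commonly explored form within the range of models the abstract says it brackets (the simulation's actual rate normalizations and delay-time choices vary across that range):

```python
import numpy as np

def sample_delay_times(n, t_min_gyr=0.03, t_max_gyr=13.8, seed=0):
    """Inverse-CDF sampling of a dN/dt ~ 1/t delay-time distribution
    between t_min and t_max (times in Gyr)."""
    u = np.random.default_rng(seed).random(n)
    return t_min_gyr * (t_max_gyr / t_min_gyr) ** u

delays = sample_delay_times(100000)
# Median of a 1/t distribution is sqrt(t_min * t_max) ~ 0.6 Gyr for these bounds.
print(np.median(delays))
```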

  7. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    International Nuclear Information System (INIS)

    Li, Dan; Bou-Zeid, Elie

    2014-01-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)

  8. Nonlinear evolution of f(R) cosmologies. I. Methodology

    International Nuclear Information System (INIS)

    Oyaizu, Hiroaki

    2008-01-01

    We introduce the method and the implementation of a cosmological simulation of a class of metric-variation f(R) models that accelerate the cosmological expansion without a cosmological constant and evade solar-system bounds of small-field deviations to general relativity. Such simulations are shown to reduce to solving a nonlinear Poisson equation for the scalar degree of freedom introduced by the f(R) modifications. We detail the method to efficiently solve the nonlinear Poisson equation by using a Newton-Gauss-Seidel relaxation scheme coupled with the multigrid method to accelerate the convergence. The simulations are shown to satisfy tests comparing the simulated outcome to analytical solutions for simple situations, and the dynamics of the simulations are tested with orbital and Zeldovich collapse tests. Finally, we present several static and dynamical simulations using realistic cosmological parameters to highlight the differences between standard physics and f(R) physics. In general, we find that the f(R) modifications result in stronger gravitational attraction that enhances the dark matter power spectrum by ∼20% for large but observationally allowed f(R) modifications. A more detailed study of the nonlinear f(R) effects on the power spectrum is presented in a companion paper.
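
    The core numerical step described above is a Newton-Gauss-Seidel sweep for a nonlinear Poisson equation. A minimal 2D sketch for a generic equation ∇²u = F(u) + s with a toy nonlinearity F(u) = eᵘ; the actual f(R) field equation and the multigrid acceleration used in the paper are more elaborate:

```python
import numpy as np

def newton_gauss_seidel(u, source, h, n_sweeps=200):
    """Relax the discrete nonlinear Poisson equation
    lap(u) - exp(u) - source = 0 (toy nonlinearity) on a uniform grid,
    with Dirichlet boundary values held fixed."""
    for _ in range(n_sweeps):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                lap = (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                       - 4.0 * u[i, j]) / h**2
                res = lap - np.exp(u[i, j]) - source[i, j]
                dres_du = -4.0 / h**2 - np.exp(u[i, j])   # d(residual)/du_ij
                u[i, j] -= res / dres_du                  # local Newton update
    return u

n = 33
u = np.zeros((n, n))                 # zero Dirichlet boundaries
source = np.ones((n, n))
u = newton_gauss_seidel(u, source, h=1.0 / (n - 1))
# In production codes this relaxation is wrapped in a multigrid V-cycle
# to accelerate convergence, as the abstract describes.
```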

  9. High Resolution Numerical Simulations of Primary Atomization in Diesel Sprays with Single Component Reference Fuels

    Science.gov (United States)

    2015-09-01

    A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. For diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios

  10. The Coyote Universe II: Cosmological Models and Precision Emulation of the Nonlinear Matter Power Spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Heitmann, Katrin [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Higdon, David [Los Alamos National Laboratory; Williams, Brian J [Los Alamos National Laboratory; White, Martin [Los Alamos National Laboratory; Wagner, Christian [Los Alamos National Laboratory

    2008-01-01

    The power spectrum of density fluctuations is a foundational source of cosmological information. Precision cosmological probes targeted primarily at investigations of dark energy require accurate theoretical determinations of the power spectrum in the nonlinear regime. To exploit the observational power of future cosmological surveys, accuracy demands on the theory are at the one percent level or better. Numerical simulations are currently the only way to produce sufficiently error-controlled predictions for the power spectrum. The very high computational cost of (precision) N-body simulations is a major obstacle to obtaining predictions in the nonlinear regime, while scanning over cosmological parameters. Near-future observations, however, are likely to provide a meaningful constraint only on constant dark energy equation of state 'wCDM' cosmologies. In this paper we demonstrate that a limited set of only 37 cosmological models -- the 'Coyote Universe' suite -- can be used to predict the nonlinear matter power spectrum at the required accuracy over a prior parameter range set by cosmic microwave background observations. This paper is the second in a series of three, with the final aim to provide a high-accuracy prediction scheme for the nonlinear matter power spectrum for wCDM cosmologies.
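
    The emulation idea, i.e. interpolating simulation outputs smoothly across cosmological parameter space from a limited design of runs, can be sketched with a generic Gaussian-process regressor. This is an illustrative stand-in with toy data and two toy parameters, not the Coyote Universe emulator itself:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
# Toy "design": 37 models in two parameters (standing in for e.g. Omega_m, sigma_8),
# and a toy scalar output standing in for log P(k) at one wavenumber.
theta = rng.uniform([0.25, 0.7], [0.40, 0.9], size=(37, 2))
log_pk = 3.0 + 2.0 * theta[:, 1] - 1.5 * theta[:, 0] + 0.01 * rng.normal(size=37)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.1, 0.1])
emu = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(theta, log_pk)

# Emulated prediction (with uncertainty) at a new cosmology.
mean, std = emu.predict(np.array([[0.31, 0.82]]), return_std=True)
```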

  11. Cosmology [2011 European School of High-Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Rubakov, V A [Moscow, INR (Russian Federation)

    2014-07-01

    In these lectures we first concentrate on the cosmological problems which, hopefully, have to do with the new physics to be probed at the LHC: the nature and origin of dark matter and generation of matter-antimatter asymmetry. We give several examples showing the LHC cosmological potential. These are WIMPs as cold dark matter, gravitinos as warm dark matter, and electroweak baryogenesis as a mechanism for generating matter-antimatter asymmetry. In the remaining part of the lectures we discuss the cosmological perturbations as a tool for studying the epoch preceding the conventional hot stage of the cosmological evolution.

  12. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

    This report evaluates progress in testing the feasibility and initiating the design of a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed

  13. The large-scale environment from cosmological simulations - I. The baryonic cosmic web

    Science.gov (United States)

    Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister

    2018-01-01

    Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe practically unbiased, which could be applied to cosmological constraints. In addition, the gas filaments are classified with their velocity (VWEB) and density (PWEB) fields, which can theoretically be connected to radio observations, such as H I surveys. This will help us link the radio observations with the large-scale dark matter distribution in an unbiased way.

  14. HBT+: an improved code for finding subhaloes and building merger trees in cosmological simulations

    Science.gov (United States)

    Han, Jiaxin; Cole, Shaun; Frenk, Carlos S.; Benitez-Llambay, Alejandro; Helly, John

    2018-02-01

    Dark matter subhaloes are the remnants of (incomplete) halo mergers. Identifying them and establishing their evolutionary links in the form of merger trees is one of the most important applications of cosmological simulations. The HBT (Hierarchical Bound-Tracing) code identifies haloes as they form and tracks their evolution as they merge, simultaneously detecting subhaloes and building their merger trees. Here we present a new implementation of this approach, HBT+, that is much faster, more user-friendly, and more physically complete than the original code. Applying HBT+ to cosmological simulations, we show that both the subhalo mass function and the peak-mass function are well fitted by similar double-Schechter functions. The ratio between the two is highest at the high-mass end, reflecting the resilience of massive subhaloes that experience substantial dynamical friction but limited tidal stripping. The radial distribution of the most-massive subhaloes is more concentrated than the universal radial distribution of lower mass subhaloes. Subhalo finders that work in configuration space tend to underestimate the masses of massive subhaloes, an effect that is stronger in the host centre. This may explain, at least in part, the excess of massive subhaloes in galaxy cluster centres inferred from recent lensing observations. We demonstrate that the peak-mass function is a powerful diagnostic of merger tree defects, and the merger trees constructed using HBT+ do not suffer from the missing or switched links that tend to afflict merger trees constructed from more conventional halo finders. We make the HBT+ code publicly available.

  15. TESTING STRICT HYDROSTATIC EQUILIBRIUM IN SIMULATED CLUSTERS OF GALAXIES: IMPLICATIONS FOR A1689

    International Nuclear Information System (INIS)

    Molnar, S. M.; Umetsu, K.; Chiu, I.-N.; Chen, P.; Hearn, N.; Broadhurst, T.; Bryan, G.; Shang, C.

    2010-01-01

    Accurate mass determination of clusters of galaxies is crucial if they are to be used as cosmological probes. However, there are some discrepancies between cluster masses determined based on gravitational lensing and X-ray observations assuming strict hydrostatic equilibrium (i.e., the equilibrium gas pressure is provided entirely by thermal pressure). Cosmological simulations suggest that turbulent gas motions remaining from hierarchical structure formation may provide a significant contribution to the equilibrium pressure in clusters. We analyze a sample of massive clusters of galaxies drawn from high-resolution cosmological simulations and find a significant contribution (20%-45%) from non-thermal pressure near the center of relaxed clusters, and, in accord with previous studies, a minimum contribution at about 0.1 R_vir, growing to about 30%-45% at the virial radius, R_vir. Our results strongly suggest that relaxed clusters should have significant non-thermal support in their core region. As an example, we test the validity of strict hydrostatic equilibrium in the well-studied massive galaxy cluster A1689 using the latest high-resolution gravitational lensing and X-ray observations. We find a contribution of about 40% from non-thermal pressure within the core region of A1689, suggesting an alternate explanation for the mass discrepancy: the strict hydrostatic equilibrium is not valid in this region.
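
    The strict hydrostatic estimate being tested is the standard X-ray mass expression, in which only thermal pressure supports the gas; allowing a non-thermal component P_nt modifies it as follows (standard relations quoted for orientation, not taken from the paper):

```latex
M_{\rm HSE}(<r) = -\frac{k_B\, T(r)\, r}{G\, \mu m_p}
  \left[\frac{d\ln \rho_{\rm gas}}{d\ln r} + \frac{d\ln T}{d\ln r}\right],
\qquad
M_{\rm tot}(<r) = -\frac{r^2}{G\, \rho_{\rm gas}}\,
  \frac{d}{dr}\bigl(P_{\rm th} + P_{\rm nt}\bigr),
```

    so a 20%-45% non-thermal pressure fraction near the centre translates into a roughly comparable underestimate of the enclosed mass when strict hydrostatic equilibrium is assumed.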

  16. On the cosmological propagation of high energy particles in magnetic fields

    International Nuclear Information System (INIS)

    Alves Batista, Rafael

    2015-04-01

    In the present work the connection between high energy particles and cosmic magnetic fields is explored. Particularly, the focus lies on the propagation of ultra-high energy cosmic rays (UHECRs) and very-high energy gamma rays (VHEGRs) over cosmological distances, under the influence of cosmic magnetic fields. The first part of this work concerns the propagation of UHECRs in the magnetized cosmic web, which was studied both analytically and numerically. A parametrization for the suppression of the UHECR flux at energies ∼ 10^18 eV due to diffusion in extragalactic magnetic fields was found, making it possible to set an upper limit on the energy at which this magnetic horizon effect sets in, which is cosmological effects in three-dimensional simulations, which enables time dependent studies considering simultaneously magnetic field effects and the cosmological evolution of the universe. An interesting possibility is to use UHECRs to constrain properties of cosmic magnetic fields, and vice-versa. Numerical simulations of the propagation of UHECRs in the magnetized cosmic web, obtained through magnetohydrodynamical simulations of structure formation, were performed. The effects of different magnetic field seeds on the present-day distribution of cosmic magnetic fields, and their impact on the propagation of cosmic rays, were studied. Furthermore, the influence of uncertainties of the strength of
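
    For orientation, the relevant rigidity scale in such propagation studies is the Larmor radius of an ultra-high energy nucleus in an extragalactic field; in convenient units (a standard estimate, not a result specific to this work):

```latex
r_L = \frac{E}{Z e B c} \approx 1.1\ {\rm Mpc}\;
      \left(\frac{E}{10^{18}\,{\rm eV}}\right)
      \left(\frac{1}{Z}\right)
      \left(\frac{B}{1\,{\rm nG}}\right)^{-1},
```

    so EeV protons in nanogauss fields are deflected on roughly megaparsec scales and can propagate diffusively, which is what produces the low-energy flux suppression (the magnetic horizon) discussed above.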

  17. Astrophysical cosmology

    Science.gov (United States)

    Bardeen, J. M.

    The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe.

  18. Astrophysical cosmology

    International Nuclear Information System (INIS)

    Bardeen, J.M.

    1986-01-01

    The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe. 47 refs

  19. Multi-Scale Initial Conditions For Cosmological Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Oliver; /KIPAC, Menlo Park; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg

    2011-11-04

    We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour to traditional approaches. Using a suite of re-simulations of a galaxy cluster halo, our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
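
    The displacement-field step underlying such initial-conditions codes can be illustrated with a minimal single-level 1LPT (Zel'dovich) sketch on a periodic grid with a toy power spectrum; the paper's adaptive multi-grid, real-space-kernel machinery for multi-level zoom regions is of course far more involved:

```python
import numpy as np

n, boxsize = 64, 100.0                       # grid cells per side, box size (Mpc/h)
kf = 2.0 * np.pi / boxsize
kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n, d=1.0 / n) * kf,) * 3, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                            # avoid division by zero at k = 0

def toy_power(k2):
    """Purely illustrative power spectrum P(k) ~ k^-2 (not a real transfer function)."""
    return k2 ** -1.0

rng = np.random.default_rng(42)
white = rng.normal(size=(n, n, n))
delta_k = np.fft.fftn(white) * np.sqrt(toy_power(k2))
delta_k[0, 0, 0] = 0.0                       # zero-mean overdensity field

# 1LPT: displacement field psi with delta = -div(psi), i.e. psi_k = i k delta_k / k^2.
psi = [np.real(np.fft.ifftn(1j * ki / k2 * delta_k)) for ki in (kx, ky, kz)]

# Displace particles from their unperturbed grid positions (growth factor D = 1 here).
q = np.mgrid[0:n, 0:n, 0:n] * (boxsize / n)
positions = [(q[i] + psi[i]) % boxsize for i in range(3)]
```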

  20. The implementation of sea ice model on a regional high-resolution scale

    Science.gov (United States)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data and access to high-performance computing clusters have provided capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with a high resolution, were used for the estimation of sensitivity of model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for year 2010-2011.

  1. Statistical Issues in Galaxy Cluster Cosmology

    Science.gov (United States)

    Mantz, Adam

    2013-01-01

    The number and growth of massive galaxy clusters are sensitive probes of cosmological structure formation. Surveys at various wavelengths can detect clusters to high redshift, but the fact that cluster mass is not directly observable complicates matters, requiring us to simultaneously constrain scaling relations of observable signals with mass. The problem can be cast as one of regression, in which the data set is truncated, the (cosmology-dependent) underlying population must be modeled, and strong, complex correlations between measurements often exist. Simulations of cosmological structure formation provide a robust prediction for the number of clusters in the Universe as a function of mass and redshift (the mass function), but they cannot reliably predict the observables used to detect clusters in sky surveys (e.g. X-ray luminosity). Consequently, observers must constrain observable-mass scaling relations using additional data, and use the scaling relation model in conjunction with the mass function to predict the number of clusters as a function of redshift and luminosity.
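
    The truncation effect described above (only clusters above a detection threshold enter the sample, so naive fits of the observable-mass relation are biased) is easy to demonstrate with a toy Monte Carlo. A minimal sketch with made-up numbers, not the paper's likelihood machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy population: steeply falling mass function, power-law L-M relation with
# lognormal scatter (all numbers illustrative).
log_m = rng.exponential(scale=0.4, size=200000) + 14.0                   # log10 M
log_l = 1.5 * (log_m - 14.5) + 44.0 + rng.normal(0, 0.3, log_m.size)     # log10 L

detected = log_l > 44.2                       # luminosity/flux selection threshold

full_fit = np.polyfit(log_m, log_l, 1)
truncated_fit = np.polyfit(log_m[detected], log_l[detected], 1)
print("fit to full population (slope, intercept):", full_fit)
print("naive fit to detected sample:", truncated_fit)   # biased shallow and high
```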

  2. Simulation of the oxidation pathway on Si(100) using high-resolution EELS

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Conor [Consiglio Nazionale delle Ricerche, Istituto di Struttura della Materia (CNR-ISM), Rome (Italy); Dipartimento di Fisica, Universita di Roma ' ' Tor Vergata' ' , Roma (Italy); European Theoretical Spectroscopy Facility (ETSF), Roma (Italy); Caramella, Lucia; Onida, Giovanni [Dipartimento di Fisica, Universita degli Studi di Milano (Italy); European Theoretical Spectroscopy Facility (ETSF), Milano (Italy)

    2012-06-15

    We compute high-resolution electron energy loss spectra (HREELS) of possible structural motifs that form during the dynamic oxidation process on Si(100), including the important metastable precursor silanone and an adjacent-dimer bridge (ADB) structure that may seed oxide formation. Spectroscopic fingerprints of single site, silanone, and "seed" structures are identified and related to changes in the surface bandstructure of the clean surface. Incorporation of oxygen into the silicon lattice through adsorption and dissociation of water is also examined. Results are compared to available HREELS spectra and surface optical data, which are closely related. Our simulations confirm that HREELS offers complementary evidence to surface optical spectroscopy, and show that its high sensitivity allows it to distinguish between energetically and structurally similar oxidation models. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Simulating return signals of a spaceborne high-spectral resolution lidar channel at 532 nm

    Science.gov (United States)

    Xiao, Yu; Binglong, Chen; Min, Min; Xingying, Zhang; Lilin, Yao; Yiming, Zhao; Lidong, Wang; Fu, Wang; Xiaobo, Deng

    2018-06-01

    A high spectral resolution lidar (HSRL) system employs a narrow spectral filter to separate the particulate (cloud/aerosol) and molecular scattering components in lidar return signals, which improves the quality of the retrieved cloud/aerosol optical properties. To better develop a future spaceborne HSRL system, a novel simulation technique was developed to simulate spaceborne HSRL return signals at 532 nm using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/aerosol extinction coefficients product and numerical weather prediction data. To validate the simulated data, a mathematical particulate extinction coefficient retrieval method for spaceborne HSRL return signals is described here. We compare particulate extinction coefficient profiles from the CALIPSO operational product with those retrieved from the simulated spaceborne HSRL data. Further uncertainty analysis shows that the relative uncertainties are acceptable for retrieving the optical properties of cloud and aerosol. The final results demonstrate that the two agree well with each other. This indicates that the return signals of the spaceborne HSRL molecular channel at 532 nm will be suitable for developing operational algorithms supporting a future spaceborne HSRL system.
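
    For context, the reason an HSRL molecular channel permits a direct extinction retrieval is that the molecular backscatter coefficient is known from the atmospheric state, so the range derivative of the attenuated molecular return gives the total extinction. Schematically (a standard HSRL relation, not necessarily the exact operational algorithm validated here):

```latex
\alpha_{\rm tot}(r) = -\frac{1}{2}\,\frac{d}{dr}
  \ln\!\left[\frac{P_{\rm mol}(r)\, r^2}{\beta_{\rm mol}(r)}\right],
\qquad
\alpha_{\rm part}(r) = \alpha_{\rm tot}(r) - \alpha_{\rm mol}(r),
```

    where P_mol is the molecular-channel signal, β_mol the molecular backscatter computed from pressure and temperature, and α_mol the molecular extinction.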

  4. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    Science.gov (United States)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system in order to enable global high-resolution weather simulation and forecasting at CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. Full representation of the three-dimensional Coriolis force is considered in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region is set to ensure computation of cross-boundary transport and updating of Dirichlet-type boundary conditions in the solution process of the elliptical equations with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, development of the Rossby-Haurwitz wave and a baroclinic wave, is carried out, and excellent computational stability and accuracy of the dynamical core have been confirmed. After implementation of the physical processes of long- and short-wave radiation, cumulus convection, micro-physical transformation of water substances and turbulent processes in the planetary boundary layer, including surface-layer vertical flux parameterization, a long-term run of the model is then carried out under an idealized aqua-planet configuration to test the model physics and the model's ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like structure of circulation. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical zonal westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The specified SST and solar insolation, being symmetric about the equator, enhance the ITCZ and tropical precipitation, which is concentrated in the tropical region. Additional analysis and

  5. Quantifying uncertainty in Transcranial Magnetic Stimulation - A high resolution simulation study in ICBM space.

    Science.gov (United States)

    Toschi, Nicola; Keck, Martin E; Welt, Tobias; Guerrisi, Maria

    2012-01-01

    Transcranial Magnetic Stimulation offers enormous potential for noninvasive brain stimulation. While it is known that brain tissue significantly "reshapes" induced field and charge distributions, most modeling investigations to date have focused on single-subject data with limited generality. Further, the effects of the significant uncertainties which exist in the simulation (i.e. brain conductivity distributions) and stimulation (e.g. coil positioning and orientations) setup have not been quantified. In this study, we construct a high-resolution anisotropic head model in standard ICBM space, which can be used as a population-representative standard for bioelectromagnetic simulations. Further, we employ Monte-Carlo simulations in order to quantify how uncertainties in conductivity values propagate all the way to induced fields and currents, demonstrating significant, regionally dependent dispersions in values that are commonly assumed to be "ground truth". This framework can be leveraged to quantify the effect of any type of uncertainty in noninvasive brain stimulation and bears relevance in all applications of TMS, both investigative and therapeutic.
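
    The uncertainty-propagation loop itself is conceptually simple: draw tissue conductivities from their assumed distributions, run the forward field computation, and accumulate voxel-wise statistics. A minimal sketch, where `simulate_induced_field` is a hypothetical placeholder (not a real solver API) and the conductivity means and spreads are illustrative:

```python
import numpy as np

def simulate_induced_field(cond):
    """Hypothetical placeholder for the electromagnetic forward solver: it should
    return the induced-field magnitude on the head grid for the given tissue
    conductivities (S/m). Here it returns a toy array so the sketch runs."""
    profile = np.linspace(0.2, 1.0, 32)
    return (profile[:, None, None] / cond["gm"]) * np.ones((32, 32, 32))

rng = np.random.default_rng(7)
samples = []
for _ in range(200):
    cond = {"gm": rng.normal(0.33, 0.05),    # illustrative means and uncertainties
            "wm": rng.normal(0.14, 0.03),
            "csf": rng.normal(1.79, 0.10)}
    samples.append(simulate_induced_field(cond))

samples = np.array(samples)
voxelwise_mean = samples.mean(axis=0)
voxelwise_std = samples.std(axis=0)          # regionally varying uncertainty map
```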

  6. VAST PLANES OF SATELLITES IN A HIGH-RESOLUTION SIMULATION OF THE LOCAL GROUP: COMPARISON TO ANDROMEDA

    International Nuclear Information System (INIS)

    Gillet, N.; Ocvirk, P.; Aubert, D.; Knebe, A.; Yepes, G.; Libeskind, N.; Gottlöber, S.; Hoffman, Y.

    2015-01-01

    We search for vast planes of satellites (VPoS) in a high-resolution simulation of the Local Group performed by the CLUES project, which significantly improves on the resolution of previous similar studies. We use a simple method for detecting planar configurations of satellites, and validate it on the known plane of M31. We implement a range of prescriptions for modeling the satellite populations, roughly reproducing the variety of recipes used in the literature, and investigate the occurrence and properties of planar structures in these populations. The structure of the simulated satellite systems is strongly non-random and contains planes of satellites, predominantly co-rotating, with, in some cases, sizes comparable to the plane observed in M31 by Ibata et al. However, the latter is slightly richer in satellites, slightly thinner, and has stronger co-rotation, which makes it stand out as overall more exceptional than the simulated planes, when compared to a random population. Although the simulated planes we find are generally dominated by one real structure forming its backbone, they are also partly fortuitous and are thus not kinematically coherent structures as a whole. Provided that the simulated and observed planes of satellites are indeed of the same nature, our results suggest that the VPoS of M31 is not a coherent disk and that one-third to one-half of its satellites must have large proper motions perpendicular to the plane
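
    A minimal sketch of the kind of plane statistic involved, i.e. measuring the rms thickness and orientation of the best-fitting plane through a set of satellite positions via the eigen-decomposition of their position tensor; the paper's detection method and co-rotation statistics are richer than this:

```python
import numpy as np

def plane_thickness_and_normal(positions):
    """positions: (N, 3) satellite coordinates relative to the host centre.
    Returns the rms thickness perpendicular to the best-fit plane and the plane
    normal (eigenvector of the smallest eigenvalue of the position tensor)."""
    x = positions - positions.mean(axis=0)
    cov = x.T @ x / len(x)
    eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return np.sqrt(eigval[0]), eigvec[:, 0]

rng = np.random.default_rng(5)
# Toy flattened satellite system (kpc); replace with simulated or observed positions.
sats = rng.normal(size=(30, 3)) * np.array([150.0, 150.0, 15.0])
rms_thickness, normal = plane_thickness_and_normal(sats)
# A co-rotation fraction can then be estimated from the sign of each satellite's
# orbital angular momentum projected onto `normal`.
```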

  7. Particle physics and cosmology

    International Nuclear Information System (INIS)

    Turner, M.S.; Schramm, D.N.

    1985-01-01

    During the past year, the research of the members of our group has spanned virtually all the topics at the interface of cosmology and particle physics: inflationary Universe scenarios, astrophysical and cosmological constraints on particle properties, ultra-high energy cosmic ray physics, quantum field theory in curved space-time, cosmology with extra dimensions, superstring cosmology, neutrino astronomy with large, underground detectors, and the formation of structure in the Universe

  8. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gu Songxiang; Kyprianou, Iacovos [Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD (United States); Gupta, Rajiv, E-mail: songxiang.gu@fda.hhs.gov, E-mail: rgupta1@partners.org, E-mail: iacovos.kyprianou@fda.hhs.gov [Massachusetts General Hospital, Boston, MA (United States)

    2011-09-21

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  9. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    International Nuclear Information System (INIS)

    Gu Songxiang; Kyprianou, Iacovos; Gupta, Rajiv

    2011-01-01

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  10. Adaptive resolution simulation of salt solutions

    International Nuclear Information System (INIS)

    Bevc, Staš; Praprotnik, Matej; Junghans, Christoph; Kremer, Kurt

    2013-01-01

    We present an adaptive resolution simulation of aqueous salt (NaCl) solutions at ambient conditions using the adaptive resolution scheme. Our multiscale approach concurrently couples the atomistic and coarse-grained models of the aqueous NaCl, where water molecules and ions change their resolution while moving from one resolution domain to the other. We employ standard extended simple point charge (SPC/E) and simple point charge (SPC) water models in combination with AMBER and GROMOS force fields for ion interactions in the atomistic domain. Electrostatics in our model are described by the generalized reaction field method. The effective interactions for water–water and water–ion interactions in the coarse-grained model are derived using a structure-based coarse-graining approach, while the Coulomb interactions between ions are appropriately screened. To ensure an even distribution of water molecules and ions across the simulation box we employ thermodynamic forces. We demonstrate that the equilibrium structural properties, e.g. radial distribution functions and density distributions of all the species, and dynamical properties are correctly reproduced by our adaptive resolution method. Our multiscale approach, which is general and can be used for any classical non-polarizable force field and/or types of ions, will significantly speed up biomolecular simulations involving aqueous salt. (paper)
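
    For orientation, the force interpolation at the heart of such adaptive resolution schemes couples the two models through a smooth resolution function w(x) that equals 1 in the atomistic region and 0 in the coarse-grained region; schematically (the generic AdResS form, quoted for context rather than from this paper):

```latex
\mathbf{F}_{\alpha\beta} =
  w(X_\alpha)\, w(X_\beta)\, \mathbf{F}^{\rm atomistic}_{\alpha\beta}
  + \bigl[1 - w(X_\alpha)\, w(X_\beta)\bigr]\, \mathbf{F}^{\rm CG}_{\alpha\beta},
\qquad w(x) \in [0, 1],
```

    where α and β label molecules and X their centres of mass; the thermodynamic forces mentioned in the abstract are added on top of this interpolation to keep the density flat across the hybrid region.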

  11. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    Science.gov (United States)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches, a global variable-resolution model and nesting, for high-resolution regional climate modeling. The global variable-resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.

  12. The Impact of High-Resolution Sea Surface Temperatures on the Simulated Nocturnal Florida Marine Boundary Layer

    Science.gov (United States)

    LaCasse, Katherine M.; Splitt, Michael E.; Lazarus, Steven M.; Lapenta, William M.

    2008-01-01

    High- and low-resolution sea surface temperature (SST) analysis products are used to initialize the Weather Research and Forecasting (WRF) Model for May 2004 for short-term forecasts over Florida and surrounding waters. Initial and boundary conditions for the simulations were provided by a combination of observations, large-scale model output, and analysis products. The impact of using a 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) SST composite on subsequent evolution of the marine atmospheric boundary layer (MABL) is assessed through simulation comparisons and limited validation. Model results are presented for individual simulations, as well as for aggregates of easterly- and westerly-dominated low-level flows. The simulation comparisons show that the use of MODIS SST composites results in enhanced convergence zones, earlier and more intense horizontal convective rolls, and an increase in precipitation as well as a change in precipitation location. Validation of 10-m winds with buoys shows a slight improvement in wind speed. The most significant results of this study are that 1) vertical wind stress divergence and pressure gradient accelerations across the Florida Current region vary in importance as a function of flow direction and stability and 2) the warmer Florida Current in the MODIS product transports heat vertically and downwind of this heat source, modifying the thermal structure and the MABL wind field primarily through pressure gradient adjustments.

  13. High-resolution WRF-LES simulations for real episodes: A case study for prealpine terrain

    Science.gov (United States)

    Hald, Cornelius; Mauder, Matthias; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    While in most large or regional scale weather and climate models turbulence is parametrized, LES (Large Eddy Simulation) allows for the explicit modeling of turbulent structures in the atmosphere. With the exponential growth in available computing power the technique has become more and more applicable, yet it has mostly been used to model idealized scenarios. We investigate how well WRF-LES can represent small-scale weather patterns. The results are evaluated against different hydrometeorological measurements. We use WRF-LES to model the diurnal cycle for a 48-hour episode in summer over moderately complex terrain in southern Germany. The model setup uses a high-resolution digital elevation model and land use and vegetation maps. The atmospheric boundary conditions are set by reanalysis data. Schemes for radiation and microphysics and a land-surface model are employed. The biggest challenge in modeling arises from the high horizontal resolution of dx = 30 m, since the subgrid-scale model then requires a vertical resolution dz ≈ 10 m for optimal results. We observe model instabilities and present solutions like smoothing of the surface input data, careful positioning of the model domain and shortening of the model time step down to a twentieth of a second. Model results are compared to an array of various instruments including eddy covariance stations, LIDAR, RASS, SODAR, weather stations and unmanned aerial vehicles. All instruments are part of the TERENO pre-Alpine area and were employed in the orchestrated measurement campaign ScaleX in July 2015. Examination of the results shows reasonable agreement between model and measurements in temperature and moisture profiles. Modeled wind profiles are highly dependent on the vertical resolution and are in accordance with measurements only at higher wind speeds. A direct comparison of turbulence is made difficult by the purely statistical character of turbulent motions in the model.

  14. Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita

    Science.gov (United States)

    Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.

    2009-12-01

    Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions

  15. STAR FORMATION AND FEEDBACK IN SMOOTHED PARTICLE HYDRODYNAMIC SIMULATIONS. II. RESOLUTION EFFECTS

    International Nuclear Information System (INIS)

    Christensen, Charlotte R.; Quinn, Thomas; Bellovary, Jillian; Stinson, Gregory; Wadsley, James

    2010-01-01

    We examine the effect of mass and force resolution on a specific star formation (SF) recipe using a set of N-body/smooth particle hydrodynamic simulations of isolated galaxies. Our simulations span halo masses from 10⁹ to 10¹³ M⊙, more than 4 orders of magnitude in mass resolution, and 2 orders of magnitude in the gravitational softening length, ε, representing the force resolution. We examine the total global SF rate, the SF history, and the quantity of stellar feedback and compare the disk structure of the galaxies. Based on our analysis, we recommend using at least 10⁴ particles each for the dark matter (DM) and gas component and a force resolution of ε ∼ 10⁻³ R_vir when studying global SF and feedback. When the spatial distribution of stars is important, the number of gas and DM particles must be increased to at least 10⁵ of each. Low-mass resolution simulations with fixed softening lengths show particularly weak stellar disks due to two-body heating. While decreasing spatial resolution in low-mass resolution simulations limits two-body effects, density and potential gradients cannot be sustained. Regardless of the softening, low-mass resolution simulations contain fewer high-density regions where SF may occur. Galaxies of approximately 10¹⁰ M⊙ display unique sensitivity to both mass and force resolution. This mass of galaxy has a shallow potential and is on the verge of forming a disk. The combination of these factors gives this galaxy the potential for strong gas outflows driven by supernova feedback and makes it particularly sensitive to any changes to the simulation parameters.

  16. The cosmological principle is not in the sky

    Science.gov (United States)

    Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan

    2017-08-01

    The homogeneity of the matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The assumption is testable, however, and thus need no longer be treated as a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume, with the radius scale varying up to 300 h⁻¹ Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h⁻¹ Mpc scale, and even the average is located far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from an N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous on that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm and opens a new field of research in theoretical cosmology.
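
    As a schematic of the counts-in-spheres test described above, the sketch below counts galaxies and random points inside spheres of increasing radius and compares the means and dispersions of the counts. It is only an illustration with uniform toy data in a periodic box; the box size, sample sizes, sphere centres, and radii are assumptions, not the survey's.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def count_in_spheres(positions, centers, radius, boxsize):
        """Count points within `radius` of each centre (periodic box)."""
        tree = cKDTree(positions, boxsize=boxsize)
        return np.array([len(tree.query_ball_point(c, radius)) for c in centers])

    rng = np.random.default_rng(42)
    boxsize = 1000.0                                       # Mpc/h, illustrative
    galaxies = rng.uniform(0, boxsize, size=(20000, 3))    # stand-in "survey"
    randoms  = rng.uniform(0, boxsize, size=(20000, 3))    # random catalogue
    centers  = rng.uniform(0, boxsize, size=(200, 3))      # sphere centres

    for radius in (50.0, 100.0, 200.0, 300.0):             # Mpc/h scales
        n_gal = count_in_spheres(galaxies, centers, radius, boxsize)
        n_ran = count_in_spheres(randoms, centers, radius, boxsize)
        # A homogeneous distribution should give similar means and dispersions;
        # excess scatter in the data relative to the randoms signals clustering.
        print(f"R={radius:5.0f}: mean {n_gal.mean():8.1f} vs {n_ran.mean():8.1f}, "
              f"std {n_gal.std():7.1f} vs {n_ran.std():7.1f}")
    ```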

  17. Speeding up N-body simulations of modified gravity: chameleon screening models

    Science.gov (United States)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
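
    The key computational point above is that the discretised scalar-field equation can be recast so that each cell update has a closed-form solution instead of requiring Newton-Gauss-Seidel relaxation. The paper's actual change of variables and discretisation are not reproduced here; the toy sketch below merely contrasts a vectorised closed-form (Cardano) solve of a per-cell depressed cubic, t³ + p t + q = 0 with p > 0, against an iterative Newton solve of the same equation, which stands in for the repeated per-cell relaxation that the older approach relies on.

    ```python
    import numpy as np

    def cardano_depressed(p, q):
        """Closed-form real root of t^3 + p t + q = 0 (valid here since p > 0,
        which guarantees a single real root)."""
        disc = (q / 2.0) ** 2 + (p / 3.0) ** 3
        s = np.sqrt(disc)
        return np.cbrt(-q / 2.0 + s) + np.cbrt(-q / 2.0 - s)

    def newton_root(p, q, tol=1e-10, max_iter=100):
        """Iterative Newton solve of the same cubic, mimicking the per-cell
        relaxation sweeps an iterative solver has to repeat many times."""
        t = np.zeros_like(p)
        for _ in range(max_iter):
            f = t ** 3 + p * t + q
            t = t - f / (3.0 * t ** 2 + p)
            if np.max(np.abs(f)) < tol:
                break
        return t

    rng = np.random.default_rng(1)
    p = rng.uniform(0.5, 2.0, size=64 ** 3)    # toy per-cell coefficients
    q = rng.uniform(-1.0, 1.0, size=64 ** 3)

    t_analytic = cardano_depressed(p, q)       # one vectorised pass, no iteration
    t_newton = newton_root(p, q)               # many sweeps to reach the same answer
    print("max |difference|:", np.max(np.abs(t_analytic - t_newton)))
    ```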

  18. Speeding up N -body simulations of modified gravity: chameleon screening models

    International Nuclear Information System (INIS)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio; Barreira, Alexandre; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo

    2017-01-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  19. Speeding up N -body simulations of modified gravity: chameleon screening models

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Barreira, Alexandre [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Hellwing, Wojciech A.; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Zhao, Gong-Bo, E-mail: sownak.bose@durham.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: barreira@mpa-garching.mpg.de, E-mail: jianhua.he@durham.ac.uk, E-mail: wojciech.hellwing@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: claudio.llinares@durham.ac.uk, E-mail: gbzhao@nao.cas.cn [National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012 (China)

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  20. Galaxy formation: internal mechanisms and cosmological processes

    International Nuclear Information System (INIS)

    Martig, Marie

    2010-01-01

    This thesis is devoted to galaxy formation and evolution in a cosmological context. Cosmological simulations have unveiled two main modes of galaxy growth: hierarchical growth by mergers and accretion of cold gas from cosmic filaments. However, these simulations rarely take into account small-scale mechanisms that govern internal evolution and that are a key ingredient for understanding galaxy formation and evolution. Thanks to a new simulation technique that I have developed, I first studied the colors of galaxies, and in particular the reddening of elliptical galaxies. I showed that the gas disk in an elliptical galaxy could be stabilized against star formation because of the galaxy's stellar component being within a spheroid instead of a disk. This mechanism can explain the red colors of some elliptical galaxies that contain a gas disk. I also studied the formation of spiral galaxies: most cosmological simulations cannot explain the formation of Milky Way-like galaxies, i.e., galaxies with a large disk and a small bulge. I showed that this issue could be partly solved by taking into account in the simulations the mass loss from evolved stars through stellar winds, planetary nebulae and supernova explosions. (author) [fr]

  1. Numerical Convergence in the Dark Matter Halos Properties Using Cosmological Simulations

    Science.gov (United States)

    Mosquera-Escobar, X. E.; Muñoz-Cuartas, J. C.

    2017-07-01

    Nowadays, the accepted cosmological model is the so-called Λ-Cold Dark Matter (ΛCDM) model. In this model the universe is considered to be homogeneous and isotropic, composed of diverse components such as dark matter and dark energy, the latter being the most abundant. Dark matter plays an important role because it is responsible for generating gravitational potential wells, commonly called dark matter halos. In the end, dark matter halos are characterized by a set of parameters (mass, radius, concentration, spin parameter), and these parameters provide valuable information for different studies, such as galaxy formation, gravitational lensing, etc. In this work we use the publicly available code Gadget2 to perform cosmological simulations and find to what extent the numerical parameters of the simulations, such as the gravitational softening, integration time step and force calculation accuracy, affect the physical properties of dark matter halos. We ran a suite of simulations where these parameters were varied in a systematic way in order to explore accurately their impact on the structural parameters of dark matter halos. We show that variations in the numerical parameters affect the structural parameters of dark matter halos, such as the concentration and virial radius. We show that these modifications emerged when structures become nonlinear (at redshift 2) for the scale of our simulations, such that the variations affected the formation and evolution of halos mainly at later cosmic times. As a quantitative result, we propose the most appropriate values for the numerical parameters of the simulations, such that they do not affect the properties of the halos that form. For the force calculation accuracy we suggest values smaller than or equal to 0.0001, for the integration time step values smaller than or equal to 0.005, and for the gravitational softening we propose a value equal to 1/60 of the mean interparticle distance; these values correspond to the
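
    Read literally, the softening recommendation above translates into a one-line formula, ε = (L / N^{1/3}) / 60, where L is the box size and N the particle number. The sketch below applies it to illustrative values that are not taken from the paper.

    ```python
    def recommended_softening(box_size_mpc_h, n_particles):
        """Softening = 1/60 of the mean interparticle distance, following the
        recommendation quoted in the abstract above."""
        mean_separation = box_size_mpc_h / n_particles ** (1.0 / 3.0)
        return mean_separation / 60.0

    # Illustrative values only (not from the paper):
    box = 100.0          # Mpc/h
    npart = 512 ** 3
    eps = recommended_softening(box, npart)
    print(f"mean separation = {box / npart ** (1 / 3):.4f} Mpc/h, "
          f"softening = {eps * 1000:.2f} kpc/h")
    ```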

  2. Simulating high spatial resolution high severity burned area in Sierra Nevada forests for California Spotted Owl habitat climate change risk assessment and management.

    Science.gov (United States)

    Keyser, A.; Westerling, A. L.; Jones, G.; Peery, M. Z.

    2017-12-01

    Sierra Nevada forests have experienced an increase in very large fires with significant areas of high burn severity, such as the Rim (2013) and King (2014) fires, that have impacted the habitat of endangered species such as the California spotted owl. In order to support land managers' forest planning and risk assessment activities, we used historical wildfire records from the Monitoring Trends in Burn Severity project and gridded hydroclimate and land surface characteristics data to develop statistical models that simulate the frequency, location and extent of high severity burned area in Sierra Nevada forest wildfires as functions of climate and land surface characteristics. We define high severity here as BA90 area: the area comprising patches with ninety percent or more basal area killed within a larger fire. We developed a system of statistical models to characterize the probability of large fire occurrence, the probability of significant BA90 area being present given a large fire, and the total extent of BA90 area in a fire on a 1/16 degree lat/lon grid over the Sierra Nevada. Repeated draws from binomial and generalized Pareto distributions using these probabilities generated a library of simulated histories of high severity fire for a range of near-future (50 yr) climate and fuels management scenarios. Fuels management scenarios were provided by USFS Region 5. Simulated BA90 area was then downscaled to 30 m resolution using a statistical model we developed with Random Forest techniques to estimate the probability of adjacent 30 m pixels burning with ninety percent basal kill as a function of fire size and vegetation and topographic features. The result is a library of simulated high resolution maps of BA90 burned areas for a range of climate and fuels management scenarios with which we estimated conditional probabilities of owl nesting sites being impacted by high severity wildfire.
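
    The stochastic core of such a generator can be sketched as repeated draws from a binomial occurrence model and a generalized Pareto distribution for the BA90 extent, as described above. All probabilities and distribution parameters below are placeholders for illustration, not values from the study.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(7)

    # Placeholder parameters for one grid cell and one year (not from the study):
    p_large_fire = 0.05                  # probability of a large fire occurring
    p_ba90_given_fire = 0.6              # probability of significant BA90 area given a fire
    gpd_shape, gpd_scale = 0.3, 150.0    # generalized Pareto for the BA90 extent (ha)

    def simulate_cell_years(n_years):
        """One simulated high-severity (BA90) burned-area history for a cell."""
        fire = rng.random(n_years) < p_large_fire
        ba90_present = fire & (rng.random(n_years) < p_ba90_given_fire)
        area = np.zeros(n_years)
        area[ba90_present] = genpareto.rvs(gpd_shape, scale=gpd_scale,
                                           size=ba90_present.sum(),
                                           random_state=rng)
        return area

    history = simulate_cell_years(50)    # a 50-year simulated history
    print("years with BA90 area:", int((history > 0).sum()),
          "| total BA90 area (ha):", round(history.sum(), 1))
    ```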

  3. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    International Nuclear Information System (INIS)

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-01

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s⁻¹ and 200 km, respectively), and we discuss the potential consequences for the first flames.

  4. Simulation of synoptic and sub-synoptic phenomena over East Africa and Arabian Peninsula for current and future climate using a high resolution AGCM

    KAUST Repository

    Raj, Jerry

    2015-04-01

    Climate regimes of East Africa and Arabia are complex and poorly understood. East Africa has large-scale tropical controls like major convergence zones and air streams. The region is in the proximity of two monsoons, north-east and south-west, and the humid and thermally unstable Congo air stream. The domain comprises regions with one, two, and three rainfall maxima, and the rainfall pattern over this region has high spatial variability. To explore the synoptic and sub-synoptic phenomena that drive the climate of the region we conducted climate simulations using a high resolution Atmospheric General Circulation Model (AGCM), GFDL's High Resolution Atmospheric Model (HiRAM). Historic simulations (1975-2004) and future projections (2007-2050), with both RCP 4.5 and RCP 8.5 pathways, were performed according to the CORDEX standard. The sea surface temperature (SST) was prescribed from the 2° × 2.5° latitude-longitude resolution GFDL Earth System Model runs of IPCC AR5, as the bottom boundary condition over the ocean. Our simulations were conducted at a horizontal grid spacing of 25 km, which is an ample resolution for regional climate simulation. In comparison with regional models, global HiRAM has the advantage of accounting for two-way interaction between regional and global scale processes. Our initial results show that the HiRAM simulations for the historic period reproduce well the regional climate in East Africa and the Arabian Peninsula with their complex interplay of regional and global processes. Our future projections indicate warming and increased precipitation over the Ethiopian highlands and the Greater Horn of Africa. We found significant regional differences between the RCP 4.5 and RCP 8.5 projections; e.g., the west coast of the Arabian Peninsula shows anomalies of opposite sign in the two simulations.

  5. The effect of baryons in the cosmological lensing PDFs

    Science.gov (United States)

    Castro, Tiago; Quartin, Miguel; Giocoli, Carlo; Borgani, Stefano; Dolag, Klaus

    2018-05-01

    Observational cosmology is passing through a unique moment of grandeur with the amount of quality data growing fast. However, in order to better take advantage of this moment, data analysis tools have to keep pace. Understanding the effect of baryonic matter on the large-scale structure is one of the challenges to be faced in cosmology. In this work, we have thoroughly studied the effect of baryonic physics on different lensing statistics. Making use of the Magneticum Pathfinder suite of simulations we show that the influence of luminous matter on the 1-point lensing statistics of point sources is significant, enhancing the probability of magnified objects with μ > 3 by a factor of 2 and the occurrence of multiple images by a factor of 5-500, depending on the source redshift and size. We also discuss the dependence of the lensing statistics on the angular resolution of sources. Our results and methodology were carefully tested in order to guarantee that our uncertainties are much smaller than the effects presented here.

  6. Simulation of high-resolution MFM tip using exchange-spring magnet

    Energy Technology Data Exchange (ETDEWEB)

    Saito, H. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan)]. E-mail: hsaito@ipc.akita-u.ac.jp; Yatsuyanagi, D. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ishio, S. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ito, A. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Kawamura, H. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Ise, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Taguchi, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Takahashi, S. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan)

    2007-03-15

    The transfer function of magnetic force microscope (MFM) tips using an exchange-spring trilayer composed of a centered soft magnetic layer and two hard magnetic layers was calculated, and the resolution was estimated by considering the thermodynamic noise limit of an MFM cantilever. It was found that reducing both the thickness of the centered soft magnetic layer and the magnetization of the hard magnetic layers is important for obtaining high resolution. Tips using an exchange-spring trilayer with a very thin FeCo layer and isotropic hard magnetic layers, such as CoPt and FePt, are found to be suitable for obtaining a resolution of less than 10 nm at room temperature.

  7. Geometric perspective on singularity resolution and uniqueness in loop quantum cosmology

    International Nuclear Information System (INIS)

    Corichi, Alejandro; Singh, Parampreet

    2009-01-01

    We reexamine the issue of singularity resolution in homogeneous loop quantum cosmology from the perspective of geometrical entities such as expansion rate and the shear scalar. These quantities are very reliable measures of the properties of spacetime and can be defined not only at the classical and effective level, but also at an operator level in the quantum theory. From their behavior in the effective constraint surface and in the effective loop quantum spacetime, we show that one can severely restrict the ambiguities in regularization of the quantum constraint and rule out unphysical choices. We analyze this in the flat isotropic model and the Bianchi-I spacetimes. In the former case we show that the expansion rate is absolutely bounded only for the so-called improved quantization, a result which synergizes with uniqueness of this quantization as proved earlier. Surprisingly, for the Bianchi-I spacetime, we show that out of the available choices, the expansion rate and shear are bounded for only one regularization of the quantum constraint. It turns out that only for this choice, the theory exhibits quantum gravity corrections at a unique scale, and is physically viable.

  8. Uncertainty of global summer precipitation in the CMIP5 models: a comparison between high-resolution and low-resolution models

    Science.gov (United States)

    Huang, Danqing; Yan, Peiwen; Zhu, Jian; Zhang, Yaocun; Kuang, Xueyuan; Cheng, Jing

    2018-04-01

    The uncertainty of global summer precipitation simulated by 23 CMIP5 CGCMs and the possible impacts of model resolution are investigated in this study. Large uncertainties exist over the tropical and subtropical regions, which can be mainly attributed to the simulation of convective precipitation. High-resolution models (HRMs) and low-resolution models (LRMs) are further investigated to demonstrate their different contributions to the uncertainties of the ensemble mean. It is shown that the high-resolution model ensemble mean (HMME) and the low-resolution model ensemble mean (LMME) mitigate the biases between the MME and observations over most continents and oceans, respectively. The HMME simulates more precipitation than the LMME over most oceans, but less precipitation over some continents. The dominant precipitation category in the HRMs (LRMs) is heavy precipitation (moderate precipitation) over the tropical regions. The combinations of convective and stratiform precipitation are also quite different: the HMME has a much higher ratio of stratiform precipitation while the LMME has more convective precipitation. Finally, differences in precipitation between the HMME and LMME can be traced to their differences in the SST simulations via local and remote air-sea interactions.

  9. High-resolution simulations of cylindrical void collapse in energetic materials: Effect of primary and secondary collapse on initiation thresholds

    Science.gov (United States)

    Rai, Nirmal Kumar; Schmidt, Martin J.; Udaykumar, H. S.

    2017-04-01

    Void collapse in energetic materials leads to hot spot formation and enhanced sensitivity. Much recent work has been directed towards simulation of collapse-generated reactive hot spots. The resolution of voids in calculations to date has varied as have the resulting predictions of hot spot intensity. Here we determine the required resolution for reliable cylindrical void collapse calculations leading to initiation of chemical reactions. High-resolution simulations of collapse provide new insights into the mechanism of hot spot generation. It is found that initiation can occur in two different modes depending on the loading intensity: Either the initiation occurs due to jet impact at the first collapse instant or it can occur at secondary lobes at the periphery of the collapsed void. A key observation is that secondary lobe collapse leads to large local temperatures that initiate reactions. This is due to a combination of a strong blast wave from the site of primary void collapse and strong colliding jets and vortical flows generated during the collapse of the secondary lobes. The secondary lobe collapse results in a significant lowering of the predicted threshold for ignition of the energetic material. The results suggest that mesoscale simulations of void fields may suffer from significant uncertainty in threshold predictions because unresolved calculations cannot capture the secondary lobe collapse phenomenon. The implications of this uncertainty for mesoscale simulations are discussed in this paper.

  10. arXiv Neutrino masses and cosmology with Lyman-alpha forest power spectrum

    CERN Document Server

    Palanque-Delabrouille, Nathalie; Baur, Julien; Magneville, Christophe; Rossi, Graziano; Lesgourgues, Julien; Borde, Arnaud; Burtin, Etienne; LeGoff, Jean-Marc; Rich, James; Viel, Matteo; Weinberg, David

    2015-11-06

    We present constraints on neutrino masses, the primordial fluctuation spectrum from inflation, and other parameters of the $\Lambda$CDM model, using the one-dimensional Ly$\alpha$-forest power spectrum measured by Palanque-Delabrouille et al. (2013) from SDSS-III/BOSS, complemented by Planck 2015 cosmic microwave background (CMB) data and other cosmological probes. This paper improves on the previous analysis by Palanque-Delabrouille et al. (2015) by using a more powerful set of calibrating hydrodynamical simulations that reduces uncertainties associated with resolution and box size, by adopting a more flexible set of nuisance parameters for describing the evolution of the intergalactic medium, by including additional freedom to account for systematic uncertainties, and by using Planck 2015 constraints in place of Planck 2013. Fitting Ly$\alpha$ data alone leads to cosmological parameters in excellent agreement with the values derived independently from CMB data, except for a weak tension on the scalar index ...

  11. Luciola Hypertelescope Space Observatory. Versatile, Upgradable High-Resolution Imaging,from Stars to Deep-Field Cosmology

    Science.gov (United States)

    Labeyrie, Antoine; Le Coroller, Herve; Dejonghe, Julien; Lardiere, Olivier; Aime, Claude; Dohlen, Kjetil; Mourard, Denis; Lyon, Richard; Carpenter, Kenneth G.

    2008-01-01

    Luciola is a large (one kilometer) "multi-aperture densified-pupil imaging interferometer", or "hypertelescope", employing many small apertures, rather than a few large ones, for obtaining direct snapshot images with a high information content. A diluted collector mirror, deployed in space as a flotilla of small mirrors, focuses a sky image which is exploited by several beam-combiner spaceships. Each contains a pupil densifier micro-lens array to avoid the diffractive spread and image attenuation caused by the small sub-apertures. The elucidation of hypertelescope imaging properties during the last decade has shown that many small apertures tend to be far more efficient, regarding the science yield, than a few large ones providing a comparable collecting area. For similar underlying physical reasons, radio-astronomy has also evolved in the direction of many-antenna systems such as the proposed Low Frequency Array having hundreds of thousands of individual receivers. With its high limiting magnitude, reaching the mv = 30 limit of HST once 100 collectors of 25 cm match its collecting area, high-resolution direct imaging in multiple channels, broad spectral coverage from the 1200 Angstrom ultra-violet to the 20 micron infra-red, apodization, coronagraphic and spectroscopic capabilities, the proposed hypertelescope observatory addresses very broad and innovative science covering different areas of ESA's Cosmic Vision program. In the initial phase, a focal spacecraft covering the UV to near-IR spectral range of EMCCD photon-counting cameras (currently 200 to 1000 nm) will image details on the surface of many stars, as well as their environment, including multiple stars and clusters. Spectra will be obtained for each resel. It will also image neutron star, black-hole and micro-quasar candidates, as well as active galactic nuclei, quasars, gravitational lenses, and other Cosmic Vision targets observable with the initial modest crowding limit. With subsequent upgrade

  12. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B.

    2013-01-01

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for the approval of the domestic regulatory authority. Thus, highly detailed modeling efforts are demanded in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g., surface sources or reflective surfaces. In the framework of the existing research a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  13. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B. [RWTH Aachen Univ. (Germany). Inst. of Nuclear Fuel Cycle; Damm, G. [Research Center Juelich (Germany)

    2013-11-15

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for the approval of the domestic regulatory authority. Thus, highly detailed modeling efforts are demanded in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g., surface sources or reflective surfaces. In the framework of the existing research a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  14. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations

    Energy Technology Data Exchange (ETDEWEB)

    Di Luca, Alejandro; Laprise, Rene [Universite du Quebec a Montreal (UQAM), Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Departement des Sciences de la Terre et de l' Atmosphere, PK-6530, Succ. Centre-ville, B.P. 8888, Montreal, QC (Canada); De Elia, Ramon [Universite du Quebec a Montreal, Ouranos Consortium, Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal (Canada)

    2012-03-15

    Regional Climate Models (RCMs) constitute the most commonly used method for performing affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order that the RCM technique can generate some added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of the fine-scale precipitation variance required to adequately describe a given climate statistic are then used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season than in the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing the evaluation of the sensitivity to changing resolution in the real climate system. The results show that RCMs tend to reproduce the PAV relatively well compared to observations, although they overestimate the PAV in the warm season and in mountainous regions.

  15. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    Science.gov (United States)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structures at small scales. Most previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. The code provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and results of other codes, validating the QP as a crucial player in structure formation at small scales.
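
    The Quantum Potential referred to above is usually written in the Madelung form shown below (the standard textbook expression per unit mass for a field of particle mass m and density ρ; sign and mass-factor conventions vary between references, and this expression is not quoted from the paper):

    ```latex
    Q \;=\; -\,\frac{\hbar^{2}}{2 m^{2}}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},
    \qquad
    \mathbf{a}_{\mathrm{QP}} \;=\; -\,\nabla Q
    ```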

  16. Open problems in string cosmology

    International Nuclear Information System (INIS)

    Toumbas, N.

    2010-01-01

    Some of the open problems in string cosmology are highlighted within the context of the recently constructed thermal and quantum superstring cosmological solutions. Emphasis is given on the high temperature cosmological regime, where it is argued that thermal string vacua in the presence of gravito-magnetic fluxes can be used to bypass the Hagedorn instabilities of string gas cosmology. This article is based on a talk given at the workshop on ''Cosmology and Strings'', Corfu, September 6-13, 2009. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  17. Hyper-resolution urban flood modeling using high-resolution radar precipitation and LiDAR data

    Science.gov (United States)

    Noh, S. J.; Lee, S.; Lee, J.; Seo, D. J.

    2016-12-01

    Floods occur most frequently among all natural hazards, often causing widespread economic damage and loss of human lives. In particular, urban flooding is becoming increasingly costly and difficult to manage with a greater concentration of population and assets in urban centers. Despite known benefits for accurate representation of small-scale features and flow interactions among different flow domains, which have a significant impact on flood propagation, high-resolution modeling has not been fully utilized due to expensive computation and various uncertainties from model structure, input and parameters. In this study, we assess the potential of hyper-resolution hydrologic-hydraulic modeling using high-resolution radar precipitation and LiDAR data for improved urban flood prediction and hazard mapping. We describe a hyper-resolution 1D-2D coupled urban flood model for pipe and surface flows and evaluate the accuracy of the street-level inundation information produced. For detailed geometric representation of urban areas and for computational efficiency, we use 1 m-resolution topographical data, processed from LiDAR measurements, in conjunction with adaptive mesh refinement. For street-level simulation in large urban areas at grid sizes of 1 to 10 m, a hybrid parallel computing scheme using MPI and OpenMP is also implemented in a high-performance computing system. The modeling approach developed is applied to the Johnson Creek Catchment (40 km²), which makes up the Arlington Urban Hydroinformatics Testbed. In addition, we discuss the availability of a hyper-resolution simulation archive for improved real-time flood mapping.

  18. KiDS-450: cosmological constraints from weak-lensing peak statistics - II: Inference from shear peaks using N-body simulations

    Science.gov (United States)

    Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko

    2018-02-01

    We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg² of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S_8 = σ_8 √(Ω_m/0.3), which probes the (Ω_m, σ_8) plane perpendicularly to its main degeneracy. We estimate S_8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high significance peaks alone (3 ≤ S/N ≤ 4), which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S_8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.

  19. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing it to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
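
    In the Gaussian-likelihood limit, compression to one number per parameter of this kind is usually realised as a linear score compression, t = (∂μ/∂θ)ᵀ C⁻¹ (d - μ), evaluated at fiducial parameters. The sketch below applies that operation to a toy linear model; the model, covariance, and fiducial values are assumptions for illustration and are not taken from the paper.

    ```python
    import numpy as np

    def score_compress(data, mu, dmu_dtheta, cov):
        """Compress a data vector to one summary per parameter:
        t_i = (dmu/dtheta_i)^T C^{-1} (d - mu), at fiducial parameters."""
        cinv = np.linalg.inv(cov)
        return dmu_dtheta @ cinv @ (data - mu)

    # Toy model: data are 100 noisy samples of a line y = a + b x.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 100)
    a_true, b_true, sigma = 1.0, 2.0, 0.1
    data = a_true + b_true * x + rng.normal(0.0, sigma, size=x.size)

    # Fiducial model, its parameter derivatives, and the noise covariance.
    a_fid, b_fid = 0.9, 2.1
    mu = a_fid + b_fid * x
    dmu = np.vstack([np.ones_like(x), x])      # derivatives w.r.t. (a, b)
    cov = sigma ** 2 * np.eye(x.size)

    t = score_compress(data, mu, dmu, cov)     # two numbers for two parameters
    print("compressed summaries:", t)
    ```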

  20. Resolution of Cosmological Singularity and a Plausible Mechanism of the Big Bang

    OpenAIRE

    Choudhury, D. C.

    2001-01-01

    The initial cosmological singularity in the framework of the general theory of relativity is resolved by introducing the effect of the uncertainty principle of quantum theory without violating conventional laws of physics. A plausible account of the mechanism of the big bang, analogous to that of a nuclear explosion, is given and the currently accepted Planck temperature of about 10³² K at the beginning of the big bang is predicted. Subj-class: cosmology: theory-pre-big bang; mechanism of t...

  1. String theory and applications to phenomenology and cosmology

    International Nuclear Information System (INIS)

    Florakis, I.G.

    2011-07-01

    This thesis treats applications of String Theory to problems of cosmology and high energy phenomenology. In particular, we investigate problems related to the description of the initial state of the universe, using the methods of perturbative String Theory. After a review of the string-theoretic tools that will be employed, we discuss a novel degeneracy symmetry between the bosonic and fermionic massive towers of states (MSDS symmetry), living at particular points of moduli space. We study the marginal deformations of MSDS vacua and exhibit their natural thermal interpretation, in connection with the resolution of the Hagedorn divergences of string thermodynamics. The cosmological evolution of a special, 2-dimensional thermal 'Hybrid' model is presented and the correct implementation of the full stringy degrees of freedom leads to the absence of gravitational singularities, within a fully perturbative treatment. (author)

  2. Simulating quantum effects of cosmological expansion using a static ion trap

    Science.gov (United States)

    Menicucci, Nicolas C.; Olson, S. Jay; Milburn, Gerard J.

    2010-09-01

    We propose a new experimental test bed that uses ions in the collective ground state of a static trap to study the analogue of quantum-field effects in cosmological spacetimes, including the Gibbons-Hawking effect for a single detector in de Sitter spacetime, as well as the possibility of modeling inflationary structure formation and the entanglement signature of de Sitter spacetime. To date, proposals for using trapped ions in analogue gravity experiments have simulated the effect of gravity on the field modes by directly manipulating the ions' motion. In contrast, by associating laboratory time with conformal time in the simulated universe, we can encode the full effect of curvature in the modulation of the laser used to couple the ions' vibrational motion and electronic states. This model simplifies the experimental requirements for modeling the analogue of an expanding universe using trapped ions, and it enlarges the validity of the ion-trap analogy to a wide range of interesting cases.

  3. Aerosol midlatitude cyclone indirect effects in observations and high-resolution simulations

    Directory of Open Access Journals (Sweden)

    D. T. McCoy

    2018-04-01

    Aerosol–cloud interactions are a major source of uncertainty in inferring the climate sensitivity from the observational record of temperature. The adjustment of clouds to aerosol is a poorly constrained aspect of these aerosol–cloud interactions. Here, we examine the response of midlatitude cyclone cloud properties to a change in cloud droplet number concentration (CDNC). Idealized experiments in high-resolution, convection-permitting global aquaplanet simulations with constant CDNC are compared to 13 years of remote-sensing observations. Observations and idealized aquaplanet simulations agree that increased warm conveyor belt (WCB) moisture flux into cyclones is consistent with a higher cyclone liquid water path (CLWP). When CDNC is increased, a larger LWP is needed to give the same rain rate. The LWP adjusts to allow the rain rate to be equal to the moisture flux into the cyclone along the WCB. This results in an increased CLWP for higher CDNC at a fixed WCB moisture flux in both observations and simulations. If observed cyclones in the top and bottom terciles of CDNC are contrasted, it is found that they have not only a higher CLWP but also greater cloud cover and albedo. The difference in cyclone albedo between the cyclones in the top and bottom third of CDNC is observed by CERES to be between 0.018 and 0.032, which is consistent with a 4.6–8.3 W m⁻² in-cyclone enhancement in upwelling shortwave when scaled by the annual-mean insolation. Based on a regression model to observed cyclone properties, roughly 60% of the observed variability in CLWP can be explained by CDNC and WCB moisture flux.
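
    The albedo-to-flux scaling quoted at the end is a one-step multiplication by the annual-mean insolation; assuming an annual-mean insolation of roughly 255-260 W m⁻² (an assumption, since the paper's exact value is not quoted here), the stated range follows:

    ```latex
    \Delta F \simeq \Delta\alpha\,\bar{S},\qquad
    0.018 \times 255\ \mathrm{W\,m^{-2}} \approx 4.6\ \mathrm{W\,m^{-2}},\qquad
    0.032 \times 260\ \mathrm{W\,m^{-2}} \approx 8.3\ \mathrm{W\,m^{-2}}
    ```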

  4. Real-time haptic cutting of high-resolution soft tissues.

    Science.gov (United States)

    Wu, Jun; Westermann, Rüdiger; Dick, Christian

    2014-01-01

    We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate a real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which thoroughly reduces the number of simulation degrees of freedom and thus enables to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.
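
    Item (3) in the list above refers to a geometric multigrid solver for the implicit time-integration systems. The authors' hexahedral finite-element implementation is not reproduced here; as a minimal sketch of the underlying idea under stated assumptions, the following toy two-grid cycle (smooth, restrict the residual, solve on the coarse grid, prolong the correction, smooth again) is applied to a 1D Poisson problem. All problem sizes and parameters are illustrative.

    ```python
    import numpy as np

    def apply_A(u, h):
        """Apply the 1D Poisson operator -u'' with zero Dirichlet boundaries."""
        padded = np.concatenate(([0.0], u, [0.0]))
        return (2.0 * padded[1:-1] - padded[:-2] - padded[2:]) / h ** 2

    def jacobi(u, f, h, omega=2.0 / 3.0, sweeps=3):
        """A few weighted-Jacobi smoothing sweeps."""
        for _ in range(sweeps):
            u = u + omega * (h ** 2 / 2.0) * (f - apply_A(u, h))
        return u

    def two_grid_cycle(u, f, h):
        """One two-grid correction cycle: the simplest geometric multigrid."""
        u = jacobi(u, f, h)                                        # pre-smoothing
        r = f - apply_A(u, h)                                      # fine-grid residual
        rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]    # restrict (full weighting)
        nc, H = rc.size, 2.0 * h
        Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H ** 2
        ec = np.linalg.solve(Ac, rc)                               # exact coarse-grid solve
        e = np.zeros_like(u)                                       # prolong the correction
        e[1::2] = ec
        e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
        return jacobi(u + e, f, h)                                 # correct and post-smooth

    # Model problem: -u'' = pi^2 sin(pi x) on (0, 1), whose solution is sin(pi x).
    N = 64
    h = 1.0 / N
    x = np.linspace(h, 1.0 - h, N - 1)
    f = np.pi ** 2 * np.sin(np.pi * x)
    u = np.zeros_like(x)
    for cycle in range(6):
        u = two_grid_cycle(u, f, h)
        print(f"cycle {cycle}: residual norm = {np.linalg.norm(f - apply_A(u, h)):.3e}")
    ```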

  5. Cosmic Microwave Background as a Thermal Gas of SU(2) Photons: Implications for the High-z Cosmological Model and the Value of H0

    Directory of Open Access Journals (Sweden)

    Steffen Hahn

    2017-01-01

    Presently, we are facing a 3σ tension in the most basic cosmological parameter, the Hubble constant H0. This tension arises when fitting the Lambda-cold-dark-matter model (ΛCDM) to the high-precision temperature-temperature (TT) power spectrum of the Cosmic Microwave Background (CMB) and to local cosmological observations. We propose a resolution of this problem by postulating that the thermal photon gas of the CMB obeys an SU(2) rather than a U(1) gauge principle, suggesting a high-z cosmological model which is void of dark matter. Observationally, we rely on precise low-frequency intensity measurements in the CMB spectrum and on a recent model-independent (low-z) extraction of the relation between the comoving sound horizon r_s at the end of the baryon drag epoch and H0 (r_s H0 = const). We point out that the commonly employed condition for baryon-velocity freeze-out is imprecise, judged by a careful inspection of the formal solution to the associated Euler equation. As a consequence, the above-mentioned 3σ tension actually transforms into a 5σ discrepancy. To make contact with successful low-z ΛCDM cosmology we propose an interpolation based on percolated/depercolated vortices of a Planck-scale axion condensate. For a first consistency test of such an all-z model we compute the angular scale of the sound horizon at photon decoupling.

  6. Cosmology

    International Nuclear Information System (INIS)

    Novikov, I.D.

    1979-01-01

    Progress made by this Commission over the period 1976-1978 is reviewed. Topics include the Hubble constant, deceleration parameter, large-scale distribution of matter in the universe, radio astronomy and cosmology, space astronomy and cosmology, formation of galaxies, physics near the cosmological singularity, and unconventional cosmological models. (C.F.)

  7. Effects of high spatial and temporal resolution Earth observations on simulated hydrometeorological variables in a cropland (southwestern France)

    Directory of Open Access Journals (Sweden)

    J. Etchanchu

    2017-11-01

    Agricultural landscapes are often constituted by a patchwork of crop fields whose seasonal evolution depends on specific crop rotation patterns and phenologies. This temporal and spatial heterogeneity affects surface hydrometeorological processes and must be taken into account in simulations of land surface and distributed hydrological models. The Sentinel-2 mission allows for the monitoring of land cover and vegetation dynamics at unprecedented spatial resolutions and revisit frequencies (20 m and 5 days, respectively) that are fully compatible with such heterogeneous agricultural landscapes. Here, we evaluate the impact of Sentinel-2-like remote sensing data on the simulation of surface water and energy fluxes via the Interactions between the Surface Biosphere Atmosphere (ISBA) land surface model included in the EXternalized SURface (SURFEX) modeling platform. The study focuses on the effect of the leaf area index (LAI) spatial and temporal variability on these fluxes. We compare the use of the LAI climatology from ECOCLIMAP-II, used by default in SURFEX-ISBA, and time series of LAI derived from the high-resolution Formosat-2 satellite data (8 m). The study area is an agricultural zone in southwestern France covering 576 km² (24 km × 24 km). An innovative plot-scale approach is used, in which each computational unit has a homogeneous vegetation type. Evaluation of the simulation quality is done by comparing model outputs with in situ eddy covariance measurements of latent heat flux (LE). Our results show that the use of LAI derived from high-resolution remote sensing significantly improves simulated evapotranspiration with respect to ECOCLIMAP-II, especially when the surface is covered with summer crops. The comparison with in situ measurements shows an improvement of roughly 0.3 in the correlation coefficient and a decrease of around 30% in the root mean square error (RMSE) of the simulated evapotranspiration. This

  8. Hydrodynamic Simulation of the Cosmological X-Ray Background

    Science.gov (United States)

    Croft, Rupert A. C.; Di Matteo, Tiziana; Davé, Romeel; Hernquist, Lars; Katz, Neal; Fardal, Mark A.; Weinberg, David H.

    2001-08-01

    We use a hydrodynamic simulation of an inflationary cold dark matter model with a cosmological constant to predict properties of the extragalactic X-ray background (XRB). We focus on emission from the intergalactic medium (IGM), with particular attention to diffuse emission from warm-hot gas that lies in relatively smooth filamentary structures between galaxies and galaxy clusters. We also include X-rays from point sources associated with galaxies in the simulation, and we make maps of the angular distribution of the emission. Although much of the X-ray luminous gas has a filamentary structure, the filaments are not evident in the simulated maps because of projection effects. In the soft (0.5-2 keV) band, our calculated mean intensity of radiation from intergalactic and cluster gas is 2.3 × 10⁻¹² erg s⁻¹ cm⁻² deg⁻², 35% of the total soft-band emission. This intensity is compatible at the ~1σ level with estimates of the unresolved soft background intensity from deep ROSAT and Chandra measurements. Only 4% of the hard (2-10 keV) emission is associated with intergalactic gas. Relative to the active galactic nuclei flux, the IGM component of the XRB peaks at a lower redshift (median z ~ 0.45) and spans a narrower redshift range, so its clustering makes an important contribution to the angular correlation function of the total emission. The clustering on the scales accessible to our simulation (0.1′-10′) is significant, with an amplitude roughly consistent with an extrapolation of recent ROSAT results to small scales. A cross-correlation analysis of the XRB against nearby galaxies taken from a simulated redshift survey also yields a strong signal from the IGM. Our conclusions about the soft background intensity differ from those of some recent papers that have argued that the expected emission from gas in galaxy, group, and cluster halos would exceed the observed background unless much of the gas is expelled by supernova feedback. We obtain reasonable compatibility with

  9. High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey

    Science.gov (United States)

    Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.

    2018-06-01

    We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S850 ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ860 = 1.5 mJy beam⁻¹ in our primary beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S860 ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M⊙ yr⁻¹ sources to be ∼3^{+0.7}_{-0.6} × 10⁻⁷ Mpc⁻³. We show that the descendants of these galaxies could be ≳4 × 10¹¹ M⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
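
    A note on reproducibility: the cumulative and differential number counts described above can be computed from a source flux list in a few lines of code. The sketch below is illustrative only; the flux values, survey area, and bin edges are invented placeholders rather than the S2CLS/SMA measurements.

```python
import numpy as np

# Illustrative number counts from a list of 860-um flux densities. The fluxes,
# survey area, and bins are hypothetical placeholders, not the S2CLS/SMA sample.
fluxes_mJy = np.array([6.2, 7.1, 8.4, 9.0, 10.5, 11.2, 12.8, 14.9])
survey_area_deg2 = 5.0

bin_edges = np.arange(6.0, 16.1, 2.0)
bin_widths = np.diff(bin_edges)
bin_centres = 0.5 * (bin_edges[:-1] + bin_edges[1:])

# Differential counts: sources per square degree per mJy in each flux bin
hist, _ = np.histogram(fluxes_mJy, bins=bin_edges)
dN_dS = hist / (survey_area_deg2 * bin_widths)

# Cumulative counts: sources brighter than S, per square degree
N_gt_S = np.array([(fluxes_mJy >= s).sum() for s in bin_edges[:-1]]) / survey_area_deg2

for s, d, c in zip(bin_centres, dN_dS, N_gt_S):
    print(f"S = {s:4.1f} mJy   dN/dS = {d:5.2f} deg^-2 mJy^-1   N(>S) = {c:5.2f} deg^-2")
```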

  10. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    Science.gov (United States)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

    Regional climate models (RCMs) are used at increasing resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacings approaching the convection-permitting regime are representative examples. The climate runs are computationally very demanding and do not always show improvements; these depend on the region, variable, and object of study. The gains or losses associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different-resolution RCM simulations, are known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer-resolution simulations in comparison to their forcing data or coarser-resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between the different-resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region, or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons for precipitation. When both RCM resolutions are directly compared the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher-resolution improvement is generally greater than that of the lower resolution for seasons
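
    The abstract does not spell out the DAV formula, so the sketch below assumes one common reading: a Perkins-style PDF matching score S = Σ min(f_model, f_obs) over shared bins, with DAV taken as the normalised difference between the high- and low-resolution scores. The input series are random placeholders, not EURO-CORDEX or observational data.

```python
import numpy as np

# Hedged sketch of a distribution added value (DAV) metric. The exact formulation in
# the paper is not reproduced here; this assumes a PDF matching skill score
# S = sum_i min(f_model_i, f_obs_i) over common bins, with DAV = (S_hr - S_lr) / S_lr.

def pdf_skill_score(model, obs, bins):
    """Overlap (0..1) between the normalised model and observed histograms."""
    f_mod, _ = np.histogram(model, bins=bins)
    f_obs, _ = np.histogram(obs, bins=bins)
    f_mod = f_mod / f_mod.sum()
    f_obs = f_obs / f_obs.sum()
    return np.minimum(f_mod, f_obs).sum()

def distribution_added_value(hi_res, lo_res, obs, bins):
    """Positive values: the high-resolution PDF matches observations better."""
    s_hi = pdf_skill_score(hi_res, obs, bins)
    s_lo = pdf_skill_score(lo_res, obs, bins)
    return (s_hi - s_lo) / s_lo

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 4.0, 10000)   # stand-in daily precipitation, mm/day
lo = rng.gamma(2.0, 5.0, 10000)    # stand-in coarse-resolution model output
hi = rng.gamma(2.0, 4.3, 10000)    # stand-in high-resolution model output
bins = np.linspace(0, 60, 61)
print(f"DAV = {distribution_added_value(hi, lo, obs, bins):+.2f}")
```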

  11. Neutrino cosmology

    International Nuclear Information System (INIS)

    Berstein, J.

    1984-01-01

    These lectures offer a self-contained review of the role of neutrinos in cosmology. The first part deals with the question 'What is a neutrino?' and describes in a historical context the theoretical ideas and experimental discoveries related to the different types of neutrinos and their properties. The basic differences between the Dirac neutrino and the Majorana neutrino are pointed out and the evidence for different neutrino 'flavours', neutrino mass, and neutrino oscillations is discussed. The second part summarizes current views on cosmology, particularly as they are affected by recent theoretical and experimental advances in high-energy particle physics. Finally, the close relationship between neutrino physics and cosmology is brought out in more detail, to show how cosmological constraints can limit the various theoretical possibilities for neutrinos and, more particularly, how increasing knowledge of neutrino properties can contribute to our understanding of the origin, history, and future of the Universe. The level is that of the beginning graduate student. (orig.)

  12. High-resolution RCMs as pioneers for future GCMs

    Science.gov (United States)

    Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.

    2017-12-01

    Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data

  13. Dark-ages Reionization and Galaxy Formation Simulation - XIV. Gas accretion, cooling, and star formation in dwarf galaxies at high redshift

    Science.gov (United States)

    Qin, Yuxiang; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Mesinger, Andrei; Wyithe, J. Stuart B.

    2018-06-01

    We study dwarf galaxy formation at high redshift (z ≥ 5) using a suite of high-resolution, cosmological hydrodynamic simulations and a semi-analytic model (SAM). We focus on gas accretion, cooling, and star formation in this work by isolating the relevant process from reionization and supernova feedback, which will be further discussed in a companion paper. We apply the SAM to halo merger trees constructed from a collisionless N-body simulation sharing identical initial conditions to the hydrodynamic suite, and calibrate the free parameters against the stellar mass function predicted by the hydrodynamic simulations at z = 5. By making comparisons of the star formation history and gas components calculated by the two modelling techniques, we find that semi-analytic prescriptions that are commonly adopted in the literature of low-redshift galaxy formation do not accurately represent dwarf galaxy properties in the hydrodynamic simulation at earlier times. We propose three modifications to SAMs that will provide more accurate high-redshift simulations. These include (1) the halo mass and baryon fraction which are overestimated by collisionless N-body simulations; (2) the star formation efficiency which follows a different cosmic evolutionary path from the hydrodynamic simulation; and (3) the cooling rate which is not well defined for dwarf galaxies at high redshift. Accurate semi-analytic modelling of dwarf galaxy formation informed by detailed hydrodynamical modelling will facilitate reliable semi-analytic predictions over the large volumes needed for the study of reionization.

  14. Dark-ages Reionization and Galaxy Formation Simulation - XIV. Gas accretion, cooling and star formation in dwarf galaxies at high redshift

    Science.gov (United States)

    Qin, Yuxiang; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Mesinger, Andrei; Wyithe, J. Stuart B.

    2018-03-01

    We study dwarf galaxy formation at high redshift (z ≥ 5) using a suite of high-resolution, cosmological hydrodynamic simulations and a semi-analytic model (SAM). We focus on gas accretion, cooling and star formation in this work by isolating the relevant process from reionization and supernova feedback, which will be further discussed in a companion paper. We apply the SAM to halo merger trees constructed from a collisionless N-body simulation sharing identical initial conditions to the hydrodynamic suite, and calibrate the free parameters against the stellar mass function predicted by the hydrodynamic simulations at z = 5. By making comparisons of the star formation history and gas components calculated by the two modelling techniques, we find that semi-analytic prescriptions that are commonly adopted in the literature of low-redshift galaxy formation do not accurately represent dwarf galaxy properties in the hydrodynamic simulation at earlier times. We propose 3 modifications to SAMs that will provide more accurate high-redshift simulations. These include 1) the halo mass and baryon fraction which are overestimated by collisionless N-body simulations; 2) the star formation efficiency which follows a different cosmic evolutionary path from the hydrodynamic simulation; and 3) the cooling rate which is not well defined for dwarf galaxies at high redshift. Accurate semi-analytic modelling of dwarf galaxy formation informed by detailed hydrodynamical modelling will facilitate reliable semi-analytic predictions over the large volumes needed for the study of reionization.

  15. High-resolution nested model simulations of the climatological circulation in the southeastern Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    S. Brenner

    2003-01-01

    Full Text Available As part of the Mediterranean Forecasting System Pilot Project (MFSPP) we have implemented a high-resolution (2 km horizontal grid, 30 sigma levels) version of the Princeton Ocean Model for the southeastern corner of the Mediterranean Sea. The domain extends 200 km offshore and includes the continental shelf and slope, and part of the open sea. The model is nested in an intermediate-resolution (5.5 km grid) model that covers the entire Levantine, Ionian, and Aegean Sea. The nesting is one way, so that velocity, temperature, and salinity along the boundaries are interpolated from the relevant intermediate model variables. An integral constraint is applied so that the net mass flux across the open boundaries is identical to the net flux in the intermediate model. The model is integrated for three perpetual years with surface forcing specified from monthly mean climatological wind stress and heat fluxes. The model is stable and spins up within the first year to produce a repeating seasonal cycle throughout the three-year integration period. While there is some internal variability evident in the results, it is clear that, due to the relatively small domain, the results are strongly influenced by the imposed lateral boundary conditions. The results closely follow the simulation of the intermediate model. The main improvement is in the simulation over the narrow shelf region, which is not adequately resolved by the coarser grid model. Comparisons with direct current measurements over the shelf and slope show reasonable agreement despite the limitations of the climatological forcing. The model correctly simulates the direction and the typical speeds of the flow over the shelf and slope, but has difficulty properly reproducing the seasonal cycle in the speed. Key words: Oceanography: general (continental shelf processes; numerical modelling; ocean prediction)
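
    The integral constraint mentioned above can be illustrated generically: after interpolating the boundary-normal velocities from the intermediate model, a uniform correction is added so that the net volume flux through the open boundaries of the nest matches the target value. The sketch below is a schematic illustration of that idea, not the actual Princeton Ocean Model implementation; all array values are invented.

```python
import numpy as np

# Generic sketch of an integral flux constraint at nested-model open boundaries.
# Shapes, values, and units are placeholders, not the MFSPP configuration.

def constrain_net_flux(v_normal, face_area, target_flux):
    """v_normal: interpolated normal velocity at each open-boundary face (m/s).
    face_area: area of each face (m^2). Returns corrected velocities whose
    net volume flux equals target_flux (m^3/s)."""
    net = np.sum(v_normal * face_area)
    correction = (target_flux - net) / np.sum(face_area)
    return v_normal + correction

v = np.array([0.05, 0.02, -0.01, 0.03])         # hypothetical interpolated velocities
area = np.array([2.0e5, 2.0e5, 1.5e5, 1.5e5])   # hypothetical face areas
v_adj = constrain_net_flux(v, area, target_flux=1.2e4)
print(np.sum(v_adj * area))                     # equals the target net flux
```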

  16. The Atacama Cosmology Telescope: cosmological parameters from three seasons of data

    International Nuclear Information System (INIS)

    Sievers, Jonathan L.; Appel, John William; Hlozek, Renée A.; Nolta, Michael R.; Battaglia, Nick; Bond, J. Richard; Acquaviva, Viviana; Addison, Graeme E.; Amiri, Mandana; Battistelli, Elia S.; Burger, Bryce; Ade, Peter A. R.; Aguirre, Paula; Barrientos, L. Felipe; Brown, Ben; Calabrese, Erminia; Chervenak, Jay; Crichton, Devin; Das, Sudeep; Devlin, Mark J.

    2013-01-01

    We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ²C_ℓ/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4 ± 1.4 μK² at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK². Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be N_eff = 2.79 ± 0.56, in agreement with the canonical value of N_eff = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σm_ν < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Y_p = 0.225 ± 0.034, and measure no variation in the fine structure constant α since recombination, with α/α₀ = 1.004 ± 0.005. We also find no evidence for any running of the scalar spectral index, dn_s/d ln k = −0.004 ± 0.012.

  17. Feasibility of High-Resolution Soil Erosion Measurements by Means of Rainfall Simulations and SfM Photogrammetry

    Directory of Open Access Journals (Sweden)

    Phoebe Hänsel

    2016-11-01

    Full Text Available The silty soils of the intensively used agricultural landscape of the Saxon loess province, eastern Germany, are very prone to soil erosion, mainly caused by water erosion. Rainfall simulations, and also increasingly structure-from-motion (SfM) photogrammetry, are used as methods in soil erosion research not only to assess soil erosion by water, but also to quantify soil loss. This study aims to validate SfM photogrammetry-determined soil loss estimations with rainfall simulation measurements. Rainfall simulations were performed at three agricultural sites in central Saxony. Besides the measured runoff and soil loss by sampling (in mm), terrestrial images were taken from the plots with digital cameras before and after the rainfall simulation. Subsequently, SfM photogrammetry was used to reconstruct soil surface changes due to soil erosion in terms of high-resolution digital elevation models (DEMs) for the pre- and post-event (resolution 1 × 1 mm). By multi-temporal change detection, the digital elevation model of difference (DoD) and an averaged soil loss (in mm) are obtained, which was compared to the soil loss by sampling. Soil loss by DoD was higher than soil loss by sampling. The method of SfM photogrammetry-determined soil loss estimation also includes a comparison of three different ground control point (GCP) approaches, revealing that the most complex one delivers the most reliable soil loss by DoD. Additionally, soil bulk density changes and splash erosion beyond the plot were measured during the rainfall simulation experiments in order to separate these processes and associated surface changes from the soil loss by DoD. Furthermore, splash was negligibly small, whereas higher soil densities after the rainfall simulations indicated soil compaction. By means of the calculated soil surface changes due to soil compaction, the soil loss by DoD achieved approximately the same value as the soil loss by rainfall simulation.
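
    The change-detection step lends itself to a compact illustration: the DEM of difference (DoD) is simply the post-event minus the pre-event elevation model, and the averaged soil loss is its mean over the plot. The grids below are tiny random stand-ins for the reconstructed SfM surfaces.

```python
import numpy as np

# Minimal sketch of multi-temporal change detection: DoD = post-event DEM minus
# pre-event DEM, with the averaged soil loss as its (negated) mean. The 1 x 1 mm
# grids are random placeholders, not real SfM reconstructions.

rng = np.random.default_rng(1)
dem_pre = rng.normal(100.0, 0.5, (500, 500))                      # elevations in mm
dem_post = dem_pre - np.abs(rng.normal(0.8, 0.3, dem_pre.shape))  # eroded surface

dod = dem_post - dem_pre           # DEM of difference, mm per 1x1 mm cell
mean_soil_loss_mm = -dod.mean()    # positive value = average surface lowering
print(f"averaged soil loss: {mean_soil_loss_mm:.2f} mm")
```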

  18. Scalar-tensor cosmology with cosmological constant

    International Nuclear Information System (INIS)

    Maslanka, K.

    1983-01-01

    The equations of the scalar-tensor theory of gravitation with a cosmological constant, in the case of a homogeneous and isotropic cosmological model, can be reduced to a dynamical system of three differential equations with unknown functions H = Ṙ/R, Θ = φ̇/φ, S = e/φ. When new variables are introduced the system becomes more symmetrical and the cosmological solutions R(t), φ(t), e(t) are found. It is shown that when the cosmological constant is introduced a large class of solutions which depend also on the Dicke-Brans parameter can be obtained. Investigation of these solutions gives general limits for the cosmological constant and the mean density of matter in the flat model. (author)

  19. Cosmological Structure Formation: From Dawn till Dusk

    DEFF Research Database (Denmark)

    Heneka, Caroline Samantha

    Cosmology has entered an era where a plethora of data is available on structure formation to constrain astrophysics and underlying cosmology. This thesis strives both to investigate new observables and modeling of the Epoch of Reionization, and to constrain dark energy phenomenology with massive galaxy clusters, traveling from the dawn of structure formation, when the first galaxies appear, to its dusk, when a representative part of the mass in the Universe is settled in massive structures. This hunt for accurate constraints on cosmology is complemented with the demonstration of novel Bayesian statistical tools and kinematical constraints on dark energy. Starting at the dawn of structure formation, we study emission line fluctuations, employing semi-numerical simulations of cosmological volumes of their line emission, in order to cross-correlate fluctuations in brightness. This cross

  20. Towards a resolution of the cosmological singularity in non-local higher derivative theories of gravity

    International Nuclear Information System (INIS)

    Biswas, Tirthabir; Koivisto, Tomi; Mazumdar, Anupam

    2010-01-01

    One of the greatest problems of standard cosmology is the Big Bang singularity. It has previously been shown that non-local, ghost-free, higher-derivative modifications of Einstein gravity in the ultraviolet regime can admit non-singular bouncing solutions. In this paper we study in more detail the dynamical properties of the equations of motion for these theories of gravity in the presence of positive and negative cosmological constants and radiation. We find stable inflationary attractor solutions in the presence of a positive cosmological constant, which renders inflation geodesically complete, while in the presence of a negative cosmological constant a cyclic universe emerges. We also provide an algorithm for tracking the super-Hubble perturbations during the bounce and show that the bouncing solutions are free from any perturbative instability.

  1. Cosmology in one dimension: Vlasov dynamics.

    Science.gov (United States)

    Manfredi, Giovanni; Rouet, Jean-Louis; Miller, Bruce; Shiozawa, Yui

    2016-04-01

    Numerical simulations of self-gravitating systems are generally based on N-body codes, which solve the equations of motion of a large number of interacting particles. This approach suffers from poor statistical sampling in regions of low density. In contrast, Vlasov codes, by meshing the entire phase space, can reach higher accuracy irrespective of the density. Here, we perform one-dimensional Vlasov simulations of a long-standing cosmological problem, namely, the fractal properties of an expanding Einstein-de Sitter universe in Newtonian gravity. The N-body results are confirmed for high-density regions and extended to regions of low matter density, where the N-body approach usually fails.
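
    To make the contrast with N-body codes concrete, the sketch below advances a distribution function on a fixed (x, v) phase-space grid with a split-step semi-Lagrangian scheme, which is the generic idea behind Vlasov codes. It is a schematic one-dimensional illustration with toy parameters and schematic units and signs, not the scheme or the cosmological (expanding) setup used by the authors.

```python
import numpy as np

# Schematic 1D Vlasov-Poisson stepper: the phase-space density f(x, v) lives on a
# grid and is advected in x and v alternately (Strang splitting). Grid sizes,
# initial condition, and unit conventions are illustrative assumptions only.

nx, nv = 128, 129
L, vmax, dt = 1.0, 2.0, 1e-3
x = np.linspace(0.0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dx, dv = x[1] - x[0], v[1] - v[0]

# Initial distribution: narrow velocity spread with a small density perturbation
f = np.exp(-0.5 * (v[None, :] / 0.1) ** 2) * (1.0 + 0.01 * np.cos(2 * np.pi * x / L))[:, None]
f /= f.sum() * dx * dv  # normalise total mass to 1

def poisson_accel(f):
    """Acceleration from the density contrast via an FFT Poisson solve (schematic signs/units)."""
    rho = f.sum(axis=1) * dv
    delta = rho - rho.mean()
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    phi_k = np.zeros(nx, dtype=complex)
    nonzero = k != 0
    phi_k[nonzero] = -np.fft.fft(delta)[nonzero] / k[nonzero] ** 2
    return -np.real(np.fft.ifft(1j * k * phi_k))   # a = -dphi/dx

def advect_x(f, dt):
    """Semi-Lagrangian shift of f along x by v*dt (periodic)."""
    out = np.empty_like(f)
    for j in range(nv):
        out[:, j] = np.interp(x - v[j] * dt, x, f[:, j], period=L)
    return out

def advect_v(f, a, dt):
    """Semi-Lagrangian shift of f along v by a*dt (zero inflow at the v boundaries)."""
    out = np.empty_like(f)
    for i in range(nx):
        out[i, :] = np.interp(v - a[i] * dt, v, f[i, :], left=0.0, right=0.0)
    return out

for step in range(100):                 # a few split steps
    f = advect_x(f, 0.5 * dt)
    f = advect_v(f, poisson_accel(f), dt)
    f = advect_x(f, 0.5 * dt)
```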

  2. Theoretical cosmology

    International Nuclear Information System (INIS)

    Raychaudhuri, A.K.

    1979-01-01

    The subject is covered in chapters entitled: introduction; Newtonian gravitation and cosmology; general relativity and relativistic cosmology; analysis of observational data; relativistic models not obeying the cosmological principle; microwave radiation background; thermal history of the universe and nucleosynthesis; singularity of cosmological models; gravitational constant as a field variable; cosmological models based on Einstein-Cartan theory; cosmological singularity in two recent theories; fate of perturbations of isotropic universes; formation of galaxies; baryon symmetric cosmology; and assorted topics (including extragalactic radio sources and the Mach principle). (U.K.)

  3. Observable cosmology and cosmological models

    International Nuclear Information System (INIS)

    Kardashev, N.S.; Lukash, V.N.; Novikov, I.D.

    1987-01-01

    The modern state of observational cosmology is briefly discussed. Among other things, a problem related to determining the Hubble constant and the deceleration parameter is considered. Within the 'pancake' theory, the hot (neutrino) cosmological model explains the large-scale structure of the Universe well, but does not explain galaxy formation. A cold cosmological model explains the formation of light objects well, but contradicts data on the large-scale structure

  4. Inhomogenous loop quantum cosmology with matter

    International Nuclear Information System (INIS)

    Martín-de Blas, D; Mena Marugán, G A; Martín-Benito, M

    2012-01-01

    The linearly polarized Gowdy T³ model with a massless scalar field sharing the same symmetries as the metric is quantized by applying a hybrid approach. The homogeneous geometry degrees of freedom are loop quantized, a fact which leads to the resolution of the cosmological singularity, while a Fock quantization is employed for both matter and gravitational inhomogeneities. Owing to the inclusion of the massless scalar field, this system allows us to model flat Friedmann-Robertson-Walker cosmologies filled with inhomogeneities propagating in one direction. It provides a perfect scenario to study the quantum back-reaction between the inhomogeneities and the polymeric homogeneous and isotropic background.

  5. High-resolution flood modeling of urban areas using MSN_Flood

    Directory of Open Access Journals (Sweden)

    Michael Hartnett

    2017-07-01

    Full Text Available Although existing hydraulic models have been used to simulate and predict urban flooding, most of these models are inadequate due to the high spatial resolution required to simulate flows in urban floodplains. Nesting high-resolution subdomains within coarser-resolution models is an efficient solution for enabling simultaneous calculation of flooding due to tides, surges, and high river flows. MSN_Flood has been developed to incorporate moving boundaries around nested domains, permitting alternate flooding and drying along the boundary and in the interior of the domain. Ghost cells adjacent to open boundary cells convert open boundaries, in effect, into internal boundaries. The moving boundary may be multi-segmented and non-continuous, with recirculating flow across the boundary. When combined with a bespoke adaptive interpolation scheme, this approach facilitates a dynamic internal boundary. Based on an alternating-direction semi-implicit finite difference scheme, MSN_Flood was used to hindcast a major flood event in Cork City resulting from the combined pressures of fluvial, tidal, and storm surge processes. The results show that the model is computationally efficient, as the 2-m high-resolution nest is used only in the urban flooded region. Elsewhere, lower-resolution nests are used. The results also show that the model is highly accurate when compared with measured data. The model is capable of incorporating nested sub-domains when the nested boundary is multi-segmented and highly complex with lateral gradients of elevation and velocities. This is a major benefit when modelling urban floodplains at very high resolution.

  6. On the evolution of galaxy clustering and cosmological N-body simulations

    International Nuclear Information System (INIS)

    Fall, S.M.

    1978-01-01

    Some aspects of the problem of simulating the evolution of galaxy clustering by N-body computer experiments are discussed. The results of four 1000-body experiments are presented and interpreted on the basis of simple scaling arguments for the gravitational condensation of bound aggregates. They indicate that the internal dynamics of condensed aggregates are negligible in determining the form of the pair-correlation function xi. On small scales the form of xi is determined by discreteness effects in the initial N-body distribution and is not sensitive to this distribution. The experiments discussed here test the simple scaling arguments effectively for only one value of the cosmological density parameter (Ω = 1) and one form of the initial fluctuation spectrum (n = 0). (author)

  7. Validation of high-resolution aerosol optical thickness simulated by a global non-hydrostatic model against remote sensing measurements

    Science.gov (United States)

    Goto, Daisuke; Sato, Yousuke; Yashiro, Hisashi; Suzuki, Kentaroh; Nakajima, Teruyuki

    2017-02-01

    A high-performance computing resource allows us to conduct numerical simulations with a horizontal grid spacing that is sufficiently fine to resolve cloud systems. The cutting-edge computational capability, which was provided by the K computer at RIKEN in Japan, enabled the authors to perform long-term, global simulations of air pollution and clouds with unprecedentedly high horizontal resolutions. In this study, a next-generation model capable of simulating global air pollution at O(10 km) grid spacing was developed by coupling an atmospheric chemistry model to the Non-hydrostatic Icosahedral Atmospheric Model (NICAM). Using the newly developed model, month-long simulations for July were conducted with 14 km grid spacing on the K computer. Regarding the global distributions of aerosol optical thickness (AOT), it was found that the correlation coefficient (CC) between the simulation and AERONET measurements was approximately 0.7, and the normalized mean bias was -10%. The simulated AOT was also compared with satellite-retrieved values; the CC was approximately 0.6. The radiative effects due to each chemical species (dust, sea salt, organics, and sulfate) were also calculated and compared with multiple measurements. As a result, the simulated fluxes of upward shortwave radiation at the top of the atmosphere and the surface compared well with the observed values, whereas those of downward shortwave radiation at the surface were underestimated, even if all aerosol components were considered. However, the aerosol radiative effects on the downward shortwave flux at the surface were found to be as high as 10 W/m² on a global scale; thus, simulated aerosol distributions can strongly affect the simulated air temperature and dynamic circulation.
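
    The two validation statistics quoted above are straightforward to compute from collocated model and observation pairs; the sketch below shows one conventional definition of each, applied to placeholder AOT values rather than NICAM or AERONET data.

```python
import numpy as np

# Sketch of the correlation coefficient (CC) and normalized mean bias (NMB) for
# collocated simulated and observed AOT. The arrays are invented placeholders.

def correlation_coefficient(sim, obs):
    return np.corrcoef(sim, obs)[0, 1]

def normalized_mean_bias(sim, obs):
    """NMB = sum(sim - obs) / sum(obs), often quoted as a percentage."""
    return np.sum(sim - obs) / np.sum(obs)

obs = np.array([0.12, 0.30, 0.25, 0.40, 0.18])   # hypothetical AERONET AOT
sim = np.array([0.10, 0.28, 0.20, 0.35, 0.17])   # hypothetical model AOT
print(f"CC  = {correlation_coefficient(sim, obs):.2f}")
print(f"NMB = {100 * normalized_mean_bias(sim, obs):.1f} %")
```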

  8. A New High Resolution Climate Dataset for Climate Change Impacts Assessments in New England

    Science.gov (United States)

    Komurcu, M.; Huber, M.

    2016-12-01

    Assessing regional impacts of climate change (such as changes in extreme events, land surface hydrology, water resources, energy, ecosystems and economy) requires much higher resolution climate variables than those available from global model projections. While it is possible to run global models at higher resolution, the high computational cost associated with these simulations prevents their use in such a manner. To alleviate this problem, dynamical downscaling offers a method to deliver higher-resolution climate variables. As part of an NSF EPSCoR funded interdisciplinary effort to assess climate change impacts on New Hampshire ecosystems, hydrology and economy (the New Hampshire Ecosystems and Society project), we create a unique high-resolution climate dataset for New England. We dynamically downscale global model projections under a high-impact emissions scenario using the Weather Research and Forecasting model (WRF) with three nested grids of 27, 9 and 3 km horizontal resolution, with the highest-resolution innermost grid focusing over New England. We prefer dynamical downscaling over other methods such as statistical downscaling because it employs physical equations to progressively simulate climate variables as atmospheric processes interact with surface processes, emissions, radiation, clouds, precipitation and other model components, hence eliminating fixed relationships between variables. In addition to simulating mean changes in regional climate, dynamical downscaling also allows for the simulation of climate extremes that significantly alter climate change impacts. We simulate three time slices: 2006-2015, 2040-2060 and 2080-2100. This new high-resolution climate dataset (with more than 200 variables saved at hourly intervals for the highest-resolution domain and six-hourly intervals for the outer two domains), along with model input and restart files used in our WRF simulations, will be publicly available for use by the broader scientific community to support in-depth climate

  9. Astroparticle physics and cosmology

    International Nuclear Information System (INIS)

    Senjanovic, G.; Smirnov, A.Yu.; Thompson, G.

    2001-01-01

    In this volume a wide spectrum of topics of modern astroparticle physics, such as neutrino astrophysics, dark matter of the universe, high energy cosmic rays, topological defects in cosmology, γ-ray bursts, phase transitions at high temperatures, is covered. The articles written by top level experts in the field give a comprehensive view of the state-of-the-art of modern cosmology

  10. Astroparticle physics and cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Senjanovic, G; Smirnov, A Yu; Thompson, G [eds.]

    2001-11-15

    In this volume a wide spectrum of topics of modern astroparticle physics, such as neutrino astrophysics, dark matter of the universe, high energy cosmic rays, topological defects in cosmology, γ-ray bursts, phase transitions at high temperatures, is covered. The articles written by top level experts in the field give a comprehensive view of the state-of-the-art of modern cosmology.

  11. Variability of wet troposphere delays over inland reservoirs as simulated by a high-resolution regional climate model

    Science.gov (United States)

    Clark, E.; Lettenmaier, D. P.

    2014-12-01

    Satellite radar altimetry is widely used for measuring global sea level variations and, increasingly, water height variations of inland water bodies. Existing satellite radar altimeters measure water surfaces directly below the spacecraft (approximately at nadir). Over the ocean, most of these satellites use radiometry to measure the delay of radar signals caused by water vapor in the atmosphere (also known as the wet troposphere delay (WTD)). However, radiometry can only be used to estimate this delay over the largest inland water bodies, such as the Great Lakes, due to spatial resolution issues. As a result, atmospheric models are typically used to simulate and correct for the WTD at the time of observations. The resolutions of these models are quite coarse, at best about 5000 km² at 30°N. The upcoming NASA- and CNES-led Surface Water and Ocean Topography (SWOT) mission, on the other hand, will use interferometric synthetic aperture radar (InSAR) techniques to measure a 120-km-wide swath of the Earth's surface. SWOT is expected to make useful measurements of water surface elevation and extent (and storage change) for inland water bodies at spatial scales as small as 250 m, which is much smaller than current altimetry targets and several orders of magnitude smaller than the models used for wet troposphere corrections. Here, we calculate WTD from very high-resolution (4/3-km to 4-km) simulations of the Weather Research and Forecasting (WRF) regional climate model, and use the results to evaluate spatial variations in WTD. We focus on six U.S. reservoirs: Lake Elwell (MT), Lake Pend Oreille (ID), Upper Klamath Lake (OR), Elephant Butte (NM), Ray Hubbard (TX), and Sam Rayburn (TX). The reservoirs vary in climate, shape, use, and size. Because evaporation from open water impacts local water vapor content, we compare time series of WTD over land and water in the vicinity of each reservoir. To account for resolution effects, we examine the difference in WRF-simulated

  12. Current cosmology

    International Nuclear Information System (INIS)

    Zeldovich, Ya.

    1984-01-01

    The knowledge of contemporary cosmology about the universe and its development, resulting from a great number of highly sensitive observations and the application of contemporary physical theories to the entire universe, is summarized. The questions of the mass density in the universe, the structure and origin of the universe, its baryon asymmetry, and the quantum explanation of the origin of the universe are assessed. Physical problems are presented which should be resolved for the future development of cosmology. (Ha)

  13. Cosmology as relativistic particle mechanics: from big crunch to big bang

    Energy Technology Data Exchange (ETDEWEB)

    Russo, J G [Institució Catalana de Recerca i Estudis Avançats, Departament ECM, Facultat de Física, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Townsend, P K [Institució Catalana de Recerca i Estudis Avançats, Departament ECM, Facultat de Física, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)

    2005-02-21

    Cosmology can be viewed as geodesic motion in an appropriate metric on an 'augmented' target space; here we obtain these geodesics from an effective relativistic particle action. As an application, we find some exact (flat and curved) cosmologies for models with N scalar fields taking values in a hyperbolic target space for which the augmented target space is a Milne universe. The singularities of these cosmologies correspond to points at which the particle trajectory crosses the Milne horizon, suggesting a novel resolution of them, which we explore via the Wheeler-DeWitt equation.

  14. Supersymmetric null-like holographic cosmologies

    International Nuclear Information System (INIS)

    Lin Fengli; Wen Wenyu

    2006-01-01

    We construct a new class of 1/4-BPS time-dependent domain-wall solutions with a null-like metric and dilaton in type II supergravities, which admit a null-like big bang singularity. Based on the domain-wall/QFT correspondence, these solutions are dual to 1/4-supersymmetric quantum field theories living on a boundary cosmological background with a time-dependent coupling constant and UV cutoff. In particular we evaluate the holographic c-function for the 2-dimensional dual field theory living on the corresponding null-like cosmology. We find that this c-function runs in accordance with the c-theorem as the boundary universe evolves; this means that the number of degrees of freedom is divergent at the big bang and suggests a possible resolution of the big bang singularity

  15. Initial conditions for cosmological N-body simulations of the scalar sector of theories of Newtonian, Relativistic and Modified Gravity

    International Nuclear Information System (INIS)

    Valkenburg, Wessel; Hu, Bin

    2015-01-01

    We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology
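
    For orientation, the sketch below generates standard Zel'dovich displacements on a periodic grid, x = q + D ψ(q) with ψ = -∇φ and ∇²φ = δ; the paper's point is that replacing the single growth factor D with a scale-dependent growth makes the particle paths curve. The density field and all parameters here are random, toy values, not the Effective Field Theory setup used by the authors.

```python
import numpy as np

# Minimal sketch of standard (scale-independent) Zel'dovich initial conditions on a
# periodic grid. The linear density field, box size, and growth factor are toy values.

n, boxsize, D = 64, 100.0, 0.02            # grid size, Mpc/h, growth factor (illustrative)
spacing = boxsize / n
rng = np.random.default_rng(2)
delta = rng.normal(0.0, 1.0, (n, n, n))    # stand-in linear density contrast

k1d = 2 * np.pi * np.fft.fftfreq(n, d=spacing)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                          # avoid division by zero for the mean mode

delta_k = np.fft.fftn(delta)
phi_k = -delta_k / k2                      # solve laplacian(phi) = delta spectrally
phi_k[0, 0, 0] = 0.0

# Displacement field psi = -grad(phi), computed in Fourier space
psi = [np.real(np.fft.ifftn(-1j * k_i * phi_k)) for k_i in (kx, ky, kz)]

# Displace particles from a regular Lagrangian lattice q
q = np.indices((n, n, n)) * spacing
x = [(q_i + D * psi_i) % boxsize for q_i, psi_i in zip(q, psi)]
print("rms displacement:", D * np.sqrt(np.mean(sum(p**2 for p in psi))))
```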

  16. A New Signal Model for Axion Cavity Searches from N-body Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lentz, Erik W.; Rosenberg, Leslie J. [Physics Department, University of Washington, Seattle, WA 98195-1580 (United States); Quinn, Thomas R.; Tremmel, Michael J., E-mail: lentze@phys.washington.edu, E-mail: ljrosenberg@phys.washington.edu, E-mail: trq@astro.washington.edu, E-mail: mjt29@astro.washington.edu [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2017-08-20

    Signal estimates for direct axion dark matter (DM) searches have used the isothermal sphere halo model for the last several decades. While insightful, the isothermal model does not capture effects from a halo’s infall history nor the influence of baryonic matter, which has been shown to significantly influence a halo’s inner structure. The high resolution of cavity axion detectors can make use of modern cosmological structure-formation simulations, which begin from realistic initial conditions, incorporate a wide range of baryonic physics, and are capable of resolving detailed structure. This work uses a state-of-the-art cosmological N-body + Smoothed-Particle Hydrodynamics simulation to develop an improved signal model for axion cavity searches. Signal shapes from a class of galaxies encompassing the Milky Way are found to depart significantly from the isothermal sphere. A new signal model for axion detectors is proposed and projected sensitivity bounds on the Axion DM eXperiment (ADMX) data are presented.

  17. High-resolution simulation of link-level vehicle emissions and concentrations for air pollutants in a traffic-populated eastern Asian city

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-08-01

    Full Text Available Vehicle emissions containing air pollutants create substantial environmental impacts on air quality for many traffic-populated cities in eastern Asia. A high-resolution emission inventory is a useful tool compared with traditional tools (e.g. the registration-data-based approach) to accurately evaluate real-world traffic dynamics and their environmental burden. In this study, Macau, one of the most populated cities in the world, is selected to demonstrate a high-resolution simulation of vehicular emissions and their contribution to air pollutant concentrations by coupling multiple models. First, traffic volumes by vehicle category on 47 typical roads were investigated during weekdays in 2010 and further applied in a networking demand simulation with the TransCAD model to establish hourly profiles of link-level vehicle counts. Local vehicle driving speed and vehicle age distribution data were also collected in Macau. Second, based on a localized vehicle emission model (e.g. the emission factor model for the Beijing vehicle fleet – Macau, EMBEV–Macau), this study established a link-based vehicle emission inventory in Macau with high resolution meshed in a temporal and spatial framework. Furthermore, we employed the AERMOD (AMS/EPA Regulatory Model) model to map concentrations of CO and primary PM2.5 contributed by local vehicle emissions during weekdays in November 2010. This study has discerned the strong impact of traffic flow dynamics on the temporal and spatial patterns of vehicle emissions, such as a geographic discrepancy of spatial allocation up to 26 % between THC and PM2.5 emissions owing to spatially heterogeneous vehicle-use intensity between motorcycles and diesel fleets. We also identified that the estimated CO2 emissions from gasoline vehicles agreed well with the statistical fuel consumption in Macau. Therefore, this paper provides a case study and a solid framework for developing high-resolution environment assessment tools for other

  18. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    Science.gov (United States)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  19. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL]; McManamay, Ryan A [ORNL]; Nagle, Nicholas N [ORNL]; Piburn, Jesse O [ORNL]; Stewart, Robert N [ORNL]; Surendran Nair, Sujithkumar [ORNL]

    2016-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification
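
    A minimal sketch of the kind of Monte-Carlo estimate described above (under assumptions of this illustration, not the ORNL model): per-household consumption is drawn from a distribution whose parameters are themselves uncertain, and repeated draws yield a mean and an uncertainty interval for each spatial cell.

```python
import numpy as np

# Hedged sketch of Monte-Carlo small-area electricity demand with uncertainty.
# All counts and distribution parameters are invented for illustration.

rng = np.random.default_rng(3)
households_per_cell = np.array([120, 340, 55, 800])   # hypothetical census-block counts
n_draws = 5000

estimates = np.empty((n_draws, households_per_cell.size))
for i in range(n_draws):
    # Uncertain mean/spread of per-household consumption (MWh/yr), redrawn each iteration
    mu = rng.normal(10.0, 0.5)
    sigma = rng.uniform(2.0, 3.0)
    per_hh = rng.normal(mu, sigma, households_per_cell.size)
    estimates[i] = households_per_cell * per_hh

mean = estimates.mean(axis=0)
lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)
for c, (m, l, h) in enumerate(zip(mean, lo, hi)):
    print(f"cell {c}: {m:8.0f} MWh/yr  (95% interval {l:8.0f} - {h:8.0f})")
```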

  20. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed the example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In this method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The result showed that the fine detail of the low-resolution video can be reproduced compared with bicubic interpolation and the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman’s patch-based super-resolution method. Compared with that of the Freeman’s patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
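
    The PSNR figure of merit used in the comparison above has a standard definition, PSNR = 10 log10(MAX² / MSE); the sketch below applies it to placeholder 8-bit frames rather than the actual video data.

```python
import numpy as np

# Sketch of the peak signal-to-noise ratio between a reference frame and a test frame.
# The arrays are random placeholders standing in for 8-bit video frames.

def psnr(reference, test, max_value=255.0):
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_value**2 / mse)

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, (480, 640), dtype=np.uint8)
degraded = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, degraded):.1f} dB")
```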

  1. Coating Thickness Measurement of the Simulated TRISO-Coated Fuel Particles using an Image Plate and a High Resolution Scanner

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Kim, Yeon Ku; Jeong, Kyung Chai; Lee, Young Woo; Kim, Bong Goo; Eom, Sung Ho; Kim, Young Min; Yeo, Sung Hwan; Cho, Moon Sung

    2014-01-01

    In this study, the thickness of the coating layers of 196 coated particles was measured using an Image Plate detector, a high-resolution scanner, and digital image processing techniques. The experimental results are as follows. - An X-ray image was acquired for 196 simulated TRISO-coated fuel particles with ZrO₂ kernels using an Image Plate with high resolution in a reduced amount of time. - We could observe clear boundaries between coating layers for 196 particles. - The geometric distortion error was compensated for in the calculation. - The coating thickness of the TRISO-coated fuel particles can be nondestructively measured using X-ray radiography and digital image processing technology. - We can increase the number of TRISO-coated particles to be inspected by increasing the number of Image Plate detectors. A TRISO-coated fuel particle for an HTGR (high temperature gas-cooled reactor) is composed of a nuclear fuel kernel and outer coating layers. The coating layers consist of a buffer PyC (pyrolytic carbon), inner PyC (I-PyC), SiC, and outer PyC (O-PyC) layer. The coating thickness is measured to evaluate the soundness of the coating layers. X-ray radiography is one of the nondestructive alternatives for measuring the coating thickness without generating radioactive waste. Several billion particles are to be loaded in a reactor, so as many sample particles as possible should be tested. The acquired X-ray images for the measurement of coating thickness have included only a small number of particles because of the restricted resolution and size of the X-ray detector. We tried to test many particles in a single X-ray exposure to reduce the measurement time. In this experiment, an X-ray image was acquired for 196 simulated TRISO-coated fuel particles using an image plate and a high-resolution scanner with a pixel size of 25 × 25 μm². The coating thickness for the particles could be measured on the image
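
    One plausible way to automate such a measurement, sketched below under assumptions of this illustration (a synthetic radial grey-level profile and the 25 μm pixel size), is to locate layer boundaries as gradient extrema along a radius from the particle centre and convert the pixel separations to thicknesses.

```python
import numpy as np

# Hedged illustration: layer boundaries show up as steps in the grey-level profile
# along a radius, so their locations can be taken as local maxima of the absolute
# gradient. The profile here is synthetic, not a real TRISO radiograph.

pixel_um = 25.0
r = np.arange(40)                                   # radial distance in pixels
# Synthetic profile: three plateaus separated by smooth steps at r = 14 and r = 26
profile = 200 - 60 / (1 + np.exp(-(r - 14))) - 80 / (1 + np.exp(-(r - 26)))

abs_grad = np.abs(np.gradient(profile))
is_peak = (
    (abs_grad > 5)
    & (abs_grad >= np.roll(abs_grad, 1))
    & (abs_grad >= np.roll(abs_grad, -1))
)
boundaries = np.flatnonzero(is_peak)                # boundary radii in pixels

thicknesses_um = np.diff(boundaries) * pixel_um
print("boundary radii (px):", boundaries)
print("layer thicknesses (um):", thicknesses_um)
```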

  2. Modelling non-dust fluids in cosmology

    International Nuclear Information System (INIS)

    Christopherson, Adam J.; Hidalgo, Juan Carlos; Malik, Karim A.

    2013-01-01

    Currently, most of the numerical simulations of structure formation use Newtonian gravity. When modelling pressureless dark matter, or 'dust', this approach gives the correct results for scales much smaller than the cosmological horizon, but for scenarios in which the fluid has pressure this is no longer the case. In this article, we present the correspondence of perturbations in Newtonian and cosmological perturbation theory, showing exact mathematical equivalence for pressureless matter, and giving the relativistic corrections for matter with pressure. As an example, we study the case of scalar field dark matter which features non-zero pressure perturbations. We discuss some problems which may arise when evolving the perturbations in this model with Newtonian numerical simulations and with CMB Boltzmann codes

  3. The Effect of Color Choice on Learner Interpretation of a Cosmology Visualization

    Science.gov (United States)

    Buck, Zoe

    2013-01-01

    As we turn more and more to high-end computing to understand the Universe at cosmological scales, dynamic visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better…

  4. Cosmology with equivalence principle breaking in the dark sector

    International Nuclear Information System (INIS)

    Keselman, Jose Ariel; Nusser, Adi; Peebles, P. J. E.

    2010-01-01

    A long-range force acting only between nonbaryonic particles would be associated with a large violation of the weak equivalence principle. We explore cosmological consequences of this idea, which we label ReBEL (daRk Breaking Equivalence principLe). A high-resolution hydrodynamical simulation of the distributions of baryons and dark matter confirms our previous findings that a ReBEL force of comparable strength to gravity on comoving scales of about 1 h⁻¹ Mpc causes voids between the concentrations of large galaxies to be more nearly empty, suppresses accretion of intergalactic matter onto galaxies at low redshift, and produces an early generation of dense dark-matter halos. A preliminary analysis indicates the ReBEL scenario is consistent with the one-dimensional power spectrum of the Lyman-Alpha forest and the three-dimensional galaxy autocorrelation function. Segregation of baryons and DM in galaxies and systems of galaxies is a strong prediction of ReBEL. ReBEL naturally correlates the baryon mass fraction in groups and clusters of galaxies with the system mass, in agreement with recent measurements.

  5. The Eccentric Satellites Problem: Comparing Milky Way Satellite Orbital Properties to Simulation Results

    Science.gov (United States)

    Haji, Umran; Pryor, Carlton; Applebaum, Elaad; Brooks, Alyson

    2018-01-01

    We compare the orbital properties of the satellite galaxies of the Milky Way to those of satellites found in simulated Milky Way-like systems as a means of testing cosmological simulations of galaxy formation. The particular problem that we are investigating is a discrepancy in the distribution of orbital eccentricities. Previous studies of Milky Way-mass systems analyzed in a semi-analytic ΛCDM cosmological model have found that the satellites tend to have significantly larger fractions of their kinetic energy invested in radial motion with respect to their central galaxy than do the real-world Milky Way satellites. We analyze several high-resolution ("zoom-in") hydrodynamical simulations of Milky Way-mass galaxies and their associated satellite systems to investigate why previous works found Milky Way-like systems to be rare. We find a possible relationship between a quiescent galactic assembly history and a distribution of satellite kinematics resembling that of the Milky Way. This project has been supported by funding from National Science Foundation grant PHY-1560077.

  6. Watershed sensitivity and hydrologic response to high-resolution climate model

    Science.gov (United States)

    Troin, M.; Caya, D.

    2012-12-01

    Global climate models (GCMs) are fundamental research tools to assess climate change impacts on water resources. Regional climate models (RCMs) are complementary to GCMs. The added benefit of RCMs for hydrological applications is still not well understood because watersheds respond differently to RCM experiments. It is expected that the new generation of RCMs improves the representation of climate processes, making them more attractive for impact studies. Given the cost of RCMs, it is essential to ascertain whether high-resolution RCMs offer more detail than what is simulated in GCMs or coarser-resolution RCMs when addressing impacts on water resources. This study aims to assess the added value of RCMs, with emphasis on using high-resolution climate models. More specifically, we examine how the hydrological cycle is represented when the resolution of climate models is increased (45 vs. 200 km; 15 vs. 45 km). We used simulations from the Canadian RCM (CRCM) driven by reanalyses integrated on high-resolution domains (45 and 15 km) and the CRCM driven by multiple members of two GCMs (the Canadian CGCM3; the German ECHAM5) with a horizontal resolution of 45 km. CRCM data and data from their host GCMs are compared to observations over 1971-2000. Precipitation and temperature from the CRCM and GCM simulations are input into the hydrological SWAT model to simulate streamflow in watersheds for the historical period. The selected watersheds are two basins in Quebec (QC) and one basin in British Columbia (BC), Canada. CRCM-45km driven by GCMs performs well in representing precipitation but shows a cold bias of 3.3°C. Such bias in temperature is more significant for the BC basin (4.5°C) due to the Rocky Mountains. For the CRCM-45km/GCM combination (CGCM3 or ECHAM5), comparable skills in reproducing the observed climate are identified even though CGCM3 analyzed alone provides a more accurate indication of climatology in the basins than ECHAM5. When compared to GCM results, CRCM-45km

  7. Identifying added value in high-resolution climate simulations over Scandinavia

    DEFF Research Database (Denmark)

    Mayer, Stephania; Fox Maule, Cathrine; Sobolowski, Stefan

    2015-01-01

    High-resolution data are needed in order to assess potential impacts of extreme events on infrastructure in the mid-latitudes. Dynamical downscaling offers one way to obtain this information. However, prior to implementation in any impacts assessment scheme, model output must be validated and det...

  8. Data Driven Approach for High Resolution Population Distribution and Dynamics Models

    Energy Technology Data Exchange (ETDEWEB)

    Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Rose, Amy N [ORNL; Liu, Cheng [ORNL; Urban, Marie L [ORNL; Stewart, Robert N [ORNL

    2014-01-01

    High resolution population distribution data are vital for successfully addressing critical issues ranging from energy and socio-environmental research to public health to human security. Commonly available population data from the Census are constrained both in space and time and do not capture population dynamics as functions of space and time. This imposes a significant limitation on the fidelity of event-based simulation models with sensitive space-time resolution. This paper describes ongoing development of high-resolution population distribution and dynamics models, at Oak Ridge National Laboratory, through spatial data integration and modeling with behavioral or activity-based mobility datasets for representing temporal dynamics of population. The model is resolved at 1 km resolution globally and describes the U.S. population for nighttime and daytime at 90 m. Integration of such population data provides the opportunity to develop simulations and applications in critical infrastructure management from local to global scales.

  9. High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models

    Science.gov (United States)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David

    2014-12-01

    High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with the near-surface meteorological fields derived from short-range forecasts, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have the potential to improve the near-surface atmospheric fields in extended-range LAM integrations.
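
    At its core, the relaxation of a prognostic surface field toward a reference data set is a Newtonian nudging term. A minimal sketch (the field values, time step, and relaxation time below are illustrative assumptions, not values from the paper):

      import numpy as np

      def nudge_surface_field(field, reference, dt, tau):
          """One step of Newtonian relaxation (nudging) of a prognostic surface
          field toward a reference analysis, with relaxation time scale tau."""
          return field + (dt / tau) * (reference - field)

      # Toy example: relax a 2-D soil-temperature field toward an offline analysis
      # over one day, with a 3-hour time step and a 5-day relaxation time.
      rng = np.random.default_rng(0)
      model_field = 280.0 + rng.normal(0.0, 2.0, size=(4, 4))   # K
      analysis = 282.0 * np.ones((4, 4))                        # K
      dt, tau = 3 * 3600.0, 5 * 86400.0                         # seconds
      for _ in range(8):                                        # 8 x 3 h = 1 day
          model_field = nudge_surface_field(model_field, analysis, dt, tau)
      print(model_field.round(2))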

  10. Intramolecular diffusive motion in alkane monolayers studied by high-resolution quasielastic neutron scattering and molecular dynamics simulations

    DEFF Research Database (Denmark)

    Hansen, Flemming Yssing; Criswell, L.; Fuhrmann, D

    2004-01-01

    Molecular dynamics simulations of a tetracosane (n-C24H50) monolayer adsorbed on a graphite basal-plane surface show that there are diffusive motions associated with the creation and annihilation of gauche defects occurring on a time scale of similar to0.1-4 ns. We present evidence...... that these relatively slow motions are observable by high-energy-resolution quasielastic neutron scattering (QNS) thus demonstrating QNS as a technique, complementary to nuclear magnetic resonance, for studying conformational dynamics on a nanosecond time scale in molecular monolayers....

  11. Cosmological hydrodynamical simulations of galaxy clusters: X-ray scaling relations and their evolution

    Science.gov (United States)

    Truong, N.; Rasia, E.; Mazzotta, P.; Planelles, S.; Biffi, V.; Fabjan, D.; Beck, A. M.; Borgani, S.; Dolag, K.; Gaspari, M.; Granato, G. L.; Murante, G.; Ragone-Figueroa, C.; Steinborn, L. K.

    2018-03-01

    We analyse cosmological hydrodynamical simulations of galaxy clusters to study the X-ray scaling relations between total masses and observable quantities such as X-ray luminosity, gas mass, X-ray temperature, and YX. Three sets of simulations are performed with an improved version of the smoothed particle hydrodynamics GADGET-3 code. These consider the following: non-radiative gas, star formation and stellar feedback, and the addition of feedback by active galactic nuclei (AGN). We select clusters with M500 > 10^14 M⊙ E(z)^-1, mimicking the typical selection of Sunyaev-Zeldovich samples. This selection provides a mass range large enough to enable robust fitting of the relations even at z ~ 2. The results of the analysis show a general agreement with observations. The values of the slope of the mass-gas mass and mass-temperature relations at z = 2 are 10 per cent lower with respect to z = 0, due to the applied mass selection in the former case and to the effect of early mergers in the latter. We investigate the impact of the slope variation on the study of the evolution of the normalization. We conclude that cosmological studies through scaling relations should be limited to the redshift range z = 0-1, where we find that the slope, the scatter, and the covariance matrix of the relations are stable. The scaling between mass and YX is confirmed to be the most robust relation, being almost independent of the gas physics. At higher redshifts, the scaling relations are sensitive to the inclusion of AGNs, which influences low-mass systems. The detailed study of these objects will be crucial to evaluate the AGN effect on the ICM.
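
    Scaling relations of this kind are usually fitted as power laws in log space. A minimal sketch with mock data (the slope, normalization, and scatter below are invented for illustration and are not results from the paper):

      import numpy as np

      # Fit a mass-observable relation of the form
      #   log10(M500 / M0) = alpha + beta * log10(Y_X / Y0)
      # with a simple least-squares fit in log space.
      rng = np.random.default_rng(1)
      logY = rng.uniform(-1.0, 1.0, size=200)                 # scaled log10 Y_X
      logM = 0.1 + 0.58 * logY + rng.normal(0, 0.05, 200)     # mock relation + scatter

      beta, alpha = np.polyfit(logY, logM, 1)                 # slope, normalization
      scatter = np.std(logM - (alpha + beta * logY))
      print(f"slope = {beta:.3f}, normalization = {alpha:.3f}, scatter = {scatter:.3f} dex")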

  12. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.

    Science.gov (United States)

    Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek

    2014-09-21

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
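
    One way to read "a high-resolution eye model within an existing whole-body model" is as a nested material lookup: points falling inside the eye's bounding box are resolved on a fine grid, everything else on the coarse phantom. A minimal sketch under that assumption (dimensions, voxel sizes, and material IDs are hypothetical, and this is not the MCNPX lattice implementation):

      import numpy as np

      coarse = np.zeros((40, 40, 40), dtype=np.int16)          # 20 cm cube, 5 mm voxels
      fine_eye = np.full((50, 50, 50), 7, dtype=np.int16)      # 2.5 cm cube, 0.5 mm voxels
      eye_origin = np.array([0.05, 0.09, 0.15])                # metres, corner of eye box
      eye_size = 0.025                                         # metres

      def material_at(point):
          """Return the material ID at a Cartesian point (metres): fine grid
          inside the eye bounding box, coarse phantom elsewhere."""
          local = point - eye_origin
          if np.all(local >= 0) and np.all(local < eye_size):
              idx = (local / 0.0005).astype(int)               # fine 0.5 mm voxels
              return fine_eye[tuple(idx)]
          idx = (point / 0.005).astype(int)                    # coarse 5 mm voxels
          return coarse[tuple(idx)]

      print(material_at(np.array([0.06, 0.10, 0.16])))   # inside the eye box -> 7
      print(material_at(np.array([0.01, 0.01, 0.01])))   # elsewhere -> 0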

  13. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    Science.gov (United States)

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.
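
    The method of series reversion (MSR) mentioned above inverts a power series order by order; the classical closed-form coefficients up to third order are enough to illustrate the step. A minimal sketch (this shows only the series-reversion building block, not the GNLCS processing chain itself):

      import numpy as np

      def revert_series(a1, a2, a3):
          """First three coefficients of the reversion of
          y = a1*x + a2*x**2 + a3*x**3 + ...  ->  x = b1*y + b2*y**2 + b3*y**3 + ...
          (classical formulas, e.g. Abramowitz & Stegun 3.6.25)."""
          b1 = 1.0 / a1
          b2 = -a2 / a1**3
          b3 = (2.0 * a2**2 - a1 * a3) / a1**5
          return b1, b2, b3

      # Quick numerical check on a small test series.
      a1, a2, a3 = 2.0, 0.3, -0.1
      b1, b2, b3 = revert_series(a1, a2, a3)
      x = 0.05
      y = a1 * x + a2 * x**2 + a3 * x**3
      x_back = b1 * y + b2 * y**2 + b3 * y**3
      print(x, x_back)          # agree up to terms of order y**4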

  14. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    Directory of Open Access Journals (Sweden)

    Tianzhu Yi

    2017-11-01

    Full Text Available This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.

  15. The relative entropy is fundamental to adaptive resolution simulations

    Science.gov (United States)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems, in which atomistic and coarse-grained (CG) force fields are employed simultaneously. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
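
    The relative entropy in question is the Kullback-Leibler divergence between the atomistic and coarse-grained configurational distributions. A minimal sketch that compares two candidate CG distributions against a reference using histograms (all distributions below are toy Gaussians, not data from the paper):

      import numpy as np

      def relative_entropy(p, q, eps=1e-12):
          """Discrete Kullback-Leibler divergence D_KL(p || q) between two
          histograms, normalized to probability distributions."""
          p = p / p.sum()
          q = q / q.sum()
          mask = p > 0
          return np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps)))

      # Toy example: compare a reference ("atomistic") bond-length distribution with
      # two coarse-grained candidates; the better CG model has the lower D_KL.
      rng = np.random.default_rng(2)
      bins = np.linspace(0.0, 2.0, 60)
      ref, _ = np.histogram(rng.normal(1.00, 0.10, 100_000), bins=bins)
      cg_good, _ = np.histogram(rng.normal(1.01, 0.11, 100_000), bins=bins)
      cg_poor, _ = np.histogram(rng.normal(1.20, 0.20, 100_000), bins=bins)
      print(relative_entropy(ref, cg_good), relative_entropy(ref, cg_poor))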

  16. Effects of model resolution and parameterizations on the simulations of clouds, precipitation, and their interactions with aerosols

    Science.gov (United States)

    Lee, Seoung Soo; Li, Zhanqing; Zhang, Yuwei; Yoo, Hyelim; Kim, Seungbum; Kim, Byung-Gon; Choi, Yong-Sang; Mok, Jungbin; Um, Junshik; Ock Choi, Kyoung; Dong, Danhong

    2018-01-01

    This study investigates the roles played by model resolution and microphysics parameterizations in the well-known uncertainties or errors in simulations of clouds, precipitation, and their interactions with aerosols in numerical weather prediction (NWP) models. For this investigation, we used cloud-system-resolving model (CSRM) simulations as benchmark simulations that adopt high resolution and full-fledged microphysical processes. These simulations were evaluated against observations, and this evaluation demonstrated that the CSRM simulations can function as benchmark simulations. Comparisons between the CSRM simulations and simulations at the coarse resolutions generally adopted by current NWP models indicate that the use of coarse resolutions, as in the NWP models, can lower not only updrafts and other cloud variables (e.g., cloud mass, condensation, deposition, and evaporation) but also their sensitivity to increasing aerosol concentration. The parameterization of the saturation process plays an important role in the sensitivity of cloud variables to aerosol concentrations, while the parameterization of the sedimentation process has a substantial impact on how cloud variables are distributed vertically. The variation in cloud variables with resolution is much greater than that with varying microphysics parameterizations, which suggests that the uncertainties in NWP simulations are associated much more with resolution than with microphysics parameterizations.

  17. Cosmology solved? Maybe

    International Nuclear Information System (INIS)

    Turner, Michael S.

    1999-01-01

    For two decades the hot big-bang model has been referred to as the standard cosmology - and for good reason. For just as long, cosmologists have known that there are fundamental questions that are not answered by the standard cosmology and point to a grander theory. The best candidate for that grander theory is inflation + cold dark matter. It holds that the Universe is flat, that slowly moving elementary particles left over from the earliest moments provide the cosmic infrastructure, and that the primeval density inhomogeneities that seed all the structure arose from quantum fluctuations. There is now prima facie evidence that supports two basic tenets of this paradigm. An avalanche of high-quality cosmological observations will soon make this case stronger or will break it. Key questions remain to be answered; foremost among them are: identification and detection of the cold dark matter particles and elucidation of the dark-energy component. These are exciting times in cosmology!

  18. Cosmology solved? Maybe

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Michael S

    1999-03-01

    For two decades the hot big-bang model has been referred to as the standard cosmology - and for good reason. For just as long, cosmologists have known that there are fundamental questions that are not answered by the standard cosmology and point to a grander theory. The best candidate for that grander theory is inflation + cold dark matter. It holds that the Universe is flat, that slowly moving elementary particles left over from the earliest moments provide the cosmic infrastructure, and that the primeval density inhomogeneities that seed all the structure arose from quantum fluctuations. There is now prima facie evidence that supports two basic tenets of this paradigm. An avalanche of high-quality cosmological observations will soon make this case stronger or will break it. Key questions remain to be answered; foremost among them are: identification and detection of the cold dark matter particles and elucidation of the dark-energy component. These are exciting times in cosmology!

  19. Numerical cosmology: Revealing the universe using computers

    International Nuclear Information System (INIS)

    Centrella, J.; Matzner, R.A.; Tolman, B.W.

    1986-01-01

    In this paper the authors present two research projects which study the evolution of different periods in the history of the universe using numerical simulations. The first investigates the synthesis of light elements in an inhomogeneous early universe dominated by shocks and non-linear gravitational waves. The second follows the evolution of large scale structures during the later history of the universe and calculates their effect on the 3K background radiation. Their simulations are carried out using modern supercomputers and make heavy use of multidimensional color graphics, including film to elucidate the results. Both projects provide the authors the opportunity to do experiments in cosmology and assess their results against fundamental cosmological observations

  20. From Modeling of Plasticity in Single-Crystal Superalloys to High-Resolution X-rays Three-Crystal Diffractometer Peaks Simulation

    Science.gov (United States)

    Jacques, Alain

    2016-12-01

    The dislocation-based modeling of the high-temperature creep of two-phased single-crystal superalloys requires input data beyond strain vs time curves. This may be obtained by use of in situ experiments combining high-temperature creep tests with high-resolution synchrotron three-crystal diffractometry. Such tests give access to changes in phase volume fractions and to the average components of the stress tensor in each phase as well as the plastic strain of each phase. Further progress may be obtained by a new method making intensive use of the Fast Fourier Transform, and first modeling the behavior of a representative volume of material (stress fields, plastic strain, dislocation densities…), then simulating directly the corresponding diffraction peaks, taking into account the displacement field within the material, chemical variations, and beam coherence. Initial tests indicate that the simulated peak shapes are close to the experimental ones and are quite sensitive to the details of the microstructure and to dislocation densities at interfaces and within the soft γ phase.
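
    The last step described above, turning a computed displacement field into a diffraction peak, can be illustrated with a one-dimensional kinematic sum over unit cells. A minimal sketch (the paper works with FFTs over a 3-D representative volume; the direct sum and the sinusoidal displacement below are simplifying assumptions for clarity):

      import numpy as np

      # Diffracted amplitude from N unit cells at positions x_n + u_n:
      #   A(q) = sum_n exp(i*q*(x_n + u_n));
      # a smooth displacement field u(x) weakens and redistributes the Bragg peak
      # relative to the perfect lattice.
      N, d = 2000, 1.0                                   # number of cells, lattice spacing
      x = np.arange(N) * d
      u = 0.2 * np.sin(2.0 * np.pi * x / (N * d))        # hypothetical displacement field

      q0 = 2.0 * np.pi / d                               # Bragg position of the perfect lattice
      q = np.linspace(0.998 * q0, 1.002 * q0, 401)
      amp_perfect = np.exp(1j * np.outer(q, x)).sum(axis=1)
      amp_strained = np.exp(1j * np.outer(q, x + u)).sum(axis=1)
      print(np.abs(amp_perfect).max(), np.abs(amp_strained).max())   # strained peak is weaker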

  1. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step
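
    The cost argument behind local time stepping can be made concrete with simple arithmetic: with a CFL-limited explicit scheme, the smallest element dictates the global step, whereas local time stepping lets each element advance near its own stable step. A rough sketch (the mesh sizes, wave speed, and power-of-two step grouping are assumptions, not details of the cited solver):

      import numpy as np

      rng = np.random.default_rng(3)
      h = rng.uniform(50.0, 500.0, size=100_000)       # element sizes in metres
      c = 3000.0                                        # wave speed in m/s
      dt = 0.5 * h / c                                  # stable local time step per element
      T = 10.0                                          # simulated time in seconds

      # Global time stepping: every element is updated with the smallest dt.
      global_cost = len(h) * T / dt.min()               # total element-updates

      # Local time stepping: each element uses dt.min() * 2**p, the largest
      # power-of-two multiple of the global step not exceeding its own stable step.
      p = np.floor(np.log2(dt / dt.min()))
      lts_cost = np.sum(T / (dt.min() * 2.0**p))
      print(f"speed-up from LTS ~ {global_cost / lts_cost:.1f}x")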

  2. Precision Cosmology

    Science.gov (United States)

    Jones, Bernard J. T.

    2017-04-01

    Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.

  3. Emergence of the product of constant curvature spaces in loop quantum cosmology

    International Nuclear Information System (INIS)

    Dadhich, Naresh; Joe, Anton; Singh, Parampreet

    2015-01-01

    The loop quantum dynamics of Kantowski–Sachs spacetime and the interior of higher genus black hole spacetimes with a cosmological constant has some peculiar features not shared by various other spacetimes in loop quantum cosmology. As in the other cases, though the quantum geometric effects resolve the physical singularity and result in a non-singular bounce, after the bounce a spacetime with small spacetime curvature does not emerge in either the subsequent backward or the forward evolution. Rather, in the asymptotic limit the spacetime manifold is a product of two constant curvature spaces. Interestingly, though the spacetime curvature of these asymptotic spacetimes is very high, their effective metric is a solution to Einstein’s field equations. Analysis of the components of the Ricci tensor shows that after the singularity resolution, the Kantowski–Sachs spacetime leads to an effective metric which can be interpreted as the ‘charged’ Nariai, while the higher genus black hole interior can similarly be interpreted as an anti Bertotti–Robinson spacetime with a cosmological constant. These spacetimes are ‘charged’ in the sense that the energy–momentum tensor that satisfies Einstein’s field equations is formally the same as the one for the uniform electromagnetic field, albeit it has a purely quantum geometric origin. The asymptotic spacetimes also have an emergent cosmological constant which is different in magnitude, and sometimes even its sign, from the cosmological constant in the Kantowski–Sachs and the interior of higher genus black hole metrics. With a fine tuning of the latter cosmological constant, we show that ‘uncharged’ Nariai, and anti Bertotti–Robinson spacetimes with a vanishing emergent cosmological constant can also be obtained. (paper)

  4. Detailed high-resolution three-dimensional simulations of OMEGA separated reactants inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Haines, Brian M., E-mail: bmhaines@lanl.gov; Fincke, James R.; Shah, Rahul C.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B. [Los Alamos National Laboratory, MS T087, Los Alamos, New Mexico 87545 (United States); Grim, Gary P. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)

    2016-07-15

    We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. Through these processes, the long

  5. EGS4CYL a Montecarlo simulation method of a PET or spect equipment at high spatial resolution

    International Nuclear Information System (INIS)

    Ferriani, S.; Galli, M.

    1995-11-01

    This report describes a Monte Carlo method for the simulation of a PET or SPECT system. The method is based on the EGS4CYL code. This work has been done in the framework of the Hirespet collaboration for the development of a high-spatial-resolution tomograph; the method will be used in the design of the tomograph. The treated geometry consists of a set of coaxial cylinders surrounded by a ring of detectors. The detectors have a box shape, and a collimator in front of each of them can be included by means of geometrical constraints on the incident particles. An isotropic source is placed in the middle of the system. The EGS4 code is used for the particle transport, and the CERN packages HIGZ and HBOOK are used for storing and plotting the results.
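
    The geometry described, an isotropic source at the centre of a ring of box-shaped detectors, lends itself to a very reduced illustration of the coincidence logic. A minimal 2-D sketch (no photon transport, attenuation, or collimators, so this is not the EGS4-based simulation itself; the detector count and event number are arbitrary):

      import numpy as np

      # A point positron source at the centre of a ring of n_det detectors emits
      # back-to-back annihilation photons at isotropic angles; we record which
      # detector pair (line of response) each event hits.
      rng = np.random.default_rng(4)
      n_det, n_events = 64, 10_000
      phi = rng.uniform(0.0, 2.0 * np.pi, n_events)        # emission angle of photon 1

      det1 = (phi / (2.0 * np.pi) * n_det).astype(int) % n_det
      det2 = ((phi + np.pi) / (2.0 * np.pi) * n_det).astype(int) % n_det

      # Coincidence histogram over unordered detector pairs.
      pairs = np.stack([np.minimum(det1, det2), np.maximum(det1, det2)], axis=1)
      unique_pairs, counts = np.unique(pairs, axis=0, return_counts=True)
      print(len(unique_pairs), "lines of response;", counts.mean(), "counts each on average")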

  6. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    2003-01-01

    Full Text Available The seasonal variability of the water masses and transport in the Malta Channel and proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle. The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation, and a westward winter intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by internal dynamics, to be followed in

  7. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    Full Text Available The seasonal variability of the water masses and transport in the Malta Channel and proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle.

    The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation, and a westward winter intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by

  8. Constraining holographic cosmology using Planck data

    Science.gov (United States)

    Afshordi, Niayesh; Gould, Elizabeth; Skenderis, Kostas

    2017-06-01

    Holographic cosmology offers a novel framework for describing the very early Universe in which cosmological predictions are expressed in terms of the observables of a three-dimensional quantum field theory (QFT). This framework includes conventional slow-roll inflation, which is described in terms of a strongly coupled QFT, but it also allows for qualitatively new models for the very early Universe, where the dual QFT may be weakly coupled. The new models describe a universe which is nongeometric at early times. While standard slow-roll inflation leads to a (near-) power-law primordial power spectrum, perturbative super-renormalizable QFTs yield a new holographic spectral shape. Here, we compare the two predictions against cosmological observations. We use CosmoMC to determine the best fit parameters, and MultiNest for Bayesian evidence, comparing the likelihoods. We find that the dual QFT should be nonperturbative at the very low multipoles (l ≲30 ), while for higher multipoles (l ≳30 ) the new holographic model, based on perturbative QFT, fits the data just as well as the standard power-law spectrum assumed in Λ CDM cosmology. This finding opens the door to applications of nonperturbative QFT techniques, such as lattice simulations, to observational cosmology on gigaparsec scales and beyond.

  9. How To Model Supernovae in Simulations of Star and Galaxy Formation

    Science.gov (United States)

    Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguére, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem

    2018-03-01

    We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and to black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting "preferred directions" on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common "fully-thermal" (energy-dump) or "fully-kinetic" (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳ 100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution. In both idealized and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.
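
    The resolution dependence discussed above comes from whether a resolution element can still capture the energy-conserving phase of a supernova remnant before cooling sets in. A heavily simplified sketch of such a coupling rule (the terminal-momentum fit and the switch criterion are generic literature-style approximations, not the FIRE-2 algorithm):

      import numpy as np

      M_SUN_G = 1.989e33   # g
      KM_S = 1.0e5         # cm/s

      def couple_sne(m_cell_msun, n_H, E_sn_erg=1.0e51):
          """Single-SN coupling rule (a sketch, not the FIRE-2 algorithm):
          if the gas mass per resolution element is small enough to resolve the
          energy-conserving (Sedov-Taylor) phase, deposit energy; otherwise
          deposit the approximate terminal momentum of the snowplough phase."""
          # Terminal momentum ~ 3e5 Msun km/s at n_H ~ 1 cm^-3, with a weak
          # density dependence (a common fit in the SN-feedback literature).
          p_term = 3.0e5 * n_H**(-1.0 / 7.0) * M_SUN_G * KM_S           # g cm/s
          # Momentum the blast would carry if all E_sn were kinetic in one cell.
          p_energy = np.sqrt(2.0 * E_sn_erg * m_cell_msun * M_SUN_G)    # g cm/s
          if p_energy < p_term:
              return "energy-conserving", E_sn_erg, p_energy
          return "momentum-conserving", None, p_term

      for m_cell in (10.0, 1.0e4, 1.0e6):   # Msun per resolution element
          print(m_cell, couple_sne(m_cell, n_H=1.0))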

  10. Smoot Group Cosmology

    Science.gov (United States)

    Professor George Smoot's group conducts research on the early universe (cosmology) using the Cosmic Microwave Background radiation (CMB), addressing science goals regarding cosmology.

  11. Mathematical cosmology

    International Nuclear Information System (INIS)

    Wainwright, J.

    1990-01-01

    The workshop on mathematical cosmology was devoted to four topics of current interest. This report contains a brief discussion of the historical background of each topic and a concise summary of the content of each talk. The topics were; the observational cosmology program, the cosmological perturbation program, isotropic singularities, and the evolution of Bianchi cosmologies. (author)

  12. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    Science.gov (United States)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal
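
    Spectral nudging of the kind described relaxes only the largest horizontal scales of the simulated fields toward the driving data. A minimal sketch on a doubly periodic 2-D field (the grid size, time step, relaxation time, and number of retained wavenumbers are illustrative assumptions):

      import numpy as np

      def spectral_nudge(field, driver, dt, tau, n_keep=3):
          """One nudging step that relaxes only the largest scales (|k| < n_keep
          in each direction) of a doubly periodic 2-D field toward the coarse
          driving field, with relaxation time scale tau."""
          nx, ny = field.shape
          kx = np.abs(np.fft.fftfreq(nx) * nx)
          ky = np.abs(np.fft.fftfreq(ny) * ny)
          keep = (kx[:, None] < n_keep) & (ky[None, :] < n_keep)
          diff_hat = np.fft.fft2(driver - field) * keep
          large_scale_diff = np.real(np.fft.ifft2(diff_hat))
          return field + (dt / tau) * large_scale_diff

      # Toy usage: a high-resolution field has drifted from its driver; nudging pulls
      # back the large-scale bias while leaving the small-scale detail untouched.
      rng = np.random.default_rng(5)
      driver = np.outer(np.sin(np.linspace(0.0, 2.0 * np.pi, 64)), np.ones(64))
      field = driver + 2.0 + 0.5 * rng.normal(size=(64, 64))   # bias + small-scale noise
      nudged = spectral_nudge(field, driver, dt=600.0, tau=6.0 * 3600.0, n_keep=2)
      print(np.mean(field - driver), np.mean(nudged - driver))  # mean bias is reduced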

  13. Charged current cross section for massive cosmological neutrinos impinging on radioactive nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Lazauskas, R.; Volpe, C. [Institut de Physique Nucléaire, 91 - Orsay (France); Vogel, P. [Kellogg Radiation Lab., Caltech, Pasadena, California (United States)

    2007-07-01

    We discuss the cross section formula for both massless and massive neutrinos on stable and radioactive nuclei. The latter could be of interest for the detection of cosmological neutrinos, whose observation is one of the main challenges of modern cosmology. We analyze the signal-to-background ratio as a function of the ratio m_ν/Δ, i.e. the neutrino mass over the detector resolution, and show that an energy resolution Δ ≤ 0.5 eV would be required for sub-eV neutrino masses, independently of the gravitational neutrino clustering. Finally we mention the non-resonant character of neutrino capture on radioactive nuclei. (authors)

  14. Weighing the galactic disc using the Jeans equation: lessons from simulations

    Science.gov (United States)

    Candlish, G. N.; Smith, R.; Moni Bidin, C.; Gibson, B. K.

    2016-03-01

    Using three-dimensional stellar kinematic data from simulated galaxies, we examine the efficacy of a Jeans equation analysis in reconstructing the total disk surface density, including the dark matter, at the `Solar' radius. Our simulation data set includes galaxies formed in a cosmological context using state-of-the-art high-resolution cosmological zoom simulations, and other idealized models. The cosmologically formed galaxies have been demonstrated to lie on many of the observed scaling relations for late-type spirals, and thus offer an interesting surrogate for real galaxies with the obvious advantage that all the kinematical data are known perfectly. We show that the vertical velocity dispersion is typically the dominant kinematic quantity in the analysis, and that the traditional method of using only the vertical force is reasonably effective at low heights above the disk plane. At higher heights the inclusion of the radial force becomes increasingly important. We also show that the method is sensitive to uncertainties in the measured disk parameters, particularly the scalelengths of the assumed double exponential density distribution, and the scalelength of the radial velocity dispersion. In addition, we show that disk structure and low number statistics can lead to significant errors in the calculated surface densities. Finally, we examine the implications of our results for previous studies of this sort, suggesting that more accurate measurements of the scalelengths may help reconcile conflicting estimates of the local dark matter density in the literature.
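
    For reference, the textbook one-dimensional relations underlying this kind of analysis are the vertical Jeans equation with its radial "tilt" term, combined with a plane-parallel estimate of the surface density. The exact set of terms retained varies between studies, so the following is a generic form rather than the authors' precise expression:

      % Vertical Jeans equation for a tracer population with density \nu(R,z),
      % vertical dispersion \sigma_z, and tilt term \overline{v_R v_z}:
      \frac{1}{\nu}\frac{\partial\left(\nu\,\sigma_z^{2}\right)}{\partial z}
        + \frac{1}{R\,\nu}\frac{\partial\left(R\,\nu\,\overline{v_{R}v_{z}}\right)}{\partial R}
        = -\frac{\partial\Phi}{\partial z},
      \qquad
      % Plane-parallel (thin-sheet) estimate of the surface density within |z|:
      \Sigma(<z) \simeq \frac{1}{2\pi G}\left|\frac{\partial\Phi}{\partial z}\right|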

  15. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    Science.gov (United States)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.

  16. Air-sea exchange over Black Sea estimated from high resolution regional climate simulations

    Science.gov (United States)

    Velea, Liliana; Bojariu, Roxana; Cica, Roxana

    2013-04-01

    The Black Sea is an important influence on the climate of the bordering countries, showing cyclogenetic activity (Trigo et al, 1999) and influencing Mediterranean cyclones passing over it. As for other seas, standard observations of the atmosphere are limited in time and space, and available observation-based estimations of the air-sea exchange terms present quite large ranges of uncertainty. The reanalysis datasets (e.g. ERA produced by ECMWF) provide promising validation estimates of climatic characteristics against those in available climatic data (Schrum et al, 2001), but cannot reproduce some local features because of their relatively coarse horizontal resolution. Detailed and realistic information on smaller-scale processes is expected to be provided by regional climate models, owing to continuous improvements of physical parameterizations and numerical methods that afford simulations at high spatial resolution. The aim of the study is to assess the potential of three regional climate models in reproducing known climatological characteristics of air-sea exchange over the Black Sea, as well as to explore the added value of the models compared to the input (reanalysis) data. We employ results of long-term (1961-2000) simulations performed within the ENSEMBLES project (http://ensemblesrt3.dmi.dk/) using the models ETHZ-CLM, CNRM-ALADIN and METO-HadCM, for which the integration domain covers the whole area of interest. The analysis is performed for the entire basin for several variables entering the heat and water budget terms and available as direct output from the models, at seasonal and annual scale. A comparison with independent data (ERA-INTERIM) and findings from other studies (e.g. Schrum et al, 2001) is also presented. References: Schrum, C., Staneva, J., Stanev, E. and Ozsoy, E., 2001: Air-sea exchange in the Black Sea estimated from atmospheric analysis for the period 1979-1993, J. Marine Systems, 31, 3-19. Trigo, I. F., T. D. Davies, and G. R. Bigg (1999): Objective

  17. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
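
    As a point of reference for the lossless codecs, a ratio/throughput measurement of this kind reduces to compressing the raw particle buffer and timing it. A minimal sketch using only standard-library codecs (zlib and LZMA); the mock single-precision positions are random, so the ratios will be pessimistic compared with real, spatially coherent particle data, and the Blosc/FPZIP/ZFP backends evaluated in the study have their own bindings and are not shown here:

      import lzma
      import time
      import zlib

      import numpy as np

      rng = np.random.default_rng(6)
      particles = rng.uniform(0.0, 256.0, size=(1_000_000, 3)).astype(np.float32)
      raw = particles.tobytes()                       # ~12 MB of mock positions

      for name, codec in (("zlib", lambda b: zlib.compress(b, 6)),
                          ("lzma", lambda b: lzma.compress(b, preset=6))):
          start = time.perf_counter()
          compressed = codec(raw)
          elapsed = time.perf_counter() - start
          ratio = len(raw) / len(compressed)
          throughput = len(raw) / elapsed / 2**20     # MiB/s
          print(f"{name}: ratio = {ratio:.2f}, throughput = {throughput:.1f} MiB/s")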

  18. Cosmology and Gravitation: the grand scheme for High-Energy Physics

    CERN Document Server

    Binétruy, P.

    2014-12-10

    These lectures describe how the Standard Model of cosmology (ΛCDM) has developed, based on observational facts but also on ideas formed in the context of the theory of fundamental interactions, both gravitational and non-gravitational, the latter being described by the Standard Model of high energy physics. They focus on the latest developments, in particular the precise knowledge of the early Universe provided by the observation of the Cosmic Microwave Background and the discovery of the present acceleration of the expansion of the Universe. While insisting on the successes of the Standard Model of cosmology, we will stress that it rests on three pillars which involve many open questions: the theory of inflation, the nature of dark matter and of dark energy. We will devote one chapter to each of these issues, describing in particular how this impacts our views on the theory of fundamental interactions. More technical parts are given in italics. They may be skipped altogether.

  19. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    Science.gov (United States)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  20. Arbitrary scalar-field and quintessence cosmological models

    International Nuclear Information System (INIS)

    Harko, Tiberiu; Lobo, Francisco S.N.; Mak, M.K.

    2014-01-01

    The mechanism of the initial inflationary scenario of the Universe and of its late-time acceleration can be described by assuming the existence of some gravitationally coupled scalar fields φ, with the inflaton field generating inflation and the quintessence field being responsible for the late accelerated expansion. Various inflationary and late-time accelerated scenarios are distinguished by the choice of an effective self-interaction potential V(φ), which simulates a temporarily non-vanishing cosmological term. In this work, we present a new formalism for the analysis of scalar fields in flat isotropic and homogeneous cosmological models. The basic evolution equation of the models can be reduced to a first-order non-linear differential equation. Approximate solutions of this equation can be constructed in the limiting cases of the scalar-field kinetic energy and potential energy dominance, respectively, as well as in the intermediate regime. Moreover, we present several new accelerating and decelerating exact cosmological solutions, based on the exact integration of the basic evolution equation for scalar-field cosmologies. More specifically, exact solutions are obtained for exponential, generalized cosine hyperbolic, and power-law potentials, respectively. Cosmological models with power-law scalar field potentials are also analyzed in detail. (orig.)
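
    The basic system being reduced and solved, the Friedmann constraint together with the Klein-Gordon equation for the scalar field, is easy to integrate numerically for a given potential. A minimal sketch for an exponential potential in units 8πG = c = 1 (the potential parameters and initial conditions are arbitrary choices, and this direct numerical integration is not the paper's analytic reduction to a first-order equation):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Flat FRW universe driven by a scalar field with V(phi) = V0 * exp(-lam * phi).
      V0, lam = 1.0, 1.0
      V = lambda phi: V0 * np.exp(-lam * phi)
      dV = lambda phi: -lam * V0 * np.exp(-lam * phi)

      def rhs(t, y):
          ln_a, phi, phidot = y
          H = np.sqrt((0.5 * phidot**2 + V(phi)) / 3.0)     # Friedmann constraint
          return [H, phidot, -3.0 * H * phidot - dV(phi)]    # Klein-Gordon equation

      sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 0.0], rtol=1e-8)
      ln_a, phi, phidot = sol.y
      w = (0.5 * phidot**2 - V(phi)) / (0.5 * phidot**2 + V(phi))   # equation of state
      print("final ln(a) =", ln_a[-1], " final w =", w[-1])         # w < -1/3: accelerating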

  1. Low-redshift Lyman limit systems as diagnostics of cosmological inflows and outflows

    Science.gov (United States)

    Hafen, Zachary; Faucher-Giguère, Claude-André; Anglés-Alcázar, Daniel; Kereš, Dušan; Feldmann, Robert; Chan, T. K.; Quataert, Eliot; Murray, Norman; Hopkins, Philip F.

    2017-08-01

    We use cosmological hydrodynamic simulations with stellar feedback from the FIRE (Feedback In Realistic Environments) project to study the physical nature of Lyman limit systems (LLSs) at z ≤ 1. At these low redshifts, LLSs are closely associated with dense gas structures surrounding galaxies, such as galactic winds, dwarf satellites and cool inflows from the intergalactic medium. Our analysis is based on 14 zoom-in simulations covering the halo mass range Mh ≈ 10^9-10^13 M⊙ at z = 0, which we convolve with the dark matter halo mass function to produce cosmological statistics. We find that the majority of cosmologically selected LLSs are associated with haloes in the mass range 10^10 ≲ Mh ≲ 10^12 M⊙. The incidence and H I column density distribution of simulated absorbers with columns in the range 10^16.2 ≤ N_HI ≤ 2×10^20 cm^-2 are consistent with observations. High-velocity outflows (with radial velocity exceeding the halo circular velocity by a factor of ≳ 2) tend to have higher metallicities ([X/H] ~ -0.5), while the lowest-metallicity LLSs are typically associated with gas infalling from the intergalactic medium. The mean (standard deviation) LLS metallicity is [X/H] = -0.9 (0.4) and does not show significant evidence for bimodality, in contrast to recent observational studies, but consistent with LLSs arising from haloes with a broad range of masses and metallicities.

  2. High resolution geodynamo simulations with strongly-driven convection and low viscosity

    Science.gov (United States)

    Schaeffer, Nathanael; Fournier, Alexandre; Jault, Dominique; Aubert, Julien

    2015-04-01

    Numerical simulations have been successful at explaining the magnetic field of the Earth for 20 years. However, the regime in which these simulations operate is in many respects very far from what is expected in the Earth's core. By reviewing previous work, we find that it appears difficult to have both low viscosity (low magnetic Prandtl number) and strong magnetic fields in numerical models (a large ratio of magnetic over kinetic energy, a.k.a. the inverse squared Alfvén number). In order to understand better the dynamics and turbulence of the core, we have run a series of 3 simulations with increasingly demanding parameters. The last simulation is at the limit of what current codes can do on today's supercomputers, with a resolution of 2688 grid points in longitude, 1344 in latitude, and 1024 radial levels. We will show various features of these numerical simulations, including what appear as trends when pushing the parameters toward those of the Earth. The dynamics is very rich. From short time scales to long time scales, we observe at large scales: inertial waves, torsional Alfvén waves, columnar convective overturn dynamics and long-term thermal winds. In addition, the dynamics inside and outside the tangent cylinder seem to follow different routes. We find that the ohmic dissipation largely dominates the viscous one and that the magnetic energy dominates the kinetic energy. The magnetic field seems to play an ambiguous role. Despite the large magnetic field, which has an important impact on the flow, we find that the force balance for the mean flow is a thermal wind balance, and that the scale of convective cells is still dominated by viscous effects.

  3. Surface Wind Regionalization over Complex Terrain: Evaluation and Analysis of a High-Resolution WRF Simulation

    NARCIS (Netherlands)

    Jiménez, P.A.; González-Rouco, J.F.; García-Bustamante, E.; Navarro, J.; Montávez, J.P.; Vilà-Guerau de Arellano, J.; Dudhia, J.; Muñoz-Roldan, A.

    2010-01-01

    This study analyzes the daily-mean surface wind variability over an area characterized by complex topography through comparing observations and a 2-km-spatial-resolution simulation performed with the Weather Research and Forecasting (WRF) model for the period 1992–2005. The evaluation focuses on the

  4. ACTPol: Status and preliminary CMB polarization results from the Atacama Cosmology Telescope

    Science.gov (United States)

    Koopman, Brian

    2014-03-01

    The Atacama Cosmology Telescope Polarimeter (ACTPol) is a polarization sensitive upgrade for the Atacama Cosmology Telescope, located at an elevation of 5190 m on Cerro Toco in Chile. In summer 2013, ACTPol achieved first light with one third of the final detector configuration. The remaining two thirds of the detector array will be installed during spring 2014, enabling full sensitivity, high resolution, observations at both 90 GHz and 150 GHz. Using approximately 3,000 transition-edge sensor bolometers, ACTPol will enable measurements of small angular scale polarization anisotropies in the Cosmic Microwave Background (CMB). I will present a status update for the ACTPol receiver and some preliminary results. ACTPol measurements will allow us to probe the spectral index of inflation as well as to constrain early dark energy and the sum of neutrino masses.

  5. Simulating the interaction of jets with the intracluster medium

    Science.gov (United States)

    Weinberger, Rainer; Ehlert, Kristian; Pfrommer, Christoph; Pakmor, Rüdiger; Springel, Volker

    2017-10-01

    Jets from supermassive black holes in the centres of galaxy clusters are a potential candidate for moderating gas cooling and subsequent star formation through depositing energy in the intracluster gas. In this work, we simulate the jet-intracluster medium interaction using the moving-mesh magnetohydrodynamics code arepo. Our model injects supersonic, low-density, collimated and magnetized outflows in cluster centres, which are then stopped by the surrounding gas, thermalize and inflate low-density cavities filled with cosmic rays. We perform high-resolution, non-radiative simulations of the lobe creation, expansion and disruption, and find that its dynamical evolution is in qualitative agreement with simulations of idealized low-density cavities that are dominated by a large-scale Rayleigh-Taylor instability. The buoyant rising of the lobe does not create energetically significant small-scale chaotic motion in a volume-filling fashion, but rather a systematic upward motion in the wake of the lobe and a corresponding back-flow antiparallel to it. We find that, overall, 50 per cent of the injected energy ends up in material that is not part of the lobe, and about 25 per cent remains in the inner 100 kpc. We conclude that jet-inflated, buoyantly rising cavities drive systematic gas motions that play an important role in heating the central regions, while mixing of lobe material is subdominant. Encouragingly, the main mechanisms responsible for this energy deposition can be modelled already at resolutions within reach in future, high-resolution cosmological simulations of galaxy clusters.

  6. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Science.gov (United States)

    Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Olusegun, Christiana; Klein, Cornelia; Hamann, Ilse; Salack, Seyni; Bliefernicht, Jan; Kunstmann, Harald

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ) with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980-2010 and the two future periods 2020-2050 and 2070-2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent in the north and

  7. Cosmology with the cosmic web

    Science.gov (United States)

    Forero-Romero, J. E.

    2017-07-01

    This talk summarizes different algorithms that can be used to trace the cosmic web both in simulations and observations. We present different applications in galaxy formation and cosmology. To finalize, we show how the Dark Energy Spectroscopic Instrument (DESI) could be a good place to apply these techniques.

  8. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is visually determined, depends on the dose rate and the inertia of the television pick-up tube. This connection is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of functional prototypes of high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical bandwidth and number of lines on the resolution in the stored X-ray television image is explained in more detail. (orig.)

  9. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using High Resolution Lyon-Fedder-Mobarry (LFM) Code

    Science.gov (United States)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

    Test particle simulations of electron injection by the bursty bulk flows (BBFs) have been done using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run with high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch angle scattering of electrons resulting in the violation of conservation of the first adiabatic invariant has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of >10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) Global MHD Magnetospheric Simulation Code, J. Atm. and Solar-Terrestrial Phys., 66, Issue 15-16, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.

  10. String Gas Cosmology

    OpenAIRE

    Brandenberger, Robert H.

    2008-01-01

    String gas cosmology is a string theory-based approach to early universe cosmology which is based on making use of robust features of string theory such as the existence of new states and new symmetries. A first goal of string gas cosmology is to understand how string theory can effect the earliest moments of cosmology before the effective field theory approach which underlies standard and inflationary cosmology becomes valid. String gas cosmology may also provide an alternative to the curren...

  11. Adaptive Resolution Simulation of MARTINI Solvents

    NARCIS (Netherlands)

    Zavadlav, Julija; Melo, Manuel N.; Cunha, Ana V.; de Vries, Alex H.; Marrink, Siewert J.; Praprotnik, Matej

    We present adaptive resolution dynamics simulations of aqueous and apolar solvents, using coarse-grained molecular models that are compatible with the MARTINI force field. As representatives of both classes of solvents we have chosen liquid water and butane, respectively, at ambient temperature. The solvent

  12. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    Science.gov (United States)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

    Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the grid cells of models used in global inversions are typically hundreds of kilometers wide and often cover areas that include combinations of land, ocean and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of scales of atmospheric CO2 variability and representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition allowing new insight into small scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement
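
    The coarse-graining step described above (characterizing sub-grid variability by aggregating 10-km cells into blocks of 100-400 km) can be illustrated with a short, self-contained sketch. The grid size, synthetic CO2 field and block factor below are illustrative assumptions and are not the GEOS-5 configuration.

    import numpy as np

    # Illustrative fine-resolution column CO2 field (ppm) on a 720 x 1440 grid,
    # standing in for a ~10 km global simulation (values are synthetic).
    rng = np.random.default_rng(0)
    ny, nx = 720, 1440
    background = 400.0 + 2.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, ny))[:, None]
    small_scale = rng.normal(0.0, 1.5, size=(ny, nx))      # sub-grid variability
    xco2 = background + small_scale

    def subgrid_stats(field, block):
        """Aggregate a fine grid into coarse blocks; return per-block mean and std.

        `block` is the number of fine cells per coarse cell along each axis,
        e.g. block=16 turns ~10 km cells into ~160 km cells."""
        ny, nx = field.shape
        ny_c, nx_c = ny // block, nx // block
        trimmed = field[:ny_c * block, :nx_c * block]
        blocks = trimmed.reshape(ny_c, block, nx_c, block)
        return blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))

    coarse_mean, coarse_std = subgrid_stats(xco2, block=16)
    # The distribution of per-block standard deviations summarises how much
    # column CO2 varies inside a single coarse-model grid cell.
    print("median sub-grid std [ppm]:", np.median(coarse_std))
    print("95th percentile sub-grid std [ppm]:", np.percentile(coarse_std, 95))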

  13. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
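
    The core of the LTS idea sketched above (per-element stable time steps grouped into power-of-two levels) can be caricatured in a few lines; the element sizes, wave speed and Courant number below are made-up values, and a production LTS scheme additionally needs careful flux synchronisation at level interfaces.

    import numpy as np

    # Synthetic element sizes: a mostly coarse mesh with a locally refined patch,
    # mimicking refinement around steep topography or a fault.
    h = np.full(1000, 500.0)            # element size in metres
    h[450:470] = 20.0                   # locally refined region

    v_max = 6000.0                      # assumed maximum wave speed (m/s)
    cfl = 0.5                           # assumed Courant number

    dt_elem = cfl * h / v_max           # stable explicit time step per element
    dt_global = dt_elem.min()           # what a uniform scheme must use everywhere

    # Local time stepping: element i advances with dt_global * 2**level[i],
    # where the level is the largest power of two still below its own limit.
    level = np.floor(np.log2(dt_elem / dt_global)).astype(int)

    # Cost model: work ~ number of sub-steps an element takes per coarsest step.
    substeps_lts = 2.0 ** (level.max() - level)
    substeps_uniform = 2.0 ** level.max() * np.ones_like(substeps_lts)
    speedup = substeps_uniform.sum() / substeps_lts.sum()
    print(f"global dt = {dt_global:.3e} s, LTS levels used: {sorted(set(level.tolist()))}")
    print(f"estimated cost reduction from LTS: {speedup:.1f}x")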

  14. Singularity resolution in quantum gravity

    International Nuclear Information System (INIS)

    Husain, Viqar; Winkler, Oliver

    2004-01-01

    We examine the singularity resolution issue in quantum gravity by studying a new quantization of standard Friedmann-Robertson-Walker geometrodynamics. The quantization procedure is inspired by the loop quantum gravity program, and is based on an alternative to the Schroedinger representation normally used in metric variable quantum cosmology. We show that in this representation for quantum geometrodynamics there exists a densely defined inverse scale factor operator, and that the Hamiltonian constraint acts as a difference operator on the basis states. We find that the cosmological singularity is avoided in the quantum dynamics. We discuss these results with a view to identifying the criteria that constitute 'singularity resolution' in quantum gravity

  15. Estimating cosmological parameters by the simulated data of gravitational waves from the Einstein Telescope

    Science.gov (United States)

    Cai, Rong-Gen; Yang, Tao

    2017-02-01

    We investigate the ability of gravitational waves (GWs) as standard sirens to constrain the cosmological parameters by using the third-generation gravitational wave detector: the Einstein Telescope. The binary merger of a neutron star with either a neutron star or a black hole is hypothesized to be the progenitor of a short and intense burst of γ rays; some fraction of those binary mergers could be detected both through electromagnetic radiation and gravitational waves. Thus we can determine both the luminosity distance and redshift of the source separately. We simulate the luminosity distances and redshift measurements from 100 to 1000 GW events. We use two different algorithms to constrain the cosmological parameters. For the Hubble constant H0 and dark matter density parameter Ωm, we adopt the Markov chain Monte Carlo approach. We find that with about 500-600 GW events we can constrain the Hubble constant with an accuracy comparable to Planck temperature data and Planck lensing combined results, while for the dark matter density, GWs alone do not seem able to provide constraints as good as those on the Hubble constant; the sensitivity of 1000 GW events is a little lower than that of Planck data. It should require more than 1000 events to match the Planck sensitivity. Yet, for analyzing the more complex dynamical property of dark energy, i.e., the equation of state w, we adopt a new powerful nonparametric method: the Gaussian process. We can reconstruct w directly from the observational luminosity distance at every redshift. In the low redshift region, we find that about 700 GW events can give constraints on w(z) comparable to the constraints on a constant w from Planck data combined with type-Ia supernovae. Those results show that GWs, as standard sirens probing the cosmological parameters, can provide an independent and complementary alternative to current experiments.
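
    A minimal sketch of the first step described above (Markov chain Monte Carlo constraints on H0 and Omega_m from simulated luminosity distances in flat LambdaCDM) is given below. The number of mock events, the 5% distance errors, the fiducial cosmology and the proposal widths are placeholder assumptions rather than Einstein Telescope forecasts, and the Gaussian-process reconstruction of w(z) is not reproduced here.

    import numpy as np

    C = 299792.458  # speed of light, km/s

    def lum_dist(z, h0, om):
        """Luminosity distances (Mpc) in flat LambdaCDM for an array of redshifts."""
        zgrid = np.linspace(0.0, z.max(), 500)
        ez = np.sqrt(om * (1.0 + zgrid) ** 3 + (1.0 - om))
        # comoving distance on the grid by cumulative trapezoidal integration
        dc = np.concatenate(([0.0], np.cumsum(0.5 * (1.0 / ez[1:] + 1.0 / ez[:-1]) * np.diff(zgrid))))
        return (1.0 + z) * (C / h0) * np.interp(z, zgrid, dc)

    # Mock catalogue of standard sirens (illustrative numbers only).
    rng = np.random.default_rng(1)
    n_events, h0_true, om_true = 500, 70.0, 0.30
    z_obs = rng.uniform(0.05, 2.0, n_events)
    d_true = lum_dist(z_obs, h0_true, om_true)
    sigma_d = 0.05 * d_true                      # assumed 5% distance errors
    d_obs = d_true + rng.normal(0.0, sigma_d)

    def log_post(theta):
        h0, om = theta
        if not (50.0 < h0 < 90.0 and 0.05 < om < 0.6):
            return -np.inf
        model = lum_dist(z_obs, h0, om)
        return -0.5 * np.sum(((d_obs - model) / sigma_d) ** 2)

    # Plain Metropolis-Hastings sampler.
    theta = np.array([72.0, 0.25])
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [0.3, 0.01])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])               # discard burn-in
    print("H0      = %.2f +/- %.2f" % (chain[:, 0].mean(), chain[:, 0].std()))
    print("Omega_m = %.3f +/- %.3f" % (chain[:, 1].mean(), chain[:, 1].std()))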

  16. Cosmological Simulations with Scale-Free Initial Conditions. I. Adiabatic Hydrodynamics

    International Nuclear Information System (INIS)

    Owen, J.M.; Weinberg, D.H.; Evrard, A.E.; Hernquist, L.; Katz, N.

    1998-01-01

    We analyze hierarchical structure formation based on scale-free initial conditions in an Einstein-de Sitter universe, including a baryonic component with Ω_bary = 0.05. We present three independent, smoothed particle hydrodynamics (SPH) simulations, performed at two resolutions (32³ and 64³ dark matter and baryonic particles) and with two different SPH codes (TreeSPH and P3MSPH). Each simulation is based on identical initial conditions, which consist of Gaussian-distributed initial density fluctuations that have a power spectrum P(k) ∝ k⁻¹. The baryonic material is modeled as an ideal gas subject only to shock heating and adiabatic heating and cooling; radiative cooling and photoionization heating are not included. The evolution is expected to be self-similar in time, and under certain restrictions we identify the expected scalings for many properties of the distribution of collapsed objects in all three realizations. The distributions of dark matter masses, baryon masses, and mass- and emission-weighted temperatures scale quite reliably. However, the density estimates in the central regions of these structures are determined by the degree of numerical resolution. As a result, mean gas densities and Bremsstrahlung luminosities obey the expected scalings only when calculated within a limited dynamic range in density contrast. The temperatures and luminosities of the groups show tight correlations with the baryon masses, which we find can be well represented by power laws. The Press-Schechter (PS) approximation predicts the distribution of group dark matter and baryon masses fairly well, though it tends to overestimate the baryon masses. Combining the PS mass distribution with the measured relations for T(M) and L(M) predicts the temperature and luminosity distributions fairly accurately, though there are some discrepancies at high temperatures/luminosities. In general the three simulations agree well for the properties of resolved groups, where a group
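
    For the scale-free case analysed here, the Press-Schechter comparison is particularly compact, since P(k) ∝ k^n implies σ(M) ∝ M^(-(n+3)/6). The sketch below evaluates the PS mass function for n = -1; the mean density, the normalisation mass M_star and the mass range are arbitrary illustrative choices, not values from the simulations.

    import numpy as np

    # Press-Schechter mass function for scale-free initial conditions P(k) ~ k^n
    # in an Einstein-de Sitter background.
    n = -1.0                      # spectral index used in the simulations
    delta_c = 1.686               # spherical-collapse threshold (EdS value)
    rho_mean = 2.78e11            # assumed mean matter density, M_sun/(Mpc/h)^3 for Omega_m = 1

    def sigma_of_m(m, m_star=1.0e14):
        """RMS fluctuation sigma(M) for a power-law spectrum, with sigma(M_star) = delta_c."""
        return delta_c * (m / m_star) ** (-(n + 3.0) / 6.0)

    def press_schechter(m):
        """Comoving number density dn/dlnM predicted by the Press-Schechter formula."""
        sig = sigma_of_m(m)
        nu = delta_c / sig
        dlnsig_dlnm = (n + 3.0) / 6.0     # |d ln sigma / d ln M| for a power-law spectrum
        return np.sqrt(2.0 / np.pi) * (rho_mean / m) * nu * dlnsig_dlnm * np.exp(-0.5 * nu ** 2)

    masses = np.logspace(12, 15.5, 8)     # M_sun/h
    for m, dn in zip(masses, press_schechter(masses)):
        print(f"M = {m:9.2e} M_sun/h   dn/dlnM = {dn:9.3e} (h/Mpc)^3")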

  17. Monte Carlo simulation of the resolution volume for the SEQUOIA spectrometer

    Directory of Open Access Journals (Sweden)

    Granroth G.E.

    2015-01-01

    Monte Carlo ray tracing simulations of direct geometry spectrometers have been particularly useful in instrument design and characterization. However, these tools can also be useful for experiment planning and analysis. To this end, the McStas Monte Carlo ray tracing model of SEQUOIA, the fine resolution Fermi chopper spectrometer at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL), has been modified to include the time of flight resolution sample and detector components. With these components, the resolution ellipsoid can be calculated for any detector pixel and energy bin of the instrument. The simulation is split into two pieces. First, the incident beamline up to the sample is simulated for 1 × 10¹¹ neutron packets (4 days on 30 cores). This provides a virtual source for the backend that includes the resolution sample and monitor components. Next, a series of detector and energy pixels are computed in parallel. It takes on the order of 30 s to calculate a single resolution ellipsoid on a single core. Python scripts have been written to transform the ellipsoid into the space of an oriented single crystal, and to characterize the ellipsoid in various ways. Though this tool is under development as a planning tool, we have successfully used it to provide the resolution function for convolution with theoretical models. Specifically, theoretical calculations of the spin waves in YFeO3 were compared to measurements taken on SEQUOIA. Though the overall features of the spectra can be explained while neglecting resolution effects, the variation in intensity of the modes is well described once the resolution is included. As this was a single sharp mode, the simulated half intensity value of the resolution ellipsoid was used to provide the resolution width. A description of the simulation, its use, and paths forward for this technique will be discussed.
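
    The ellipsoid characterisation itself can be mimicked generically: given Monte Carlo samples of the deviations (dQx, dQy, dQz, dE) of detected events from the nominal values of one detector pixel and energy bin, a Gaussian-equivalent resolution ellipsoid follows from the sample covariance. The synthetic event cloud below is a placeholder for McStas output, and the half-intensity widths assume a Gaussian profile.

    import numpy as np

    # Placeholder for Monte Carlo event output: deviations of each detected neutron
    # from the nominal (Qx, Qy, Qz, E) of a single pixel / energy bin.
    rng = np.random.default_rng(2)
    true_cov = np.diag([0.02, 0.05, 0.03, 0.5]) ** 2   # (1/Angstrom)^2 for Q, meV^2 for E
    events = rng.multivariate_normal(np.zeros(4), true_cov, size=20000)

    # Sample covariance defines the Gaussian-equivalent resolution ellipsoid.
    cov = np.cov(events, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)

    # Principal 1-sigma widths and the corresponding half-intensity (FWHM) widths.
    sigma_axes = np.sqrt(eigval)
    fwhm_axes = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_axes
    print("principal 1-sigma widths       :", np.round(sigma_axes, 4))
    print("principal half-intensity widths:", np.round(fwhm_axes, 4))
    print("first principal direction (Qx, Qy, Qz, E):", np.round(eigvec[:, 0], 3))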

  18. Observational cosmology

    International Nuclear Information System (INIS)

    Partridge, R.B.

    1977-01-01

    Some sixty years after the development of relativistic cosmology by Einstein and his colleagues, observations are finally beginning to have an important impact on our views of the Universe. The available evidence seems to support one of the simplest cosmological models, the hot Big Bang model. The aim of this paper is to assess the observational support for certain assumptions underlying the hot Big Bang model. These are that the Universe is isotropic and homogeneous on a large scale; that it is expanding from an initial state of high density and temperature; and that the proper theory to describe the dynamics of the Universe is unmodified General Relativity. The properties of the cosmic microwave background radiation and recent observations of the abundance of light elements, in particular, support these assumptions. Also examined here are the data bearing on the related questions of the geometry and the future of the Universe (is it ever-expanding, or fated to recollapse). Finally, some difficulties and faults of the standard model are discussed, particularly various aspects of the 'initial condition' problem. It appears that the simplest Big Bang cosmological model calls for a highly specific set of initial conditions to produce the presently observed properties of the Universe. (Auth.)

  19. Cosmological Constraints on Mirror Matter Parameters

    International Nuclear Information System (INIS)

    Wallemacq, Quentin; Ciarcelluti, Paolo

    2014-01-01

    Up-to-date estimates of the cosmological parameters are presented as a result of numerical simulations of cosmic microwave background and large scale structure, considering a flat Universe in which the dark matter is made entirely or partly of mirror matter, and the primordial perturbations are scalar adiabatic and in linear regime. A statistical analysis using the Markov Chain Monte Carlo method allows us to obtain constraints on the cosmological parameters. As a result, we show that a Universe with pure mirror dark matter is statistically equivalent to the case of an admixture with cold dark matter. The upper limits for the ratio of the temperatures of ordinary and mirror sectors are around 0.3 for both the cosmological models, which show the presence of a dominant fraction of mirror matter, 0.06 ≲ Ω_mirror h² ≲ 0.12.

  20. The Intergalactic Medium as a Cosmological Tool

    Energy Technology Data Exchange (ETDEWEB)

    Viel, Matteo, E-mail: viel@oats.inaf.i [INAF - Osservatorio Astronomico di Trieste, Via G.B. Tiepolo 11, I-34131 Trieste (Italy); INFN/National Institute for Nuclear Physics, Via Valerio 2, I-34127 Trieste (Italy)

    2009-10-15

    In this talk I will review the capabilities of high-resolution (UVES and Keck) and low resolution (Sloan Digital Sky Survey - SDSS) quasar (QSO) Lyman-alpha absorption spectra as cosmological tools to probe the dark matter distribution in the high redshift universe. I will first summarize the results in terms of cosmological parameters and then discuss consistency with the parameters derived from other large scale structure observables such as the Cosmic Microwave Background (CMB) and weak lensing surveys. When the Lyman-alpha forest data are combined with CMB data and the weak lensing results of the z-COSMOS survey the constraints are: σ_8 = 0.800 ± 0.023, n_s = 0.971 ± 0.011, Ω_m = 0.247 ± 0.016 (1σ error bars), in perfect agreement with the CMB results of WMAP year five alone. I will briefly address the importance of Lyman-alpha for constraining the neutrino mass fraction. Furthermore, I will present constraints on the mass of warm dark matter (WDM) particles derived from the Lyman-alpha flux power spectrum of 55 high-resolution HIRES Lyman-alpha forest spectra at z ≥ 2.0: m_WDM ≥ 1.2 keV (2σ) if the WDM consists of early decoupled thermal relics and m_WDM ≥ 5.6 keV (2σ) for sterile neutrinos. Adding the SDSS Lyman-alpha flux power spectrum at z ≥ 2.2 tightens these limits to m_WDM ≥ 4 keV and m_WDM ≥ 28 keV (2σ) for thermal relics and sterile neutrinos, respectively. These results improve previous findings by a factor of two and are currently the tightest constraints on the coldness of cold dark matter. Finally, I will discuss: i) recent results for a mixture of cold and warm dark matter and the constraints for sterile neutrinos as dark matter candidates in a physically motivated framework (resonant production); ii) perspectives of cross-correlating the Lyman-alpha forest with convergence maps of the cosmic microwave background; iii) fitting of the flux probability distribution function.

  1. Evolution of N/O ratios in galaxies from cosmological hydrodynamical simulations

    Science.gov (United States)

    Vincenzo, Fiorenzo; Kobayashi, Chiaki

    2018-04-01

    We study the redshift evolution of the gas-phase O/H and N/O abundances, both (i) for individual ISM regions within single spatially-resolved galaxies and (ii) when dealing with average abundances in the whole ISM of many unresolved galaxies. We make use of a cosmological hydrodynamical simulation including detailed chemical enrichment, which properly takes into account the variety of different stellar nucleosynthetic sources of O and N in galaxies. We identify 33 galaxies in the simulation, lying within dark matter halos with virial mass in the range 10¹¹ ≤ M_DM ≤ 10¹³ M⊙ and reconstruct how they evolved with redshift. For the local and global measurements, the observed increasing trend of N/O at high O/H can be explained, respectively, (i) as the consequence of metallicity gradients which have settled in the galaxy interstellar medium, where the innermost galactic regions have the highest O/H abundances and the highest N/O ratios, and (ii) as the consequence of an underlying average mass-metallicity relation that galaxies obey as they evolve across cosmic epochs, where - at any redshift - less massive galaxies have lower average O/H and N/O ratios than the more massive ones. We do not find a strong dependence on the environment. For both local and global relations, the predicted N/O-O/H relation is due to the mostly secondary origin of N in stars. We also predict that the O/H and N/O gradients in the galaxy interstellar medium gradually flatten as functions of redshift, with the average N/O ratios being strictly coupled with the galaxy star formation history. Because N production strongly depends on O abundances, we obtain a universal relation for the N/O-O/H abundance diagram whether we consider average abundances of many unresolved galaxies put together or many abundance measurements within a single spatially-resolved galaxy.

  2. Quantum propagation across cosmological singularities

    Science.gov (United States)

    Gielen, Steffen; Turok, Neil

    2017-05-01

    The initial singularity is the most troubling feature of the standard cosmology, which quantum effects are hoped to resolve. In this paper, we study quantum cosmology with conformal (Weyl) invariant matter. We show that it is natural to extend the scale factor to negative values, allowing a large, collapsing universe to evolve across a quantum "bounce" into an expanding universe like ours. We compute the Feynman propagator for Friedmann-Robertson-Walker backgrounds exactly, identifying curious pathologies in the case of curved (open or closed) universes. We then include anisotropies, fixing the operator ordering of the quantum Hamiltonian by imposing covariance under field redefinitions and again finding exact solutions. We show how complex classical solutions allow one to circumvent the singularity while maintaining the validity of the semiclassical approximation. The simplest isotropic universes sit on a critical boundary, beyond which there is qualitatively different behavior, with potential for instability. Additional scalars improve the theory's stability. Finally, we study the semiclassical propagation of inhomogeneous perturbations about the flat, isotropic case, at linear and nonlinear order, showing that, at least at this level, there is no particle production across the bounce. These results form the basis for a promising new approach to quantum cosmology and the resolution of the big bang singularity.

  3. High resolution crop growth simulation for identification of potential adaptation strategies under climate change

    Science.gov (United States)

    Kim, K. S.; Yoo, B. H.

    2016-12-01

    Impact assessment of climate change on crop production would facilitate planning of adaptation strategies. Because socio-environmental conditions would differ by local areas, it would be advantageous to assess potential adaptation measures at a specific area. The objectives of this study were to develop a crop growth simulation system at a very high spatial resolution, e.g., 30 m, and to assess different adaptation options including shift of planting date and use of different cultivars. The Decision Support System for Agrotechnology Transfer (DSSAT) model was used to predict yields of soybean and maize in Korea. Gridded data for climate and soil were used to prepare input data for the DSSAT model. Weather input data were prepared at the resolution of 30 m using bilinear interpolation from gridded climate scenario data. Those climate data were obtained from the Korea Meteorological Administration. Spatial resolution of temperature and precipitation was 1 km whereas that of solar radiation was 12.5 km. Soil series data at the 30 m resolution were obtained from the soil database operated by the Rural Development Administration, Korea. The SOL file, which is a soil input file for the DSSAT model, was prepared using physical and chemical properties of a given soil series, which were available from the soil database. Crop yields were predicted for potential adaptation options based on planting date and cultivar. For example, 10 planting dates and three cultivars were used to identify ideal management options for climate change adaptation. In the prediction of maize yield, a combination of 20 planting dates and two cultivars was used as management options. Predicted crop yields differed by site even within a relatively small region. For example, the maximum of the average yields for the 2001-2010 seasons differed by site within a county whose area is 520 km² (Fig. 1). There was also spatial variation in the ideal management option in the region (Fig. 2). These results suggested that local
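
    The 30 m weather inputs described above are produced by bilinear interpolation of coarser gridded fields, which can be sketched as follows; the 1 km grid, the coordinates and the synthetic temperature values are invented stand-ins for the Korean gridded climate data.

    import numpy as np

    # Invented 1 km temperature grid (degrees C) over a small region, coordinates in metres.
    x_coarse = np.arange(0.0, 10000.0, 1000.0)
    y_coarse = np.arange(0.0, 10000.0, 1000.0)
    rng = np.random.default_rng(3)
    t_coarse = 20.0 + rng.normal(0.0, 1.0, size=(y_coarse.size, x_coarse.size))

    def bilinear(xq, yq, xg, yg, field):
        """Bilinear interpolation of field[y, x] defined on the 1D grids xg, yg."""
        i = np.clip(np.searchsorted(xg, xq) - 1, 0, xg.size - 2)
        j = np.clip(np.searchsorted(yg, yq) - 1, 0, yg.size - 2)
        tx = (xq - xg[i]) / (xg[i + 1] - xg[i])
        ty = (yq - yg[j]) / (yg[j + 1] - yg[j])
        return (field[j, i] * (1 - tx) * (1 - ty) + field[j, i + 1] * tx * (1 - ty)
                + field[j + 1, i] * (1 - tx) * ty + field[j + 1, i + 1] * tx * ty)

    # Target 30 m points, e.g. the cell centres of the crop-model grid.
    x_fine, y_fine = np.meshgrid(np.arange(15.0, 9000.0, 30.0),
                                 np.arange(15.0, 9000.0, 30.0))
    t_fine = bilinear(x_fine.ravel(), y_fine.ravel(), x_coarse, y_coarse, t_coarse)
    print("interpolated 30 m field shape:", t_fine.reshape(x_fine.shape).shape)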

  4. Refinement procedure for the image alignment in high-resolution electron tomography.

    Science.gov (United States)

    Houben, L; Bar Sadan, M

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive to the reference of marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Probabilistic Cosmological Mass Mapping from Weak Lensing Shear

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M. D.; Dawson, W. A. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Ng, K. Y. [University of California, Davis, Davis, CA 95616 (United States); Marshall, P. J. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94035 (United States); Meyers, J. E. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Bard, D. J., E-mail: schneider42@llnl.gov, E-mail: dstn@cmu.edu, E-mail: boutigny@in2p3.fr, E-mail: djbard@slac.stanford.edu, E-mail: jmeyers314@stanford.edu [National Energy Research Scientific Computing Center, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720-8150 (United States)

    2017-04-10

    We infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.
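
    The full hierarchical inference described above goes well beyond a short example, but the classical Kaiser-Squires inversion that such probabilistic mass maps are usually compared against fits in a few lines; the convergence field below is a synthetic, noise-free Gaussian "halo" rather than survey data.

    import numpy as np

    # Synthetic convergence map: a single Gaussian halo on a 256 x 256 flat-sky grid.
    n = 256
    x = np.arange(n) - n / 2
    xx, yy = np.meshgrid(x, x)
    kappa_true = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 15.0 ** 2))

    # Fourier-space wavenumbers and the spin-2 transfer function.
    k1 = np.fft.fftfreq(n)[None, :]
    k2 = np.fft.fftfreq(n)[:, None]
    ksq = k1 ** 2 + k2 ** 2
    ksq[0, 0] = 1.0                                   # avoid division by zero at k = 0
    d_hat = ((k1 ** 2 - k2 ** 2) + 2j * k1 * k2) / ksq

    # Forward model: complex shear gamma = gamma1 + i*gamma2 from convergence.
    gamma = np.fft.ifft2(d_hat * np.fft.fft2(kappa_true))

    # Kaiser-Squires inversion: convergence back from shear (up to the unconstrained mean).
    kappa_hat = np.conj(d_hat) * np.fft.fft2(gamma)
    kappa_hat[0, 0] = 0.0
    kappa_rec = np.fft.ifft2(kappa_hat).real

    residual = kappa_true - kappa_true.mean() - kappa_rec
    print("max |reconstruction residual| =", np.abs(residual).max())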

  6. Horizons of cosmology

    CERN Document Server

    Silk, Joseph

    2011-01-01

    Horizons of Cosmology: Exploring Worlds Seen and Unseen is the fourth title published in the Templeton Science and Religion Series, in which scientists from a wide range of fields distill their experience and knowledge into brief tours of their respective specialties. In this volume, highly esteemed astrophysicist Joseph Silk explores the vast mysteries and speculations of the field of cosmology in a way that balances an accessible style for the general reader and enough technical detail for advanced students and professionals. Indeed, while the p

  7. Computational complexity of the landscape II-Cosmological considerations

    Science.gov (United States)

    Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire

    2018-05-01

    We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.

  8. Dimensional cosmological principles

    International Nuclear Information System (INIS)

    Chi, L.K.

    1985-01-01

    The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle

  9. Development of local-scale high-resolution atmospheric dispersion model using large-eddy simulation. Part 3: turbulent flow and plume dispersion in building arrays

    Czech Academy of Sciences Publication Activity Database

    Nakayama, H.; Jurčáková, Klára; Nagai, H.

    2013-01-01

    Vol. 50, No. 5 (2013), pp. 503-519, ISSN 0022-3131 Institutional support: RVO:61388998 Keywords: local-scale high-resolution dispersion model * nuclear emergency response system * large-eddy simulation * spatially developing turbulent boundary layer flow Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.452, year: 2013

  10. A high-resolution regional reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.

    2015-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70 km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km), and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis, with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations, indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  11. The coupling of high-speed high resolution experimental data and LES through data assimilation techniques

    Science.gov (United States)

    Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.

    2017-11-01

    Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for the two data assimilation techniques. The third objective is to investigate how the frequency at which the data is assimilated affects the overall predictions.
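
    As a toy illustration of the nudging (Newtonian relaxation) technique mentioned above, the sketch below relaxes a chaotic Lorenz-63 "simulation" toward sparse, noisy observations of one variable. The model, relaxation time scale, observation interval and noise level are arbitrary stand-ins for the jet LES and the high-speed measurements.

    import numpy as np

    def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt, n_steps, obs_every, tau = 0.01, 4000, 20, 0.1
    rng = np.random.default_rng(4)

    # "Truth" run, used only to generate synthetic observations of x.
    truth, traj_truth, obs = np.array([1.0, 1.0, 1.0]), [], {}
    for k in range(n_steps):
        truth = truth + dt * lorenz_rhs(truth)
        traj_truth.append(truth.copy())
        if k % obs_every == 0:
            obs[k] = truth[0] + rng.normal(0.0, 0.2)

    # Free run and nudged run starting from a wrong initial state.
    free = np.array([5.0, -5.0, 20.0])
    nudged = free.copy()
    last_obs, err_free, err_nudged = None, [], []
    for k in range(n_steps):
        free = free + dt * lorenz_rhs(free)
        if k in obs:
            last_obs = obs[k]
        tend = lorenz_rhs(nudged)
        if last_obs is not None:
            tend[0] += (last_obs - nudged[0]) / tau   # relax x toward the latest observation
        nudged = nudged + dt * tend
        err_free.append(np.linalg.norm(free - traj_truth[k]))
        err_nudged.append(np.linalg.norm(nudged - traj_truth[k]))

    print("mean state error, free run  :", round(float(np.mean(err_free[2000:])), 2))
    print("mean state error, nudged run:", round(float(np.mean(err_nudged[2000:])), 2))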

  12. Systematic Biases in Weak Lensing Cosmology with the Dark Energy Survey

    Energy Technology Data Exchange (ETDEWEB)

    Samuroff, Simon [Manchester U.

    2017-01-01

    This thesis sets out a practical guide to applying shear measurements as a cosmological tool. We first present one of two science-ready galaxy shape catalogues from Year 1 of the Dark Energy Survey (DES Y1), which covers 1500 square degrees in four bands $griz$, with a median redshift of $0.59$. We describe the shape measurement process implemented by the DES Y1 im3shape catalogue, which contains 21.9 million high-quality $r$-band bulge/disc fits. In Chapter 3 a new suite of image simulations, referred to as Hoopoe, is presented. The Hoopoe dataset is tailored to DES Y1 and includes realistic blending, spatial masks and variation in the point spread function. We derive shear corrections, which we show are robust to changes in calibration method, galaxy binning and variance within the simulated dataset. Sources of systematic uncertainty in the simulation-based shear calibration are discussed, leading to a final estimate of 0.025 for the $1\sigma$ uncertainty in the residual multiplicative bias after calibration. Chapter 4 describes an extension of the analysis on the Hoopoe simulations into a detailed investigation of the impact of galaxy neighbours on shape measurement and shear cosmology. Four mechanisms by which neighbours can have a non-negligible influence on shear measurement are identified. These effects, if ignored, would contribute a net multiplicative bias of $m \sim 0.03 - 0.09$ in DES Y1, though the precise impact will depend on both the measurement code and the selection cuts applied. We use the cosmological inference pipeline of DES Y1 to explore the cosmological implications of neighbour bias and show that omitting blending from the calibration simulation for DES Y1 would bias the inferred clustering amplitude $S_8 \equiv \sigma_8 (\Omega_{\rm m}/0.3)^{0.5}$ by $1.5\sigma$ towards low values. Finally, we use the Hoopoe simulations to test the effect of neighbour-induced spatial correlations in the multiplicative bias. We find the cosmological
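
    The multiplicative and additive biases referred to throughout are conventionally defined through g_obs = (1 + m) g_true + c, and estimating them from an image-simulation catalogue reduces to a linear fit of measured versus input shear. The sketch below does this on synthetic numbers; the injected bias values, shape-noise level and catalogue size are invented, and real calibrations bin this fit by galaxy properties.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic calibration catalogue: input shears and noisy measured responses.
    n_gal = 2_000_000
    g_true = rng.uniform(-0.05, 0.05, n_gal)
    m_true, c_true = 0.04, 1.0e-4                 # injected biases (invented values)
    shape_noise = rng.normal(0.0, 0.25, n_gal)    # per-galaxy intrinsic ellipticity noise
    g_obs = (1.0 + m_true) * g_true + c_true + shape_noise

    # Least-squares fit of g_obs = (1 + m) g_true + c.
    design = np.vstack([g_true, np.ones_like(g_true)]).T
    (slope, intercept), *_ = np.linalg.lstsq(design, g_obs, rcond=None)
    m_hat, c_hat = slope - 1.0, intercept

    # Rough 1-sigma uncertainty on m from the noise level and the shear lever arm.
    sigma_m = shape_noise.std() / (g_true.std() * np.sqrt(n_gal))
    print(f"m = {m_hat:+.4f} +/- {sigma_m:.4f}  (input {m_true:+.4f})")
    print(f"c = {c_hat:+.2e}             (input {c_true:+.2e})")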

  13. WRF high resolution dynamical downscaling of ERA-Interim for Portugal

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Pedro M.M. [University of Lisbon, Instituto Dom Luiz, Lisbon (Portugal); Faculdade de Ciencias da Universidade de Lisboa, Lisbon (Portugal); Cardoso, Rita M.; Miranda, Pedro M.A.; Medeiros, Joana de [University of Lisbon, Instituto Dom Luiz, Lisbon (Portugal); Belo-Pereira, Margarida; Espirito-Santo, Fatima [Instituto de Meteorologia, Lisbon (Portugal)

    2012-11-15

    This study proposes a dynamically downscaled climatology of Portugal, produced by a high resolution (9 km) WRF simulation, forced by 20 years of ERA-Interim reanalysis (1989-2008), nested in an intermediate domain with 27 km of resolution. The Portuguese mainland is characterized by large precipitation gradients, with observed mean annual precipitation ranging from about 400 to over 2,200 mm, with a very wet northwest and rather dry southeast, largely explained by orographic processes. Model results are compared with all available stations with continuous records, comprising daily information in 32 stations for temperature and 308 for precipitation, through the computation of mean climatologies, standard statistical errors on daily to seasonal timescales, and distributions of extreme events. Results show that WRF at 9 km outperforms ERA-Interim in all analyzed variables, with good results in the representation of the annual cycles in each region. The biases of minimum and maximum temperature are reduced, with improvement of the description of temperature variability at the extreme range of its distribution. The largest gain of the high resolution simulations is visible in the rainiest regions of Portugal, where orographic enhancement is crucial. These improvements are striking in the high ranking percentiles in all seasons, describing extreme precipitation events. WRF results at 9 km compare favorably with published results supporting its use as a high-resolution regional climate model. This higher resolution allows a better representation of extreme events that are of major importance to develop mitigation/adaptation strategies by policy makers and downstream users of regional climate models in applications such as flash floods or heat waves. (orig.)

  14. Approximate Bayesian computation for forward modeling in cosmology

    International Nuclear Information System (INIS)

    Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar

    2015-01-01

    Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release
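
    A stripped-down version of the idea, on the same kind of Gaussian toy model used to test the implementation, is shown below. For brevity it uses plain rejection sampling rather than the Population Monte Carlo scheme, and the summary statistics, prior ranges, distance scaling and tolerance are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(6)

    # "Observed" data from a Gaussian toy model with unknown mean and width.
    mu_true, sigma_true, n_data = 1.0, 2.0, 500
    data = rng.normal(mu_true, sigma_true, n_data)

    def summaries(x):
        """Compress a data set into summary statistics (here: mean and std)."""
        return np.array([x.mean(), x.std()])

    s_obs = summaries(data)
    # Diagonal, Mahalanobis-like scaling of the two summaries (their standard errors).
    scale = np.array([data.std() / np.sqrt(n_data), data.std() / np.sqrt(2.0 * n_data)])

    def distance(s_sim):
        return np.sqrt(np.sum(((s_sim - s_obs) / scale) ** 2))

    # Rejection ABC: draw parameters from the prior, simulate mock data, and keep
    # the draws whose simulated summaries fall within the tolerance.
    n_draws, tolerance = 50_000, 3.0
    mu_prior = rng.uniform(-5.0, 5.0, n_draws)
    sigma_prior = rng.uniform(0.1, 5.0, n_draws)
    accepted = [(mu, sig) for mu, sig in zip(mu_prior, sigma_prior)
                if distance(summaries(rng.normal(mu, sig, n_data))) < tolerance]

    accepted = np.array(accepted)
    print(f"acceptance rate: {len(accepted) / n_draws:.3%}")
    print("approximate posterior mean of (mu, sigma):", np.round(accepted.mean(axis=0), 3))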

  15. Cosmological phase transitions

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1987-01-01

    If the universe started from conditions of high temperature and density, there should have been a series of phase transitions associated with spontaneous symmetry breaking. The cosmological phase transitions could have observable consequences in the present Universe. Some of the consequences, including the formation of topological defects and cosmological inflation, are reviewed here. One of the most important tools in building particle physics models is the use of spontaneous symmetry breaking (SSB). The proposal that there are underlying symmetries of nature that are not manifest in the vacuum is a crucial link in the unification of forces. Of particular interest for cosmology is the expectation that at the high temperatures of the big bang, symmetries broken today will be restored, and that there are phase transitions to the broken state. The possibility that topological defects will be produced in the transition is the subject of this section. The possibility that the Universe will undergo inflation in a phase transition will be the subject of the next section. Before discussing the creation of topological defects in the phase transition, some general aspects of high-temperature restoration of symmetry and the development of the phase transition will be reviewed. 29 references, 1 figure, 1 table

  16. Refinement procedure for the image alignment in high-resolution electron tomography

    International Nuclear Information System (INIS)

    Houben, L.; Bar Sadan, M.

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive to the reference of marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. -- Highlights: → Alignment procedure for electron tomography based on iterative tomogram contrast optimisation. → Marker-free, independent of object, little user interaction. → Accuracy competitive with fiducial marker methods and suited for high-resolution tomography.

  17. Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain

    Science.gov (United States)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil-moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better or comparable to the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields
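
    The spectral nudging referred to above amounts to relaxing only the long-wavelength Fourier components of the simulated fields toward the driving analyses, leaving the model free at small scales. A one-dimensional caricature is given below; the domain size, cutoff wavelength and relaxation coefficient are made-up parameters rather than the values used in the study.

    import numpy as np

    def spectral_nudge(field, driver, dx, cutoff_km, alpha):
        """Relax wavelengths longer than cutoff_km toward the driving field.

        field, driver : 1D arrays on the same grid
        dx            : grid spacing in km
        alpha         : relaxation strength per call (0 = none, 1 = full replacement)"""
        k = np.fft.rfftfreq(field.size, d=dx)        # cycles per km
        large_scale = k <= 1.0 / cutoff_km           # keep only the long wavelengths
        f_hat, d_hat = np.fft.rfft(field), np.fft.rfft(driver)
        f_hat[large_scale] += alpha * (d_hat[large_scale] - f_hat[large_scale])
        return np.fft.irfft(f_hat, n=field.size)

    # Toy 1D "temperature" fields on a 3000 km domain with 10 km spacing.
    dx, nx = 10.0, 300
    x = np.arange(nx) * dx
    rng = np.random.default_rng(7)
    driver = 10.0 * np.sin(2.0 * np.pi * x / 3000.0)                  # large-scale driving field
    model = (8.0 * np.sin(2.0 * np.pi * x / 3000.0 + 0.5)             # drifted large scales
             + 2.0 * np.sin(2.0 * np.pi * x / 150.0)                  # model-generated small scales
             + rng.normal(0.0, 0.3, nx))

    nudged = spectral_nudge(model, driver, dx, cutoff_km=1000.0, alpha=0.5)
    # Large scales move toward the driver while the small-scale structure survives.
    print("rms(model - driver) :", round(float(np.sqrt(np.mean((model - driver) ** 2))), 2))
    print("rms(nudged - driver):", round(float(np.sqrt(np.mean((nudged - driver) ** 2))), 2))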

  18. The Megamaser Cosmology Project. X. High-resolution Maps and Mass Constraints for SMBHs

    Science.gov (United States)

    Zhao, W.; Braatz, J. A.; Condon, J. J.; Lo, K. Y.; Reid, M. J.; Henkel, C.; Pesce, D. W.; Greene, J. E.; Gao, F.; Kuo, C. Y.; Impellizzeri, C. M. V.

    2018-02-01

    We present high-resolution (sub-mas) Very Long Baseline Interferometry maps of nuclear H2O megamasers for seven galaxies. In UGC 6093, the well-aligned systemic masers and high-velocity masers originate in an edge-on, flat disk and we determine the mass of the central supermassive black hole (SMBH) to be M_SMBH = 2.58 × 10⁷ M_⊙ (±7%). For J1346+5228, the distribution of masers is consistent with a disk, but the faint high-velocity masers are only marginally detected, and we constrain the mass of the SMBH to be in the range (1.5–2.0) × 10⁷ M_⊙. The origin of the masers in Mrk 1210 is less clear, as the systemic and high-velocity masers are misaligned and show a disorganized velocity structure. We present one possible model in which the masers originate in a tilted, warped disk, but we do not rule out the possibility of other explanations including outflow masers. In NGC 6926, we detect a set of redshifted masers, clustered within a parsec of each other, and a single blueshifted maser about 4.4 pc away, an offset that would be unusually large for a maser disk system. Nevertheless, if it is a disk system, we estimate the enclosed mass to be M_SMBH < 4.8 × 10⁷ M_⊙. For NGC 5793, we detect redshifted masers spaced about 1.4 pc from a clustered set of blueshifted features. The orientation of the structure supports a disk scenario as suggested by Hagiwara et al. We estimate the enclosed mass to be M_SMBH < 1.3 × 10⁷ M_⊙. For NGC 2824 and J0350‑0127, the masers may be associated with parsec- or subparsec-scale jets or outflows.
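
    For an edge-on Keplerian maser disk, each high-velocity feature gives the enclosed mass directly through M = v^2 r / G. The sketch below applies this to invented rotation-curve points; the actual Megamaser Cosmology Project analysis fits a full (possibly warped) disk model to the maser positions, velocities and accelerations.

    import numpy as np

    G = 6.674e-11        # m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg
    PC = 3.086e16        # m

    # Invented high-velocity maser features: orbital radius (pc) and rotation speed (km/s).
    radius_pc = np.array([0.10, 0.15, 0.20, 0.30, 0.40])
    v_rot_kms = np.array([1050.0, 860.0, 745.0, 610.0, 525.0])

    # Keplerian enclosed-mass estimate from each feature: M = v^2 r / G.
    m_enclosed = (v_rot_kms * 1.0e3) ** 2 * (radius_pc * PC) / G / M_SUN
    print("per-feature enclosed mass [1e7 M_sun]:", np.round(m_enclosed / 1.0e7, 2))
    print("mean enclosed mass        [1e7 M_sun]:", round(float(m_enclosed.mean() / 1.0e7), 2))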

  19. Effects of display resolution and size on primary diagnosis of chest images using a high-resolution electronic work station

    International Nuclear Information System (INIS)

    Fuhrman, C.R.; Cooperstein, L.A.; Herron, J.; Good, W.F.; Good, B.; Gur, D.; Maitz, G.; Tabor, E.; Hoy, R.J.

    1987-01-01

    To evaluate the acceptability of electronically displayed planar images, the authors have developed a high-resolution work station. This system utilizes a high-resolution film digitizer (100-micron resolution) interfaced to a mainframe computer and two high-resolution (2,048 X 2,048) display devices (Azuray). In a clinically simulated multiobserver blind study (19 cases and five observers), a predetermined series of reading sessions is stored on magnetic disk and is transferred to the displays while the preceding set of images is being reviewed. Images can be linearly processed on the fly into 2,000 X 2,000 full resolution, 1,000 X 1,000 minified display, or 1,000 X 1,000 interpolated for full-size display. Results of the study indicate that radiologists accept but do not like significant minification (more than ×2), and they rate 2,000 X 2,000 images as having better diagnostic quality than 1,000 X 1,000 images

  20. Toward an ultra-high resolution community climate system model for the BlueGene platform

    International Nuclear Information System (INIS)

    Dennis, John M; Jacob, Robert; Vertenstein, Mariana; Craig, Tony; Loy, Raymond

    2007-01-01

    Global climate models need to simulate several small, regional-scale processes which affect the global circulation in order to accurately simulate the climate. This is particularly important in the ocean where small scale features such as oceanic eddies are currently represented with ad hoc parameterizations. There is also a need for higher resolution to provide climate predictions at small, regional scales. New high-performance computing platforms such as the IBM BlueGene can provide the necessary computational power to perform ultra-high resolution climate model integrations. We have begun to investigate the scaling of the individual components of the Community Climate System Model to prepare it for integrations on BlueGene and similar platforms. Our investigations show that it is possible to successfully utilize O(32K) processors. We describe the scalability of five models: the Parallel Ocean Program (POP), the Community Ice CodE (CICE), the Community Land Model (CLM), and the new CCSM sequential coupler (CPL7) which are components of the next generation Community Climate System Model (CCSM); as well as the High-Order Method Modeling Environment (HOMME) which is a dynamical core currently being evaluated within the Community Atmospheric Model. For our studies we concentrate on 1/10° resolution for the CICE, POP, and CLM models and 1/4° resolution for HOMME. The ability to simulate high resolutions on the massively parallel petascale systems that will dominate high-performance computing for the foreseeable future is essential to the advancement of climate science

  1. Parameterized post-Newtonian cosmology

    International Nuclear Information System (INIS)

    Sanghai, Viraj A A; Clifton, Timothy

    2017-01-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)

  2. Parameterized post-Newtonian cosmology

    Science.gov (United States)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).

  3. 3D detectors with high space and time resolution

    Science.gov (United States)

    Loi, A.

    2018-01-01

    For future high luminosity LHC experiments it will be important to develop new detector systems with increased space and time resolution and also better radiation hardness in order to operate in a high luminosity environment. A possible technology which could give such performance is 3D silicon detectors. This work explores possible pixel geometries by designing and simulating different solutions, using Sentaurus Technology Computer Aided Design (TCAD) as a design and simulation tool, and analysing their performances. A key factor during the selection was the generated electric field and the carrier velocity inside the active area of the pixel.

  4. SPIRAL2/DESIR high resolution mass separator

    Energy Technology Data Exchange (ETDEWEB)

    Kurtukian-Nieto, T., E-mail: kurtukia@cenbg.in2p3.fr [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Baartman, R. [TRIUMF, 4004 Wesbrook Mall, Vancouver B.C., V6T 2A3 (Canada); Blank, B.; Chiron, T. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Davids, C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Delalee, F. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Duval, M. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); El Abbeir, S.; Fournier, A. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Lunney, D. [CSNSM-IN2P3-CNRS, Université de Paris Sud, F-91405 Orsay (France); Méot, F. [BNL, Upton, Long Island, New York (United States); Serani, L. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Stodel, M.-H.; Varenne, F. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); and others

    2013-12-15

    DESIR is the low-energy part of the SPIRAL2 ISOL facility under construction at GANIL. DESIR includes a high-resolution mass separator (HRS) with a designed resolving power m/Δm of 31,000 for a 1 π-mm-mrad beam emittance, obtained using a high-intensity beam cooling device. The proposed design consists of two 90-degree magnetic dipoles, complemented by electrostatic quadrupoles, sextupoles, and a multipole, arranged in a symmetric configuration to minimize aberrations. A detailed description of the design and results of extensive simulations are given.

  5. The evolution of extreme precipitations in high resolution scenarios over France

    Science.gov (United States)

    Colin, J.; Déqué, M.; Somot, S.

    2009-09-01

    Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issues raised in the regional climate research field. This study presents the results of a high-resolution (12 km) scenario run over France with the limited-area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitation. The runs were conducted in the framework of the ANR-SCAMPEI national project on high-resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of a LAM: the size of the area on which the model is run. In particular, we address the issue of whether a relatively small domain allows the model to create its own small-scale processes. Indeed, high-resolution scenarios cannot be run on large domains because of the computation time, so this preliminary question must be answered before producing and analyzing such scenarios. To do so, we worked in the framework of a "big brother" experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the "big brother" reference of our experiment. It has been validated against the CRU climatology. We then filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse-resolution lateral boundary conditions (LBC). We carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model: FRA50, run over a small domain (2000 x 2000 km, centered over France); EUR50, run over a larger domain (5000 x 5000 km, also centered over France); and EUR50-SN, run over the large domain using spectral nudging. Considering the facts that ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics

  6. Cosmic Microwave Background: cosmology from the Planck perspective

    Science.gov (United States)

    De Zotti, Gianfranco

    2017-08-01

    The Planck mission has measured the angular anisotropies in the temperature of the Cosmic Microwave Background (CMB) with an accuracy set by fundamental limits. These data have allowed the determination of the cosmological parameters with extraordinary precision. These lecture notes present an overview of the mission and of its cosmological results. After a short history of the project, the Planck instruments and their performances are introduced and compared with those of the WMAP satellite. Next the approach to data analysis adopted by the Planck collaboration is described. This includes the techniques for dealing with the contamination of the CMB signal by astrophysical foreground emissions and for determining cosmological parameters from the analysis of the CMB power spectrum. The power spectra measured by Planck were found to be very well described by the standard spatially flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations. This is a remarkable result, considering that the six parameters account for the roughly 2500 independent power spectrum values measured by Planck (the power was measured for about 2500 multipoles), not to mention the roughly one trillion science samples produced. A large grid of cosmological models was also explored, using a range of additional astrophysical data sets in addition to Planck and high-resolution CMB data from ground-based experiments. On the whole, the Planck analysis of the CMB power spectrum allowed 16 parameters to be varied and determined. Many other interesting parameters were derived from them. Although Planck was not initially designed to carry out high accuracy measurements of the CMB polarization anisotropies, its capabilities in this respect were significantly enhanced during its development. The quality of its polarization measurements has exceeded all original expectations. Planck's polarisation data confirmed and improved the understanding of the details of the cosmological

  7. Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil

    Science.gov (United States)

    Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo

    2013-04-01

    The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997, and has gone through several upgrades over these years. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes two Brazilian cities, Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. Besides, the region suffers from frequent flood and landslide events, so accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high-resolution simulations in this complex area. Verification of the model runs uses observations taken from the nuclear power plant and higher-resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km run is a second nesting: it is driven by the Eta-15km, which is in turn driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface-layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds and verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive

  8. High-resolution projections of surface water availability for Tasmania, Australia

    Directory of Open Access Journals (Sweden)

    J. C. Bennett

    2012-05-01

    Changes to streamflows caused by climate change may have major impacts on the management of water for hydro-electricity generation and agriculture in Tasmania, Australia. We describe changes to Tasmanian surface water availability from 1961–1990 to 2070–2099 using high-resolution simulations. Six fine-scale (∼10 km²) simulations of daily rainfall and potential evapotranspiration are generated with the CSIRO Conformal Cubic Atmospheric Model (CCAM), a variable-resolution regional climate model (RCM). These variables are bias-corrected with quantile mapping (a minimal sketch of this step follows this record) and used as direct inputs to the hydrological models AWBM, IHACRES, Sacramento, SIMHYD and SMAR-G to project streamflows.

    The performance of the hydrological models is assessed against 86 streamflow gauges across Tasmania. The SIMHYD model is the least biased (median bias = −3%) while IHACRES has the largest bias (median bias = −22%). We find that the hydrological models that best simulate observed streamflows produce similar streamflow projections.

    There is much greater variation in projections between RCM simulations than between hydrological models. Marked decreases of up to 30% are projected for annual runoff in central Tasmania, while runoff is generally projected to increase in the east. Daily streamflow variability is projected to increase for most of Tasmania, consistent with increases in rainfall intensity. Inter-annual variability of streamflows is projected to increase across most of Tasmania.

    This is the first major Australian study to use high-resolution bias-corrected rainfall and potential evapotranspiration projections as direct inputs to hydrological models. Our study shows that these simulations are capable of producing realistic streamflows, allowing for increased confidence in assessing future changes to surface water variability.
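
    The quantile-mapping bias correction named in this record is a standard, generic step. Below is a minimal empirical sketch, assuming a simple quantile-to-quantile transfer built from historical model and observed samples; the variable names, the 100-quantile discretisation, and the toy gamma-distributed rainfall are illustrative assumptions, not the authors' implementation.

      # Minimal empirical quantile mapping (illustration only, not the study's code).
      import numpy as np

      def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
          """Map model values onto the observed distribution via empirical CDFs."""
          q = np.linspace(0.0, 1.0, n_quantiles)
          model_q = np.quantile(model_hist, q)   # model climatology quantiles
          obs_q = np.quantile(obs_hist, q)       # observed climatology quantiles
          # Rank each value in the model climatology, then read off the observed
          # value at the same rank (linear interpolation between quantiles).
          ranks = np.interp(model_future, model_q, q)
          return np.interp(ranks, q, obs_q)

      # Toy usage: correct a wet bias in simulated daily rainfall.
      rng = np.random.default_rng(0)
      obs = rng.gamma(shape=0.8, scale=6.0, size=3000)   # "observed" rainfall
      mod = rng.gamma(shape=0.8, scale=9.0, size=3000)   # biased "model" rainfall
      corrected = quantile_map(mod, obs, mod)
      print(round(mod.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))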

  9. Religion, theology and cosmology

    Directory of Open Access Journals (Sweden)

    John T. Fitzgerald

    2013-10-01

    Cosmology is one of the predominant research areas of the contemporary world. Advances in modern cosmology have prompted renewed interest in the intersections between religion, theology and cosmology. This article, which is intended as a brief introduction to the series of studies on theological cosmology in this journal, identifies three general areas of theological interest stemming from the modern scientific study of cosmology: contemporary theology and ethics; cosmology and world religions; and ancient cosmologies. These intersections raise important questions about the relationship of religion and cosmology, which has recently been addressed by William Scott Green and is the focus of the final portion of the article.

  10. An introduction to cosmology

    CERN Document Server

    Narlikar, Jayant Vishnu

    2002-01-01

    The third edition of this successful textbook is fully updated and includes important recent developments in cosmology. It begins with an introduction to cosmology and general relativity, and goes on to cover the mathematical models of standard cosmology. The physical aspects of cosmology, including primordial nucleosynthesis, the astroparticle physics of inflation, and the current ideas on structure formation are discussed. Alternative models of cosmology are reviewed, including the model of Quasi-Steady State Cosmology, which has recently been proposed as an alternative to Big Bang Cosmology.

  11. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    Science.gov (United States)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2014-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  12. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Directory of Open Access Journals (Sweden)

    D. Heinzeller

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ), with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980–2010 and the two future periods 2020–2050 and 2070–2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the adjacent Soudano region to the north and almost no change in

  13. Distributed Modeling with Parflow using High Resolution LIDAR Data

    Science.gov (United States)

    Barnes, M.; Welty, C.; Miller, A. J.

    2012-12-01

    Urban landscapes provide a challenging domain for the application of distributed surface-subsurface hydrologic models. Engineered water infrastructure and altered topography influence surface and subsurface flow paths, yet these effects are difficult to quantify. In this work, a parallel, distributed watershed model (ParFlow) is used to simulate urban watersheds using spatial data at the meter and sub-meter scale. An approach using GRASS GIS (Geographic Resources Analysis Support System) is presented that incorporates these data to construct inputs for the ParFlow simulation. LIDAR topography provides the basis for the fully coupled overland flow simulation. Methods to address real discontinuities in the urban land-surface for use with the grid-based kinematic wave approximation used in ParFlow are presented. The spatial distribution of impervious surface is delineated accurately from high-resolution land cover data; hydrogeological properties are specified from literature values. An application is presented for part of the Dead Run subwatershed of the Gwynns Falls in Baltimore County, MD. The domain is approximately 3 square kilometers, and includes a highly impacted urban stream, a major freeway, and heterogeneous urban development represented at a 10-m horizontal resolution and 1-m vertical resolution. This resolution captures urban features such as building footprints and highways at an appropriate scale. The Dead Run domain provides an effective test case for ParFlow application at the fine scale in an urban environment. Preliminary model runs employ a homogeneous subsurface domain with no-flow boundaries. Initial results reflect the highly articulated topography of the road network and the combined influence of surface runoff from impervious surfaces and subsurface flux toward the channel network. Subsequent model runs will include comparisons of the coupled surface-subsurface response of alternative versions of the Dead Run domain with and without impervious

  14. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    Science.gov (United States)

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. the difference in percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m²). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.
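
    The kind of sensitivity described above can be illustrated with a toy computation: degrade a fine binary land/water mask to a coarser ground sample distance by block aggregation and count how many small waterbodies survive. The sketch below is an illustration under stated assumptions (synthetic ponds, a 50% water threshold, and arbitrary aggregation factors), not the study's workflow.

      # Illustrative sketch: coarsen a fine land/water mask and count waterbodies.
      import numpy as np
      from scipy import ndimage

      def coarsen(mask, factor):
          """Block-average a binary water mask and re-threshold at 50% water."""
          h = (mask.shape[0] // factor) * factor
          w = (mask.shape[1] // factor) * factor
          blocks = mask[:h, :w].reshape(h // factor, factor, w // factor, factor)
          return blocks.mean(axis=(1, 3)) >= 0.5

      def count_waterbodies(mask, min_pixels=1):
          labels, n = ndimage.label(mask)
          sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
          return int(np.sum(sizes >= min_pixels))

      rng = np.random.default_rng(1)
      fine = rng.random((600, 600)) > 0.995           # hypothetical scattered ponds
      fine = ndimage.binary_dilation(fine, iterations=2)

      for factor in (1, 3, 6):                        # e.g. 0.33 m -> 1 m -> 2 m
          print(factor, count_waterbodies(coarsen(fine, factor)))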

  15. High-resolution numerical modeling of mesoscale island wakes and sensitivity to static topographic relief data

    Directory of Open Access Journals (Sweden)

    C. G. Nunalee

    2015-08-01

    Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower-tropospheric flow over and downstream of mountainous islands using the default global 30-arcsecond United States Geological Survey terrain height data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM) data set, and the Global Multi-resolution Terrain Elevation Data set (GMTED2010). While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.

  16. Los Angeles megacity: a high-resolution land–atmosphere modelling system for urban CO2 emissions

    Directory of Open Access Journals (Sweden)

    S. Feng

    2016-07-01

    Megacities are major sources of anthropogenic fossil fuel CO2 (FFCO2) emissions. The spatial extents of these large urban systems cover areas of 10 000 km² or more with complex topography and changing landscapes. We present a high-resolution land–atmosphere modelling system for urban CO2 emissions over the Los Angeles (LA) megacity area. The Weather Research and Forecasting (WRF-Chem) model was coupled to a very high-resolution FFCO2 emission product, Hestia-LA, to simulate atmospheric CO2 concentrations across the LA megacity at spatial resolutions as fine as ∼1 km. We evaluated multiple WRF configurations, selecting one that minimized errors in wind speed, wind direction, and boundary layer height as evaluated by its performance against meteorological data collected during the CalNex-LA campaign (May–June 2010). Our results show no significant difference between moderate-resolution (4 km) and high-resolution (1.3 km) simulations when evaluated against surface meteorological data, but the high-resolution configurations better resolved planetary boundary layer heights and vertical gradients in the horizontal mean winds. We coupled our WRF configuration with the Vulcan 2.2 (10 km resolution) and Hestia-LA (1.3 km resolution) fossil fuel CO2 emission products to evaluate the impact of the spatial resolution of the CO2 emission products and the meteorological transport model on the representation of spatiotemporal variability in simulated atmospheric CO2 concentrations. We find that high spatial resolution in the fossil fuel CO2 emissions is more important than in the atmospheric model to capture CO2 concentration variability across the LA megacity. Finally, we present a novel approach that employs simultaneous correlations of the simulated atmospheric CO2 fields to qualitatively evaluate the greenhouse gas measurement network over the LA megacity. Spatial correlations in the atmospheric CO2 fields reflect the coverage of
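
    The "simultaneous correlations" idea in the last sentence can be sketched as follows: correlate the simulated CO2 time series at a candidate tower cell with the series at every other grid cell, and treat the area above some correlation threshold as that tower's coverage. The array shapes, the 0.7 threshold, and the synthetic data below are assumptions for illustration only, not the paper's method.

      # Hedged sketch of a correlation-based network coverage map.
      import numpy as np

      def correlation_footprint(co2, tower_ij, threshold=0.7):
          """co2: (time, ny, nx) simulated concentrations; returns bool coverage map."""
          nt, ny, nx = co2.shape
          series = co2[:, tower_ij[0], tower_ij[1]]
          flat = co2.reshape(nt, ny * nx)
          # Pearson correlation of the tower series with every grid cell.
          anom = flat - flat.mean(axis=0)
          s_anom = series - series.mean()
          denom = np.sqrt((anom ** 2).sum(axis=0) * (s_anom ** 2).sum())
          corr = (anom * s_anom[:, None]).sum(axis=0) / np.where(denom == 0, np.nan, denom)
          return corr.reshape(ny, nx) >= threshold

      # Toy usage with synthetic data standing in for WRF-Chem output.
      rng = np.random.default_rng(2)
      field = rng.normal(size=(240, 40, 50)).cumsum(axis=0)   # smooth-ish time series
      coverage = correlation_footprint(field, tower_ij=(20, 25))
      print("cells covered:", int(coverage.sum()))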

  17. Idealized climate change simulations with a high-resolution physical model: HadGEM3-GC2

    Science.gov (United States)

    Senior, Catherine A.; Andrews, Timothy; Burton, Chantelle; Chadwick, Robin; Copsey, Dan; Graham, Tim; Hyder, Pat; Jackson, Laura; McDonald, Ruth; Ridley, Jeff; Ringer, Mark; Tsushima, Yoko

    2016-06-01

    Idealized climate change simulations with a new physical climate model, HadGEM3-GC2, from the Met Office Hadley Centre (MOHC) are presented and contrasted with the earlier MOHC model, HadGEM2-ES. The role of atmospheric resolution is also investigated. The Transient Climate Response (TCR) is 1.9 K/2.1 K at N216/N96 and the Effective Climate Sensitivity (ECS) is 3.1 K/3.2 K at N216/N96. These are substantially lower than HadGEM2-ES (TCR: 2.5 K; ECS: 4.6 K), arising from a combination of changes in the size of climate feedbacks. While the change in the net cloud feedback between HadGEM3 and HadGEM2 is relatively small, there is a change in sign of its longwave component and a strengthening of its shortwave component. At a global scale, there is little impact of the increase in atmospheric resolution on the future climate change signal, and even at a broad regional scale many features, including tropical rainfall changes, are robust; however, there are some significant exceptions. For the North Atlantic and western Europe, the tripolar pattern of winter storm changes found in most CMIP5 models is little impacted by resolution, but for the most intense storms there is a larger percentage increase in number at higher resolution than at lower resolution. Arctic sea-ice sensitivity shows a larger dependence on resolution than on atmospheric physics.

  18. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high resolution soil information at the global level. A good example for such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts do exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess e.g., global soil carbon stocks. However, they do not allow for simulation runs with e.g., crop growth simulation models as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: A model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property

  19. Mathematical cosmology

    International Nuclear Information System (INIS)

    Landsberg, P.T.; Evans, D.A.

    1977-01-01

    The subject is dealt with in chapters, entitled: cosmology - some fundamentals; Newtonian gravitation - some fundamentals; the cosmological differential equation - the particle model and the continuum model; some simple Friedmann models; the classification of the Friedmann models; the steady-state model; universe with pressure; optical effects of the expansion according to various theories of light; optical observations and cosmological models. (U.K.)

  20. ANL high resolution injector

    International Nuclear Information System (INIS)

    Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.

    1985-01-01

    The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs

  1. Tracing the Origin of Black Hole Accretion Through Numerical Hydrodynamic Simulations

    Science.gov (United States)

    Spicer, Sandy; Somerville, Rachel; Choi, Ena; Brennan, Ryan

    2018-01-01

    It is now widely accepted that supermassive black holes co-evolve with galaxies, and may play an important role in galaxy evolution. However, the origin of the gas that fuels black hole accretion, and the resulting observable radiation, is not well understood or quantified. We use high-resolution "zoom-in" cosmological numerical hydrodynamic simulations including modeling of black hole accretion and feedback to trace the inflow and outflow of gas within galaxies from the early formation period up to present day. We track gas particles that black holes interact with over time to trace the origin of the gas that feeds supermassive black holes. These gas particles can come from satellite galaxies, cosmological accretion, or be a result of stellar evolution. We aim to track the origin of the gas particles that accrete onto the central black hole as a function of halo mass and cosmic time. Answering these questions will help us understand the connection between galaxy and black hole evolution.

  2. Toward an ultra-high resolution community climate system model for the BlueGene platform

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, John M [Computer Science Section, National Center for Atmospheric Research, Boulder, CO (United States); Jacob, Robert [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL (United States); Vertenstein, Mariana [Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, CO (United States); Craig, Tony [Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, CO (United States); Loy, Raymond [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL (United States)

    2007-07-15

    Global climate models need to simulate several small, regional-scale processes which affect the global circulation in order to accurately simulate the climate. This is particularly important in the ocean where small scale features such as oceanic eddies are currently represented with ad hoc parameterizations. There is also a need for higher resolution to provide climate predictions at small, regional scales. New high-performance computing platforms such as the IBM BlueGene can provide the necessary computational power to perform ultra-high resolution climate model integrations. We have begun to investigate the scaling of the individual components of the Community Climate System Model to prepare it for integrations on BlueGene and similar platforms. Our investigations show that it is possible to successfully utilize O(32K) processors. We describe the scalability of five models: the Parallel Ocean Program (POP), the Community Ice CodE (CICE), the Community Land Model (CLM), and the new CCSM sequential coupler (CPL7), which are components of the next generation Community Climate System Model (CCSM); as well as the High-Order Method Modeling Environment (HOMME), which is a dynamical core currently being evaluated within the Community Atmospheric Model. For our studies we concentrate on 1/10° resolution for the CICE, POP, and CLM models and 1/4° resolution for HOMME. The ability to simulate high resolutions on the massively parallel petascale systems that will dominate high-performance computing for the foreseeable future is essential to the advancement of climate science.

  3. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    Science.gov (United States)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method taking advantage of higher-order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating Multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back projection. The improved MUSIC back projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result along with our complementary dynamic simulations probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The
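
    As a rough illustration of the array-processing step described above, the sketch below computes a generic narrowband MUSIC pseudospectrum over a slowness grid from a cross-spectral matrix. The multitaper cross-spectrum estimation and the reference-window strategy emphasised in the thesis are omitted, and the station coordinates, frequency, and noise level are toy assumptions.

      # Generic narrowband MUSIC sketch (illustration only, not the thesis code).
      import numpy as np

      def music_pseudospectrum(R, steering, n_sources):
          """R: (n_sta, n_sta) cross-spectral matrix; steering: (n_grid, n_sta)."""
          eigval, eigvec = np.linalg.eigh(R)            # eigenvalues ascending
          noise = eigvec[:, : R.shape[0] - n_sources]   # noise subspace
          proj = steering.conj() @ noise                # a^H E_n for each grid point
          return 1.0 / (np.sum(np.abs(proj) ** 2, axis=1) + 1e-12)

      def plane_wave_steering(xy, slowness_grid, freq):
          """Steering vectors exp(-2*pi*i*f*s.x) for each trial slowness vector."""
          delays = slowness_grid @ xy.T                 # (n_grid, n_sta) seconds
          return np.exp(-2j * np.pi * freq * delays)

      # Toy usage: one plane wave crossing a small random array.
      rng = np.random.default_rng(3)
      xy = rng.uniform(-50, 50, size=(12, 2))           # station coordinates (km)
      true_s = np.array([0.05, 0.02])                   # slowness (s/km)
      freq = 0.5                                        # Hz
      a_true = plane_wave_steering(xy, true_s[None, :], freq)[0]
      R = np.outer(a_true, a_true.conj()) + 0.01 * np.eye(12)   # signal + noise

      sx = sy = np.linspace(-0.1, 0.1, 81)
      grid = np.array([(i, j) for i in sx for j in sy])
      P = music_pseudospectrum(R, plane_wave_steering(xy, grid, freq), n_sources=1)
      print("peak slowness:", grid[np.argmax(P)])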

  4. Galaxy Zoo: Comparing the visual morphology of synthetic galaxies from the Illustris simulation with those in the real Universe.

    Science.gov (United States)

    Dickinson, Hugh; Lintott, Chris; Scarlata, Claudia; Fortson, Lucy; Bamford, Steven; Cardamone, Carolin; Keel, William C.; Kruk, Sandor; Masters, Karen; Simmons, Brooke D.; Vogelsberger, Mark; Torrey, Paul; Snyder, Gregory; Galaxy Zoo Science Team

    2018-01-01

    We present a comparison between the Illustris simulations and classifications from Galaxy Zoo, aiming to test the ability of modern large-scale cosmological simulations to accurately reproduce the local galaxy population. This comparison is enabled by the increasingly high spatial and temporal resolution obtained by such simulations. Using classifications that were accumulated via the Galaxy Zoo citizen science interface, we compare the visual morphologies for simulated images of Illustris galaxies with a compatible sample of images drawn from the Sloan Digital Sky Survey (SDSS) Legacy Survey. For simulated galaxies with stellar masses less than 10¹¹ M⊙, significant differences are identified, which are most likely due to the limited resolution of the simulation, but could be revealing real differences in the dynamical evolution of populations of galaxies in the real and model universes. Above 10¹¹ M⊙, Illustris galaxy morphologies correspond better with those of their SDSS counterparts, although even in this mass range the simulation appears to underproduce obviously disk-like galaxies. Morphologies of Illustris galaxies less massive than 10¹¹ M⊙ should be treated with care.

  5. Cosmology with the Large Synoptic Survey Telescope: an overview

    Science.gov (United States)

    Zhan, Hu; Tyson, J. Anthony

    2018-06-01

    The Large Synoptic Survey Telescope (LSST) is a high étendue imaging facility that is being constructed atop Cerro Pachón in northern Chile. It is scheduled to begin science operations in 2022. With an 8.4 m (6.7 m effective) aperture, a novel three-mirror design achieving a seeing-limited 9.6 deg² field of view, and a 3.2 gigapixel camera, the LSST has the deep-wide-fast imaging capability necessary to carry out an 18 000 deg² survey in six passbands (ugrizy) to a coadded depth of r ∼ 27.5 over 10 years using about 90% of its observational time. The remaining 10% of the time will be devoted to considerably deeper and faster time-domain observations and smaller surveys. In total, each patch of the sky in the main survey will receive 800 visits allocated across the six passbands with 30 s exposure visits. The huge volume of high-quality LSST data will provide a wide range of science opportunities and, in particular, open a new era of precision cosmology with unprecedented statistical power and tight control of systematic errors. In this review, we give a brief account of the LSST cosmology program with an emphasis on dark energy investigations. The LSST will address dark energy physics and cosmology in general by exploiting diverse precision probes including large-scale structure, weak lensing, type Ia supernovae, galaxy clusters, and strong lensing. Combined with the cosmic microwave background data, these probes form interlocking tests on the cosmological model and the nature of dark energy in the presence of various systematics. The LSST data products will be made available to the US and Chilean scientific communities and to international partners with no proprietary period. Close collaborations with contemporaneous imaging and spectroscopy surveys observing at a variety of wavelengths, resolutions, depths, and timescales will be a vital part of the LSST science program, which will not only enhance specific studies but, more importantly, also allow a more complete understanding of the Universe through different windows.

  6. Introduction to cosmology

    CERN Document Server

    Roos, Matts

    2015-01-01

    The Fourth Edition of Introduction to Cosmology provides a concise, authoritative study of cosmology at an introductory level. Starting from elementary principles and the early history of cosmology, the text carefully guides the student on to curved spacetimes, special and general relativity, gravitational lensing, the thermal history of the Universe, and cosmological models, including extended gravity models, black holes and Hawking's recent conjectures on the not-so-black holes.

  7. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to simulated fields that are gappy in space-time. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
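
    A much-simplified stand-in for the statistical-fusion step (plain Gaussian-process regression rather than full coKriging) is sketched below: samples from a coarse run and from one finely resolved patch are fused to estimate the field at hypothetical patch boundaries. The kernel, length scale, and 1D test function are assumptions for illustration, not the authors' implementation.

      # Minimal 1D sketch of multi-resolution information fusion via GP regression.
      import numpy as np

      def gp_predict(x_train, y_train, x_query, length=0.2, noise=1e-4):
          """GP regression with a squared-exponential kernel."""
          def k(a, b):
              return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
          K = k(x_train, x_train) + noise * np.eye(len(x_train))
          weights = np.linalg.solve(K, y_train)
          return k(x_query, x_train) @ weights

      truth = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.sin(9 * np.pi * x)

      x_coarse = np.linspace(0.0, 1.0, 8)               # cheap, low-resolution run
      x_fine = np.linspace(0.40, 0.60, 15)              # one finely resolved patch
      x_train = np.concatenate([x_coarse, x_fine])
      y_train = truth(x_train)

      x_bc = np.array([0.38, 0.62])                     # hypothetical patch boundaries
      print("estimated BCs:", gp_predict(x_train, y_train, x_bc).round(3))
      print("true BCs:     ", truth(x_bc).round(3))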

  8. On the cosmological gravitational waves and cosmological distances

    Science.gov (United States)

    Belinski, V. A.; Vereshchagin, G. V.

    2018-03-01

    We show that solitonic cosmological gravitational waves propagated through the Friedmann universe and generated by the inhomogeneities of the gravitational field near the Big Bang can be responsible for an increase of cosmological distances.

  9. Modelling high-resolution electron microscopy based on core-loss spectroscopy

    International Nuclear Information System (INIS)

    Allen, L.J.; Findlay, S.D.; Oxley, M.P.; Witte, C.; Zaluzec, N.J.

    2006-01-01

    There are a number of factors affecting the formation of images based on core-loss spectroscopy in high-resolution electron microscopy. We demonstrate unambiguously the need to use a full nonlocal description of the effective core-loss interaction for experimental results obtained from high angular resolution electron channelling electron spectroscopy. The implications of this model are investigated for atomic resolution scanning transmission electron microscopy. Simulations are used to demonstrate that core-loss spectroscopy images formed using fine probes proposed for future microscopes can result in images that do not correspond visually with the structure that has led to their formation. In this context, we also examine the effect of varying detector geometries. The importance of the contribution to core-loss spectroscopy images by dechannelled or diffusely scattered electrons is reiterated here.

  10. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
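
    The peak-count statistic itself is simple to illustrate. The sketch below smooths a noisy convergence map and counts local maxima above a signal-to-noise threshold; the map size, smoothing scale, threshold, and injected "halo" are toy assumptions, and this is not the paper's LSST pipeline.

      # Minimal convergence-map peak counter (illustration only).
      import numpy as np
      from scipy import ndimage

      def peak_counts(kappa, smoothing_pix=2.0, snu_threshold=3.0):
          smoothed = ndimage.gaussian_filter(kappa, smoothing_pix)
          snu = smoothed / smoothed.std()
          # A pixel is a peak if it equals the maximum of its 3x3 neighbourhood.
          is_max = smoothed == ndimage.maximum_filter(smoothed, size=3)
          return int(np.sum(is_max & (snu > snu_threshold)))

      rng = np.random.default_rng(4)
      kappa_map = rng.normal(0.0, 0.02, size=(512, 512))   # shape-noise only map
      kappa_map[100:103, 200:203] += 0.15                  # one injected "halo"
      print("peaks above 3 sigma:", peak_counts(kappa_map))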

  11. Cosmological principle

    International Nuclear Information System (INIS)

    Wesson, P.S.

    1979-01-01

    The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed. It is called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution.

  12. Cosmology and particle physics

    International Nuclear Information System (INIS)

    Turner, M.S.

    1985-01-01

    The author reviews the standard cosmology, focusing on primordial nucleosynthesis, and discusses how the standard cosmology has been used to place constraints on the properties of various particles. Baryogenesis is examined, in which the B, C, CP violating interactions in GUTs provide a dynamical explanation for the predominance of matter over antimatter and the present baryon-to-photon ratio. Monopoles, cosmology and astrophysics are reviewed. The author also discusses supersymmetry/supergravity and cosmology, superstrings and cosmology in extra dimensions, and axions, astrophysics, and cosmology.

  13. High-Resolution Remotely Sensed Small Target Detection by Imitating Fly Visual Perception Mechanism

    Directory of Open Access Journals (Sweden)

    Fengchen Huang

    2012-01-01

    Small target detection in high-resolution remote sensing data is difficult and limited by existing methods, and has become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit its advantages for fast and accurate small target detection in complex and varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual system method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.

  14. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    Science.gov (United States)

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    Small target detection in high-resolution remote sensing data is difficult and limited by existing methods, and has become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit its advantages for fast and accurate small target detection in complex and varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual system method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.

  15. Cosmology

    CERN Document Server

    Vittorio, Nicola

    2018-01-01

    Modern cosmology has changed significantly over the years, from the discovery to the precision measurement era. The data now available provide a wealth of information, mostly consistent with a model where dark matter and dark energy are in a rough proportion of 3:7. The time is right for a fresh new textbook which captures the state-of-the art in cosmology. Written by one of the world's leading cosmologists, this brand new, thoroughly class-tested textbook provides graduate and undergraduate students with coverage of the very latest developments and experimental results in the field. Prof. Nicola Vittorio shows what is meant by precision cosmology, from both theoretical and observational perspectives.

  16. Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)

    2017-04-15

    It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds and that its winds are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In such approaches, the surface drag generated by the subgrid-scale orography is represented by introducing a sink term in the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill for low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. Increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.
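
    The exact Mass and Ovens (2010) formulation is not reproduced in this record, so the sketch below is only schematic: a sink term in the horizontal momentum equation whose strength grows with the subgrid-scale terrain variance. The drag coefficient, the variance scaling, and its cap are invented for illustration and are not the published parameterization.

      # Schematic variance-scaled surface-drag sink term (assumed form, not WRF code).
      import numpy as np

      def orographic_drag_tendency(u, v, sigma_topo, c_drag=2e-4, sigma_ref=100.0):
          """Return (du/dt, dv/dt) from a drag sink scaled by subgrid terrain variance."""
          speed = np.sqrt(u ** 2 + v ** 2)
          factor = c_drag * np.minimum(sigma_topo / sigma_ref, 3.0)  # capped scaling
          return -factor * speed * u, -factor * speed * v

      # Toy usage: smoother terrain (sigma = 20 m) vs rough terrain (sigma = 300 m).
      u, v = 8.0, 3.0                                   # 10-m winds (m/s)
      for sigma in (20.0, 300.0):
          du, dv = orographic_drag_tendency(u, v, sigma)
          print(sigma, round(du, 5), round(dv, 5))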

  17. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher dimensional flood modeling (2D+), as well as correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Similar to photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as Ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed in data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance. The ideal application of this high-resolution topography
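
    One simple way to turn high-resolution topography into a grain-scale roughness measure (an assumption about the general approach, not the study's method) is to fit a plane to a bed patch and use the standard deviation of the detrended residuals as a roughness height that can then be related to hydraulic resistance:

      # Detrended roughness from scattered TLS-like elevation points (illustrative).
      import numpy as np

      def detrended_roughness(x, y, z):
          """Std of elevations after removing the best-fit plane z = ax + by + c."""
          A = np.column_stack([x, y, np.ones_like(x)])
          coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
          residuals = z - A @ coeffs
          return residuals.std()

      # Toy usage: gravel-like bed with small vs large protrusion on a sloping plane.
      rng = np.random.default_rng(5)
      x, y = rng.uniform(0, 2, 5000), rng.uniform(0, 2, 5000)
      slope = 0.02 * x + 0.01 * y
      for protrusion in (0.005, 0.05):                  # metres
          z = slope + rng.normal(0.0, protrusion, size=x.size)
          print(protrusion, round(detrended_roughness(x, y, z), 4))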

  18. Relativistic numerical cosmology with silent universes

    Science.gov (United States)

    Bolejko, Krzysztof

    2018-01-01

    Relativistic numerical cosmology is most often based either on the exact solutions of the Einstein equations, or perturbation theory, or the weak-field limit, or the BSSN formalism. The silent universe provides an alternative approach to investigate the relativistic evolution of cosmological systems. The silent universe is based on the solution of the Einstein equations in 1+3 comoving coordinates with additional constraints imposed. These constraints include: the gravitational field is sourced by dust and the cosmological constant only, both the rotation and the magnetic part of the Weyl tensor vanish, and the shear is diagonalisable. This paper describes the code simsilun (free software distributed under the terms of the GNU General Public License), which implements the equations of the silent universe. The paper also discusses applications of the silent universe, and it uses the Millennium simulation to set up the initial conditions for the code simsilun. The simulation obtained this way consists of 16 777 216 worldlines, which are evolved from z = 80 to z = 0. Initially, the mean evolution (averaged over the whole domain) follows the evolution of the background ΛCDM model. However, once the evolution of cosmic structures becomes nonlinear, the spatial curvature evolves from ΩK = 0 to ΩK ≈ 0.1 at the present day. The emergence of the spatial curvature is associated with ΩM and ΩΛ being smaller by approximately 0.05 compared to the ΛCDM model.

  19. High-resolution climate modelling of Antarctica and the Antarctic Peninsula

    NARCIS (Netherlands)

    van Wessem, J.M.

    2016-01-01

    In this thesis we have used a high-resolution regional atmospheric climate model (RACMO2.3) to simulate the present-day climate (1979-2014) of Antarctica and the Antarctic Peninsula. We have evaluated the model results with several observations, such as in situ surface energy balance (SEB)

  20. The Philosophy of Cosmology

    Science.gov (United States)

    Chamcham, Khalil; Silk, Joseph; Barrow, John D.; Saunders, Simon

    2017-04-01

    Part I. Issues in the Philosophy of Cosmology: 1. Cosmology, cosmologia and the testing of cosmological theories George F. R. Ellis; 2. Black holes, cosmology and the passage of time: three problems at the limits of science Bernard Carr; 3. Moving boundaries? - comments on the relationship between philosophy and cosmology Claus Beisbart; 4. On the question why there exists something rather than nothing Roderich Tumulka; Part II. Structures in the Universe and the Structure of Modern Cosmology: 5. Some generalities about generality John D. Barrow; 6. Emergent structures of effective field theories Jean-Philippe Uzan; 7. Cosmological structure formation Joel R. Primack; 8. Formation of galaxies Joseph Silk; Part III. Foundations of Cosmology: Gravity and the Quantum: 9. The observer strikes back James Hartle and Thomas Hertog; 10. Testing inflation Chris Smeenk; 11. Why Boltzmann brains do not fluctuate into existence from the de Sitter vacuum Kimberly K. Boddy, Sean M. Carroll and Jason Pollack; 12. Holographic inflation revised Tom Banks; 13. Progress and gravity: overcoming divisions between general relativity and particle physics and between physics and HPS J. Brian Pitts; Part IV. Quantum Foundations and Quantum Gravity: 14. Is time's arrow perspectival? Carlo Rovelli; 15. Relational quantum cosmology Francesca Vidotto; 16. Cosmological ontology and epistemology Don N. Page; 17. Quantum origin of cosmological structure and dynamical reduction theories Daniel Sudarsky; 18. Towards a novel approach to semi-classical gravity Ward Struyve; Part V. Methodological and Philosophical Issues: 19. Limits of time in cosmology Svend E. Rugh and Henrik Zinkernagel; 20. Self-locating priors and cosmological measures Cian Dorr and Frank Arntzenius; 21. On probability and cosmology: inference beyond data? Martin Sahlén; 22. Testing the multiverse: Bayes, fine-tuning and typicality Luke A. Barnes; 23. A new perspective on Einstein's philosophy of cosmology Cormac O

  1. Cosmology with clusters in the CMB

    International Nuclear Information System (INIS)

    Majumdar, Subhabrata

    2008-01-01

    Ever since the seminal work by Sunyaev and Zel'dovich describing the distortion of the CMB spectrum due to photons passing through the hot intracluster gas on their way to us from the surface of last scattering (the so-called Sunyaev-Zel'dovich effect, SZE), small-scale distortions of the CMB by clusters have been used to detect clusters as well as to do cosmology with clusters. Cosmology with clusters in the CMB can be divided into three distinct regimes: a) when the clusters are completely unresolved and contribute to the secondary CMB distortion power spectrum at small angular scales; b) when we can just about resolve the clusters, so as to detect them through their total SZE flux, such that the clusters can be tagged and counted for doing cosmology; and c) when we can completely resolve the clusters, so as to measure their sizes and other cluster structural properties and their evolution with redshift. In this article, we take a look at these three aspects of SZE cluster studies and their implications for using clusters as cosmological probes. We show that clusters can be used as effective probes of cosmology when, in all three cases, one explores the synergy between cluster physics and cosmology and takes clues about cluster physics from the latest high-precision cluster observations (for example, from Chandra and XMM-Newton). As a specific case, we show how an observationally motivated cluster SZ template can explain the CBI excess without the need for a high σ_8. We also briefly discuss 'self-calibration' in cluster surveys and the prospect of using clusters as an ensemble of cosmic rulers to break degeneracies arising in cluster cosmology.

  2. Ultra-high resolution AMOLED

    Science.gov (United States)

    Wacyk, Ihor; Prache, Olivier; Ghosh, Amal

    2011-06-01

    AMOLED microdisplays continue to show improvement in resolution and optical performance, enhancing their appeal for a broad range of near-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming. eMagin's latest development of an HDTV+ resolution technology integrates an OLED pixel of 3.2 × 9.6 microns in size on a 0.18 micron CMOS backplane to deliver significant new functionality as well as the capability to implement a 1920×1200 microdisplay in a 0.86" diagonal area. In addition to the conventional matrix addressing circuitry, the HDTV+ display includes a very low-power, low-voltage-differential-signaling (LVDS) serialized interface to minimize cable and connector size as well as electromagnetic emissions (EMI), an on-chip set of look-up tables for digital gamma correction, and a novel pulse-width-modulation (PWM) scheme that together with the standard analog control provides a total dimming range of 0.05 cd/m² to 2000 cd/m² in the monochrome version. The PWM function also enables an impulse drive mode of operation that significantly reduces motion artifacts in high-speed scene changes. An internal 10-bit DAC ensures that a full 256 gamma-corrected gray levels are available across the entire dimming range, resulting in a measured dynamic range exceeding 20 bits. This device has been successfully tested for operation at frame rates ranging from 30 Hz up to 85 Hz. This paper describes the operational features and detailed optical and electrical test results for the new AMOLED WUXGA resolution microdisplay.
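
    The on-chip gamma-correction look-up table feeding a 10-bit DAC can be illustrated with a short sketch. The gamma exponent (2.2) and table layout below are illustrative assumptions for a generic display pipeline, not values taken from the eMagin device.

```python
# Minimal sketch of a gamma-correction look-up table for a 10-bit DAC.
# The gamma value and table contents are illustrative assumptions, not
# eMagin's actual on-chip data.
GAMMA = 2.2
DAC_BITS = 10
DAC_MAX = (1 << DAC_BITS) - 1  # 1023

def build_gamma_lut(levels=256, gamma=GAMMA):
    """Map `levels` input gray codes to gamma-corrected 10-bit DAC codes."""
    lut = []
    for code in range(levels):
        norm = code / (levels - 1)            # normalised input, 0.0 .. 1.0
        lut.append(round((norm ** gamma) * DAC_MAX))
    return lut

lut = build_gamma_lut()
# The 256 gamma-corrected gray levels remain usable across the dimming range
# because PWM/analog dimming scales the full-white level, not the LUT entries.
assert lut[0] == 0 and lut[-1] == DAC_MAX
assert all(b >= a for a, b in zip(lut, lut[1:]))  # monotonic mapping
```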

  3. Particle cosmology comes of age

    International Nuclear Information System (INIS)

    Turner, M.S.

    1988-01-01

    The application of modern ideas in particle physics to astrophysical and cosmological settings is a continuation of a fruitful tradition in astrophysics which began with the application of atomic physics, and then nuclear physics. In the past decade particle cosmology and particle astrophysics have been recognized as 'legitimate activities' by both particle physicists and astrophysicists and astronomers. During this time there has been a high level of theoretical activity producing much speculation about the earliest history of the Universe, as well as important and interesting astrophysical and cosmological constraints to particle physics theories. This period of intense theoretical activity has produced a number of ideas most worthy of careful consideration and scrutiny, and even more importantly, amenable to experimental/observational test. Among the ideas which are likely to be tested in the next decade are: the cosmological bound to the number of neutrino flavors, inflation, relic WIMPs as the dark matter, and MSW neutrino oscillations as a solution to the solar neutrino problems. (orig.)

  4. Particle cosmology comes of age

    International Nuclear Information System (INIS)

    Turner, M.S.

    1987-12-01

    The application of modern ideas in particle physics to astrophysical and cosmological settings is a continuation of a fruitful tradition in astrophysics which began with the application of atomic physics, and then nuclear physics. In the past decade particle cosmology and particle astrophysics have been recognized as 'legitimate activities' by both particle physicists and astrophysicists and astronomers. During this time there has been a high level of theoretical activity producing much speculation about the earliest history of the Universe, as well as important and interesting astrophysical and cosmological constraints to particle physics theories. This period of intense theoretical activity has produced a number of ideas most worthy of careful consideration and scrutiny, and even more importantly, amenable to experimental/observational test. Among the ideas which are likely to be tested in the next decade are: the cosmological bound to the number of neutrino flavors, inflation, relic WIMPs as the dark matter, and MSW neutrino oscillations as a solution to the solar neutrino problems. 94 refs

  5. SPMHD simulations of structure formation

    Science.gov (United States)

    Barnes, David J.; On, Alvina Y. L.; Wu, Kinwah; Kawata, Daisuke

    2018-05-01

    The intracluster medium of galaxy clusters is permeated by μG magnetic fields. Observations with current and future facilities have the potential to illuminate the role these magnetic fields play in the astrophysical processes of galaxy clusters. Understanding how the initial seed fields evolve into the magnetic fields of the intracluster medium requires magnetohydrodynamic simulations. We critically assess the current smoothed particle magnetohydrodynamic (SPMHD) schemes, especially highlighting the impact of a hyperbolic divergence cleaning scheme and an artificial resistivity switch on the magnetic field evolution in cosmological simulations of the formation of a galaxy cluster using the N-body/SPMHD code GCMHD++. The impact and performance of the cleaning scheme and of two different schemes for the artificial resistivity switch are demonstrated via idealized test cases and cosmological simulations. We demonstrate that the hyperbolic divergence cleaning scheme is effective at suppressing the growth of the numerical divergence error of the magnetic field and should be applied to any SPMHD simulation. Although the artificial resistivity is important in the strong field regime, it can suppress the growth of the magnetic field in the weak field regime, such as galaxy clusters. With sufficient resolution, simulations with divergence cleaning can reproduce observed magnetic fields. We conclude that the cleaning scheme alone is sufficient for galaxy cluster simulations, but our results indicate that the SPMHD scheme must be carefully chosen depending on the regime of the magnetic field.
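
    The hyperbolic divergence cleaning assessed here follows the general Dedner-style idea of coupling a scalar field to the induction equation so that divergence errors are propagated away and damped. A grid-based finite-difference sketch of that idea is shown below; it only illustrates the cleaning terms, not the SPH implementation used in GCMHD++, and the wave speed, damping parameter and time step are placeholder choices.

```python
import numpy as np

# Grid-based illustration of hyperbolic (Dedner-style) divergence cleaning:
# a scalar field psi transports div(B) errors away at speed c_h and damps
# them on a time-scale tau. Not the SPH implementation used in GCMHD++.

def divergence(bx, by, dx):
    return ((np.roll(bx, -1, 0) - np.roll(bx, 1, 0)) +
            (np.roll(by, -1, 1) - np.roll(by, 1, 1))) / (2.0 * dx)

def clean_divergence(bx, by, dx, c_h=1.0, sigma=1.0, n_steps=200):
    bx, by = bx.copy(), by.copy()
    psi = np.zeros_like(bx)
    dt = 0.25 * dx / c_h                  # simple stability-limited step
    tau = dx / (sigma * c_h)              # parabolic damping time-scale
    for _ in range(n_steps):
        # hyperbolic transport of the divergence error plus parabolic damping
        psi += dt * (-c_h**2 * divergence(bx, by, dx) - psi / tau)
        # feed -grad(psi) back into the induction update
        bx -= dt * (np.roll(psi, -1, 0) - np.roll(psi, 1, 0)) / (2.0 * dx)
        by -= dt * (np.roll(psi, -1, 1) - np.roll(psi, 1, 1)) / (2.0 * dx)
    return bx, by

# Toy field with a divergence error: a Gaussian bump on a uniform B_x.
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n, endpoint=False),
                   np.linspace(0, 1, n, endpoint=False), indexing="ij")
bx0 = 1.0 + 0.1 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.01)
by0 = np.zeros((n, n))
bx1, by1 = clean_divergence(bx0, by0, dx=1.0 / n)
print("max |div B| before:", np.abs(divergence(bx0, by0, 1.0 / n)).max())
print("max |div B| after: ", np.abs(divergence(bx1, by1, 1.0 / n)).max())
```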

  6. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  7. Time variation of the cosmological redshift in Dicke-Brans-Jordan cosmologies

    International Nuclear Information System (INIS)

    Ruediger, R.

    1982-01-01

    In this paper the time variation ż of the cosmological redshift z is discussed for Dicke-Brans-Jordan (DBJ) cosmologies. We determine the general ż-z relation in the functional form ż/H_0 = F(z; q_0, σ_0, ξ_0, ω) for small values of z, where all the symbols have their conventional meanings. For certain combinations of cosmological parameters, which are within the present observational limitations, the DBJ terms in the function F can dominate the general relativistic terms. Furthermore, ż/H_0 can be positive in DBJ cosmologies, in contrast to general relativistic cosmologies with q_0 > 0.

  8. Very high resolution regional climate simulations on the 4 km scale as a basis for carbon balance assessments in northeast European Russia

    Science.gov (United States)

    Stendel, Martin; Hesselbjerg Christensen, Jens; Adalgeirsdottir, Gudfinna; Rinke, Annette; Matthes, Heidrun; Marchenko, Sergej; Daanen, Ronald; Romanovsky, Vladimir

    2010-05-01

    Simulations with global circulation models (GCMs) clearly indicate that major climate changes in polar regions can be expected during the 21st century. Model studies have shown that the area of the Northern Hemisphere underlain by permafrost could be reduced substantially in a warmer climate. However, thawing of permafrost, in particular if it is ice-rich, is subject to a time lag due to the large latent heat of fusion. State-of-the-art GCMs are unable to adequately model these processes because (a) even the most advanced subsurface schemes rarely treat depths below 5 m explicitly, and (b) soil thawing and freezing processes cannot be dealt with directly due to the coarse resolution of present GCMs. Any attempt to model subsurface processes needs information about soil properties, vegetation and snow cover, which are hardly realistic on a typical GCM grid. Furthermore, simulated GCM precipitation is often underestimated and the proportion of rain and snow is incorrect. One possibility to overcome resolution-related problems is to use regional climate models (RCMs). Such an RCM, HIRHAM, has until now been the only one used for the entire circumpolar domain, and its most recent version, HIRHAM5, has also been used in the high resolution study described here. Instead of the traditional approach via a degree-day based frost index from observations or model data, we use the regional model to create boundary conditions for an advanced permafrost model. This approach offers the advantage that the permafrost model can be run on the grid of the regional model, i.e. in a considerably higher resolution than in previous approaches. We here present results from a new time-slice integration with an unprecedented horizontal resolution of only 4 km, covering northeast European Russia. This model simulation has served as basis for an assessment of the carbon balance for a region in northeast European Russia within the EU-funded Carbo-North project.

  9. The high-resolution regional reanalysis COSMO-REA6

    Science.gov (United States)

    Ohlwein, C.

    2016-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalyses projects (e.g., ERA, MERRA, CSFR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  10. Qualitative cosmology

    International Nuclear Information System (INIS)

    Khalatnikov, I.M.; Belinskij, V.A.

    1984-01-01

    The application of the qualitative theory of dynamical systems to the analysis of homogeneous cosmological models is described. Together with the well-known cases involving an ideal fluid, the properties of the cosmological evolution of matter with dissipative processes due to viscosity are considered. New cosmological effects occur when the viscosity terms are of the same order as the remaining terms in the equations of gravitation, or even exceed them. In these cases the description of the dissipative process by means of only two viscosity coefficients (bulk and shear) may become inapplicable, because the remaining terms in the expansion of the dissipative contribution to the energy-momentum tensor in velocity gradients can be large; the application of equations with hydrodynamic viscosity should then be regarded as a model of dissipative effects in cosmology

  11. Correlation between centre offsets and gas velocity dispersion of galaxy clusters in cosmological simulations

    Science.gov (United States)

    Li, Ming-Hua; Zhu, Weishan; Zhao, Dong

    2018-05-01

    The gas is the dominant component of baryonic matter in most galaxy groups and clusters. The spatial offset of the gas centre from the halo centre could be an indicator of the dynamical state of a cluster. Knowledge of such offsets is important for estimating the uncertainties when using clusters as cosmological probes. In this paper, we study the centre offsets r_off between the gas and all the matter within halo systems in ΛCDM cosmological hydrodynamic simulations. We focus on two kinds of centre offsets: one is the three-dimensional PB offset between the gravitational potential minimum of the entire halo and the barycentre of the ICM, and the other is the two-dimensional PX offset between the potential minimum of the halo and the iterative centroid of the projected synthetic X-ray emission of the halo. Haloes at higher redshifts tend to have larger values of the rescaled offset r_off/r_200 and larger gas velocity dispersions σ_v^gas/σ_200. For both types of offsets, we find that the correlation between the rescaled centre offset r_off/r_200 and the rescaled 3D gas velocity dispersion σ_v^gas/σ_200 can be approximately described by a quadratic function, r_off/r_200 ∝ (σ_v^gas/σ_200 - k_2)^2. A Bayesian analysis with an MCMC method is employed to estimate the model parameters. The dependence of the correlation on redshift and gas mass fraction is also investigated.
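
    The quoted quadratic relation can be made concrete with a small fitting sketch. The snippet below fits r_off/r_200 = k1 (σ_v^gas/σ_200 - k2)^2 to synthetic data with a least-squares routine; the paper itself uses a Bayesian MCMC analysis, and the parameter values and scatter used here are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting the reported offset-dispersion relation,
# r_off/r_200 = k1 * (sigma_gas/sigma_200 - k2)**2, on synthetic halo data.
def offset_model(x, k1, k2):
    return k1 * (x - k2) ** 2

rng = np.random.default_rng(1)
k1_true, k2_true = 0.8, 0.9                      # hypothetical parameters
x = rng.uniform(0.8, 1.3, size=200)              # sigma_gas / sigma_200
y = offset_model(x, k1_true, k2_true) + rng.normal(0.0, 0.01, size=x.size)

popt, pcov = curve_fit(offset_model, x, y, p0=[1.0, 1.0])
print("k1, k2 =", popt, "+/-", np.sqrt(np.diag(pcov)))
```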

  12. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
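
    As a concrete example of one family of high-resolution methods compared here, the sketch below implements the classic fifth-order WENO reconstruction in its standard textbook form, with smoothness indicators and nonlinear weights; it is a generic version, not the specific WENO variant used in the study.

```python
import numpy as np

# Classic fifth-order WENO reconstruction (Jiang & Shu form) on a periodic
# array of cell values; returns the left-biased interface value f_{i+1/2}.
def weno5_reconstruct(f, eps=1e-6):
    fm2, fm1, f0 = np.roll(f, 2), np.roll(f, 1), f
    fp1, fp2 = np.roll(f, -1), np.roll(f, -2)

    # candidate third-order reconstructions on the three sub-stencils
    p0 = (2.0 * fm2 - 7.0 * fm1 + 11.0 * f0) / 6.0
    p1 = (-fm1 + 5.0 * f0 + 2.0 * fp1) / 6.0
    p2 = (2.0 * f0 + 5.0 * fp1 - fp2) / 6.0

    # smoothness indicators
    b0 = 13.0/12.0 * (fm2 - 2*fm1 + f0)**2 + 0.25 * (fm2 - 4*fm1 + 3*f0)**2
    b1 = 13.0/12.0 * (fm1 - 2*f0 + fp1)**2 + 0.25 * (fm1 - fp1)**2
    b2 = 13.0/12.0 * (f0 - 2*fp1 + fp2)**2 + 0.25 * (3*f0 - 4*fp1 + fp2)**2

    # nonlinear weights from the optimal linear weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s

# Example call on a smooth periodic profile.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
f_half = weno5_reconstruct(np.sin(x))   # approximates the profile at x + dx/2
```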

  13. Introduction to cosmology

    CERN Document Server

    Roos, Matts

    2003-01-01

    The Third Edition of the hugely successful Introduction to Cosmology provides a concise, authoritative study of cosmology at an introductory level. Starting from elementary principles and the history of cosmology, the text carefully guides the student on to curved spacetimes, general relativity, black holes, cosmological models, particles and symmetries, and phase transitions. Extensively revised, this latest edition includes broader and updated coverage of distance measures, gravitational lensing and waves, dark energy and quintessence, the thermal history of the Universe, inflation,

  14. Improved Synthesis of Global Irradiance with One-Minute Resolution for PV System Simulations

    Directory of Open Access Journals (Sweden)

    Martin Hofmann

    2014-01-01

    Full Text Available High resolution global irradiance time series are needed for accurate simulations of photovoltaic (PV) systems, since the typical volatile PV power output induced by fast irradiance changes cannot be simulated properly with commonly available hourly averages of global irradiance. We present a two-step algorithm that is capable of synthesizing one-minute global irradiance time series based on hourly averaged datasets. The algorithm is initialized by deriving characteristic transition probability matrices (TPMs) for different weather conditions (cloudless, broken clouds and overcast) from a large number of high resolution measurements. Once initialized, the algorithm is location-independent and capable of synthesizing one-minute values based on hourly averaged global irradiance of any desired location. The one-minute time series are derived by discrete-time Markov chains based on a TPM that matches the weather condition of the input dataset. One-minute time series generated with the presented algorithm are compared with measured high resolution data and show a better agreement compared to two existing synthesizing algorithms in terms of temporal variability and characteristic frequency distributions of global irradiance and clearness index values. A comparison based on measurements performed in Lindenberg, Germany, and Carpentras, France, shows a reduction of the frequency distribution root mean square errors of more than 60% compared to the two existing synthesizing algorithms.
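
    The core of the second step, generating one-minute states with a discrete-time Markov chain from a transition probability matrix, can be sketched in a few lines. The state grid and the example TPM below are made-up placeholders rather than the matrices the authors derived from high-resolution measurements.

```python
import numpy as np

# Minimal sketch of the Markov-chain step: pick a TPM matching the weather
# condition of an hourly value, then generate 60 one-minute clearness-index
# states. STATES and the example TPM are placeholders, not the authors' data.
STATES = np.linspace(0.05, 1.1, 12)          # clearness-index bins

def synthesize_minutes(tpm, start_state, n=60, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    states = [start_state]
    for _ in range(n - 1):
        states.append(rng.choice(len(STATES), p=tpm[states[-1]]))
    return STATES[np.array(states)]

# Example: an "overcast" TPM that strongly favours staying in the same bin.
tpm_overcast = np.full((12, 12), 0.01)
np.fill_diagonal(tpm_overcast, 1.0)
tpm_overcast /= tpm_overcast.sum(axis=1, keepdims=True)

kt_minutes = synthesize_minutes(tpm_overcast, start_state=3)
print(kt_minutes.round(2))
```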

  15. High Resolution N-Body Simulations of Terrestrial Planet Growth

    Science.gov (United States)

    Clark Wallace, Spencer; Quinn, Thomas R.

    2018-04-01

    We investigate planetesimal accretion with a direct N-body simulation of an annulus at 1 AU around a 1 M_sun star. The planetesimal ring, which initially contains N = 10^6 bodies, is evolved through the runaway growth stage into the phase of oligarchic growth. We find that the mass distribution of planetesimals develops a bump around 10^22 g shortly after the oligarchs form. This feature is absent in previous lower resolution studies. We find that this bump marks a boundary between growth modes. Below the bump mass, planetesimals are packed tightly enough together to populate first order mean motion resonances with the oligarchs. These resonances act to heat the tightly packed, low mass planetesimals, inhibiting their growth. We examine the eccentricity evolution of a dynamically hot planetary embryo embedded in an annulus of planetesimals and find that dynamical friction acts more strongly on the embryo when the planetesimals are finely resolved. This effect disappears when the annulus is made narrow enough to exclude most of the mean motion resonances. Additionally, we find that the 10^22 g bump is significantly less prominent when we follow planetesimal growth with a skinny annulus. This feature, which is reminiscent of the power law break seen in the size distribution of asteroid belt objects, may be an important clue for constraining the initial size of planetesimals in planet formation models.

  16. Higgs cosmology

    Science.gov (United States)

    Rajantie, Arttu

    2018-01-01

    The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available. This article is part of the Theo Murphy meeting issue `Higgs cosmology'.

  17. Quantum gravity and quantum cosmology

    CERN Document Server

    Papantonopoulos, Lefteris; Siopsis, George; Tsamis, Nikos

    2013-01-01

    Quantum gravity has developed into a fast-growing subject in physics and it is expected that probing the high-energy and high-curvature regimes of gravitating systems will shed some light on how to eventually achieve an ultraviolet complete quantum theory of gravity. Such a theory would provide the much needed information about fundamental problems of classical gravity, such as the initial big-bang singularity, the cosmological constant problem, Planck scale physics and the early-time inflationary evolution of our Universe.   While in the first part of this book concepts of quantum gravity are introduced and approached from different angles, the second part discusses these theories in connection with cosmological models and observations, thereby exploring which types of signatures of modern and mathematically rigorous frameworks can be detected by experiments. The third and final part briefly reviews the observational status of dark matter and dark energy, and introduces alternative cosmological models.   ...

  18. Assessment of summer rainfall forecast skill in the Intra-Americas in GFDL high and low-resolution models

    Science.gov (United States)

    Krishnamurthy, Lakshmi; Muñoz, Ángel G.; Vecchi, Gabriel A.; Msadek, Rym; Wittenberg, Andrew T.; Stern, Bill; Gudgel, Rich; Zeng, Fanrong

    2018-05-01

    The Caribbean low-level jet (CLLJ) is an important component of the atmospheric circulation over the Intra-Americas Sea (IAS) which impacts the weather and climate both locally and remotely. It influences the rainfall variability in the Caribbean, Central America, northern South America, the tropical Pacific and the continental United States through the transport of moisture. We make use of high-resolution coupled and uncoupled models from the Geophysical Fluid Dynamics Laboratory (GFDL) to investigate the simulation of the CLLJ and its teleconnections and further compare with low-resolution models. The high-resolution coupled model FLOR shows improvements in the simulation of the CLLJ and its teleconnections with rainfall and SST over the IAS compared to the low-resolution coupled model CM2.1. The CLLJ is better represented in uncoupled models (AM2.1 and AM2.5) forced with observed sea-surface temperatures (SSTs), emphasizing the role of SSTs in the simulation of the CLLJ. Further, we determine the forecast skill for observed rainfall using both high- and low-resolution predictions of rainfall and SSTs for the July-August-September season. We determine the role of statistical correction of model biases, coupling and horizontal resolution on the forecast skill. Statistical correction dramatically improves area-averaged forecast skill. But the analysis of spatial distribution in skill indicates that the improvement in skill after statistical correction is region dependent. Forecast skill is sensitive to coupling in parts of the Caribbean, Central and northern South America, and it is mostly insensitive over North America. Comparison of forecast skill between high and low-resolution coupled models does not show any dramatic difference. However, uncoupled models show improvement in the area-averaged skill in the high-resolution atmospheric model compared to the lower-resolution model. Understanding and improving the forecast skill over the IAS has important implications

  19. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    Science.gov (United States)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to combined effects of tides, surges and river discharges. Cork city on Ireland's southwest coast is a study case. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells combined with a tailored adaptive interpolation technique creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. The nested MSN_Flood model through dynamic downscaling facilitates significant improvements in accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.
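
    One ingredient of such nesting, transferring the parent-grid solution onto the ghost cells of the child grid, can be illustrated with a simple interpolation sketch. This is only a generic parent-to-child transfer under assumed array layouts; MSN_Flood's tailored adaptive interpolation and its moving wet/dry boundary handling are more elaborate.

```python
import numpy as np

# Stripped-down sketch of a parent-to-child ghost-cell transfer: parent-grid
# water levels along the nest boundary are interpolated to child resolution.
def ghost_cell_levels(parent_levels, ratio):
    """Linearly interpolate a 1-D line of parent water levels to child cells.

    parent_levels : water levels in the parent cells adjacent to the nest
    ratio         : nesting ratio (e.g. 3 for a 6 m parent and a 2 m child)
    """
    n_child = len(parent_levels) * ratio
    # child-cell centres expressed in parent-cell index coordinates
    x_child = (np.arange(n_child) + 0.5) / ratio - 0.5
    x_parent = np.arange(len(parent_levels), dtype=float)
    return np.interp(x_child, x_parent, parent_levels)

# Example: a 6 m parent row mapped onto 2 m ghost cells of the child grid.
print(ghost_cell_levels(np.array([2.0, 2.1, 2.4, 2.2]), ratio=3))
```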

  20. On globally static and stationary cosmologies with or without a cosmological constant and the dark energy problem

    International Nuclear Information System (INIS)

    Buchert, Thomas

    2006-01-01

    In the framework of spatially averaged inhomogeneous cosmologies in classical general relativity, effective Einstein equations govern the regional and the global dynamics of averaged scalar variables of cosmological models. A particular solution may be characterized by a cosmic equation of state. In this paper, it is pointed out that a globally static averaged dust model is conceivable without employing a compensating cosmological constant. Much in the spirit of Einstein's original model we discuss consequences for the global, but also for the regional properties of this cosmology. We then consider the wider class of globally stationary cosmologies that are conceivable in the presented framework. All these models are based on exact solutions of the averaged Einstein equations and provide examples of cosmologies in an out-of-equilibrium state, which we characterize by an information-theoretical measure. It is shown that such cosmologies preserve high-magnitude kinematical fluctuations and so tend to maintain their global properties. The same is true for a Λ-driven cosmos in such a state despite exponential expansion. We outline relations to inflationary scenarios and put the dark energy problem into perspective. Here, it is argued, on the grounds of the discussed cosmologies, that a classical explanation of dark energy through backreaction effects is theoretically conceivable, if the matter-dominated universe emerged from a non-perturbative state in the vicinity of the stationary solution. We also discuss a number of caveats that furnish strong counter arguments in the framework of structure formation in a perturbed Friedmannian model

  1. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials
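
    The statement that scattering-induced blur diminishes with momentum can be illustrated with the standard Highland parameterization of the RMS multiple-scattering angle (as tabulated by the Particle Data Group); the material, thickness and momenta below are illustrative choices, not values taken from the record.

```python
import numpy as np

# Back-of-the-envelope illustration of multiple-scattering blur versus muon
# momentum, using the Highland formula; copper thickness is a made-up example.
M_MU = 105.66           # muon mass, MeV/c^2
X0_COPPER_CM = 1.44     # approximate radiation length of copper, cm

def highland_theta0(p_mev, thickness_cm, x0_cm=X0_COPPER_CM):
    """RMS projected scattering angle (radians) for a muon of momentum p."""
    beta = p_mev / np.sqrt(p_mev**2 + M_MU**2)
    t = thickness_cm / x0_cm
    return 13.6 / (beta * p_mev) * np.sqrt(t) * (1.0 + 0.038 * np.log(t))

for p in (1000.0, 3000.0, 10000.0):          # 1, 3, 10 GeV/c
    theta = highland_theta0(p, thickness_cm=10.0)
    print(f"p = {p/1000:.0f} GeV/c  ->  theta0 ~ {theta*1e3:.1f} mrad")
```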

  2. Smoot Cosmology Group

    Science.gov (United States)

    Nobelist George Smoot to Direct Korean Cosmology Institute: Nobel Laureate George Smoot has been appointed director of a new cosmology institute in South Korea that will work closely with the year-old Berkeley [...]. The Institute for the Early Universe (IEU) at EWHA Womans University in Seoul, Korea, will provide cosmology education

  3. Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls

    Science.gov (United States)

    Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo

    2016-08-01

    This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use conventional 10 km and high-resolution 2 km spatial resolutions. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km²) and applied with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m³ s⁻¹, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with similar amplitude to observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall of the Kasahori dam is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.
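
    The position-shifting step can be sketched as a small search over spatial shifts of each ensemble member's rainfall field, keeping the shift that maximizes the catchment-averaged cumulative rainfall. Array shapes, names and the shift search range below are illustrative assumptions.

```python
import numpy as np

# Sketch of the position-shifting idea described above: shift an ensemble
# member's rainfall field so the catchment-averaged cumulative rainfall over
# the dam catchment is maximized. Names and ranges are illustrative.
def best_shift(rain_txy, catchment_mask, max_shift=5):
    """Return the (di, dj) grid shift maximizing catchment-mean cumulative rain.

    rain_txy       : rainfall array with shape (time, ny, nx)
    catchment_mask : boolean array (ny, nx), True inside the catchment
    """
    cumulative = rain_txy.sum(axis=0)
    best, best_val = (0, 0), -np.inf
    for di in range(-max_shift, max_shift + 1):
        for dj in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(cumulative, di, axis=0), dj, axis=1)
            val = shifted[catchment_mask].mean()
            if val > best_val:
                best, best_val = (di, dj), val
    return best, best_val
```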

  4. THE CHALLENGE OF THE LARGEST STRUCTURES IN THE UNIVERSE TO COSMOLOGY

    International Nuclear Information System (INIS)

    Park, Changbom; Choi, Yun-Young; Kim, Sungsoo S.; Kim, Kap-Sung; Kim, Juhan; Gott III, J. Richard

    2012-01-01

    Large galaxy redshift surveys have long been used to constrain cosmological models and structure formation scenarios. In particular, the largest structures discovered observationally are thought to carry critical information on the amplitude of large-scale density fluctuations or homogeneity of the universe, and have often challenged the standard cosmological framework. The Sloan Great Wall (SGW) recently found in the Sloan Digital Sky Survey (SDSS) region casts doubt on the concordance cosmological model with a cosmological constant (i.e., the flat ΛCDM model). Here we show that the existence of the SGW is perfectly consistent with the ΛCDM model, a result that only our very large cosmological N-body simulation (the Horizon Run 2, HR2) could supply. In addition, we report on the discovery of a void complex in the SDSS much larger than the SGW, and show that such size of the largest void is also predicted in the ΛCDM paradigm. Our results demonstrate that an initially homogeneous isotropic universe with primordial Gaussian random phase density fluctuations growing in accordance with the general relativity can explain the richness and size of the observed large-scale structures in the SDSS. Using the HR2 simulation we predict that a future galaxy redshift survey about four times deeper or with 3 mag fainter limit than the SDSS should reveal a largest structure of bright galaxies about twice as big as the SGW.

  5. Changes in snow cover over China in the 21st century as simulated by a high resolution regional climate model

    International Nuclear Information System (INIS)

    Shi Ying; Gao Xuejie; Wu Jia; Giorgi, Filippo

    2011-01-01

    On the basis of the climate change simulations conducted using a high resolution regional climate model, the Abdus Salam International Centre for Theoretical Physics (ICTP) Regional Climate Model, RegCM3, at 25 km grid spacing, future changes in snow cover over China are analyzed. The simulations are carried out for the period of 1951–2100 following the IPCC SRES A1B emission scenario. The results suggest good performances of the model in simulating the number of snow cover days and the snow cover depth, as well as the starting and ending dates of snow cover to the present day (1981–2000). Their spatial distributions and amounts show fair consistency between the simulation and observation, although with some discrepancies. In general, decreases in the number of snow cover days and the snow cover depth, together with postponed snow starting dates and advanced snow ending dates, are simulated for the future, except in some places where the opposite appears. The most dramatic changes are found over the Tibetan Plateau among the three major snow cover areas of Northeast, Northwest and the Tibetan Plateau in China.

  6. Extending cosmology: the metric approach

    OpenAIRE

    Mendoza, S.

    2012-01-01

    Comment: 2012, Extending Cosmology: The Metric Approach, Open Questions in Cosmology; Review article for an Intech "Open questions in cosmology" book chapter (19 pages, 3 figures). Available from: http://www.intechopen.com/books/open-questions-in-cosmology/extending-cosmology-the-metric-approach

  7. Cosmological Results from High-z Supernovae

    Science.gov (United States)

    Tonry, John L.; Schmidt, Brian P.; Barris, Brian; Candia, Pablo; Challis, Peter; Clocchiatti, Alejandro; Coil, Alison L.; Filippenko, Alexei V.; Garnavich, Peter; Hogan, Craig; Holland, Stephen T.; Jha, Saurabh; Kirshner, Robert P.; Krisciunas, Kevin; Leibundgut, Bruno; Li, Weidong; Matheson, Thomas; Phillips, Mark M.; Riess, Adam G.; Schommer, Robert; Smith, R. Chris; Sollerman, Jesper; Spyromilio, Jason; Stubbs, Christopher W.; Suntzeff, Nicholas B.

    2003-09-01

    The High-z Supernova Search Team has discovered and observed eight new supernovae in the redshift interval z = 0.3-1.2. These independent observations, analyzed by similar but distinct methods, confirm the results of Riess and Perlmutter and coworkers that supernova luminosity distances imply an accelerating universe. More importantly, they extend the redshift range of consistently observed Type Ia supernovae (SNe Ia) to z ~ 1, where the signature of cosmological effects has the opposite sign of some plausible systematic effects. Consequently, these measurements not only provide another quantitative confirmation of the importance of dark energy, but also constitute a powerful qualitative test for the cosmological origin of cosmic acceleration. We find a rate for SNe Ia of (1.4 +/- 0.5) × 10^-4 h^3 Mpc^-3 yr^-1 at a mean redshift of 0.5. We present distances and host extinctions for 230 SNe Ia. These place the following constraints on cosmological quantities: if the equation of state parameter of the dark energy is w = -1, then H_0 t_0 = 0.96 +/- 0.04, and Ω_Λ - 1.4 Ω_M = 0.35 +/- 0.14. Including the constraint of a flat universe, we find Ω_M = 0.28 +/- 0.05, independent of any large-scale structure measurements. Adopting a prior based on the Two Degree Field (2dF) Redshift Survey constraint on Ω_M and assuming a flat universe, we find that the equation of state parameter of the dark energy lies in the range -1.48 < w < [...]. If we further assume w > -1, we obtain w < [...].
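
    Constraints of this kind are derived by comparing measured distance moduli with the luminosity distance of a flat universe containing matter and a constant-w dark-energy component. The standard relation used in such analyses (not an equation reproduced from the abstract) is

```latex
\[
  d_L(z) = \frac{c\,(1+z)}{H_0}\int_0^{z}
  \frac{dz'}{\sqrt{\Omega_M (1+z')^3 + \Omega_{\mathrm{DE}}\,(1+z')^{3(1+w)}}},
  \qquad
  \mu = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right).
\]
```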

  8. Elements of the universe in Philo’s De Vita Mosis: Cosmological theology or theological cosmology?

    Directory of Open Access Journals (Sweden)

    Gert J. Steyn

    2013-11-01

    Full Text Available It is the intention of this article to investigate how Philo’s understanding of the universe, and particularly its four basic elements as taught by the Greek philosophers, influenced his description of the God of Israel’s world in which the Moses narrative unfolds. Given the fact that Philo was a theologian par excellence, the question can be asked whether Philo’s approach is closer to what one might call ‘theological cosmology’ or rather closer to ‘cosmological theology’? After a brief survey of Philo’s inclination to interpret Jewish history in the light of Greek cosmology, the study proceeds with his universe as symbolised in the high priest’s vestments. The τετρακτύς with its 10 points of harmony is a key to Philo’s symbolism and numerology. The article concludes that Philo is not writing cosmology per se in his De Vita Mosis, but he is rather writing a theology that sketches the cosmic superiority and involvement of Israel’s God against the backdrop of Greek cosmology as it was influenced by Pythagoras’ geometry and numerology as well as by Plato’s philosophy. In this sense his account in the De Vita Mosis is closer to a cosmological theology. He utilises the cosmological picture of the Greco-Hellenistic world in order to introduce and present the powerful nature and qualities of Israel’s God.

  9. Resolution of a cosmological paradox using concepts from general relativity theory

    International Nuclear Information System (INIS)

    Silverman, A.N.

    1986-01-01

    According to the big bang theory, the universe began about 15 billion years ago and has been continually expanding ever since. If certain elementary physical concepts are naively applied to this cosmological theory, it can lead to a paradox in which distant astronomical objects seem to have lain at distances from the Earth larger than the possible size of the universe. The paradox is resolved by using concepts from general relativity theory. These concepts may appear startling to some readers

  10. High Resolution Modeling of Hurricanes in a Climate Context

    Science.gov (United States)

    Knutson, T. R.

    2007-12-01

    Modeling of tropical cyclone activity in a climate context initially focused on simulation of relatively weak tropical storm-like disturbances as resolved by coarse grid (200 km) global models. As computing power has increased, multi-year simulations with global models of grid spacing 20-30 km have become feasible. Increased resolution also allowed for simulation of storms of increasing intensity, and some global models generate storms of hurricane strength, depending on their resolution and other factors, although detailed hurricane structure is not simulated realistically. Results from some recent high resolution global model studies are reviewed. An alternative for hurricane simulation is regional downscaling. An early approach was to embed an operational (GFDL) hurricane prediction model within a global model solution, either for 5-day case studies of particular model storm cases, or for "idealized experiments" where an initial vortex is inserted into idealized environments derived from global model statistics. Using this approach, hurricanes up to category five intensity can be simulated, owing to the model's relatively high resolution (9 km grid) and refined physics. Variants on this approach have been used to provide modeling support for theoretical predictions that greenhouse warming will increase the maximum intensities of hurricanes. These modeling studies also simulate increased hurricane rainfall rates in a warmer climate. The studies do not address hurricane frequency issues, and vertical shear is neglected in the idealized studies. A recent development is the use of regional model dynamical downscaling for extended (e.g., season-length) integrations of hurricane activity. In a study for the Atlantic basin, a non-hydrostatic model with grid spacing of 18 km is run without convective parameterization, but with internal spectral nudging toward observed large-scale (basin wavenumbers 0-2) atmospheric conditions from reanalyses. Using this approach, our

  11. Unimodular-mimetic cosmology

    International Nuclear Information System (INIS)

    Nojiri, S; Odintsov, S D; Oikonomou, V K

    2016-01-01

    We combine the unimodular gravity and mimetic gravity theories into a unified theoretical framework, which may assist in the discussion of, and search for, a solution to the cosmological constant problem and the dark matter issue. After providing the formulation of the unimodular mimetic gravity and investigating all the new features that the vacuum unimodular gravity implies, by using the underlying reconstruction method, we realize some well-known cosmological evolutions, with some of these being exotic for the ordinary Einstein–Hilbert gravity. Specifically we provide the vacuum unimodular mimetic gravity description of the de Sitter cosmology and of the perfect fluid with constant equation of state cosmology. As we demonstrate, these cosmologies can be realized by vacuum mimetic unimodular gravity, without the existence of any matter fluid source. Moreover, we investigate how cosmologically viable cosmologies, which are compatible with the recent observational data, can be realized by the vacuum unimodular mimetic gravity. Since in some cases a graceful exit from inflation problem might exist, we provide a qualitative description of the mechanism that can potentially generate the graceful exit from inflation in these theories, by searching for the unstable de Sitter solutions in the context of unimodular mimetic theories of gravity. (paper)

  12. Cosmological constant problem

    International Nuclear Information System (INIS)

    Weinberg, S.

    1989-01-01

    The cosmological constant problem is discussed and the history of the problem is briefly reviewed. Five different approaches to its solution are described: supersymmetry, supergravity and superstrings; the anthropic approach; a mechanism of Lagrangian alignment; modification of the theory of gravitation; and quantum cosmology. It is noted that the approach based on quantum cosmology is the most promising one

  13. Introduction to cosmology

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit

    2001-01-01

    Cosmology and particle physics have enjoyed a useful relationship over the entire histories of both subjects. Today, ideas and techniques in cosmology are frequently used to elucidate and constrain theories of elementary particles. These lectures give an elementary overview of the essential elements of cosmology, which is necessary to understand this relationship.

  14. Introduction to cosmology

    CERN Multimedia

    CERN. Geneva

    1999-01-01

    Cosmology and particle physics have enjoyed a useful relationship over the entire histories of both subjects. Today, ideas and techniques in cosmology are frequently used to elucidate and constrain theories of elementary particles. These lectures give an elementary overview of the essential elements of cosmology, which is necessary to understand this relationship.

  15. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Directory of Open Access Journals (Sweden)

    C. M. R. Mateo

    2017-10-01

    Full Text Available Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash–Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  16. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Science.gov (United States)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  17. High-resolution 3D X-ray imaging of intracranial nitinol stents

    International Nuclear Information System (INIS)

    Snoeren, Rudolph M.; With, Peter H.N. de; Soederman, Michael; Kroon, Johannes N.; Roijers, Ruben B.; Babic, Drazenko

    2012-01-01

    To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out. Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied X-ray imaging parameters in a commercial interventional X-ray system to set 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate contrast of the stents while keeping absorbed dose below recommended values. Two detector formats were used, paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High contrast spatial resolution was assessed with a CT phantom. We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium-containing stents. A high spatial resolution larger than 2.1 lp/mm allows struts to be visualized. We obtained images of stents of various brands and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are shown to be lower than 50 mGy Computed Tomography Dose Index (CTDI). By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations. (orig.)

  18. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    Science.gov (United States)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the end of 60- meter wire booms to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn however, might induce boom oscillations that could impact science measurements if oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we will discuss the implementation of a high resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data was obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position and subsequently, data processing algorithms resulted in excellent spatial and temporal resolutions. This method was repeated in a parametric study for various lengths, root tensions and vacuum levels. For all data sets, regression models for damping were applied, including: nonlinear viscous, frequency-independent hysteretic, coulomb and some combination of them. Our data analysis and dynamics models have shown that the intrinsic damping for the baseline boom is insufficient, thereby forcing project management to explore mitigation strategies.
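
    The study applied several damping regression models (nonlinear viscous, frequency-independent hysteretic, Coulomb and combinations); the sketch below shows only the simplest case, a logarithmic-decrement estimate of the damping ratio from successive peak amplitudes of a free decay, on synthetic placeholder data rather than MMS test results.

```python
import numpy as np

# Extracting a damping ratio from the free angular decay of a suspended boom,
# assuming a lightly damped oscillator theta(t) = exp(-zeta*wn*t)*cos(wd*t).
# All numbers below are illustrative placeholders.
def log_decrement_zeta(peak_amplitudes):
    """Estimate the damping ratio from successive oscillation peak amplitudes."""
    deltas = np.log(peak_amplitudes[:-1] / peak_amplitudes[1:])
    delta = deltas.mean()
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Synthetic decay with a known damping ratio to check the estimator.
zeta_true, omega_n = 2e-3, 2.0 * np.pi * 0.1        # hypothetical values
t = np.arange(0.0, 2000.0, 0.05)
omega_d = omega_n * np.sqrt(1.0 - zeta_true**2)
theta = np.exp(-zeta_true * omega_n * t) * np.cos(omega_d * t)

# locate local maxima (the positive peaks of the decaying oscillation)
idx = np.where((theta[1:-1] > theta[:-2]) & (theta[1:-1] > theta[2:]))[0] + 1
print("estimated zeta:", log_decrement_zeta(theta[idx]))
```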

  19. The Dirac-Milne cosmology

    Science.gov (United States)

    Benoit-Lévy, Aurélien; Chardin, Gabriel

    2014-05-01

    We study an unconventional cosmology, in which we investigate the consequences that antigravity would pose to cosmology. We present the main characteristics of the Dirac-Milne Universe, a cosmological model where antimatter has a negative active gravitational mass. In this non-standard Universe, separate domains of matter and antimatter coexist at our epoch without annihilation, separated by a gravitationally induced depletion zone. We show that this cosmology does not require a priori the Dark Matter and Dark Energy components of the standard model of cosmology. Additionally, inflation becomes an unnecessary ingredient. Investigating this model, we show that the classical cosmological tests such as primordial nucleosynthesis, Type Ia supernovæ and Cosmic Microwave Background are surprisingly concordant.

  20. A feasibility study of PETiPIX: an ultra high resolution small animal PET scanner

    Science.gov (United States)

    Li, K.; Safavi-Naeini, M.; Franklin, D. R.; Petasecca, M.; Guatelli, S.; Rosenfeld, A. B.; Hutton, B. F.; Lerch, M. L. F.

    2013-12-01

    PETiPIX is an ultra high spatial resolution positron emission tomography (PET) scanner designed for imaging mouse brains. Four Timepix pixellated silicon detector modules are placed in an edge-on configuration to form a scanner with a field of view (FoV) 15 mm in diameter. Each detector module consists of 256 × 256 pixels with dimensions of 55 × 55 × 300 μm³. Monte Carlo simulations using the GEANT4 Application for Tomographic Emission (GATE) were performed to evaluate the feasibility of the PETiPIX design, including estimation of system sensitivity, angular dependence, spatial resolution (point source, hot and cold phantom studies) and evaluation of potential detector shield designs. Initial experimental work also established that scattered photons and recoil electrons could be detected using a single edge-on Timepix detector with a positron source. Simulation results estimate a spatial resolution of 0.26 mm full width at half maximum (FWHM) at the centre of the FoV and an overall spatial resolution of 0.29 mm FWHM, with a sensitivity of 0.01%, and indicate that a 1.5 mm thick tungsten shield parallel to the detectors will absorb the majority of non-coplanar annihilation photons, significantly reducing the rate of randoms. Results from the simulated phantom studies demonstrate that PETiPIX is a promising design for studies demanding high resolution images of mouse brains.
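
    Spatial resolution figures like the 0.26 mm FWHM quoted above are typically obtained by fitting a profile through the reconstructed point-source image; a minimal sketch of that step is given below, with synthetic data standing in for the GATE output.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def fwhm_from_profile(positions_mm, counts):
    """Fit a Gaussian to a 1D profile through a point-source image and return its FWHM."""
    p0 = [counts.max(), positions_mm[np.argmax(counts)], 0.2]
    popt, _ = curve_fit(gaussian, positions_mm, counts, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2.355 * sigma

# Synthetic profile with ~0.26 mm FWHM (sigma ~ 0.11 mm), for illustration only.
x = np.linspace(-1.0, 1.0, 81)
y = gaussian(x, 100.0, 0.0, 0.11) + np.random.default_rng(1).normal(0, 1.0, x.size)
print(f"FWHM = {fwhm_from_profile(x, y):.3f} mm")
```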

  1. Implications of a decay law for the cosmological constant in higher dimensional cosmology and cosmological wormholes

    International Nuclear Information System (INIS)

    Rami, El-Nabulsi Ahmad

    2009-01-01

    Higher dimensional cosmological implications of a decay law for the cosmological constant term are analyzed. Three independent cosmological models are explored, mainly: 1) In the first model, the effective cosmological constant was chosen to decay with time like Λ_effective = C a^{-2} + D (b/a_I)^2, where a_I is an arbitrary scale factor characterizing the isotropic epoch which proceeds the graceful exit period. Further, the extra-dimensional scale factor decays classically like b(t) ≈ a^x(t), where x is a real negative number. 2) In the second model, we adopt in addition to Λ_effective = C a^{-2} + D (b/a_I)^2 the phenomenological law b(t) = a(t) exp(-Qt), as we expect that at the origin of time there is no distinction between the visible and extra dimensions; Q is a real number. 3) In the third model, we study a Λ-decaying extra-dimensional cosmology with a static traversable wormhole in which the four-dimensional Friedmann-Robertson-Walker spacetime is subject to the conventional perfect fluid while the extra-dimensional part is endowed with an exotic fluid violating the strong energy condition, and where the cosmological constant in (3+n+1) dimensions is assumed to decay like Λ(a) = 3C a^{-2}. The three models are discussed and explored in some detail, where many interesting points are revealed. (author)
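
    For orientation, a decay law of the form Λ ∝ a⁻² enters the ordinary four-dimensional Friedmann equation like an extra curvature-type term; the short sketch below makes that explicit in standard FRW notation. It is a simplified illustration, not the paper's higher-dimensional field equations.

```latex
% Friedmann equation with a decaying cosmological term \Lambda(a) = 3 C a^{-2}
% (as in the third model above), written in four-dimensional FRW notation:
\[
  H^2 \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^2} \;+\; \frac{\Lambda(a)}{3}
        \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k - C}{a^2},
\]
% so a \Lambda \propto a^{-2} decay acts as a shift of the effective spatial
% curvature from k to k - C, which is why such laws are dynamically mild at late times.
```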

  2. BOOK REVIEW: Observational Cosmology Observational Cosmology

    Science.gov (United States)

    Howell, Dale Andrew

    2013-04-01

    Observational Cosmology by Stephen Serjeant fills a niche that was underserved in the textbook market: an up-to-date, thorough cosmology textbook focused on observations, aimed at advanced undergraduates. Not everything about the book is perfect - some subjects get short shrift, in some cases jargon dominates, and there are too few exercises. Still, on the whole, the book is a welcome addition. For decades, the classic textbooks of cosmology have focused on theory. But for every Sunyaev-Zel'dovich effect there is a Butcher-Oemler effect; there are as many cosmological phenomena established by observations, and only explained later by theory, as there were predicted by theory and confirmed by observations. In fact, in the last decade, there has been an explosion of new cosmological findings driven by observations. Some are so new that you won't find them mentioned in books just a few years old. So it is not just refreshing to see a book that reflects the new realities of cosmology, it is vital, if students are to truly stay up on a field that has widened in scope considerably. Observational Cosmology is filled with full-color images, and graphs from the latest experiments. How exciting it is that we live in an era where satellites and large experiments have gathered so much data to reveal astounding details about the origin of the universe and its evolution. To have all the latest data gathered together and explained in one book will be a revelation to students. In fact, at times it was to me. I've picked up modern cosmological knowledge through a patchwork of reading papers, going to colloquia, and serving on grant and telescope allocation panels. To go back and see them explained from square one, and summarized succinctly, filled in quite a few gaps in my own knowledge and corrected a few misconceptions I'd acquired along the way. To make room for all these graphs and observational details, a few things had to be left out. For one, there are few derivations

  3. The Cosmological Dependence of Galaxy Cluster Morphologies

    Science.gov (United States)

    Crone, Mary Margaret

    1995-01-01

    Measuring the density of the universe has been a fundamental problem in cosmology ever since the "Big Bang" model was developed over sixty years ago. In this simple and successful model, the age and eventual fate of the universe are determined by its density, its rate of expansion, and the value of a universal "cosmological constant". Analytic models suggest that many properties of galaxy clusters are sensitive to cosmological parameters. In this thesis, I use N-body simulations to examine cluster density profiles, abundances, and degree of subclustering to test the feasibility of using them as cosmological tests. The dependence on both cosmology and initial density field is examined, using a grid of cosmologies and scale-free initial power spectra P(k) ∝ k^n. Einstein-de Sitter (Ω_0 = 1), open (Ω_0 = 0.2 and 0.1) and flat, low-density (Ω_0 = 0.2, λ_0 = 0.8) models are studied, with initial spectral indices n = -2, -1 and 0. Of particular interest are the results for cluster profiles and substructure. The average density profiles are well fit by a power law ρ(r) ∝ r^{-α} for radii where the local density contrast is between 100 and 3000. There is a clear trend toward steeper slopes with both increasing n and decreasing Ω_0, with profile slopes in the open models consistently higher than Ω_0 = 1 values for the range of n examined. The amount of substructure in each model is quantified and explained in terms of cluster merger histories and the behavior of substructure statistics. The statistic which best distinguishes models is a very simple measure of deviations from symmetry in the projected mass distribution -- the "Center-of-Mass Shift" as a function of overdensity. Some statistics which are quite sensitive to substructure perform relatively poorly as cosmological indicators. Density profiles and the Center-of-Mass test are both well-suited for comparison with weak lensing data and galaxy distributions. Such data are currently being collected and should
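
    The "Center-of-Mass Shift" statistic described above is conceptually simple; the sketch below shows one plausible implementation for an N-body particle snapshot, with the overdensity-threshold selection, arrays and names chosen for illustration rather than taken from the thesis.

```python
import numpy as np

def center_of_mass_shift(positions, densities, thresholds):
    """Offset between the centre of mass of particles above each overdensity
    threshold and the global centre of mass of the cluster particles.

    positions  : (N, 3) particle coordinates (equal-mass particles assumed)
    densities  : (N,) local overdensity estimate per particle
    thresholds : iterable of overdensity cuts
    """
    global_com = positions.mean(axis=0)
    shifts = []
    for thr in thresholds:
        sel = densities >= thr
        com = positions[sel].mean(axis=0)
        shifts.append(np.linalg.norm(com - global_com))
    return np.array(shifts)

# Illustrative call on a fake cluster of 10^4 particles.
rng = np.random.default_rng(2)
pos = rng.normal(size=(10_000, 3))
dens = np.exp(-np.sum(pos**2, axis=1))    # toy density falling with radius
print(center_of_mass_shift(pos, dens, thresholds=[0.1, 0.3, 0.5]))
```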

  4. Encyclopedia of cosmology historical, philosophical, and scientific foundations of modern cosmology

    CERN Document Server

    Hetherington, Norriss S

    2014-01-01

    The Encyclopedia of Cosmology, first published in 1993, recounts the history, philosophical assumptions, methodological ambiguities, and human struggles that have influenced the various responses to the basic questions of cosmology through the ages, as well as referencing important scientific theories. Just as the recognition of social conventions in other cultures can lead to a more productive perspective on our own behaviour, so too a study of the cosmologies of other times and places can enable us to recognise elements of our own cosmology that might otherwise pass as inevitable developments.

  5. Neutrino mass constraints from joint cosmological probes.

    Science.gov (United States)

    Kwan, Juliana

    2018-01-01

    One of the most promising avenues to come from precision cosmology is the measurement of the sum of neutrino masses in the next 5-10 years. Ongoing imaging surveys, such as the Dark Energy Survey and the Hyper Suprime Cam survey, will cover a substantial volume of the sky and when combined with existing spectroscopic data, are expected to deliver a definitive measurement in the near future. But it is important that the accuracy of theoretical predictions matches the precision of the observational data so that the neutrino mass signal can be properly detected without systematic error. To this end, we have run a suite of high precision, large volume cosmological N-body simulations containing massive neutrinos to quantify their effect on probes of large scale structure such as weak lensing and galaxy clustering. In this talk, I will describe the analytical tools that we have developed to extract the neutrino mass that are capable of fully utilizing the non-linear regime of structure formation. These include predictions for the bias in the clustering of dark matter halos (one of the fundamental ingredients of the halo model) with an error of only a few percent.

  6. Higgs cosmology.

    Science.gov (United States)

    Rajantie, Arttu

    2018-03-06

    The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available. This article is part of the Theo Murphy meeting issue 'Higgs cosmology'. © 2018 The Author(s).

  7. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have also reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley-Leverett (BL) analytical solution without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative and numerical examples study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted into visual
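
    Of the high-resolution schemes listed above, the SUPERBEE flux limiter is the simplest to state; a minimal sketch of the limiter function and its use in a limited second-order face reconstruction is shown below. The surrounding solver structure and the saturation profile are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def superbee(r):
    """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    r = np.asarray(r, dtype=float)
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def limited_face_value(u_left, u_center, u_right):
    """Second-order reconstruction of u at the right face of the centre cell,
    limited with SUPERBEE to suppress spurious oscillations at shocks."""
    eps = 1e-12
    r = (u_center - u_left) / (u_right - u_center + eps)   # smoothness ratio
    return u_center + 0.5 * superbee(r) * (u_right - u_center)

# Example: a saturation profile with a sharp displacement front.
u = np.array([1.0, 1.0, 0.9, 0.2, 0.0, 0.0])
faces = [limited_face_value(u[i - 1], u[i], u[i + 1]) for i in range(1, len(u) - 1)]
print(np.round(faces, 3))
```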

  8. Orbits of massive satellite galaxies - II. Bayesian estimates of the Milky Way and Andromeda masses using high-precision astrometry and cosmological simulations

    Science.gov (United States)

    Patel, Ekta; Besla, Gurtina; Mandel, Kaisey

    2017-07-01

    In the era of high-precision astrometry, space observatories like the Hubble Space Telescope (HST) and Gaia are providing unprecedented 6D phase-space information of satellite galaxies. Such measurements can shed light on the structure and assembly history of the Local Group, but improved statistical methods are needed to use them efficiently. Here we illustrate such a method using analogues of the Local Group's two most massive satellite galaxies, the Large Magellanic Cloud (LMC) and Triangulum (M33), from the Illustris dark-matter-only cosmological simulation. We use a Bayesian inference scheme combining measurements of positions, velocities and specific orbital angular momenta (j) of the LMC/M33 with importance sampling of their simulated analogues to compute posterior estimates of the Milky Way (MW) and Andromeda's (M31) halo masses. We conclude that the resulting host halo mass is more susceptible to bias when using measurements of the current position and velocity of satellites, especially when satellites are at short-lived phases of their orbits (i.e. at pericentre). Instead, the j value of a satellite is well conserved over time and provides a more reliable constraint on host mass. The inferred virial mass of the MW (M31) using j of the LMC (M33) is M_vir,MW = 1.02^{+0.77}_{-0.55} × 10^12 M⊙ (M_vir,M31 = 1.37^{+1.39}_{-0.75} × 10^12 M⊙). Choosing simulated analogues whose j values are consistent with the conventional picture of a previous (<3 Gyr ago), close encounter (<100 kpc) of M33 about M31 results in a very low virial mass for M31 (~10^12 M⊙). This supports the new scenario put forth in Patel, Besla & Sohn, wherein M33 is on its first passage about M31 or on a long-period orbit. We conclude that this Bayesian inference scheme, utilizing satellite j, is a promising method to reduce the current factor of 2 spread in the mass range of the MW and M31. This method is easily adaptable to include additional satellites as new 6D
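
    The Bayesian scheme described above weights simulated host-satellite analogues by how well they reproduce the measured orbital angular momentum; the sketch below shows the bare importance-sampling step for a posterior host-mass estimate. The arrays, the Gaussian likelihood and the variable names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def posterior_host_mass(sim_host_mass, sim_j, j_obs, j_err):
    """Importance-sample simulated analogues with a Gaussian likelihood in
    specific orbital angular momentum j, returning a weighted posterior
    summary of the host virial mass."""
    weights = np.exp(-0.5 * ((sim_j - j_obs) / j_err) ** 2)
    weights /= weights.sum()
    mean = np.sum(weights * sim_host_mass)
    # Weighted 16th/50th/84th percentiles via the weighted CDF.
    order = np.argsort(sim_host_mass)
    cdf = np.cumsum(weights[order])
    percentiles = np.interp([0.16, 0.5, 0.84], cdf, sim_host_mass[order])
    return mean, percentiles

# Toy analogue catalogue (masses in 1e12 Msun, j in arbitrary units).
rng = np.random.default_rng(3)
m_host = rng.lognormal(mean=0.0, sigma=0.4, size=5000)
j_sim = 2.0 * m_host ** 0.3 + rng.normal(0, 0.3, m_host.size)   # fake j-mass trend
print(posterior_host_mass(m_host, j_sim, j_obs=2.2, j_err=0.2))
```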

  9. Neutrino cosmology

    CERN Document Server

    Lesgourgues, Julien; Miele, Gennaro; Pastor, Sergio

    2013-01-01

    The role that neutrinos have played in the evolution of the Universe is the focus of one of the most fascinating research areas that has stemmed from the interplay between cosmology, astrophysics and particle physics. In this self-contained book, the authors bring together all aspects of the role of neutrinos in cosmology, spanning from leptogenesis to primordial nucleosynthesis, their role in CMB and structure formation, to the problem of their direct detection. The book starts by guiding the reader through aspects of fundamental neutrino physics, such as the standard cosmological model and the statistical mechanics in the expanding Universe, before discussing the history of neutrinos in chronological order from the very early stages until today. This timely book will interest graduate students and researchers in astrophysics, cosmology and particle physics, who work with either a theoretical or experimental focus.

  10. Investigating the physics and environment of Lyman limit systems in cosmological simulations

    Science.gov (United States)

    Erkal, Denis

    2015-07-01

    In this work, I investigate the properties of Lyman limit systems (LLSs) using state-of-the-art zoom-in cosmological galaxy formation simulations with on-the-fly radiative transfer, which includes both the cosmic UV background (UVB) and local stellar sources. I compare the simulation results to observations of the incidence frequency of LLSs and the H I column density distribution function over the redshift range z = 2-5 and find good agreement. I explore the connection between LLSs and their host haloes and find that LLSs reside in haloes with a wide range of halo masses with a nearly constant covering fraction within a virial radius. Over the range z = 2-5, I find that more than half of the LLSs reside in haloes with M test a simple model which encapsulates many of their properties. I confirm that LLSs have a characteristic absorption length given by the Jeans length and that they are in photoionization equilibrium at low column densities. Finally, I investigate the self-shielding of LLSs to the UVB and explore how the non-sphericity of LLSs affects the photoionization rate at a given N_HI. I find that at z ≈ 3, LLSs have an optical depth of unity at a column density of ~10^18 cm^-2 and that this is the column density which characterizes the onset of self-shielding.
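
    The τ = 1 self-shielding column quoted above follows from the simple relation τ = N_HI σ; the sketch below evaluates it for the hydrogen photoionization cross-section at the Lyman limit (σ ≈ 6.3 × 10⁻¹⁸ cm²). The larger ~10¹⁸ cm⁻² value found in the simulations reflects averaging over the UVB spectrum and the system geometry, which this one-line estimate ignores.

```python
# Column density at which a uniform slab reaches optical depth unity
# to hydrogen-ionizing photons, tau = N_HI * sigma.
SIGMA_LYMAN_LIMIT = 6.3e-18   # cm^2, H I photoionization cross-section at 912 A

def column_at_unit_tau(sigma_cm2=SIGMA_LYMAN_LIMIT):
    return 1.0 / sigma_cm2

print(f"N_HI(tau=1) ~ {column_at_unit_tau():.2e} cm^-2")   # ~1.6e17 cm^-2 at threshold
```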

  11. Supernova cosmology

    International Nuclear Information System (INIS)

    Leibundgut, B.

    2005-01-01

    Supernovae have developed into a versatile tool for cosmology. Their impact on the cosmological model has been profound and led to the discovery of the accelerated expansion. The current status of the cosmological model as perceived through supernova observations will be presented. Supernovae are currently the only astrophysical objects that can measure the dynamics of the cosmic expansion during the past eight billion years. Ongoing experiments are trying to determine the characteristics of the accelerated expansion and give insight into what might be the physical explanation for the acceleration. (author)

  12. Nonsingular cosmology from evolutionary quantum gravity

    Science.gov (United States)

    Cianfrani, Francesco; Montani, Giovanni; Pittorino, Fabrizio

    2014-11-01

    We provide a cosmological implementation of the evolutionary quantum gravity, describing an isotropic Universe, in the presence of a negative cosmological constant and a massive (preinflationary) scalar field. We demonstrate that the considered Universe has a nonsingular quantum behavior, associated with a primordial bounce, whose ground state has a high occupation number. Furthermore, in such a vacuum state, the super-Hamiltonian eigenvalue is negative, corresponding to a positive emerging dust energy density. The regularization of the model is performed via a polymer quantum approach to the Universe scale factor and the proper classical limit is then recovered, in agreement with a preinflationary state of the Universe. Since the dust energy density is redshifted by the Universe's de Sitter phase and the cosmological constant does not enter the ground state eigenvalue, we get a late-time cosmology, compatible with the present observations, endowed with a turning point in the far future.

  13. Supersymmetry and cosmology

    International Nuclear Information System (INIS)

    Feng, Jonathan L.

    2005-01-01

    Cosmology now provides unambiguous, quantitative evidence for new particle physics. I discuss the implications of cosmology for supersymmetry and vice versa. Topics include: motivations for supersymmetry; supersymmetry breaking; dark energy; freeze out and WIMPs; neutralino dark matter; cosmologically preferred regions of minimal supergravity; direct and indirect detection of neutralinos; the DAMA and HEAT signals; inflation and reheating; gravitino dark matter; Big Bang nucleosynthesis; and the cosmic microwave background. I conclude with speculations about the prospects for a microscopic description of the dark universe, stressing the necessity of diverse experiments on both sides of the particle physics/cosmology interface

  14. The inflationary cosmology

    International Nuclear Information System (INIS)

    Sasaki, Misao

    1983-01-01

    We review the recent status of the inflationary cosmology. After exhibiting the essence of difficulties associated with the horizon, flatness and baryon number problems in the standard big-bang cosmology, we discuss that the inflationary universe scenario is one of the most plausible solutions to these fundamental cosmological problems. Since there are two qualitatively different versions of the inflationary universe scenario, we review each of them separately and discuss merits and demerits of each version. The Hawking radiation in de Sitter space is also reviewed since it may play an essential role in the inflationary cosmology. (author)

  15. Some lessons and thoughts from development of an old-fashioned high-resolution atmospheric general circulation model

    Science.gov (United States)

    Ohfuchi, Wataru; Enomoto, Takeshi; Yoshioka, Mayumi K.; Takaya, Koutarou

    2014-05-01

    Some high-resolution simulations with a conventional atmospheric general circulation model (AGCM) were conducted right after the first Earth Simulator started operating in the spring of 2002. More simulations with various resolutions followed. The AGCM in this study, AFES (Agcm For the Earth Simulator), is a primitive equation spectral transform method model with a cumulus convection parameterization. In this presentation, some findings from comparisons between high- and low-resolution simulations, and some future perspectives of old-fashioned AGCMs, will be discussed. One obvious advantage of increasing resolution is the capability of resolving the fine structures of topography and atmospheric flow. By increasing resolution from T39 (about 320 km horizontal grid interval) to T79 (160 km), to T159 (80 km) and to T319 (40 km), topographic precipitation over Japan becomes increasingly realistic. This feature is necessary for climate and weather studies involving both global and local aspects. In order to resolve submesoscale (about 100 km horizontal scale) atmospheric circulation, about a 10-km grid interval is necessary. Comparing T1279 (10 km) simulations with T319 ones, it is found that, for example, the intensity of heavy rain associated with the Baiu front and the central pressure of typhoons become more realistic. These realistic submesoscale phenomena should have an impact on larger-scale flow through dynamics and thermodynamics. An interesting finding on increasing the horizontal resolution of a conventional AGCM is that some cumulus convection parameterizations, such as the Arakawa-Schubert type scheme, gradually stop producing precipitation, while some others, such as the Emanuel type, do not. With the former, the grid condensation increases with the model resolution to compensate. Which characteristics are more desirable is arguable, but it is an important feature one has to consider when developing a high-resolution conventional AGCM. Many may think that conventional primitive equation

  16. Defect testing of large aperture optics based on high resolution CCD camera

    International Nuclear Information System (INIS)

    Cheng Xiaofeng; Xu Xu; Zhang Lin; He Qun; Yuan Xiaodong; Jiang Xiaodong; Zheng Wanguo

    2009-01-01

    A fast method for inspecting defects in large-aperture optics is introduced. With uniform illumination by an LED source at grazing incidence, the images of defects on the surface of and inside the large-aperture optics could be enlarged due to scattering. The images of defects were acquired with a high resolution CCD camera and a microscope, and the approximate mathematical relation between the viewed dimension and the real dimension of defects was simulated. Thus the approximate real dimension and location of all defects could be calculated from the high resolution pictures. (authors)

  17. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
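
    The variational reconstructions discussed above combine a data-fidelity term for the sub-sampled measurements with a total-variation (TV) prior; the sketch below shows a generic proximal-gradient loop of that kind, using a plain binary sub-sampling mask as the forward operator and scikit-image's TV denoiser as the proximal step. It is a simplified stand-in, not the Bregman-enhanced model or the FP-scanner forward operator from the paper.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_reconstruct(y, mask, tv_weight=0.05, step=1.0, n_iter=100):
    """Recover an image x from sub-sampled data y = mask * x + noise by
    alternating a gradient step on 0.5*||mask*x - y||^2 with TV denoising."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = mask * (mask * x - y)                     # gradient of the data term
        x = x - step * grad
        x = denoise_tv_chambolle(x, weight=tv_weight)    # TV proximal surrogate
    return x

# Toy example: recover a piecewise-constant image from 25% of its pixels.
rng = np.random.default_rng(4)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0
mask = (rng.random(truth.shape) < 0.25).astype(float)
data = mask * truth + 0.01 * rng.normal(size=truth.shape)
recon = tv_reconstruct(data, mask)
print(f"mean absolute reconstruction error: {np.abs(recon - truth).mean():.3f}")
```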

  18. A high resolution ion microscope for cold atoms

    International Nuclear Information System (INIS)

    Stecker, Markus; Schefzyk, Hannah; Fortágh, József; Günther, Andreas

    2017-01-01

    We report on an ion-optical system that serves as a microscope for ultracold ground state and Rydberg atoms. The system is designed to achieve a magnification of up to 1000 and a spatial resolution in the 100 nm range, thereby surpassing many standard imaging techniques for cold atoms. The microscope consists of four electrostatic lenses and a microchannel plate in conjunction with a delay line detector in order to achieve single particle sensitivity with high temporal and spatial resolution. We describe the design process of the microscope including ion-optical simulations of the imaging system and characterize aberrations and the resolution limit. Furthermore, we present the experimental realization of the microscope in a cold atom setup and investigate its performance by patterned ionization with a structure size down to 2.7 μm. The microscope meets the requirements for studying various many-body effects, ranging from correlations in cold quantum gases up to Rydberg molecule formation. (paper)

  19. Cosmological evolution as squeezing: a toy model for group field cosmology

    Science.gov (United States)

    Adjei, Eugene; Gielen, Steffen; Wieland, Wolfgang

    2018-05-01

    We present a simple model of quantum cosmology based on the group field theory (GFT) approach to quantum gravity. The model is formulated on a subspace of the GFT Fock space for the quanta of geometry, with a fixed volume per quantum. In this Hilbert space, cosmological expansion corresponds to the generation of new quanta. Our main insight is that the evolution of a flat Friedmann–Lemaître–Robertson–Walker universe with a massless scalar field can be described on this Hilbert space as squeezing, familiar from quantum optics. As in GFT cosmology, we find that the three-volume satisfies an effective Friedmann equation similar to the one of loop quantum cosmology, connecting the classical contracting and expanding solutions by a quantum bounce. The only free parameter in the model is identified with Newton’s constant. We also comment on the possible topological interpretation of our squeezed states. This paper can serve as an introduction into the main ideas of GFT cosmology without requiring the full GFT formalism; our results can also motivate new developments in GFT and its cosmological application.

  20. Cosmological models without singularities

    International Nuclear Information System (INIS)

    Petry, W.

    1981-01-01

    A previously studied theory of gravitation in flat space-time is applied to homogeneous and isotropic cosmological models. There exist two different classes of models without singularities: (i) ever-expanding models, (ii) oscillating models. The first class contains models with a hot big bang. For these models there exist at the beginning of the universe - in contrast to Einstein's theory - very high but finite densities of matter and radiation, with a big bang of very short duration. After a short time these models pass into the homogeneous and isotropic models of Einstein's theory with spatial curvature equal to zero and cosmological constant Λ ≥ 0. (author)

  1. Ultra-high resolution protein crystallography

    International Nuclear Information System (INIS)

    Takeda, Kazuki; Hirano, Yu; Miki, Kunio

    2010-01-01

    Many protein structures have been determined by X-ray crystallography and deposited with the Protein Data Bank. However, these structures at usual resolution (1.5 Å < d < 3.0 Å) are insufficient in their precision and quantity for elucidating the molecular mechanism of protein functions directly from structural information. Several studies at ultra-high resolution (d < 0.8 Å) have been performed with synchrotron radiation in the last decade. The highest resolution of the protein crystals was achieved at 0.54 Å resolution for a small protein, crambin. In such high resolution crystals, almost all of the hydrogen atoms of proteins and some hydrogen atoms of bound water molecules are experimentally observed. In addition, outer-shell electrons of proteins can be analyzed by the multipole refinement procedure. However, the influence of X-rays should be precisely estimated in order to derive meaningful information from the crystallographic results. In this review, we summarize refinement procedures, current status and perspectives for ultra-high resolution protein crystallography. (author)

  2. Overview of Proposal on High Resolution Climate Model Simulations of Recent Hurricane and Typhoon Activity: The Impact of SSTs and the Madden Julian Oscillation

    Science.gov (United States)

    Schubert, Siegfried; Kang, In-Sik; Reale, Oreste

    2009-01-01

    This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on inter-annual and longer time scales, the modulation of that activity by the Madden Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser resolution runs to help assess resolution dependence, and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.

  3. Playing With Conflict: Teaching Conflict Resolution through Simulations and Games

    Science.gov (United States)

    Powers, Richard B.; Kirkpatrick, Kat

    2013-01-01

    Playing With Conflict is a weekend course for graduate students in Portland State University's Conflict Resolution program and undergraduates in all majors. Students participate in simulations, games, and experiential exercises to learn and practice conflict resolution skills. Graduate students create a guided role-play of a conflict. In addition…

  4. Cosmological N-body simulations with a tree code - Fluctuations in the linear and nonlinear regimes

    International Nuclear Information System (INIS)

    Suginohara, Tatsushi; Suto, Yasushi; Bouchet, F.R.; Hernquist, L.

    1991-01-01

    The evolution of gravitational systems is studied numerically in a cosmological context using a hierarchical tree algorithm with fully periodic boundary conditions. The simulations employ 262,144 particles, which are initially distributed according to scale-free power spectra. The subsequent evolution is followed in both flat and open universes. With this large number of particles, the discretized system can accurately model the linear phase. It is shown that the dynamics in the nonlinear regime depends on both the spectral index n and the density parameter Ω. In Ω = 1 universes, the evolution of the two-point correlation function ξ agrees well with similarity solutions for ξ greater than about 100, but its slope is steeper in open models with the same n. 28 refs
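
    A minimal way to measure the two-point correlation function ξ(r) from such a particle snapshot is the natural estimator DD/RR − 1, counting pairs in the data and in a random catalogue of the same size; the brute-force sketch below illustrates the idea. Periodic-box corrections and tree acceleration, which the simulations above would need, are omitted, and the toy catalogues are assumptions for illustration.

```python
import numpy as np

def xi_natural(data, randoms, r_edges):
    """Two-point correlation function via the natural estimator DD/RR - 1."""
    def pair_counts(points):
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        d = d[np.triu_indices(len(points), k=1)]          # unique pairs only
        counts, _ = np.histogram(d, bins=r_edges)
        return counts.astype(float)
    dd = pair_counts(data)
    rr = pair_counts(randoms)
    return dd / np.maximum(rr, 1.0) - 1.0

# Toy example: clustered points versus a uniform random catalogue in a unit box.
rng = np.random.default_rng(5)
centres = rng.random((20, 3))
data = (centres[rng.integers(0, 20, 1000)] + 0.02 * rng.normal(size=(1000, 3))) % 1.0
randoms = rng.random((1000, 3))
print(np.round(xi_natural(data, randoms, np.linspace(0.01, 0.2, 6)), 2))
```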

  5. Resolution of cosmological singularity and a plausible mechanism of the big bang

    International Nuclear Information System (INIS)

    Choudhury, D.C.

    2002-01-01

    The initial cosmological singularity in the framework of the general theory of relativity is resolved by introducing the effect of the uncertainty principle of quantum theory without violating conventional laws of physics. A plausible account of the mechanism of the big bang, analogous to that of a nuclear explosion, is given and the currently accepted Planck temperature of ≈10^32 K at the beginning of the big bang is predicted

  6. Interactive desktop analysis of high resolution simulations: application to turbulent plume dynamics and current sheet formation

    International Nuclear Information System (INIS)

    Clyne, John; Mininni, Pablo; Norton, Alan; Rast, Mark

    2007-01-01

    The ever increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, has resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504^2 × 2048 points for the first and 1536^3 points for the second, to overwhelm the interactive capabilities of typically available analysis resources

  7. Post-inflationary brane cosmology

    International Nuclear Information System (INIS)

    Mazumdar, Anupam

    2001-01-01

    Brane cosmology has posed new challenges to the usual Big Bang cosmology. In this paper we present a brief account of the thermal history of post-inflationary brane cosmology. We have realized that it is not obvious that post-inflationary brane cosmology would always deviate from the standard Big Bang cosmology. However, if it deviates, some stringent conditions on the brane tension have to be satisfied. In this regard we study various implications for gravitino production and its abundance. We discuss the Affleck-Dine mechanism for baryogenesis and make some comments on the moduli and dilaton problems in this context

  8. A whirling plane of satellite galaxies around Centaurus A challenges cold dark matter cosmology.

    Science.gov (United States)

    Müller, Oliver; Pawlowski, Marcel S; Jerjen, Helmut; Lelli, Federico

    2018-02-02

    The Milky Way and Andromeda galaxies are each surrounded by a thin plane of satellite dwarf galaxies that may be corotating. Cosmological simulations predict that most satellite galaxy systems are close to isotropic with random motions, so those two well-studied systems are often interpreted as rare statistical outliers. We test this assumption using the kinematics of satellite galaxies around the Centaurus A galaxy. Our statistical analysis reveals evidence for corotation in a narrow plane: Of the 16 Centaurus A satellites with kinematic data, 14 follow a coherent velocity pattern aligned with the long axis of their spatial distribution. Such coherent satellite kinematics are rare in standard cosmological simulations, challenging the cold dark matter cosmological paradigm. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  9. Cosmology

    CERN Document Server

    García-Bellido, J

    2015-01-01

    In these lectures I review the present status of the so-called Standard Cosmological Model, based on the hot Big Bang Theory and the Inflationary Paradigm. I will make special emphasis on the recent developments in observational cosmology, mainly the acceleration of the universe, the precise measurements of the microwave background anisotropies, and the formation of structure like galaxies and clusters of galaxies from tiny primordial fluctuations generated during inflation.

  10. The effective field theory of nonsingular cosmology: II

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Yong; Li, Hai-Guang [University of Chinese Academy of Sciences, School of Physics, Beijing (China); Qiu, Taotao [Central China Normal University, Institute of Astrophysics, Wuhan (China); Piao, Yun-Song [University of Chinese Academy of Sciences, School of Physics, Beijing (China); Chinese Academy of Sciences, Institute of Theoretical Physics, P.O. Box 2735, Beijing (China)

    2017-06-15

    Based on the effective field theory (EFT) of cosmological perturbations, we explicitly clarify the pathology in nonsingular cubic Galileon models and show how to cure it in the EFT, with new insights into this issue. With the minimal set of EFT operators capable of avoiding instabilities in nonsingular cosmologies, we construct a nonsingular model dubbed the Genesis-inflation model, in which a slowly expanding phase (namely, Genesis) with increasing energy density is followed by slow-roll inflation. The spectrum of the primordial perturbations may be simulated numerically; it exhibits a large-scale cutoff, of which the large-scale anomalies in the CMB might be a hint. (orig.)

  11. High-resolution numerical simulation of summer wind field comparing WRF boundary-layer parametrizations over complex Arctic topography: case study from central Spitsbergen

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Hošek, Jiří

    2017-01-01

    Vol. 26, No. 4 (2017), pp. 391-408 ISSN 0941-2948 Institutional support: RVO:68378289 Keywords: surface wind field * model evaluation * topographic effect * circulation pattern * Svalbard Subject RIV: DG - Atmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 1.989, year: 2016 http://www.schweizerbart.de/papers/metz/detail/prepub/87659/High_resolution_numerical_simulation_of_summer_wind_field_comparing_WRF_boundary_layer_parametrizations_over_complex_Arctic_topography_case_study_from_central_Spitsbergen

  12. Einstein and cosmology

    International Nuclear Information System (INIS)

    Gekman, O.

    1982-01-01

    A brief essay on the development of the main ideas of relativistic cosmology is presented. Einstein's cosmological work on the Universe - ''Cosmological considerations in connection with the general relativity theory'' - provided the basis for all further treatments in this field. In 1922 A. Friedman's work appeared, in which the first expanding Universe model was proposed as a solution of the Einstein field equations. The model was spherically closed, but its curvature radius was a function of time. Around 1955 the search for anisotropic homogeneous solutions of the Einstein field equations began. It turned out that isotropic cosmological models are in general unstable: most of them become anisotropic under even an insignificant breaking of isotropy. The discovery of the isotropic cosmic background radiation in 1965, along with the Hubble law of the Universe's expansion, served as a direct confirmation of cosmology based on the Einstein theory

  13. Study of drift tube resolution using numerical simulations

    International Nuclear Information System (INIS)

    Lundin, M.C.

    1990-01-01

    The results of a simulation of straw tube detector response are presented. These gas ionization detectors, and the electronics which must presumably go along with them, are characterized in a simple but meaningful manner. The physical processes which comprise the response of the individual straw tubes are broken down and examined in detail. Different parameters of the simulation are varied and the resulting predictions of drift tube spatial resolution are shown. In addition, small aspects of the predictions are compared to recent laboratory results, which can be seen as a measure of the simulation's usefulness. 10 refs., 8 figs
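
    A toy version of such a straw-tube response simulation is sketched below: a track at known distance from the wire produces ionization clusters along its path, each drifting electron is smeared by diffusion, and the spatial resolution is read off as the spread of the reconstructed drift distance. The cluster density, diffusion coefficient and other numbers are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def simulate_resolution(track_distance_mm, n_events=2000,
                        clusters_per_mm=3.0, diffusion_um_sqrtcm=200.0):
    """Return the RMS spatial resolution (mm) of a toy straw tube.

    The measured drift distance per event is taken from the closest-arriving
    ionization cluster, with Gaussian diffusion smearing of its drift path.
    """
    rng = np.random.default_rng(6)
    half_length = 2.0                                     # mm of track sampled
    reco = np.empty(n_events)
    for i in range(n_events):
        n_cl = rng.poisson(clusters_per_mm * 2 * half_length)
        along = rng.uniform(-half_length, half_length, max(n_cl, 1))
        dist = np.hypot(track_distance_mm, along)         # cluster-to-wire distance
        sigma = diffusion_um_sqrtcm * np.sqrt(dist / 10.0) * 1e-3   # mm
        measured = dist + rng.normal(0.0, sigma)
        reco[i] = measured.min()                          # first electron defines the time
    return reco.std()

print(f"toy resolution at 3 mm from the wire: {simulate_resolution(3.0):.3f} mm")
```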

  14. Cosmology and time

    Directory of Open Access Journals (Sweden)

    Balbi Amedeo

    2013-09-01

    Time has always played a crucial role in cosmology. I review some of the aspects of the present cosmological model which are more directly related to time, such as: the definition of a cosmic time; the existence of typical timescales and epochs in an expanding universe; the problem of the initial singularity and the origin of time; the cosmological arrow of time.

  15. High-resolution simulations of the final assembly of Earth-like planets. 2. Water delivery and planetary habitability.

    Science.gov (United States)

    Raymond, Sean N; Quinn, Thomas; Lunine, Jonathan I

    2007-02-01

    The water content and habitability of terrestrial planets are determined during their final assembly, from perhaps 100 1,000-km "planetary embryos" and a swarm of billions of 1-10-km "planetesimals". During this process, we assume that water-rich material is accreted by terrestrial planets via impacts of water-rich bodies that originate in the outer asteroid region. We present an analysis of water delivery and planetary habitability in five high-resolution simulations containing about 10 times more particles than in previous simulations. These simulations formed 15 terrestrial planets from 0.4 to 2.6 Earth masses, including five planets in the habitable zone. Every planet from each simulation accreted at least the Earth's current water budget; most accreted several times that amount (assuming no impact depletion). Each planet accreted at least five water-rich embryos and planetesimals from beyond 2.5 astronomical units; most accreted 10-20 water-rich bodies. We present a new model for water delivery to terrestrial planets in dynamically calm systems, with low-eccentricity or low-mass giant planets - such systems may be very common in the Galaxy. We suggest that water is accreted in comparable amounts from a few planetary embryos in a "hit or miss" way and from millions of planetesimals in a statistically robust process. Variations in water content are likely to be caused by fluctuations in the number of water-rich embryos accreted, as well as from systematic effects, such as planetary mass and location, and giant planet properties.

  16. Coupled atmosphere ocean climate model simulations in the Mediterranean region: effect of a high-resolution marine model on cyclones and precipitation

    Directory of Open Access Journals (Sweden)

    A. Sanna

    2013-06-01

    In this study we investigate the importance of an eddy-permitting Mediterranean Sea circulation model for the simulation of atmospheric cyclones and precipitation in a climate model. This is done by analyzing the results of two fully coupled GCM (general circulation model) simulations, differing only in the presence/absence of an interactive marine module at very high resolution (~1/16°) for the simulation of the 3-D circulation of the Mediterranean Sea. Cyclones are tracked by applying an objective Lagrangian algorithm to the MSLP (mean sea level pressure) field. On an annual basis, we find a statistically significant difference in vast cyclogenesis regions (northern Adriatic, Sirte Gulf, Aegean Sea and southern Turkey) and in lifetime, giving evidence of the effect of both land–sea contrast and surface heat flux intensity and spatial distribution on cyclone characteristics. Moreover, annual mean convective precipitation changes significantly in the two model climatologies as a consequence of differences in both air–sea interaction strength and frequency of cyclogenesis in the two analyzed simulations.

  17. Quantum cosmology - science of Genesis

    International Nuclear Information System (INIS)

    Padmanabhan, Thanu

    1987-01-01

    Quantum cosmology, the marriage between the theories of the microscopic and macroscopic Universe, is examined in an attempt to explain the birth of the Universe in the 'big bang'. A quantum cosmological model of the Universe does not exist, but a rough approximation, or 'poor man's' version of quantum cosmology has been developed. The idea is to combine the theory of quantum mechanics with the classical cosmological solutions to obtain a quantum mechanical version of cosmology. Such a model of quantum cosmology is described - here the quantum universe behaves like a hydrogen atom with the Planck length replacing the Bohr radius. Properties of quantum cosmologies and the significance of the Planck length are both discussed. (UK)

  18. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10^2 m and minute scales in a fast and computer-efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  19. Cluster cosmology with next-generation surveys.

    Science.gov (United States)

    Ascaso, B.

    2017-03-01

    The advent of next-generation surveys will provide a large number of cluster detections that will serve as the basis for constraining cosmological parameters using cluster counts. The two main observational ingredients needed are the cluster selection function and the calibration of the mass-observable relation. In this talk, we present the methodology designed to obtain robust predictions of both ingredients based on realistic cosmological simulations mimicking the following next-generation surveys: J-PAS, LSST and Euclid. We display recent results on the selection functions for these surveys together with others coming from other next-generation surveys such as eROSITA, ACTpol and SPTpol. We notice that the optical and IR surveys will reach the lowest masses between 0.3simulations, obtaining similar scatter to other observational results limited to higher redshifts. Finally, we describe the technique that we are developing to perform a Fisher Matrix analysis to provide cosmological constraints for the considered next-generation surveys and introduce very preliminary results.
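
    A bare-bones version of the Fisher-matrix forecast mentioned above, for Poisson-distributed cluster counts in redshift bins, is sketched below; the toy count model N(z | θ) and the parameter names are placeholders for whichever survey selection function and cosmology enter the real analysis.

```python
import numpy as np

def fisher_cluster_counts(counts_model, theta0, bin_edges, eps=1e-4):
    """Fisher matrix F_ij = sum_b (dN_b/dtheta_i)(dN_b/dtheta_j) / N_b
    for Poisson-distributed cluster counts N_b in redshift bins."""
    theta0 = np.asarray(theta0, dtype=float)
    n0 = counts_model(theta0, bin_edges)
    derivs = []
    for i in range(len(theta0)):
        dtheta = np.zeros_like(theta0)
        dtheta[i] = eps
        derivs.append((counts_model(theta0 + dtheta, bin_edges)
                       - counts_model(theta0 - dtheta, bin_edges)) / (2 * eps))
    derivs = np.array(derivs)
    return derivs @ np.diag(1.0 / n0) @ derivs.T

# Toy count model: an amplitude and a "tilt" of a declining dN/dz.
def toy_counts(theta, bin_edges):
    amp, tilt = theta
    z = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return amp * 1e4 * np.exp(-tilt * z) * np.diff(bin_edges)

F = fisher_cluster_counts(toy_counts, theta0=[1.0, 2.0],
                          bin_edges=np.linspace(0.1, 1.5, 15))
print("marginalized 1-sigma errors:", np.sqrt(np.diag(np.linalg.inv(F))))
```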

  20. Stability analysis in tachyonic potential chameleon cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Farajollahi, H.; Salehi, A.; Tayebi, F.; Ravanpak, A., E-mail: hosseinf@guilan.ac.ir, E-mail: a.salehi@guilan.ac.ir, E-mail: ftayebi@guilan.ac.ir, E-mail: aravanpak@guilan.ac.ir [Department of Physics, University of Guilan, Rasht (Iran, Islamic Republic of)

    2011-05-01

    We study general properties of attractors for the tachyonic potential chameleon scalar-field model, which possesses cosmological scaling solutions. An analytic formulation is given to obtain the fixed points, with a discussion of their stability. The model predicts a dynamical equation of state parameter with phantom crossing behavior for an accelerating universe. We constrain the parameters of the model by best fitting with recent data-sets from supernovae and simulated data points for the redshift drift experiment generated by Monte Carlo simulations.
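
    The fixed-point and stability analysis described above boils down to finding the zeros of an autonomous system x' = f(x) and inspecting the eigenvalues of its Jacobian there; the sketch below does this for a generic two-variable toy system standing in for the model's actual dimensionless equations, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import fsolve

def flow(state):
    """Toy autonomous system x' = f(x), a stand-in for the model's equations."""
    x, y = state
    return np.array([x * (x**2 + 0.5 * y**2 - 1.0),
                     y * (x**2 - 1.0)])

def jacobian(f, state, eps=1e-6):
    """Numerical Jacobian of f at `state` by central differences."""
    n = len(state)
    J = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        J[:, j] = (f(state + d) - f(state - d)) / (2 * eps)
    return J

# Locate fixed points from a few starting guesses, then classify them by the
# sign of the real parts of the Jacobian eigenvalues.
for guess in [(0.2, 0.2), (0.8, 0.1), (-0.8, 0.1)]:
    fp = fsolve(flow, guess)
    eig = np.linalg.eigvals(jacobian(flow, fp))
    label = "stable" if np.all(eig.real < 0) else "unstable or saddle"
    print(f"fixed point {np.round(fp, 3)}: eigenvalues {np.round(eig, 3)} -> {label}")
```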

  1. Stability analysis in tachyonic potential chameleon cosmology

    International Nuclear Information System (INIS)

    Farajollahi, H.; Salehi, A.; Tayebi, F.; Ravanpak, A.

    2011-01-01

    We study general properties of attractors for the tachyonic potential chameleon scalar-field model, which possesses cosmological scaling solutions. An analytic formulation is given to obtain the fixed points, with a discussion of their stability. The model predicts a dynamical equation of state parameter with phantom crossing behavior for an accelerating universe. We constrain the parameters of the model by best fitting with recent data-sets from supernovae and simulated data points for the redshift drift experiment generated by Monte Carlo simulations

  2. BMS in cosmology

    International Nuclear Information System (INIS)

    Kehagias, A.; Riotto, A.

    2016-01-01

    Symmetries play an interesting role in cosmology. They are useful in characterizing the cosmological perturbations generated during inflation and lead to consistency relations involving the soft limit of the statistical correlators of large-scale structure dark matter and galaxy overdensities. On the other hand, in observational cosmology the carriers of the information about these large-scale statistical distributions are light rays traveling on null geodesics. Motivated by this simple consideration, we study the structure of null infinity and the associated BMS symmetry in a cosmological setting. For decelerating Friedmann-Robertson-Walker backgrounds, for which future null infinity exists, we find the BMS transformations which leave the asymptotic metric invariant to leading order. Contrary to the asymptotically flat case, the BMS transformations in cosmology generate Goldstone modes corresponding to scalar, vector and tensor degrees of freedom which may exist at null infinity and perturb the asymptotic data. Therefore, BMS transformations generate physically inequivalent vacua as they populate the universe at null infinity with these physical degrees of freedom. We also discuss the gravitational memory effect when cosmological expansion is taken into account. In this case, there are extra contributions to the gravitational memory due to the tail of the retarded Green functions, which are supported not only on the light-cone but also in its interior. The gravitational memory effect can be understood also from an asymptotic point of view as a transition among cosmological BMS-related vacua.

  3. BMS in cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Kehagias, A. [Physics Division, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Riotto, A. [Department of Theoretical Physics,24 quai E. Ansermet, CH-1211 Geneva 4 (Switzerland); Center for Astroparticle Physics (CAP),24 quai E. Ansermet, CH-1211 Geneva 4 (Switzerland)

    2016-05-25

    Symmetries play an interesting role in cosmology. They are useful in characterizing the cosmological perturbations generated during inflation and lead to consistency relations involving the soft limit of the statistical correlators of large-scale structure dark matter and galaxy overdensities. On the other hand, in observational cosmology the carriers of the information about these large-scale statistical distributions are light rays traveling on null geodesics. Motivated by this simple consideration, we study the structure of null infinity and the associated BMS symmetry in a cosmological setting. For decelerating Friedmann-Robertson-Walker backgrounds, for which future null infinity exists, we find the BMS transformations which leave the asymptotic metric invariant to leading order. Contrary to the asymptotically flat case, the BMS transformations in cosmology generate Goldstone modes corresponding to scalar, vector and tensor degrees of freedom which may exist at null infinity and perturb the asymptotic data. Therefore, BMS transformations generate physically inequivalent vacua as they populate the universe at null infinity with these physical degrees of freedom. We also discuss the gravitational memory effect when cosmological expansion is taken into account. In this case, there are extra contributions to the gravitational memory due to the tail of the retarded Green functions, which are supported not only on the light-cone but also in its interior. The gravitational memory effect can be understood also from an asymptotic point of view as a transition among cosmological BMS-related vacua.

  4. High resolution solar observations

    International Nuclear Information System (INIS)

    Title, A.

    1985-01-01

    Currently there is a world-wide effort to develop optical technology required for large diffraction limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter class and above diffraction limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an earth based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself

  5. High speed, High resolution terahertz spectrometers

    International Nuclear Information System (INIS)

    Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook

    2008-01-01

    A variety of sources and methods have been developed for terahertz spectroscopy over almost two decades. Terahertz time domain spectroscopy (THz TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz TDS has been demonstrated, featuring rapid data acquisition and a high spectral resolution. Also, terahertz frequency comb spectroscopy (TFCS) possesses attractive features for high precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high speed, high resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies, without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured using the cross-correlation method to be 270 fs. AOS THz TDS is presented in Fig. 1, which shows a time domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal with a 100 ps time window. The spectrum obtained by the fast Fourier transformation (FFT) of the time domain waveform has a frequency resolution of 100 MHz. The dependence of the signal-to-noise ratio (SNR) on the measurement time is also investigated.
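
    As a minimal numerical sketch of how the quoted 10 ns time window and 100 MHz frequency resolution fit together in asynchronous optical sampling (Python; the ~100 MHz repetition rate and the 100 Hz offset are assumed illustrative values, not figures from the presentation):

      f_rep = 100e6    # repetition rate of laser 1 (assumed), Hz
      delta_f = 100.0  # repetition-rate offset of laser 2 (assumed), Hz

      time_window = 1.0 / f_rep                           # one inter-pulse period is scanned
      freq_resolution = 1.0 / time_window                 # FFT bin spacing of that window
      delay_step = delta_f / (f_rep * (f_rep + delta_f))  # delay advance per pulse pair
      trace_rate = delta_f                                # full traces acquired per second

      print(f"time window      : {time_window * 1e9:.1f} ns")
      print(f"freq. resolution : {freq_resolution / 1e6:.0f} MHz")
      print(f"delay step       : {delay_step * 1e15:.1f} fs per pulse pair")
      print(f"trace rate       : {trace_rate:.0f} traces/s")

    With these assumed numbers, one 10 ns inter-pulse period is swept in 1/delta_f = 10 ms, and the Fourier transform of that window naturally yields the 100 MHz line spacing quoted above; the 270 fs time resolution quoted above is a measured quantity, limited by factors such as timing jitter rather than by this delay step.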

  6. Cosmological implication of wide field Sunyaev-Zel'dovich galaxy clusters survey: exploration by simulation

    International Nuclear Information System (INIS)

    Juin, Jean-Baptiste

    2005-01-01

    The goal of my PhD research is to prepare the data analysis of the near-future wide-field observations of galaxy clusters detected through the Sunyaev-Zel'dovich effect. I set up a complete chain of original tools to carry out this study. These tools allow me to highlight critical points of the selection effects that have to be taken into account in future analyses. The analysis chain is composed of: a simulation of the observed millimeter sky, state-of-the-art algorithms for extracting SZ galaxy clusters from observed maps, a statistical model of the selection effects of the whole detection chain and, finally, tools to constrain the cosmological parameters from the catalog of detected SZ sources. I focus on multi-channel experiments equipped with large bolometer cameras, and use these tools for a prospective study of the Olimpo experiment. (author) [fr]
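
    As a hedged illustration of the kind of selection-effect modelling described here (all names and numbers below are hypothetical placeholders, not the author's pipeline), a completeness curve of the detection chain can be folded into predicted cluster counts before they are compared with a detected catalog:

      import numpy as np

      def completeness(flux_mjy, flux_50=5.0, width=1.5):
          """Assumed smooth completeness of the detection chain, logistic in SZ flux."""
          return 1.0 / (1.0 + np.exp(-(flux_mjy - flux_50) / width))

      def observed_counts(flux_mjy, true_counts):
          """True number counts per flux bin times the completeness of the chain."""
          return true_counts * completeness(flux_mjy)

      # Hypothetical true counts following a steep power law in SZ flux.
      flux = np.linspace(1.0, 20.0, 20)   # mJy
      true_n = 1e3 * flux ** -2.5         # clusters per bin (illustrative)
      print(np.round(observed_counts(flux, true_n), 1))

    Constraints on cosmological parameters then come from comparing such completeness-weighted predictions with the detected SZ source catalog, which is why the selection function has to be characterized carefully.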

  7. Philosophical Roots of Cosmology

    Science.gov (United States)

    Ivanovic, M.

    2008-10-01

    We shall consider the philosophical roots of cosmology in earlier Greek philosophy. Our goal is to answer the question: are the earlier Greek theories of a purely philosophical-mythological character, as philosophers have often claimed, or do they have a scientific character? On the basis of methodological criteria, we shall contend that the latter is the case. To address the contemporary relation between philosophy and cosmology, we shall consider a further question: is contemporary cosmology completely independent of philosophical conjectures? The answer demands a methodological consideration of the scientific status of contemporary cosmology. We also consider some aspects of the relation between contemporary philosophy and cosmology.

  8. Observational cosmology

    NARCIS (Netherlands)

    Sanders, RH; Papantonopoulos, E

    2005-01-01

    I discuss the classical cosmological tests, i.e., angular size-redshift, flux-redshift, and galaxy number counts, in the light of the cosmology prescribed by the interpretation of the CMB anisotropies. The discussion is somewhat of a primer for physicists, with emphasis upon the possible systematic
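
    As a brief, hedged sketch of the first of these classical tests (Python; the cosmological parameters are assumed fiducial values, not those of the chapter), the apparent angular size of a fixed physical length follows from the angular-diameter distance:

      import numpy as np
      from scipy.integrate import quad

      C_KM_S = 299792.458

      def angular_diameter_distance_mpc(z, h0=70.0, omega_m=0.3, omega_l=0.7):
          """d_A = (c/H0) * int_0^z dz'/E(z') / (1+z) for a flat FRW model."""
          e_of_z = lambda zp: np.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
          comoving, _ = quad(lambda zp: 1.0 / e_of_z(zp), 0.0, z)
          return (C_KM_S / h0) * comoving / (1.0 + z)

      # Apparent angular size (arcsec) of a 10 kpc object versus redshift.
      for z in (0.5, 1.0, 1.6, 3.0):
          theta = 0.010 / angular_diameter_distance_mpc(z) * 206265.0
          print(f"z = {z:3.1f}: {theta:.2f} arcsec")

    The minimum of the angular size around z of roughly 1.6 for these assumed parameters, and its slow rise beyond, is the classical signature probed by the angular size-redshift test.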

  9. Cosmological constant and advanced gravitational wave detectors

    International Nuclear Information System (INIS)

    Wang, Y.; Turner, E.L.

    1997-01-01

    Interferometric gravitational wave detectors could measure the frequency sweep of a binary inspiral (characterized by its chirp mass) to high accuracy. The observed chirp mass is the intrinsic chirp mass of the binary source multiplied by (1+z), where z is the redshift of the source. Assuming a nonzero cosmological constant, we compute the expected redshift distribution of observed events for an advanced LIGO detector. We find that the redshift distribution has a robust and sizable dependence on the cosmological constant; the data from advanced LIGO detectors could provide an independent measurement of the cosmological constant. copyright 1997 The American Physical Society
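
    A minimal sketch (Python, with an assumed example system) of the mass-redshift degeneracy the abstract relies on: the detector measures only the redshifted chirp mass, so turning the observed distribution of events into intrinsic source properties requires a cosmological model.

      def chirp_mass(m1_msun: float, m2_msun: float) -> float:
          """Intrinsic chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), in solar masses."""
          return (m1_msun * m2_msun) ** 0.6 / (m1_msun + m2_msun) ** 0.2

      def observed_chirp_mass(m1_msun: float, m2_msun: float, z: float) -> float:
          """The frequency sweep encodes the redshifted chirp mass (1 + z) * M_c."""
          return (1.0 + z) * chirp_mass(m1_msun, m2_msun)

      # Assumed example: a 1.4 + 1.4 solar-mass neutron-star binary at a few redshifts.
      for z in (0.1, 0.5, 1.0):
          print(f"z = {z}: observed M_c = {observed_chirp_mass(1.4, 1.4, z):.2f} Msun")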

  10. Second-order Cosmological Perturbations Engendered by Point-like Masses

    Energy Technology Data Exchange (ETDEWEB)

    Brilenkov, Ruslan [Institute for Astro- and Particle Physics, University of Innsbruck, Technikerstrasse 25/8, A‐6020 Innsbruck (Austria); Eingorn, Maxim, E-mail: ruslan.brilenkov@gmail.com, E-mail: maxim.eingorn@gmail.com [North Carolina Central University, CREST and NASA Research Centers, 1801 Fayetteville St., Durham, NC 27707 (United States)

    2017-08-20

    In the ΛCDM framework, presenting nonrelativistic matter inhomogeneities as discrete massive particles, we develop the second-order cosmological perturbation theory. Our approach relies on the weak gravitational field limit. The derived equations for the second-order scalar, vector, and tensor metric corrections are suitable at arbitrary distances, including regions with nonlinear contrasts of the matter density. We thoroughly verify fulfillment of all Einstein equations, as well as self-consistency of the order assignments. In addition, we recover consistent results in the Minkowski background limit. Feasible investigations of cosmological back-reaction manifestations by means of relativistic simulations are also outlined.
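
    For orientation, and without reproducing the paper's second-order expressions, the first-order scalar part of the perturbed FLRW metric in the weak gravitational field limit can be written (in one common sign convention) as

      ds^2 = a^2(\eta)\left[(1 + 2\Phi)\,d\eta^2 - (1 - 2\Psi)\,\delta_{ij}\,dx^i dx^j\right],

    with Φ and Ψ the scalar potentials; vector modes add transverse off-diagonal dη dx^i terms, and tensor modes add a transverse-traceless contribution h_ij to the spatial part. The corrections discussed in the abstract are the analogous scalar, vector and tensor pieces appearing at second order.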

  11. Evaluation of a High-Resolution Regional Reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.; Wahl, S.; Keller, J. D.; Bollmeyer, C.

    2014-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA) produce and verify these data sets to provide time series as long as possible combined with high data quality. Due to a spatial resolution down to 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses, based on a limited-area model along with a data assimilation scheme, is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates its efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km), and comprises the assimilation of observational data using the existing nudging scheme of COSMO, complemented by a special soil moisture analysis, with boundary conditions provided by ERA-Interim data. The reanalysis data set covers 6 years (2007-2012) and is currently being extended to 16 years. Extensive evaluation of the reanalysis is performed using independent observations, with special emphasis on precipitation and high-impact weather situations, indicating a better representation of small-scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.
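
    A hedged sketch (Python; the station values are made up and the function names are not HErZ tooling) of the simplest kind of point verification against independent observations that such an evaluation involves, here bias and RMSE of daily precipitation at station locations:

      import numpy as np

      def bias_and_rmse(reanalysis: np.ndarray, observed: np.ndarray):
          """Mean error (bias) and root-mean-square error of a reanalysis against observations."""
          error = reanalysis - observed
          return float(error.mean()), float(np.sqrt((error ** 2).mean()))

      # Hypothetical 24 h precipitation sums (mm) at a handful of stations.
      reanalysis_mm = np.array([12.1, 0.4, 3.3, 25.0, 7.8])
      stations_mm = np.array([10.5, 0.0, 4.1, 28.2, 6.9])
      bias, rmse = bias_and_rmse(reanalysis_mm, stations_mm)
      print(f"bias = {bias:+.2f} mm, RMSE = {rmse:.2f} mm")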

  12. High spatial resolution and high brightness ion beam probe for in-situ elemental and isotopic analysis

    Science.gov (United States)

    Long, Tao; Clement, Stephen W. J.; Bao, Zemin; Wang, Peizhi; Tian, Di; Liu, Dunyi

    2018-03-01

    A high spatial resolution and high brightness ion beam from a cold cathode duoplasmatron source and its primary ion optics are presented and applied to the in-situ analysis of micro-scale geological materials with complex structural and chemical features. The magnetic field in the source, as well as the influence of the relative permeability of magnetic materials on source performance, was simulated using COMSOL to confirm the magnetic field strength of the source. Based on SIMION simulations, a high brightness and high spatial resolution negative ion optical system has been developed to achieve the Critical (Gaussian) illumination mode. The ion source and primary column are installed on a new Time-of-Flight secondary ion mass spectrometer for the analysis of geological samples. The diameter of the ion beam was measured by the knife-edge method and with a scanning electron microscope (SEM). Results show that an O2- beam of ca. 5 μm diameter with a beam intensity of ∼5 nA and an O- beam of ca. 5 μm diameter with a beam intensity of ∼50 nA were obtained. This design will open new possibilities for in-situ elemental and isotopic analysis in geological studies.
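
    The knife-edge measurement mentioned above admits a compact sketch (Python; the data points and the 1/e^2 beam-waist parametrization are illustrative assumptions): the transmitted current versus edge position follows an error-function profile whose fitted width gives the beam diameter.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      def knife_edge_model(x_um, i0, x0, w_um):
          """Transmitted current as a knife edge at x cuts a Gaussian beam of 1/e^2 radius w."""
          return 0.5 * i0 * erfc(np.sqrt(2.0) * (x_um - x0) / w_um)

      # Hypothetical edge positions (um) and measured currents (nA).
      x = np.linspace(-6.0, 6.0, 13)
      i_meas = knife_edge_model(x, 5.0, 0.3, 2.5) + np.random.default_rng(0).normal(0.0, 0.05, x.size)

      (i0, x0, w_um), _ = curve_fit(knife_edge_model, x, i_meas, p0=[5.0, 0.0, 2.0])
      print(f"fitted 1/e^2 beam diameter: {2.0 * w_um:.2f} um")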

  13. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    The aim is to accelerate spectroscopic imaging by using sparse sampling of (k,t)-space and subspace (or low-rank) modeling, enabling high-resolution metabolic imaging with a good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as the partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace, and it enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with a good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data, with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments, and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
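
    A minimal sketch (Python, synthetic data) of the partial-separability idea behind the method: a spatiospectral signal that is a sum of a few spatial-spectral products has a low-rank Casorati matrix, so a truncated SVD captures it with far fewer degrees of freedom than the full (k,t)-space grid.

      import numpy as np

      rng = np.random.default_rng(1)
      n_voxels, n_freq, order = 256, 128, 4   # illustrative sizes and model order

      # Synthetic partially separable signal: C[x, f] = sum_l u_l(x) * v_l(f), plus noise.
      spatial = rng.standard_normal((n_voxels, order))
      spectral = rng.standard_normal((order, n_freq))
      casorati = spatial @ spectral + 0.01 * rng.standard_normal((n_voxels, n_freq))

      # The rank-`order` subspace found by a truncated SVD captures essentially all of the signal.
      u, s, vt = np.linalg.svd(casorati, full_matrices=False)
      low_rank = (u[:, :order] * s[:order]) @ vt[:order]
      residual = np.linalg.norm(casorati - low_rank) / np.linalg.norm(casorati)
      print(f"relative residual of the rank-{order} approximation: {residual:.4f}")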

  14. In-beam measurement of the position resolution of a highly segmented coaxial germanium detector

    International Nuclear Information System (INIS)

    Descovich, M.; Lee, I.Y.; Fallon, P.; Cromaz, M.; Macchiavelli, A.O.; Radford, D.C.; Vetter, K.; Clark, R.M.; Deleplanque, M.A.; Stephens, F.S.; Ward, D.

    2005-01-01

    The position resolution of a highly segmented coaxial germanium detector was determined by analyzing the 2055 keV γ-ray transition of 90Zr excited in a fusion-evaporation reaction. The high velocity of the 90Zr nuclei imparted large Doppler shifts. Digital analysis of the detector signals recovered the energy and position of individual γ-ray interactions. The location of the first interaction in the crystal was used to correct the Doppler energy shift. Comparison of the measured energy resolution with simulations implied a position resolution (root mean square) of 2 mm in three dimensions.
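
    A hedged sketch (Python; the recoil velocity and geometry are assumed example values, not the experiment's) of the event-by-event Doppler correction that the first-interaction position enables: the interaction point fixes the emission angle with respect to the recoil direction, and hence the shift of each γ ray.

      import math

      def doppler_corrected_energy(e_lab_kev: float, beta: float, cos_theta: float) -> float:
          """Rest-frame energy E0 = E_lab * (1 - beta * cos_theta) / sqrt(1 - beta**2),
          with theta the lab angle between the recoil velocity and the emitted gamma ray."""
          return e_lab_kev * (1.0 - beta * cos_theta) / math.sqrt(1.0 - beta ** 2)

      def emission_cos_theta(interaction_xyz, source_xyz=(0.0, 0.0, 0.0), beam_axis=(0.0, 0.0, 1.0)):
          """Direction cosine from the first interaction point, assuming recoil along the beam axis."""
          d = [i - s for i, s in zip(interaction_xyz, source_xyz)]
          norm = math.sqrt(sum(c * c for c in d))
          return sum(c * a for c, a in zip(d, beam_axis)) / norm

      # Assumed example: beta = 0.05 recoil, first interaction about 30 degrees off the beam axis.
      cos_t = emission_cos_theta((10.0, 0.0, 17.3))
      print(f"E0 = {doppler_corrected_energy(2144.0, 0.05, cos_t):.1f} keV")

    A few millimetres of uncertainty in the interaction position translates into an angular, and hence energy, uncertainty, which is why comparing the measured and simulated energy resolution constrains the position resolution.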

  15. Scale-relativistic cosmology

    International Nuclear Information System (INIS)

    Nottale, Laurent

    2003-01-01

    The principle of relativity, when it is applied to scale transformations, leads to the suggestion of a generalization of fundamental dilation laws. These new special scale-relativistic resolution transformations involve log-Lorentz factors and lead to the occurrence of a minimal and of a maximal length-scale in nature, which are invariant under dilations. The minimal length-scale, which replaces zero from the viewpoint of its physical properties, is identified with the Planck length l_P, and the maximal scale, which replaces infinity, is identified with the cosmic scale L = Λ^(-1/2), where Λ is the cosmological constant. The new interpretation of the Planck scale has several implications for the structure and history of the early Universe: we consider the questions of the origin, of the status of physical laws at very early times, of the horizon/causality problem and of fluctuations at the recombination epoch. The new interpretation of the cosmic scale has consequences for our knowledge of the present universe, concerning in particular Mach's principle, the large number coincidence, the problem of the vacuum energy density, and the nature and value of the cosmological constant. The value (theoretically predicted ten years ago) of the scaled cosmological constant Ω_Λ = 0.75 ± 0.15 is now supported by several different experiments (Hubble diagram of Supernovae, Boomerang measurements, gravitational lensing by clusters of galaxies). The scale-relativity framework also allows one to suggest a solution to the missing mass problem, and to make theoretical predictions of fundamental energy scales, thanks to the interpretation of new structures in scale space: fractal/classical transitions as Compton lengths, mass-coupling relations and the critical value 4π^2 of inverse couplings. Among them, we find a structure at (3.27 ± 0.26) × 10^20 eV, which agrees closely with the observed highest energy cosmic rays at (3.2 ± 0.9) × 10^20 eV, and another at 5.3 × 10^-3 eV, which corresponds to the
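
    A short numerical check (Python; the Hubble constant is an assumed fiducial value) of the identification L = Λ^(-1/2) quoted above, with Λ expressed through the scaled cosmological constant Ω_Λ = Λc²/(3H0²):

      import math

      H0 = 70.0 * 1000.0 / 3.0857e22   # assumed 70 km/s/Mpc, converted to 1/s
      C = 2.9979e8                     # speed of light, m/s
      OMEGA_LAMBDA = 0.75              # value quoted in the abstract

      lam = 3.0 * OMEGA_LAMBDA * H0 ** 2 / C ** 2   # cosmological constant, 1/m^2
      L = 1.0 / math.sqrt(lam)                      # cosmic scale L = Lambda**(-1/2), m
      print(f"Lambda = {lam:.2e} m^-2, L = {L:.2e} m = {L / 3.0857e25:.2f} Gpc")

    With these assumed numbers L comes out at a few gigaparsecs, comparable to the present Hubble radius, which gives a feel for the maximal, dilation-invariant scale the abstract refers to.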

  16. Simulations of Cyclone Sidr in the Bay of Bengal with a High-Resolution Model: Sensitivity to Large-Scale Boundary Forcing

    Science.gov (United States)

    Kumar, Anil; Done, James; Dudhia, Jimy; Niyogi, Dev

    2011-01-01

    The predictability of Cyclone Sidr in the Bay of Bengal was explored in terms of track and intensity using the Advanced Research Hurricane Weather Research Forecast (AHW) model. This constitutes the first application of the AHW over an area that lies outside the region of the North Atlantic for which this model was developed and tested. Several experiments were conducted to understand the possible contributing factors that affected Sidr's intensity and track simulation by varying the initial start time and domain size. Results show that Sidr's track was strongly controlled by the synoptic flow at the 500-hPa level, especially the strong mid-latitude westerly over north-central India. A 96-h forecast produced westerly winds over north-central India at the 500-hPa level that were notably weaker; this likely caused the modeled cyclone track to drift from the observed track. Reducing the model domain size reduced the model error in the synoptic-scale winds at 500 hPa and produced an improved cyclone track. Specifically, the cyclone track appeared to be sensitive to the upstream synoptic flow and was, therefore, sensitive to the location of the western boundary of the domain. However, cyclone intensity remained largely unaffected by this synoptic wind error at the 500-hPa level. Comparison of the high-resolution, moving nested domain with a single coarser-resolution domain showed little difference in tracks, but resulted in significantly different intensities. Experiments on the domain size with regard to the total precipitation simulated by the model showed that precipitation patterns and 10-m surface winds were also different, mainly due to the mid-latitude westerly flow across the west side of the model domain. The analysis also suggested that the total precipitation pattern and track were unchanged when the domain was extended toward the east, north, and south. Furthermore, this highlights our conclusion that Sidr was influenced from the west.

  17. Grand unification and the fundamental problems of classical cosmology

    International Nuclear Information System (INIS)

    Turner, M.S.

    1981-01-01

    The accomplishments of classical cosmology are reviewed. In particular, the hot big bang model provides a reliable framework for understanding the evolution of the universe back to times at least as early as approximately 0.01 s after the big bang. At present there are (at least) six fundamental problems which have not yet been (completely) resolved. They are: (1) the origin of the baryon number-to-entropy ratio, (2) the origin of the isotropy, (3) the origin of the homogeneity and inhomogeneity, (4) the origin of the flatness, (5) the cosmological constant, and (6) the monopole problem. The role that grand unification has played, and may play, in the resolution of these puzzles is discussed. Guth's inflationary universe, which addresses five of these six problems, is reviewed.

  18. Cosmological Hubble constant and nuclear Hubble constant

    International Nuclear Information System (INIS)

    Horbuniev, Amelia; Besliu, Calin; Jipa, Alexandru

    2005-01-01

    The evolution of the Universe after the Big Bang and the evolution of the dense and highly excited nuclear matter formed in relativistic nuclear collisions are investigated and compared. Values of the Hubble constants for the cosmological and nuclear processes are obtained. For nucleus-nucleus collisions at high energies, the nuclear Hubble constant is obtained in the framework of different models involving the hydrodynamic flow of the nuclear matter. A significant difference between the values of the two Hubble constants - cosmological and nuclear - is observed.
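
    An order-of-magnitude sketch (Python; the fireball size and flow velocity are assumed illustrative values, not results from the paper) of why the two constants differ so dramatically even though both follow from a linear flow law v = H * r:

      C = 2.9979e8  # speed of light, m/s

      # Cosmological Hubble constant: assumed fiducial 70 km/s/Mpc, expressed in 1/s.
      h_cosmo = 70.0 * 1000.0 / 3.0857e22

      # "Nuclear Hubble constant" from a Hubble-like flow at freeze-out:
      # assumed flow velocity ~0.5c at an assumed fireball radius of ~10 fm.
      h_nuclear = 0.5 * C / 10e-15

      print(f"H_cosmological ~ {h_cosmo:.1e} 1/s")
      print(f"H_nuclear      ~ {h_nuclear:.1e} 1/s")
      print(f"ratio          ~ {h_nuclear / h_cosmo:.1e}")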

  19. Recovering the colour-dependent albedo of exoplanets with high-resolution spectroscopy: from ESPRESSO to the ELT.

    Science.gov (United States)

    Martins, J. H. C.; Figueira, P.; Santos, N. C.; Melo, C.; Garcia Muñoz, A.; Faria, J.; Pepe, F.; Lovis, C.

    2018-05-01

    The characterization of planetary atmospheres is a daunting task, pushing current observing facilities to their limits. The next generation of high-resolution spectrographs mounted on large telescopes - such as ESPRESSO@VLT and HIRES@ELT - will allow us to probe and characterize exoplanetary atmospheres in greater detail than has been possible to this point. We present a method that permits the recovery of the colour-dependent reflectivity of exoplanets from high-resolution spectroscopic observations. Determining the wavelength-dependent albedo will provide insight into the chemical properties and weather of exoplanet atmospheres. For this work, we simulated ESPRESSO@VLT and HIRES@ELT high-resolution observations of known planetary systems with several albedo configurations. We demonstrate how the cross-correlation technique applied to these simulated observations can be used to successfully recover the geometric albedo of exoplanets over a range of wavelengths. In all cases, we were able to recover the wavelength-dependent albedo of the simulated exoplanets and distinguish between several atmospheric models representing different atmospheric configurations. In brief, we demonstrate that the cross-correlation technique allows for the recovery of exoplanetary albedo functions from optical observations with the next generation of high-resolution spectrographs mounted on large telescopes, with reasonable exposure times. Its recovery will permit the characterization of exoplanetary atmospheres in terms of composition and dynamics, and consolidates the cross-correlation technique as a powerful tool for exoplanet characterization.
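
    A minimal sketch (Python, synthetic spectra; not the authors' pipeline) of the cross-correlation step: the reflected planetary spectrum is a faint, Doppler-shifted copy of the stellar spectrum, and cross-correlating the star-subtracted residuals with a stellar template concentrates the planetary signal into a single peak whose amplitude scales with the albedo.

      import numpy as np

      rng = np.random.default_rng(2)
      n_pix = 4000
      stellar = 1.0 - 0.3 * (rng.random(n_pix) < 0.02)   # toy stellar spectrum with absorption lines
      shift_pix, contrast = 37, 1e-3                      # assumed planetary RV shift and albedo contrast

      observed = stellar + contrast * np.roll(stellar - 1.0, shift_pix)   # star plus faint shifted copy
      observed += 2e-4 * rng.standard_normal(n_pix)                       # photon noise (illustrative)

      residual = observed - stellar                 # remove the star; planet signal and noise remain
      template = stellar - stellar.mean()
      ccf = np.correlate(residual - residual.mean(), template, mode="full")
      lag = int(np.argmax(ccf)) - (n_pix - 1)
      print(f"CCF peak at lag {lag} pixels (injected shift: {shift_pix})")

    Repeating this for wavelength sub-bands, and calibrating the peak amplitude against injected signals, is the essence of recovering a colour-dependent geometric albedo.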

  20. Simulating the Agulhas system in global ocean models - nesting vs. multi-resolution unstructured meshes

    Science.gov (United States)

    Biastoch, Arne; Sein, Dmitry; Durgadoo, Jonathan V.; Wang, Qiang; Danilov, Sergey

    2018-01-01

    Many questions in ocean and climate modelling require the combined use of high resolution, global coverage and multi-decadal integration length. For this combination, even modern computing resources limit the use of traditional structured-mesh grids. Here we compare two approaches: a high-resolution grid nested into a global model at coarser resolution (NEMO with AGRIF), and an unstructured-mesh grid (FESOM) which allows resolution to be enhanced variably where desired. The Agulhas system around South Africa is used as a test case, providing an energetic interplay of a strong western boundary current and mesoscale dynamics. Its open setting within the horizontal and global overturning circulations also requires global coverage. Both model configurations simulate a reasonable large-scale circulation. The distribution and temporal variability of the wind-driven circulation are quite comparable owing to the same atmospheric forcing. However, the overturning circulation differs, owing to each model's ability to represent the formation and spreading of deep water masses. In terms of regional, high-resolution dynamics, all elements of the Agulhas system are well represented. Owing to the strong nonlinearity in the system, the Agulhas Current transports of the two configurations differ from each other, and from observations, in strength and temporal variability. Similar decadal trends in Agulhas Current transport and Agulhas leakage are linked to the trends in the wind forcing.