WorldWideScience

Sample records for high-resolution cosmological simulations

  1. Evaluating Galactic Habitability Using High Resolution Cosmological Simulations of Galaxy Formation

    OpenAIRE

    Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam

    2015-01-01

    D. F. acknowledges support from STFC consolidated grant ST/J001422/1, and the ‘ECOGAL’ ERC Advanced Grant. P. D. acknowledges the support of the Addison Wheeler Fellowship awarded by the Institute of Advanced Study at Durham University. N. I. L. is supported by the Deutsche Forschungs Gemeinschaft (DFG). We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which...

  2. Precision cosmology with time delay lenses: high resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao-Lei; Liao, Kai [Department of Astronomy, Beijing Normal University, 19 Xinjiekouwai Street, Beijing, 100875 (China); Treu, Tommaso; Agnello, Adriano [Department of Physics, University of California, Broida Hall, Santa Barbara, CA 93106 (United States); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Marshall, Philip J., E-mail: xlmeng919@gmail.com, E-mail: tt@astro.ucla.edu, E-mail: aagnello@physics.ucsb.edu, E-mail: mauger@ast.cam.ac.uk, E-mail: liaokai@mail.bnu.edu.cn, E-mail: dr.phil.marshall@gmail.com [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305 (United States)

    2015-09-01

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as 'Einstein Rings' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^−γ' for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation
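
    The precision target above is on the slope γ' of the deflector mass profile; what it ultimately feeds into is the time-delay distance that converts a measured delay into cosmology. A minimal, hedged sketch of that standard relation (not code from the paper; the redshifts and cosmology below are illustrative assumptions), using astropy:

    from astropy.cosmology import FlatLambdaCDM

    # Illustrative lens/source redshifts and cosmology -- placeholders, not values from the study
    cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
    z_lens, z_src = 0.5, 2.0

    D_d = cosmo.angular_diameter_distance(z_lens)
    D_s = cosmo.angular_diameter_distance(z_src)
    D_ds = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)

    # Standard time-delay distance: D_dt = (1 + z_d) * D_d * D_s / D_ds;
    # a measured delay dt constrains D_dt via dt = D_dt * (Fermat potential difference) / c
    D_dt = (1 + z_lens) * D_d * D_s / D_ds
    print(f"time-delay distance ≈ {D_dt:.0f}")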

  3. Precision cosmology with time delay lenses: High resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao -Lei [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Treu, Tommaso [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Agnello, Adriano [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Auger, Matthew W. [Univ. of Cambridge, Cambridge (United Kingdom); Liao, Kai [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Marshall, Philip J. [Stanford Univ., Stanford, CA (United States)

    2015-09-28

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as 'Einstein Rings' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^−γ' for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive

  4. GERLUMPH DATA RELEASE 1: HIGH-RESOLUTION COSMOLOGICAL MICROLENSING MAGNIFICATION MAPS AND eResearch TOOLS

    International Nuclear Information System (INIS)

    Vernardos, G.; Fluke, C. J.; Croton, D.; Bate, N. F.

    2014-01-01

    As synoptic all-sky surveys begin to discover new multiply lensed quasars, the flow of data will enable statistical cosmological microlensing studies of sufficient size to constrain quasar accretion disk and supermassive black hole properties. In preparation for this new era, we are undertaking the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). We present here the GERLUMPH Data Release 1, which consists of 12,342 high resolution cosmological microlensing magnification maps and provides the first uniform coverage of the convergence, shear, and smooth matter fraction parameter space. We use these maps to perform a comprehensive numerical investigation of the mass-sheet degeneracy, finding excellent agreement with its predictions. We study the effect of smooth matter on microlensing induced magnification fluctuations. In particular, in the minima and saddle-point regions, fluctuations are enhanced only along the critical line, while in the maxima region they are always enhanced for high smooth matter fractions (≈0.9). We describe our approach to data management, including the use of an SQL database with a Web interface for data access and online analysis, obviating the need for individuals to download large volumes of data. In combination with existing observational databases and online applications, the GERLUMPH archive represents a fundamental component of a new microlensing eResearch cloud. Our maps and tools are publicly available at http://gerlumph.swin.edu.au/
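
    For orientation, the (convergence, shear) grid that GERLUMPH spans sets the smooth macro-magnification at each image position; a minimal sketch of that standard relation (the values below are illustrative, not taken from the data release):

    import numpy as np

    # Standard macro-magnification for convergence kappa and shear gamma:
    # mu = 1 / |(1 - kappa)^2 - gamma^2|
    kappa = np.array([0.25, 0.45, 0.65])
    gamma = np.array([0.10, 0.20, 0.30])
    K, G = np.meshgrid(kappa, gamma)
    mu = 1.0 / np.abs((1.0 - K) ** 2 - G ** 2)

    for k, g, m in zip(K.ravel(), G.ravel(), mu.ravel()):
        print(f"kappa={k:.2f}  gamma={g:.2f}  mu={m:6.2f}")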

  5. GERLUMPH DATA RELEASE 1: HIGH-RESOLUTION COSMOLOGICAL MICROLENSING MAGNIFICATION MAPS AND eResearch TOOLS

    Energy Technology Data Exchange (ETDEWEB)

    Vernardos, G.; Fluke, C. J.; Croton, D. [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria, 3122 (Australia); Bate, N. F. [Sydney Institute for Astronomy, School of Physics, A28, University of Sydney, NSW, 2006 (Australia)

    2014-03-01

    As synoptic all-sky surveys begin to discover new multiply lensed quasars, the flow of data will enable statistical cosmological microlensing studies of sufficient size to constrain quasar accretion disk and supermassive black hole properties. In preparation for this new era, we are undertaking the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). We present here the GERLUMPH Data Release 1, which consists of 12,342 high resolution cosmological microlensing magnification maps and provides the first uniform coverage of the convergence, shear, and smooth matter fraction parameter space. We use these maps to perform a comprehensive numerical investigation of the mass-sheet degeneracy, finding excellent agreement with its predictions. We study the effect of smooth matter on microlensing induced magnification fluctuations. In particular, in the minima and saddle-point regions, fluctuations are enhanced only along the critical line, while in the maxima region they are always enhanced for high smooth matter fractions (≈0.9). We describe our approach to data management, including the use of an SQL database with a Web interface for data access and online analysis, obviating the need for individuals to download large volumes of data. In combination with existing observational databases and online applications, the GERLUMPH archive represents a fundamental component of a new microlensing eResearch cloud. Our maps and tools are publicly available at http://gerlumph.swin.edu.au/.

  6. The AGORA High-resolution Galaxy Simulations Comparison Project

    OpenAIRE

    Kim Ji-hoon; Abel Tom; Agertz Oscar; Bryan Greg L.; Ceverino Daniel; Christensen Charlotte; Conroy Charlie; Dekel Avishai; Gnedin Nickolay Y.; Goldbaum Nathan J.; Guedes Javiera; Hahn Oliver; Hobbs Alexander; Hopkins Philip F.; Hummels Cameron B.

    2014-01-01

    The Astrophysical Journal Supplement Series 210.1 (2014): 14 reproduced by permission of the AAS We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle o...

  7. Cosmological implications of the MAXIMA-1 high-resolution cosmic microwave background anisotropy measurement

    International Nuclear Information System (INIS)

    Stompor, R.; Abroe, M.; Ade, P.; Balbi, A.; Barbosa, D.; Bock, J.; Borrill, J.; Boscaleri, A.; de Bernardis, P.; Ferreira, P.G.; Hanany, S.; Hristov, V.; Jaffe, A.H.; Lee, A.T.; Pascale, E.; Rabii, B.; Richards, P.L.; Smoot, G.F.; Winant, C.D.; Wu, J.H.P.

    2001-01-01

    We discuss the cosmological implications of the new constraints on the power spectrum of the cosmic microwave background (CMB) anisotropy derived from a new high-resolution analysis of the MAXIMA-1 measurement. The power spectrum indicates excess power at l ≈ 860 over the average level of power at 411 ≤ l ≤ 785. This excess is statistically significant at the ≈95 percent confidence level. Its position coincides with that of the third acoustic peak, as predicted by generic inflationary models selected to fit the first acoustic peak as observed in the data. The height of the excess power agrees with the predictions of a family of inflationary models with cosmological parameters that are fixed to fit the CMB data previously provided by BOOMERANG-LDB and MAXIMA-1 experiments. Our results therefore lend support for inflationary models and more generally for the dominance of adiabatic coherent perturbations in the structure formation of the universe. At the same time, they seem to disfavor a large variety of the nonstandard (but inflation-based) models that have been proposed to improve the quality of fits to the CMB data and the consistency with other cosmological observables. Within standard inflationary models, our results combined with the COBE/Differential Microwave Radiometer data give best-fit values and 95 percent confidence limits for the baryon density, Ω_b h² ≃ 0.033 ± 0.013, and the total density, Ω = 0.9 (+0.18/−0.16). The primordial spectrum slope (n_s) and the optical depth to the last scattering surface (τ_c) are found to be degenerate and to obey the relation n_s ≃ (0.99 ± 0.14) + 0.46 τ_c, for τ_c ≤ 0.5 (all at 95 percent confidence levels)
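
    As a quick worked illustration of the degeneracy relation quoted above (simply evaluating the published fit, valid for τ_c ≤ 0.5; this is not new analysis):

    # n_s ≃ (0.99 ± 0.14) + 0.46 * tau_c, as quoted in the abstract
    for tau_c in (0.0, 0.2, 0.5):
        n_s = 0.99 + 0.46 * tau_c
        print(f"tau_c = {tau_c:.1f}  ->  best-fit n_s ≈ {n_s:.2f}  (±0.14 at 95% confidence)")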

  8. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
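
    For readers unfamiliar with SPPT, the core idea is a multiplicative perturbation of the net parametrised tendency by a correlated random pattern; the following is a schematic one-point sketch of that idea (an assumption-laden toy, not the ECMWF implementation, with made-up parameter values):

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, dt = 100, 0.25        # number of time steps and step length (hours), illustrative
    sigma, tau = 0.5, 6.0          # pattern standard deviation and decorrelation time (hours), illustrative
    phi = np.exp(-dt / tau)        # AR(1) autocorrelation per step

    r = 0.0
    tendency = 1.0e-5              # some net parametrised tendency (e.g. K/s), made up
    for _ in range(n_steps):
        r = phi * r + np.sqrt(1.0 - phi**2) * rng.normal(0.0, sigma)
        r_clipped = float(np.clip(r, -0.99, 0.99))   # keep the multiplier positive
        perturbed = (1.0 + r_clipped) * tendency     # SPPT-style multiplicative perturbation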

  9. Validation of High-resolution Climate Simulations over Northern Europe.

    Science.gov (United States)

    Muna, R. A.

    2005-12-01

    Two AMIP2-type (Gates 1992) experiments have been performed with climate versions of the ARPEGE/IFS model for the North Atlantic, Northern Europe, and Norwegian regions, to analyze the effect of increasing resolution on the simulated biases. The ECMWF reanalysis (ERA-15) has been used to validate the simulations. Each of the simulations is an integration of the period 1979 to 1996. The global simulations used observed monthly mean sea surface temperatures (SST) as the lower boundary condition. All aspects but the horizontal resolution are identical in the two simulations. The first simulation has a uniform horizontal resolution of T63L. The second one has a variable resolution (T106Lc3) with the highest resolution in the Norwegian Sea. Both simulations have 31 vertical layers in the same locations. For each simulation the results were divided into two seasons: winter (DJF) and summer (JJA). The parameters investigated were mean sea level pressure, geopotential and temperature at 850 hPa and 500 hPa. To find out the causes of the temperature bias during summer, latent and sensible heat flux, total cloud cover and total precipitation were analyzed. The high-resolution simulation exhibits a broadly realistic climate over the Nordic, Arctic and European regions. The overall performance of the simulations shows improvement in generally all fields investigated with increasing resolution over the target area, both in winter (DJF) and summer (JJA).

  10. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT

    International Nuclear Information System (INIS)

    Kim, Ji-hoon; Conroy, Charlie; Goldbaum, Nathan J.; Krumholz, Mark R.; Abel, Tom; Agertz, Oscar; Gnedin, Nickolay Y.; Kravtsov, Andrey V.; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte; Hummels, Cameron B.; Dekel, Avishai; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly

    2014-01-01

    We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of eight galaxies with halo masses M_vir ≅ 10^10, 10^11, 10^12, and 10^13 M_⊙ at z = 0 and two different ('violent' and 'quiescent') assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback prescriptions will be tuned to provide a realistic interstellar and circumgalactic medium using a non-cosmological disk galaxy simulation. Cosmological runs will be systematically compared with each other using a common analysis toolkit and validated against observations to verify that the solutions are robust—i.e., that the astrophysical assumptions are responsible for any success, rather than artifacts of particular implementations. The goals of the AGORA project are, broadly speaking, to raise the realism and predictive power of galaxy simulations and the understanding of the feedback processes that regulate galaxy 'metabolism'. The initial conditions for the AGORA galaxies as well as simulation outputs at various epochs will be made publicly available to the community. The proof-of-concept dark-matter-only test of the formation of a galactic halo with a z = 0 mass of M

  11. Montecarlo simulation for a new high resolution elemental analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion

    1996-12-31

    Full text. Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate are shown when the photon energy is just over the edge associated to each element, because of the emission of characteristic X rays. For a given associated energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2{pi} solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, where a synchrotron would be an ideal source, due to the high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  12. Montecarlo simulation for a new high resolution elemental analysis methodology

    International Nuclear Information System (INIS)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Full text. Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate are shown when the photon energy is just over the edge associated to each element, because of the emission of characteristic X rays. For a given associated energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, where a synchrotron would be an ideal source, due to the high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  13. Kinetic Energy from Supernova Feedback in High-resolution Galaxy Simulations

    Science.gov (United States)

    Simpson, Christine M.; Bryan, Greg L.; Hummels, Cameron; Ostriker, Jeremiah P.

    2015-08-01

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells' mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10^9 M_⊙ dwarf halo. We find that in high-density media (≳50 cm^−3) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
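
    To make the bookkeeping concrete, here is a hedged toy sketch of the general idea (not the paper's algorithm): a resolution-dependent fraction of the supernova energy is deposited as kinetic energy, shared among neighbouring cells as outward motion, with the rest added as thermal energy. The scaling f_kin(dx) and the cell mass below are placeholders, not the calibrated values from the paper.

    import numpy as np

    E_SN = 1.0e51                       # erg, canonical supernova energy

    def f_kin(dx_pc):
        # hypothetical resolution-dependent kinetic fraction (placeholder form)
        return min(0.10, 0.01 * (dx_pc / 4.0))

    dx = 4.0                            # cell size in pc
    E_kin = f_kin(dx) * E_SN
    E_thermal = E_SN - E_kin

    n_cells = 26                        # immediate neighbours sharing the kinetic energy
    e_per_cell = E_kin / n_cells
    m_cell = 2.0e33                     # g, made-up gas mass per cell
    v_added = np.sqrt(2.0 * e_per_cell / m_cell)   # outward speed added to each cell
    print(f"kinetic fraction {f_kin(dx):.3f}, added speed ≈ {v_added / 1.0e5:.0f} km/s per cell")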

  14. The Megamaser Cosmology Project. X. High-resolution Maps and Mass Constraints for SMBHs

    Science.gov (United States)

    Zhao, W.; Braatz, J. A.; Condon, J. J.; Lo, K. Y.; Reid, M. J.; Henkel, C.; Pesce, D. W.; Greene, J. E.; Gao, F.; Kuo, C. Y.; Impellizzeri, C. M. V.

    2018-02-01

    We present high-resolution (sub-mas) Very Long Baseline Interferometry maps of nuclear H2O megamasers for seven galaxies. In UGC 6093, the well-aligned systemic masers and high-velocity masers originate in an edge-on, flat disk and we determine the mass of the central supermassive black hole (SMBH) to be M_SMBH = 2.58 × 10^7 M_⊙ (±7%). For J1346+5228, the distribution of masers is consistent with a disk, but the faint high-velocity masers are only marginally detected, and we constrain the mass of the SMBH to be in the range (1.5–2.0) × 10^7 M_⊙. The origin of the masers in Mrk 1210 is less clear, as the systemic and high-velocity masers are misaligned and show a disorganized velocity structure. We present one possible model in which the masers originate in a tilted, warped disk, but we do not rule out the possibility of other explanations including outflow masers. In NGC 6926, we detect a set of redshifted masers, clustered within a parsec of each other, and a single blueshifted maser about 4.4 pc away, an offset that would be unusually large for a maser disk system. Nevertheless, if it is a disk system, we estimate the enclosed mass to be M_SMBH < 4.8 × 10^7 M_⊙. For NGC 5793, we detect redshifted masers spaced about 1.4 pc from a clustered set of blueshifted features. The orientation of the structure supports a disk scenario as suggested by Hagiwara et al. We estimate the enclosed mass to be M_SMBH < 1.3 × 10^7 M_⊙. For NGC 2824 and J0350-0127, the masers may be associated with parsec- or subparsec-scale jets or outflows.
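
    The disk-maser mass estimates above rest on Keplerian rotation, M = v²r/G; a minimal sketch of that arithmetic with astropy (the rotation speed and radius are illustrative placeholders, not measurements from the paper):

    from astropy import units as u
    from astropy.constants import G

    v = 700 * u.km / u.s      # hypothetical high-velocity maser rotation speed
    r = 0.3 * u.pc            # hypothetical orbital radius

    M = (v**2 * r / G).to(u.Msun)   # enclosed (black hole) mass under Keplerian rotation
    print(f"enclosed mass ≈ {M:.2e}")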

  15. Operational High Resolution Chemical Kinetics Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are critical to addressing urgent issues in both the developed and developing world. Ongoing demand for higher resolution...

  16. High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey

    Science.gov (United States)

    Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.

    2018-06-01

    We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S_850 ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ_860 = 1.5 mJy beam^−1 in our primary-beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S_860 ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M_⊙ yr^−1 sources to be ≈3 (+0.7/−0.6) × 10^−7 Mpc^−3. We show that the descendants of these galaxies could be ≳4 × 10^11 M_⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
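
    The quoted volume density follows from dividing a source count by the comoving volume of the assumed redshift slice over the survey area; a hedged sketch of that arithmetic with astropy (the area and count below are illustrative placeholders, not the S2CLS values):

    import numpy as np
    import astropy.units as u
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
    area = 5.0 * u.deg**2        # hypothetical survey area
    n_sources = 50               # hypothetical number of bright sources in the slice

    sky_fraction = float(area.to(u.sr) / (4.0 * np.pi * u.sr))
    volume = (cosmo.comoving_volume(3.0) - cosmo.comoving_volume(2.0)) * sky_fraction
    print(f"volume density ≈ {(n_sources / volume).to(u.Mpc**-3):.1e}")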

  17. SPECTRA OF STRONG MAGNETOHYDRODYNAMIC TURBULENCE FROM HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Beresnyak, Andrey

    2014-01-01

    Magnetohydrodynamic (MHD) turbulence is present in a variety of solar and astrophysical environments. Solar wind fluctuations with frequencies lower than 0.1 Hz are believed to be mostly governed by Alfvénic turbulence, with particle transport depending on the power spectrum and the anisotropy of such turbulence. Recently, conflicting spectral slopes for the inertial range of MHD turbulence have been reported by different groups. Spectral shapes from earlier simulations showed that MHD turbulence is less scale-local compared with hydrodynamic turbulence. This is why higher-resolution simulations, and careful and rigorous numerical analysis, are especially needed for the MHD case. In this Letter, we present two groups of simulations with resolution up to 4096^3, which are numerically well-resolved and have been analyzed with an exact and well-tested method of scaling study. Our results from both simulation groups indicate that the asymptotic power spectral slope for all energy-related quantities, such as total energy and residual energy, is around −1.7, close to Kolmogorov's −5/3. This suggests that residual energy is a constant fraction of the total energy and that in the asymptotic regime of Alfvénic turbulence magnetic and kinetic spectra have the same scaling. The −1.5 slope for energy and the −2 slope for residual energy, which have been suggested earlier, are incompatible with our numerics

  18. High Resolution N-Body Simulations of Terrestrial Planet Growth

    Science.gov (United States)

    Clark Wallace, Spencer; Quinn, Thomas R.

    2018-04-01

    We investigate planetesimal accretion with a direct N-body simulation of an annulus at 1 AU around a 1 M_sun star. The planetesimal ring, which initially contains N = 10^6 bodies, is evolved through the runaway growth stage into the phase of oligarchic growth. We find that the mass distribution of planetesimals develops a bump around 10^22 g shortly after the oligarchs form. This feature is absent in previous lower resolution studies. We find that this bump marks a boundary between growth modes. Below the bump mass, planetesimals are packed tightly enough together to populate first-order mean motion resonances with the oligarchs. These resonances act to heat the tightly packed, low-mass planetesimals, inhibiting their growth. We examine the eccentricity evolution of a dynamically hot planetary embryo embedded in an annulus of planetesimals and find that dynamical friction acts more strongly on the embryo when the planetesimals are finely resolved. This effect disappears when the annulus is made narrow enough to exclude most of the mean motion resonances. Additionally, we find that the 10^22 g bump is significantly less prominent when we follow planetesimal growth with a skinny annulus. This feature, which is reminiscent of the power-law break seen in the size distribution of asteroid belt objects, may be an important clue for constraining the initial size of planetesimals in planet formation models.
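
    The resonance argument above depends only on Kepler's third law; a minimal sketch locating the interior first-order (p+1):p mean-motion resonances of an embryo at 1 AU (illustrative, not the paper's analysis):

    # interior first-order resonances: period ratio p/(p+1)  =>  a_res/a_emb = (p/(p+1))**(2/3)
    a_embryo = 1.0   # AU
    for p in range(1, 8):
        a_res = a_embryo * (p / (p + 1)) ** (2.0 / 3.0)
        print(f"{p + 1}:{p} resonance at a ≈ {a_res:.3f} AU")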

  19. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  20. Very high-resolution regional climate simulations over Scandinavia-present climate

    DEFF Research Database (Denmark)

    Christensen, Ole B.; Christensen, Jens H.; Machenhauer, Bennert

    1998-01-01

    …realistically simulated. It is found in particular that in mountainous regions the high-resolution simulation shows improvements in the simulation of hydrologically relevant fields such as runoff and snow cover. Also, the distribution of precipitation on different intensity classes is most realistically simulated in the high-resolution simulation. It does, however, inherit certain large-scale systematic errors from the driving GCM. In many cases these errors increase with increasing resolution. Model verification of near-surface temperature and precipitation is made using a new gridded climatology based on a high-density station network for the Scandinavian countries compiled for the present study. The simulated runoff is compared with observed data from Sweden extracted from a Swedish climatological atlas. These runoff data indicate that the precipitation analyses are underestimating the true…

  1. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data

  2. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic, variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, means that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  3. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short (…) simulations; such tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that the approach is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in

  4. Simulations of structure formation in interacting dark energy cosmologies

    International Nuclear Information System (INIS)

    Baldi, M.

    2009-01-01

    The evidence in favor of a dark energy component dominating the Universe, and driving its presently accelerated expansion, has progressively grown during the last decade of cosmological observations. If this dark energy is given by a dynamic scalar field, it may also have a direct interaction with other matter fields in the Universe, in particular with cold dark matter. Such interaction would imprint new features on the cosmological background evolution as well as on the growth of cosmic structure, like an additional long-range fifth-force between massive particles, or a variation in time of the dark matter particle mass. We present here the implementation of these new physical effects in the N-body code GADGET-2, and we discuss the outcomes of a series of high-resolution N-body simulations for a selected family of interacting dark energy models. We interestingly find, in contrast with previous claims, that the inner overdensity of dark matter halos decreases in these models with respect to ΛCDM, and consistently halo concentrations show a progressive reduction for increasing couplings. Furthermore, the coupling induces a bias in the overdensities of cold dark matter and baryons that determines a decrease of the halo baryon fraction below its cosmological value. These results go in the direction of alleviating tensions between astrophysical observations and the predictions of the ΛCDM model on small scales, thereby opening new room for coupled dark energy models as an alternative to the cosmological constant.

  5. Cosmological N -body simulations including radiation perturbations

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Rampf, Cornelius; Tram, Thomas

    2017-01-01

    Cosmological N-body simulations are the standard tools to study the emergence of the observed large-scale structure of the Universe. Such simulations usually solve for the gravitational dynamics of matter within the Newtonian approximation, thus discarding general relativistic effects such as…

  6. Luciola Hypertelescope Space Observatory. Versatile, Upgradable High-Resolution Imaging,from Stars to Deep-Field Cosmology

    Science.gov (United States)

    Labeyrie, Antoine; Le Coroller, Herve; Dejonghe, Julien; Lardiere, Olivier; Aime, Claude; Dohlen, Kjetil; Mourard, Denis; Lyon, Richard; Carpenter, Kenneth G.

    2008-01-01

    Luciola is a large (one kilometer) "multi-aperture densified-pupil imaging interferometer", or "hypertelescope", employing many small apertures, rather than a few large ones, for obtaining direct snapshot images with a high information content. A diluted collector mirror, deployed in space as a flotilla of small mirrors, focuses a sky image which is exploited by several beam-combiner spaceships. Each contains a pupil densifier micro-lens array to avoid the diffractive spread and image attenuation caused by the small sub-apertures. The elucidation of hypertelescope imaging properties during the last decade has shown that many small apertures tend to be far more efficient, regarding the science yield, than a few large ones providing a comparable collecting area. For similar underlying physical reasons, radio astronomy has also evolved in the direction of many-antenna systems such as the proposed Low Frequency Array having hundreds of thousands of individual receivers. With its high limiting magnitude, reaching the m_v = 30 limit of HST when 100 collectors of 25 cm will match its collecting area, high-resolution direct imaging in multiple channels, broad spectral coverage from the 1200 Angstrom ultraviolet to the 20 micron infrared, apodization, coronagraphic and spectroscopic capabilities, the proposed hypertelescope observatory addresses very broad and innovative science covering different areas of ESA's Cosmic Vision program. In the initial phase, a focal spacecraft covering the UV to near-IR spectral range of EMCCD photon-counting cameras (currently 200 to 1000 nm) will image details on the surface of many stars, as well as their environment, including multiple stars and clusters. Spectra will be obtained for each resel. It will also image neutron star, black-hole and micro-quasar candidates, as well as active galactic nuclei, quasars, gravitational lenses, and other Cosmic Vision targets observable with the initial modest crowding limit. With subsequent upgrade

  7. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    Science.gov (United States)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

    Flooding is a significant threat that causes considerable disruption in many societies worldwide. Ongoing climate change further increases flood risk, which poses a substantial menace to many societies and their economies. Improvements in the spatial resolution and accuracy of topography and land-use data from remote sensing techniques can support integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) is carried out. High-resolution satellite images are processed. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of Land Use/Land Cover maps. In addition, soil maps are updated by means of radar satellite images. The above high-resolution data are used to simulate and validate several historical flood events in Mediterranean watersheds which have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resource Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data accordingly provides detailed, high-precision validation results. Furthermore, the meteorological forecast data, which are combined with the simulation model results, support the development of an integrated flood forecasting and early warning system tool capable of confronting or even preventing this imminent risk. The research reported in this paper was fully supported by the
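
    The NRCS curve-number step referenced above is a closed-form relation; a minimal sketch of it (the CN and rainfall depth are illustrative, not values from the study):

    def scs_runoff_mm(P_mm: float, CN: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff depth Q (mm) for storm rainfall P (mm) and curve number CN."""
        S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
        Ia = ia_ratio * S             # initial abstraction
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    print(round(scs_runoff_mm(P_mm=80.0, CN=75), 1))   # ≈ 26.9 mm of direct runoff for an 80 mm storm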

  8. High Resolution Numerical Simulations of Primary Atomization in Diesel Sprays with Single Component Reference Fuels

    Science.gov (United States)

    2015-09-01

    A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel liquid sprays … the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios.

  9. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations

    Science.gov (United States)

    Rino, C. L.; Carrano, C. S.; Yokoyama, T.

    2017-12-01

    In a recent paper, under review, equatorial-plasma-bubble (EPB) simulations were used to conduct a comparative analysis of the EPB spectral characteristics with high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high- and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics as the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional index by one. The relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total-electron-content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently

  10. The simulation of a data acquisition system for a proposed high resolution PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Rotolo, C.; Larwill, M.; Chappa, S. [Fermi National Accelerator Lab., Batavia, IL (United States); Ordonez, C. [Chicago Univ., IL (United States)

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs.

  11. The simulation of a data acquisition system for a proposed high resolution PET scanner

    International Nuclear Information System (INIS)

    Rotolo, C.; Larwill, M.; Chappa, S.; Ordonez, C.

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs

  12. High resolution real time capable combustion chamber simulation; Zeitlich hochaufloesende echtzeitfaehige Brennraumsimulation

    Energy Technology Data Exchange (ETDEWEB)

    Piewek, J. [Volkswagen AG, Wolfsburg (Germany)

    2008-07-01

    The article describes a zero-dimensional model for real-time-capable combustion chamber pressure calculation with analogue pressure sensor output. The closed-loop operation of an Engine Control Unit is demonstrated on a hardware-in-the-loop simulator (HiL simulator) for a 4-cylinder common-rail diesel engine. The presentation of the model focuses on the simulation of the load variation, which does not depend on the injection system, and thus on the simulated heat release rate. Particular attention is paid to the simulation and the resulting test possibilities regarding fully variable valve gears. It is shown that the black-box models contained in the HiL mean-value model for the aspirated gas mass, the exhaust gas temperature after the outlet valve and the mean indicated pressure can be replaced by calculations from the high-resolution combustion chamber model. (orig.)
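
    As a flavour of what a zero-dimensional, crank-angle-resolved combustion description involves, here is a hedged sketch of a generic Wiebe burn-fraction curve (a textbook ingredient of such models, not the Volkswagen implementation; all parameter values are generic):

    import numpy as np

    def wiebe_burn_fraction(theta_deg, theta_soc=-5.0, duration=50.0, a=6.908, m=2.0):
        """Cumulative mass fraction burned as a function of crank angle (degrees)."""
        x = np.clip((theta_deg - theta_soc) / duration, 0.0, None)
        return 1.0 - np.exp(-a * x ** (m + 1.0))

    theta = np.linspace(-30.0, 90.0, 7)
    print(np.round(wiebe_burn_fraction(theta), 3))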

  13. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

    Directory of Open Access Journals (Sweden)

    J. Tao

    2012-09-01

    Due to its all-weather data acquisition capability, high-resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications such as change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. An application of the concept is presented for the city centre of Munich, where the comparison of the simulation to the TerraSAR-X data shows a good similarity. Based on the result of simulation and matching, special features (e.g. double-bounce lines, shadow areas) can be automatically indicated in the SAR image.

  14. Cosmological simulations of multicomponent cold dark matter.

    Science.gov (United States)

    Medvedev, Mikhail V

    2014-08-15

    The nature of dark matter is unknown. A number of dark matter candidates are quantum flavor-mixed particles but this property has never been accounted for in cosmology. Here we explore this possibility from the first principles via extensive N-body cosmological simulations and demonstrate that the two-component dark matter model agrees with observational data at all scales. Substantial reduction of substructure and flattening of density profiles in the centers of dark matter halos found in simulations can simultaneously resolve several outstanding puzzles of modern cosmology. The model shares the "why now?" fine-tuning caveat pertinent to all self-interacting models. Predictions for direct and indirect detection dark matter experiments are made.

  15. MODELING AND SIMULATION OF HIGH RESOLUTION OPTICAL REMOTE SENSING SATELLITE GEOMETRIC CHAIN

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2018-04-01

    High-resolution satellites with long focal lengths and large apertures have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high-resolution remote sensing satellite geometric chain is presented, consisting of the scene, the three-line-array camera, the platform (including attitude and position information), the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is rigorously validated by simulating geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.

  16. Statistics of Deep Convection in the Congo Basin Derived From High-Resolution Simulations.

    Science.gov (United States)

    White, B.; Stier, P.; Kipling, Z.; Gryspeerdt, E.; Taylor, S.

    2016-12-01

    Convection transports moisture, momentum, heat and aerosols through the troposphere, and so the temporal variability of convection is a major driver of global weather and climate. The Congo basin is home to some of the most intense convective activity on the planet and is under strong seasonal influence of biomass burning aerosol. However, deep convection in the Congo basin remains understudied compared to other regions of tropical storm systems, especially when compared to the neighbouring, relatively well-understood West African climate system. We use the WRF model to perform a high-resolution, cloud-system-resolving simulation to investigate convective storm systems in the Congo. Our setup pushes the boundaries of current computational resources, using a 1 km grid length over a domain covering millions of square kilometres and for a time period of one month. This allows us to draw statistical conclusions on the nature of the simulated storm systems. Comparing data from satellite observations and the model enables us to quantify the diurnal variability of deep convection in the Congo basin. This approach allows us to evaluate our simulations despite the lack of in-situ observational data. This provides a more comprehensive analysis of the diurnal cycle than has previously been shown. Further, we show that high-resolution convection-permitting simulations performed over near-seasonal timescales can be used in conjunction with satellite observations as an effective tool to evaluate new convection parameterisations.

  17. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray-tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope at beamline U7A of the National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for transmission X-ray microscopes (TXM) with precision optical elements and may also be useful for the alignment of optical components in other modes of microscopy.
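
    The record above combines ray-tracing with Rayleigh-Sommerfeld diffraction; the sketch below shows only the diffraction half in minimal form, propagating a scalar field with the closely related angular-spectrum method. The wavelength, pixel size and pinhole test object are illustrative assumptions, not parameters of the U7A beamline or of the authors' code.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex field a distance z using the angular-spectrum method."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0).astype(complex))
    H = np.exp(1j * kz * z)          # free-space transfer function
    H[kz_sq < 0] = 0.0               # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative: 2.4 nm soft X-rays, 10 nm pixels, a 1-micron pinhole propagated 50 microns.
n, dx, wl = 512, 10e-9, 2.4e-9
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
aperture = (x**2 + y**2 < (0.5e-6)**2).astype(complex)
intensity = np.abs(angular_spectrum_propagate(aperture, wl, dx, 50e-6))**2
print(intensity.max())
```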

  18. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping

    2013-12-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water-resource issues and emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered a number of flash floods over the last decade, following short, intense rainfall events. The research presented here examines four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution (1 km) inner domain around Jeddah. A range of different convective closure and microphysics parameterizations, together with high-resolution (4 km) sea surface temperature data, is employed. Through comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanisms producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  19. Can High-resolution WRF Simulations Be Used for Short-term Forecasting of Lightning?

    Science.gov (United States)

    Goodman, S. J.; Lapenta, W.; McCaul, E. W., Jr.; LaCasse, K.; Petersen, W.

    2006-01-01

    A number of research teams have begun to make quasi-operational forecast simulations at high resolution with models such as the Weather Research and Forecast (WRF) model. These model runs have used horizontal meshes of 2-4 km grid spacing, and thus resolved convective storms explicitly. In the light of recent global satellite-based observational studies that reveal robust relationships between total lightning flash rates and integrated amounts of precipitation-size ice hydrometeors in storms, it is natural to inquire about the capabilities of these convection-resolving models in representing the ice hydrometeor fields faithfully. If they do, this might make operational short-term forecasts of lightning activity feasible. We examine high-resolution WRF simulations from several Southeastern cases for which either NLDN or LMA lightning data were available. All the WRF runs use a standard microphysics package that depicts only three ice species, cloud ice, snow and graupel. The realism of the WRF simulations is examined by comparisons with both lightning and radar observations and with additional even higher-resolution cloud-resolving model runs. Preliminary findings are encouraging in that they suggest that WRF often makes convective storms of the proper size in approximately the right location, but they also indicate that higher resolution and better hydrometeor microphysics would be helpful in improving the realism of the updraft strengths, reflectivity and ice hydrometeor fields.

  20. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping; McCabe, Matthew; Stenchikov, Georgiy L.; Evans, Jason; Kucera, Paul

    2013-01-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water-resource issues and emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered a number of flash floods over the last decade, following short, intense rainfall events. The research presented here examines four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution (1 km) inner domain around Jeddah. A range of different convective closure and microphysics parameterizations, together with high-resolution (4 km) sea surface temperature data, is employed. Through comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanisms producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  1. A numerical relativity scheme for cosmological simulations

    Science.gov (United States)

    Daverio, David; Dirian, Yves; Mitsou, Ermis

    2017-12-01

    Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large scale structure survey data. We propose a new 3+1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.

  2. Seeding black holes in cosmological simulations

    Science.gov (United States)

    Taylor, P.; Kobayashi, C.

    2014-08-01

    We present a new model for the formation of black holes in cosmological simulations, motivated by the first star formation. Black holes form from high density peaks of primordial gas, and grow via both gas accretion and mergers. Massive black holes heat the surrounding material, suppressing star formation at the centres of galaxies, and driving galactic winds. We perform an investigation into the physical effects of the model parameters, and obtain a `best' set of these parameters by comparing the outcome of simulations to observations. With this best set, we successfully reproduce the cosmic star formation rate history, black hole mass-velocity dispersion relation, and the size-velocity dispersion relation of galaxies. The black hole seed mass is ~10^3 M⊙, which is orders of magnitude smaller than that which has been used in previous cosmological simulations with active galactic nuclei, but suggests that the origin of the seed black holes is the death of Population III stars.

  3. Verification of high resolution simulation of precipitation and wind in Portugal

    Science.gov (United States)

    Menezes, Isilda; Pereira, Mário; Moreira, Demerval; Carvalheiro, Luís; Bugalho, Lourdes; Corte-Real, João

    2017-04-01

    Demand for energy and freshwater continues to grow as the global population increases. Precipitation feeds freshwater ecosystems, which provide a wealth of goods and services for society, and sustains the river flows that support native species and natural ecosystem functions. Expanding wind and hydro-electric power supplies can help meet energy demand without restricting economic growth under accelerated-policy scenarios. However, the international meteorological observation network is not sufficiently dense to directly support high-resolution climatic research. In this sense, coupled global and regional atmospheric models constitute the most appropriate physical and numerical tool for weather forecasting and for downscaling onto high-resolution grids, with the capacity to overcome problems resulting from the lack of observed data and from measurement errors. This study therefore aims to calibrate and validate the WRF regional model for simulations of precipitation and wind fields on a high-spatial-resolution grid covering Portugal. The simulations were performed with two-way nesting on three grids of increasing resolution (60 km, 20 km and 5 km), and the model performance was assessed for summer and winter months (January and July) using input variables from two different reanalysis and forecast databases (ERA-Interim and NCEP-FNL) and different forcing schemes. The verification procedure included: (i) several error estimators, correlation-based measures and relative-error descriptors; and (ii) an observed dataset composed of time series of hourly precipitation, wind speed and wind direction provided by the Portuguese meteorological institute for a comprehensive set of weather stations. The main results suggest the good ability of WRF to: (i) reproduce the spatial patterns of the mean and total observed fields; (ii) do so with relatively small bias and other errors; and (iii) achieve good temporal correlation. These findings are in good
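
    A minimal sketch of the kind of verification statistics mentioned above (bias, RMSE and correlation between observed and simulated hourly series). The synthetic wind-speed series and error magnitudes below are made up for illustration and do not come from the study.

```python
import numpy as np

def verification_scores(obs, sim):
    """Bias (sim - obs), RMSE and Pearson correlation for paired hourly series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    corr = np.corrcoef(obs, sim)[0, 1]
    return bias, rmse, corr

# Synthetic example standing in for observed vs simulated hourly wind speed (one month).
rng = np.random.default_rng(0)
obs = 2.0 + rng.gamma(2.0, 1.0, size=744)
sim = obs + rng.normal(0.3, 0.8, size=obs.size)   # simulated series with a small positive bias
print("bias %.2f  rmse %.2f  r %.2f" % verification_scores(obs, sim))
```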

  4. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    Science.gov (United States)

    Li, Dan; Bou-Zeid, Elie

    2014-05-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduce the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).

  5. Simulation study for high resolution alpha particle spectrometry with mesh type collimator

    International Nuclear Information System (INIS)

    Park, Seunghoon; Kwak, Sungwoo; Kang, Hanbyeol; Shin, Jungki; Park, Iljin

    2014-01-01

    Alpha-particle spectrometry with a mesh-type collimator plays a crucial role in identifying specific radionuclides in radioactive samples collected from the atmosphere or the environment. Without collimation the energy resolution is degraded, because particles emitted at large angles have a longer path to travel in the air and therefore undergo more collisions along the way. The collimator cuts out particles travelling at large angles, so an energy distribution with high resolution can be obtained. The mesh-type collimator is therefore simulated for high-resolution alpha-particle spectrometry. In conclusion, the collimator can improve the resolution: by removing large-angle particles, it reduces the low-energy tail and the broadening of the energy distribution. The mesh diameter is found to be an important factor controlling resolution and counting efficiency. As a result, a target nuclide, for example 235U, can be distinguished by a detector with a collimator in a mixture of various nuclides such as 232U, 238U, and 232Th.
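
    A toy Monte Carlo, under strong simplifying assumptions (isotropic forward emission, air path growing as 1/cos(theta), a single acceptance half-angle standing in for the mesh geometry), illustrating how a collimator trades counting efficiency for a narrower spread of air paths. None of the parameter values come from the study above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gap_mm = 5.0            # source-to-detector air gap (hypothetical)
half_angle_deg = 15.0   # acceptance half-angle set by the mesh collimator (hypothetical)

# Isotropic emission into the forward hemisphere: cos(theta) uniform in (0, 1].
cos_theta = rng.uniform(0.0, 1.0, n)
path_mm = gap_mm / np.clip(cos_theta, 1e-6, None)

accepted = cos_theta >= np.cos(np.radians(half_angle_deg))
print(f"geometric efficiency with collimator: {accepted.mean():.3f}")
print(f"median air path without collimation:  {np.median(path_mm):.2f} mm")
print(f"median air path with collimation:     {np.median(path_mm[accepted]):.2f} mm")
```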

  6. Air quality high resolution simulations of Italian urban areas with WRF-CHIMERE

    Science.gov (United States)

    Falasca, Serena; Curci, Gabriele

    2017-04-01

    The new European Directive on ambient air quality and cleaner air for Europe (2008/50/EC) encourages the use of modeling techniques to support observations in the assessment and forecasting of air quality. A modelling system based on the combination of the WRF meteorological model and the CHIMERE chemistry-transport model is used to perform simulations at high resolution over the main Italian cities (e.g. Milan, Rome). Three domains covering Europe, Italy and the urban areas are nested with a grid size decreasing down to 1 km. Numerical results are produced for a winter month and a summer month of the year 2010 and are validated using ground-based observations (e.g. from the European air quality database AirBase). A sensitivity study is performed using different physics options, domain resolutions and grid ratios; different urban parameterization schemes are tested, also using characteristic morphology parameters for the cities considered. A spatial reallocation of anthropogenic emissions derived from international (e.g. EMEP, TNO, HTAP) and national (e.g. CTN-ACE) emission inventories, based on land cover datasets (Global Land Cover Facility and GlobCover) and the OpenStreetMap tool, is also included. Preliminary results indicate that the high-resolution spatial redistribution allows a more realistic reproduction of the distribution of emission flows and thus of pollutant concentrations, with significant advantages especially for urban environments.

  7. Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities

    Science.gov (United States)

    Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.

    2015-12-01

    In this rapidly urbanizing world, an unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. The primary limitations of past attempts have been the availability of input data, useful "energy and behavior focused" models, validation data, and adequate computational capability to allow an adequate understanding of the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap road network and have integrated high-resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScanTM population, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. A consistent data and simulation platform allows quick adaptation to various geographic areas, as has been demonstrated for multiple cities across the world. We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation

  8. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-hoon [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Agertz, Oscar [Department of Physics, University of Surrey, Guildford, Surrey, GU2 7XH (United Kingdom); Teyssier, Romain; Feldmann, Robert [Centre for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Zurich, 8057 (Switzerland); Butler, Michael J. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, D-69120 Heidelberg (Germany); Choi, Jun-Hwan [Department of Astronomy, University of Texas, Austin, TX 78712 (United States); Keller, Ben W. [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada); Lupi, Alessandro [Institut d’Astrophysique de Paris, Sorbonne Universites, UPMC Univ Paris 6 et CNRS, F-75014 Paris (France); Quinn, Thomas; Wallace, Spencer [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Revaz, Yves [Institute of Physics, Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leitner, Samuel N. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Shen, Sijing [Kavli Institute for Cosmology, University of Cambridge, Cambridge, CB3 0HA (United Kingdom); Smith, Britton D., E-mail: me@jihoonkim.org [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Collaboration: AGORA Collaboration; and others

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  9. Simulation of the oxidation pathway on Si(100) using high-resolution EELS

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Conor [Consiglio Nazionale delle Ricerche, Istituto di Struttura della Materia (CNR-ISM), Rome (Italy); Dipartimento di Fisica, Universita di Roma ' ' Tor Vergata' ' , Roma (Italy); European Theoretical Spectroscopy Facility (ETSF), Roma (Italy); Caramella, Lucia; Onida, Giovanni [Dipartimento di Fisica, Universita degli Studi di Milano (Italy); European Theoretical Spectroscopy Facility (ETSF), Milano (Italy)

    2012-06-15

    We compute high-resolution electron energy loss spectra (HREELS) of possible structural motifs that form during the dynamic oxidation process on Si(100), including the important metastable precursor silanone and an adjacent-dimer bridge (ADB) structure that may seed oxide formation. Spectroscopic fingerprints of single site, silanone, and ''seed'' structures are identified and related to changes in the surface bandstructure of the clean surface. Incorporation of oxygen into the silicon lattice through adsorption and dissociation of water is also examined. Results are compared to available HREELS spectra and surface optical data, which are closely related. Our simulations confirm that HREELS offers complementary evidence to surface optical spectroscopy, and show that its high sensitivity allows it to distinguish between energetically and structurally similar oxidation models. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Updated vegetation information in high resolution regional climate simulations using WRF

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.

    Climate studies show that the frequency of heat wave events and above-average high temperatures during the summer months over Europe will increase in the coming decades. Such climatic changes and long-term meteorological conditions will impact the seasonal development of vegetation and ultimately ... modify the energy distribution at the land surface. In weather and climate models it is important to represent the vegetation variability accurately to obtain reliable results. The weather research and forecasting (WRF) model uses a green vegetation fraction (GVF) climatology to represent the seasonal ... or changes in management practice, since it was derived more than twenty years ago. In this study, a new high-resolution, high-quality GVF product is applied in a WRF climate simulation over Denmark during the 2006 heat wave year. The new GVF product reflects the year 2006 and it was previously tested ...

  11. High resolution simulations of orographic flow over a complex terrain on the Southeast coast of Brazil

    Science.gov (United States)

    Chou, S. C.; Zolino, M. M.; Gomes, J. L.; Bustamante, J. F.; Lima-e-Silva, P. P.

    2012-04-01

    The Eta Model has been used operationally by CPTEC to produce weather forecasts over South America since 1997 and has gone through several upgrades. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The Eta Model was configured with 2-km horizontal resolution and 50 layers. The Eta-2km run is a second nesting: it is driven by Eta-15km, which in turn is driven by ERA-Interim reanalyses. The model domain includes two Brazilian cities, Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain and coastline. Mountains rise up to about 700 m. The region suffers frequent floods and landslides. The objective of this work is to evaluate high-resolution simulations of wind and temperature in this complex area. Verification of the model runs uses observations taken at the nuclear power plant. Accurate near-surface wind direction and magnitude are needed for the plant emergency plan, and winds are highly sensitive to model spatial resolution and atmospheric stability. Verification of two summer cases shows that the model has a clear diurnal-cycle signal for wind in the region. The area is characterized by weak winds, which makes the simulation more difficult. The simulated wind magnitude is about 1.5 m/s, close to the observed values of about 2 m/s; however, the observed change of wind direction with the sea breeze is fast, whereas it is slow in the simulations. The nighttime katabatic flow is captured by the simulations. Comparison against Eta-5km runs shows that the valley circulation is better described in the 2-km resolution run. Simulated temperatures follow the observed diurnal cycle closely. Experiments improving some surface conditions, such as surface temperature and land cover, show reduced simulation errors and an improved diurnal cycle.

  12. Quantifying uncertainty in Transcranial Magnetic Stimulation - A high resolution simulation study in ICBM space.

    Science.gov (United States)

    Toschi, Nicola; Keck, Martin E; Welt, Tobias; Guerrisi, Maria

    2012-01-01

    Transcranial Magnetic Stimulation offers enormous potential for noninvasive brain stimulation. While it is known that brain tissue significantly "reshapes" induced field and charge distributions, most modeling investigations to date have focused on single-subject data with limited generality. Further, the effects of the significant uncertainties which exist in the simulation setup (i.e. brain conductivity distributions) and the stimulation setup (e.g. coil positioning and orientations) have not been quantified. In this study, we construct a high-resolution anisotropic head model in standard ICBM space, which can be used as a population-representative standard for bioelectromagnetic simulations. Further, we employ Monte-Carlo simulations in order to quantify how uncertainties in conductivity values propagate all the way to induced fields and currents, demonstrating significant, regionally dependent dispersions in values which are commonly assumed to be "ground truth". This framework can be leveraged to quantify the effect of any type of uncertainty in noninvasive brain stimulation and bears relevance to all applications of TMS, both investigative and therapeutic.
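
    The study above propagates conductivity uncertainty through a full anisotropic finite-element head model; the sketch below keeps only the Monte-Carlo skeleton, pushing lognormal conductivity samples through a trivial proxy (current density = conductivity times a fixed field estimate) to show how dispersion statistics are accumulated. The tissue values, spreads and field estimate are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Hypothetical nominal conductivities (S/m) and lognormal uncertainty factors.
tissues = {"grey matter": (0.33, 1.4), "white matter": (0.14, 1.5), "csf": (1.79, 1.2)}
E_field = 100.0  # fixed induced electric-field estimate, V/m (illustrative)

for name, (sigma0, spread) in tissues.items():
    sigma = rng.lognormal(mean=np.log(sigma0), sigma=np.log(spread), size=n_samples)
    J = sigma * E_field  # induced current-density proxy, A/m^2
    lo, hi = np.percentile(J, [2.5, 97.5])
    print(f"{name:12s}  median J = {np.median(J):6.1f} A/m^2   95% interval = [{lo:.1f}, {hi:.1f}]")
```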

  13. Experimental Investigation and High Resolution Simulation of In-Situ Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen; Tony Kovscek

    2008-04-30

    This final technical report describes work performed for the project 'Experimental Investigation and High Resolution Numerical Simulator of In-Situ Combustion Processes', DE-FC26-03NT15405. In summary, this work improved our understanding of in-situ combustion (ISC) process physics and oil recovery. This understanding was translated into improved conceptual models and a suite of software algorithms that extended predictive capabilities. We pursued experimental, theoretical, and numerical tasks during the performance period. The specific project objectives were (i) experimental identification of chemical additives/injectants that improve combustion performance, and delineation of the physics of the improved performance, (ii) establishment of a benchmark one-dimensional experimental data set for verification of in-situ combustion dynamics computed by simulators, (iii) development of improved numerical methods that describe in-situ combustion more accurately, and (iv) laying the underpinnings of a highly efficient, 3D, in-situ combustion simulator using adaptive mesh refinement techniques and parallelization. We believe that the project goals were met and exceeded, as discussed.

  14. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    Science.gov (United States)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  15. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    International Nuclear Information System (INIS)

    Li, Dan; Bou-Zeid, Elie

    2014-01-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduce the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)

  16. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    Science.gov (United States)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

    Over the past few decades, there have been many process-based crop models developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale, climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional-scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and
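
    The abstract above mentions a Python pipeline linking RCM output to APSIM; a minimal sketch of one such step is given below, writing a daily weather table to an APSIM-style .met file. The exact header layout, column names and values are illustrative assumptions rather than the project's actual format or data.

```python
import pandas as pd

def write_met(df, path, latitude, tav, amp):
    """Write a minimal APSIM-style .met weather file (layout shown is illustrative)."""
    header = (
        "[weather.met.weather]\n"
        f"latitude = {latitude:.2f} (DECIMALDEGREES)\n"
        f"tav = {tav:.2f} (oC)\n"
        f"amp = {amp:.2f} (oC)\n"
        "year  day  radn  maxt  mint  rain\n"
        "  ()   ()  (MJ/m^2) (oC) (oC) (mm)\n"
    )
    with open(path, "w") as f:
        f.write(header)
        for row in df.itertuples(index=False):
            f.write(f"{row.year:4d} {row.day:4d} {row.radn:6.1f} "
                    f"{row.maxt:6.1f} {row.mint:6.1f} {row.rain:6.1f}\n")

# Hypothetical daily series standing in for downscaled RCM output at one grid cell.
df = pd.DataFrame({
    "year": [2045, 2045], "day": [1, 2],
    "radn": [22.1, 19.4], "maxt": [31.2, 29.8], "mint": [14.5, 13.9], "rain": [0.0, 4.2],
})
write_met(df, "cell_001.met", latitude=36.78, tav=18.3, amp=12.1)
```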

  17. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gu Songxiang; Kyprianou, Iacovos [Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD (United States); Gupta, Rajiv, E-mail: songxiang.gu@fda.hhs.gov, E-mail: rgupta1@partners.org, E-mail: iacovos.kyprianou@fda.hhs.gov [Massachusetts General Hospital, Boston, MA (United States)

    2011-09-21

    Cardiovascular disease in general, and coronary artery disease (CAD) in particular, are the leading causes of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  18. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    International Nuclear Information System (INIS)

    Gu Songxiang; Kyprianou, Iacovos; Gupta, Rajiv

    2011-01-01

    Cardiovascular disease in general, and coronary artery disease (CAD) in particular, are the leading causes of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  19. Modeling Supermassive Black Holes in Cosmological Simulations

    Science.gov (United States)

    Tremmel, Michael

    My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component to massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.

  20. Use of High-Resolution WRF Simulations to Forecast Lightning Threat

    Science.gov (United States)

    McCaul, E. W., Jr.; LaCasse, K.; Goodman, S. J.; Cecil, D. J.

    2008-01-01

    Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors aloft in storms. This relationship is exploited, in conjunction with the capabilities of cloud-resolving forecast models such as WRF, to forecast explicitly the threat of lightning from convective storms using selected output fields from the model forecasts. The simulated vertical flux of graupel at -15 C and the shape of the simulated reflectivity profile are tested in this study as proxies for charge separation processes and their associated lightning risk. Our lightning forecast method differs from others in that it is entirely based on high-resolution simulation output, without reliance on any climatological data. Short (6-8 h) simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity fields, and METAR and ACARS data yield satisfactory simulations. Analyses of the lightning threat fields suggest that both the graupel flux and reflectivity profile approaches, when properly calibrated, can yield reasonable lightning threat forecasts, although an ensemble approach is probably desirable in order to reduce the tendency for misplacement of modeled storms to hurt the accuracy of the forecasts. Our lightning threat forecasts are also compared to other more traditional means of forecasting thunderstorms, such as those based on inspection of the convective available potential energy field.
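
    A minimal sketch of the graupel-flux proxy described above: evaluate rho * w * q_graupel at the model level nearest -15 C. The array names and the tiny synthetic column are assumptions for illustration; calibration against observed flash rates, which the authors emphasize, is a separate step not shown here.

```python
import numpy as np

def graupel_flux_at_minus15C(w, q_graupel, temperature, rho):
    """Proxy lightning threat: rho * w * q_g evaluated at the level nearest -15 C.

    All inputs are (nz, ny, nx) arrays assumed to be read from model output.
    Returns a (ny, nx) field in kg m^-2 s^-1.
    """
    flux = rho * w * q_graupel                          # vertical graupel mass flux per level
    k15 = np.abs(temperature - 258.15).argmin(axis=0)   # level index closest to -15 C
    jj, ii = np.meshgrid(np.arange(flux.shape[1]), np.arange(flux.shape[2]), indexing="ij")
    return flux[k15, jj, ii]

# Tiny synthetic column set, just to show the call signature (values are illustrative).
nz, ny, nx = 30, 4, 4
T = np.linspace(300.0, 210.0, nz)[:, None, None] * np.ones((nz, ny, nx))
w = np.full((nz, ny, nx), 5.0)        # m/s updraft
qg = np.full((nz, ny, nx), 1.0e-3)    # kg/kg graupel mixing ratio
rho = np.full((nz, ny, nx), 0.6)      # kg/m^3
print(graupel_flux_at_minus15C(w, qg, T, rho).mean())
```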

  1. High-resolution simulations of galaxy formation in a cold dark matter scenario

    International Nuclear Information System (INIS)

    Kates, R.E.; Klypin, A.A.

    1990-01-01

    We present the results of our numerical simulations of galaxy clustering in a two-dimensional model. Our simulations allowed better resolution than could be obtained in three-dimensional simulations. We used a spectrum of initial perturbations corresponding to a cold dark matter (CDM) model and followed the history of each particle by modelling the shocking and subsequent cooling of matter. We took into account cooling processes in a hot plasma with primeval cosmic abundances of H and He as well as Compton cooling. (However, the influence of these processes on the trajectories of ordinary matter particles was not simulated in the present code.) As a result of the high resolution, we were able to observe a network of chains on all scales down to the limits of resolution. This network extends out from dense clusters and superclusters and penetrates into voids (with decreasing density). In addition to the dark matter network structure, a definite prediction of our simulations is the existence of a connected filamentary structure consisting of hot gas with a temperature of 10^6 K and extending over 100-150 Mpc. (Throughout this paper, we assume the Hubble constant H_0 = 50 km/sec/Mpc.) These structures trace high-density filaments of the dark matter distribution and should be searched for in soft X-ray observations. In contrast to common assumptions, we found that peaks of the linearized density distribution were not reliable tracers of the eventual galaxy distribution. We were also able to demonstrate that the influence of small-scale fluctuations on the structure at larger scales is always small, even at the late nonlinear stage. (orig.)

  2. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
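
    For concreteness, one member of the WENO family assessed above is sketched below: the classic fifth-order Jiang-Shu reconstruction of a left-biased interface value from five cell averages. This is a textbook formulation under standard assumptions, not the specific implementation used in the paper.

```python
import numpy as np

def weno5_reconstruct(u, eps=1e-6):
    """Left-biased WENO5 (Jiang-Shu) value at interface i+1/2 from cell averages u[i-2..i+2]."""
    um2, um1, u0, up1, up2 = u

    # Candidate third-order stencil reconstructions.
    q0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    q1 = (-um1 + 5*u0 + 2*up1) / 6.0
    q2 = (2*u0 + 5*up1 - up2) / 6.0

    # Smoothness indicators.
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2

    # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10).
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*q0 + w[1]*q1 + w[2]*q2

print(weno5_reconstruct([1.0, 1.0, 1.0, 0.0, 0.0]))  # near a discontinuity: stays close to 1
print(weno5_reconstruct([1.0, 2.0, 3.0, 4.0, 5.0]))  # smooth linear data: exactly 3.5
```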

  3. The simulation of medicanes in a high-resolution regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Cavicchia, Leone [Centro Euro-Mediterraneo per i Cambiamenti Climatici, Bologna (Italy); Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); Ca' Foscari University, Venice (Italy); Storch, Hans von [Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); University of Hamburg, Meteorological Institute, Hamburg (Germany)

    2012-11-15

    Medicanes, strong mesoscale cyclones with tropical-like features, develop occasionally over the Mediterranean Sea. Due to the scarcity of observations over sea and the coarse resolution of the long-term reanalysis datasets, it is difficult to study systematically the multidecadal statistics of sub-synoptic medicanes. Our goal is to assess the long-term variability and trends of medicanes, obtaining a long-term climatology through dynamical downscaling of the NCEP/NCAR reanalysis data. In this paper, we examine the robustness of this method and investigate the value added for the study of medicanes. To do so, we performed several climate mode simulations with a high resolution regional atmospheric model (CCLM) for a number of test cases described in the literature. We find that the medicanes are formed in the simulations, with deeper pressures and stronger winds than in the driving global NCEP reanalysis. The tracks are adequately reproduced. We conclude that our methodology is suitable for constructing multi-decadal statistics and scenarios of current and possible future medicane activities. (orig.)

  4. The 2010 Pakistan floods: high-resolution simulations with the WRF model

    Science.gov (United States)

    Viterbo, Francesca; Parodi, Antonio; Molini, Luca; Provenzale, Antonello; von Hardenberg, Jost; Palazzi, Elisa

    2013-04-01

    Estimating current and future water resources in high mountain regions with complex orography is a difficult but crucial task. In particular, the French-Italian project PAPRIKA is focused on two specific regions in the Hindu-Kush - Himalaya - Karakorum (HKKH) region: the Shigar basin in Pakistan, at the feet of K2, and the Khumbu valley in Nepal, at the feet of Mount Everest. In this framework, we use the WRF model to simulate precipitation and meteorological conditions at high resolution in areas with extreme orographic slopes, comparing the model output with station and satellite data. Once the model is validated, we shall run a set of three future time-slices at very high spatial resolution, for the periods 2046-2050, 2071-2075 and 2096-2100, nested in different climate change scenarios (EXtreme PREcipitation and Hydrological climate Scenario Simulations - EXPRESS-Hydro project). As a prelude to this study, here we discuss the simulation of specific, high-intensity rainfall events in this area. In this paper we focus on the 2010 Pakistan floods, which began in late July 2010, produced heavy monsoon rains in the Khyber Pakhtunkhwa, Sindh, Punjab and Balochistan regions of Pakistan, and affected the Indus River basin. Approximately one-fifth of Pakistan's total land area was underwater, with a death toll of about 2000 people. This event has been simulated with the WRF model (version 3.3) in cloud-permitting mode (d01 at 14 km and d02 at 3.5 km); different convective closures and microphysics parameterizations have been used. A deeper understanding of the processes responsible for this event has been gained through comparison with rainfall depth observations, radiosounding data and geostationary/polar satellite images.

  5. Achieving accurate simulations of urban impacts on ozone at high resolution

    International Nuclear Information System (INIS)

    Li, J; Georgescu, M; Mahalov, A; Moustaoui, M; Hyde, P

    2014-01-01

    The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O3] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and, (2) the urban land cover is replaced with the region’s native shrubland. Impacts due to the presence of the built environment on [O3] are highly heterogeneous across the metropolitan area. Increased near surface [O3] due to urbanization of 10–20 ppb is predominantly a nighttime phenomenon while simulated impacts during daytime are negligible. Urbanization narrows the daily [O3] range (by virtue of increasing nighttime minima), an impact largely due to the region’s urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas. (letter)
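
    The MCI downscaling idea can be illustrated in one dimension with an off-the-shelf shape-preserving monotone cubic (SciPy's PCHIP): interpolated emissions stay non-negative and show no spurious overshoot between coarse cells. The transect values and grids below are hypothetical, and the authors' actual 2-D scheme is not reproduced here.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, interp1d

# Hypothetical 1-D transect of 4 km emission rates (arbitrary units).
x4 = np.arange(0.0, 40.0, 4.0)
e4 = np.array([0.1, 0.1, 0.2, 0.8, 5.0, 9.0, 3.0, 0.5, 0.2, 0.1])

x1 = np.arange(0.0, 36.0 + 1e-9, 1.0)          # target 1 km grid
e_mci = PchipInterpolator(x4, e4)(x1)          # shape-preserving monotone cubic
e_lin = interp1d(x4, e4)(x1)                   # simple linear interpolation for comparison

print("min of monotone-cubic field  :", e_mci.min())   # stays non-negative for this data
print("peak preserved on the 1 km grid:", e_mci.max())
print("max |MCI - linear| difference :", np.abs(e_mci - e_lin).max())
```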

  6. Detailed high-resolution three-dimensional simulations of OMEGA separated reactants inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Haines, Brian M., E-mail: bmhaines@lanl.gov; Fincke, James R.; Shah, Rahul C.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B. [Los Alamos National Laboratory, MS T087, Los Alamos, New Mexico 87545 (United States); Grim, Gary P. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)

    2016-07-15

    We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. Through these processes, the long

  7. Simulation studies for a high resolution time projection chamber at the international linear collider

    Energy Technology Data Exchange (ETDEWEB)

    Muennich, A.

    2007-03-26

    The International Linear Collider (ILC) is planned to be the next large accelerator. The ILC will be able to perform high precision measurements only possible in the clean environment of electron-positron collisions. In order to reach this high accuracy, the requirements for the detector performance are challenging. Several detector concepts are currently under study. The understanding of the detector and its performance will be crucial to extract the desired physics results from the data. To optimise the detector design, simulation studies are needed. Simulation packages like GEANT4 allow one to model the detector geometry and simulate the energy deposit in the different materials. However, the detector response taking into account the transportation of the produced charge to the readout devices and the effects of the readout electronics cannot be described in detail. These processes in the detector will change the measured position of the energy deposit relative to the point of origin. The determination of this detector response is the task of detailed simulation studies, which have to be carried out for each subdetector. A high resolution Time Projection Chamber (TPC) with gas amplification based on micro-pattern gas detectors is one of the options for the main tracking system at the ILC. In the present thesis a detailed simulation tool to study the performance of a TPC was developed. Its goal is to find the optimal settings to reach an excellent momentum and spatial resolution. After an introduction to the present status of particle physics and the ILC project with special focus on the TPC as central tracker, the simulation framework is presented. The basic simulation methods and implemented processes are introduced. Within this stand-alone simulation framework each electron produced by primary ionisation is transferred through the gas volume and amplified using Gas Electron Multipliers (GEMs). The output format of the simulation is identical to the raw data from a
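
    A toy sketch of the kind of single-electron transport such a framework performs: each primary electron is drifted with Gaussian transverse diffusion, given an exponentially distributed GEM gain, and binned onto readout pads. The gas parameters, gain model and pad size are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical TPC/gas parameters.
n_electrons = 200            # primary ionisation electrons along a track segment
drift_length_cm = 50.0       # distance to the GEM stack
d_t = 0.02                   # transverse diffusion coefficient, cm / sqrt(cm)
mean_gain = 2.0e3            # effective GEM-stack gain

x0 = np.linspace(0.0, 1.0, n_electrons)            # track projected on the pad plane (cm)
sigma = d_t * np.sqrt(drift_length_cm)             # diffusion width after the full drift
x_arrival = x0 + rng.normal(0.0, sigma, n_electrons)
gain = rng.exponential(mean_gain, n_electrons)     # exponential single-electron gain model

# Charge collected on 1 mm pads.
pad_edges = np.arange(-0.5, 1.6, 0.1)
charge, _ = np.histogram(x_arrival, bins=pad_edges, weights=gain)
centroid = np.sum(charge * (pad_edges[:-1] + 0.05)) / np.sum(charge)
print(f"diffusion width = {sigma * 1e4:.0f} um, reconstructed track centre = {centroid:.3f} cm")
```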

  8. Aerosol midlatitude cyclone indirect effects in observations and high-resolution simulations

    Directory of Open Access Journals (Sweden)

    D. T. McCoy

    2018-04-01

    Aerosol–cloud interactions are a major source of uncertainty in inferring the climate sensitivity from the observational record of temperature. The adjustment of clouds to aerosol is a poorly constrained aspect of these aerosol–cloud interactions. Here, we examine the response of midlatitude cyclone cloud properties to a change in cloud droplet number concentration (CDNC). Idealized experiments in high-resolution, convection-permitting global aquaplanet simulations with constant CDNC are compared to 13 years of remote-sensing observations. Observations and idealized aquaplanet simulations agree that increased warm conveyor belt (WCB) moisture flux into cyclones is consistent with higher cyclone liquid water path (CLWP). When CDNC is increased, a larger LWP is needed to give the same rain rate. The LWP adjusts to allow the rain rate to equal the moisture flux into the cyclone along the WCB. This results in an increased CLWP for higher CDNC at a fixed WCB moisture flux in both observations and simulations. If observed cyclones in the top and bottom tercile of CDNC are contrasted, they are found to have not only a higher CLWP but also higher cloud cover and albedo. The difference in cyclone albedo between the cyclones in the top and bottom third of CDNC is observed by CERES to be between 0.018 and 0.032, which is consistent with a 4.6–8.3 W m−2 in-cyclone enhancement in upwelling shortwave when scaled by annual-mean insolation. Based on a regression model of observed cyclone properties, roughly 60 % of the observed variability in CLWP can be explained by CDNC and WCB moisture flux.
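    The regression itself is not reproduced in the abstract; the following is a minimal Python sketch of the kind of two-predictor regression described above, with hypothetical arrays cdnc, wcb_flux and clwp standing in for the cyclone-composite observations. The explained-variance figure quoted above comes from the authors' own fit, not from this sketch.

    import numpy as np

    # Hypothetical per-cyclone observations (stand-ins for the real composites).
    rng = np.random.default_rng(1)
    cdnc = rng.uniform(20, 200, size=500)        # cloud droplet number concentration
    wcb_flux = rng.uniform(0.5, 3.0, size=500)   # warm conveyor belt moisture flux
    clwp = 0.4 * cdnc + 30.0 * wcb_flux + rng.normal(0, 15, size=500)

    # Ordinary least squares: CLWP ~ a*CDNC + b*WCB_flux + c
    X = np.column_stack([cdnc, wcb_flux, np.ones_like(cdnc)])
    coef, *_ = np.linalg.lstsq(X, clwp, rcond=None)

    # Fraction of CLWP variance explained by the two predictors (R^2).
    resid = clwp - X @ coef
    r2 = 1.0 - resid.var() / clwp.var()
    print(coef, r2)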

  9. High-resolution nested model simulations of the climatological circulation in the southeastern Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    S. Brenner

    2003-01-01

    As part of the Mediterranean Forecasting System Pilot Project (MFSPP) we have implemented a high-resolution (2 km horizontal grid, 30 sigma levels) version of the Princeton Ocean Model for the southeastern corner of the Mediterranean Sea. The domain extends 200 km offshore and includes the continental shelf and slope, and part of the open sea. The model is nested in an intermediate resolution (5.5 km grid) model that covers the entire Levantine, Ionian, and Aegean Seas. The nesting is one way, so that velocity, temperature, and salinity along the boundaries are interpolated from the relevant intermediate model variables. An integral constraint is applied so that the net mass flux across the open boundaries is identical to the net flux in the intermediate model. The model is integrated for three perpetual years with surface forcing specified from monthly mean climatological wind stress and heat fluxes. The model is stable and spins up within the first year to produce a repeating seasonal cycle throughout the three-year integration period. While there is some internal variability evident in the results, it is clear that, due to the relatively small domain, the results are strongly influenced by the imposed lateral boundary conditions. The results closely follow the simulation of the intermediate model. The main improvement is in the simulation over the narrow shelf region, which is not adequately resolved by the coarser grid model. Comparisons with direct current measurements over the shelf and slope show reasonable agreement despite the limitations of the climatological forcing. The model correctly simulates the direction and the typical speeds of the flow over the shelf and slope, but has difficulty properly reproducing the seasonal cycle in the speed. Key words. Oceanography: general (continental shelf processes; numerical modelling; ocean prediction)

  10. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    Science.gov (United States)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, we also explored the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from 2.5 deg resolution NCEP reanalysis at 6-hourly intervals, with SSTs from NCEP reanalysis data used as lower boundary forcing. A 60 km resolution CCAM run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km resolution simulation was then downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high resolution model runs was to obtain a deeper understanding of linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  11. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    Science.gov (United States)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.

  12. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    Science.gov (United States)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the end of 60-meter wire booms to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if the oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we discuss the implementation of a high resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position, and subsequent data processing algorithms yielded excellent spatial and temporal resolution. This method was repeated in a parametric study for various lengths, root tensions and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, Coulomb, and combinations of them. Our data analysis and dynamics models have shown that the intrinsic damping for the baseline boom is insufficient, thereby forcing project management to explore mitigation strategies.

  13. Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)

    2017-04-15

    It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds and that winds are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In those studies, the surface drag generated by the subgrid-scale orography is represented by introducing a sink term in the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill for low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. Increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.

  14. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David; Ott, Christian D. [TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Abdikamalov, Ernazar [Department of Physics, School of Science and Technology, Nazarbayev University, Astana 010000 (Kazakhstan); Couch, Sean M. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Haas, Roland [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, D-14476 Golm (Germany); Schnetter, Erik, E-mail: dradice@caltech.edu [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada)

    2016-03-20

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased.
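    The spectra themselves are not given in the abstract; below is a minimal Python illustration of how a shell-averaged turbulent kinetic energy spectrum, of the kind whose slope is discussed above, can be computed from a periodic gridded velocity field. The random white-noise field is only a stand-in for real simulation output.

    import numpy as np

    def kinetic_energy_spectrum(vx, vy, vz):
        """Shell-averaged kinetic energy spectrum E(k) for a periodic velocity cube."""
        n = vx.shape[0]
        ek = np.zeros(vx.shape)
        for v in (vx, vy, vz):
            vk = np.fft.fftn(v) / v.size              # discrete Fourier amplitudes
            ek += 0.5 * np.abs(vk) ** 2               # spectral kinetic energy per mode
        k1d = np.fft.fftfreq(n) * n                   # integer wavenumbers along each axis
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
        shells = np.arange(0.5, n // 2 + 1)           # shell edges at k = 0.5, 1.5, ...
        which = np.digitize(kmag, shells)
        ek_k = np.bincount(which, weights=ek.ravel(), minlength=shells.size + 1)[1:shells.size]
        return np.arange(1, shells.size), ek_k

    # White-noise example; simulation output would replace the random fields.
    rng = np.random.default_rng(0)
    k, ek = kinetic_energy_spectrum(*[rng.standard_normal((64, 64, 64)) for _ in range(3)])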

  15. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Radice, David; Ott, Christian D.; Abdikamalov, Ernazar; Couch, Sean M.; Haas, Roland; Schnetter, Erik

    2016-01-01

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased

  16. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    Science.gov (United States)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system in order to enable global high-resolution weather simulation and forecasting at CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. A full representation of the three-dimensional Coriolis force is considered in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region is set to ensure computation of cross-boundary transport and updating of Dirichlet-type boundary conditions in the solution process of the elliptical equations with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, development of the Rossby-Haurwitz wave and a baroclinic wave, are carried out, and excellent computational stability and accuracy of the dynamic core have been confirmed. After implementation of the physical processes of long- and short-wave radiation, cumulus convection, micro-physical transformation of water substances, and turbulent processes in the planetary boundary layer, including surface-layer vertical flux parameterization, a long-term run of the model is carried out under an idealized aqua-planet configuration to test the model physics and the model's ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like structure of circulation. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical zonal westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The specified SST and solar insolation, being symmetric about the equator, enhance the ITCZ and tropical precipitation, which is concentrated in the tropical region. Additional analysis and

  17. A web portal for hydrodynamical, cosmological simulations

    Science.gov (United States)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborate queries, or graphically by plotting arbitrary combinations of properties. These services allow users to run analysis tools on the raw simulation data of a chosen object. The job control layer is responsible for handling and performing the analysis jobs, which are executed on the computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available for the users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  18. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The high resolution simulation better resolves precipitation changes than its coarse resolution forcing, which contributes dominantly to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights about the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.
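    The exact bookkeeping used by the authors is not reproduced in the abstract; a standard form of the moisture-budget decomposition into thermodynamic, dynamic and transient-eddy terms, assumed here for illustration only, reads in LaTeX:

    % Time-mean moisture budget: net precipitation balances the vertically
    % integrated moisture flux convergence (overbars: time means, primes: transients).
    \overline{P} - \overline{E} \approx -\nabla\cdot\frac{1}{g}\int_0^{p_s}
        \left( \bar{q}\,\bar{\mathbf{u}} + \overline{q'\mathbf{u}'} \right) dp

    % A change \delta(\cdot) between two periods is then commonly split as
    \delta(P-E) \approx
        \underbrace{-\nabla\cdot\frac{1}{g}\int_0^{p_s} \bar{\mathbf{u}}\,\delta\bar{q}\,dp}_{\text{thermodynamic}}
        \underbrace{-\nabla\cdot\frac{1}{g}\int_0^{p_s} \bar{q}\,\delta\bar{\mathbf{u}}\,dp}_{\text{dynamic}}
        \underbrace{-\nabla\cdot\frac{1}{g}\int_0^{p_s} \delta\overline{q'\mathbf{u}'}\,dp}_{\text{transient eddies}}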

  19. Cosmological simulation with dust formation and destruction

    Science.gov (United States)

    Aoyama, Shohei; Hou, Kuan-Chou; Hirashita, Hiroyuki; Nagamine, Kentaro; Shimizu, Ikkoh

    2018-06-01

    To investigate the evolution of dust in a cosmological volume, we perform hydrodynamic simulations, in which the enrichment of metals and dust is treated self-consistently with star formation and stellar feedback. We consider dust evolution driven by dust production in stellar ejecta, dust destruction by sputtering, grain growth by accretion and coagulation, and grain disruption by shattering, and treat small and large grains separately to trace the grain size distribution. After confirming that our model nicely reproduces the observed relation between dust-to-gas ratio and metallicity for nearby galaxies, we concentrate on the dust abundance over the cosmological volume in this paper. The comoving dust mass density has a peak at redshift z ˜ 1-2, coincident with the observationally suggested dustiest epoch in the Universe. In the local Universe, roughly 10 per cent of the dust is contained in the intergalactic medium (IGM), where only 1/3-1/4 of the dust survives against dust destruction by sputtering. We also show that the dust mass function is roughly reproduced at ≲ 10^8 M⊙, while the massive end still has a discrepancy, which indicates the necessity of stronger feedback in massive galaxies. In addition, our model broadly reproduces the observed radial profile of dust surface density in the circum-galactic medium (CGM). While our model satisfies the observational constraints for the dust extinction on cosmological scales, it predicts that the dust in the CGM and IGM is dominated by large (>0.03 μm) grains, which is in tension with the steep reddening curves observed in the CGM.

  20. High-resolution simulations of the thermophysiological effects of human exposure to 100 MHz RF energy

    International Nuclear Information System (INIS)

    Nelson, David A; Curran, Allen R; Nyberg, Hans A; Marttila, Eric A; Mason, Patrick A; Ziriax, John M

    2013-01-01

    Human exposure to radio frequency (RF) electromagnetic energy is known to result in tissue heating and can raise temperatures substantially in some situations. Standards for safe exposure to RF do not reflect bio-heat transfer considerations however. Thermoregulatory function (vasodilation, sweating) may mitigate RF heating effects in some environments and exposure scenarios. Conversely, a combination of an extreme environment (high temperature, high humidity), high activity levels and thermally insulating garments may exacerbate RF exposure and pose a risk of unsafe temperature elevation, even for power densities which might be acceptable in a normothermic environment. A high-resolution thermophysiological model, incorporating a heterogeneous tissue model of a seated adult has been developed and used to replicate a series of whole-body exposures at a frequency (100 MHz) which approximates that of human whole-body resonance. Exposures were simulated at three power densities (4, 6 and 8 mW cm−2) plus a sham exposure and at three different ambient temperatures (24, 28 and 31 °C). The maximum hypothalamic temperature increase over the course of a 45 min exposure was 0.28 °C and occurred in the most extreme conditions (T_amb = 31 °C, PD = 8 mW cm−2). Skin temperature increases attributable to RF exposure were modest, with the exception of a ‘hot spot’ in the vicinity of the ankle where skin temperatures exceeded 39 °C. Temperature increases in internal organs and tissues were small, except for connective tissue and bone in the lower leg and foot. Temperature elevation also was noted in the spinal cord, consistent with a hot spot previously identified in the literature. (paper)

  1. High-resolution WRF-LES simulations for real episodes: A case study for prealpine terrain

    Science.gov (United States)

    Hald, Cornelius; Mauder, Matthias; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    While turbulence is parametrized in most large- or regional-scale weather and climate models, LES (Large Eddy Simulation) allows for the explicit modeling of turbulent structures in the atmosphere. With the exponential growth in available computing power the technique has become more and more applicable, yet it has mostly been used to model idealized scenarios. We investigate how well WRF-LES can represent small-scale weather patterns for real episodes, and the results are evaluated against different hydrometeorological measurements. We use WRF-LES to model the diurnal cycle for a 48-hour episode in summer over moderately complex terrain in southern Germany. The model setup uses a high resolution digital elevation model and land use and vegetation maps. The atmospheric boundary conditions are set by reanalysis data. Schemes for radiation and microphysics and a land-surface model are employed. The biggest challenge in the modeling arises from the high horizontal resolution of dx = 30 m, since the subgrid-scale model then requires a vertical resolution of dz ≈ 10 m for optimal results. We observe model instabilities and present solutions such as smoothing of the surface input data, careful positioning of the model domain and shortening of the model time step down to a twentieth of a second. Model results are compared to an array of instruments including eddy covariance stations, LIDAR, RASS, SODAR, weather stations and unmanned aerial vehicles. All instruments are part of the TERENO pre-Alpine area and were employed in the orchestrated measurement campaign ScaleX in July 2015. Examination of the results shows reasonable agreement between model and measurements in temperature and moisture profiles. Modeled wind profiles are highly dependent on the vertical resolution and are in accordance with measurements only at higher wind speeds. A direct comparison of turbulence is made difficult by the purely statistical character of turbulent motions in the model.

  2. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report the creation of a digital elevation model of the Olkiluoto area, incorporating a large area of seabed, is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data like contour lines and irregular elevation measurements were used as source data in the process. The precision and reliability of the available source data varied largely. A digital elevation model (DEM) is a representation of the elevation of the surface of the earth in a particular area in digital format. A DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data. A DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain amount of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and evolution of the landscape in the time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
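    The report's own workflow is not reproduced here; the following is a minimal Python sketch, under assumed inputs (scattered points xy, elevations z, per-point standard deviations sigma), of combining thin plate spline interpolation with Monte Carlo perturbation to obtain an elevation estimate and a spread at each grid node.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def dem_with_uncertainty(xy, z, sigma, grid_xy, n_draws=200, seed=0):
        """Thin plate spline DEM plus Monte Carlo spread from source-data uncertainty."""
        rng = np.random.default_rng(seed)
        draws = np.empty((n_draws, grid_xy.shape[0]))
        for i in range(n_draws):
            z_perturbed = z + rng.normal(0.0, sigma)          # sample each source point
            tps = RBFInterpolator(xy, z_perturbed, kernel="thin_plate_spline")
            draws[i] = tps(grid_xy)
        return draws.mean(axis=0), draws.std(axis=0)          # elevation and 1-sigma spread

    # Example with synthetic scattered elevations evaluated on a coarse grid.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 1000, size=(300, 2))
    z = 50 + 0.01 * xy[:, 0] + rng.normal(0, 0.5, size=300)
    sigma = np.full(300, 0.5)
    gx, gy = np.meshgrid(np.arange(0, 1000, 25.0), np.arange(0, 1000, 25.0))
    grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
    dem, spread = dem_with_uncertainty(xy, z, sigma, grid_xy)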

  3. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    2003-01-01

    The seasonal variability of the water masses and transport in the Malta Channel and the proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle. The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation; and a westward winter-intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge of the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by internal dynamics, to be followed in

  4. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    The seasonal variability of the water masses and transport in the Malta Channel and the proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle.

    The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation; and a westward winter-intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge of the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by

  5. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of the sensitivity of the MM5 mesoscale model to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of the atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were carried out applying the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially conflictive subregions where precipitation is scarcely captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very

  6. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Science.gov (United States)

    Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Olusegun, Christiana; Klein, Cornelia; Hamann, Ilse; Salack, Seyni; Bliefernicht, Jan; Kunstmann, Harald

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ), with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980-2010 and the two future periods 2020-2050 and 2070-2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess the climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent in the north and

  7. PDF added value of a high resolution climate simulation for precipitation

    Science.gov (United States)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high resolution simulation with the WRF model. Results come from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km. This simulation is forced by ERA-Interim. The observational data used range from rain gauge precipitation records to regular observational grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese gridded precipitation dataset developed at 0.2° x 0.2° from observed rain gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher resolution simulation, regarding both the full PDF and the extremes. The method shows potential to be applied to other simulation exercises and to evaluate other variables.
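    The authors' exact score is not given in the abstract; the following is a minimal Python sketch of one widely used PDF skill score (the Perkins-type overlap score), which sums the shared area of the modelled and observed precipitation histograms. The tail weighting mentioned above would be an extra step, and the gamma-distributed samples are stand-ins for real daily precipitation.

    import numpy as np

    def pdf_overlap_score(model, obs, bins=50):
        """Perkins-type skill score: shared area of two empirical PDFs (1 = identical)."""
        edges = np.histogram_bin_edges(np.concatenate([model, obs]), bins=bins)
        pm, _ = np.histogram(model, bins=edges)
        po, _ = np.histogram(obs, bins=edges)
        pm = pm / pm.sum()          # normalize counts to probabilities per bin
        po = po / po.sum()
        return np.minimum(pm, po).sum()

    # Example: daily precipitation from a hypothetical 9 km run vs. gridded observations.
    rng = np.random.default_rng(0)
    obs = rng.gamma(shape=0.6, scale=5.0, size=5000)
    mod = rng.gamma(shape=0.7, scale=4.5, size=5000)
    print(pdf_overlap_score(mod, obs))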

  8. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  9. Identifying added value in high-resolution climate simulations over Scandinavia

    DEFF Research Database (Denmark)

    Mayer, Stephania; Fox Maule, Cathrine; Sobolowski, Stefan

    2015-01-01

    High-resolution data are needed in order to assess potential impacts of extreme events on infrastructure in the mid-latitudes. Dynamical downscaling offers one way to obtain this information. However, prior to implementation in any impacts assessment scheme, model output must be validated and det...

  10. Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita

    Science.gov (United States)

    Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.

    2009-12-01

    Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions

  11. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, e.g., global soil carbon stocks. However, they do not allow for simulation runs with, e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using, e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: A model of soil formation is developed that focuses on the basic conceptual question of where we are within the range of a particular soil property

  12. Towards Forming a Primordial Protostar in a Cosmological AMR Simulation

    Science.gov (United States)

    Turk, Matthew J.; Abel, Tom; O'Shea, Brian W.

    2008-03-01

    Modeling the formation of the first stars in the universe is a well-posed problem and ideally suited for computational investigation. We have conducted high-resolution numerical studies of the formation of primordial stars. Beginning with primordial initial conditions appropriate for a ΛCDM model, we used the Eulerian adaptive mesh refinement code Enzo to achieve unprecedented numerical resolution, resolving cosmological scales as well as sub-stellar scales simultaneously. Building on the work of Abel, Bryan and Norman (2002), we followed the evolution of the first collapsing cloud until molecular hydrogen is optically thick to cooling radiation. In addition, the calculations account for the process of collision-induced emission (CIE) and add approximations to the optical depth in both molecular hydrogen roto-vibrational cooling and CIE. Also considered are the effects of chemical heating/cooling from the formation/destruction of molecular hydrogen. We present the results of these simulations, showing the formation of a 10 Jupiter-mass protostellar core bounded by a strongly aspherical accretion shock. Accretion rates are found to be as high as one solar mass per year.

  13. Compactified cosmological simulations of the infinite universe

    Science.gov (United States)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-06-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range, that it is balanced in following a comparable number of high and low k modes, and that its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically projected cosmological simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
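    This is not the StePS code; it is a minimal direct-summation sketch in Python of the O(N^2) Newtonian force calculation mentioned above, with Plummer softening, illustrating why the pairwise loop maps so easily onto GPUs (each particle's sum is independent of the others).

    import numpy as np

    def direct_accelerations(pos, mass, softening=0.05, G=1.0):
        """O(N^2) pairwise Newtonian accelerations with Plummer softening."""
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            d = pos - pos[i]                                  # vectors to all other particles
            r2 = (d ** 2).sum(axis=1) + softening ** 2
            inv_r3 = r2 ** -1.5
            inv_r3[i] = 0.0                                   # no self-force
            acc[i] = G * (d * (mass * inv_r3)[:, None]).sum(axis=0)
        return acc

    # Example with a small random particle set.
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, size=(1000, 3))
    mass = np.full(1000, 1.0 / 1000)
    a = direct_accelerations(pos, mass)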

  14. Compactified Cosmological Simulations of the Infinite Universe

    Science.gov (United States)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-03-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range, that it is balanced in following a comparable number of high and low k modes, and that its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically Projected Cosmological Simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.

  15. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.
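    The paper's own machinery is not reproduced here; the following is a minimal Python sketch, under an assumed flat LambdaCDM background, of the bookkeeping described above: convert survey redshifts to light-travel times and bin them to obtain a crude galaxy-count curve over time. A real test would also model the obstruction by foreground galaxies, and the uniform redshifts are stand-ins for the zCosmos sample.

    import numpy as np
    from scipy.integrate import quad

    H0 = 70.0 / 3.086e19            # Hubble constant in 1/s (assumed 70 km/s/Mpc)
    OM, OL = 0.3, 0.7               # assumed matter / dark-energy densities
    GYR = 3.156e16                  # seconds per gigayear

    def light_travel_time(z):
        """Lookback time in Gyr for a flat LambdaCDM universe."""
        integrand = lambda zp: 1.0 / ((1 + zp) * np.sqrt(OM * (1 + zp) ** 3 + OL))
        t, _ = quad(integrand, 0.0, z)
        return t / H0 / GYR

    # Hypothetical survey redshifts standing in for the real sample.
    rng = np.random.default_rng(0)
    redshifts = rng.uniform(0.1, 1.2, size=20000)
    times = np.array([light_travel_time(z) for z in redshifts])

    # Slice into small buckets of light-travel time and count galaxies per bucket.
    edges = np.linspace(times.min(), times.max(), 25)
    counts, _ = np.histogram(times, bins=edges)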

  16. Principles and simulations of high-resolution STM imaging with a flexible tip apex

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Ondřej; Hapala, Prokop; Ondráček, Martin; Jelínek, Pavel

    2017-01-01

    Roč. 95, č. 4 (2017), 1-9, č. článku 045407. ISSN 2469-9950 R&D Projects: GA ČR(CZ) GC14-16963J Institutional support: RVO:68378271 Keywords : STM * AFM * high-resolution Subject RIV: BM - Solid Matter Physics ; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 3.836, year: 2016

  17. Changes in Moisture Flux Over the Tibetan Plateau During 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-01

    Net precipitation (precipitation minus evapotranspiration, P-E) changes from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. Improvement in simulating precipitation changes at high elevations contributes dominantly to the improved P-E changes. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.

  18. Local-scale high-resolution atmospheric dispersion model using large-eddy simulation. LOHDIM-LES

    International Nuclear Information System (INIS)

    Nakayama, Hiromasa; Nagai, Haruyasu

    2016-03-01

    We developed the LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation (LOHDIM-LES). This dispersion model is based on LES, which is effective for reproducing the unsteady behaviors of turbulent flows and plume dispersion. The basic equations are the continuity equation, the Navier-Stokes equations, and the scalar conservation equation. Buildings and local terrain variability are resolved by high-resolution grids with spacings of a few meters, and their turbulent effects are represented by an immersed boundary method. In simulating atmospheric turbulence, boundary layer flows are generated by a recycling turbulent inflow technique in a driver region set up upstream of the main analysis region. These turbulent inflow data are imposed at the inlet of the main analysis region. By this approach, LOHDIM-LES can provide detailed information on wind velocities and plume concentration in the investigated area. (author)

  19. Origin of chemically distinct discs in the Auriga cosmological simulations

    Science.gov (United States)

    Grand, Robert J. J.; Bustamante, Sebastián; Gómez, Facundo A.; Kawata, Daisuke; Marinacci, Federico; Pakmor, Rüdiger; Rix, Hans-Walter; Simpson, Christine M.; Sparre, Martin; Springel, Volker

    2018-03-01

    The stellar disc of the Milky Way shows complex spatial and abundance structure that is central to understanding the key physical mechanisms responsible for shaping our Galaxy. In this study, we use six very high resolution cosmological zoom-in simulations of Milky Way-sized haloes to study the prevalence and formation of chemically distinct disc components. We find that our simulations develop a clearly bimodal distribution in the [α/Fe]-[Fe/H] plane. We find two main pathways to creating this dichotomy, which operate in different regions of the galaxies: (a) an early (z > 1) and intense high-[α/Fe] star formation phase in the inner region (R ≲ 5 kpc) induced by gas-rich mergers, followed by more quiescent low-[α/Fe] star formation; and (b) an early phase of high-[α/Fe] star formation in the outer disc followed by a shrinking of the gas disc owing to a temporarily lowered gas accretion rate, after which disc growth resumes. In process (b), a double-peaked star formation history around the time and radius of disc shrinking accentuates the dichotomy. If the early star formation phase is prolonged (rather than short and intense), chemical evolution proceeds as per process (a) in the inner region, but the dichotomy is less clear. In the outer region, the dichotomy is only evident if the first intense phase of star formation covers a large enough radial range before disc shrinking occurs; otherwise, the outer disc consists of only low-[α/Fe] sequence stars. We discuss the implication that both processes occurred in the Milky Way.

  20. Simulation of high-resolution MFM tip using exchange-spring magnet

    Energy Technology Data Exchange (ETDEWEB)

    Saito, H. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan)]. E-mail: hsaito@ipc.akita-u.ac.jp; Yatsuyanagi, D. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ishio, S. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ito, A. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Kawamura, H. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Ise, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Taguchi, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Takahashi, S. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan)

    2007-03-15

    The transfer function of magnetic force microscope (MFM) tips using an exchange-spring trilayer composed of a central soft magnetic layer and two hard magnetic layers was calculated, and the resolution was estimated by considering the thermodynamic noise limit of an MFM cantilever. It was found that reducing the thickness of the central soft magnetic layer and the magnetization of the hard magnetic layers is important for obtaining high resolution. Tips using an exchange-spring trilayer with a very thin FeCo layer and isotropic hard magnetic layers, such as CoPt and FePt, are found to be suitable for obtaining a resolution of less than 10 nm at room temperature.

  1. Simulation and Prediction of Weather Radar Clutter Using a Wave Propagator on High Resolution NWP Data

    DEFF Research Database (Denmark)

    Benzon, Hans-Henrik; Bovith, Thomas

    2008-01-01

    Weather radars are essential sensors for observation of precipitation in the troposphere and play a major part in weather forecasting and hydrological modelling. Clutter caused by non-standard wave propagation is a common problem in weather radar applications, and in this paper a method for prediction of this type of weather radar clutter is presented. The method uses a wave propagator to identify areas of potential non-standard propagation. The wave propagator uses a three dimensional refractivity field derived from the geophysical parameters: temperature, humidity, and pressure obtained from a high-resolution Numerical Weather Prediction (NWP) model. The wave propagator is based on the parabolic equation approximation to the electromagnetic wave equation. The parabolic equation is solved using the well-known Fourier split-step method. Finally, the radar clutter prediction technique is used...
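
    As an illustration of the Fourier split-step solution of the parabolic equation mentioned above, a minimal narrow-angle sketch follows: one refractivity phase screen per range step, no surface or absorbing-layer boundary treatment. The function name and interface are assumptions for the example, not the authors' propagator.

```python
import numpy as np

def split_step_pe(u0, n_profile, k0, dx, dz, nsteps):
    """March the narrow-angle parabolic equation with the Fourier split-step method.

    u0        : complex field u(z) at the initial range (1D array)
    n_profile : refractive index n(z) on the same vertical grid
    k0        : free-space wavenumber; dx, dz: range and height steps
    """
    nz = u0.size
    kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=dz)            # vertical wavenumbers
    diffraction = np.exp(-1j * kz**2 * dx / (2.0 * k0))     # free-space half of the operator
    refraction = np.exp(1j * k0 * (n_profile - 1.0) * dx)   # environment phase screen
    u = u0.astype(complex)
    fields = [u.copy()]
    for _ in range(nsteps):
        u = np.fft.ifft(diffraction * np.fft.fft(u))        # propagate one range step spectrally
        u = refraction * u                                  # apply refractivity phase screen
        fields.append(u.copy())
    return np.array(fields)                                 # field versus range and height
```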

  2. Cosmology

    International Nuclear Information System (INIS)

    Novikov, I.D.

    1979-01-01

    Progress made by this Commission over the period 1976-1978 is reviewed. Topics include the Hubble constant, deceleration parameter, large-scale distribution of matter in the universe, radio astronomy and cosmology, space astronomy and cosmology, formation of galaxies, physics near the cosmological singularity, and unconventional cosmological models. (C.F.)

  3. An efficient non-hydrostatic dynamical core for high-resolution simulations down to the urban scale

    International Nuclear Information System (INIS)

    Bonaventura, L.; Cesari, D.

    2005-01-01

    Numerical simulations of idealized stratified flows over obstacles at different spatial scales demonstrate the very general applicability and the parallel efficiency of a new non-hydrostatic dynamical core for the simulation of mesoscale flows over complex terrain.

  4. Appending High-Resolution Elevation Data to GPS Speed Traces for Vehicle Energy Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wood, E.; Burton, E.; Duran, A.; Gonder, J.

    2014-06-01

    Accurate and reliable global positioning system (GPS)-based vehicle use data are highly valuable for many transportation, analysis, and automotive considerations. Model-based design, real-world fuel economy analysis, and the growing field of autonomous and connected technologies (including predictive powertrain control and self-driving cars) all have a vested interest in high-fidelity estimation of powertrain loads and vehicle usage profiles. Unfortunately, road grade can be a difficult property to extract from GPS data with consistency. In this report, we present a methodology for appending high-resolution elevation data to GPS speed traces via a static digital elevation model. Anomalous data points in the digital elevation model are addressed during a filtration/smoothing routine, resulting in an elevation profile that can be used to calculate road grade. This process is evaluated against a large, commercially available height/slope dataset from the Navteq/Nokia/HERE Advanced Driver Assistance Systems product. Results will show good agreement with the Advanced Driver Assistance Systems data in the ability to estimate road grade between any two consecutive points in the contiguous United States.
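
    To make the final step described above concrete, here is a minimal sketch of computing percent road grade between consecutive GPS points from appended (and lightly smoothed) elevations. The function names and the simple moving-average filter are illustrative assumptions; the report's actual filtration/smoothing routine is more elaborate.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between consecutive GPS fixes."""
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def road_grade(lat, lon, elev_m, window=5):
    """Percent grade between consecutive points, using a simple moving-average
    filter on the appended DEM elevations to suppress anomalous samples
    (edge effects of the filter are ignored in this sketch)."""
    elev = np.convolve(elev_m, np.ones(window) / window, mode="same")
    dist = haversine_m(lat[:-1], lon[:-1], lat[1:], lon[1:])
    return np.where(dist > 0, (elev[1:] - elev[:-1]) / dist * 100.0, 0.0)
```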

  5. Hygromorphic characterization of softwood under high resolution X-ray tomography for hygrothermal simulation

    Science.gov (United States)

    El Hachem, Chady; Abahri, Kamilia; Vicente, Jérôme; Bennacer, Rachid; Belarbi, Rafik

    2018-03-01

    Because of their complex hygromorphic shape, the microstructural behavior of wooden materials has recently become a point of interest for researchers. The first part of this study characterizes the microstructural properties of spruce wood by high-resolution X-ray tomography. In the second part, the resulting geometrical parameters are incorporated into an evaluation of hygrothermal transfer in wood. To do so, three-dimensional (3D) volume reconstructions with a voxel size of 0.5 μm were produced. Post-treatment of the corresponding volumes gave access to the means and standard deviations of lumen diameters and cell-wall thicknesses, determined for both earlywood and latewood. Further, a segmentation approach for individualizing wood lumens was developed, which is an important step toward understanding localized physical properties. In this context, 3D heat and mass transfer simulations within the real reconstructed geometries were carried out to highlight the effect of wood directions on the equivalent conductivity and moisture diffusion coefficients. The results confirm that the softwood cellular structure has a critical impact on the reliability of the studied physical parameters.

  6. COINCIDENCES BETWEEN O VI AND O VII LINES: INSIGHTS FROM HIGH-RESOLUTION SIMULATIONS OF THE WARM-HOT INTERGALACTIC MEDIUM

    International Nuclear Information System (INIS)

    Cen Renyue

    2012-01-01

    With high-resolution (0.46 h^-1 kpc), large-scale, adaptive mesh-refinement Eulerian cosmological hydrodynamic simulations we compute properties of O VI and O VII absorbers from the warm-hot intergalactic medium (WHIM) at z = 0. Our new simulations are in broad agreement with previous simulations, with ∼40% of the intergalactic medium being in the WHIM. Our simulations are in agreement with observed properties of O VI absorbers with respect to the line incidence rate and Doppler-width-column-density relation. It is found that the amount of gas in the WHIM below and above 10^6 K is roughly equal. Strong O VI absorbers are found to be predominantly collisionally ionized. It is found that (61%, 57%, 39%) of O VI absorbers with log N(O VI) (cm^-2) = (12.5-13, 13-14, >14) have T < 10^5 K. Cross correlations between galaxies and strong [N(O VI) > 10^14 cm^-2] O VI absorbers on ∼100-300 kpc scales are suggested as a potential differentiator between collisional ionization and photoionization models. A quantitative prediction is made for the presence of broad and shallow O VI lines that are largely missed by current observations but will be detectable by Cosmic Origins Spectrograph observations. The reported 3σ upper limit on the mean column density of coincidental O VII lines at the location of detected O VI lines by Yao et al. is above our predicted value by a factor of 2.5-4. The claimed observational detection of O VII lines by Nicastro et al., if true, is 2σ above what our simulations predict.

  7. Monte-Carlo simulation of a high-resolution inverse geometry spectrometer on the SNS. Long Wavelength Target Station

    International Nuclear Information System (INIS)

    Bordallo, H.N.; Herwig, K.W.

    2001-01-01

    Using the Monte-Carlo simulation program McStas, we present the design principles of the proposed high-resolution inverse geometry spectrometer on the SNS-Long Wavelength Target Station (LWTS). The LWTS will provide the high flux of long wavelength neutrons at the requisite pulse rate required by the spectrometer design. The resolution of this spectrometer lies between that routinely achieved by spin echo techniques and the design goal of the high power target station backscattering spectrometer. Covering this niche in energy resolution will allow systematic studies over the large dynamic range required by many disciplines, such as protein dynamics. (author)

  8. Very high resolution regional climate model simulations over Greenland: Identifying added value

    DEFF Research Database (Denmark)

    Lucas-Picher, P.; Wulff-Nielsen, M.; Christensen, J.H.

    2012-01-01

    This study presents two simulations of the climate over Greenland with the regional climate model (RCM) HIRHAM5 at 0.05° and 0.25° resolution, driven at the lateral boundaries by the ERA-Interim reanalysis for the period 1989–2009. These simulations are validated against observations from … models. However, the bias between the simulations and the few available observations does not reduce with higher resolution. This is partly explained by the lack of observations in regions where the higher resolution is expected to improve the simulated climate. The RCM simulations show that the temperature has increased the most in the northern part of Greenland and at lower elevations over the period 1989–2009. Higher resolution increases the relief variability in the model topography and causes the simulated precipitation to be larger on the coast and smaller over the main ice sheet compared...

  9. Initialization of high resolution surface wind simulations using NWS gridded data

    Science.gov (United States)

    J. Forthofer; K. Shannon; Bret Butler

    2010-01-01

    WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady-state. It is currently being modified to allow the user to initialize the flow calculation using the National Digital Forecast Database. It essentially allows the user to downscale the coarse-scale simulations from mesoscale models to...

  10. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    Science.gov (United States)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a 0.45 M⊙ helium (He) white dwarf (WD) by an intermediate-mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonations reported in previous studies resulted from unphysical heating due to low resolution, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, as well as 1D mesh simulations whose initial conditions are taken from the flow structure of the 3D SPH simulations. The 1D mesh simulations have higher resolution than the 3D SPH simulations. We show that tidal detonation occurs and confirm that this result is converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves arise independently in the leading parts of the WD and yield large amounts of 56Ni. Although detonation waves are not generated in the trailing parts of the WD, the trailing parts would receive the detonation waves generated in the leading parts and would produce large amounts of Si-group elements. Eventually, this He WD TDE would synthesize 0.30 M⊙ of 56Ni and 0.08 M⊙ of Si-group elements, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  11. GLOBAL HIGH-RESOLUTION N-BODY SIMULATION OF PLANET FORMATION. I. PLANETESIMAL-DRIVEN MIGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Kominami, J. D. [Earth-Life Science Institute, Tokyo Institute of Technology, Meguro-Ku, Tokyo (Japan); Daisaka, H. [Hitotsubashi University, Kunitachi-shi, Tokyo (Japan); Makino, J. [RIKEN Advanced Institute for Computational Science, Chuo-ku, Kobe, Hyogo (Japan); Fujimoto, M., E-mail: kominami@mail.jmlab.jp, E-mail: daisaka@phys.science.hit-u.ac.jp, E-mail: makino@mail.jmlab.jp, E-mail: fujimoto.masaki@jaxa.jp [Japan Aerospace Exploration Agency, Sagamihara-shi, Kanagawa (Japan)

    2016-03-01

    We investigated whether or not outward planetesimal-driven migration (PDM) takes place in simulations when the self-gravity of planetesimals is included. We performed N-body simulations of planetesimal disks with a large width (0.7–4 au) that spans the ice line. The simulations consisted of two stages. The first-stage simulations followed the runaway growth phase, starting from planetesimals of initially equal mass. Runaway growth took place both at the inner edge of the disk and in the region just outside the ice line. This result was used to set up the second-stage simulations, in which the runaway bodies just outside the ice line were replaced by protoplanets of roughly the isolation mass. In the second-stage simulations, the protoplanet initially migrated outward, but the migration stopped owing to the increase in the random velocities of the planetesimals. Because of this increase in random velocities, one of the PDM criteria derived by Minton and Levison was violated. In the current simulations, the effect of the gas disk is not considered. It is likely that the gas disk plays an important role in PDM, and we plan to study its effect in future papers.

  12. Star Formation History of Dwarf Galaxies in Cosmological Hydrodynamic Simulations

    Directory of Open Access Journals (Sweden)

    Kentaro Nagamine

    2010-01-01

    We examine the past and current work on the star formation (SF) histories of dwarf galaxies in cosmological hydrodynamic simulations. The results obtained from different numerical methods are still somewhat mixed, but the differences are understandable if we consider the numerical and resolution effects. It remains a challenge to simulate the episodic nature of SF history in dwarf galaxies at late times within the cosmological context of a cold dark matter model. More work is needed to solve the mysteries of SF history of dwarf galaxies employing large-scale hydrodynamic simulations on the next generation of supercomputers.

  13. Comparison of Explicitly Simulated and Downscaled Tropical Cyclone Activity in a High-Resolution Global Climate Model

    Directory of Open Access Journals (Sweden)

    Hirofumi Tomita

    2010-01-01

    The response of tropical cyclone activity to climate change is a matter of great inherent interest and practical importance. Most current global climate models are not, however, capable of adequately resolving tropical cyclones; this has led to the development of downscaling techniques designed to infer tropical cyclone activity from the large-scale fields produced by climate models. Here we compare the statistics of tropical cyclones simulated explicitly in a very high resolution (~14 km grid mesh) global climate model to the results of one such downscaling technique driven by the same global model. This is done for a simulation of the current climate and also for a simulation of a climate warmed by the addition of carbon dioxide. The explicitly simulated and downscaled storms are similarly distributed in space, but the intensity distribution of the downscaled events has a somewhat longer high-intensity tail, owing to the higher resolution of the downscaling model. Both explicitly simulated and downscaled events show large increases in the frequency of events at the high-intensity ends of their respective intensity distributions, but the downscaled storms also show increases in low-intensity events, whereas the explicitly simulated weaker events decline in number. On the regional scale, there are large differences in the responses of the explicitly simulated and downscaled events to global warming. In particular, the power dissipation of downscaled events shows a 175% increase in the Atlantic, while the power dissipation of explicitly simulated events declines there.
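
    For context, the power dissipation referred to above is commonly computed with the index introduced by Emanuel (2005); assuming that standard definition is the metric used here, it reads:

```latex
% Storm power dissipation in the standard (Emanuel 2005) form, assumed to be the
% metric referred to in the abstract above.
\begin{equation}
  \mathrm{PD} \;=\; 2\pi \int_{0}^{\tau}\!\!\int_{0}^{r_0}
      C_D\,\rho\,\lvert \mathbf{V} \rvert^{3}\, r\, \mathrm{d}r\, \mathrm{d}t
\end{equation}
```

    where C_D is the surface drag coefficient, ρ the near-surface air density, |V| the surface wind speed, r_0 the outer storm radius and τ the storm lifetime.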

  14. Evaluation of a high-resolution regional climate simulation over Greenland

    Energy Technology Data Exchange (ETDEWEB)

    Lefebre, Filip [Universite catholique de Louvain, Institut d' Astronomie et de Geophysique G. Lemaitre, Louvain-la-Neuve (Belgium); Vito - Flemish Institute for Technological Research, Integral Environmental Studies, Mol (Belgium); Fettweis, Xavier; Ypersele, Jean-Pascal van; Marbaix, Philippe [Universite catholique de Louvain, Institut d' Astronomie et de Geophysique G. Lemaitre, Louvain-la-Neuve (Belgium); Gallee, Hubert [Laboratoire de Glaciologie et de Geophysique de l' Environnement, Grenoble (France); Greuell, Wouter [Utrecht University, Institute for Marine and Atmospheric Research, Utrecht (Netherlands); Calanca, Pierluigi [Swiss Federal Research Station for Agroecology and Agriculture, Zurich (Switzerland)

    2005-07-01

    A simulation of the 1991 summer has been performed over south Greenland with a coupled atmosphere-snow regional climate model (RCM) forced by the ECMWF re-analysis. The simulation is evaluated with in-situ coastal and ice-sheet atmospheric and glaciological observations. Modelled air temperature, specific humidity, wind speed and radiative fluxes are in good agreement with the available observations, although uncertainties in the radiative transfer scheme need further investigation to improve the model's performance. In the sub-surface snow-ice model, surface albedo is calculated from the simulated snow grain shape and size, snow depth, meltwater accumulation, cloudiness and ice albedo. The use of snow metamorphism processes allows a realistic modelling of the temporal variations in the surface albedo during both melting periods and accumulation events. Concerning the surface albedo, the main finding is that an accurate albedo simulation during the melting season strongly depends on a proper initialization of the surface conditions which mainly result from winter accumulation processes. Furthermore, in a sensitivity experiment with a constant 0.8 albedo over the whole ice sheet, the average amount of melt decreased by more than 60%, which highlights the importance of a correctly simulated surface albedo. The use of this coupled atmosphere-snow RCM offers new perspectives in the study of the Greenland surface mass balance due to the represented feedback between the surface climate and the surface albedo, which is the most sensitive parameter in energy-balance-based ablation calculations. (orig.)

  15. Geant4 simulation of a 3D high resolution gamma camera

    International Nuclear Information System (INIS)

    Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.

    2015-01-01

    The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity relying on both distance/absorption and Compton scattering techniques and without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance using the full benefit of Geant4 features that allow the construction of the needed geometry of the detectors, have full control of the incident gamma particles and study the response of the detector in order to test the suggested geometries. Three different geometries are simulated and each configuration is tested with three different scintillation materials (LaBr3, LYSO and CeBr3)

  16. Surface Wind Regionalization over Complex Terrain: Evaluation and Analysis of a High-Resolution WRF Simulation

    NARCIS (Netherlands)

    Jiménez, P.A.; González-Rouco, J.F.; García-Bustamante, E.; Navarro, J.; Montávez, J.P.; Vilà-Guerau de Arellano, J.; Dudhia, J.; Muñoz-Roldan, A.

    2010-01-01

    This study analyzes the daily-mean surface wind variability over an area characterized by complex topography through comparing observations and a 2-km-spatial-resolution simulation performed with the Weather Research and Forecasting (WRF) model for the period 1992–2005. The evaluation focuses on the

  17. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985–2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km), as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present-climate simulations, a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  18. High resolution geodynamo simulations with strongly-driven convection and low viscosity

    Science.gov (United States)

    Schaeffer, Nathanael; Fournier, Alexandre; Jault, Dominique; Aubert, Julien

    2015-04-01

    Numerical simulations have been successful at explaining the magnetic field of the Earth for 20 years. However, the regime in which these simulations operate is in many respects very far from what is expected in the Earth's core. By reviewing previous work, we find that it appears difficult to have both low viscosity (low magnetic Prandtl number) and strong magnetic fields (a large ratio of magnetic to kinetic energy, a.k.a. the inverse squared Alfvén number) in numerical models. In order to better understand the dynamics and turbulence of the core, we have run a series of three simulations with increasingly demanding parameters. The last simulation is at the limit of what today's codes can do on current supercomputers, with a resolution of 2688 grid points in longitude, 1344 in latitude, and 1024 radial levels. We will show various features of these numerical simulations, including what appear to be trends when pushing the parameters toward those of the Earth. The dynamics is very rich. From short to long time scales, we observe at large scales: inertial waves, torsional Alfvén waves, columnar convective overturn dynamics and long-term thermal winds. In addition, the dynamics inside and outside the tangent cylinder seem to follow different routes. We find that the ohmic dissipation largely dominates the viscous one and that the magnetic energy dominates the kinetic energy. The magnetic field seems to play an ambiguous role: despite the large magnetic field, which has an important impact on the flow, we find that the force balance for the mean flow is a thermal wind balance, and that the scale of the convective cells is still dominated by viscous effects.
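
    For reference, the two dimensionless numbers invoked above can be written as follows (standard definitions; the paper's normalizations may differ in detail):

```latex
% Standard definitions of the magnetic Prandtl number and the Alfven number;
% the inverse squared Alfven number measures the magnetic-to-kinetic energy ratio.
\begin{equation}
  \mathrm{Pm} = \frac{\nu}{\eta}, \qquad
  A = \frac{U}{v_A}, \quad v_A = \frac{B}{\sqrt{\mu_0 \rho}}, \qquad
  A^{-2} = \frac{B^{2}/\mu_0}{\rho\,U^{2}} \sim \frac{E_\mathrm{mag}}{E_\mathrm{kin}}
\end{equation}
```

    Here ν is the kinematic viscosity, η the magnetic diffusivity, U and B typical velocity and magnetic field amplitudes, ρ the density and μ_0 the magnetic permeability.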

  19. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher dimensional flood modeling (2D+), as well as correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Similar to photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as Ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed in data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance. The ideal application of this high-resolution topography

  20. A high-resolution code for large eddy simulation of incompressible turbulent boundary layer flows

    KAUST Repository

    Cheng, Wan

    2014-03-01

    We describe a framework for large eddy simulation (LES) of incompressible turbulent boundary layers over a flat plate. This framework uses a fractional-step method with fourth-order finite difference on a staggered mesh. We present several laminar examples to establish the fourth-order accuracy and energy conservation property of the code. Furthermore, we implement a recycling method to generate turbulent inflow. We use the stretched spiral vortex subgrid-scale model and virtual wall model to simulate the turbulent boundary layer flow. We find that the case with Reθ ≈ 2.5 × 10^5 agrees well with available experimental measurements of wall friction, streamwise velocity profiles and turbulent intensities. We demonstrate that for cases with extremely large Reynolds numbers (Reθ = 10^12), the present LES can reasonably predict the flow with a coarse mesh. The parallel implementation of the LES code demonstrates reasonable scaling on O(10^3) cores. © 2013 Elsevier Ltd.

  1. Air-Sea Interaction Processes in Low and High-Resolution Coupled Climate Model Simulations for the Southeast Pacific

    Science.gov (United States)

    Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.

    2017-12-01

    The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients and extensive but geometrically-thin stratocumulus decks, turns the Southeast Pacific (SEP) into a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1° ocean and 0.5° atmosphere) and (1.125° ocean and 0.9° atmosphere). The SST error growth in the first six days of integration (fast errors) and that resulting from model drift (saturated errors) are assessed and compared towards evaluating the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3 °C) near the continental borders and negative offshore (-0.1 °C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and more spatially concentrated. This suggests that the processes driving the errors already become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one due to higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values of up to +4 °C. These are accompanied by a damping of the upwelling and a decrease in cloud cover. High and low resolution models presented notable differences in how SST

  2. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X.

    2011-01-01

    This work presents a computational process for converting tomographic and MRI medical images of patients into voxel structures and then into an input file to be used in a Monte Carlo simulation code for radiotherapy treatment of tumors. The patient-specific scenario is simulated by this process, using the volume element (voxel) as the unit of computational tracking. The voxel structure of the head has voxels with volumetric dimensions of around 1 mm^3 and a population of millions, which allows a realistic simulation and reduces the digital image processing needed for adjustments and equalizations. With these additional data in the code, a more critical analysis can be carried out to determine the volume of the tumor and the required protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed treatment planning. To execute this computational process, the SAPDI computational system is used for digital image processing and data optimization, the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel structures and input files, and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  3. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X., E-mail: lboia@con.ufrj.b, E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear; Salmon Junior, Helio A. [Clinicas Oncologicas Integradas (COI), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    This work presents a computational process for converting tomographic and MRI medical images of patients into voxel structures and then into an input file to be used in a Monte Carlo simulation code for radiotherapy treatment of tumors. The patient-specific scenario is simulated by this process, using the volume element (voxel) as the unit of computational tracking. The voxel structure of the head has voxels with volumetric dimensions of around 1 mm{sup 3} and a population of millions, which allows a realistic simulation and reduces the digital image processing needed for adjustments and equalizations. With these additional data in the code, a more critical analysis can be carried out to determine the volume of the tumor and the required protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed treatment planning. To execute this computational process, the SAPDI computational system is used for digital image processing and data optimization, the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel structures and input files, and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  4. Phase I and phase II reductive metabolism simulation of nitro aromatic xenobiotics with electrochemistry coupled with high resolution mass spectrometry.

    Science.gov (United States)

    Bussy, Ugo; Chung-Davidson, Yu-Wen; Li, Ke; Li, Weiming

    2014-11-01

    Electrochemistry combined with (liquid chromatography) high resolution mass spectrometry was used to simulate the general reductive metabolism of three biologically important nitro aromatic molecules: 3-trifluoromethyl-4-nitrophenol (TFM), niclosamide, and nilutamide. TFM is a pesticide used in the Laurentian Great Lakes, while niclosamide and nilutamide are used in cancer therapy. At first, a flow-through electrochemical cell was directly connected to a high resolution mass spectrometer to evaluate the ability of electrochemistry to produce the main reduction metabolites of nitro aromatics: the nitroso, hydroxylamine, and amine functional groups. Electrochemical experiments were then carried out at a constant potential of -2.5 V before analysis of the reduction products by LC-HRMS, which confirmed the presence of the nitroso, hydroxylamine, and amine species as well as dimers. Dimer identification illustrates the reactivity of the nitroso species with amine and hydroxylamine species. To investigate xenobiotic metabolism, the reactivity of the nitroso species toward biomolecules was also examined. Binding of the nitroso metabolite to glutathione was demonstrated by the observation of adducts by LC-ESI(+)-HRMS and the characteristics of their MS/MS fragmentation. In conclusion, electrochemistry produces the main reductive metabolites of nitro aromatics and supports the observation of nitroso reactivity through dimer or glutathione adduct formation.

  5. Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM

    Science.gov (United States)

    Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou

    2017-04-01

    The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on the provision of future climate change scenario data of high spatial and temporal resolution and quality for West Africa. Such information is urgently needed for impact studies in water resources and agriculture and for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO in CLimate Mode (COSMO-CLM). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS), including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics, including the West African Monsoon, within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.

  6. IMPLEMENTING THE DC MODE IN COSMOLOGICAL SIMULATIONS WITH SUPERCOMOVING VARIABLES

    International Nuclear Information System (INIS)

    Gnedin, Nickolay Y.; Kravtsov, Andrey V.; Rudd, Douglas H.

    2011-01-01

    As emphasized by previous studies, proper treatment of the density fluctuation on the fundamental scale of a cosmological simulation volume - the DC mode - is critical for accurate modeling of spatial correlations on scales ≳ 10% of the simulation box size. We provide further illustration of the effects of the DC mode on the abundance of halos in small boxes and show that it is straightforward to incorporate this mode in cosmological codes that use the 'supercomoving' variables. The equations governing evolution of dark matter and baryons recast with these variables are particularly simple and include the expansion factor, and hence the effect of the DC mode, explicitly only in the Poisson equation.
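
    For orientation, the Poisson equation in comoving coordinates is shown below in a standard form in which the expansion factor appears explicitly; the paper's supercomoving formulation rescales the variables but retains this structure, and the DC mode enters through the expansion factor used here. This is only a sketch of the standard relation, not the paper's exact equations.

```latex
% Comoving-coordinate Poisson equation for the peculiar potential (standard form);
% delta is the density contrast and a the expansion factor.
\begin{equation}
  \nabla_x^{2}\,\phi \;=\; \frac{3}{2}\,\Omega_m H_0^{2}\,\frac{\delta}{a}
\end{equation}
```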

  7. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    International Nuclear Information System (INIS)

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-01

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple-ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower-speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.

  8. Kinetic energy spectra, vertical resolution and dissipation in high-resolution atmospheric simulations.

    Science.gov (United States)

    Skamarock, W. C.

    2017-12-01

    We have performed week-long full-physics simulations with the MPAS global model at 15 km cell spacing using vertical mesh spacings of 800, 400, 200 and 100 meters in the mid-troposphere through the mid-stratosphere. We find that the horizontal kinetic energy spectra in the upper troposphere and stratosphere do not converge with increasing vertical resolution until we reach 200 meter level spacing. Examination of the solutions indicates that significant inertia-gravity waves are not vertically resolved at the lower vertical resolutions. Diagnostics from the simulations indicate that the primary kinetic energy dissipation results from the vertical mixing within the PBL parameterization and from the gravity-wave drag parameterization, with smaller but significant contributions from damping in the vertical transport scheme and from the horizontal filters in the dynamical core. Most of the kinetic energy dissipation in the free atmosphere occurs within breaking mid-latitude baroclinic waves. We will briefly review these results and their implications for atmospheric model configuration and for atmospheric dynamics, specifically that related to the dynamics associated with the mesoscale kinetic energy spectrum.

  9. ANALYZING AND VISUALIZING COSMOLOGICAL SIMULATIONS WITH ParaView

    International Nuclear Information System (INIS)

    Woodring, Jonathan; Ahrens, James; Heitmann, Katrin; Pope, Adrian; Fasel, Patricia; Hsu, Chung-Hsing; Habib, Salman

    2011-01-01

    The advent of large cosmological sky surveys - ushering in the era of precision cosmology - has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
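
    To make the friends-of-friends criterion concrete, here is a minimal serial sketch of the linking rule: particles closer than a linking length are "friends", and halos are the connected components of that friendship graph. ParaView's halo finder is a parallel implementation with spherical-overdensity extensions; the function below is only an illustration with assumed names.

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(positions, linking_length):
    """Label particles by FoF group using a KD-tree and union-find."""
    n = len(positions)
    parent = np.arange(n)

    def find(i):                      # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    tree = cKDTree(positions)
    for i, j in tree.query_pairs(r=linking_length):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj           # merge the two groups
    return np.array([find(i) for i in range(n)])

# usage sketch: halo "masses" as particle counts per group
# labels = friends_of_friends(pos, 0.2 * mean_interparticle_spacing)
# groups, counts = np.unique(labels, return_counts=True)
```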

  10. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    Science.gov (United States)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

    The fastest supercomputer in the world, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multi Program Multi Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).

  11. Changes in snow cover over China in the 21st century as simulated by a high resolution regional climate model

    International Nuclear Information System (INIS)

    Shi Ying; Gao Xuejie; Wu Jia; Giorgi, Filippo

    2011-01-01

    On the basis of the climate change simulations conducted using a high resolution regional climate model, the Abdus Salam International Centre for Theoretical Physics (ICTP) Regional Climate Model, RegCM3, at 25 km grid spacing, future changes in snow cover over China are analyzed. The simulations are carried out for the period 1951–2100 following the IPCC SRES A1B emission scenario. The results suggest good performance of the model in simulating the number of snow cover days and the snow cover depth, as well as the starting and ending dates of snow cover, for the present day (1981–2000). Their spatial distributions and amounts show fair consistency between the simulation and observation, although with some discrepancies. In general, decreases in the number of snow cover days and the snow cover depth, together with postponed snow starting dates and advanced snow ending dates, are simulated for the future, except in some places where the opposite appears. The most dramatic changes are found over the Tibetan Plateau among the three major snow cover areas in China: the Northeast, the Northwest and the Tibetan Plateau.

  12. VAST PLANES OF SATELLITES IN A HIGH-RESOLUTION SIMULATION OF THE LOCAL GROUP: COMPARISON TO ANDROMEDA

    International Nuclear Information System (INIS)

    Gillet, N.; Ocvirk, P.; Aubert, D.; Knebe, A.; Yepes, G.; Libeskind, N.; Gottlöber, S.; Hoffman, Y.

    2015-01-01

    We search for vast planes of satellites (VPoS) in a high-resolution simulation of the Local Group performed by the CLUES project, which improves significantly the resolution of previous similar studies. We use a simple method for detecting planar configurations of satellites, and validate it on the known plane of M31. We implement a range of prescriptions for modeling the satellite populations, roughly reproducing the variety of recipes used in the literature, and investigate the occurrence and properties of planar structures in these populations. The structure of the simulated satellite systems is strongly non-random and contains planes of satellites, predominantly co-rotating, with, in some cases, sizes comparable to the plane observed in M31 by Ibata et al. However, the latter is slightly richer in satellites, slightly thinner, and has stronger co-rotation, which makes it stand out as overall more exceptional than the simulated planes, when compared to a random population. Although the simulated planes we find are generally dominated by one real structure forming its backbone, they are also partly fortuitous and are thus not kinematically coherent structures as a whole. Provided that the simulated and observed planes of satellites are indeed of the same nature, our results suggest that the VPoS of M31 is not a coherent disk and that one-third to one-half of its satellites must have large proper motions perpendicular to the plane

  13. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher-resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case study approach. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events simulated, in addition to the integration time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulations and the multi-decadal simulations using NICAM are also reviewed.

  14. Air-sea exchange over Black Sea estimated from high resolution regional climate simulations

    Science.gov (United States)

    Velea, Liliana; Bojariu, Roxana; Cica, Roxana

    2013-04-01

    The Black Sea is an important factor influencing the climate of the bordering countries, showing cyclogenetic activity (Trigo et al, 1999) and influencing Mediterranean cyclones passing over it. As for other seas, standard observations of the atmosphere are limited in time and space, and available observation-based estimates of the air-sea exchange terms present quite large ranges of uncertainty. Reanalysis datasets (e.g. ERA produced by ECMWF) provide promising estimates of climatic characteristics when validated against the available climatic data (Schrum et al, 2001), but cannot reproduce some local features due to their relatively coarse horizontal resolution. Detailed and realistic information on smaller-scale processes is expected to be provided by regional climate models, owing to continuous improvements in physical parameterizations and numerical methods that allow simulations at high spatial resolution. The aim of this study is to assess the potential of three regional climate models in reproducing known climatological characteristics of air-sea exchange over the Black Sea, as well as to explore the added value of the models compared to the input (reanalysis) data. We employ results of long-term (1961-2000) simulations performed within the ENSEMBLES project (http://ensemblesrt3.dmi.dk/) using the models ETHZ-CLM, CNRM-ALADIN and METO-HadCM, for which the integration domain covers the whole area of interest. The analysis is performed for the entire basin for several variables entering the heat and water budget terms and available as direct output from the models, at seasonal and annual scales. A comparison with independent data (ERA-INTERIM) and findings from other studies (e.g. Schrum et al, 2001) is also presented. References: Schrum, C., Staneva, J., Stanev, E. and Ozsoy, E., 2001: Air-sea exchange in the Black Sea estimated from atmospheric analysis for the period 1979-1993, J. Marine Systems, 31, 3-19; Trigo, I. F., T. D. Davies, and G. R. Bigg (1999): Objective

  15. Development of numerical simulation technology for high resolution thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.

    2012-04-01

    A realistic simulation of two phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi dimensional analysis of thermal hydraulics in nuclear reactor components is further increasing with advanced design features, such as a direct vessel injection system, a gravity driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi dimensional, two phase flows in nuclear reactor components. The code was designed for use as a component scale code, and/or a three dimensional component, which can be coupled with a system code. This report presents an overview of the CUPID code development and preliminary assessment, mainly focusing on the numerical solution method and its verification and validation. It was shown that the CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach for the validation and improvement of the physical models is still needed

  16. Interactive desktop analysis of high resolution simulations: application to turbulent plume dynamics and current sheet formation

    International Nuclear Information System (INIS)

    Clyne, John; Mininni, Pablo; Norton, Alan; Rast, Mark

    2007-01-01

    The ever increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, have resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504^2 x 2048 points for the first and 1536^3 points for the second, to overwhelm the interactive capabilities of typically available analysis resources.

  17. Forecasting wildland fire behavior using high-resolution large-eddy simulations

    Science.gov (United States)

    Munoz-Esparza, D.; Kosovic, B.; Jimenez, P. A.; Anderson, A.; DeCastro, A.; Brown, B.

    2017-12-01

    Wildland fires are responsible for large socio-economic impacts. Fires affect the environment, damage structures, threaten lives, cause health issues, and involve large suppression costs. These impacts can be mitigated via accurate fire spread forecast to inform the incident management team. To this end, the state of Colorado is funding the development of the Colorado Fire Prediction System (CO-FPS). The system is based on the Weather Research and Forecasting (WRF) model enhanced with a fire behavior module (WRF-Fire). Realistic representation of wildland fire behavior requires explicit representation of small scale weather phenomena to properly account for coupled atmosphere-wildfire interactions. Moreover, transport and dispersion of biomass burning emissions from wildfires is controlled by turbulent processes in the atmospheric boundary layer, which are difficult to parameterize and typically lead to large errors when simplified source estimation and injection height methods are used. Therefore, we utilize turbulence-resolving large-eddy simulations at a resolution of 111 m to forecast fire spread and smoke distribution using a coupled atmosphere-wildfire model. This presentation will describe our improvements to the level-set based fire-spread algorithm in WRF-Fire and an evaluation of the operational system using 12 wildfire events that occurred in Colorado in 2016, as well as other historical fires. In addition, the benefits of explicit representation of turbulence for smoke transport and dispersion will be demonstrated.
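
    As an illustration of the level-set idea behind the fire-spread algorithm mentioned above (not the actual WRF-Fire implementation), the fire perimeter can be evolved as the zero contour of a field phi, with the burning region (phi < 0) expanding outward at a local rate of spread R. A minimal Godunov-upwind update, assuming periodic boundaries for simplicity:

```python
import numpy as np

def advance_fire_front(phi, rate, dx, dy, dt):
    """One explicit step of the level-set equation d(phi)/dt = -R |grad(phi)|,
    expanding the burning region (phi < 0) at the local spread rate R >= 0.
    Godunov upwinding keeps the outward front propagation stable; np.roll
    imposes periodic boundaries, which a real implementation would replace."""
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences in x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dy
    dpy = (np.roll(phi, -1, axis=1) - phi) / dy
    grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2 +
                   np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
    return phi - dt * rate * grad
```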

  18. Flooding Simulation of Extreme Event on Barnegat Bay by High-Resolution Two Dimensional Hydrodynamic Model

    Science.gov (United States)

    Wang, Y.; Ramaswamy, V.; Saleh, F.

    2017-12-01

    Barnegat Bay is located on the east coast of New Jersey, United States, and is separated from the Atlantic Ocean by the narrow Barnegat Peninsula, which acts as a barrier island. The bay is fed by several rivers which empty through small estuaries along the inner shore. In terms of vulnerability to flooding, the Barnegat Peninsula is under the influence of both coastal storm surge and riverine flooding. Barnegat Bay was hit by Hurricane Sandy, which caused flood damage with extensive cross-island flow along many streets perpendicular to the shoreline. The objective of this work is to identify and quantify the sources of flooding using a two-dimensional inland hydrodynamic model. The hydrodynamic model was forced by three observed coastal boundary conditions and one hydrologic boundary condition from the United States Geological Survey (USGS). The model reliability was evaluated with both the FEMA spatial flooding extent and USGS high water marks. The simulated flooding extent showed good agreement with the reanalysis spatial inundation extents. The results offered important perspectives on the flow of water into the bay and the velocity and depth of the inundated areas. Such information can enable emergency managers and decision makers to plan evacuations and deploy flood defenses.

  19. Impact of irrigations on simulated convective activity over Central Greece: A high resolution study

    Science.gov (United States)

    Kotsopoulos, S.; Tegoulias, I.; Pytharoulis, I.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    The aim of this research is to investigate the impact of irrigation on the characteristics of convective activity simulated by the non-hydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW, version 3.5.1) under different upper-air synoptic conditions in central Greece. To this end, 42 cases, equally distributed among the six most frequent upper-air synoptic conditions associated with convective activity in the region of interest, were utilized considering two different soil moisture scenarios. In the first scenario, the model was initialized with the surface soil moisture of the ECMWF analysis data, which usually does not take into account the modification of soil moisture due to agricultural activity in the area of interest. In the second scenario, the soil moisture in the upper soil layers of the study area was set to field capacity for the irrigated cropland. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. The model results indicate a strong dependence of the convective spatiotemporal characteristics on the soil moisture difference between the two scenarios. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  20. Coating Thickness Measurement of the Simulated TRISO-Coated Fuel Particles using an Image Plate and a High Resolution Scanner

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Kim, Yeon Ku; Jeong, Kyung Chai; Lee, Young Woo; Kim, Bong Goo; Eom, Sung Ho; Kim, Young Min; Yeo, Sung Hwan; Cho, Moon Sung

    2014-01-01

    In this study, the thickness of the coating layers of 196 coated particles was measured using an Image Plate detector, a high-resolution scanner and digital image processing techniques. The experimental results are as follows. - An X-ray image was acquired for 196 simulated TRISO-coated fuel particles with ZrO2 kernels using a high-resolution Image Plate in a reduced amount of time. - We could observe clear boundaries between the coating layers for all 196 particles. - The geometric distortion error was compensated for in the calculation. - The coating thickness of TRISO-coated fuel particles can be nondestructively measured using X-ray radiography and digital image processing technology. - We can increase the number of TRISO-coated particles to be inspected by increasing the number of Image Plate detectors. A TRISO-coated fuel particle for an HTGR (high temperature gas-cooled reactor) is composed of a nuclear fuel kernel and outer coating layers. The coating layers consist of a buffer PyC (pyrolytic carbon), inner PyC (I-PyC), SiC, and outer PyC (O-PyC) layer. The coating thickness is measured to evaluate the soundness of the coating layers. X-ray radiography is one of the nondestructive alternatives for measuring the coating thickness without generating radioactive waste. Several billion particles are to be loaded in a reactor, so as many sample particles as possible should be tested. X-ray images previously acquired for the measurement of coating thickness included only a small number of particles because of the restricted resolution and size of the X-ray detector. We therefore tried to test many particles per X-ray exposure to reduce the measurement time. In this experiment, an X-ray image was acquired for 196 simulated TRISO-coated fuel particles using an image plate and a high-resolution scanner with a pixel size of 25 × 25 μm². The coating thickness for the particles could be measured on the image
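
    One plausible way to turn such an X-ray image into layer thicknesses, sketched below under the assumption of roughly concentric layers, is to sample a radial intensity profile around each particle centre and take the strongest gradients as layer interfaces; the function names, the number of layers and the pixel-size handling are illustrative, not the authors' exact pipeline.

      # Hedged sketch: estimate coating-layer boundaries from an X-ray image by sampling
      # a radial intensity profile around the particle centre and locating strong
      # gradients. Not the authors' exact processing chain.
      import numpy as np

      def radial_profile(image, cx, cy, r_max, n_theta=360):
          """Mean intensity as a function of radius (in pixels) around (cx, cy)."""
          radii = np.arange(1, r_max)
          theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
          prof = np.empty(radii.size)
          for i, r in enumerate(radii):
              xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
              ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
              prof[i] = image[ys, xs].mean()
          return radii, prof

      def layer_boundaries(radii, prof, n_layers=4, pixel_size_um=25.0):
          """Pick the strongest intensity gradients as interfaces; return boundary radii in micrometres."""
          grad = np.abs(np.gradient(prof))
          idx = np.sort(np.argsort(grad)[-n_layers:])
          return radii[idx] * pixel_size_um
          # layer thicknesses then follow from successive differences (np.diff) of these radii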

  1. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL; McManamay, Ryan A [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; Surendran Nair, Sujithkumar [ORNL

    2016-01-01

    Abstract As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification Acknowledgment This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
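
    A toy version of the Monte-Carlo idea, with entirely hypothetical distributions and cell counts rather than the report's calibrated inputs, shows how sampling uncertain per-household rates propagates input uncertainty into demand intervals per grid cell.

      # Minimal Monte-Carlo sketch of spatially explicit demand estimation under uncertain
      # inputs (all distributions and numbers are illustrative placeholders).
      import numpy as np

      rng = np.random.default_rng(42)
      n_draws = 5000
      households_per_cell = np.array([120, 85, 40, 310])          # hypothetical grid cells
      kwh_per_household = rng.normal(900.0, 120.0, (n_draws, 1))  # monthly use, uncertain
      occupancy = rng.uniform(0.85, 1.0, (n_draws, households_per_cell.size))

      demand = kwh_per_household * occupancy * households_per_cell  # kWh per cell per draw
      mean = demand.mean(axis=0)
      lo, hi = np.percentile(demand, [5, 95], axis=0)
      for cell, (m, a, b) in enumerate(zip(mean, lo, hi)):
          print(f"cell {cell}: {m:,.0f} kWh (90% interval {a:,.0f}-{b:,.0f})")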

  2. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations

    Energy Technology Data Exchange (ETDEWEB)

    Di Luca, Alejandro; Laprise, Rene [Universite du Quebec a Montreal (UQAM), Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Departement des Sciences de la Terre et de l' Atmosphere, PK-6530, Succ. Centre-ville, B.P. 8888, Montreal, QC (Canada); De Elia, Ramon [Universite du Quebec a Montreal, Ouranos Consortium, Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal (Canada)

    2012-03-15

    Regional Climate Models (RCMs) constitute the most often used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climate statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that climate statistics derived from the precipitation field must satisfy for the RCM technique to generate added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of the fine-scale precipitation variance required to adequately describe a given climate statistic are then used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season than in the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing an evaluation of the sensitivity to changing resolution in the real climate system. The results show that RCMs tend to reproduce the PAV relatively well compared to observations, although they overestimate it in the warm season and in mountainous regions. (orig.)
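
    A rough diagnostic in the spirit of the PAV concept, sketched below under the assumption that the coarse grid spacing is an integer multiple of the fine one, is the fraction of precipitation variance left after removing the block-averaged (coarse-grid) field; evaluating it on 3-hourly versus 16-day averaged fields would reproduce the temporal-scale dependence discussed above.

      # Hedged sketch of a potential-added-value style diagnostic: the share of
      # precipitation variance carried by scales finer than the driving-model grid.
      import numpy as np

      def fine_scale_variance_fraction(precip, factor):
          """precip: 2D RCM field; factor: ratio of coarse to fine grid spacing."""
          ny, nx = (s // factor for s in precip.shape)
          trimmed = precip[:ny * factor, :nx * factor]
          coarse = trimmed.reshape(ny, factor, nx, factor).mean(axis=(1, 3))
          upsampled = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
          residual = trimmed - upsampled          # fine-scale component only
          return residual.var() / trimmed.var()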

  3. High resolution crop growth simulation for identification of potential adaptation strategies under climate change

    Science.gov (United States)

    Kim, K. S.; Yoo, B. H.

    2016-12-01

    Impact assessment of climate change on crop production would facilitate planning of adaptation strategies. Because socio-environmental conditions differ among local areas, it would be advantageous to assess potential adaptation measures for a specific area. The objectives of this study were to develop a crop growth simulation system at a very high spatial resolution, e.g., 30 m, and to assess different adaptation options including shifts of planting date and use of different cultivars. The Decision Support System for Agrotechnology Transfer (DSSAT) model was used to predict yields of soybean and maize in Korea. Gridded data for climate and soil were used to prepare input data for the DSSAT model. Weather input data were prepared at a resolution of 30 m using bilinear interpolation from gridded climate scenario data, as sketched below. Those climate data were obtained from the Korean Meteorology Administration. The spatial resolution of temperature and precipitation was 1 km, whereas that of solar radiation was 12.5 km. Soil series data at the 30 m resolution were obtained from the soil database operated by the Rural Development Administration, Korea. The SOL file, which is the soil input file for the DSSAT model, was prepared using physical and chemical properties of a given soil series, which were available from the soil database. Crop yields were predicted for potential adaptation options based on planting date and cultivar. For example, 10 planting dates and three cultivars were used to identify ideal management options for climate change adaptation. In the prediction of maize yield, a combination of 20 planting dates and two cultivars was used as management options. Predicted crop yields differed by site even within a relatively small region. For example, the maximum of the average yields for the 2001-2010 seasons differed by site within a county whose area is 520 km² (Fig. 1). There was also spatial variation in the ideal management option in the region (Fig. 2). These results suggested that local
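
    The bilinear interpolation step can be sketched as follows; the grids and temperature values are placeholders, and the real system would read the gridded scenario files instead.

      # Illustrative bilinear interpolation of a coarse gridded weather variable onto a
      # fine (30 m) crop-model grid; all grids and values are placeholders.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      coarse_y = np.arange(0.0, 10000.0, 1000.0)          # 1 km grid (metres)
      coarse_x = np.arange(0.0, 10000.0, 1000.0)
      tmax = 20.0 + np.random.default_rng(1).normal(0.0, 1.0, (coarse_y.size, coarse_x.size))

      interp = RegularGridInterpolator((coarse_y, coarse_x), tmax, method="linear")

      fine_y, fine_x = np.meshgrid(np.arange(0.0, 9000.0, 30.0),
                                   np.arange(0.0, 9000.0, 30.0), indexing="ij")
      tmax_fine = interp(np.column_stack([fine_y.ravel(), fine_x.ravel()])).reshape(fine_y.shape)
      print(tmax_fine.shape)   # (300, 300): 30 m cells for the crop-model input files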

  4. GALAXY CLUSTER RADIO RELICS IN ADAPTIVE MESH REFINEMENT COSMOLOGICAL SIMULATIONS: RELIC PROPERTIES AND SCALING RELATIONSHIPS

    International Nuclear Information System (INIS)

    Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.

    2011-01-01

    Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.

  5. North Atlantic Tropical Cyclones: historical simulations and future changes with the new high-resolution Arpege AGCM.

    Science.gov (United States)

    Pilon, R.; Chauvin, F.; Palany, P.; Belmadani, A.

    2017-12-01

    A new version of the variable high-resolution Meteo-France Arpege atmospheric general circulation model (AGCM) has been developed for tropical cyclone (TC) studies, with a focus on the North Atlantic basin, where the model horizontal resolution is 15 km. Ensemble historical AMIP (Atmospheric Model Intercomparison Project)-type simulations (1965-2014) and future projections (2020-2080) under the IPCC (Intergovernmental Panel on Climate Change) representative concentration pathway (RCP) 8.5 scenario have been produced. A TC-like vortex tracking algorithm is used to investigate TC activity and variability. TC frequency, genesis, geographical distribution and intensity are examined. Historical simulations are compared to best-track and reanalysis datasets. Model TC frequency is generally realistic but tends to be too high during the first decade of the historical simulations. Biases appear to originate from both the tracking algorithm and the model climatology. Nevertheless, the model is able to simulate extremely well intense TCs corresponding to category 5 hurricanes in the North Atlantic, where the grid resolution is highest. Interaction between developing TCs and vertical wind shear is shown to be a contributing factor for TC variability. Future changes in TC activity and properties are also discussed.
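
    A schematic version of the vortex-detection step of such a tracking algorithm is sketched below; the variables, window size and vorticity threshold are illustrative, not the operational settings.

      # Schematic vortex-detection step of a TC tracking algorithm: flag grid points that
      # are local sea-level-pressure minima and exceed a low-level vorticity threshold.
      import numpy as np
      from scipy.ndimage import minimum_filter

      def candidate_tc_centres(slp, vort850, vort_thresh=3.5e-5, window=7):
          """slp in Pa, vort850 in s^-1, both 2D on the model grid; returns (j, i) indices."""
          is_local_min = slp == minimum_filter(slp, size=window)
          return np.argwhere(is_local_min & (vort850 > vort_thresh))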

  6. High-resolution simulations of cylindrical void collapse in energetic materials: Effect of primary and secondary collapse on initiation thresholds

    Science.gov (United States)

    Rai, Nirmal Kumar; Schmidt, Martin J.; Udaykumar, H. S.

    2017-04-01

    Void collapse in energetic materials leads to hot spot formation and enhanced sensitivity. Much recent work has been directed towards simulation of collapse-generated reactive hot spots. The resolution of voids in calculations to date has varied as have the resulting predictions of hot spot intensity. Here we determine the required resolution for reliable cylindrical void collapse calculations leading to initiation of chemical reactions. High-resolution simulations of collapse provide new insights into the mechanism of hot spot generation. It is found that initiation can occur in two different modes depending on the loading intensity: Either the initiation occurs due to jet impact at the first collapse instant or it can occur at secondary lobes at the periphery of the collapsed void. A key observation is that secondary lobe collapse leads to large local temperatures that initiate reactions. This is due to a combination of a strong blast wave from the site of primary void collapse and strong colliding jets and vortical flows generated during the collapse of the secondary lobes. The secondary lobe collapse results in a significant lowering of the predicted threshold for ignition of the energetic material. The results suggest that mesoscale simulations of void fields may suffer from significant uncertainty in threshold predictions because unresolved calculations cannot capture the secondary lobe collapse phenomenon. The implications of this uncertainty for mesoscale simulations are discussed in this paper.

  7. Cosmology

    International Nuclear Information System (INIS)

    Contopoulos, G.; Kotsakis, D.

    1987-01-01

    An extensive first part on a wealth of observational results relevant to cosmology lays the foundation for the second and central part of the book; the chapters on general relativity, the various cosmological theories, and the early universe. The authors present in a complete and almost non-mathematical way the ideas and theoretical concepts of modern cosmology including the exciting impact of high-energy particle physics, e.g. in the concept of the ''inflationary universe''. The final part addresses the deeper implications of cosmology, the arrow of time, the universality of physical laws, inflation and causality, and the anthropic principle

  8. Feasibility of High-Resolution Soil Erosion Measurements by Means of Rainfall Simulations and SfM Photogrammetry

    Directory of Open Access Journals (Sweden)

    Phoebe Hänsel

    2016-11-01

    Full Text Available The silty soils of the intensively used agricultural landscape of the Saxon loess province, eastern Germany, are very prone to soil erosion, mainly caused by water. Rainfall simulations, and increasingly also structure-from-motion (SfM) photogrammetry, are used as methods in soil erosion research not only to assess soil erosion by water but also to quantify soil loss. This study aims to validate SfM-photogrammetry-derived soil loss estimates against rainfall simulation measurements. Rainfall simulations were performed at three agricultural sites in central Saxony. In addition to the measured runoff and the soil loss determined by sampling (in mm), terrestrial images of the plots were taken with digital cameras before and after the rainfall simulation. Subsequently, SfM photogrammetry was used to reconstruct soil surface changes due to soil erosion in terms of high-resolution digital elevation models (DEMs) for the pre- and post-event (resolution 1 × 1 mm). By multi-temporal change detection, the digital elevation model of difference (DoD) and an averaged soil loss (in mm) are obtained; the latter was compared to the soil loss by sampling. Soil loss by DoD was higher than soil loss by sampling. The SfM-based soil loss estimation also included a comparison of three different ground control point (GCP) approaches, revealing that the most complex one delivers the most reliable soil loss by DoD. Additionally, soil bulk density changes and splash erosion beyond the plot were measured during the rainfall simulation experiments in order to separate these processes and their associated surface changes from the soil loss by DoD. Splash erosion was negligibly small, whereas higher soil densities after the rainfall simulations indicated soil compaction. By accounting for the calculated soil surface changes due to soil compaction, the soil loss by DoD reached approximately the same value as the soil loss measured in the rainfall simulations.
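
    The DoD-to-soil-loss step can be summarised in a few lines; the bulk density value and the assumption of a 1 × 1 mm grid are placeholders.

      # Illustrative calculation of plot-averaged soil loss from pre- and post-event DEMs
      # (the "DoD" approach described above); array contents and bulk density are placeholders.
      import numpy as np

      def soil_loss_from_dod(dem_pre, dem_post, bulk_density=1.4):
          """Return mean surface lowering (mm) and mass loss (kg/m^2).

          dem_pre, dem_post : 2D arrays of elevations in mm on a 1 x 1 mm grid.
          bulk_density      : soil bulk density in g/cm^3 (placeholder value).
          """
          dod = dem_post - dem_pre          # negative where the surface was lowered
          mean_lowering_mm = -dod.mean()    # positive number = net loss
          mass_loss_kg_m2 = mean_lowering_mm * bulk_density  # 1 mm at 1 g/cm^3 = 1 kg/m^2
          return mean_lowering_mm, mass_loss_kg_m2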

  9. Clues to the 'Magellanic Galaxy' from cosmological simulations

    NARCIS (Netherlands)

    Sales, Laura V.; Navarro, Julio F.; Cooper, Andrew P.; White, Simon D. M.; Frenk, Carlos S.; Helmi, Amina

    2011-01-01

    We use cosmological simulations from the Aquarius Project to study the orbital history of the Large Magellanic Cloud (LMC) and its potential association with other satellites of the Milky Way (MW). We search for dynamical analogues to the LMC and find a subhalo that matches the LMC position and

  10. Cosmological N-body simulations with generic hot dark matter

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Hannestad, Steen

    2017-01-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses...

  11. Remapping dark matter halo catalogues between cosmological simulations

    Science.gov (United States)

    Mead, A. J.; Peacock, J. A.

    2014-05-01

    We present and test a method for modifying the catalogue of dark matter haloes produced from a given cosmological simulation, so that it resembles the result of a simulation with an entirely different set of parameters. This extends the method of Angulo & White, which rescales the full particle distribution from a simulation. Working directly with the halo catalogue offers an advantage in speed, and also allows modifications of the internal structure of the haloes to account for non-linear differences between cosmologies. Our method can be used directly on a halo catalogue in a self-contained manner without any additional information about the overall density field; although the large-scale displacement field is required by the method, this can be inferred from the halo catalogue alone. We show proof of concept of our method by rescaling a matter-only simulation with no baryon acoustic oscillation (BAO) features to a more standard Λ cold dark matter model containing a cosmological constant and a BAO signal. In conjunction with the halo occupation approach, this method provides a basis for the rapid generation of mock galaxy samples spanning a wide range of cosmological parameters.

  12. Simulating cosmologies beyond ΛCDM with PINOCCHIO

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, Luca A. [Institut de Physique Theorique, Universite Paris-Saclay CEA, CNRS, F-91191 Gif-sur-Yvette, Cedex (France); Villaescusa-Navarro, Francisco [Center for Computational Astrophysics, 160 5th Ave, New York, NY, 10010 (United States); Monaco, Pierluigi [Sezione di Astronomia, Dipartimento di Fisica, Università di Trieste, via G.B. Tiepolo 11, I-34143 Trieste (Italy); Munari, Emiliano [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Borgani, Stefano [INAF – Astronomical Observatory of Trieste, via G.B. Tiepolo 11, I-34143 Trieste (Italy); Castorina, Emanuele [Berkeley Center for Cosmological Physics, University of California, Berkeley, CA 94720 (United States); Sefusatti, Emiliano, E-mail: luca.rizzo@cea.fr, E-mail: fvillaescusa@simonsfoundation.org, E-mail: monaco@oats.inaf.it, E-mail: munari@dark-cosmology.dk, E-mail: borgani@oats.inaf.it, E-mail: ecastorina@berkeley.edu, E-mail: emiliano.sefusatti@brera.inaf.it [INAF, Osservatorio Astronomico di Brera, Via Bianchi 46, I-23807 Merate (Italy)

    2017-01-01

    We present a method that extends the capabilities of the PINpointing Orbit-Crossing Collapsed HIerarchical Objects (PINOCCHIO) code, allowing it to generate accurate dark matter halo mock catalogues in cosmological models where the linear growth factor and the growth rate depend on scale. Such cosmologies comprise, among others, models with massive neutrinos and some classes of modified gravity theories. We validate the code by comparing the halo properties from PINOCCHIO against N-body simulations, focusing on cosmologies with massive neutrinos: νΛCDM. We analyse the halo mass function, halo two-point correlation function and halo power spectrum, showing that PINOCCHIO reproduces the results from simulations with the same level of precision as the original code (∼ 5–10%). We demonstrate that the abundance of halos in cosmologies with massless and massive neutrinos from PINOCCHIO matches very well the outcome of simulations, and point out that PINOCCHIO can reproduce the Ω{sub ν}–σ{sub 8} degeneracy that affects the halo mass function. We finally show that the clustering properties of the halos from PINOCCHIO matches accurately those from simulations both in real and redshift-space, in the latter case up to k = 0.3 h Mpc{sup −1}. We emphasize that the computational time required by PINOCCHIO to generate mock halo catalogues is orders of magnitude lower than the one needed for N-body simulations. This makes this tool ideal for applications like covariance matrix studies within the standard ΛCDM model but also in cosmologies with massive neutrinos or some modified gravity theories.

  13. Analyzing and Visualizing Cosmological Simulations with ParaView

    Science.gov (United States)

    Woodring, Jonathan; Heitmann, Katrin; Ahrens, James; Fasel, Patricia; Hsu, Chung-Hsing; Habib, Salman; Pope, Adrian

    2011-07-01

    The advent of large cosmological sky surveys—ushering in the era of precision cosmology—has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
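
    The friends-of-friends idea mentioned above can be sketched serially as follows (ParaView's halo finder is a parallel, far more efficient implementation of the same idea); the linking parameter b = 0.2 and the 20-particle membership cut are conventional illustrative choices.

      # Minimal serial friends-of-friends sketch: link particles closer than b times the
      # mean interparticle separation and keep connected groups above a size cut.
      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import connected_components

      def fof_halos(positions, box_size, b=0.2, min_members=20):
          """positions: (N, 3) array with coordinates in [0, box_size) (periodic box)."""
          n = len(positions)
          link = b * box_size / n ** (1.0 / 3.0)
          tree = cKDTree(positions, boxsize=box_size)
          pairs = tree.query_pairs(link, output_type='ndarray')
          graph = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
          n_groups, labels = connected_components(graph, directed=False)
          counts = np.bincount(labels, minlength=n_groups)
          return labels, np.where(counts >= min_members)[0]   # per-particle labels, halo ids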

  14. Analysis of a high-resolution regional climate simulation for Alpine temperature. Validation and influence of the NAO

    Energy Technology Data Exchange (ETDEWEB)

    Proemmel, K. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung

    2008-11-06

    Determining whether the increase in resolution of climate models improves the representation of climate is a crucial topic in regional climate modelling. An improvement over coarser-scale models is expected especially in areas with complex orography or along coastlines. However, some studies have shown no clear added value for regional climate models. In this study a high-resolution regional climate model simulation performed with REMO over the period 1958-1998 is analysed for 2 m temperature over the orographically complex European Alps and their surroundings, called the Greater Alpine Region (GAR). The model setup is in hindcast mode, meaning that the simulation is driven with perfect boundary conditions by the ERA40 reanalysis through prescribing the values at the lateral boundaries and spectral nudging of the large-scale wind field inside the model domain. The added value is analysed between the regional climate simulation with a resolution of 1/6° and the driving reanalysis with a resolution of 1.125°. Before analysing the added value, both the REMO simulation and the ERA40 reanalysis are validated against different station datasets of monthly and daily mean 2 m temperature. The largest dataset is the dense, homogenised and quality-controlled HISTALP dataset covering the whole GAR, which gave the opportunity for the validation undertaken in this study. The temporal variability of temperature, as quantified by correlation, is well represented by both REMO and ERA40. However, both show considerable biases. The REMO bias reaches 3 K in summer in regions known to experience a problem with summer drying in a number of regional models. In winter the bias is strongly influenced by the choice of the temperature lapse rate, which is applied to compare grid-box and station data at different altitudes, and has the strongest influence on inner Alpine subregions where the altitude differences are largest. By applying a constant lapse rate the REMO bias in winter in the high
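
    The altitude correction referred to above amounts to a one-line adjustment; the constant 6.5 K/km lapse rate used here is a standard textbook value, not necessarily the one adopted in the study.

      # Height correction used when validating gridded 2 m temperature against stations at
      # a different altitude, with a constant lapse rate (6.5 K/km is a placeholder value).
      def adjust_to_station(t_model_k, z_model_m, z_station_m, lapse_rate_k_per_m=0.0065):
          """Shift the model grid-box temperature to the station altitude."""
          return t_model_k + lapse_rate_k_per_m * (z_model_m - z_station_m)

      # Example: a grid box 400 m above the station appears ~2.6 K warmer after correction.
      print(adjust_to_station(271.2, z_model_m=1600.0, z_station_m=1200.0))  # -> 273.8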

  15. High-resolution simulations of the final assembly of Earth-like planets. 2. Water delivery and planetary habitability.

    Science.gov (United States)

    Raymond, Sean N; Quinn, Thomas; Lunine, Jonathan I

    2007-02-01

    The water content and habitability of terrestrial planets are determined during their final assembly, from perhaps 100 1,000-km "planetary embryos" and a swarm of billions of 1-10-km "planetesimals." During this process, we assume that water-rich material is accreted by terrestrial planets via impacts of water-rich bodies that originate in the outer asteroid region. We present an analysis of water delivery and planetary habitability in five high-resolution simulations containing about 10 times more particles than previous simulations. These simulations formed 15 terrestrial planets from 0.4 to 2.6 Earth masses, including five planets in the habitable zone. Every planet from each simulation accreted at least the Earth's current water budget; most accreted several times that amount (assuming no impact depletion). Each planet accreted at least five water-rich embryos and planetesimals from beyond 2.5 astronomical units; most accreted 10-20 water-rich bodies. We present a new model for water delivery to terrestrial planets in dynamically calm systems, with low-eccentricity or low-mass giant planets; such systems may be very common in the Galaxy. We suggest that water is accreted in comparable amounts from a few planetary embryos in a "hit or miss" way and from millions of planetesimals in a statistically robust process. Variations in water content are likely to be caused by fluctuations in the number of water-rich embryos accreted, as well as by systematic effects such as planetary mass and location, and giant planet properties.

  16. Pushing down the low-mass halo concentration frontier with the Lomonosov cosmological simulations

    Science.gov (United States)

    Pilipenko, Sergey V.; Sánchez-Conde, Miguel A.; Prada, Francisco; Yepes, Gustavo

    2017-12-01

    We introduce the Lomonosov suite of high-resolution N-body cosmological simulations covering a full box of size 32 h⁻¹ Mpc with low-mass-resolution particles (2 × 10⁷ h⁻¹ M⊙) and three zoom-in simulations of overdense, underdense and mean-density regions at much higher particle resolution (4 × 10⁴ h⁻¹ M⊙). The main purpose of this simulation suite is to extend the concentration-mass relation of dark matter haloes down to masses below those typically available in large cosmological simulations. The three different density regions available at higher resolution provide a better understanding of the effect of the local environment on halo concentration, known to be potentially important for small simulation boxes and small halo masses. Yet, we find the correction to be small in comparison with the scatter of halo concentrations. We conclude that zoom simulations, despite their limited representativity of the volume of the Universe, can be effectively used for the measurement of halo concentrations, at least at the halo masses probed by our simulations. In any case, after a precise characterization of this effect, we develop a robust technique to extrapolate the concentration values found in zoom simulations to larger volumes with greater accuracy. Altogether, Lomonosov provides a measure of the concentration-mass relation in the halo mass range 10⁷-10¹⁰ h⁻¹ M⊙ with superb halo statistics. This work represents a first important step to measure halo concentrations at intermediate, yet vastly unexplored, halo mass scales, down to the smallest ones. All Lomonosov data and files are public for the community's use.

  17. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the permeability fields from the proposed method, TBM and USRM revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
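
    The flux-limiting idea can be illustrated with the SUPERBEE limiter in a simple TVD advection update; the study applies such limiters to the Buckley-Leverett saturation equation, so the linear-advection stand-in below is only a compact illustration.

      # Sketch of the SUPERBEE flux limiter in a second-order TVD update for linear
      # advection of a sharp, saturation-like profile (periodic domain).
      import numpy as np

      def superbee(r):
          return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

      def tvd_step(u, c):
          """One step of u_t + a u_x = 0 with Courant number c (0 < c <= 1), a > 0."""
          du = np.roll(u, -1) - u                                          # forward differences
          r = np.where(du != 0.0, (u - np.roll(u, 1)) / np.where(du == 0.0, 1.0, du), 0.0)
          flux = u + 0.5 * (1.0 - c) * superbee(r) * du                    # limited face value
          return u - c * (flux - np.roll(flux, 1))

      if __name__ == "__main__":
          x = np.linspace(0.0, 1.0, 200, endpoint=False)
          u = np.where(x < 0.3, 1.0, 0.0)              # sharp "shock" profile
          for _ in range(100):
              u = tvd_step(u, c=0.5)
          print(round(float(u.max()), 3), round(float(u.min()), 3))  # stays in [0, 1]: no spurious oscillations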

  18. Incorporation of Three-dimensional Radiative Transfer into a Very High Resolution Simulation of Horizontally Inhomogeneous Clouds

    Science.gov (United States)

    Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.

    2016-12-01

    A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in a very high resolution (of the order of 10 m grid spacing) simulation of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds is expected to cause local heating/cooling in the atmosphere at fine spatial scales. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer typically requires far more computational resources than a plane-parallel approximation. This study incorporates a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because such a scheme is advantageous for calculations over a sequence of time steps (i.e., the scene at one time differs little from that at the previous time step). This scheme is also appropriate for calculating radiation with strong absorption, such as in the infrared regions. For efficient computation, the scheme utilizes several techniques, e.g., the multigrid method for the iterative solution, and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results show that the horizontal divergences and convergences of the infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating in clouds. In a future work, the 3D effects on radiative heating/cooling will be able to be

  19. Cosmology

    CERN Document Server

    García-Bellido, J

    2015-01-01

    In these lectures I review the present status of the so-called Standard Cosmological Model, based on the hot Big Bang Theory and the Inflationary Paradigm. I will make special emphasis on the recent developments in observational cosmology, mainly the acceleration of the universe, the precise measurements of the microwave background anisotropies, and the formation of structure like galaxies and clusters of galaxies from tiny primordial fluctuations generated during inflation.

  20. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    Science.gov (United States)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the ICs is therefore an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from that of the fixed-grid approach, with more power on small scales in the AMR simulations, even at fixed physical resolution, and AMR also produces offsets in the star formation at specific epochs. This is because, before certain times, the upper grid levels are held back to maintain an approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  1. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood, using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate, and the consequent ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.
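
    One schematic reading of this approach, not the paper's exact estimator, is to approximate each cosmology's marginal likelihood by averaging the observational posterior density of (α, β) over the (α, β) values predicted by that cosmology's simulations; the sketch below assumes such samples are already in hand.

      # Hedged sketch of a simulation-based marginal-likelihood comparison between two
      # cosmologies, using posterior samples of the Einstein-radius--mass scaling
      # parameters (alpha, beta). Schematic only.
      import numpy as np
      from scipy.stats import gaussian_kde

      def marginal_likelihood(obs_posterior_samples, sim_samples):
          """Both arguments: arrays of shape (n, 2) holding (alpha, beta) samples."""
          kde = gaussian_kde(obs_posterior_samples.T)   # density estimate of the observational posterior
          return kde(sim_samples.T).mean()              # average posterior density at simulated points

      def bayes_factor(obs, sim_cosmo_a, sim_cosmo_b):
          return marginal_likelihood(obs, sim_cosmo_a) / marginal_likelihood(obs, sim_cosmo_b)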

  2. Cosmological simulations using a static scalar-tensor theory

    Energy Technology Data Exchange (ETDEWEB)

    RodrIguez-Meza, M A [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico); Gonzalez-Morales, A X [Departamento Ingenierias, Universidad Iberoamericana, Prol. Paseo de la Reforma 880 Lomas de Santa Fe, Mexico D.F. Mexico (Mexico); Gabbasov, R F [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico); Cervantes-Cota, Jorge L [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico)

    2007-11-15

    We present ΛCDM N-body cosmological simulations in the framework of a static general scalar-tensor theory of gravity. Due to the influence of the non-minimally coupled scalar field, the gravitational potential is modified by a Yukawa-type term, yielding new structure formation dynamics. We present some preliminary results and, in particular, we compute the density and velocity profiles of the most massive group.
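
    One common parameterization of such a Yukawa-modified potential (the normalization convention may differ from the paper's) is Φ(r) = -[G M / ((1 + α) r)] [1 + α exp(-r/λ)], which reduces to the Newtonian potential as α → 0; the sketch below just evaluates it for illustrative parameter values.

      # Yukawa-modified Newtonian potential of the kind described above; alpha and the
      # range lam are free parameters of the scalar-tensor model (values illustrative).
      import numpy as np

      G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

      def yukawa_potential(r_kpc, mass_msun, alpha, lam_kpc):
          return -G * mass_msun / ((1.0 + alpha) * r_kpc) * (1.0 + alpha * np.exp(-r_kpc / lam_kpc))

      # alpha -> 0 (or r << lam with the chosen normalization) recovers the Newtonian case.
      r = np.logspace(0, 3, 4)                       # 1 kpc to 1 Mpc
      print(yukawa_potential(r, 1e12, alpha=0.5, lam_kpc=100.0))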

  3. The Atacama Cosmology Telescope: High-Resolution Sunyaev-Zel'dovich Array Observations of ACT SZE-Selected Clusters from the Equatorial Strip

    Science.gov (United States)

    Reese, Erik D.; Mroczkowski, Tony; Menanteau, Felipe; Hilton, Matt; Sievers, Jonathan; Aguirre, Paula; Appel, John William; Baker, Andrew J.; Bond, J. Richard; Das, Sudeep; hide

    2011-01-01

    We present follow-up observations with the Sunyaev-Zel'dovich Array (SZA) of optically-confirmed galaxy clusters found in the equatorial survey region of the Atacama Cosmology Telescope (ACT): ACT-CL J0022-0036, ACT-CL J2051+0057, and ACT-CL J2337+0016. ACT-CL J0022-0036 is a newly-discovered, massive (10^15 Msun), high-redshift (z = 0.81) cluster revealed by ACT through the Sunyaev-Zel'dovich effect (SZE). Deep, targeted observations with the SZA allow us to probe a broader range of cluster spatial scales, better disentangle cluster decrements from radio point source emission, and derive more robust integrated SZE flux and mass estimates than we can with ACT data alone. For the two clusters we detect with the SZA we compute integrated SZE signal and derive masses from the SZA data only. ACT-CL J2337+0016, also known as Abell 2631, has archival Chandra data that allow an additional X-ray-based mass estimate. Optical richness is also used to estimate cluster masses and shows good agreement with the SZE and X-ray-based estimates. Based on the point sources detected by the SZA in these three cluster fields and an extrapolation to ACT's frequency, we estimate that point sources could be contaminating the SZE decrement at the ≲ 20% level for some fraction of clusters.

  5. THE ATACAMA COSMOLOGY TELESCOPE: HIGH-RESOLUTION SUNYAEV-ZEL'DOVICH ARRAY OBSERVATIONS OF ACT SZE-SELECTED CLUSTERS FROM THE EQUATORIAL STRIP

    Energy Technology Data Exchange (ETDEWEB)

    Reese, Erik D.; Mroczkowski, Tony; Devlin, Mark J.; Dicker, Simon R. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Menanteau, Felipe; Baker, Andrew J. [Department of Physics and Astronomy, Rutgers, The State University of New Jersey, Piscataway, NJ 08854-8019 (United States); Hilton, Matt [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, NG7 2RD (United Kingdom); Sievers, Jonathan; Bond, J. Richard; Hajian, Amir [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON M5S 3H8 (Canada); Aguirre, Paula; Duenner, Rolando [Departamento de Astronomia y Astrofisica, Facultad de Fisica, Pontificia Universidad Catolica de Chile, Casilla 306, Santiago 22 (Chile); Appel, John William; Das, Sudeep; Essinger-Hileman, Thomas; Hincks, Adam D. [Joseph Henry Laboratories of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Fowler, Joseph W.; Hill, J. Colin [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544 (United States); Halpern, Mark; Hasselfield, Matthew [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z4 (Canada); and others

    2012-05-20

    We present follow-up observations with the Sunyaev-Zel'dovich Array (SZA) of optically confirmed galaxy clusters found in the equatorial survey region of the Atacama Cosmology Telescope (ACT): ACT-CL J0022-0036, ACT-CL J2051+0057, and ACT-CL J2337+0016. ACT-CL J0022-0036 is a newly discovered, massive (≈ 10^15 M⊙), high-redshift (z = 0.81) cluster revealed by ACT through the Sunyaev-Zel'dovich effect (SZE). Deep, targeted observations with the SZA allow us to probe a broader range of cluster spatial scales, better disentangle cluster decrements from radio point-source emission, and derive more robust integrated SZE flux and mass estimates than we can with ACT data alone. For the two clusters we detect with the SZA we compute integrated SZE signal and derive masses from the SZA data only. ACT-CL J2337+0016, also known as A2631, has archival Chandra data that allow an additional X-ray-based mass estimate. Optical richness is also used to estimate cluster masses and shows good agreement with the SZE and X-ray-based estimates. Based on the point sources detected by the SZA in these three cluster fields and an extrapolation to ACT's frequency, we estimate that point sources could be contaminating the SZE decrement at the ≲ 20% level for some fraction of clusters.

  6. The origin of kinematically distinct cores and misaligned gas discs in galaxies from cosmological simulations

    Science.gov (United States)

    Taylor, Philip; Federrath, Christoph; Kobayashi, Chiaki

    2018-06-01

    Integral field spectroscopy surveys provide spatially resolved gas and stellar kinematics of galaxies. They have unveiled a range of atypical kinematic phenomena, which require detailed modelling to understand. We present results from a cosmological simulation that includes stellar and AGN feedback. We find that the distribution of angles between the gas and stellar angular momenta of galaxies is not affected by projection effects. We examine five galaxies (≈6 per cent of well-resolved galaxies) that display atypical kinematics; two of the galaxies have kinematically distinct cores (KDC), while the other three have counter-rotating gas and stars. All five form the majority of their stars in the field, subsequently falling into cosmological filaments where the relative orientation of the stellar angular momentum and the bulk gas flow leads to the formation of a counter-rotating gas disc. The accreted gas exchanges angular momentum with pre-existing co-rotating gas, causing it to fall to the centre of the galaxy. This triggers low-level AGN feedback, which reduces star formation. Later, two of the galaxies experience a minor merger (stellar mass ratio ~1/10) with a galaxy on a retrograde orbit relative to the spin of the stellar component of the primary. This produces the KDCs, and is a different mechanism from those suggested in other works. The role of minor mergers in the kinematic evolution of galaxies may have been under-appreciated in the past, and large, high-resolution cosmological simulations will be necessary to gain a better understanding in this area.

  7. Cosmology

    CERN Document Server

    Vittorio, Nicola

    2018-01-01

    Modern cosmology has changed significantly over the years, from the discovery to the precision measurement era. The data now available provide a wealth of information, mostly consistent with a model where dark matter and dark energy are in a rough proportion of 3:7. The time is right for a fresh new textbook which captures the state-of-the art in cosmology. Written by one of the world's leading cosmologists, this brand new, thoroughly class-tested textbook provides graduate and undergraduate students with coverage of the very latest developments and experimental results in the field. Prof. Nicola Vittorio shows what is meant by precision cosmology, from both theoretical and observational perspectives.

  8. Variability of wet troposphere delays over inland reservoirs as simulated by a high-resolution regional climate model

    Science.gov (United States)

    Clark, E.; Lettenmaier, D. P.

    2014-12-01

    Satellite radar altimetry is widely used for measuring global sea level variations and, increasingly, water height variations of inland water bodies. Existing satellite radar altimeters measure water surfaces directly below the spacecraft (approximately at nadir). Over the ocean, most of these satellites use radiometry to measure the delay of radar signals caused by water vapor in the atmosphere (also known as the wet troposphere delay (WTD)). However, radiometry can only be used to estimate this delay over the largest inland water bodies, such as the Great Lakes, due to spatial resolution issues. As a result, atmospheric models are typically used to simulate and correct for the WTD at the time of observations. The resolutions of these models are quite coarse, at best about 5000 km2 at 30˚N. The upcoming NASA- and CNES-led Surface Water and Ocean Topography (SWOT) mission, on the other hand, will use interferometric synthetic aperture radar (InSAR) techniques to measure a 120-km-wide swath of the Earth's surface. SWOT is expected to make useful measurements of water surface elevation and extent (and storage change) for inland water bodies at spatial scales as small as 250 m, which is much smaller than current altimetry targets and several orders of magnitude smaller than the models used for wet troposphere corrections. Here, we calculate WTD from very high-resolution (4/3-km to 4-km) simulations of the Weather Research and Forecasting (WRF) regional climate model, and use the results to evaluate spatial variations in WTD. We focus on six U.S. reservoirs: Lake Elwell (MT), Lake Pend Oreille (ID), Upper Klamath Lake (OR), Elephant Butte (NM), Ray Hubbard (TX), and Sam Rayburn (TX). The reservoirs vary in climate, shape, use, and size. Because evaporation from open water impacts local water vapor content, we compare time series of WTD over land and water in the vicinity of each reservoir. To account for resolution effects, we examine the difference in WRF-simulated
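
    For orientation, a zenith wet delay can be estimated from model humidity and temperature profiles with the common Smith and Weintraub wet-refractivity approximation N_wet ≈ 3.73 × 10^5 e/T² (e in hPa, T in K); the constants vary slightly across the literature and the profile below is purely illustrative.

      # Hedged sketch of a zenith wet-tropospheric-delay estimate from vertical profiles
      # of water vapour pressure and temperature (constants and profile are illustrative).
      import numpy as np

      def zenith_wet_delay(e_hpa, t_k, z_m):
          """Integrate wet refractivity over height levels (arrays ordered bottom-up); returns metres."""
          n_wet = 3.73e5 * e_hpa / t_k**2          # wet refractivity in N-units
          return 1e-6 * np.trapz(n_wet, z_m)

      # Illustrative moist profile: a delay of a few tens of centimetres is typical.
      z = np.linspace(0.0, 10000.0, 51)
      t = 290.0 - 0.0065 * z
      e = 15.0 * np.exp(-z / 2000.0)
      print(round(zenith_wet_delay(e, t, z), 3), "m")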

  9. High-resolution simulations of unstable cylindrical gravity currents undergoing wandering and splitting motions in a rotating system

    Science.gov (United States)

    Dai, Albert; Wu, Ching-Sen

    2018-02-01

    High-resolution simulations of unstable cylindrical gravity currents when wandering and splitting motions occur in a rotating system are reported. In this study, our attention is focused on the situation of unstable rotating cylindrical gravity currents when the ratio of Coriolis to inertia forces is larger, namely, 0.5 ≤ C ≤ 2.0, in comparison to the stable ones when C ≤ 0.3 as investigated previously by the authors. The simulations reproduce the major features of the unstable rotating cylindrical gravity currents observed in the laboratory, i.e., vortex-wandering or vortex-splitting following the contraction-relaxation motion, and good agreement is found when compared with the experimental results on the outrush radius of the advancing front and on the number of bulges. Furthermore, the simulations provide energy budget information which could not be attained in the laboratory. After the heavy fluid is released, the heavy fluid collapses and a contraction-relaxation motion is at work for approximately 2-3 revolutions of the system. During the contraction-relaxation motion of the heavy fluid, the unstable rotating cylindrical gravity currents behave similar to the stable ones. Towards the end of the contraction-relaxation motion, the dissipation rate in the system reaches a local minimum and a quasi-geostrophic equilibrium state is reached. After the quasi-geostrophic equilibrium state, vortex-wandering or vortex-splitting may occur depending on the ratio of Coriolis to inertia forces. The vortex-splitting process begins with non-axisymmetric bulges and, as the bulges grow, the kinetic energy increases at the expense of decreasing potential energy in the system. The completion of vortex-splitting is accompanied by a local maximum of dissipation rate and a local maximum of kinetic energy in the system. A striking feature of the unstable rotating cylindrical gravity currents is the persistent upwelling and downwelling motions, which are observed for both the

  10. Mediterranean Thermohaline Response to Large-Scale Winter Atmospheric Forcing in a High-Resolution Ocean Model Simulation

    Science.gov (United States)

    Cusinato, Eleonora; Zanchettin, Davide; Sannino, Gianmaria; Rubino, Angelo

    2018-04-01

    Large-scale circulation anomalies over the North Atlantic and Euro-Mediterranean regions described by dominant climate modes, such as the North Atlantic Oscillation (NAO), the East Atlantic pattern (EA), the East Atlantic/Western Russia pattern (EAWR) and the Mediterranean Oscillation Index (MOI), significantly affect interannual-to-decadal climatic and hydroclimatic variability in the Euro-Mediterranean region. However, whereas previous studies assessed the impact of such climate modes on air-sea heat and freshwater fluxes in the Mediterranean Sea, the propagation of these atmospheric forcing signals from the surface toward the interior and the abyss of the Mediterranean Sea remains unexplored. Here, we use a high-resolution ocean model simulation covering the 1979-2013 period to investigate the spatial patterns and time scales of the Mediterranean thermohaline response to winter forcing from the NAO, EA, EAWR and MOI. We find that these modes significantly imprint on the thermohaline properties in key areas of the Mediterranean Sea through a variety of mechanisms. Typically, density anomalies induced by all modes remain confined to the upper 600 m and remain significant for up to 18-24 months. One of the clearest propagation signals refers to the EA in the Adriatic and northern Ionian seas: there, negative EA anomalies are associated with an extensive positive density response, with anomalies that sink to the bottom of the South Adriatic Pit within about two years. Other strong responses are the thermally driven responses to the EA in the Gulf of Lions and to the EAWR in the Aegean Sea. MOI and EAWR forcing of the thermohaline properties in the Eastern Mediterranean sub-basins seems to be determined by reinforcement processes linked to the persistence of these modes in multiannual anomalous states. Our study also suggests that the NAO, EA, EAWR and MOI could critically interfere with internal, deep and abyssal ocean dynamics and variability in the Mediterranean Sea.

  11. Development of local-scale high-resolution atmospheric dispersion model using large-eddy simulation. Part 3: turbulent flow and plume dispersion in building arrays

    Czech Academy of Sciences Publication Activity Database

    Nakayama, H.; Jurčáková, Klára; Nagai, H.

    2013-01-01

    Roč. 50, č. 5 (2013), s. 503-519 ISSN 0022-3131 Institutional support: RVO:61388998 Keywords: local-scale high-resolution dispersion model * nuclear emergency response system * large-eddy simulation * spatially developing turbulent boundary layer flow Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.452, year: 2013

  12. Effects of the initial conditions on cosmological $N$-body simulations

    OpenAIRE

    L'Huillier, Benjamin; Park, Changbom; Kim, Juhan

    2014-01-01

    Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the infl...

  13. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Directory of Open Access Journals (Sweden)

    D. Heinzeller

    2018-04-01

    Full Text Available Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ), with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980–2010 and the two future periods 2020–2050 and 2070–2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent to the north and almost no change in

  14. An Investigation of Intracluster Light Evolution Using Cosmological Hydrodynamical Simulations

    Science.gov (United States)

    Tang, Lin; Lin, Weipeng; Cui, Weiguang; Kang, Xi; Wang, Yang; Contini, E.; Yu, Yu

    2018-06-01

    Intracluster light (ICL) in observations is usually identified through the surface brightness limit (SBL) method. In this paper, for the first time we produce mock images of galaxy groups and clusters, using a cosmological hydrodynamical simulation to investigate the ICL fraction and focus on its dependence on observational parameters, e.g., the SBL, the effects of cosmological redshift-dimming, point-spread function (PSF), and CCD pixel size. Detailed analyses suggest that the width of the PSF has a significant effect on the measured ICL fraction, while the relatively small pixel size shows almost no influence. It is found that the measured ICL fraction depends strongly on the SBL. At a fixed SBL and redshift, the measured ICL fraction decreases with increasing halo mass, while with a much fainter SBL, it does not depend on halo mass at low redshifts. In our work, the measured ICL fraction shows a clear dependence on the cosmological redshift-dimming effect. It is found that there is more mass locked in the ICL component than light, suggesting that the use of a constant mass-to-light ratio at high surface brightness levels will lead to an underestimate of ICL mass. Furthermore, it is found that the radial profile of ICL shows a characteristic radius that is almost independent of halo mass. The current measurement of ICL from observations has a large dispersion due to different methods, and we emphasize the importance of using the same definition when observational results are compared with theoretical predictions.
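
    To make the surface-brightness-limit (SBL) method described above concrete, the toy Python sketch below counts as intracluster light every pixel fainter than a chosen SBL after applying the (1+z)^4 cosmological dimming. It illustrates the general technique only, not the authors' pipeline; the mock image, threshold, and redshift are hypothetical, and real measurements also fold in the PSF and pixel-size effects the paper investigates.

```python
import numpy as np

def icl_fraction(image_sb, sb_limit, z=0.0):
    """Toy illustration of the surface-brightness-limit (SBL) method.

    image_sb : 2D array of rest-frame surface brightness in mag/arcsec^2
    sb_limit : observed-frame SBL in mag/arcsec^2; pixels fainter than this
               are counted as intracluster light
    z        : redshift; cosmological dimming adds 2.5*log10((1+z)^4) mag
    """
    dimming = 10.0 * np.log10(1.0 + z)          # (1+z)^4 dimming in magnitudes
    observed = image_sb + dimming               # apply redshift dimming
    flux = 10.0 ** (-0.4 * observed)            # convert magnitudes to relative flux
    icl = flux[observed > sb_limit].sum()       # fainter than the SBL -> ICL
    return icl / flux.sum()

# Example: a mock 100x100 image with a bright core and faint outskirts
rng = np.random.default_rng(0)
img = 26.0 + rng.normal(0.0, 1.0, size=(100, 100))
img[40:60, 40:60] -= 5.0                        # bright central-galaxy region
print(icl_fraction(img, sb_limit=26.5, z=0.25))
```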

  15. MassiveNuS: cosmological massive neutrino simulations

    Science.gov (United States)

    Liu, Jia; Bird, Simeon; Zorrilla Matilla, José Manuel; Hill, J. Colin; Haiman, Zoltán; Madhavacheril, Mathew S.; Petri, Andrea; Spergel, David N.

    2018-03-01

    The non-zero mass of neutrinos suppresses the growth of cosmic structure on small scales. Since the level of suppression depends on the sum of the masses of the three active neutrino species, the evolution of large-scale structure is a promising tool to constrain the total mass of neutrinos and possibly shed light on the mass hierarchy. In this work, we investigate these effects via a large suite of N-body simulations that include massive neutrinos using an analytic linear-response approximation: the Cosmological Massive Neutrino Simulations (MassiveNuS). The simulations include the effects of radiation on the background expansion, as well as the clustering of neutrinos in response to the nonlinear dark matter evolution. We allow three cosmological parameters to vary: the neutrino mass sum Mν in the range of 0–0.6 eV, the total matter density Ωm, and the primordial power spectrum amplitude As. The rms density fluctuation in spheres of 8 comoving Mpc/h (σ8) is a derived parameter as a result. Our data products include N-body snapshots, halo catalogues, merger trees, ray-traced galaxy lensing convergence maps for four source redshift planes between zs=1–2.5, and ray-traced cosmic microwave background lensing convergence maps. We describe the simulation procedures and code validation in this paper. The data are publicly available at http://columbialensing.org.
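
    Since σ8 is quoted above as a derived parameter, the sketch below shows the standard way it is obtained from a linear power spectrum: the variance of the density field smoothed with an 8 Mpc/h spherical top-hat window. The toy power spectrum is a placeholder; in practice one would use a tabulated P(k) from a Boltzmann code.

```python
import numpy as np

def tophat_window(x):
    # Fourier transform of a real-space spherical top-hat filter
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_R(k, pk, R=8.0):
    """sigma(R) from a tabulated linear power spectrum.

    k  : wavenumbers in h/Mpc (ascending); pk : P(k) in (Mpc/h)^3
    R  : top-hat radius in Mpc/h (R = 8 gives sigma_8)
    """
    integrand = k**2 * pk * tophat_window(k * R) ** 2 / (2.0 * np.pi**2)
    # simple trapezoidal integration over the tabulated k grid
    return np.sqrt(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k)))

# Toy power spectrum standing in for a CAMB/CLASS table (arbitrary amplitude)
k = np.logspace(-4, 2, 2000)
pk_toy = 2.0e4 * (k / 0.05) ** 0.96 / (1.0 + (k / 0.1) ** 3)
print("sigma_8 of the toy spectrum:", sigma_R(k, pk_toy, R=8.0))
```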

  16. Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil

    Science.gov (United States)

    Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo

    2013-04-01

    The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997. The model has gone through upgrades over the years. In order to prepare the model for operational higher-resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilian cities of Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. In addition, the region suffers from frequent floods and landslides, so accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high-resolution simulations in this complex area. Verification of model runs uses observations taken from the nuclear power plant and higher-resolution reanalysis data. The runs were tested in a period when flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nest: it is driven by Eta-15km, which in turn is driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface-layer stability function, adjusting cloud microphysics scheme parameters, and further increasing vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing vertical resolution to 60 layers caused a further increase in precipitation production. This excessive

  17. Cosmological N-body simulations with generic hot dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK–8000 Aarhus C (Denmark)

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  18. HOT GAS HALOS AROUND DISK GALAXIES: CONFRONTING COSMOLOGICAL SIMULATIONS WITH OBSERVATIONS

    International Nuclear Information System (INIS)

    Rasmussen, Jesper; Sommer-Larsen, Jesper; Pedersen, Kristian; Toft, Sune; Grove, Lisbeth F.; Benson, Andrew; Bower, Richard G.

    2009-01-01

    Models of disk galaxy formation commonly predict the existence of an extended reservoir of accreted hot gas surrounding massive spirals at low redshift. As a test of these models, we use X-ray and Hα data of the two massive, quiescent edge-on spirals NGC 5746 and NGC 5170 to investigate the amount and origin of any hot gas in their halos. Contrary to our earlier claim, the Chandra analysis of NGC 5746, employing more recent calibration data, does not reveal any significant evidence for diffuse X-ray emission outside the optical disk, with a 3σ upper limit to the halo X-ray luminosity of 4 × 10^39 erg s^-1. An identical study of the less massive NGC 5170 also fails to detect any extraplanar X-ray emission. By extracting hot halo properties of disk galaxies formed in cosmological hydrodynamical simulations, we compare these results to expectations for cosmological accretion of hot gas by spirals. For Milky-Way-sized galaxies, these high-resolution simulations predict hot halo X-ray luminosities which are lower by a factor of ∼2 compared to our earlier results reported by Toft et al. We find the new simulation predictions to be consistent with our observational constraints for both NGC 5746 and NGC 5170, while also confirming that the hot gas detected so far around more actively star-forming spirals is in general probably associated with stellar activity in the disk. Observational results on quiescent disk galaxies at the high-mass end are nevertheless providing powerful constraints on theoretical predictions, and hence on the assumed input physics in numerical studies of disk galaxy formation and evolution.

  19. A small-scale dynamo in feedback-dominated galaxies - III. Cosmological simulations

    Science.gov (United States)

    Rieder, Michael; Teyssier, Romain

    2017-12-01

    Magnetic fields are widely observed in the Universe in virtually all astrophysical objects, from individual stars to entire galaxies, even in the intergalactic medium, but their specific genesis has long been debated. Due to the development of more realistic models of galaxy formation, viable scenarios are emerging to explain cosmic magnetism, thanks to both deeper observations and more efficient and accurate computer simulations. We present here a new cosmological high-resolution zoom-in magnetohydrodynamic (MHD) simulation, using the adaptive mesh refinement technique, of a dwarf galaxy with an initially weak and uniform magnetic seed field that is amplified by a small-scale dynamo (SSD) driven by supernova-induced turbulence. As first structures form from the gravitational collapse of small density fluctuations, the frozen-in magnetic field separates from the cosmic expansion and grows through compression. In a second step, star formation sets in and establishes a strong galactic fountain, self-regulated by supernova explosions. Inside the galaxy, the interstellar medium becomes highly turbulent, dominated by strong supersonic shocks, as demonstrated by the spectral analysis of the gas kinetic energy. In this turbulent environment, the magnetic field is quickly amplified via a SSD process and is finally carried out into the circumgalactic medium by a galactic wind. This realistic cosmological simulation explains how initially weak magnetic seed fields can be amplified quickly in early, feedback-dominated galaxies, and predicts, as a consequence of the SSD process, that high-redshift magnetic fields are likely to be dominated by their small-scale components.

  20. Immersed boundary methods for high-resolution simulation of atmospheric boundary-layer flow over complex terrain

    Science.gov (United States)

    Lundquist, Katherine Ann

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  1. Immersed Boundary Methods for High-Resolution Simulation of Atmospheric Boundary-Layer Flow Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, K A [Univ. of California, Berkeley, CA (United States)

    2010-05-12

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the
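
    As a minimal illustration of the ghost-cell idea behind immersed boundary methods, the sketch below advances a 1-D diffusion problem on a regular grid while enforcing a Dirichlet condition at a boundary that does not coincide with a grid point. It is a schematic of the general approach only, not the WRF implementation described in the dissertation above; the grid size, time step, and boundary location are arbitrary choices.

```python
import numpy as np

def step_heat_ib(u, dx, dt, nu, x_b, u_b):
    """One explicit step of du/dt = nu d2u/dx2 on a regular grid with an
    immersed Dirichlet boundary u(x_b) = u_b lying between grid points.
    The first grid point at or beyond the boundary acts as a ghost cell
    whose value is set by linear extrapolation through the boundary."""
    n = len(u)
    x = np.arange(n) * dx
    ib = np.searchsorted(x, x_b)          # first index at or beyond the boundary
    theta = (x_b - x[ib - 1]) / dx        # fractional boundary position in the cell
    u = u.copy()
    # ghost value so the line through u[ib-1] and u[ib] passes through u_b at x_b
    u[ib] = (u_b - (1.0 - theta) * u[ib - 1]) / theta if theta > 0 else u_b
    # explicit diffusion update on fluid points only (left of the boundary)
    lap = np.zeros_like(u)
    lap[1:ib] = (u[2:ib + 1] - 2.0 * u[1:ib] + u[0:ib - 1]) / dx**2
    u_new = u + nu * dt * lap
    u_new[ib:] = u_b                      # solid region held at the wall value
    u_new[0] = u[0]                       # fixed value at the left domain edge
    return u_new

# Example: cooling rod with an immersed wall at x_b = 0.63 held at u_b = 0
u = np.ones(65)
for _ in range(200):
    u = step_heat_ib(u, dx=1 / 64, dt=1e-4, nu=1.0, x_b=0.63, u_b=0.0)
print(u[:8].round(3))
```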

  2. Cosmological simulations of isotropic conduction in galaxy clusters

    International Nuclear Information System (INIS)

    Smith, Britton; O'Shea, Brian W.; Voit, G. Mark; Ventimiglia, David; Skillman, Samuel W.

    2013-01-01

    Simulations of galaxy clusters have a difficult time reproducing the radial gas-property gradients and red central galaxies observed to exist in the cores of galaxy clusters. Thermal conduction has been suggested as a mechanism that can help bring simulations of cluster cores into better alignment with observations by stabilizing the feedback processes that regulate gas cooling, but this idea has not yet been well tested with cosmological numerical simulations. Here we present cosmological simulations of 10 galaxy clusters performed with five different levels of isotropic Spitzer conduction, which alters both the cores and outskirts of clusters, though not dramatically. In the cores, conduction flattens central temperature gradients, making them nearly isothermal and slightly lowering the central density, but failing to prevent a cooling catastrophe there. Conduction has little effect on temperature gradients outside of cluster cores because outward conductive heat flow tends to inflate the outer parts of the intracluster medium (ICM), instead of raising its temperature. In general, conduction tends to reduce temperature inhomogeneity in the ICM, but our simulations indicate that those homogenizing effects would be extremely difficult to observe in ∼5 keV clusters. Outside the virial radius, our conduction implementation lowers the gas densities and temperatures because it reduces the Mach numbers of accretion shocks. We conclude that, despite the numerous small ways in which conduction alters the structure of galaxy clusters, none of these effects are significant enough to make the efficiency of conduction easily measurable, unless its effects are more pronounced in clusters hotter than those we have simulated.
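
    For readers unfamiliar with the conduction model referenced above, the sketch below evaluates an isotropic conductive heat flux using the commonly quoted classical Spitzer scaling κ ∝ T^{5/2} together with a simple saturation cap, scaled by a suppression factor analogous in spirit to the paper's five conduction levels. The prefactor, the Coulomb logarithm, and the example ICM values are textbook-style approximations supplied here for illustration; they are not numbers taken from, and this is not the implementation used in, the simulations above.

```python
import numpy as np

K_B = 1.380649e-16          # Boltzmann constant, erg/K
M_E = 9.109e-28             # electron mass, g

def conductive_flux(T, grad_T, n_e, f_sp=1.0, coulomb_log=37.0):
    """Classical Spitzer-like conductive heat flux with a saturation cap.

    T, grad_T, n_e in CGS (K, K/cm, cm^-3); f_sp is a suppression factor.
    The coefficient below reproduces the often-quoted approximation
    kappa ~ 5e-7 T^{5/2} erg s^-1 cm^-1 K^-1 for ln(Lambda) ~ 37; treat the
    exact prefactor as indicative only.
    """
    kappa = f_sp * 1.84e-5 / coulomb_log * T**2.5        # erg s^-1 cm^-1 K^-1
    q_classical = kappa * np.abs(grad_T)
    # simple saturation estimate ~ 0.4 n_e k_B T * electron thermal speed
    q_saturated = 0.4 * n_e * K_B * T * np.sqrt(2.0 * K_B * T / (np.pi * M_E))
    return np.minimum(q_classical, q_saturated)

# Example: ICM-like conditions, T ~ 6e7 K, gradient over ~100 kpc, 30% Spitzer
print(conductive_flux(T=6e7, grad_T=6e7 / 3.1e23, n_e=1e-3, f_sp=0.3))
```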

  3. THE PRESSURE OF THE STAR-FORMING INTERSTELLAR MEDIUM IN COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Munshi, Ferah; Quinn, Thomas R.; Governato, Fabio; Christensen, Charlotte; Wadsley, James; Loebman, Sarah; Shen, Sijing

    2014-01-01

    We examine the pressure of the star-forming interstellar medium (ISM) of Milky-Way-sized disk galaxies using fully cosmological SPH+N-body, high-resolution simulations. These simulations include explicit treatment of metal-line cooling in addition to dust and self-shielding, H2-based star formation. The four simulated halos have masses ranging from a few times 10^10 to nearly 10^12 solar masses. Using a kinematic decomposition of these galaxies into present-day bulge and disk components, we find that the typical pressure of the star-forming ISM in the present-day bulge is higher than that in the present-day disk by an order of magnitude. We also find that the pressure of the star-forming ISM at high redshift is, on average, higher than ISM pressures at low redshift. This explains why the bulge forms at higher pressures: the disk assembles at lower redshift when the ISM exhibits lower pressure and the bulge forms at high redshift when the ISM has higher pressure. If ISM pressure and IMF variation are tied together, these results could indicate a time-dependent IMF in Milky-Way-like systems as well as a different IMF in the bulge and the disk.

  4. High-resolution numerical simulation of summer wind field comparing WRF boundary-layer parametrizations over complex Arctic topography: case study from central Spitsbergen

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Hošek, Jiří

    2017-01-01

    Roč. 26, č. 4 (2017), s. 391-408 ISSN 0941-2948 Institutional support: RVO:68378289 Keywords: surface wind field * model evaluation * topographic effect * circulation pattern * Svalbard Subject RIV: DG - Atmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 1.989, year: 2016 http://www.schweizerbart.de/papers/metz/detail/prepub/87659/High_resolution_numerical_simulation_of_summer_wind_field_comparing_WRF_boundary_layer_parametrizations_over_complex_Arctic_topography_case_study_from_central_Spitsbergen

  5. Simulating nonlinear cosmological structure formation with massive neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Arka; Dalal, Neal, E-mail: abanerj6@illinois.edu, E-mail: dalaln@illinois.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080 (United States)

    2016-11-01

    We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.

  6. Simulating nonlinear cosmological structure formation with massive neutrinos

    International Nuclear Information System (INIS)

    Banerjee, Arka; Dalal, Neal

    2016-01-01

    We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.

  7. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  8. N-body simulations for coupled scalar-field cosmology

    International Nuclear Information System (INIS)

    Li Baojiu; Barrow, John D.

    2011-01-01

    We describe in detail the general methodology and numerical implementation of consistent N-body simulations for coupled-scalar-field models, including background cosmology and the generation of initial conditions (with the different couplings to different matter species taken into account). We perform fully consistent simulations for a class of coupled-scalar-field models with an inverse power-law potential and negative coupling constant, for which the chameleon mechanism does not work. We find that in such cosmological models the scalar-field potential plays a negligible role except in the background expansion, and the fifth force that is produced is proportional to gravity in magnitude, justifying the use of a rescaled gravitational constant G in some earlier N-body simulation works for similar models. We then study the effects of the scalar coupling on the nonlinear matter power spectra and compare with linear perturbation calculations to see the agreement and places where the nonlinear treatment deviates from the linear approximation. We also propose an algorithm to identify gravitationally virialized matter halos, trying to take account of the fact that the virialization itself is also modified by the scalar-field coupling. We use the algorithm to measure the mass function and study the properties of dark-matter halos. We find that the net effect of the scalar coupling helps produce more heavy halos in our simulation boxes and suppresses the inner (but not the outer) density profile of halos compared with the ΛCDM prediction, while the suppression weakens as the coupling between the scalar field and dark-matter particles increases in strength.
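
    The remark above that the fifth force is proportional to gravity in magnitude is often summarised, for a constant coupling β in one common convention, by a rescaled constant G_eff = G(1 + 2β²) acting between dark-matter particles only. The direct-summation sketch below illustrates that shortcut; the coupling value, particle set, and units are hypothetical, and this is not the paper's fully consistent N-body implementation, which evolves the scalar field itself.

```python
import numpy as np

G = 4.30091e-9   # Newton's constant in Mpc (km/s)^2 / M_sun

def pairwise_accel(pos, mass, is_dm, beta=0.1, soft=0.01):
    """Direct-summation accelerations with a rescaled gravitational constant
    G_eff = G (1 + 2 beta^2) acting only between dark-matter pairs -- the
    constant-coupling shortcut the abstract says earlier works relied on."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        dr = pos - pos[i]                               # separations to all particles
        r2 = (dr**2).sum(axis=1) + soft**2              # softened squared distance
        g = np.where(is_dm & is_dm[i], G * (1.0 + 2.0 * beta**2), G)
        g[i] = 0.0                                      # exclude self-interaction
        acc[i] = ((g * mass / r2**1.5)[:, None] * dr).sum(axis=0)
    return acc

# Example: 5 dark-matter and 5 baryonic tracer particles in a 1 Mpc box
rng = np.random.default_rng(3)
pos = rng.random((10, 3))
print(pairwise_accel(pos, mass=np.full(10, 1e10), is_dm=np.arange(10) < 5).shape)
```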

  9. Quantitative and comparative visualization applied to cosmological simulations

    International Nuclear Information System (INIS)

    Ahrens, James; Heitmann, Katrin; Habib, Salman; Ankeny, Lee; McCormick, Patrick; Inman, Jeff; Armstrong, Ryan; Ma, Kwan-Liu

    2006-01-01

    Cosmological simulations follow the formation of nonlinear structure in dark and luminous matter. The associated simulation volumes and dynamic range are very large, making visualization both a necessary and challenging aspect of the analysis of these datasets. Our goal is to understand sources of inconsistency between different simulation codes that are started from the same initial conditions. Quantitative visualization supports the definition of, and reasoning about, analytically defined features of interest. Comparative visualization supports the ability to visually study, side by side, multiple related visualizations of these simulations. For instance, a scientist can visually distinguish that there are fewer halos (localized lumps of tracer particles) in low-density regions for one simulation code out of a collection. This qualitative result will enable the scientist to develop a hypothesis, such as loss of halos in low-density regions due to limited resolution, to explain the inconsistency between the different simulations. Quantitative support then allows one to confirm or reject the hypothesis. If the hypothesis is rejected, this step may lead to new insights and a new hypothesis, not available from the purely qualitative analysis. We will present methods to significantly improve the scientific analysis process by incorporating quantitative analysis as the driver for visualization. Aspects of this work are included as part of two visualization tools: ParaView, an open-source large data visualization tool, and Scout, an analysis-language based, hardware-accelerated visualization tool.

  10. Selecting ultra-faint dwarf candidate progenitors in cosmological N-body simulations at high redshifts

    Science.gov (United States)

    Safarzadeh, Mohammadtaher; Ji, Alexander P.; Dooley, Gregory A.; Frebel, Anna; Scannapieco, Evan; Gómez, Facundo A.; O'Shea, Brian W.

    2018-06-01

    The smallest satellites of the Milky Way ceased forming stars during the epoch of reionization and thus provide archaeological access to galaxy formation at z > 6. Numerical studies of these ultrafaint dwarf galaxies (UFDs) require expensive cosmological simulations with high mass resolution that are carried out down to z = 0. However, if we are able to statistically identify UFD host progenitors at high redshifts with relatively high probabilities, we can avoid this high computational cost. To find such candidates, we analyse the merger trees of Milky Way type haloes from the high-resolution Caterpillar suite of dark matter only simulations. Satellite UFD hosts at z = 0 are identified based on four different abundance matching (AM) techniques. All the haloes at high redshifts are traced forward in time in order to compute the probability of surviving as satellite UFDs today. Our results show that selecting potential UFD progenitors based solely on their mass at z = 12 (8) results in a 10 per cent (20 per cent) chance of obtaining a surviving UFD at z = 0 in three of the AM techniques we adopted. We find that the progenitors of surviving satellite UFDs have lower virial ratios (η), and are preferentially located at large distances from the main MW progenitor, while they show no correlation with concentration parameter. Haloes with favorable locations and virial ratios are ≈3 times more likely to survive as satellite UFD candidates at z = 0.

  11. INTELLIGENT DESIGN: ON THE EMULATION OF COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Holm, Oskar; Knox, Lloyd

    2011-01-01

    Simulation design is the choice of locations in parameter space at which simulations are to be run and is the first step in building an emulator capable of quickly providing estimates of simulation results for arbitrary locations in the parameter space. We introduce an alteration to the 'OALHS' design used by Heitmann et al. that reduces the number of simulation runs required to achieve a fixed accuracy in our case study by a factor of two. We also compare interpolation procedures for emulators and find that interpolation via Gaussian process models and via the much-easier-to-implement polynomial interpolation have comparable accuracy. A very simple emulation-building procedure consisting of a design sampled from the parameter prior distribution, combined with interpolation via polynomials also performs well. Although our primary motivation is efficient emulators of nonlinear cosmological N-body simulations, in an appendix we describe an emulator for the cosmic microwave background temperature power spectrum publicly available as a computer code.
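
    The "very simple emulation-building procedure" mentioned above, a space-filling design plus polynomial interpolation, can be sketched in a few lines. The example below uses a Latin hypercube design (a stand-in for the OALHS design, whose exact construction differs) and a quadratic least-squares surface fitted over a toy two-parameter "simulator"; the parameter names, ranges, and toy function are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import qmc

# 1. Simulation design: space-filling samples in a 2D parameter box
sampler = qmc.LatinHypercube(d=2, seed=42)
design = qmc.scale(sampler.random(n=40), l_bounds=[0.25, 0.7], u_bounds=[0.40, 0.9])

# 2. "Run the simulations" -- here a cheap stand-in for an expensive code
def expensive_simulation(theta):
    om, s8 = theta
    return s8**2 * om**0.55          # toy observable

y = np.array([expensive_simulation(t) for t in design])

# 3. Emulator: least-squares fit of a quadratic polynomial surface
def quad_features(X):
    om, s8 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(om), om, s8, om * s8, om**2, s8**2])

coef, *_ = np.linalg.lstsq(quad_features(design), y, rcond=None)

# 4. Fast prediction anywhere inside the parameter box
theta_new = np.array([[0.31, 0.82]])
print("emulated:", quad_features(theta_new) @ coef)
print("true    :", expensive_simulation(theta_new[0]))
```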

  12. High-resolution simulation of link-level vehicle emissions and concentrations for air pollutants in a traffic-populated eastern Asian city

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-08-01

    Full Text Available Vehicle emissions containing air pollutants create substantial environmental impacts on air quality for many traffic-populated cities in eastern Asia. A high-resolution emission inventory is a useful tool, compared with traditional tools (e.g., the registration-data-based approach), for accurately evaluating real-world traffic dynamics and their environmental burden. In this study, Macau, one of the most populated cities in the world, is selected to demonstrate a high-resolution simulation of vehicular emissions and their contribution to air pollutant concentrations by coupling multiple models. First, traffic volumes by vehicle category on 47 typical roads were investigated during weekdays in 2010 and further applied in a network demand simulation with the TransCAD model to establish hourly profiles of link-level vehicle counts. Local vehicle driving speed and vehicle age distribution data were also collected in Macau. Second, based on a localized vehicle emission model (the emission factor model for the Beijing vehicle fleet – Macau, EMBEV–Macau), this study established a link-based vehicle emission inventory in Macau with high resolution meshed in a temporal and spatial framework. Furthermore, we employed the AERMOD (AMS/EPA Regulatory Model) model to map concentrations of CO and primary PM2.5 contributed by local vehicle emissions during weekdays in November 2010. This study has discerned the strong impact of traffic flow dynamics on the temporal and spatial patterns of vehicle emissions, such as a geographic discrepancy of spatial allocation up to 26 % between THC and PM2.5 emissions owing to spatially heterogeneous vehicle-use intensity between motorcycles and diesel fleets. We also identified that the estimated CO2 emissions from gasoline vehicles agreed well with the statistical fuel consumption in Macau. Therefore, this paper provides a case study and a solid framework for developing high-resolution environment assessment tools for other
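
    The link-based inventory idea, emissions per road link as traffic count times an emission factor times link length, summed over vehicle classes, can be illustrated with the toy calculation below. All road names, counts, and emission factors are invented placeholders, not the Macau survey data or the EMBEV–Macau factors.

```python
# Toy link-level inventory: E_link = sum over vehicle classes of
# (hourly count) x (emission factor, g/km) x (link length, km).
links = {
    "Avenida A": {"length_km": 1.2,
                  "counts": {"motorcycle": 900, "gasoline_car": 1500, "diesel_bus": 60}},
    "Rua B":     {"length_km": 0.6,
                  "counts": {"motorcycle": 400, "gasoline_car": 700, "diesel_bus": 25}},
}
ef_pm25_g_per_km = {"motorcycle": 0.02, "gasoline_car": 0.003, "diesel_bus": 0.25}

for name, link in links.items():
    e = sum(n * ef_pm25_g_per_km[cls] * link["length_km"]
            for cls, n in link["counts"].items())
    print(f"{name}: {e:.1f} g PM2.5 per hour")
```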

  13. High-Resolution Mesoscale Simulations of the 6-7 May 2000 Missouri Flash Flood: Impact of Model Initialization and Land Surface Treatment

    Science.gov (United States)

    Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.

    2004-01-01

    High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40-km grid spacing and 3-hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (MCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture best reproduces the intensity and location of observed rainfall.

  14. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    Science.gov (United States)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will be used by the end of the project (March 2016) and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic region as resolution increases. More specifically, the well-known atmospheric blocking negative bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  15. AMM15: a new high-resolution NEMO configuration for operational simulation of the European north-west shelf

    Science.gov (United States)

    Graham, Jennifer A.; O'Dea, Enda; Holt, Jason; Polton, Jeff; Hewitt, Helene T.; Furner, Rachel; Guihou, Karen; Brereton, Ashley; Arnold, Alex; Wakelin, Sarah; Castillo Sanchez, Juan Manuel; Mayorga Adame, C. Gabriela

    2018-02-01

    This paper describes the next-generation ocean forecast model for the European north-west shelf, which will become the basis of operational forecasts in 2018. This new system will provide a step change in resolution and therefore our ability to represent small-scale processes. The new model has a resolution of 1.5 km compared with a grid spacing of 7 km in the current operational system. AMM15 (Atlantic Margin Model, 1.5 km) is introduced as a new regional configuration of NEMO v3.6. Here we describe the technical details behind this configuration, with modifications appropriate for the new high-resolution domain. Results from a 30-year non-assimilative run using the AMM15 domain demonstrate the ability of this model to represent the mean state and variability of the region. Overall, there is an improvement in the representation of the mean state across the region, suggesting similar improvements may be seen in the future operational system. However, the reduction in seasonal bias is greater off-shelf than on-shelf. In the North Sea, biases are largely unchanged. Since there has been no change to the vertical resolution or parameterization schemes, performance improvements are not expected in regions where stratification is dominated by vertical processes rather than advection. This highlights the fact that increased horizontal resolution will not lead to domain-wide improvements. Further work is needed to target bias reduction across the north-west shelf region.

  16. High Resolution Elevation Contours

    Data.gov (United States)

    Minnesota Department of Natural Resources — This dataset contains contours generated from high-resolution data sources such as LiDAR. Generally speaking, these contours have an interval of 2 feet or less.

  17. Vertical Rise Velocity of Equatorial Plasma Bubbles Estimated from Equatorial Atmosphere Radar Observations and High-Resolution Bubble Model Simulations

    Science.gov (United States)

    Yokoyama, T.; Ajith, K. K.; Yamamoto, M.; Niranjan, K.

    2017-12-01

    The equatorial plasma bubble (EPB) is a well-known phenomenon in the equatorial ionospheric F region. As it causes severe scintillation in the amplitude and phase of radio signals, it is important to understand and forecast the occurrence of EPBs from a space weather point of view. The development of EPBs is presently believed to be an evolution of the generalized Rayleigh-Taylor instability. We have already developed a 3D high-resolution bubble (HIRB) model with a grid spacing as small as 1 km and presented the nonlinear growth of EPBs, which shows very turbulent internal structures such as bifurcation and pinching. As EPBs have field-aligned structures, the latitude range that is affected by EPBs depends on the apex altitude of EPBs over the dip equator. However, it has not been easy to observe the apex altitude and vertical rise velocity of EPBs. The Equatorial Atmosphere Radar (EAR) in Indonesia is capable of steering radar beams quickly so that the growth phase of EPBs can be captured clearly. The vertical rise velocities of the EPBs observed around the midnight hours are significantly smaller than those observed in postsunset hours. Further, the vertical growth of the EPBs around midnight hours ceases at relatively lower altitudes, whereas the majority of EPBs at postsunset hours were found to have grown beyond the maximum detectable altitude of the EAR. The HIRB model with varying background conditions is employed to investigate the possible factors that control the vertical rise velocity and maximum attainable altitudes of EPBs. The estimated rise velocities from EAR observations at both postsunset and midnight hours are, in general, consistent with the nonlinear evolution of EPBs from the HIRB model.

  18. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B.

    2013-01-01

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for approval by the domestic regulatory authority. Thus, highly detailed modeling efforts are required in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high-resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and the complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g., surface sources or reflective surfaces. In the framework of the present research, a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high-resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  19. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B. [RWTH Aachen Univ. (Germany). Inst. of Nuclear Fuel Cycle; Damm, G. [Research Center Juelich (Germany)

    2013-11-15

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for approval by the domestic regulatory authority. Thus, highly detailed modeling efforts are required in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high-resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and the complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g., surface sources or reflective surfaces. In the framework of the present research, a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high-resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  20. Atlantic hurricanes and associated insurance loss potentials in future climate scenarios: limitations of high-resolution AGCM simulations

    Directory of Open Access Journals (Sweden)

    Thomas F. Stocker

    2012-01-01

    Full Text Available Potential future changes in tropical cyclone (TC) characteristics are among the more serious regional threats of global climate change. Therefore, a better understanding of how anthropogenic climate change may affect TCs and how these changes translate into socio-economic impacts is required. Here, we apply a TC detection and tracking method that was developed for ERA-40 data to time-slice experiments of two atmospheric general circulation models, namely the fifth version of the European Centre Hamburg model (MPI, Hamburg, Germany; T213) and the Japan Meteorological Agency/Meteorological Research Institute model (MRI, Tsukuba city, Japan; TL959). For each model, two climate simulations are available: a control simulation for present-day conditions to evaluate the model against observations, and a scenario simulation to assess future changes. The evaluation of the control simulations shows that the number of intense storms is underestimated due to the model resolution. To overcome this deficiency, simulated cyclone intensities are scaled to the best-track data, leading to a better representation of the TC intensities. Both models project an increased number of major hurricanes and modified trajectories in their scenario simulations. These changes have an effect on the projected loss potentials. However, these state-of-the-art models still yield contradictory results, and therefore they are not yet suitable to provide robust estimates of losses due to uncertainties in simulated hurricane intensity, location and frequency.

  1. Validation of high-resolution aerosol optical thickness simulated by a global non-hydrostatic model against remote sensing measurements

    Science.gov (United States)

    Goto, Daisuke; Sato, Yousuke; Yashiro, Hisashi; Suzuki, Kentaroh; Nakajima, Teruyuki

    2017-02-01

    A high-performance computing resource allows us to conduct numerical simulations with a horizontal grid spacing that is sufficiently fine to resolve cloud systems. The cutting-edge computational capability, which was provided by the K computer at RIKEN in Japan, enabled the authors to perform long-term, global simulations of air pollution and clouds with unprecedentedly high horizontal resolutions. In this study, a next-generation model capable of simulating global air pollution with O(10 km) grid spacing was developed by coupling an atmospheric chemistry model to the Non-hydrostatic Icosahedral Atmospheric Model (NICAM). Using the newly developed model, month-long simulations for July were conducted with 14 km grid spacing on the K computer. Regarding the global distributions of aerosol optical thickness (AOT), it was found that the correlation coefficient (CC) between the simulation and AERONET measurements was approximately 0.7, and the normalized mean bias was -10%. The simulated AOT was also compared with satellite-retrieved values; the CC was approximately 0.6. The radiative effects due to each chemical species (dust, sea salt, organics, and sulfate) were also calculated and compared with multiple measurements. As a result, the simulated fluxes of upward shortwave radiation at the top of the atmosphere and the surface compared well with the observed values, whereas those of downward shortwave radiation at the surface were underestimated, even if all aerosol components were considered. However, the aerosol radiative effects on the downward shortwave flux at the surface were found to be as high as 10 W/m2 on a global scale; thus, simulated aerosol distributions can strongly affect the simulated air temperature and dynamic circulation.
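
    The two evaluation metrics quoted above, the correlation coefficient (CC) and the normalized mean bias (NMB), are straightforward to compute for paired simulated and observed AOT values, as in the sketch below. The sample numbers are made up for illustration, and the NMB here follows the common Σ(sim−obs)/Σ(obs) convention, which may differ in detail from the paper's exact definition.

```python
import numpy as np

def validation_metrics(sim, obs):
    """Correlation coefficient and normalized mean bias between paired
    simulated and observed values (e.g., AOT at AERONET sites)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    cc = np.corrcoef(sim, obs)[0, 1]
    nmb = (sim - obs).sum() / obs.sum()          # often quoted in percent
    return cc, 100.0 * nmb

sim = [0.21, 0.35, 0.18, 0.50, 0.27]
obs = [0.25, 0.33, 0.22, 0.55, 0.30]
cc, nmb = validation_metrics(sim, obs)
print(f"CC = {cc:.2f}, NMB = {nmb:.1f}%")
```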

  2. Simulation of synoptic and sub-synoptic phenomena over East Africa and Arabian Peninsula for current and future climate using a high resolution AGCM

    KAUST Repository

    Raj, Jerry

    2015-04-01

    Climate regimes of East Africa and Arabia are complex and poorly understood. East Africa has large-scale tropical controls like major convergence zones and air streams. The region is in the proximity of two monsoons, north-east and south-west, and the humid and thermally unstable Congo air stream. The domain comprises regions with one, two, and three rainfall maxima, and the rainfall pattern over this region has high spatial variability. To explore the synoptic and sub-synoptic phenomena that drive the climate of the region, we conducted climate simulations using a high-resolution Atmospheric General Circulation Model (AGCM), GFDL's High Resolution Atmospheric Model (HiRAM). Historic simulations (1975-2004) and future projections (2007-2050), with both RCP 4.5 and RCP 8.5 pathways, were performed according to the CORDEX standard. The sea surface temperature (SST) was prescribed from the 2°x2.5° latitude-longitude resolution GFDL Earth System Model runs of IPCC AR5, as the bottom boundary condition over the ocean. Our simulations were conducted at a horizontal grid spacing of 25 km, which is an ample resolution for regional climate simulation. In comparison with regional models, global HiRAM has the advantage of accounting for two-way interaction between regional and global scale processes. Our initial results show that HiRAM simulations for the historic period reproduce the regional climate in East Africa and the Arabian Peninsula well, with its complex interplay of regional and global processes. Our future projections indicate warming and increased precipitation over the Ethiopian highlands and the Greater Horn of Africa. We found significant regional differences between the RCP 4.5 and RCP 8.5 projections; the west coast of the Arabian Peninsula, for example, shows anomalies of opposite sign in the two simulations.

  3. The Abacus Cosmos: A Suite of Cosmological N-body Simulations

    Science.gov (United States)

    Garrison, Lehman H.; Eisenstein, Daniel J.; Ferrer, Douglas; Tinker, Jeremy L.; Pinto, Philip A.; Weinberg, David H.

    2018-06-01

    We present a public data release of halo catalogs from a suite of 125 cosmological N-body simulations from the ABACUS project. The simulations span 40 wCDM cosmologies centered on the Planck 2015 cosmology at two mass resolutions, 4 × 10^10 h^-1 M_⊙ and 1 × 10^10 h^-1 M_⊙, in 1.1 h^-1 Gpc and 720 h^-1 Mpc boxes, respectively. The boxes are phase-matched to suppress sample variance and isolate cosmology dependence. Additional volume is available via 16 boxes of fixed cosmology and varied phase; a few boxes of single-parameter excursions from Planck 2015 are also provided. Catalogs spanning z = 1.5 to 0.1 are available for friends-of-friends and ROCKSTAR halo finders and include particle subsamples. All data products are available at https://lgarrison.github.io/AbacusCosmos.

  4. Numerical techniques for large cosmological N-body simulations

    International Nuclear Information System (INIS)

    Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

    1985-01-01

    We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle mesh (PM) codes and P^3M codes in which a higher resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r^2 particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P^3M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important.
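
    As a concrete illustration of the particle-mesh idea discussed above — mass assignment to a grid, an FFT-based Poisson solve, and interpolation of the mesh force back to the particles — the sketch below implements a minimal periodic PM acceleration step with nearest-grid-point assignment. It is a pedagogical toy in arbitrary units, not the authors' code, and it omits the higher-resolution particle-particle correction that distinguishes P^3M.

```python
import numpy as np

def pm_accelerations(pos, box, ngrid, G=1.0, mass=1.0):
    """Minimal periodic particle-mesh step: nearest-grid-point (NGP) mass
    assignment, FFT Poisson solve, spectral gradient, NGP read-back."""
    # 1. deposit particle mass onto the mesh
    cells = np.floor(pos / box * ngrid).astype(int) % ngrid
    rho = np.zeros((ngrid,) * 3)
    np.add.at(rho, tuple(cells.T), mass)
    delta = rho / rho.mean() - 1.0                    # overdensity field

    # 2. solve  laplacian(phi) = 4 pi G rho_bar delta  in Fourier space
    kfreq = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                 # avoid division by zero
    rho_bar = mass * len(pos) / box**3
    phi_k = -4.0 * np.pi * G * rho_bar * np.fft.fftn(delta) / k2
    phi_k[0, 0, 0] = 0.0                              # remove the mean mode

    # 3. acceleration a_i = -d(phi)/dx_i, computed spectrally, read at particles
    acc = np.empty_like(pos)
    for i, ki in enumerate((kx, ky, kz)):
        a_grid = np.fft.ifftn(-1j * ki * phi_k).real
        acc[:, i] = a_grid[tuple(cells.T)]
    return acc

# Example: 1000 random particles in a unit box on a 32^3 mesh (arbitrary units)
rng = np.random.default_rng(1)
print(pm_accelerations(rng.random((1000, 3)), box=1.0, ngrid=32).shape)
```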

  5. Multi-Scale Initial Conditions For Cosmological Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Oliver; /KIPAC, Menlo Park; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg

    2011-11-04

    We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has Fourier-space behaviour identical to that of traditional approaches. Using a suite of re-simulations of a galaxy cluster halo, our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
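
    A single-level sketch of the underlying construction (Gaussian modes drawn from a power spectrum, then 1LPT/Zel'dovich displacements ψ_k = i k δ_k / k²) is given below. It leaves out the multi-scale refinement, real-space kernels and multi-grid solver that are the actual subject of the paper; grid size, box length and the power law are arbitrary stand-ins.

      # Single-level sketch of 1LPT (Zel'dovich) initial conditions on a periodic grid.
      # Toy spectrum and units; not the multi-scale algorithm described above.
      import numpy as np

      n, box = 64, 100.0
      rng = np.random.default_rng(1)

      kf = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
      kx, ky, kz = np.meshgrid(kf, kf, 2 * np.pi * np.fft.rfftfreq(n, d=box / n), indexing="ij")
      k2 = kx**2 + ky**2 + kz**2
      k2[0, 0, 0] = 1.0                                   # placeholder; zero mode removed below

      pk = k2**(-1.5 / 2)                                 # toy P(k) ~ k^-1.5
      delta_k = np.sqrt(pk / 2.0) * (rng.normal(size=k2.shape) + 1j * rng.normal(size=k2.shape))
      delta_k[0, 0, 0] = 0.0

      # 1LPT displacement field, one component per axis: psi_k = i k delta_k / k^2
      psi = [np.fft.irfftn(1j * kc * delta_k / k2, s=(n, n, n)) for kc in (kx, ky, kz)]

      # Displace particles from an unperturbed lattice (velocities would follow from the growth rate)
      grid = (np.indices((n, n, n)) + 0.5) * (box / n)
      positions = [(g + p) % box for g, p in zip(grid, psi)]
      print(positions[0].shape, psi[0].std())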

  6. Hydrodynamic Simulation of the Cosmological X-Ray Background

    Science.gov (United States)

    Croft, Rupert A. C.; Di Matteo, Tiziana; Davé, Romeel; Hernquist, Lars; Katz, Neal; Fardal, Mark A.; Weinberg, David H.

    2001-08-01

    We use a hydrodynamic simulation of an inflationary cold dark matter model with a cosmological constant to predict properties of the extragalactic X-ray background (XRB). We focus on emission from the intergalactic medium (IGM), with particular attention to diffuse emission from warm-hot gas that lies in relatively smooth filamentary structures between galaxies and galaxy clusters. We also include X-rays from point sources associated with galaxies in the simulation, and we make maps of the angular distribution of the emission. Although much of the X-ray luminous gas has a filamentary structure, the filaments are not evident in the simulated maps because of projection effects. In the soft (0.5-2 keV) band, our calculated mean intensity of radiation from intergalactic and cluster gas is 2.3 × 10^-12 erg s^-1 cm^-2 deg^-2, 35% of the total soft-band emission. This intensity is compatible at the ~1 σ level with estimates of the unresolved soft background intensity from deep ROSAT and Chandra measurements. Only 4% of the hard (2-10 keV) emission is associated with intergalactic gas. Relative to active galactic nuclei flux, the IGM component of the XRB peaks at a lower redshift (median z~0.45) and spans a narrower redshift range, so its clustering makes an important contribution to the angular correlation function of the total emission. The clustering on the scales accessible to our simulation (0.1'-10') is significant, with an amplitude roughly consistent with an extrapolation of recent ROSAT results to small scales. A cross-correlation analysis of the XRB against nearby galaxies taken from a simulated redshift survey also yields a strong signal from the IGM. Our conclusions about the soft background intensity differ from those of some recent papers that have argued that the expected emission from gas in galaxy, group, and cluster halos would exceed the observed background unless much of the gas is expelled by supernova feedback. We obtain reasonable compatibility with

  7. The Impact of High-Resolution Sea Surface Temperatures on the Simulated Nocturnal Florida Marine Boundary Layer

    Science.gov (United States)

    LaCasse, Katherine M.; Splitt, Michael E.; Lazarus, Steven M.; Lapenta, William M.

    2008-01-01

    High- and low-resolution sea surface temperature (SST) analysis products are used to initialize the Weather Research and Forecasting (WRF) Model for May 2004 for short-term forecasts over Florida and surrounding waters. Initial and boundary conditions for the simulations were provided by a combination of observations, large-scale model output, and analysis products. The impact of using a 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) SST composite on subsequent evolution of the marine atmospheric boundary layer (MABL) is assessed through simulation comparisons and limited validation. Model results are presented for individual simulations, as well as for aggregates of easterly- and westerly-dominated low-level flows. The simulation comparisons show that the use of MODIS SST composites results in enhanced convergence zones, earlier and more intense horizontal convective rolls, and an increase in precipitation as well as a change in precipitation location. Validation of 10-m winds with buoys shows a slight improvement in wind speed. The most significant results of this study are that 1) vertical wind stress divergence and pressure gradient accelerations across the Florida Current region vary in importance as a function of flow direction and stability and 2) the warmer Florida Current in the MODIS product transports heat vertically and downwind of this heat source, modifying the thermal structure and the MABL wind field primarily through pressure gradient adjustments.

  8. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  9. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    Science.gov (United States)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  10. High resolution solar observations

    International Nuclear Information System (INIS)

    Title, A.

    1985-01-01

    Currently there is a world-wide effort to develop the optical technology required for large diffraction-limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction-limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter class and above diffraction-limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an Earth-based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself.

  11. Dark matter direct detection signals inferred from a cosmological N-body simulation with baryons

    International Nuclear Information System (INIS)

    Ling, F.-S.; Nezri, E.; Athanassoula, E.; Teyssier, R.

    2010-01-01

    We extract at redshift z = 0 a Milky Way sized object including gas, stars and dark matter (DM) from a recent, high-resolution cosmological N-body simulation with baryons. Its resolution is sufficient to witness the formation of a rotating disk and bulge at the center of the halo potential, therefore providing a realistic description of the birth and the evolution of galactic structures in the ΛCDM cosmology paradigm. The phase-space structure of the central galaxy reveals that, throughout a thick region, the dark halo is co-rotating on average with the stellar disk. At the Earth's location, the rotating component, sometimes called dark disk in the literature, is characterized by a minimum lag velocity v_lag ≅ 75 km/s, in which case it contributes to around 25% of the total DM local density, whose value is ρ_DM ≅ 0.37 GeV/cm³. The velocity distributions also show strong deviations from pure Gaussian and Maxwellian distributions, with a sharper drop of the high velocity tail. We give a detailed study of the impact of these features on the predictions for DM signals in direct detection experiments. In particular, the question of whether the modulation signal observed by DAMA is or is not excluded by limits set by other experiments (CDMS, XENON and CRESST...) is re-analyzed and compared to the case of a standard Maxwellian halo. We consider spin-independent interactions for both the elastic and the inelastic scattering scenarios. For the first time, we calculate the allowed regions for DAMA and the exclusion limits of other null experiments directly from the velocity distributions found in the simulation. We then compare these results with the predictions of various analytical distributions. We find that the compatibility between DAMA and the other experiments is improved. In the elastic scenario, the DAMA modulation signal is slightly enhanced in the so-called channeling region, as a result of several effects that include a departure from a Maxwellian
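
    One way such velocity distributions enter direct-detection predictions is through the mean inverse speed η(v_min) that multiplies the scattering rate. The Monte Carlo sketch below compares a Maxwellian-like halo with a toy two-component model in which a co-rotating component lags by about 75 km/s and carries about 25% of the local density; the Gaussian forms and all numbers are illustrative assumptions, not the distributions extracted from the simulation.

      # Mean inverse speed eta(v_min) from Monte Carlo samples of the DM velocity
      # distribution in the Earth frame. Toy parameters, not the paper's.
      import numpy as np

      rng = np.random.default_rng(0)
      V_EARTH = np.array([0.0, 230.0, 0.0])    # km/s, Earth velocity in the Galactic frame (approx.)

      def gaussian_component(n, bulk, sigma):
          """Isotropic Gaussian component with bulk velocity 'bulk' and 1-D dispersion 'sigma'."""
          return rng.normal(loc=bulk, scale=sigma, size=(n, 3))

      def eta(v_min_grid, galactic_velocities):
          """eta(v_min) = <1/v> over particles with Earth-frame speed above v_min."""
          speeds = np.linalg.norm(galactic_velocities - V_EARTH, axis=1)
          inv = 1.0 / speeds
          return np.array([inv[speeds > vmin].sum() / len(speeds) for vmin in v_min_grid])

      n = 200_000
      halo = gaussian_component(n, [0.0, 0.0, 0.0], 155.0)               # smooth halo
      disk = gaussian_component(n // 3, [0.0, 230.0 - 75.0, 0.0], 50.0)  # toy "dark disk", v_lag ~ 75 km/s

      v_min = np.linspace(0.0, 700.0, 8)
      for vm, e1, e2 in zip(v_min, eta(v_min, halo), eta(v_min, np.vstack([halo, disk]))):
          print(f"v_min={vm:5.0f} km/s  halo only: {e1:.5f}  halo + dark disk: {e2:.5f}")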

  12. A High-resolution Simulation of the Transport of Gaseous Pollutants from a Severe Effusive Volcanic Eruption

    Science.gov (United States)

    Durand, J.; Tulet, P.; Filippi, J. B.; Leriche, M.

    2014-12-01

    The Reunion Island experienced its biggest eruption of the Piton de la Fournaise volcano during April 2007. Known as "the eruption of the century", this event degassed more than 230 kt of SO2. These emissions led to important health issues, accompanied by environmental and infrastructure degradation. We present a modeling study that uses the mesoscale chemical model MesoNH to simulate the transport of gaseous SO2 between April 2nd and 7th, with a focus on the influence of heat fluxes from the lava. Three domains are nested from 2 km to 100 m horizontal grid spacing, allowing us to better represent the phenomenology of this eruption. The modeling study couples on-line (i) the MesoNH mesoscale dynamics, (ii) a gas and aqueous chemical scheme, and (iii) a surface scheme that integrates a new scheme for the lava heat flux and its surface propagation. Thus, all fluxes (sensible and latent heat, vapor, SO2, CO2, CO) are triggered depending on the lava dynamics. Our simulations reproduce quite faithfully the surface field observations of SO2. Various sensitivity analyses show that the volcanic sulfur distribution was mainly controlled by the lava heat flow. Without the heat flow parameterization, the surface concentrations are multiplied by a factor of 30 compared to the reference simulation. Numerical modeling allows us to distinguish the acid rain produced by the emission of water vapor and chloride when the lava flows into the seawater from that formed by the mixing of the volcanic SO2 into the raindrops of convective clouds.

  13. Intramolecular diffusive motion in alkane monolayers studied by high-resolution quasielastic neutron scattering and molecular dynamics simulations

    DEFF Research Database (Denmark)

    Hansen, Flemming Yssing; Criswell, L.; Fuhrmann, D

    2004-01-01

    Molecular dynamics simulations of a tetracosane (n-C24H50) monolayer adsorbed on a graphite basal-plane surface show that there are diffusive motions associated with the creation and annihilation of gauche defects occurring on a time scale of similar to0.1-4 ns. We present evidence...... that these relatively slow motions are observable by high-energy-resolution quasielastic neutron scattering (QNS) thus demonstrating QNS as a technique, complementary to nuclear magnetic resonance, for studying conformational dynamics on a nanosecond time scale in molecular monolayers....

  14. High resolution temperature mapping of gas turbine combustor simulator exhaust with femtosecond laser induced fiber Bragg gratings

    Science.gov (United States)

    Walker, Robert B.; Yun, Sangsig; Ding, Huimin; Charbonneau, Michel; Coulas, David; Lu, Ping; Mihailov, Stephen J.; Ramachandran, Nanthan

    2017-04-01

    Femtosecond infrared (fs-IR) laser written fiber Bragg gratings (FBGs) have demonstrated great potential for extreme sensing. Such conditions are inherent in advanced gas turbine engines under development to reduce greenhouse gas emissions; and the ability to measure temperature gradients in these harsh environments is currently limited by the lack of sensors and controls capable of withstanding the high temperature, pressure and corrosive conditions present. This paper discusses fabrication and deployment of several fs-IR written FBG arrays for monitoring exhaust temperature gradients of a gas turbine combustor simulator. Results include contour plots of measured temperature gradients contrasted with thermocouple data.
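
    The temperature readout of an FBG rests on the fractional Bragg-wavelength shift Δλ_B/λ_B ≈ (α + ξ)ΔT, with α the thermal expansion and ξ the thermo-optic coefficient of the fibre. The sketch below uses textbook-order silica values as assumptions, not the calibration of the sensors described above, and ignores the nonlinearity that matters at the highest temperatures.

      # Convert an FBG Bragg-wavelength shift to a temperature change with the linear
      # relation dLambda/Lambda = (alpha + xi) * dT. Coefficients are generic silica
      # values (assumed), not calibration constants from the paper.
      ALPHA = 0.55e-6   # thermal expansion coefficient, 1/K (approx.)
      XI = 8.6e-6       # thermo-optic coefficient, 1/K (approx.)

      def temperature_change(lambda_bragg_nm, delta_lambda_nm):
          return delta_lambda_nm / (lambda_bragg_nm * (ALPHA + XI))

      # A 1550 nm grating shifting by +0.5 nm corresponds to roughly +35 K.
      print(round(temperature_change(1550.0, 0.5), 1), "K")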

  15. Vegetation and Carbon Cycle Dynamics in the High-Resolution Transient Holocene Simulations Using the MPI Earth System Model

    Science.gov (United States)

    Brovkin, V.; Lorenz, S.; Raddatz, T.; Claussen, M.; Dallmeyer, A.

    2017-12-01

    One of the interesting periods in which to investigate the climatic role of the terrestrial biosphere is the Holocene, when, despite the relatively steady global climate, the atmospheric CO2 grew by about 20 ppm from 7 kyr BP to pre-industrial. We use a new setup of the Max Planck Institute Earth System Model MPI-ESM1 consisting of the latest version of the atmospheric model ECHAM6, including the land surface model JSBACH3 with carbon cycle and vegetation dynamics, coupled to the ocean circulation model MPI-OM, which includes the HAMOCC model of ocean biogeochemistry. The model has been run for several simulations over the Holocene period of the last 8000 years under forcing data sets of orbital insolation, atmospheric greenhouse gases, volcanic aerosols, solar irradiance and stratospheric ozone, as well as land-use changes. In response to this forcing, the land carbon storage increased by about 60 PgC between 8 and 4 kyr BP, stayed relatively constant until 2 kyr BP, and decreased by about 90 PgC by 1850 AD due to land use changes. At 8 kyr BP, vegetation cover was much denser in Africa, mainly due to increased rainfall in response to the orbital forcing. Boreal forests moved northward in both North America and Eurasia. The boreal forest expansion in North America is much less pronounced than in Eurasia. Simulated physical ocean fields, including surface temperatures and meridional overturning, do not change substantially in the Holocene. Carbonate ion concentration in the deep ocean decreases in both prescribed and interactive CO2 simulations. Comparison with available proxies for terrestrial vegetation and for the ocean carbonate chemistry will be presented. Vegetation and soil carbon changes significantly affected atmospheric CO2 during the periods of strong volcanic eruptions. In response to the eruption-caused cooling, the land initially stores more carbon as respiration decreases, but then it releases even more carbon due to the productivity decrease. This decadal

  16. Chirality in MoS2 nano tubes studied by molecular dynamics simulation and images of high resolution microscopy

    International Nuclear Information System (INIS)

    Perez A, M.

    2003-01-01

    Nanotubes are a new material, intensely studied since 1991 due to characteristics that are the result of their nanometric size and of the associated quantum effects. A great part of these investigations has been focused on characterization, modelling and computerized simulation, in order to study their properties and possible behavior without the necessity of real manipulation of the material. Obtaining the structural properties of the different forms of particles of nanometric dimensions observed in the Transmission Electron Microscope is of great aid in studying the mesoscopic characteristics of the material. (Author)

  17. Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model

    International Nuclear Information System (INIS)

    Reed, K. A.

    2015-01-01

    Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid flow component of any general circulation model (GCM), is often overlooked in the analysis of a model's ability to simulate TCs compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages despite the latter having a slightly more favorable TC environment. Furthermore, these results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key points: the impact of the GCM dynamical core is often overlooked in TC assessments; the CAM5 dynamical core has a significant impact on TC frequency and intensity; a larger effort is needed to better understand this uncertainty.

  18. Galaxy Formation Efficiency and the Multiverse Explanation of the Cosmological Constant with EAGLE Simulations

    Science.gov (United States)

    Barnes, Luke A.; Elahi, Pascal J.; Salcido, Jaime; Bower, Richard G.; Lewis, Geraint F.; Theuns, Tom; Schaller, Matthieu; Crain, Robert A.; Schaye, Joop

    2018-04-01

    Models of the very early universe, including inflationary models, are argued to produce varying universe domains with different values of fundamental constants and cosmic parameters. Using the cosmological hydrodynamical simulation code from the EAGLE collaboration, we investigate the effect of the cosmological constant on the formation of galaxies and stars. We simulate universes with values of the cosmological constant ranging from Λ = 0 to Λ0 × 300, where Λ0 is the value of the cosmological constant in our Universe. Because the global star formation rate in our Universe peaks at t = 3.5 Gyr, before the onset of accelerating expansion, increases in Λ of even an order of magnitude have only a small effect on the star formation history and efficiency of the universe. We use our simulations to predict the observed value of the cosmological constant, given a measure of the multiverse. Whether the cosmological constant is successfully predicted depends crucially on the measure. The impact of the cosmological constant on the formation of structure in the universe does not seem to be a sharp enough function of Λ to explain its observed value alone.
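
    The timing argument in the abstract, that the star formation peak at t ≈ 3.5 Gyr precedes the onset of accelerated expansion, can be checked with a toy flat ΛCDM calculation: for Λ = f Λ0 the acceleration starts at a_acc = (Ω_m / (2 f Ω_Λ,0))^(1/3), and the corresponding cosmic time follows by integrating the Friedmann equation. Planck-like parameters are assumed below (not the EAGLE run settings) and the curvature induced by rescaling Λ is ignored.

      # Toy flat LambdaCDM estimate of when accelerated expansion begins for several
      # multiples of Lambda_0. Assumed Planck-like parameters; curvature ignored.
      import numpy as np
      from scipy.integrate import quad

      H0 = 67.7 / 3.086e19            # Hubble constant in 1/s (67.7 km/s/Mpc)
      OMEGA_M, OMEGA_L0 = 0.31, 0.69
      GYR = 3.156e16                  # seconds per Gyr

      def age_at(a, lam_factor):
          """Cosmic time (Gyr) at scale factor a for Lambda = lam_factor * Lambda_0."""
          hubble = lambda x: H0 * np.sqrt(OMEGA_M * x**-3 + lam_factor * OMEGA_L0)
          t, _ = quad(lambda x: 1.0 / (x * hubble(x)), 1e-8, a)
          return t / GYR

      for fac in (1, 10, 100):
          a_acc = (OMEGA_M / (2 * fac * OMEGA_L0)) ** (1.0 / 3.0)   # where a_ddot changes sign
          print(f"Lambda = {fac:3d} x Lambda_0: acceleration starts at a = {a_acc:.3f}, "
                f"t = {age_at(a_acc, fac):.2f} Gyr")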

  19. High-resolution fast temperature mapping of a gas turbine combustor simulator with femtosecond infrared laser written fiber Bragg gratings

    Science.gov (United States)

    Walker, Robert B.; Yun, Sangsig; Ding, Huimin; Charbonneau, Michel; Coulas, David; Ramachandran, Nanthan; Mihailov, Stephen J.

    2017-02-01

    Femtosecond infrared (fs-IR) written fiber Bragg gratings (FBGs), have demonstrated great potential for extreme sensing. Such conditions are inherent to the advanced gas turbine engines under development to reduce greenhouse gas emissions; and the ability to measure temperature gradients in these harsh environments is currently limited by the lack of sensors and controls capable of withstanding the high temperature, pressure and corrosive conditions present. This paper discusses fabrication and deployment of several fs-IR written FBG arrays, for monitoring the sidewall and exhaust temperature gradients of a gas turbine combustor simulator. Results include: contour plots of measured temperature gradients contrasted with thermocouple data, discussion of deployment strategies and comments on reliability.

  20. Mesoscale spiral vortex embedded within a Lake Michigan snow squall band - High resolution satellite observations and numerical model simulations

    Science.gov (United States)

    Lyons, Walter A.; Keen, Cecil S.; Hjelmfelt, Mark; Pease, Steven R.

    1988-01-01

    It is known that Great Lakes snow squall convection occurs in a variety of different modes depending on various factors such as air-water temperature contrast, boundary-layer wind shear, and geostrophic wind direction. An exceptional and often neglected source of data for mesoscale cloud studies is the ultrahigh resolution multispectral data produced by Landsat satellites. On October 19, 1972, a clearly defined spiral vortex was noted in a Landsat-1 image near the southern end of Lake Michigan during an exceptionally early cold air outbreak over a still very warm lake. In a numerical simulation using a three-dimensional Eulerian hydrostatic primitive equation mesoscale model with an initially uniform wind field, a definite analog to the observed vortex was generated. This suggests that intense surface heating can be a principal cause in the development of a low-level mesoscale vortex.

  1. Vegetation and land carbon feedbacks in the high-resolution transient Holocene simulations using the MPI Earth system model

    Science.gov (United States)

    Brovkin, Victor; Lorenz, Stephan; Raddatz, Thomas

    2017-04-01

    Plants influence climate through changes in the land surface biophysics (albedo, transpiration) and concentrations of the atmospheric greenhouse gases. One of the interesting periods in which to investigate the climatic role of the terrestrial biosphere is the Holocene, when, despite the relatively steady global climate, the atmospheric CO2 grew by about 20 ppm from 7 kyr BP to pre-industrial. We use a new setup of the Max Planck Institute Earth System Model MPI-ESM1 consisting of the latest version of the atmospheric model ECHAM6, including the land surface model JSBACH3 with carbon cycle and vegetation dynamics, coupled to the ocean circulation model MPI-OM, which includes the HAMOCC model of ocean biogeochemistry. The model has been run for several simulations over the Holocene period of the last 8000 years under forcing data sets of orbital insolation, atmospheric greenhouse gases, volcanic aerosols, solar irradiance and stratospheric ozone, as well as land-use changes. In response to this forcing, the land carbon storage increased by about 60 PgC between 8 and 4 kyr BP, stayed relatively constant until 2 kyr BP, and decreased by about 90 PgC by 1850 AD due to land use changes. Vegetation and soil carbon changes significantly affected atmospheric CO2 during the periods of strong volcanic eruptions. In response to the eruption-caused cooling, the land initially stores more carbon as respiration decreases, but then it releases even more carbon due to the productivity decrease. This decadal-scale variability helps to quantify the vegetation and land carbon feedbacks during past periods when the temporal resolution of the ice-core CO2 record is not sufficient to capture fast CO2 variations. From a set of Holocene simulations with prescribed or interactive atmospheric CO2, we get estimates of the climate-carbon feedback useful for future climate studies. Members of the Hamburg Holocene Team: Jürgen Bader, Sebastian Bathiany, Victor Brovkin, Martin Claussen, Traute Cr

  2. Covariability of seasonal temperature and precipitation over the Iberian Peninsula in high-resolution regional climate simulations (1001-2099)

    Science.gov (United States)

    Fernández-Montes, S.; Gómez-Navarro, J. J.; Rodrigo, F. S.; García-Valero, J. A.; Montávez, J. P.

    2017-04-01

    Precipitation and surface temperature are interdependent variables, both as a response to atmospheric dynamics and due to intrinsic thermodynamic relationships and feedbacks between them. This study analyzes the covariability of seasonal temperature (T) and precipitation (P) across the Iberian Peninsula (IP) using regional climate paleosimulations for the period 1001-1990, driven by reconstructions of external forcings. Future climate (1990-2099) was simulated according to SRES scenarios A2 and B2. These simulations enable exploring, at high spatial resolution, robust and physically consistent relationships. In winter, positive P-T correlations dominate west-central IP (Pearson correlation coefficient ρ = +0.43, for 1001-1990), due to prevalent cold-dry and warm-wet conditions, while this relationship weakens and becomes negative towards mountainous, northern and eastern regions. In autumn, negative correlations appear in similar regions as in winter, whereas for summer they extend also to the N/NW of the IP. In spring, the whole IP depicts significant negative correlations, strongest for eastern regions (ρ = -0.51). This is due to the prevalent frequency of warm-dry and cold-wet modes in these regions and seasons. At the temporal scale, regional correlation series between seasonal anomalies of temperature and precipitation (assessed in 31-year running windows in 1001-1990) show very large multidecadal variability. For winter and spring, periodicities of about 50-60 years arise. The frequency of warm-dry and cold-wet modes appears correlated with the North Atlantic Oscillation (NAO), mainly explaining covariability changes in spring. For winter and some regions in autumn, maximum and minimum P-T correlations appear in periods with enhanced meridional or easterly circulation (low or high pressure anomalies in the Mediterranean and Europe). In spring and summer, the Atlantic Multidecadal Oscillation shows some fingerprint on the frequency of warm/cold modes. For
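
    The running-window diagnostic used above, the Pearson correlation of seasonal T and P anomalies in sliding 31-year windows, can be sketched in a few lines. The series below are synthetic placeholders with an imposed warm-dry/cold-wet tendency, not the paleosimulation output.

      # Running 31-year Pearson correlation between seasonal temperature and
      # precipitation anomalies. Synthetic input series, for illustration only.
      import numpy as np

      rng = np.random.default_rng(7)
      years = np.arange(1001, 1991)
      temp = rng.normal(size=years.size)
      precip = -0.5 * temp + rng.normal(scale=0.9, size=years.size)   # warm-dry / cold-wet tendency

      def running_correlation(x, y, window=31):
          half = window // 2
          out = np.full(x.size, np.nan)
          for i in range(half, x.size - half):
              out[i] = np.corrcoef(x[i - half:i + half + 1], y[i - half:i + half + 1])[0, 1]
          return out

      rho = running_correlation(temp, precip)
      print(np.nanmin(rho), np.nanmax(rho))   # multidecadal swings around the imposed value (~ -0.5)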

  3. Idealized climate change simulations with a high-resolution physical model: HadGEM3-GC2

    Science.gov (United States)

    Senior, Catherine A.; Andrews, Timothy; Burton, Chantelle; Chadwick, Robin; Copsey, Dan; Graham, Tim; Hyder, Pat; Jackson, Laura; McDonald, Ruth; Ridley, Jeff; Ringer, Mark; Tsushima, Yoko

    2016-06-01

    Idealized climate change simulations with a new physical climate model, HadGEM3-GC2, from the Met Office Hadley Centre are presented and contrasted with the earlier MOHC model, HadGEM2-ES. The role of atmospheric resolution is also investigated. The Transient Climate Response (TCR) is 1.9 K/2.1 K at N216/N96 and the Effective Climate Sensitivity (ECS) is 3.1 K/3.2 K at N216/N96. These are substantially lower than HadGEM2-ES (TCR: 2.5 K; ECS: 4.6 K), arising from a combination of changes in the size of climate feedbacks. While the change in the net cloud feedback between HadGEM3 and HadGEM2 is relatively small, there is a change in sign of its longwave component and a strengthening of its shortwave component. At a global scale, there is little impact of the increase in atmospheric resolution on the future climate change signal, and even at a broad regional scale many features are robust, including tropical rainfall changes; however, there are some significant exceptions. For the North Atlantic and western Europe, the tripolar pattern of winter storm changes found in most CMIP5 models is little impacted by resolution, but for the most intense storms there is a larger percentage increase in number at higher resolution than at lower resolution. Arctic sea-ice sensitivity shows a larger dependence on resolution than on atmospheric physics.

  4. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    Science.gov (United States)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to the combined effects of tides, surges and river discharges. Cork City on Ireland's southwest coast serves as the case study. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells combined with a tailored adaptive interpolation technique creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. The nested MSN_Flood model through dynamic downscaling facilitates significant improvements in accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.
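
    The ghost-cell treatment of a nested boundary can be illustrated generically, as in the sketch below: ghost cells of the fine child grid are filled by interpolating the parent solution in space and in time between two parent time levels. This is a 1-D schematic with made-up grid spacings, not the MSN_Flood internalization scheme or its tailored adaptive interpolation.

      # Generic nested-grid boundary: fill the child grid's ghost cells by linear
      # interpolation of the parent field in space and time. Schematic only.
      import numpy as np

      def fill_ghost_cells(parent_x, parent_now, parent_next, child_edge_x,
                           t_frac, n_ghost=2, dx_child=2.0):
          """Ghost-cell values just outside the child boundary at child_edge_x.

          parent_now, parent_next : parent field at the two bracketing parent time levels
          t_frac                  : fraction of the parent time step elapsed (0..1)
          """
          ghost_x = child_edge_x - dx_child * (np.arange(n_ghost) + 0.5)
          now = np.interp(ghost_x, parent_x, parent_now)     # spatial interpolation
          nxt = np.interp(ghost_x, parent_x, parent_next)
          return (1.0 - t_frac) * now + t_frac * nxt         # temporal interpolation

      # Example: a 90 m parent water-level field feeding a 2 m child grid.
      parent_x = np.arange(0.0, 900.0, 90.0)
      level_t0 = 0.5 + 0.01 * parent_x / 900.0
      level_t1 = level_t0 + 0.05
      print(fill_ghost_cells(parent_x, level_t0, level_t1, child_edge_x=450.0, t_frac=0.25))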

  5. Interfaces and strain in InGaAsP/InP heterostructures assessed with dynamical simulations of high-resolution x-ray diffraction curves

    International Nuclear Information System (INIS)

    Vandenberg, J.M.

    1995-01-01

    The interfacial structure of a lattice-matched InGaAs/InP/(100)InP superlattice with a long period of ∼630 Angstrom has been studied by fully dynamical simulations of high-resolution x-ray diffraction curves. This structure exhibits a very symmetrical x-ray pattern enveloping a large number of closely spaced satellite intensities with pronounced maxima and minima. It appears in the dynamical analysis that the position and shape of these maxima and minima are extremely sensitive to the number N of molecular layers and the atomic spacing d of the InGaAs and InP layers, and in particular to the presence of strained interfacial layers. The structural model of strained interfaces was also applied to an epitaxial lattice-matched 700 Angstrom InP/400 Angstrom InGaAsP/(100)InP heterostructure. 9 refs., 3 figs

  6. X-ray clusters from a high-resolution hydrodynamic PPM simulation of the cold dark matter universe

    Science.gov (United States)

    Bryan, Greg L.; Cen, Renyue; Norman, Michael L.; Ostriker, Jeremiah P.; Stone, James M.

    1994-01-01

    A new three-dimensional hydrodynamic code based on the piecewise parabolic method (PPM) is utilized to compute the distribution of hot gas in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) universe. Utilizing periodic boundary conditions, a box with size 85 h^-1 Mpc, having cell size 0.31 h^-1 Mpc, is followed in a simulation with 270³ ≈ 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, σ_8 = 1.05, Ω_b = 0.06, we find the X-ray-emitting clusters, compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. The results, which are compared with those obtained in the preceding paper (Kang et al. 1994a), may be used in conjunction with ROSAT and other observational data sets. Overall, the results of the two computations are qualitatively very similar with regard to the trends of cluster properties, i.e., how the number density, radius, and temperature depend on luminosity and redshift. The total luminosity from clusters is approximately a factor of 2 higher using the PPM code (as compared to the 'total variation diminishing' (TVD) code used in the previous paper), with the number of bright clusters higher by a similar factor. The primary conclusions of the prior paper, with regard to the power spectrum of the primeval density perturbations, are strengthened: the standard CDM model, normalized to the COBE microwave detection, predicts too many bright X-ray emitting clusters, by a factor probably in excess of 5. The comparison between observations and theoretical predictions for the evolution of cluster properties, luminosity functions, and size and temperature distributions should provide an important discriminator among competing scenarios for the development of structure in the universe.
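
    Luminosity functions of the kind computed here are, at their core, number densities of clusters per logarithmic luminosity bin. A minimal sketch with a synthetic cluster catalogue is given below; the box volume matches the 85 h^-1 Mpc cube quoted above, but the luminosities are random placeholders.

      # Sketch of a cluster X-ray luminosity function: clusters per volume per dex.
      # The catalogue is synthetic; only the box volume echoes the text above.
      import numpy as np

      rng = np.random.default_rng(3)
      box_volume = 85.0**3                          # (h^-1 Mpc)^3
      lum = 10 ** rng.normal(42.5, 0.8, size=400)   # fake cluster luminosities, erg/s

      def luminosity_function(luminosities, volume, edges):
          counts, _ = np.histogram(np.log10(luminosities), bins=edges)
          return counts / (volume * np.diff(edges))

      edges = np.arange(41.0, 45.5, 0.5)
      for lo, hi, phi in zip(edges[:-1], edges[1:], luminosity_function(lum, box_volume, edges)):
          print(f"10^{lo:.1f}-10^{hi:.1f} erg/s: {phi:.2e} clusters / (h^-1 Mpc)^3 / dex")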

  7. High-Resolution Biogeochemical Simulation Identifies Practical Opportunities for Bioenergy Landscape Intensification Across Diverse US Agricultural Regions

    Science.gov (United States)

    Field, J.; Adler, P. R.; Evans, S.; Paustian, K.; Marx, E.; Easter, M.

    2015-12-01

    The sustainability of biofuel expansion is strongly dependent on the environmental footprint of feedstock production, including both direct impacts within feedstock-producing areas and potential leakage effects due to disruption of existing food, feed, or fiber production. Assessing and minimizing these impacts requires novel methods compared to traditional supply chain lifecycle assessment. When properly validated and applied at appropriate spatial resolutions, biogeochemical process models are useful for simulating how the productivity and soil greenhouse gas fluxes of cultivating both conventional crops and advanced feedstock crops respond across gradients of land quality and management intensity. In this work we use the DayCent model to assess the biogeochemical impacts of agricultural residue collection, establishment of perennial grasses on marginal cropland or conservation easements, and intensification of existing cropping at high spatial resolution across several real-world case study landscapes in diverse US agricultural regions. We integrate the resulting estimates of productivity, soil carbon changes, and nitrous oxide emissions with crop production budgets and lifecycle inventories, and perform a basic optimization to generate landscape cost/GHG frontiers and determine the most practical opportunities for low-impact feedstock provisioning. The optimization is constrained to assess the minimum combined impacts of residue collection, land use change, and intensification of existing agriculture necessary for the landscape to supply a commercial-scale biorefinery while maintaining existing food, feed, and fiber production levels. These techniques can be used to assess how different feedstock provisioning strategies perform on both economic and environmental criteria, and the sensitivity of performance to environmental and land use factors. The included figure shows an example feedstock cost-GHG mitigation tradeoff frontier for a commercial-scale cellulosic

  8. The metallicity distribution of H I systems in the EAGLE cosmological simulations

    Science.gov (United States)

    Rahmati, Alireza; Oppenheimer, Benjamin D.

    2018-06-01

    The metallicity of strong H I systems, spanning from damped Lyman α absorbers (DLAs) to Lyman-limit systems (LLSs), is explored between z = 5 → 0 using the EAGLE high-resolution cosmological hydrodynamic simulation of galaxy formation. The metallicities of LLSs and DLAs steadily increase with time in agreement with observations. DLAs are more metal rich than LLSs, although the metallicities in the LLS column density range (N_HI ≈ 10^17-10^20 cm^-2) are relatively flat, evolving from a median H I-weighted metallicity of Z ≲ 10^-2 Z_⊙ at z = 3 to ≈ 10^-0.5 Z_⊙ by z = 0. The metal content of H I systems tracks the increasing stellar content of the Universe, holding ≈ 5 per cent of the integrated total metals released from stars at z = 0. We also consider partial LLS (pLLS, N_HI ≈ 10^16-10^17 cm^-2) metallicities, and find good agreement with Wotta et al. for the fraction of systems above (37 per cent) and below (63 per cent) 0.1 Z_⊙. We also find a large dispersion of pLLS metallicities, although we do not reproduce the observed metallicity bimodality and instead make the prediction that a larger sample will yield more pLLSs around 0.1 Z_⊙. We underpredict the median metallicity of strong LLSs, and predict a population of very metal poor (Z ≲ 10^-3 Z_⊙) systems that are not observed, which may indicate more widespread early enrichment in the real Universe compared to EAGLE.

  9. Coupled atmosphere ocean climate model simulations in the Mediterranean region: effect of a high-resolution marine model on cyclones and precipitation

    Directory of Open Access Journals (Sweden)

    A. Sanna

    2013-06-01

    In this study we investigate the importance of an eddy-permitting Mediterranean Sea circulation model on the simulation of atmospheric cyclones and precipitation in a climate model. This is done by analyzing the results of two fully coupled GCM (general circulation model) simulations, differing only in the presence/absence of an interactive marine module at very high resolution (~1/16°) for the simulation of the 3-D circulation of the Mediterranean Sea. Cyclones are tracked by applying an objective Lagrangian algorithm to the MSLP (mean sea level pressure) field. On an annual basis, we find a statistically significant difference in vast cyclogenesis regions (northern Adriatic, Sirte Gulf, Aegean Sea and southern Turkey) and in lifetime, giving evidence of the effect of both land–sea contrast and surface heat flux intensity and spatial distribution on cyclone characteristics. Moreover, annual mean convective precipitation changes significantly in the two model climatologies as a consequence of differences in both air–sea interaction strength and frequency of cyclogenesis in the two analyzed simulations.

  10. Hot gas in the cold dark matter scenario: X-ray clusters from a high-resolution numerical simulation

    Science.gov (United States)

    Kang, Hyesung; Cen, Renyue; Ostriker, Jeremiah P.; Ryu, Dongsu

    1994-01-01

    A new, three-dimensional, shock-capturing hydrodynamic code is utilized to determine the distribution of hot gas in a standard cold dark matter (CDM) model of the universe. Periodic boundary conditions are assumed: a box with size 85 h^-1 Mpc having cell size 0.31 h^-1 Mpc is followed in a simulation with 270³ ≈ 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, σ_8 = 1.05, Ω_b = 0.06, and assuming h = 0.5, we find the X-ray-emitting clusters and compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. We find that most of the total X-ray emissivity in our box originates in a relatively small number of identifiable clusters which occupy approximately 10^-3 of the box volume. This standard CDM model, normalized to COBE, produces approximately 5 times too much emission from clusters having L_x > 10^43 erg/s, a not-unexpected result. If all other parameters were unchanged, we would expect adequate agreement for σ_8 = 0.6. This provides a new and independent argument for lower small-scale power than standard CDM at the 8 h^-1 Mpc scale. The background radiation field at 1 keV due to clusters in this model is approximately one-third of the observed background, which, after correction for numerical effects, again indicates approximately 5 times too much emission and the appropriateness of σ_8 = 0.6. If we had used the observed ratio of gas to total mass in clusters, rather than basing the mean density on light-element nucleosynthesis, the computed luminosity of each cluster would have increased still further, by a factor of approximately 10. The number density of clusters increases to z approximately 1, but the luminosity per typical cluster decreases, with the result that evolution in the number density of bright

  11. High resolution data acquisition

    Science.gov (United States)

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
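
    The essence of the scheme is to combine a coarse count of whole clock periods with fine interpolation values at the start and stop edges. In the sketch below the digitized triangular-wave amplitude/slope readout is reduced to a single "fraction of a clock period" number per edge, and the 100 MHz clock is an assumed figure, not a specification of the device.

      # Combine a coarse clock-pulse count with fine start/stop fractions to form a
      # high-resolution event interval. The fractions stand in for the digitized
      # triangular-wave readout described above (an illustrative reduction).
      CLOCK_PERIOD_NS = 10.0      # 100 MHz reference clock (assumed)

      def event_interval_ns(gross_counts, start_fraction, stop_fraction):
          """start/stop_fraction: elapsed fraction of the clock period at the event edges (0..1)."""
          return (gross_counts + stop_fraction - start_fraction) * CLOCK_PERIOD_NS

      # 123 whole periods, event starting 30% into a period and ending 80% into one.
      print(event_interval_ns(123, 0.30, 0.80), "ns")   # 1235.0 ns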

  12. ANL high resolution injector

    International Nuclear Information System (INIS)

    Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.

    1985-01-01

    The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs

  13. Ultra high resolution tomography

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, W.S.

    1994-11-15

    Recent work and results on ultra high resolution three-dimensional imaging with soft x-rays will be presented. This work is aimed at determining the microscopic three-dimensional structure of biological and material specimens. Three-dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.

  14. High resolution (<1 nm) interferometric fiber-optic sensor of vibrations in high-power transformers

    Science.gov (United States)

    Garcia-Souto, Jose A; Lamela-Rivera, Horacio

    2006-10-16

    A novel fiber-optic interferometric sensor is presented for vibration measurements and analysis. In this approach, it is applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using the direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique. Applications to vibration measurements within power transformers are also highlighted.

  15. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using the High Resolution Lyon-Fedder-Mobarry (LFM) Code

    Science.gov (United States)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

    Test particle simulations of electron injection by bursty bulk flows (BBFs) have been done using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run at high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward-propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch angle scattering of electrons, resulting in violation of the conservation of the first adiabatic invariant, has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of >10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) global MHD magnetospheric simulation code, J. Atmos. Solar-Terrestrial Phys., 66, issue 15-16, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.
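
    Test-particle tracing of this kind advances electrons through given E and B fields, most commonly with a Boris-type pusher. The non-relativistic sketch below is generic (uniform 10 nT field, electron charge-to-mass ratio, arbitrary step size) and is not the tracer code cited above; following multi-MeV radiation-belt electrons would require a relativistic formulation.

      # Generic non-relativistic Boris pusher for a charged test particle in given
      # E and B fields. Schematic building block, not the tracer referenced above.
      import numpy as np

      def boris_step(x, v, q_over_m, dt, e_field, b_field):
          """Advance position x and velocity v by one time step dt."""
          E, B = e_field(x), b_field(x)
          v_minus = v + 0.5 * q_over_m * dt * E          # first half electric kick
          t = 0.5 * q_over_m * dt * B                    # magnetic rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)        # rotation about B
          v_new = v_plus + 0.5 * q_over_m * dt * E       # second half electric kick
          return x + dt * v_new, v_new

      # Uniform B along z and no E: circular gyration at constant speed.
      b = lambda x: np.array([0.0, 0.0, 1.0e-8])         # 10 nT, roughly magnetotail-like
      e = lambda x: np.zeros(3)
      x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])    # 100 km/s
      for _ in range(1000):
          x, v = boris_step(x, v, q_over_m=-1.76e11, dt=1.0e-4, e_field=e, b_field=b)
      print(np.linalg.norm(v))                           # speed conserved to round-off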

  16. Simulations of Cyclone Sidr in the Bay of Bengal with a High-Resolution Model: Sensitivity to Large-Scale Boundary Forcing

    Science.gov (United States)

    Kumar, Anil; Done, James; Dudhia, Jimy; Niyogi, Dev

    2011-01-01

    The predictability of Cyclone Sidr in the Bay of Bengal was explored in terms of track and intensity using the Advanced Research Hurricane Weather Research Forecast (AHW) model. This constitutes the first application of the AHW over an area that lies outside the region of the North Atlantic for which this model was developed and tested. Several experiments were conducted to understand the possible contributing factors that affected Sidr's intensity and track simulation by varying the initial start time and domain size. Results show that Sidr's track was strongly controlled by the synoptic flow at the 500-hPa level, seen especially due to the strong mid-latitude westerly over north-central India. A 96-h forecast produced westerly winds over north-central India at the 500-hPa level that were notably weaker; this likely caused the modeled cyclone track to drift from the observed actual track. Reducing the model domain size reduced model error in the synoptic-scale winds at 500 hPa and produced an improved cyclone track. Specifically, the cyclone track appeared to be sensitive to the upstream synoptic flow, and was, therefore, sensitive to the location of the western boundary of the domain. However, cyclone intensity remained largely unaffected by this synoptic wind error at the 500-hPa level. Comparison of the high resolution, moving nested domain with a single coarser resolution domain showed little difference in tracks, but resulted in significantly different intensities. Experiments on the domain size with regard to the total precipitation simulated by the model showed that precipitation patterns and 10-m surface winds were also different. This was mainly due to the mid-latitude westerly flow across the west side of the model domain. The analysis also suggested that the total precipitation pattern and track was unchanged when the domain was extended toward the east, north, and south. Furthermore, this highlights our conclusion that Sidr was influenced from the west

  17. From Modeling of Plasticity in Single-Crystal Superalloys to High-Resolution X-rays Three-Crystal Diffractometer Peaks Simulation

    Science.gov (United States)

    Jacques, Alain

    2016-12-01

    The dislocation-based modeling of the high-temperature creep of two-phased single-crystal superalloys requires input data beyond strain vs time curves. This may be obtained by use of in situ experiments combining high-temperature creep tests with high-resolution synchrotron three-crystal diffractometry. Such tests give access to changes in phase volume fractions and to the average components of the stress tensor in each phase as well as the plastic strain of each phase. Further progress may be obtained by a new method making intensive use of the Fast Fourier Transform, and first modeling the behavior of a representative volume of material (stress fields, plastic strain, dislocation densities…), then simulating directly the corresponding diffraction peaks, taking into account the displacement field within the material, chemical variations, and beam coherence. Initial tests indicate that the simulated peak shapes are close to the experimental ones and are quite sensitive to the details of the microstructure and to dislocation densities at interfaces and within the soft γ phase.
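
    As a much cruder cousin of the FFT-based dynamical modelling described above, a 1-D kinematical toy already shows how a displacement field inside the material reshapes a diffraction peak: summing phase factors exp(iq(x_n + u_n)) over unit cells moves intensity from the Bragg position into satellites. Cell count, lattice spacing and the sinusoidal displacement below are arbitrary choices for illustration.

      # 1-D kinematical toy: a displacement field u_n redistributes the intensity of a
      # Bragg peak, I(q) = |sum_n exp(i q (n d + u_n))|^2. Far cruder than the dynamical
      # FFT-based modelling discussed above; purely illustrative.
      import numpy as np

      n_cells, d = 2000, 3.6e-10                          # unit cells, lattice spacing (m)
      idx = np.arange(n_cells)
      x0 = idx * d
      u = 2.0e-12 * np.sin(2 * np.pi * idx / 400.0)       # toy sinusoidal strain wave

      q = np.linspace(0.98, 1.02, 2001) * (2 * np.pi / d) # scan around the first Bragg peak

      def intensity(positions):
          return np.abs(np.exp(1j * np.outer(q, positions)).sum(axis=1))**2

      print(intensity(x0).max() / intensity(x0 + u).max())  # strain pushes intensity into satellites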

  18. Overview of Proposal on High Resolution Climate Model Simulations of Recent Hurricane and Typhoon Activity: The Impact of SSTs and the Madden Julian Oscillation

    Science.gov (United States)

    Schubert, Siegfried; Kang, In-Sik; Reale, Oreste

    2009-01-01

    This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on inter-annual and longer time scales, the modulation of that activity by the Madden Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser resolution runs to help assess resolution dependence, and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.

  19. HALO EXPANSION IN COSMOLOGICAL HYDRO SIMULATIONS: TOWARD A BARYONIC SOLUTION OF THE CUSP/CORE PROBLEM IN MASSIVE SPIRALS

    Energy Technology Data Exchange (ETDEWEB)

    Maccio, A. V.; Stinson, G. [Max-Planck-Institut fuer Astronomie, 69117 Heidelberg (Germany); Brook, C. B.; Gibson, B. K. [University of Central Lancashire, Jeremiah Horrocks Institute for Astrophysics and Supercomputing, Preston PR1 2HE (United Kingdom); Wadsley, J.; Couchman, H. M. P. [Department of Physics and Astronomy, McMaster University, Hamilton, Ontario, L8S 4M1 (Canada); Shen, S. [Department of Astronomy and Astrophysics, University of California Santa Cruz, Santa Cruz, CA 95064 (United States); Quinn, T., E-mail: maccio@mpia.de, E-mail: stinson@mpia.de [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2012-01-15

    A clear prediction of the cold dark matter (CDM) model is the existence of cuspy dark matter halo density profiles on all mass scales. This is not in agreement with the observed rotation curves of spiral galaxies, challenging on small scales the otherwise successful CDM paradigm. In this work we employ high-resolution cosmological hydrodynamical simulations to study the effects of dissipative processes on the inner distribution of dark matter in Milky Way like objects (M ≈ 10^12 M_⊙). Our simulations include supernova feedback, and the effects of the radiation pressure of massive stars before they explode as supernovae. The increased stellar feedback results in the expansion of the dark matter halo instead of contraction with respect to N-body simulations. Baryons are able to erase the dark matter cuspy distribution, creating a flat, cored, dark matter density profile in the central several kiloparsecs of a massive Milky-Way-like halo. The profile is well fit by a Burkert profile, with fitting parameters consistent with the observations. In addition, we obtain flat rotation curves as well as extended, exponential stellar disk profiles. While the stellar disk we obtain is still partially too thick to resemble the Milky Way thin disk, this pilot study shows that there is enough energy available in the baryonic component to alter the dark matter distribution even in massive disk galaxies, providing a possible solution to the long-standing problem of cusps versus cores.
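
    The Burkert profile mentioned above is ρ(r) = ρ0 r0³ / [(r + r0)(r² + r0²)], a cored profile with central density ρ0 and core radius r0. A minimal fitting sketch with synthetic data is shown below; the "true" parameters and the scatter are invented for illustration, not values from these simulations.

      # Fit a Burkert profile to a (synthetic) binned dark matter density profile.
      import numpy as np
      from scipy.optimize import curve_fit

      def burkert(r, rho0, r0):
          return rho0 * r0**3 / ((r + r0) * (r**2 + r0**2))

      r = np.logspace(-0.5, 2.0, 30)                      # kpc
      rng = np.random.default_rng(5)
      data = burkert(r, 2.0e7, 10.0) * rng.lognormal(sigma=0.1, size=r.size)  # mock profile

      params, _ = curve_fit(burkert, r, data, p0=[1.0e7, 5.0], sigma=0.1 * data)
      print("rho0 = %.2e Msun/kpc^3, r0 = %.1f kpc" % tuple(params))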

  20. A Mass-Flux Scheme View of a High-Resolution Simulation of a Transition from Shallow to Deep Cumulus Convection.

    Science.gov (United States)

    Kuang, Zhiming; Bretherton, Christopher S.

    2006-07-01

    In this paper, an idealized, high-resolution simulation of a gradually forced transition from shallow, nonprecipitating to deep, precipitating cumulus convection is described; how the cloud and transport statistics evolve as the convection deepens is explored; and the collected statistics are used to evaluate assumptions in current cumulus schemes. The statistical analysis methodologies that are used do not require tracing the history of individual clouds or air parcels; instead they rely on probing the ensemble characteristics of cumulus convection in the large model dataset. They appear to be an attractive way for analyzing outputs from cloud-resolving numerical experiments. Throughout the simulation, it is found that 1) the initial thermodynamic properties of the updrafts at the cloud base have rather tight distributions; 2) contrary to the assumption made in many cumulus schemes, nearly undiluted air parcels are too infrequent to be relevant to any stage of the simulated convection; and 3) a simple model with a spectrum of entraining plumes appears to reproduce most features of the cloudy updrafts, but significantly overpredicts the mass flux as the updrafts approach their levels of zero buoyancy. A buoyancy-sorting model was suggested as a potential remedy. The organized circulations of cold pools seem to create clouds with larger-sized bases and may correspondingly contribute to their smaller lateral entrainment rates. Our results do not support a mass-flux closure based solely on convective available potential energy (CAPE), and are in general agreement with a convective inhibition (CIN)-based closure. The general similarity in the ensemble characteristics of shallow and deep convection and the continuous evolution of the thermodynamic structure during the transition provide justification for developing a single unified cumulus parameterization that encompasses both shallow and deep convection.
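
    The entraining-plume picture referred to above can be reduced to a single equation: a conserved updraft property φ relaxes toward its environmental value at the fractional entrainment rate ε, dφ/dz = -ε(φ - φ_env). The sketch below integrates this relation for a small spectrum of ε values; the environmental profile and all numbers are made up for illustration.

      # Entraining-plume sketch: an updraft's conserved property relaxes toward the
      # environment at the fractional entrainment rate epsilon,
      #     d(phi_up)/dz = -epsilon * (phi_up - phi_env).
      # Environmental profile and entrainment rates are illustrative.
      import numpy as np

      z = np.arange(0.0, 10000.0, 100.0)                  # height (m)
      phi_env = 340.0 - 0.003 * z                         # made-up environmental profile

      def plume_profile(epsilon, phi_cloud_base):
          phi = np.empty_like(z)
          phi[0] = phi_cloud_base
          for k in range(1, z.size):
              phi[k] = phi[k - 1] - epsilon * (phi[k - 1] - phi_env[k - 1]) * (z[k] - z[k - 1])
          return phi

      for eps in (1e-4, 5e-4, 2e-3):                      # a spectrum of entraining plumes, per metre
          print(f"epsilon = {eps:.0e} /m -> plume value at 5 km: {plume_profile(eps, 342.0)[50]:.1f}")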

  1. Very high resolution regional climate simulations on the 4 km scale as a basis for carbon balance assessments in northeast European Russia

    Science.gov (United States)

    Stendel, Martin; Hesselbjerg Christensen, Jens; Adalgeirsdottir, Gudfinna; Rinke, Annette; Matthes, Heidrun; Marchenko, Sergej; Daanen, Ronald; Romanovsky, Vladimir

    2010-05-01

    Simulations with global circulation models (GCMs) clearly indicate that major climate changes in polar regions can be expected during the 21st century. Model studies have shown that the area of the Northern Hemisphere underlain by permafrost could be reduced substantially in a warmer climate. However, thawing of permafrost, in particular if it is ice-rich, is subject to a time lag due to the large latent heat of fusion. State-of-the-art GCMs are unable to adequately model these processes because (a) even the most advanced subsurface schemes rarely treat depths below 5 m explicitly, and (b) soil thawing and freezing processes cannot be dealt with directly due to the coarse resolution of present GCMs. Any attempt to model subsurface processes needs information about soil properties, vegetation and snow cover, which can hardly be represented realistically on a typical GCM grid. Furthermore, simulated GCM precipitation is often underestimated and the proportion of rain and snow is incorrect. One possibility to overcome resolution-related problems is to use regional climate models (RCMs). Such an RCM, HIRHAM, has until now been the only one used for the entire circumpolar domain, and its most recent version, HIRHAM5, has also been used in the high-resolution study described here. Instead of the traditional approach via a degree-day based frost index from observations or model data, we use the regional model to create boundary conditions for an advanced permafrost model. This approach offers the advantage that the permafrost model can be run on the grid of the regional model, i.e. at a considerably higher resolution than in previous approaches. We here present results from a new time-slice integration with an unprecedented horizontal resolution of only 4 km, covering northeast European Russia. This model simulation has served as the basis for an assessment of the carbon balance for a region in northeast European Russia within the EU-funded Carbo-North project.

  2. High resolution ultrasonic densitometer

    International Nuclear Information System (INIS)

    Dress, W.B.

    1983-01-01

    The velocity of torsional stress pulses in an ultrasonic waveguide of non-circular cross section is affected by the temperature and density of the surrounding medium. Measurements of the transit times of acoustic echoes from the ends of a sensor section are interpreted as the level, density, and temperature of the fluid environment surrounding that section. This paper examines methods of making these measurements to obtain high-resolution, temperature-corrected absolute and relative density and level determinations of the fluid. Possible applications include on-line process monitoring, a hand-held density probe for battery charge state indication, and precise inventory control for such diverse fluids as uranium salt solutions in accountability storage and gasoline in service station storage tanks.

  3. High resolution numerical simulation (WRF V3) of an extreme rainy event over the Guadeloupe archipelago: Case of 3-5 january 2011.

    Science.gov (United States)

    Bernard, Didier C.; Cécé, Raphaël; Dorville, Jean-François

    2013-04-01

    During the dry season, the Guadeloupe archipelago may be affected by extreme rainy disturbances which can induce floods in a very short time. C. Brévignon (2003) defined a heavy rain event for this tropical region as rainfall above 100 mm per day (outside mountainous areas). During a cold front passage (3-5 January 2011), torrential rainfall caused floods, major damage, landslides and five deaths. This event called into question the current warning system based on large-scale numerical models. Such low-resolution forecasting (around the 50-km scale) has proved unsuitable for a small tropical island like Guadeloupe (1600 km2). The most affected area was the middle of Grande-Terre island, the main flat island of the archipelago (area of 587 km2, peak at 136 m) and the most populated sector of Guadeloupe. In this area, observed rainfall reached 100-160 mm in 24 hours (an amount equivalent to two months of rain for January (C. Brévignon, 2003)); in less than 2 hours drainage systems were saturated, and five people died in a ravine. For two years, the atmospheric model WRF ARW V3 (Skamarock et al., 2008) has been used to model the meteorological fields observed over the Guadeloupe archipelago at a high-resolution 1-km scale (Cécé et al., 2011). The model error estimators show that meteorological variables are properly simulated for standard types of weather: undisturbed, strong or weak trade winds. These simulations indicate that for weak to moderate synoptic winds, a small island like Grande-Terre is able to generate inland convergence zones during daytime. In this presentation, we apply this high-resolution model to simulate the extreme rainy disturbance of 3-5 January 2011. The evolution of the modeled meteorological fields is analyzed in the most affected area of Grande-Terre (city of Les Abymes). The main goal is to examine local quasi-stationary updraft systems and highlight their convective mechanisms. The

  4. Can small island mountains provide relief from the Subtropical Precipitation Decline? Simulating future precipitation regimes for small island nations using high resolution Regional Climate Models.

    Science.gov (United States)

    Bowden, J.; Terando, A. J.; Misra, V.; Wootten, A.

    2017-12-01

    Small island nations are vulnerable to changes in the hydrologic cycle because of their limited water resources. This risk to water security is likely even higher in sub-tropical regions where anthropogenic forcing of the climate system is expected to lead to a drier future (the so-called `dry-get-drier' pattern). However, high-resolution numerical modeling experiments have also shown an enhancement of existing orographically-influenced precipitation patterns on islands with steep topography, potentially mitigating subtropical drying on windward mountain sides. Here we explore the robustness of the near-term (25-45 years) subtropical precipitation decline (SPD) across two island groupings in the Caribbean, Puerto Rico and the U.S. Virgin Islands. These islands, forming the boundary between the Greater and Lesser Antilles, significantly differ in size, topographic relief, and orientation to prevailing winds. Two 2-km horizontal resolution regional climate model simulations are used to downscale a total of three different GCMs under the RCP8.5 emissions scenario. Results indicate some possibility for modest increases in precipitation at the leading edge of the Luquillo Mountains in Puerto Rico, but consistent declines elsewhere. We conclude with a discussion of potential explanations for these patterns and the attendant risks to water security that subtropical small island nations could face as the climate warms.

  5. Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations

    Science.gov (United States)

    O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.

    2005-09-01

    We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed "entropy conserving" formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative ("adiabatic") hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution but identical initial conditions. The dark matter-only runs agree generally quite well, provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement; otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree algorithm of GADGET at early times, before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in

  6. Quadratic genetic modifications: a streamlined route to cosmological simulations with controlled merger history

    Science.gov (United States)

    Rey, Martin P.; Pontzen, Andrew

    2018-02-01

    Recent work has studied the interplay between a galaxy's history and its observable properties using `genetically modified' cosmological zoom simulations. The approach systematically generates alternative histories for a halo, while keeping its cosmological environment fixed. Applications to date have altered linear properties of the initial conditions, such as the mean overdensity of specified regions; we extend the formulation to include quadratic features, such as the local variance, which determines the overall importance of smooth accretion relative to mergers in a galaxy's history. We introduce an efficient algorithm for this new class of modification and demonstrate its ability to control the variance of a region in a one-dimensional toy model. The outcomes of this work are twofold: (i) a clarification of the formulation of genetic modifications and (ii) a proof of concept for quadratic modifications, leading the way to a forthcoming implementation in cosmological simulations.
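
    The idea of controlling a quadratic feature (the variance within a region) of Gaussian initial conditions can be illustrated, very roughly, with a one-dimensional toy: draw a Gaussian random field and rescale its fluctuations inside a window so that the windowed variance hits a target value. The sketch below is only that naive rescaling and is not the authors' algorithm, which is constructed to keep the modified field maximally consistent with the original realization and the underlying covariance; the field size, power-law spectrum and window are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1024                      # number of 1D grid points (arbitrary)

        # Draw a 1D Gaussian random field with a simple power-law spectrum P(k) ~ 1/k
        k = np.fft.rfftfreq(n)
        power = np.zeros_like(k)
        power[1:] = k[1:]**-1.0       # avoid dividing by k = 0
        modes = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(power / 2.0)
        field = np.fft.irfft(modes, n=n)

        # Window defining the "region" whose variance we want to modify
        window = slice(400, 600)
        target_variance = 4.0 * field[window].var()   # e.g. quadruple the local variance

        # Naive quadratic modification: rescale fluctuations about the local mean
        local_mean = field[window].mean()
        scale = np.sqrt(target_variance / field[window].var())
        modified = field.copy()
        modified[window] = local_mean + scale * (field[window] - local_mean)

        print(f"variance before: {field[window].var():.4f}, after: {modified[window].var():.4f}")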

  7. Modern Cosmology

    CERN Document Server

    Zhang Yuan Zhong

    2002-01-01

    This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The ...

  8. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    Science.gov (United States)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

    Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale between models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the models used in global inversions typically have resolutions of hundreds of kilometers and often cover areas that include combinations of land, ocean and coastal zones, as well as significant variations in topography, land cover, and population density. To improve understanding of the scales of atmospheric CO2 variability and the representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order-of-magnitude increase over typical global simulations of atmospheric composition, allowing new insight into small-scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high-resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half-degree resolution that have been downscaled to 10 km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse-resolution models to represent these small-scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement
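
    The sub-grid variability analysis described above (building distributions of high-resolution column CO2 within coarse inversion-model boxes) can be sketched with a simple block aggregation; the 10-km field, box size and numbers below are placeholders for illustration, not the actual GEOS-5 products.

        import numpy as np

        # Placeholder high-resolution field: column-average CO2 (ppm) on a ~10 km global grid
        rng = np.random.default_rng(0)
        xco2_hires = 400.0 + rng.normal(scale=1.5, size=(1800, 3600))

        block = 20   # 20 x 20 high-res cells ~ a 200 km "coarse model" box

        ny, nx = xco2_hires.shape
        stats = []
        for j in range(0, ny, block):
            for i in range(0, nx, block):
                cell = xco2_hires[j:j + block, i:i + block]
                # Sub-grid variability seen by a coarse inversion model within this box
                stats.append((cell.mean(), cell.std(), cell.max() - cell.min()))

        stats = np.array(stats)
        print("median sub-grid std dev [ppm]:", np.median(stats[:, 1]))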

  9. A Lagrangian trajectory view on transport and mixing processes between the eye, eyewall, and environment using a high resolution simulation of Hurricane Bonnie (1998)

    Science.gov (United States)

    Cram, Thomas A.; Persing, John; Montgomery, Michael T.; Braun, Scott A.

    2006-01-01

    The transport and mixing characteristics of a large sample of air parcels within a mature and vertically sheared hurricane vortex are examined. Data from a high-resolution (2 km grid spacing) numerical simulation of "real-case" Hurricane Bonnie (1998) are used to calculate Lagrangian trajectories of air parcels in various subdomains of the hurricane (namely, the eye, eyewall, and near-environment) to study the degree of interaction (transport and mixing) between these subdomains. It is found that 1) there is transport and mixing from the low-level eye to the eyewall that carries high-θe air which can enhance the efficiency of the hurricane heat engine; 2) a portion of the low-level inflow of the hurricane bypasses the eyewall to enter the eye, which both replaces the mass of the low-level eye and lingers for a sufficient time (order 1 hour) to acquire enhanced entropy characteristics through interaction with the ocean beneath the eye; 3) air in the mid- to upper-level eye is exchanged with the eyewall such that more than half the air of the eye is exchanged in five hours in this case of a sheared hurricane; and 4) one-fifth of the mass in the eyewall at a height of 5 km has an origin in the mid- to upper-level environment, where θe is much less than in the eyewall, which ventilates the ensemble-average eyewall θe by about 1 K. Implications of these findings for the problem of hurricane intensity forecasting are discussed.
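
    Lagrangian trajectories of the kind used above are typically obtained by integrating parcel positions through a gridded wind field. The sketch below shows a minimal forward-Euler integration with interpolation for a single 2D parcel; the synthetic grid, winds and time step are made up for illustration and say nothing about the actual model output or trajectory code used in the study.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Synthetic 2D wind field on a 2 km grid (placeholder for the model output)
        x = np.arange(0, 400e3, 2e3)          # m
        y = np.arange(0, 400e3, 2e3)
        X, Y = np.meshgrid(x, y, indexing="ij")
        u = -(Y - 200e3) * 5e-4               # simple solid-body-like rotation
        v = (X - 200e3) * 5e-4

        u_interp = RegularGridInterpolator((x, y), u, bounds_error=False, fill_value=0.0)
        v_interp = RegularGridInterpolator((x, y), v, bounds_error=False, fill_value=0.0)

        def advect(p0, dt=60.0, nsteps=360):
            """Forward-Euler advection of one parcel; dt in seconds."""
            path = [np.asarray(p0, dtype=float)]
            for _ in range(nsteps):
                p = path[-1][None, :]                      # shape (1, 2) point for the interpolators
                vel = np.array([u_interp(p)[0], v_interp(p)[0]])
                path.append(path[-1] + dt * vel)
            return np.array(path)

        track = advect((250e3, 200e3))
        print("parcel displacement [km]:", (track[-1] - track[0]) / 1e3)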

  10. SACRA - global data sets of satellite-derived crop calendars for agricultural simulations: an estimation of a high-resolution crop calendar using satellite-sensed NDVI

    Science.gov (United States)

    Kotsuki, S.; Tanaka, K.

    2015-01-01

    To date, many studies have performed numerical estimations of food production and agricultural water demand to understand the present and future supply-demand relationship. A crop calendar (CC) is an essential input for estimating food production and agricultural water demand accurately in such numerical estimations. A CC defines the date or month when farmers plant and harvest in cropland. This study aims to develop a new global data set of a satellite-derived crop calendar for agricultural simulations (SACRA) and to reveal the advantages and disadvantages of the satellite-derived CC compared to other global products. We estimate the global CC at a spatial resolution of 5 min (≈10 km) using satellite-sensed NDVI data, which correspond well to vegetation growth and death on the land surface. We first demonstrate that SACRA shows a spatial pattern of planting dates similar to that of a census-based product. Moreover, SACRA reflects the variety of CCs within the same administrative unit, since it uses high-resolution satellite data. A disadvantage, however, is that the mixture of several crops within a grid cell is not considered in SACRA. We also show that the cultivation period in SACRA clearly corresponds to the NDVI time series. Therefore, the accuracy of SACRA depends on the accuracy of the NDVI data used for the CC estimation. Although SACRA shows a different CC from a census-based product in some regions, using the two products together is useful for taking the uncertainty of the CC into account. An advantage of SACRA compared to census-based products is that it provides not only planting/harvesting dates but also a peak date from the NDVI time series.

  11. Deconstructing cosmology

    CERN Document Server

    Sanders, Robert H

    2016-01-01

    The advent of sensitive high-resolution observations of the cosmic microwave background radiation and their successful interpretation in terms of the standard cosmological model has led to great confidence in this model's reality. The prevailing attitude is that we now understand the Universe and need only work out the details. In this book, Sanders traces the development and successes of Lambda-CDM, and argues that this triumphalism may be premature. The model's two major components, dark energy and dark matter, have the character of the pre-twentieth-century luminiferous aether. While there is astronomical evidence for these hypothetical fluids, their enigmatic properties call into question our assumptions of the universality of locally determined physical law. Sanders explains how modified Newtonian dynamics (MOND) is a significant challenge for cold dark matter. Overall, the message is hopeful: the field of cosmology has not become frozen, and there is much fundamental work ahead for tomorrow's cosmologis...

  12. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Halo models of the large-scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of the self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  13. Quantification of discreteness effects in cosmological N-body simulations: Initial conditions

    International Nuclear Information System (INIS)

    Joyce, M.; Marcos, B.

    2007-01-01

    The relation between the results of cosmological N-body simulations and the continuum theoretical models they simulate is currently not understood in a way which allows a quantification of N-dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004)] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e. theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological-type power spectra one obtains, as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real-space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological-type spectra the theoretical mass variance in spheres and the two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude goes to zero (equivalently, as the initial redshift of the cosmological simulation increases), at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular with respect to the definition of the continuum limit of N-body simulations.
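
    As a simple point of comparison (not the grid- or glass-specific result derived by the formalism above), an uncorrelated Poisson sampling of a continuous density field with mean particle number density \bar{n} has a measured power spectrum

        P_{\rm meas}(k) \simeq P_{\rm input}(k) + \frac{1}{\bar{n}},

    i.e. a constant shot-noise term that dominates at large wave numbers; the lattice or glass pre-initial conditions actually used in simulations produce more structured discreteness corrections, which is precisely what the perturbative expansion described above quantifies.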

  14. Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations

    Science.gov (United States)

    Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.

    2018-05-01

    We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are more accurate than ~5% for haloes with masses of 10^12.0-10^14.5 h^-1 M⊙ for radii 0.05 ... baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.

  15. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

    This report evaluates progress in testing the feasibility and initiating the design of a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed.

  16. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
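
    The core idea of resampling the large-scale Fourier modes can be caricatured in a few lines: transform the density field, redraw the modes below a cutoff wavenumber, and transform back. The sketch below does exactly this on a toy 3D Gaussian field; unlike the authors' algorithm, it ignores the nonlinear coupling between the resampled large-scale modes and the small scales, it only randomizes phases rather than redrawing amplitudes from the Gaussian prior, and the grid size and cutoff are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 64                                  # toy grid; production boxes are much larger
        field = rng.normal(size=(n, n, n))      # stand-in for an N-body density field

        # Wavenumber magnitude on the FFT grid (box units)
        k1d = np.fft.fftfreq(n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2)

        fk = np.fft.fftn(field)
        k_cut = 0.05                            # resample only modes with 0 < |k| < k_cut
        large = (kmag > 0) & (kmag < k_cut)

        # Draw new random phases for the large-scale modes while keeping their amplitudes,
        # i.e. a crude "new realization" of the large scales with the same power.
        new_phase = rng.uniform(0.0, 2.0 * np.pi, size=np.count_nonzero(large))
        fk[large] = np.abs(fk[large]) * np.exp(1j * new_phase)

        # Taking the real part; a careful implementation would enforce Hermitian symmetry
        # so that the resampled field is exactly real.
        resampled = np.fft.ifftn(fk).real
        print("rms before/after:", field.std().round(3), resampled.std().round(3))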

  17. External versus internal triggers of bar formation in cosmological zoom-in simulations

    Science.gov (United States)

    Zana, Tommaso; Dotti, Massimo; Capelo, Pedro R.; Bonoli, Silvia; Haardt, Francesco; Mayer, Lucio; Spinoso, Daniele

    2018-01-01

    The emergence of a large-scale stellar bar is one of the most striking features in disc galaxies. By means of state-of-the-art cosmological zoom-in simulations, we study the formation and evolution of bars in Milky Way-like galaxies in a fully cosmological context, including the physics of gas dissipation, star formation and supernova feedback. Our goal is to characterize the actual trigger of the non-axisymmetric perturbation that leads to the strong bar observable in the simulations at z = 0, discriminating between an internal/secular and an external/tidal origin. To this aim, we run a suite of cosmological zoom-in simulations altering the original history of galaxy-satellite interactions at a time when the main galaxy, though already bar-unstable, does not yet feature any non-axisymmetric structure. We find that the main effect of a late minor merger and of a close fly-by is to delay the time of bar formation, and that those two dynamical events are not directly responsible for the development of the bar and do not significantly alter its global properties (e.g. its final extent). We conclude that, once the disc has grown to a mass large enough to sustain global non-axisymmetric modes, bar formation is inevitable.

  18. Climatology of Tibetan Plateau Vortices and connection to upper-level flow in reanalysis data and a high-resolution model simulation

    Science.gov (United States)

    Curio, Julia; Schiemann, Reinhard; Hodges, Kevin; Turner, Andrew

    2017-04-01

    The Tibetan Plateau (TP) and surrounding high mountain ranges constitute an important forcing of the atmospheric circulation over Asia due to their height and extent. The TP therefore impacts weather and climate, especially precipitation, in downstream regions of East Asia. Mesoscale Tibetan Plateau Vortices (TPVs) are known to be one of the major precipitation-bearing systems on the TP. They are mainly present at the 500 hPa level and have a vertical extent of 2-3 km, while their horizontal scale is around 500 km. Their average lifetime is 18 hours. There are two types of TPVs: most originate and stay on the TP, while a smaller number are able to move off the plateau to the east. The latter category can cause extreme precipitation events and severe flooding in large parts of eastern and southern China downstream of the TP, e.g. the Yangtze River valley. The first aim of the study is to identify and track TPVs in reanalysis data, to connect TPV activity to the position and strength of the upper-level subtropical jet stream, and to determine favourable conditions for TPV development and maintenance. We identify and track TPVs using the TRACK algorithm developed by Hodges et al. (1994). Relative vorticity at the 500 hPa level from the ERA-Interim and NCEP-CFSR reanalyses is used as input data. TPVs are retained which originate on the TP and which persist for at least two days, since these are more likely to move off the TP to the east. The second aim is to identify TPVs in a high-resolution, present-day climate model simulation of the MetOffice Unified Model (UPSCALE, HadGEM3 GA3.0) to assess how well the model represents the TPV climatology and variability. We find that the reanalysis data sets and the model show similar results for the statistical measures of TPVs (genesis, track, and lysis density). The TPV genesis region is small and stable at a specific region of the TP throughout the year. The reason for this seems to be the convergence

  19. Modified Baryonic Dynamics: two-component cosmological simulations with light sterile neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Angus, G.W.; Gentile, G. [Department of Physics and Astrophysics, Vrije Universiteit Brussel, Pleinlaan 2, Brussels, 1050 Belgium (Belgium); Diaferio, A. [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, Torino, I-10125 Italy (Italy); Famaey, B. [Observatoire astronomique de Strasbourg, CNRS UMR 7550, Université de Strasbourg, 11 rue de l' Université, Strasbourg, F-67000 France (France); Heyden, K.J. van der, E-mail: garry.angus@vub.ac.be, E-mail: diaferio@ph.unito.it, E-mail: benoit.famaey@astro.unistra.fr, E-mail: gianfranco.gentile@ugent.be, E-mail: heyden@ast.uct.ac.za [Astrophysics, Cosmology and Gravity Centre, Dept. of Astronomy, University of Cape Town, Private Bag X3, Rondebosch, 7701 South Africa (South Africa)

    2014-10-01

    In this article we continue to test cosmological models centred on Modified Newtonian Dynamics (MOND) with light sterile neutrinos, which could in principle be a way to solve the fine-tuning problems of the standard model on galaxy scales while preserving its successful predictions on larger scales. Due to previous failures of the simple MOND cosmological model, here we test a speculative model in which the modified gravitational field is produced only by the baryons, while the sterile neutrinos produce a purely Newtonian field (hence Modified Baryonic Dynamics). We use two-component cosmological simulations to separate the baryonic N-body particles from the sterile neutrino ones. The premise is to attenuate the over-production of massive galaxy cluster halos which was prevalent in the original MOND plus light sterile neutrinos scenario. Theoretical issues with such a formulation notwithstanding, the Modified Baryonic Dynamics model fails to produce the correct amplitude of the galaxy cluster mass function for any reasonable value of the primordial power spectrum normalisation.

  20. Berkeley High-Resolution Ball

    International Nuclear Information System (INIS)

    Diamond, R.M.

    1984-10-01

    Criteria for a high-resolution γ-ray system are discussed. Desirable properties are high resolution, good response function, and moderate solid angle, so as to achieve not only double but also triple coincidences with good statistics. The Berkeley High-Resolution Ball involved the first use of bismuth germanate (BGO) as an anti-Compton shield for Ge detectors. The resulting compact shield permitted rather close packing of 21 detectors around a target. In addition, a small central BGO ball gives the total γ-ray energy and multiplicity, as well as the angular pattern of the γ rays. The 21-detector array is nearly complete, and the central ball has been designed but not yet constructed. First results taken with 9 detector modules are shown for the nucleus 156Er. The complex decay scheme indicates a transition from collective rotation (prolate shape) to single-particle states (possibly oblate) near spin 30ℏ, and has other interesting features.

  1. Feasibility of a CdTe-based SPECT for high-resolution low-dose small animal imaging: a Monte Carlo simulation study

    International Nuclear Information System (INIS)

    Park, S-J; Yu, A R; Lee, Y-J; Kim, Y-S; Kim, H-J

    2014-01-01

    Dedicated single-photon-emission computed tomography (SPECT) systems based on pixelated semiconductors such as cadmium telluride (CdTe) are in development to study small animal models of human disease. In an effort to develop a high-resolution, low-dose system for small animal imaging, we compared a CdTe-based SPECT system and a conventional NaI(Tl)-based SPECT system in terms of spatial resolution, sensitivity, contrast, and contrast-to-noise ratio (CNR). In addition, we investigated the radiation absorbed dose and calculated a figure of merit (FOM) for both SPECT systems. Using the conventional NaI(Tl)-based SPECT system, we achieved a spatial resolution of 1.66 mm at a 30 mm source-to-collimator distance, and a resolution of 2.4-mm hot-rods. Using the newly-developed CdTe-based SPECT system, we achieved a spatial resolution of 1.32 mm FWHM at a 30 mm source-to-collimator distance, and a resolution of 1.7-mm hot-rods. The sensitivities at a 30 mm source-to-collimator distance were 115.73 counts/sec/MBq and 83.38 counts/sec/MBq for the CdTe-based SPECT and conventional NaI(Tl)-based SPECT systems, respectively. To compare quantitative measurements in the mouse brain, we calculated the CNR for images from both systems. The CNR from the CdTe-based SPECT system was 4.41, while that from the conventional NaI(Tl)-based SPECT system was 3.11 when the injected striatal dose was 160 Bq/voxel. The CNR increased as a function of injected dose in both systems. The FOM of the CdTe-based SPECT system was superior to that of the conventional NaI(Tl)-based SPECT system, and the highest FOM was achieved with the CdTe-based SPECT at a dose of 40 Bq/voxel injected into the striatum. Thus, a CdTe-based SPECT system showed significant improvement in performance compared with a conventional system in terms of spatial resolution, sensitivity, and CNR, while reducing the radiation dose to the small animal subject. Herein, we discuss the feasibility of a CdTe-based SPECT system for high-resolution
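
    Contrast-to-noise ratios of the kind quoted above are usually computed from region-of-interest statistics of the reconstructed image; a common definition (not necessarily the authors' exact recipe) is sketched below on a purely synthetic slice with an illustrative "striatum" hot region.

        import numpy as np

        def contrast_to_noise_ratio(image, roi_mask, background_mask):
            """CNR = |mean(ROI) - mean(background)| / std(background)."""
            roi = image[roi_mask]
            bg = image[background_mask]
            return abs(roi.mean() - bg.mean()) / bg.std()

        # Toy reconstructed slice (illustrative numbers only)
        rng = np.random.default_rng(3)
        img = rng.normal(loc=10.0, scale=2.0, size=(128, 128))
        yy, xx = np.mgrid[0:128, 0:128]
        striatum = (yy - 64) ** 2 + (xx - 64) ** 2 < 10 ** 2
        img[striatum] += 12.0

        background = (yy - 64) ** 2 + (xx - 64) ** 2 > 40 ** 2
        print("CNR:", round(contrast_to_noise_ratio(img, striatum, background), 2))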

  2. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

    Breast radiation therapy is typically delivered to the patient in either the supine or prone position. Each of these positioning approaches has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will combine the benefits of both supine and prone positioning. In this paper, we present a physics-based deformable breast model that aids in both the design of the breast immobilization device and a control module for the device during everyday positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU-based deformable model is then generated for the breast. A mass-spring-damper approach is employed for the deformable model, with the springs modeled to represent hyperelastic tissue behavior. Each voxel of the CT scan is associated with a mass element, which gives the model its high-resolution nature. The subject-specific elasticity is then estimated from a CT scan in the prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer-designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
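
    A single time step of a mass-spring-damper update of the kind described above can be sketched with a semi-implicit Euler integrator; the linear spring law, stiffness, damping and two-node geometry below are placeholders rather than the GPU implementation or the hyperelastic spring formulation used in the paper.

        import numpy as np

        def step(pos, vel, edges, rest_len, k=200.0, c=2.0, mass=1.0, dt=1e-3):
            """One semi-implicit Euler step of a mass-spring-damper network (no gravity)."""
            forces = np.zeros_like(pos)
            for (i, j), L0 in zip(edges, rest_len):
                d = pos[j] - pos[i]
                length = np.linalg.norm(d) + 1e-12
                direction = d / length
                f = k * (length - L0) * direction                        # linear Hooke spring (placeholder)
                f += c * np.dot(vel[j] - vel[i], direction) * direction  # damping along the spring axis
                forces[i] += f
                forces[j] -= f
            vel = vel + dt * forces / mass
            pos = pos + dt * vel
            return pos, vel

        # Two mass elements joined by one spring, initially stretched to twice its rest length
        pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.2]])
        vel = np.zeros_like(pos)
        edges, rest_len = [(0, 1)], [0.1]
        for _ in range(2000):
            pos, vel = step(pos, vel, edges, rest_len)
        print("separation after 2 s:", np.linalg.norm(pos[1] - pos[0]).round(3))  # relaxes to ~rest length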

  3. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of Earth observation systems able to provide military and government users with metric-resolution images from space. This leadership has allowed ALCATEL to propose for the export market, within a French collaboration framework, a complete space-based system for metric-resolution observation.

  4. Simulation of synoptic and sub-synoptic phenomena over East Africa and Arabian Peninsula for current and future climate using a high resolution AGCM

    KAUST Repository

    Raj, Jerry; Bangalath, Hamza Kunhu; Stenchikov, Georgiy L.

    2015-01-01

    between regional and global scale processes. Our initial results show that HiRAM simulations for the historical period reproduce well the regional climate in East Africa and the Arabian Peninsula, with its complex interplay of regional and global processes. Our

  5. Modern Cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuanzhong

    2002-06-21

    This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The chapters on the early universe involve inflationary theories, particle physics in the early universe, and the creation of matter in the universe. The chapters on dark matter (DM) deal with experimental evidence of DM, neutrino oscillations, DM candidates in supersymmetry models and supergravity, structure formation in the universe, dark-matter search with innovative techniques, and dark energy (cosmological constant), etc. The chapters about structure in the universe consist of the basis for structure formation, quantifying large-scale structure, cosmic background fluctuation, galaxy space distribution, and the clustering of galaxies. In the field of modern observational cosmology, galaxy surveys and cluster surveys are given. The chapter on gravitational lensing describes the lens basics and models, galactic microlensing and galaxy clusters as lenses. The last chapter, 'Numerical simulations in cosmology', deals with spatial and

  6. Simulating quantum effects of cosmological expansion using a static ion trap

    Science.gov (United States)

    Menicucci, Nicolas C.; Olson, S. Jay; Milburn, Gerard J.

    2010-09-01

    We propose a new experimental test bed that uses ions in the collective ground state of a static trap to study the analogue of quantum-field effects in cosmological spacetimes, including the Gibbons-Hawking effect for a single detector in de Sitter spacetime, as well as the possibility of modeling inflationary structure formation and the entanglement signature of de Sitter spacetime. To date, proposals for using trapped ions in analogue gravity experiments have simulated the effect of gravity on the field modes by directly manipulating the ions' motion. In contrast, by associating laboratory time with conformal time in the simulated universe, we can encode the full effect of curvature in the modulation of the laser used to couple the ions' vibrational motion and electronic states. This model simplifies the experimental requirements for modeling the analogue of an expanding universe using trapped ions, and it enlarges the validity of the ion-trap analogy to a wide range of interesting cases.

  7. Estimating cosmological parameters by the simulated data of gravitational waves from the Einstein Telescope

    Science.gov (United States)

    Cai, Rong-Gen; Yang, Tao

    2017-02-01

    We investigate the ability of gravitational waves (GWs) as standard sirens to constrain cosmological parameters, using the third-generation gravitational wave detector, the Einstein Telescope. The binary merger of a neutron star with either a neutron star or a black hole is hypothesized to be the progenitor of short, intense bursts of γ rays; some fraction of those binary mergers could be detected both through electromagnetic radiation and gravitational waves, so that the luminosity distance and the redshift of the source can be determined separately. We simulate luminosity distance and redshift measurements from 100 to 1000 GW events. We use two different algorithms to constrain the cosmological parameters. For the Hubble constant H0 and the dark matter density parameter Ωm, we adopt the Markov chain Monte Carlo approach. We find that with about 500-600 GW events we can constrain the Hubble constant with an accuracy comparable to the combined Planck temperature and Planck lensing results, while for the dark matter density, GWs alone do not seem able to provide constraints as tight as those on the Hubble constant; the sensitivity of 1000 GW events is a little lower than that of Planck data, and more than 1000 events would be required to match the Planck sensitivity. For the more complex dynamical property of dark energy, i.e., the equation of state w, we adopt a powerful nonparametric method, the Gaussian process, which reconstructs w directly from the observed luminosity distances at every redshift. In the low-redshift region, we find that about 700 GW events can give constraints on w(z) comparable to the constraints on a constant w from Planck data combined with Type Ia supernovae. These results show that GWs used as standard sirens to probe cosmological parameters can provide an independent and complementary alternative to current experiments.
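
    The distance-redshift relation exploited above can be made explicit; for a spatially flat universe with matter density \Omega_m and a constant dark-energy equation of state w (w = -1 recovering \Lambda CDM), the luminosity distance inferred from each GW waveform is

        d_L(z) = \frac{(1+z)\,c}{H_0}\int_0^{z}
                 \frac{{\rm d}z'}{\sqrt{\Omega_m (1+z')^{3} + (1-\Omega_m)(1+z')^{3(1+w)}}},

    so every siren with an electromagnetically measured redshift contributes one point on this relation, which the MCMC or Gaussian-process analysis then inverts for H_0, \Omega_m or w(z).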

  8. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    Science.gov (United States)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique on the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulations covering all combinations of two shortwave radiation and four land surface model schemes, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results: the improvement is greater than that obtained by using sophisticated shortwave radiation and land surface model parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn indirectly helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in the high-resolution downscaled climate.

  9. Simulating single-phase and two-phase non-Newtonian fluid flow of a digital rock scanned at high resolution

    Science.gov (United States)

    Tembely, Moussa; Alsumaiti, Ali M.; Jouini, Mohamed S.; Rahimov, Khurshed; Dolatabadi, Ali

    2017-11-01

    Most digital rock physics (DRP) simulations focus on Newtonian fluids and overlook the detailed description of rock-fluid interaction. A better understanding of multiphase non-Newtonian fluid flow at the pore scale is crucial for optimizing enhanced oil recovery (EOR). The Darcy-scale properties of reservoir rocks, such as the capillary pressure curves and the relative permeability, are controlled by the pore-scale behavior of the multiphase flow. In the present work, a volume of fluid (VOF) method coupled with an adaptive meshing technique is used to perform pore-scale simulations on 3D X-ray micro-tomography (CT) images of rock samples. The numerical model is based on the resolution of the Navier-Stokes equations along with a phase fraction equation incorporating a dynamic contact model. The single-phase flow simulations of absolute permeability showed good agreement with the literature benchmark. Subsequently, the code is used to simulate a two-phase flow involving a polymer solution displaying a shear-thinning power-law viscosity. The simulations make it possible to assess the impact of the consistency factor (K) and the behavior index (n), along with the two contact angles (advancing and receding), on the relative permeability.
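
    The shear-thinning power-law rheology referred to above relates the apparent viscosity to the shear rate \dot\gamma through the consistency factor K and the behaviour index n,

        \mu(\dot\gamma) = K\,\dot\gamma^{\,n-1},

    with n < 1 giving the shear-thinning behaviour typical of EOR polymer solutions and n = 1 recovering a Newtonian fluid of viscosity K.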

  10. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  11. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)

    2014-06-15

    Purpose: Our purpose is to evaluate the absorbed dose delivered in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the "IT'IS Foundation". The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into the GATE simulations. The resolution of the phantoms was set to 2 mm3. The most important organ densities were simulated according to the GATE "Materials Database". Several radiopharmaceuticals commonly used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in the kidneys and pancreas (9.29*10^10 and 0.15*10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
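
    For context, S-factors of the kind tabulated here enter the standard MIRD formalism, in which the mean absorbed dose to a target region r_T is a sum over source regions r_S of the time-integrated activity \tilde{A} multiplied by the corresponding S value,

        D(r_T) = \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S),

    so that a Monte Carlo estimate of S(r_T \leftarrow r_S) for each phantom and radionuclide allows organ doses to be computed for any administered activity and biodistribution.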

  12. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    International Nuclear Information System (INIS)

    Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-01-01

    Purpose: Our purpose is to evaluate the absorbed dose delivered in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the "IT'IS Foundation". The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into the GATE simulations. The resolution of the phantoms was set to 2 mm3. The most important organ densities were simulated according to the GATE "Materials Database". Several radiopharmaceuticals commonly used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in the kidneys and pancreas (9.29*10^10 and 0.15*10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the

  13. Investigation into the Formation, Structure, and Evolution of an EF4 Tornado in East China Using a High-Resolution Numerical Simulation

    Science.gov (United States)

    Yao, Dan; Xue, Haile; Yin, Jinfang; Sun, Jisong; Liang, Xudong; Guo, Jianping

    2018-04-01

    Devastating tornadoes in China have received growing attention in recent years, but little is known about their formation, structure, and evolution on the tornadic scale. Most of these tornadoes develop within the East Asian monsoon regime, in an environment quite different from that of tornadoes in the U.S. In this study, we used an idealized, high-resolution (25-m grid spacing) numerical simulation to investigate the deadly EF4 (Enhanced Fujita scale category 4) tornado that occurred on 23 June 2016 and claimed 99 lives in Yancheng, Jiangsu Province. A tornadic supercell developed in the simulation with striking similarities to radar observations. The violent tornado in Funing County was reproduced, exceeding EF4 intensity (74 m s-1), consistent with the on-site damage survey. It was accompanied by a funnel cloud that extended to the surface and exhibited a double-helix vorticity structure. The signal of tornado genesis was found first at the cloud base in the pressure perturbation field, and then developed both upward and downward, with the maximum vertical velocity overlapping the intense vertical vorticity centers. The tornado's demise was accompanied by strong downdrafts overlapping the intense vorticity centers. One of the interesting findings of this work is that a violent surface vortex could be generated and maintained even though the simulation employed a free-slip lower boundary condition. The success of this simulation, despite its idealized numerical approach, provides a means to investigate more historical tornadoes in China.

  14. Correlation between centre offsets and gas velocity dispersion of galaxy clusters in cosmological simulations

    Science.gov (United States)

    Li, Ming-Hua; Zhu, Weishan; Zhao, Dong

    2018-05-01

    Gas is the dominant component of baryonic matter in most galaxy groups and clusters. The spatial offset of the gas centre from the halo centre can be an indicator of the dynamical state of a cluster. Knowledge of such offsets is important for estimating the uncertainties when using clusters as cosmological probes. In this paper, we study the centre offsets r_off between the gas and all the matter within halo systems in ΛCDM cosmological hydrodynamic simulations. We focus on two kinds of centre offsets: the three-dimensional PB offsets between the gravitational potential minimum of the entire halo and the barycentre of the ICM, and the two-dimensional PX offsets between the potential minimum of the halo and the iterative centroid of the projected synthetic X-ray emission of the halo. Haloes at higher redshifts tend to have larger rescaled offsets r_off/r_200 and larger rescaled gas velocity dispersions σ_v^gas/σ_200. For both types of offsets, we find that the correlation between the rescaled centre offsets r_off/r_200 and the rescaled 3D gas velocity dispersion σ_v^gas/σ_200 can be approximately described by a quadratic function, r_off/r_200 ∝ (σ_v^gas/σ_200 − k_2)^2. A Bayesian analysis with an MCMC method is employed to estimate the model parameters. The dependence of the correlation on redshift and on the gas mass fraction is also investigated.
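
    A least-squares version of the quadratic relation above can be fit in a few lines (the paper itself uses a Bayesian MCMC analysis); the "halo catalogue" values below are synthetic placeholders, not simulation data.

        import numpy as np
        from scipy.optimize import curve_fit

        def quadratic_offset(sigma_ratio, k1, k2):
            """r_off / r_200 = k1 * (sigma_v_gas / sigma_200 - k2)**2"""
            return k1 * (sigma_ratio - k2) ** 2

        # Synthetic catalogue: rescaled gas velocity dispersion and rescaled centre offsets
        rng = np.random.default_rng(7)
        sigma_ratio = rng.uniform(0.8, 1.3, size=500)
        r_off = 0.6 * (sigma_ratio - 0.9) ** 2 + rng.normal(scale=0.01, size=500)

        popt, pcov = curve_fit(quadratic_offset, sigma_ratio, r_off, p0=(1.0, 1.0))
        print("best-fit k1, k2:", popt.round(3), "1-sigma:", np.sqrt(np.diag(pcov)).round(3))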

  15. The large-scale environment from cosmological simulations - I. The baryonic cosmic web

    Science.gov (United States)

    Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister

    2018-01-01

    Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe in a practically unbiased way, which could be applied to cosmological constraints. In addition, the gas filaments are classified with their velocity (VWEB) and density (PWEB) fields, which can in principle be connected to radio observations, such as H I surveys. This will help us link radio observations to the large-scale dark matter distribution without bias.

  16. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    Science.gov (United States)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

    Regional climate models (RCMs) are used at increasingly high resolutions with the aim of better representing regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacings approaching the convective-permitting regime are representative examples. These climate runs are computationally very demanding and do not always show improvements; the gains depend on the region, variable and object of study. The gain or loss associated with the use of higher resolution, relative to the forcing model (global climate model or reanalysis) or to RCM simulations at other resolutions, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer-resolution simulations in comparison to their forcing data or coarser-resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between PDFs at different resolutions, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperature and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value relative to ERA-Interim for precipitation, with values around 30% in summer and 20% in the intermediate seasons. When the two RCM resolutions are compared directly, the added value is limited. The regions with the largest precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher-resolution improvement is generally greater than that of the lower resolution for seasons
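
    A distribution-based added-value measure of this flavour can be sketched from a common PDF overlap (skill) score: bin the observed and simulated values identically, sum the bin-wise minima of the normalised frequencies, and compare the scores of the high- and low-resolution runs. The exact normalisation of the DAV metric in the paper may differ from the simple relative difference used below, and the data here are synthetic stand-ins for observations and the two EURO-CORDEX resolutions.

        import numpy as np

        def pdf_overlap_score(obs, sim, bins):
            """Skill score in [0, 1]: sum of bin-wise minima of the two normalised histograms."""
            p_obs, _ = np.histogram(obs, bins=bins)
            p_sim, _ = np.histogram(sim, bins=bins)
            p_obs = p_obs / p_obs.sum()
            p_sim = p_sim / p_sim.sum()
            return np.minimum(p_obs, p_sim).sum()

        # Synthetic daily precipitation (mm/day): observations and two model resolutions
        rng = np.random.default_rng(11)
        obs = rng.gamma(shape=0.6, scale=6.0, size=5000)
        sim_hr = rng.gamma(shape=0.55, scale=6.5, size=5000)   # finer-grid stand-in
        sim_lr = rng.gamma(shape=0.45, scale=8.0, size=5000)   # coarser-grid stand-in

        bins = np.arange(0, 101, 1.0)
        s_hr = pdf_overlap_score(obs, sim_hr, bins)
        s_lr = pdf_overlap_score(obs, sim_lr, bins)
        dav = 100.0 * (s_hr - s_lr) / s_lr     # relative gain of the finer grid, in percent
        print(f"S_hr={s_hr:.3f}  S_lr={s_lr:.3f}  DAV={dav:.1f}%")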

  17. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    Full Text Available In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from the first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both the per-redshift and the per-Gyr dependence. For the latter we find that it scales as the well-known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of the halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new, simple preliminary analysis during the simulation run. Visualization of the simulation is presented as well. At redshifts z = 0−7 the halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 − 10^14 M⊙/h. [176021 'Visible and invisible matter in nearby galaxies: theory and observations']
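
    The (1 + z)^n scaling of the merger rate quoted above can be recovered with a simple least-squares fit in log space. The sketch below fits the index n from synthetic rate measurements; the data points are placeholders, not the simulation output.

```python
import numpy as np

# Illustrative merger-rate measurements per Gyr at a few redshifts.
z = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
rate = 0.05 * (1.0 + z) ** 2.4 * np.exp(np.random.default_rng(2).normal(0.0, 0.05, z.size))

# Fit log(rate) = log(A) + n * log(1 + z); the slope gives the power-law index n.
n, logA = np.polyfit(np.log1p(z), np.log(rate), 1)
print(f"fitted n = {n:.2f}, A = {np.exp(logA):.3f} mergers/Gyr")
```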

  18. SPIRAL2/DESIR high resolution mass separator

    Energy Technology Data Exchange (ETDEWEB)

    Kurtukian-Nieto, T., E-mail: kurtukia@cenbg.in2p3.fr [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Baartman, R. [TRIUMF, 4004 Wesbrook Mall, Vancouver B.C., V6T 2A3 (Canada); Blank, B.; Chiron, T. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Davids, C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Delalee, F. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Duval, M. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); El Abbeir, S.; Fournier, A. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Lunney, D. [CSNSM-IN2P3-CNRS, Université de Paris Sud, F-91405 Orsay (France); Méot, F. [BNL, Upton, Long Island, New York (United States); Serani, L. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Stodel, M.-H.; Varenne, F. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); and others

    2013-12-15

    DESIR is the low-energy part of the SPIRAL2 ISOL facility under construction at GANIL. DESIR includes a high-resolution mass separator (HRS) with a designed resolving power m/Δm of 31,000 for a 1 π-mm-mrad beam emittance, obtained using a high-intensity beam cooling device. The proposed design consists of two 90-degree magnetic dipoles, complemented by electrostatic quadrupoles, sextupoles, and a multipole, arranged in a symmetric configuration to minimize aberrations. A detailed description of the design and results of extensive simulations are given.

  19. Evaluation of high-resolution GRAMM-GRAL (v15.12/v14.8) NOx simulations over the city of Zürich, Switzerland

    Science.gov (United States)

    Berchet, Antoine; Zink, Katrin; Oettl, Dietmar; Brunner, Jürg; Emmenegger, Lukas; Brunner, Dominik

    2017-09-01

    Hourly NOx concentrations were simulated for the city of Zürich, Switzerland, at 10 m resolution for the years 2013-2014. The simulations were generated with the nested mesoscale meteorology and micro-scale dispersion model system GRAMM-GRAL (versions v15.12 and v14.8) by applying a catalogue-based approach. This approach was specifically designed to enable long-term city-wide building-resolving simulations with affordable computation costs. It relies on a discrete set of possible weather situations and corresponding steady-state flow and dispersion patterns that are pre-computed and then matched hourly with actual meteorological observations. The modelling system was comprehensively evaluated using eight sites continuously monitoring NOx concentrations and 65 passive samplers measuring NO2 concentrations on a 2-weekly basis all over the city. The system was demonstrated to fulfil the European Commission standards for air pollution modelling at nearly all sites. The average spatial distribution was very well represented, despite a general tendency to overestimate the observed concentrations, possibly due to a crude representation of traffic-induced turbulence and to underestimated dispersion in the vicinity of buildings. The temporal variability of concentrations explained by varying emissions and weather situations was accurately reproduced on different timescales. The seasonal cycle of concentrations, mostly driven by stronger vertical dispersion in summer than in winter, was very well captured in the 2-year simulation period. Short-term events, such as episodes of particularly high and low concentrations, were detected in most cases by the system, although some unrealistic pollution peaks were occasionally generated, pointing at some limitations of the steady-state approximation. The different patterns of the diurnal cycle of concentrations observed in the city were generally well captured as well. The evaluation confirmed the adequacy of the catalogue
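
    The catalogue-based approach outlined above amounts to pre-computing a flow and dispersion field for each discrete weather situation and, for every hour, selecting the catalogue entry closest to the observed meteorology. The sketch below shows only that matching step; the predictor variables (wind speed, wind direction, stability class), their weights and the toy catalogue are illustrative assumptions, not the GRAMM-GRAL configuration.

```python
import numpy as np

def match_catalogue(obs, catalogue, weights=(1.0, 0.05, 1.0)):
    """Return the index of the pre-computed weather situation closest to the
    hourly observation. obs = (wind_speed [m/s], wind_dir [deg], stability_class);
    the wind-direction term uses the smallest angular difference."""
    speed, wdir, stab = obs
    d_speed = catalogue[:, 0] - speed
    d_dir = (catalogue[:, 1] - wdir + 180.0) % 360.0 - 180.0
    d_stab = catalogue[:, 2] - stab
    w = np.asarray(weights)
    dist = w[0] * d_speed**2 + w[1] * d_dir**2 + w[2] * d_stab**2
    return int(np.argmin(dist))

# Toy catalogue: (speed, direction, stability) for a handful of situations.
catalogue = np.array([[1.0, 20.0, 5], [3.0, 270.0, 4], [6.0, 300.0, 3], [2.0, 90.0, 6]], float)
hourly_obs = (2.5, 280.0, 4)
idx = match_catalogue(hourly_obs, catalogue)
print(f"use pre-computed dispersion field #{idx}")   # -> situation 1 here
```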

  20. Evaluation of high-resolution GRAMM–GRAL (v15.12/v14.8) NOx simulations over the city of Zürich, Switzerland

    Directory of Open Access Journals (Sweden)

    A. Berchet

    2017-09-01

    Full Text Available Hourly NOx concentrations were simulated for the city of Zürich, Switzerland, at 10 m resolution for the years 2013–2014. The simulations were generated with the nested mesoscale meteorology and micro-scale dispersion model system GRAMM–GRAL (versions v15.12 and v14.8) by applying a catalogue-based approach. This approach was specifically designed to enable long-term city-wide building-resolving simulations with affordable computation costs. It relies on a discrete set of possible weather situations and corresponding steady-state flow and dispersion patterns that are pre-computed and then matched hourly with actual meteorological observations. The modelling system was comprehensively evaluated using eight sites continuously monitoring NOx concentrations and 65 passive samplers measuring NO2 concentrations on a 2-weekly basis all over the city. The system was demonstrated to fulfil the European Commission standards for air pollution modelling at nearly all sites. The average spatial distribution was very well represented, despite a general tendency to overestimate the observed concentrations, possibly due to a crude representation of traffic-induced turbulence and to underestimated dispersion in the vicinity of buildings. The temporal variability of concentrations explained by varying emissions and weather situations was accurately reproduced on different timescales. The seasonal cycle of concentrations, mostly driven by stronger vertical dispersion in summer than in winter, was very well captured in the 2-year simulation period. Short-term events, such as episodes of particularly high and low concentrations, were detected in most cases by the system, although some unrealistic pollution peaks were occasionally generated, pointing at some limitations of the steady-state approximation. The different patterns of the diurnal cycle of concentrations observed in the city were generally well captured as well. The evaluation confirmed the adequacy of the catalogue-based approach.

  1. On the evolution of galaxy clustering and cosmological N-body simulations

    International Nuclear Information System (INIS)

    Fall, S.M.

    1978-01-01

    Some aspects of the problem of simulating the evolution of galaxy clustering by N-body computer experiments are discussed. The results of four 1000-body experiments are presented and interpreted on the basis of simple scaling arguments for the gravitational condensation of bound aggregates. They indicate that the internal dynamics of condensed aggregates are negligible in determining the form of the pair-correlation function ξ. On small scales the form of ξ is determined by discreteness effects in the initial N-body distribution and is not sensitive to this distribution. The experiments discussed here test the simple scaling arguments effectively for only one value of the cosmological density parameter (Ω = 1) and one form of the initial fluctuation spectrum (n = 0). (author)
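
    The pair-correlation function ξ(r) discussed above can be estimated by counting particle pairs in radial shells and comparing with the expectation for a uniform distribution in a periodic box. The sketch below is a brute-force O(N^2) version for small particle numbers; it assumes periodic boundary conditions and a toy uniform catalogue, and is meant only to illustrate the estimator, not the 1000-body analysis itself.

```python
import numpy as np

def xi_of_r(pos, box, bins):
    """Natural estimator xi(r) = DD(r) / RR_expected(r) - 1 for particles in a
    periodic cube of side `box`. Brute force, so keep N small."""
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)                  # minimum-image convention
    r = np.sqrt((diff**2).sum(-1))[np.triu_indices(n, k=1)]
    dd, _ = np.histogram(r, bins=bins)
    shell_vol = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
    nbar = n / box**3
    rr = 0.5 * n * nbar * shell_vol                     # expected pair counts if uniform
    return dd / rr - 1.0

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 50.0, size=(1000, 3))            # toy unclustered "catalogue"
bins = np.linspace(1.0, 10.0, 10)
print(xi_of_r(pos, 50.0, bins))                         # ~0 for an unclustered field
```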

  2. Cosmological implication of wide field Sunyaev-Zel'dovich galaxy clusters survey: exploration by simulation

    International Nuclear Information System (INIS)

    Juin, Jean-Baptiste

    2005-01-01

    The goal of my PhD research is to prepare the data analysis of near-future wide-field observations of galaxy clusters detected through the Sunyaev-Zel'dovich effect. I set up a complete chain of original tools to carry out this study. These tools allow me to highlight critical selection effects that have to be taken into account in future analyses. The analysis chain is composed of: a simulation of the observed millimeter sky, state-of-the-art algorithms for extracting SZ galaxy clusters from the observed maps, a statistical model of the selection effects of the whole detection chain and, finally, tools to constrain the cosmological parameters from the catalog of detected SZ sources. I focus on multi-channel experiments equipped with large bolometer cameras and use these tools for a prospective study of the Olimpo experiment. (author)

  3. Cosmological N-body simulations with a tree code - Fluctuations in the linear and nonlinear regimes

    International Nuclear Information System (INIS)

    Suginohara, Tatsushi; Suto, Yasushi; Bouchet, F.R.; Hernquist, L.

    1991-01-01

    The evolution of gravitational systems is studied numerically in a cosmological context using a hierarchical tree algorithm with fully periodic boundary conditions. The simulations employ 262,144 particles, which are initially distributed according to scale-free power spectra. The subsequent evolution is followed in both flat and open universes. With this large number of particles, the discretized system can accurately model the linear phase. It is shown that the dynamics in the nonlinear regime depends on both the spectral index n and the density parameter Ω. In Ω = 1 universes, the evolution of the two-point correlation function ξ agrees well with similarity solutions for ξ greater than about 100, but its slope is steeper in open models with the same n. 28 refs

  4. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    Energy Technology Data Exchange (ETDEWEB)

    Baushev, A.N.; Valle, L. del; Campusano, L.E.; Escala, A.; Muñoz, R.R. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Correo Central, Santiago (Chile); Palma, G.A., E-mail: baushev@gmail.com, E-mail: ldelvalleb@gmail.com, E-mail: luis@das.uchile.cl, E-mail: aescala@das.uchile.cl, E-mail: rmunoz@das.uchile.cl, E-mail: gpalmaquilod@ing.uchile.cl [Departamento de Física, FCFM, Universidad de Chile, Blanco Encalada 2008, Santiago (Chile)

    2017-05-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergence of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubt on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the convergence of N-body simulations is necessary before a 'core-cusp problem' can properly be used to question the validity of the CDM model.

  5. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    International Nuclear Information System (INIS)

    Baushev, A.N.; Valle, L. del; Campusano, L.E.; Escala, A.; Muñoz, R.R.; Palma, G.A.

    2017-01-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergence of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubt on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the convergence of N-body simulations is necessary before a 'core-cusp problem' can properly be used to question the validity of the CDM model.

  6. HBT+: an improved code for finding subhaloes and building merger trees in cosmological simulations

    Science.gov (United States)

    Han, Jiaxin; Cole, Shaun; Frenk, Carlos S.; Benitez-Llambay, Alejandro; Helly, John

    2018-02-01

    Dark matter subhalos are the remnants of (incomplete) halo mergers. Identifying them and establishing their evolutionary links in the form of merger trees is one of the most important applications of cosmological simulations. The HBT (Hierarchical Bound-Tracing) code identifies haloes as they form and tracks their evolution as they merge, simultaneously detecting subhaloes and building their merger trees. Here we present a new implementation of this approach, HBT+, which is much faster, more user-friendly, and more physically complete than the original code. Applying HBT+ to cosmological simulations, we show that both the subhalo mass function and the peak-mass function are well fitted by similar double-Schechter functions. The ratio between the two is highest at the high-mass end, reflecting the resilience of massive subhaloes that experience substantial dynamical friction but limited tidal stripping. The radial distribution of the most-massive subhaloes is more concentrated than the universal radial distribution of lower mass subhaloes. Subhalo finders that work in configuration space tend to underestimate the masses of massive subhaloes, an effect that is stronger near the host centre. This may explain, at least in part, the excess of massive subhaloes in galaxy cluster centres inferred from recent lensing observations. We demonstrate that the peak-mass function is a powerful diagnostic of merger tree defects, and the merger trees constructed using HBT+ do not suffer from the missing or switched links that tend to afflict merger trees constructed from more conventional halo finders. We make the HBT+ code publicly available.
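
    The double-Schechter form used above to fit the subhalo and peak-mass functions can be written down compactly. The sketch below defines it and evaluates it on a toy mass grid; the parameter values are placeholders, not the HBT+ best-fitting values.

```python
import numpy as np

def double_schechter(m, m_star, phi1, alpha1, phi2, alpha2):
    """dN/dln(m) for a double-Schechter function: two power laws sharing one
    exponential cut-off at m_star (m can be the sub-to-host mass ratio or peak mass)."""
    x = m / m_star
    return (phi1 * x**alpha1 + phi2 * x**alpha2) * np.exp(-x)

# Illustrative evaluation on a logarithmic grid of sub-to-host mass ratios.
m = np.logspace(-5, 0, 50)
dn_dlnm = double_schechter(m, m_star=0.1, phi1=0.1, alpha1=-0.9, phi2=0.01, alpha2=-1.6)
print(dn_dlnm[:5])
```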

  7. Cosmological hydrodynamical simulations of galaxy clusters: X-ray scaling relations and their evolution

    Science.gov (United States)

    Truong, N.; Rasia, E.; Mazzotta, P.; Planelles, S.; Biffi, V.; Fabjan, D.; Beck, A. M.; Borgani, S.; Dolag, K.; Gaspari, M.; Granato, G. L.; Murante, G.; Ragone-Figueroa, C.; Steinborn, L. K.

    2018-03-01

    We analyse cosmological hydrodynamical simulations of galaxy clusters to study the X-ray scaling relations between total masses and observable quantities such as X-ray luminosity, gas mass, X-ray temperature, and Y_X. Three sets of simulations are performed with an improved version of the smoothed particle hydrodynamics GADGET-3 code. These consider the following: non-radiative gas, star formation and stellar feedback, and the addition of feedback by active galactic nuclei (AGN). We select clusters with M_500 > 10^14 M⊙ E(z)^-1, mimicking the typical selection of Sunyaev-Zeldovich samples. This yields a mass range large enough to enable robust fitting of the relations even at z ~ 2. The results of the analysis show a general agreement with observations. The values of the slope of the mass-gas mass and mass-temperature relations at z = 2 are 10 per cent lower with respect to z = 0 due to the applied mass selection, in the former case, and to the effect of early mergers in the latter. We investigate the impact of the slope variation on the study of the evolution of the normalization. We conclude that cosmological studies through scaling relations should be limited to the redshift range z = 0-1, where we find that the slope, the scatter, and the covariance matrix of the relations are stable. The scaling between mass and Y_X is confirmed to be the most robust relation, being almost independent of the gas physics. At higher redshifts, the scaling relations are sensitive to the inclusion of AGN, which influences low-mass systems. The detailed study of these objects will be crucial to evaluate the AGN effect on the ICM.
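
    The mass-observable relations discussed above are typically fitted as power laws in log-log space, e.g. log M_500 = log A + B log(Y_X / Y_pivot), often after scaling out the self-similar E(z) dependence. The sketch below recovers the slope and normalisation with a simple least-squares fit on synthetic clusters; the pivot, scatter and sample are illustrative assumptions, not the simulated catalogue or the paper's fitting method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cluster sample: masses in units of 1e14 Msun, Y_X drawn from a toy
# power law M ~ Y_X^(3/5) with log-normal scatter (slope used only as an example).
m500 = rng.uniform(1.0, 20.0, 300)                       # E(z)*M_500 / 1e14 Msun
y_x = 4.0 * (m500 / 3.0) ** (5.0 / 3.0)                  # arbitrary units and pivot
y_x *= np.exp(rng.normal(0.0, 0.1, m500.size))           # intrinsic scatter

# Fit log10(M) = log10(A) + B * log10(Y_X / Y_pivot).
y_pivot = 4.0
B, logA = np.polyfit(np.log10(y_x / y_pivot), np.log10(m500), 1)
resid = np.log10(m500) - (logA + B * np.log10(y_x / y_pivot))
print(f"slope B = {B:.2f}, normalisation A = {10**logA:.2f}e14 Msun, scatter = {resid.std():.3f} dex")
```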

  8. A simulation study of high-resolution x-ray computed tomography imaging using irregular sampling with a photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2013-01-01

    The purpose of this study was to improve the spatial resolution of x-ray computed tomography (CT) imaging with a photon-counting detector using an irregular sampling method. A geometric shift model of the detector was proposed to produce the irregular sampling pattern and increase the number of samplings in the radial direction. The conventional micro-x-ray CT system and the novel system with the geometric shift model of the detector were simulated using analytic and Monte Carlo simulations. The projections were reconstructed using filtered back-projection (FBP), algebraic reconstruction technique (ART), and total variation (TV) minimization algorithms, and the reconstructed images were compared in terms of normalized root-mean-square error (NRMSE), full-width at half-maximum (FWHM), and coefficient of variation (COV). The results showed that image quality improved in the novel system, and that the NRMSE, FWHM, and COV were lower for images reconstructed using the TV minimization technique in the novel system. The irregular sampling method produced by the geometric shift model of the detector can improve the spatial resolution and reduce artifacts and noise in reconstructed images obtained from an x-ray CT system with a photon-counting detector. -- Highlights: • We proposed a novel sampling method based on a spiral pattern to improve the spatial resolution. • The novel sampling method increased the number of samplings in the radial direction. • The spatial resolution was improved by the novel sampling method

  9. X-ray clusters in a cold dark matter + lambda universe: A direct, large-scale, high-resolution, hydrodynamic simulation

    Science.gov (United States)

    Cen, Renyue; Ostriker, Jeremiah P.

    1994-01-01

    A new, three-dimensional, shock-capturing, hydrodynamic code is utilized to determine the distribution of hot gas in a cold dark matter (CDM) + Λ model universe. Periodic boundary conditions are assumed: a box of size 85/h Mpc, with cell size 0.31/h Mpc, is followed in a simulation with 270^3 ≈ 10^7.3 cells. We adopt Ω = 0.45, λ = 0.55, h ≡ H/(100 km/s/Mpc) = 0.6, and then, from the Cosmic Background Explorer (COBE) and light-element nucleosynthesis, σ_8 = 0.77, Ω_b = 0.043. We identify the X-ray emitting clusters in the simulation box, and compute the luminosity function in several wavelength bands, the temperature function and estimated sizes, as well as the evolution of these quantities with redshift. This open model succeeds in matching local observations of clusters, in contrast to the standard Ω = 1 CDM model, which fails. It predicts an order-of-magnitude decline in the number density of bright (hν = 2-10 keV) clusters from z = 0 to z = 2, in contrast to a slight increase in the number density for the standard Ω = 1 CDM model. This COBE-normalized CDM + Λ model produces approximately the same number of X-ray clusters having L_x > 10^43 erg/s as observed. The background radiation field at 1 keV due to clusters is approximately the observed background which, after correction for numerical effects, again indicates that the model is consistent with observations.

  10. Feasibility of performing high resolution cloud-resolving simulations of historic extreme events: The San Fruttuoso (Liguria, italy) case of 1915.

    Science.gov (United States)

    Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco

    2017-04-01

    Recent studies show that highly localized and persistent back-building mesoscale convective systems represent some of the most dangerous flash-flood-producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in frequency or intensity of these types of events, as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since they are likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations with 1 km horizontal grid spacing of a historic extreme event that occurred over Liguria (Italy): the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs, as they are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields that are consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the Reanalysis members may not be adequately representing the amount of cool air over the Po Plain outflowing into the Liguria Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to Reanalysis products, unconventional data, such as historical meteorological bulletins, newspapers and even photographs, can be very valuable sources of knowledge in the reconstruction of past extreme events.

  11. ACCRETION SHOCKS IN CLUSTERS OF GALAXIES AND THEIR SZ SIGNATURE FROM COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Molnar, Sandor M.; Hearn, Nathan; Haiman, Zoltan; Bryan, Greg; Evrard, August E.; Lake, George

    2009-01-01

    Cold dark matter (CDM) hierarchical structure formation models predict the existence of large-scale accretion shocks between the virial and turnaround radii of clusters of galaxies. Kocsis et al. suggest that the Sunyaev-Zel'dovich signal associated with such shocks might be observable with the next generation radio interferometer, ALMA (Atacama Large Millimeter Array). We study the three-dimensional distribution of accretion shocks around individual clusters of galaxies drawn from adaptive mesh refinement (AMR) and smoothed particle hydrodynamics simulations of ΛCDM (dark energy dominated CDM) models. In relaxed clusters, we find two distinct sets of shocks. One set ('virial shocks'), with Mach numbers of 2.5-4, is located at radii 0.9-1.3 R_vir, where R_vir is the spherical infall estimate of the virial radius, covering about 40%-50% of the total surface area around clusters at these radii. Another set of stronger shocks ('external shocks') is located farther out, at about 3 R_vir, with large Mach numbers (∼100), covering about 40%-60% of the surface area. We simulate SZ surface brightness maps of relaxed massive galaxy clusters drawn from high-resolution AMR runs, and conclude that ALMA should be capable of detecting the virial shocks in massive clusters of galaxies. More simulations are needed to improve estimates of astrophysical noise and to determine optimal observational strategies.

  12. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    Science.gov (United States)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of the FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structures at small scales. Most previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. The code provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and results of other codes, validating the QP as a crucial player in structure formation at small scales.
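
    The Quantum Potential entering the FDM dynamics is the Madelung term Q = -(ħ/m)^2/2 · ∇²√ρ/√ρ, whose negative gradient acts as an extra acceleration. The sketch below evaluates it on a periodic 1D density grid with finite differences in arbitrary code units; it is a conceptual illustration of the term only, not the SPH discretisation implemented in AX-GADGET.

```python
import numpy as np

def quantum_potential_1d(rho, dx, hbar_over_m=1.0):
    """Madelung quantum potential Q = -(hbar/m)^2 / 2 * lap(sqrt(rho)) / sqrt(rho)
    on a periodic 1D grid, using a second-order finite-difference Laplacian."""
    s = np.sqrt(rho)
    lap = (np.roll(s, -1) - 2.0 * s + np.roll(s, 1)) / dx**2
    return -0.5 * hbar_over_m**2 * lap / s

def qp_acceleration_1d(rho, dx, hbar_over_m=1.0):
    """Extra acceleration -dQ/dx (central differences, periodic)."""
    q = quantum_potential_1d(rho, dx, hbar_over_m)
    return -(np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)

# Toy overdensity in arbitrary code units: the QP pushes against small-scale collapse.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
rho = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.05) ** 2)
acc = qp_acceleration_1d(rho, dx=x[1] - x[0])
print(acc[120:136])
```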

  13. High-resolution ultrasonic spectroscopy

    Directory of Open Access Journals (Sweden)

    V. Buckin

    2018-03-01

    Full Text Available High-resolution ultrasonic spectroscopy (HR-US is an analytical technique for direct and non-destructive monitoring of molecular and micro-structural transformations in liquids and semi-solid materials. It is based on precision measurements of ultrasonic velocity and attenuation in analysed samples. The application areas of HR-US in research, product development, and quality and process control include analysis of conformational transitions of polymers, ligand binding, molecular self-assembly and aggregation, crystallisation, gelation, characterisation of phase transitions and phase diagrams, and monitoring of chemical and biochemical reactions. The technique does not require optical markers or optical transparency. The HR-US measurements can be performed in small sample volumes (down to droplet size, over broad temperature range, at ambient and elevated pressures, and in various measuring regimes such as automatic temperature ramps, titrations and measurements in flow.

  14. High Resolution Thermometry for EXACT

    Science.gov (United States)

    Panek, J. S.; Nash, A. E.; Larson, M.; Mulders, N.

    2000-01-01

    High Resolution Thermometers (HRTs) based on SQUID detection of the magnetization of a paramagnetic salt or a metal alloy have been commonly used for sub-nanokelvin temperature resolution in low-temperature physics experiments. The main applications to date have been for temperature ranges near the lambda point of He-4 (2.177 K). These thermometers made use of materials such as Cu(NH4)2Br4·2H2O, GdCl3, or PdFe. None of these materials is suitable for EXACT, which will explore the region of the He-3/He-4 tricritical point at 0.87 K. The experimental requirements and properties of several candidate paramagnetic materials will be presented, as well as preliminary test results.

  15. Numerical Convergence in the Dark Matter Halos Properties Using Cosmological Simulations

    Science.gov (United States)

    Mosquera-Escobar, X. E.; Muñoz-Cuartas, J. C.

    2017-07-01

    Nowadays, the accepted cosmological model is the so-called Λ-Cold Dark Matter (ΛCDM) model. In this model, the universe is considered to be homogeneous and isotropic, composed of diverse components such as dark matter and dark energy, the latter being the most abundant. Dark matter plays an important role because it is responsible for the generation of gravitational potential wells, commonly called dark matter halos. In the end, dark matter halos are characterized by a set of parameters (mass, radius, concentration, spin parameter); these parameters provide valuable information for different studies, such as galaxy formation, gravitational lensing, etc. In this work we use the publicly available code Gadget2 to perform cosmological simulations to find to what extent the numerical parameters of the simulations, such as the gravitational softening, integration time step and force calculation accuracy, affect the physical properties of the dark matter halos. We ran a suite of simulations where these parameters were varied in a systematic way in order to accurately explore their impact on the structural parameters of dark matter halos. We show that variations in the numerical parameters affect structural parameters of dark matter halos such as the concentration and virial radius. We show that these modifications emerge when structures become non-linear (at redshift 2 for the scale of our simulations), so that the variations affect the formation and evolution of halo structure mainly at later cosmic times. As a quantitative result, we propose the most appropriate values for the numerical parameters of the simulations, such that they do not affect the properties of the halos that form: for the force calculation accuracy we suggest values smaller than or equal to 0.0001, for the integration time step values smaller than or equal to 0.005, and for the gravitational softening we propose a value equal to 1/60th of the mean interparticle distance; these values correspond to the
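
    The recommendation above ties the gravitational softening to the mean interparticle distance of the simulation. The sketch below computes that quantity from the box size and particle number and collects the suggested tolerances in one place; the dictionary keys are descriptive labels of my own, only loosely corresponding to the Gadget2 parameter-file keywords, and the values are the suggestions quoted in the abstract, not universal prescriptions.

```python
# Minimal helper: softening as a fraction of the mean interparticle separation,
# following the 1/60 recommendation quoted above (all values are only suggestions).
def suggested_parameters(box_size_mpc_h, n_particles_per_side, softening_fraction=1.0 / 60.0):
    mean_sep = box_size_mpc_h / n_particles_per_side          # comoving Mpc/h
    return {
        "mean_interparticle_distance": mean_sep,
        "gravitational_softening": softening_fraction * mean_sep,
        "force_accuracy": 1.0e-4,                              # ErrTolForceAcc-like tolerance
        "integration_timestep_accuracy": 5.0e-3,               # ErrTolIntAccuracy-like tolerance
    }

params = suggested_parameters(box_size_mpc_h=100.0, n_particles_per_side=256)
print(params)
```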

  16. High resolution tomographic instrument development

    International Nuclear Information System (INIS)

    1992-01-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational

  17. High resolution tomographic instrument development

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.

  18. High resolution tomographic instrument development

    Energy Technology Data Exchange (ETDEWEB)

    1992-01-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.

  19. IXO and the Missing Baryons: The Need for High Resolution Spectroscopy

    Science.gov (United States)

    Nicastro, Fabrizio

    2009-01-01

    About half of the baryons in the Universe are currently eluding detection. Hydrodynamical simulations for the formation of Large Scale Structures (LSSs) predict that these baryons reside, at low redshift, in a tenuous filamentary web of highly ionized matter: the Warm-Hot Intergalactic Medium (WHIM). The WHIM has probably been progressively enriched with metals, during phases of intense starburst and AGN activity, up to possibly solar metallicity (Cen & Ostriker, 2006), and should therefore shine and/or absorb in the soft X-ray band, via electronic transitions from the most abundant metals. The importance of detecting and studying the WHIM lies not only in the possibility of finally making a complete census of all baryons in the Universe, but also in the possibility of (a) directly measuring the metallicity history of the Universe, and so investigating metal transport in the Universe and galaxy-IGM and AGN-IGM feedback mechanisms, (b) directly measuring the heating history of the Universe, and so understanding the process of LSS formation and shocks, and (c) performing cosmological parameter measurements through a 3D two-point angular correlation function analysis of the WHIM filaments. Detecting and studying the WHIM with the current X-ray instrumentation, however, is extremely challenging, because of the low sensitivity and resolution of the Chandra and XMM-Newton gratings, and the very low 'grasp' of all currently available imaging spectrometers. IXO, instead, thanks to its large grating effective area (> 1000 cm^2 at 0.5 keV) and high spectral resolution (R > 2500 at 0.5 keV), will be perfectly suited to attack the problem in a systematic way. Here we demonstrate that high-resolution gratings are crucial for this kind of study and show that the IXO gratings will be able to detect more than 300-700 OVII WHIM filaments along about 70 lines of sight, in less than 0.7.

  20. Investigating the physics and environment of Lyman limit systems in cosmological simulations

    Science.gov (United States)

    Erkal, Denis

    2015-07-01

    In this work, I investigate the properties of Lyman limit systems (LLSs) using state-of-the-art zoom-in cosmological galaxy formation simulations with on-the-fly radiative transfer, which includes both the cosmic UV background (UVB) and local stellar sources. I compare the simulation results to observations of the incidence frequency of LLSs and the H I column density distribution function over the redshift range z = 2-5 and find good agreement. I explore the connection between LLSs and their host haloes and find that LLSs reside in haloes with a wide range of halo masses, with a nearly constant covering fraction within a virial radius. Over the range z = 2-5, I find that more than half of the LLSs reside in haloes at the low-mass end of this range, and I test a simple model which encapsulates many of their properties. I confirm that LLSs have a characteristic absorption length given by the Jeans length and that they are in photoionization equilibrium at low column densities. Finally, I investigate the self-shielding of LLSs against the UVB and explore how the non-sphericity of LLSs affects the photoionization rate at a given N_HI. I find that at z ≈ 3, LLSs have an optical depth of unity at a column density of ~10^18 cm^-2 and that this is the column density which characterizes the onset of self-shielding.

  1. Baseline metal enrichment from Population III star formation in cosmological volume simulations

    Science.gov (United States)

    Jaacks, Jason; Thompson, Robert; Finkelstein, Steven L.; Bromm, Volker

    2018-04-01

    We utilize the hydrodynamic and N-body code GIZMO coupled with our newly developed sub-grid Population III (Pop III) Legacy model, designed specifically for cosmological volume simulations, to study the baseline metal enrichment from Pop III star formation at z > 7. In this idealized numerical experiment, we only consider Pop III star formation. We find that our model Pop III star formation rate density (SFRD), which peaks at ~10^-3 M⊙ yr^-1 Mpc^-3 near z ~ 10, agrees well with previous numerical studies and is consistent with the observed estimates for Pop II SFRDs. The mean Pop III metallicity rises smoothly from z = 25 to 7, but does not reach the critical metallicity value, Z_crit = 10^-4 Z⊙, required for the Pop III to Pop II transition in star formation mode until z ≃ 7. This suggests that, while individual haloes can suppress in situ Pop III star formation, the external enrichment is insufficient to globally terminate Pop III star formation. The maximum enrichment from Pop III star formation in star-forming dark matter haloes is Z ~ 10^-2 Z⊙, whereas the minimum found in externally enriched haloes is Z ≳ 10^-7 Z⊙. Finally, mock observations of our simulated IGM enriched with Pop III metals produce equivalent widths similar to observations of an extremely metal-poor damped Lyman alpha system at z = 7.04, which is thought to be enriched by Pop III star formation only.
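
    The Pop III to Pop II transition described above is controlled by a critical metallicity Z_crit = 10^-4 Z⊙: gas below it forms Pop III stars, gas above it forms Pop II stars. A minimal decision helper is sketched below; it only illustrates that threshold logic, not the sub-grid Pop III Legacy model itself, and the solar metal mass fraction is an assumed reference value.

```python
import numpy as np

Z_SUN = 0.0134            # solar metal mass fraction (a commonly used reference value)
Z_CRIT = 1.0e-4 * Z_SUN   # critical metallicity for the Pop III -> Pop II transition

def star_formation_mode(gas_metallicity):
    """Return 'PopIII' where the gas metal mass fraction is below Z_crit, else 'PopII'."""
    z = np.asarray(gas_metallicity)
    return np.where(z < Z_CRIT, "PopIII", "PopII")

# Example: a few gas cells spanning the enrichment range quoted above.
cells = np.array([0.0, 1e-7, 1e-4, 1e-2]) * Z_SUN   # 0 to 10^-2 Zsun
print(star_formation_mode(cells))                   # ['PopIII' 'PopIII' 'PopII' 'PopII']
```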

  2. High-Resolution Mass Spectrometers

    Science.gov (United States)

    Marshall, Alan G.; Hendrickson, Christopher L.

    2008-07-01

    Over the past decade, mass spectrometry has been revolutionized by access to instruments of increasingly high mass-resolving power. For small molecules up to ~400 Da (e.g., drugs, metabolites, and various natural organic mixtures ranging from foods to petroleum), it is possible to determine elemental compositions (C_c H_h N_n O_o S_s P_p ...) of thousands of chemical components simultaneously from accurate mass measurements (the same can be done up to 1000 Da if additional information is included). At higher mass, it becomes possible to identify proteins (including posttranslational modifications) from proteolytic peptides, as well as lipids, glycoconjugates, and other biological components. At even higher mass (~100,000 Da or higher), it is possible to characterize posttranslational modifications of intact proteins and to map the binding surfaces of large biomolecule complexes. Here we review the principles and techniques of the highest-resolution analytical mass spectrometers (time-of-flight and Fourier transform ion cyclotron resonance and orbitrap mass analyzers) and describe some representative high-resolution applications.

  3. Efficient simulations of large-scale structure in modified gravity cosmologies with comoving Lagrangian acceleration

    Science.gov (United States)

    Valogiannis, Georgios; Bean, Rachel

    2017-05-01

    We implement an adaptation of the COLA approach, a hybrid scheme that combines Lagrangian perturbation theory with an N-body approach, to model nonlinear collapse in chameleon and symmetron modified gravity models. Gravitational screening is modeled effectively through the attachment of a suppression factor to the linearized Klein-Gordon equations. The adapted COLA approach is benchmarked with respect to an N-body code both for the Λ cold dark matter (ΛCDM) scenario and for the modified gravity theories. It is found to perform well in the estimation of the dark matter power spectra, with consistency of 1% to k ~ 2.5 h/Mpc. Redshift space distortions are shown to be effectively modeled through a Lorentzian parametrization with a velocity dispersion fit to the data. We find that COLA performs less well in predicting the halo mass functions but has consistency, within the 1σ uncertainties of our simulations, in the relative changes to the mass function induced by the modified gravity models relative to ΛCDM. The results demonstrate that COLA, proposed to enable accurate and efficient nonlinear predictions for ΛCDM, can be effectively applied to a wider set of cosmological scenarios with intriguing properties, for which clustering behavior needs to be understood for upcoming surveys such as LSST, DESI, Euclid, and WFIRST.

  4. Holocene climate change in North Africa and the end of the African humid period - results of new high-resolution transient simulations with the MPI-ESM 1.3

    Science.gov (United States)

    Dallmeyer, Anne; Claussen, Martin; Lorenz, Stephan

    2017-04-01

    The Max-Planck-Institute for Meteorology has recently undertaken high-resolution transient Holocene simulations using the fully-coupled Earth System Model MPI-ESM 1.3. The simulations cover the last 8000 years and are forced not only by reconstructed Holocene orbital variations and atmospheric greenhouse gas concentrations, but also by recent compilations of Holocene volcanic aerosol distributions, variations in spectral solar irradiance, stratospheric ozone and land-use change. The simulations reveal the ubiquitous "Holocene conundrum": simulated global mean temperatures increase during the mid-Holocene and stay constant during the late Holocene. Simulated mid-Holocene near-surface temperatures are too cold in large parts of the world. Simulated precipitation, however, agrees much better with reconstruction than temperatures do. Likewise simulated global biome pattern fit reconstructions nicely, except for North Western America. First results of these simulations are presented with the main focus on the North African monsoon region. The amplitude of the mid-Holocene African Humid Period (AHP) is well captured in terms of precipitation and vegetation cover, so is the south-ward transgression of the termination of the AHP seen in reconstructions. The Holocene weakening and southward retreat of the North African monsoon as well as changes in the monsoon dynamic including shifts in the seasonal cycle and their relation to the locally varying termination of the AHP are discussed in detail. Members of the Hamburg Holocene Team: Jürgen Bader (1), Sebastian Bathiany (2), Victor Brovkin (1), Martin Claussen (1,3), Traute Crüger (1), Roberta D'agostino (1), Anne Dallmeyer (1), Sabine Egerer (1), Vivienne Groner (1), Matthias Heinze (1), Tatiana Ilyina (1), Johann Jungclaus (1), Thomas Kleinen (1), Alexander Lemburg (1), Stephan Lorenz (1), Thomas Raddatz (1), Hauke Schmidt (1), Gerhard Schmiedl (3), Bjorn Stevens (1), Claudia Timmreck (1), Matthew Toohey (4) (1) Max

  5. High-resolution intravital microscopy.

    Directory of Open Access Journals (Sweden)

    Volker Andresen

    Full Text Available Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy, the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology.

  6. High-Resolution Intravital Microscopy

    Science.gov (United States)

    Andresen, Volker; Pollok, Karolin; Rinnenthal, Jan-Leo; Oehme, Laura; Günther, Robert; Spiecker, Heinrich; Radbruch, Helena; Gerhard, Jenny; Sporbert, Anje; Cseresnyes, Zoltan; Hauser, Anja E.; Niesner, Raluca

    2012-01-01

    Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy - the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology

  7. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications, but it remains a challenge to improve spatial resolution and temporal frequency simultaneously due to the technical limits of current satellite observation systems. To this end, much R&D effort has been devoted over the years, leading to successes in roughly two directions: on the one hand, super-resolution, pan-sharpening and related methods can effectively enhance spatial resolution and produce good visual effects, but they hardly preserve spectral signatures and therefore have limited analytical value; on the other hand, temporal interpolation is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is performed by introducing an LDA model to map high and low resolution pixels to each other. Temporal change information is then captured by comparing the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to predefined temporal change patterns for each type of ground object. Finally the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that usage of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest and land use investigation.

  8. Evolution of N/O ratios in galaxies from cosmological hydrodynamical simulations

    Science.gov (United States)

    Vincenzo, Fiorenzo; Kobayashi, Chiaki

    2018-04-01

    We study the redshift evolution of the gas-phase O/H and N/O abundances, both (i) for individual ISM regions within single spatially-resolved galaxies and (ii) when dealing with average abundances in the whole ISM of many unresolved galaxies. We make use of a cosmological hydrodynamical simulation including detailed chemical enrichment, which properly takes into account the variety of different stellar nucleosynthetic sources of O and N in galaxies. We identify 33 galaxies in the simulation, lying within dark matter halos with virial mass in the range 10^11 ≤ M_DM ≤ 10^13 M⊙ and reconstruct how they evolved with redshift. For the local and global measurements, the observed increasing trend of N/O at high O/H can be explained, respectively, (i) as the consequence of metallicity gradients which have settled in the galaxy interstellar medium, where the innermost galactic regions have the highest O/H abundances and the highest N/O ratios, and (ii) as the consequence of an underlying average mass-metallicity relation that galaxies obey as they evolve across cosmic epochs, where, at any redshift, less massive galaxies have lower average O/H and N/O ratios than the more massive ones. We do not find a strong dependence on the environment. For both local and global relations, the predicted N/O-O/H relation is due to the mostly secondary origin of N in stars. We also predict that the O/H and N/O gradients in the galaxy interstellar medium gradually flatten as functions of redshift, with the average N/O ratios being strictly coupled with the galaxy star formation history. Because N production strongly depends on O abundances, we obtain a universal relation for the N/O-O/H abundance diagram whether we consider average abundances of many unresolved galaxies put together or many abundance measurements within a single spatially-resolved galaxy.

  9. Simulations of the WFIRST Supernova Survey and Forecasts of Cosmological Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Hounsell, R. [Illinois U., Urbana, Astron. Dept.; Scolnic, D. [Chicago U., KICP; Foley, R. J. [UC, Santa Cruz; Kessler, R. [Chicago U., KICP; Miranda, V. [Pennsylvania U.; Avelino, A. [Harvard-Smithsonian Ctr. Astrophys.; Bohlin, R. C. [Baltimore, Space Telescope Sci.; Filippenko, A. V. [UC, Berkeley; Frieman, J. [Fermilab; Jha, S. W. [Rutgers U., Piscataway; Kelly, P. L. [UC, Berkeley; Kirshner, R. P. [Xerox, Palo Alto; Mandel, K. [Harvard-Smithsonian Ctr. Astrophys.; Rest, A. [Baltimore, Space Telescope Sci.; Riess, A. G. [Johns Hopkins U.; Rodney, S. A. [South Carolina U.; Strolger, L. [Baltimore, Space Telescope Sci.

    2017-02-06

    The Wide Field InfraRed Survey Telescope (WFIRST) was the highest-ranked large space-based mission of the 2010 New Worlds, New Horizons decadal survey. It is now a NASA mission in formulation with a planned launch in the mid-2020's. A primary mission objective is to precisely constrain the nature of dark energy through multiple probes, including Type Ia supernovae (SNe Ia). Here, we present the first realistic simulations of the WFIRST SN survey based on current hardware specifications and using open-source tools. We simulate SN light curves and spectra as viewed by the WFIRST wide-field channel (WFC) imager and integral field channel (IFC) spectrometer, respectively. We examine 11 survey strategies with different time allocations between the WFC and IFC, two of which are based upon the strategy described by the WFIRST Science Definition Team, which measures SN distances exclusively from IFC data. We propagate statistical and, crucially, systematic uncertainties to predict the dark energy task force figure of merit (DETF FoM) for each strategy. The increase in FoM values with SN search area is limited by the overhead times for each exposure. For IFC-focused strategies the largest individual systematic uncertainty is the wavelength-dependent calibration uncertainty, whereas for WFC-focused strategies, it is the intrinsic scatter uncertainty. We find that the best IFC-focused and WFC-exclusive strategies have comparable FoM values. Even without improvements to other cosmological probes, the WFIRST SN survey has the potential to increase the FoM by more than an order of magnitude from the current values. Although the survey strategies presented here have not been fully optimized, these initial investigations are an important step in the development of the final hardware design and implementation of the WFIRST mission.

  10. Understanding Large-scale Structure in the SSA22 Protocluster Region Using Cosmological Simulations

    Science.gov (United States)

    Topping, Michael W.; Shapley, Alice E.; Steidel, Charles C.; Naoz, Smadar; Primack, Joel R.

    2018-01-01

    We investigate the nature and evolution of large-scale structure within the SSA22 protocluster region at z = 3.09 using cosmological simulations. A redshift histogram constructed from current spectroscopic observations of the SSA22 protocluster reveals two separate peaks at z = 3.065 (blue) and z = 3.095 (red). Based on these data, we report updated overdensity and mass calculations for the SSA22 protocluster. We find δ_b,gal = 4.8 ± 1.8 and δ_r,gal = 9.5 ± 2.0 for the blue and red peaks, respectively, and δ_t,gal = 7.6 ± 1.4 for the entire region. These overdensities correspond to masses of M_b = (0.76 ± 0.17) × 10^15 h^−1 M_⊙, M_r = (2.15 ± 0.32) × 10^15 h^−1 M_⊙, and M_t = (3.19 ± 0.40) × 10^15 h^−1 M_⊙ for the blue, red, and total peaks, respectively. We use the Small MultiDark Planck (SMDPL) simulation to identify comparably massive z ∼ 3 protoclusters, and uncover the underlying structure and ultimate fate of the SSA22 protocluster. For this analysis, we construct mock redshift histograms for each simulated z ∼ 3 protocluster, quantitatively comparing them with the observed SSA22 data. We find that the observed double-peaked structure in the SSA22 redshift histogram corresponds not to a single coalescing cluster, but rather to the proximity of a ∼10^15 h^−1 M_⊙ protocluster and at least one >10^14 h^−1 M_⊙ cluster progenitor. Such associations in the SMDPL simulation are easily understood within the framework of hierarchical clustering of dark matter halos. We finally find that the opportunity to observe such a phenomenon is incredibly rare, with an occurrence rate of 7.4 h^3 Gpc^−3. Based on data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation.
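
    The overdensities quoted above are counts-in-cells quantities, delta_gal = N_obs/N_expected - 1, which are converted to masses through the survey volume, the mean matter density, and a bias (plus redshift-space) correction. The sketch below shows only the skeleton of such a conversion with placeholder numbers; it is not the authors' calibration.

```python
def galaxy_overdensity(n_obs, n_expected):
    """Counts-in-cells overdensity: delta_gal = N_obs / <N> - 1."""
    return n_obs / n_expected - 1.0

def rough_mass(delta_gal, volume_mpc3, rho_mean, bias=3.0):
    """Very rough mass estimate M = rho_mean * V * (1 + delta_m), with
    delta_m ~ delta_gal / bias (linear bias only; real analyses also
    correct for redshift-space distortions)."""
    delta_m = delta_gal / bias
    return rho_mean * volume_mpc3 * (1.0 + delta_m)

rho_m = 4.0e10   # mean comoving matter density [M_sun / Mpc^3], assumed
delta = galaxy_overdensity(n_obs=40, n_expected=8)
print("delta_gal =", delta)
print("M ~ %.2e M_sun" % rough_mass(delta, volume_mpc3=1.0e4, rho_mean=rho_m))
```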

  11. Section on High Resolution Optical Imaging (HROI)

    Data.gov (United States)

    Federal Laboratory Consortium — The Section on High Resolution Optical Imaging (HROI) develops novel technologies for studying biological processes at unprecedented speed and resolution. Research...

  12. High Resolution Orientation Distribution Function

    DEFF Research Database (Denmark)

    Schmidt, Søren; Gade-Nielsen, Nicolai Fog; Høstergaard, Martin

    2012-01-01

    from the deformed material. The underlying mathematical formalism supports all crystallographic space groups and reduces the problem to solving a (large) set of linear equations. An implementation on multi-core CPUs and Graphical Processing Units (GPUs) is discussed along with an example on simulated...

  13. THE FATE OF DWARF GALAXIES IN CLUSTERS AND THE ORIGIN OF INTRACLUSTER STARS. II. COSMOLOGICAL SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Martel, Hugo [Departement de physique, de genie physique et d' optique, Universite Laval, Quebec, QC (Canada); Barai, Paramita [Osservatorio Astronomico di Trieste, I-34143 Trieste (Italy); Brito, William [Centre de Recherche en Astrophysique du Quebec, C.P. 6128, Succ. Centre-Ville, Montreal, QC (Canada)

    2012-09-20

    We combine an N-body simulation algorithm with a subgrid treatment of galaxy formation, mergers, and tidal destruction, and an observed conditional luminosity function Φ(L|M), to study the origin and evolution of galactic and extragalactic light inside a cosmological volume of size (100 Mpc)^3, in a concordance ΛCDM model. This algorithm simulates the growth of large-scale structures and the formation of clusters, the evolution of the galaxy population in clusters, the destruction of galaxies by mergers and tides, and the evolution of the intracluster light (ICL). We find that destruction of galaxies by mergers dominates over destruction by tides by about an order of magnitude at all redshifts. However, tidal destruction is sufficient to produce ICL fractions f_ICL that are sufficiently high to match observations. Our simulation produces 18 massive clusters (M_cl > 10^14 M_⊙) with values of f_ICL ranging from 1% to 58% at z = 0. There is a weak trend of f_ICL to increase with cluster mass. The bulk of the ICL (∼60%) is provided by intermediate galaxies of total masses 10^11-10^12 M_⊙ and stellar masses 6 × 10^8 M_⊙ to 3 × 10^10 M_⊙ that were tidally destroyed by even more massive galaxies. The contribution of low-mass galaxies to the ICL is small and the contribution of dwarf galaxies is negligible, even though, by numbers, most galaxies that are tidally destroyed are dwarfs. Tracking clusters back in time, we find that their values of f_ICL tend to increase over time, but can experience sudden changes that are sometimes non-monotonic. These changes occur during major mergers involving clusters of comparable masses but very different intracluster luminosities. Most of the tidal destruction events take place in the central regions of clusters. As a result, the ICL is more centrally concentrated than the galactic light. Our results support tidal destruction of intermediate-mass galaxies as a plausible scenario for the origin of the ICL.

  14. THE FATE OF DWARF GALAXIES IN CLUSTERS AND THE ORIGIN OF INTRACLUSTER STARS. II. COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Martel, Hugo; Barai, Paramita; Brito, William

    2012-01-01

    We combine an N-body simulation algorithm with a subgrid treatment of galaxy formation, mergers, and tidal destruction, and an observed conditional luminosity function Φ(L|M), to study the origin and evolution of galactic and extragalactic light inside a cosmological volume of size (100 Mpc)^3, in a concordance ΛCDM model. This algorithm simulates the growth of large-scale structures and the formation of clusters, the evolution of the galaxy population in clusters, the destruction of galaxies by mergers and tides, and the evolution of the intracluster light (ICL). We find that destruction of galaxies by mergers dominates over destruction by tides by about an order of magnitude at all redshifts. However, tidal destruction is sufficient to produce ICL fractions f_ICL that are sufficiently high to match observations. Our simulation produces 18 massive clusters (M_cl > 10^14 M_⊙) with values of f_ICL ranging from 1% to 58% at z = 0. There is a weak trend of f_ICL to increase with cluster mass. The bulk of the ICL (∼60%) is provided by intermediate galaxies of total masses 10^11-10^12 M_⊙ and stellar masses 6 × 10^8 M_⊙ to 3 × 10^10 M_⊙ that were tidally destroyed by even more massive galaxies. The contribution of low-mass galaxies to the ICL is small and the contribution of dwarf galaxies is negligible, even though, by numbers, most galaxies that are tidally destroyed are dwarfs. Tracking clusters back in time, we find that their values of f_ICL tend to increase over time, but can experience sudden changes that are sometimes non-monotonic. These changes occur during major mergers involving clusters of comparable masses but very different intracluster luminosities. Most of the tidal destruction events take place in the central regions of clusters. As a result, the ICL is more centrally concentrated than the galactic light. Our results support tidal destruction of intermediate-mass galaxies as a plausible scenario for the origin of the ICL.
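
    As a toy illustration of the bookkeeping behind f_ICL in this kind of subgrid model, the sketch below assigns luminosities to surviving and tidally destroyed galaxies from an assumed Schechter-like distribution (a stand-in for the conditional luminosity function Φ(L|M) used above, not the actual Φ(L|M)) and computes the intracluster light fraction of one toy cluster.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_schechter(n, alpha=-1.1, l_star=1.0, l_min=0.1, l_max=10.0):
    """Draw luminosities from a truncated Schechter-like distribution by
    rejection sampling (toy stand-in for a conditional luminosity function)."""
    p = lambda l: (l / l_star) ** alpha * np.exp(-l / l_star)
    p_max = p(l_min)          # p(l) is decreasing for alpha < 0
    out = []
    while len(out) < n:
        l = rng.uniform(l_min, l_max, size=4 * n)
        accept = rng.uniform(0.0, p_max, size=l.size) < p(l)
        out.extend(l[accept][: n - len(out)])
    return np.array(out)

l_surviving = sample_schechter(200)   # galaxies still intact
l_destroyed = sample_schechter(30)    # tidally destroyed; their light -> ICL

f_icl = l_destroyed.sum() / (l_destroyed.sum() + l_surviving.sum())
print(f"toy f_ICL = {f_icl:.2f}")
```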

  15. Simulating the Growth of a Disk Galaxy and its Supermassive Black Hole in a Cosmological Context

    International Nuclear Information System (INIS)

    Levine, Robyn Deborah; JILA, Boulder

    2008-01-01

    Supermassive black holes (SMBHs) are ubiquitous in the centers of galaxies. Their formation and subsequent evolution is inextricably linked to that of their host galaxies, and the study of galaxy formation is incomplete without the inclusion of SMBHs. The present work seeks to understand the growth and evolution of SMBHs through their interaction with the host galaxy and its environment. In the first part of the thesis (Chap. 2 and 3), we combine a simple semi-analytic model of outflows from active galactic nuclei (AGN) with a simulated dark matter density distribution to study the impact of SMBH feedback on cosmological scales. We find that constraints can be placed on the kinetic efficiency of such feedback using observations of the filling fraction of the Lyα forest. We also find that AGN feedback is energetic enough to redistribute baryons over cosmological distances, having potentially significant effects on the interpretation of cosmological data which are sensitive to the total matter density distribution (e.g. weak lensing). However, truly assessing the impact of AGN feedback in the universe necessitates large-dynamic range simulations with extensive treatment of baryonic physics to first model the fueling of SMBHs. In the second part of the thesis (Chap. 4-6) we use a hydrodynamic adaptive mesh refinement simulation to follow the growth and evolution of a typical disk galaxy hosting a SMBH, in a cosmological context. The simulation covers a dynamical range of 10 million allowing us to study the transport of matter and angular momentum from super-galactic scales all the way down to the outer edge of the accretion disk around the SMBH. Focusing our attention on the central few hundred parsecs of the galaxy, we find the presence of a cold, self-gravitating, molecular gas disk which is globally unstable. The global instabilities drive super-sonic turbulence, which maintains local stability and allows gas to fuel a SMBH without first fragmenting completely

  16. Development of AMS high resolution injector system

    International Nuclear Information System (INIS)

    Bao Yiwen; Guan Xialing; Hu Yueming

    2008-01-01

    A high resolution injector system was developed for the Beijing HI-13 tandem accelerator AMS facility. The high resolution energy achromatic system consists of an electrostatic analyzer and a magnetic analyzer, whose mass resolution can reach 600 with a transmission better than 80%. (authors)

  17. On the origin of the Hubble sequence: I. Insights on galaxy color migration from cosmological simulations

    International Nuclear Information System (INIS)

    Cen, Renyue

    2014-01-01

    An analysis of more than 3000 galaxies resolved at better than 114 h^−1 pc at z = 0.62 in a 'LAOZI' cosmological adaptive mesh refinement hydrodynamic simulation is performed and insights are gained on star formation quenching and color migration. The vast majority of red galaxies are found to be within three virial radii of a larger galaxy at the onset of quenching, when the specific star formation rate experiences the sharpest decline to fall below ∼10^−2-10^−1 Gyr^−1 (depending on the redshift). Thus, we shall call this mechanism 'environment quenching', which encompasses satellite quenching. Two physical processes are largely responsible: Ram pressure stripping first disconnects the galaxy from the cold gas supply on large scales, followed by a longer period of cold gas starvation taking place in a high velocity-dispersion environment, in which during the early part of the process, the existing dense cold gas in the central region (≤10 kpc) is consumed by in situ star formation. On average, quenching is found to be more efficient (i.e., a larger fraction of galaxies being quenched) but not faster (i.e., the duration being weakly dependent on the environment) in a denser environment. Throughout this quenching period and the ensuing one in the red sequence, galaxies follow nearly vertical tracks in the color-stellar mass diagram. In contrast, individual galaxies of all masses grow most of their stellar masses in the blue cloud, prior to the onset of quenching, and progressively more massive blue galaxies with already relatively older mean stellar ages continue to enter the red sequence. Consequently, correlations among observables of red galaxies—such as the age-mass relation—are largely inherited from their blue progenitors at the onset of quenching. While the color makeup of the entire galaxy population strongly depends on the environment, which is a direct result of environment quenching, physical properties of blue

  18. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flow on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases, an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), mass conservation equation, and total energy equation. An operator splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme are cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson

  19. Ultra-high resolution AMOLED

    Science.gov (United States)

    Wacyk, Ihor; Prache, Olivier; Ghosh, Amal

    2011-06-01

    AMOLED microdisplays continue to show improvement in resolution and optical performance, enhancing their appeal for a broad range of near-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming. eMagin's latest development of an HDTV+ resolution technology integrates an OLED pixel of 3.2 × 9.6 microns in size on a 0.18 micron CMOS backplane to deliver significant new functionality as well as the capability to implement a 1920×1200 microdisplay in a 0.86" diagonal area. In addition to the conventional matrix addressing circuitry, the HDTV+ display includes a very low-power, low-voltage-differential-signaling (LVDS) serialized interface to minimize cable and connector size as well as electromagnetic emissions (EMI), an on-chip set of look-up tables for digital gamma correction, and a novel pulse-width-modulation (PWM) scheme that together with the standard analog control provides a total dimming range of 0.05 cd/m² to 2000 cd/m² in the monochrome version. The PWM function also enables an impulse drive mode of operation that significantly reduces motion artifacts in high speed scene changes. An internal 10-bit DAC ensures that a full 256 gamma-corrected gray levels are available across the entire dimming range, resulting in a measured dynamic range exceeding 20 bits. This device has been successfully tested for operation at frame rates ranging from 30 Hz up to 85 Hz. This paper describes the operational features and detailed optical and electrical test results for the new AMOLED WUXGA resolution microdisplay.
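
    The ">20-bit" dynamic range quoted above follows from combining the global luminance dimming range with the per-setting gray scale; a quick back-of-the-envelope check of that arithmetic (not eMagin's own calculation, and assuming the full 256-level gray scale is available at every dimming setting):

```python
import math

luminance_range = 2000.0 / 0.05              # 0.05 to 2000 cd/m^2
dimming_bits = math.log2(luminance_range)    # ~15.3 bits of global dimming
gray_bits = 8                                # 256 gamma-corrected gray levels

print(f"dimming range: {luminance_range:.0f}:1 (~{dimming_bits:.1f} bits)")
print(f"combined     : ~{dimming_bits + gray_bits:.1f} bits")
```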

  20. Mathematical cosmology

    International Nuclear Information System (INIS)

    Wainwright, J.

    1990-01-01

    The workshop on mathematical cosmology was devoted to four topics of current interest. This report contains a brief discussion of the historical background of each topic and a concise summary of the content of each talk. The topics were: the observational cosmology program, the cosmological perturbation program, isotropic singularities, and the evolution of Bianchi cosmologies. (author)

  1. UNDERSTANDING BLACK HOLE MASS ASSEMBLY VIA ACCRETION AND MERGERS AT LATE TIMES IN COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Kulier, Andrea; Ostriker, Jeremiah P.; Lackner, Claire N.; Cen, Renyue; Natarajan, Priyamvada

    2015-01-01

    Accretion is thought to primarily contribute to the mass accumulation history of supermassive black holes (SMBHs) throughout cosmic time. While this may be true at high redshifts, at lower redshifts and for the most massive black holes (BHs) mergers themselves might add significantly to the mass budget. We explore this in two disparate environments—a massive cluster and a void region. We evolve SMBHs from 4 > z > 0 using merger trees derived from hydrodynamical cosmological simulations of these two regions, scaled to the observed value of the stellar mass fraction to account for overcooling. Mass gains from gas accretion proportional to bulge growth and BH-BH mergers are tracked, as are BHs that remain ''orbiting'' due to insufficient dynamical friction in a merger remnant, as well as those that are ejected due to gravitational recoil. We find that gas accretion remains the dominant source of mass accumulation in almost all SMBHs; mergers contribute 2.5% ± 0.1% for all SMBHs in the cluster and 1.0% ± 0.1% in the void since z = 4. However, mergers are significant for massive SMBHs. The fraction of mass accumulated from mergers for central BHs generally increases for larger values of the host bulge mass: in the void, the fraction is 2% at M_*,bul = 10^10 M_⊙, increasing to 4% at M_*,bul ≳ 10^11 M_⊙, and in the cluster it is 4% at M_*,bul = 10^10 M_⊙ and 23% at 10^12 M_⊙. We also find that the total mass in orbiting SMBHs is negligible in the void, but significant in the cluster, in which a potentially detectable 40% of SMBHs and ≈8% of the total SMBH mass (where the total includes central, orbiting, and ejected SMBHs) is found orbiting at z = 0. The existence of orbiting and ejected SMBHs requires modification of the Soltan argument. We estimate this correction to the integrated accreted mass density of SMBHs to be in the range 6%-21%, with a mean value of 11% ± 3%. Quantifying the growth due to mergers at these late times

  2. Dynamic high resolution imaging of rats

    International Nuclear Information System (INIS)

    Miyaoka, R.S.; Lewellen, T.K.; Bice, A.N.

    1990-01-01

    A positron emission tomograph with the sensitivity and resolution to do dynamic imaging of rats would be an invaluable tool for biological researchers. In this paper, the authors determine the biological criteria for dynamic positron emission imaging of rats. To be useful, 3 mm isotropic resolution and 2-3 second time binning were necessary characteristics for such a dedicated tomograph. A single plane in which two objects of interest could be imaged simultaneously was considered acceptable. Multi-layered detector designs were evaluated as a possible solution to the dynamic imaging and high resolution imaging requirements. The University of Washington photon history generator was used to generate data to investigate a tomograph's sensitivity to true, scattered and random coincidences for varying detector ring diameters. Intrinsic spatial uniformity advantages of multi-layered detector designs over conventional detector designs were investigated using a Monte Carlo program. As a result, a modular three-layered detector prototype is being developed. A module will consist of a layer of five 3.5 mm wide crystals and two layers of six 2.5 mm wide crystals. The authors believe adequate sampling can be achieved with a stationary detector system using these modules. Economical crystal decoding strategies have been investigated and simulations have been run to investigate optimum light channeling methods for block decoding strategies. An analog block decoding method has been proposed and will be experimentally evaluated to determine whether it can provide the desired performance

  3. Explicit Cloud Nucleation from Arbitrary Mixtures of Aerosol Types and Sizes Using an Ultra-Efficient In-Line Aerosol Bin Model in High-Resolution Simulations of Hurricanes

    Science.gov (United States)

    Walko, R. L.; Ashby, T.; Cotton, W. R.

    2017-12-01

    The fundamental role of atmospheric aerosols in the process of cloud droplet nucleation is well known, and there is ample evidence that the concentration, size, and chemistry of aerosols can strongly influence microphysical, thermodynamic, and ultimately dynamic properties and evolution of clouds and convective systems. With the increasing availability of observation- and model-based environmental representations of different types of anthropogenic and natural aerosols, there is increasing need for models to be able to represent which aerosols nucleate and which do not in supersaturated conditions. However, this is a very complex process that involves competition for water vapor between multiple aerosol species (chemistries) and different aerosol sizes within each species. Attempts have been made to parameterize the nucleation properties of mixtures of different aerosol species, but it is very difficult or impossible to represent all possible mixtures that may occur in practice. As part of a modeling study of the impact of anthropogenic and natural aerosols on hurricanes, we developed an ultra-efficient aerosol bin model to represent nucleation in a high-resolution atmospheric model that explicitly represents cloud- and subcloud-scale vertical motion. The bin model is activated at any time and location in a simulation where supersaturation occurs and is potentially capable of activating new cloud droplets. The bins are populated from the aerosol species that are present at the given time and location and by multiple sizes from each aerosol species according to a characteristic size distribution, and the chemistry of each species is represented by its absorption or adsorption characteristics. The bin model is integrated in time increments that are smaller than that of the atmospheric model in order to temporally resolve the peak supersaturation, which determines the total nucleated number. Even though on the order of 100 bins are typically utilized, this leads only
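
    One widely used way to decide which (size, composition) bins activate at the resolved peak supersaturation is kappa-Koehler theory, in which each bin has a critical supersaturation and the bins below the peak value nucleate droplets. The sketch below shows that generic bookkeeping; it is not the authors' bin model, and the bin sizes, kappa values and number concentrations are assumptions.

```python
import math

def kelvin_A(T=293.15, sigma=0.072, Mw=0.018, rho_w=1000.0, R=8.314):
    """Kelvin coefficient A = 4*sigma*Mw / (R*T*rho_w), in metres."""
    return 4.0 * sigma * Mw / (R * T * rho_w)

def critical_supersaturation(d_dry, kappa, T=293.15):
    """kappa-Koehler critical supersaturation (fractional), using the
    standard approximation ln(Sc) ~ sqrt(4 A^3 / (27 kappa d^3))."""
    A = kelvin_A(T)
    return math.expm1(math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3)))

# Assumed example bins: (dry diameter [m], kappa, number conc. [cm^-3])
bins = [(50e-9, 0.6, 300.0),    # sulfate-like
        (100e-9, 0.6, 200.0),
        (100e-9, 0.05, 150.0),  # weakly hygroscopic (dust/soot-like)
        (200e-9, 1.2, 50.0)]    # sea-salt-like

s_peak = 0.003   # resolved peak supersaturation (0.3%), assumed

activated = sum(n for d, k, n in bins
                if critical_supersaturation(d, k) < s_peak)
print(f"activated number concentration: {activated:.0f} cm^-3")
```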

  4. Theoretical cosmology

    International Nuclear Information System (INIS)

    Raychaudhuri, A.K.

    1979-01-01

    The subject is covered in chapters entitled: introduction; Newtonian gravitation and cosmology; general relativity and relativistic cosmology; analysis of observational data; relativistic models not obeying the cosmological principle; microwave radiation background; thermal history of the universe and nucleosynthesis; singularity of cosmological models; gravitational constant as a field variable; cosmological models based on Einstein-Cartan theory; cosmological singularity in two recent theories; fate of perturbations of isotropic universes; formation of galaxies; baryon symmetric cosmology; assorted topics (including extragalactic radio sources; Mach principle). (U.K.)

  5. Astrophysical cosmology

    Science.gov (United States)

    Bardeen, J. M.

    The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe.

  6. Astrophysical cosmology

    International Nuclear Information System (INIS)

    Bardeen, J.M.

    1986-01-01

    The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe. 47 refs

  7. High resolution sequence stratigraphy in China

    International Nuclear Information System (INIS)

    Zhang Shangfeng; Zhang Changmin; Yin Yanshi; Yin Taiju

    2008-01-01

    Since high resolution sequence stratigraphy was introduced into China by DENG Hong-wen in 1995, it has gone through two development stages in China: the beginning stage of theoretical research, and the stage of combined theoretical development and application; it is now entering a stage of theoretical maturity and wide application. Practice has shown that high resolution sequence stratigraphy plays an increasingly important role in the exploration and development of oil and gas in Chinese continental oil-bearing basins, and the research field has spread to the exploration of coal, uranium and other stratiform deposits. However, the theory of high resolution sequence stratigraphy still has some shortcomings and should be improved in many aspects. The authors point out that high resolution sequence stratigraphy should be characterized quantitatively and modelled with computer techniques. (authors)

  8. High resolution CT of the chest

    Energy Technology Data Exchange (ETDEWEB)

    Barneveld Binkhuysen, F H [Eemland Hospital (Netherlands), Dept. of Radiology

    1996-12-31

    Compared to conventional CT, high resolution CT (HRCT) shows several extra anatomical structures which might affect both diagnosis and therapy. The extra anatomical structures were discussed briefly in this article. (18 refs.).

  9. High-resolution spectrometer at PEP

    International Nuclear Information System (INIS)

    Weiss, J.M.; HRS Collaboration.

    1982-01-01

    A description is presented of the High Resolution Spectrometer experiment (PEP-12) now running at PEP. The advanced capabilities of the detector are demonstrated with first physics results expected in the coming months

  10. High-resolution downscaling for hydrological management

    Science.gov (United States)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then only downscaling to the kilometre-scale (convection permitting) those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  11. Structure of high-resolution NMR spectra

    CERN Document Server

    Corio, PL

    2012-01-01

    Structure of High-Resolution NMR Spectra provides the principles, theories, and mathematical and physical concepts of high-resolution nuclear magnetic resonance spectra. The book presents the elementary theory of magnetic resonance; the quantum mechanical theory of angular momentum; the general theory of steady state spectra; and multiple quantum transitions, double resonance and spin echo experiments. Physicists, chemists, and researchers will find the book a valuable reference text.

  12. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    Science.gov (United States)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force and the direct summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
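
    In PM+Tree hybrids of this kind, the short-range (Tree/PP) force is truncated by a smooth function of a split scale r_s so that it complements the long-range PM contribution. Below is a generic sketch of the Gaussian-type split used in TreePM codes such as Gadget-2; it is shown for orientation and is not necessarily the exact kernel used in PHoToNs.

```python
import math

def short_range_factor(r, r_s):
    """Fraction of the Newtonian force G*m/r^2 assigned to the Tree/PP part
    under a Gaussian PM/tree force split with split scale r_s."""
    x = r / (2.0 * r_s)
    return math.erfc(x) + (2.0 * x / math.sqrt(math.pi)) * math.exp(-x * x)

for r in (0.1, 0.5, 1.0, 2.0, 5.0):   # separations in units of r_s
    print(f"r = {r:3.1f} r_s  ->  short-range fraction = {short_range_factor(r, 1.0):.4f}")
```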

  13. Peculiar velocity effects in high-resolution microwave background experiments

    International Nuclear Information System (INIS)

    Challinor, Anthony; Leeuwen, Floor van

    2002-01-01

    We investigate the impact of peculiar velocity effects due to the motion of the solar system relative to the cosmic microwave background (CMB) on high resolution CMB experiments. It is well known that on the largest angular scales the combined effects of Doppler shifts and aberration are important; the lowest Legendre multipoles of total intensity receive power from the large CMB monopole in transforming from the CMB frame. On small angular scales aberration dominates and is shown here to lead to significant distortions of the total intensity and polarization multipoles in transforming from the rest frame of the CMB to the frame of the solar system. We provide convenient analytic results for the distortions as series expansions in the relative velocity of the two frames, but at the highest resolutions a numerical quadrature is required. Although many of the high resolution multipoles themselves are severely distorted by the frame transformations, we show that their statistical properties distort by only an insignificant amount. Therefore, the cosmological parameter estimation is insensitive to the transformation from the CMB frame (where theoretical predictions are calculated) to the rest frame of the experiment
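
    For orientation, the frame transformation underlying these distortions is the textbook special-relativistic aberration and Doppler shift for an observer moving with speed β = v/c, where θ is the angle between the line of sight and the direction of motion (quoted here as background, not taken from the paper):

    \[
    \cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta},
    \qquad
    \nu' = \gamma\,\nu\,(1 + \beta\cos\theta),
    \qquad
    \gamma = (1 - \beta^2)^{-1/2},
    \]

    with β ≈ 1.23 × 10^-3 for the solar-system motion relative to the CMB, so the maximum aberration deflection is of order β, roughly 4 arcminutes.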

  14. Zeolites - a high resolution electron microscopy study

    International Nuclear Information System (INIS)

    Alfredsson, V.

    1994-10-01

    High resolution transmission electron microscopy (HRTEM) has been used to investigate a number of zeolites (EMT, FAU, LTL, MFI and MOR) and a member of the mesoporous M41S family. The electron optical artefact, manifested as a dark spot in the projected centre of the large zeolite channels, caused by insufficient transfer of certain reflections in the objective lens, has been explained. The artefact severely hinders observation of materials confined in the zeolite channels and cavities. It is shown how to circumvent the artefact problem and how to image confined materials in spite of disturbance caused by the artefact. Image processing by means of a Wiener filter has been applied for removal of the artefact. The detailed surface structure of FAU has been investigated. Comparison of experimental micrographs with images simulated using different surface models indicates that the surface can be terminated in different ways depending on synthesis methods. The dealuminated form of FAU (USY) is covered by an amorphous region. Platinum incorporated in FAU has a propensity to aggregate in the (111) twin planes, probably due to a local difference in cage structure with more spacious cages. It is shown that platinum is intra-zeolitic as opposed to being located on the external surface of the zeolite crystal. This could be deduced from tomography of ultra-thin sections, among other observations. HRTEM studies of the mesoporous MCM-41 show that the pores have a hexagonal shape and also support the proposed mechanistic model, which involves a cooperative formation of a mesophase including the silicate species as well as the surfactant. 66 refs, 24 figs

  15. High-Resolution Numerical Simulation and Assessment of the Offshore Wind Energy Resource in Tianjin

    Institute of Scientific and Technical Information of China (English)

    杨艳娟; 李明财; 任雨; 熊明明

    2011-01-01

    Wind energy is a rapidly growing alternative energy source and has been widely developed around the world over the last 10 years. Offshore wind power generation is now becoming a new trend in the development of future wind power generation because wind tends to blow faster and more uniformly over offshore areas than over land. Accurate assessment of the wind energy resource is fundamental and valuable for wind energy developers and potential wind energy users because it allows them to choose a general area of estimated high wind resource for more detailed examination. However, it is difficult to make direct observations of meteorological variables over offshore areas, which calls for numerical simulation at high resolution in order to derive the availability and potential of wind energy. The distribution of wind energy resources at 1 km horizontal resolution and 10 m vertical resolution in the Tianjin coastal areas was simulated using the numerical models MM5 and Calmet to derive the wind energy potential over the offshore areas. In addition, the simulation quality was assessed by comparing the simulated values with observations from three wind-measurement towers over the same period. Results show that the annual mean wind speed and the trend of the daily mean wind speed were simulated well, and the relative deviations between observations and simulated values at the three wind-measurement towers were 7.11%, 12.99%, and 6.14%, respectively. This suggests that the models are effective in assessing the offshore wind energy resource in Tianjin. The long-term wind energy resource was obtained by comparing the simulated year's mean wind speed with the mean wind speed of the most recent 20 years. It was found that the annual mean wind speed is 6.6-7.0 m/s, and the annual mean wind power density is above 340 W/m², which indicates that the offshore wind energy resource in Tianjin is exploitable and could be used for grid-connected power generation. The assessment shows that the MM5/Calmet model is capable of providing reasonable wind status
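
    For orientation, the kinetic power flux through a unit area perpendicular to the wind is P/A = (1/2) ρ v^3. Evaluating this at the annual mean speed alone under-estimates the long-term mean power density because v^3 is convex, which is why distribution-averaged values such as the >340 W/m² quoted above exceed the naive estimate. A quick check with standard sea-level air density (an arithmetic illustration, not the paper's calculation):

```python
rho_air = 1.225   # kg/m^3, standard air density near sea level

def power_density(v):
    """Instantaneous kinetic power flux through a unit area, in W/m^2."""
    return 0.5 * rho_air * v**3

v_mean = 6.8      # m/s, middle of the 6.6-7.0 m/s range quoted above
print(f"0.5*rho*v_mean^3        = {power_density(v_mean):.0f} W/m^2")

# A crude two-state wind with the same mean speed gives a higher mean power,
# illustrating why distribution-averaged power densities are larger:
print(f"mean over 4 and 9.6 m/s = {(power_density(4.0) + power_density(9.6)) / 2:.0f} W/m^2")
```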

  16. A high resolution portable spectroscopy system

    International Nuclear Information System (INIS)

    Kulkarni, C.P.; Vaidya, P.P.; Paulson, M.; Bhatnagar, P.V.; Pande, S.S.; Padmini, S.

    2003-01-01

    Full text: This paper describes the system details of a High Resolution Portable Spectroscopy System (HRPSS) developed at the Electronics Division, BARC. The system can be used for laboratory-class, high-resolution nuclear spectroscopy applications. The HRPSS consists of a specially designed compact NIM bin, with built-in power supplies, accommodating a low power, high resolution MCA, and an on-board embedded computer for spectrum building and communication. A NIM-based spectroscopy amplifier and an HV module for detector bias are integrated (plug-in) in the bin. The system communicates with a host PC via a serial link. Along with a laptop PC and a portable HPGe detector, the HRPSS offers laboratory-class performance for portable applications

  17. Application of the Ewald method to cosmological N-body simulations

    International Nuclear Information System (INIS)

    Hernquist, L.; Suto, Yasushi; Bouchet, F.R.

    1990-03-01

    Fully periodic boundary conditions are incorporated into a gridless cosmological N-body code using the Ewald method. It is shown that the linear evolution of density fluctuations agrees well with analytic calculations, contrary to the case of quasi-periodic boundary conditions where the fundamental mode grows too rapidly. The implementation of fully periodic boundaries is of particular importance to relative comparisons of methods based on hierarchical tree algorithms and more traditional schemes using Fourier techniques such as PM and P³M codes. (author)
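
    The identity at the heart of the Ewald method is the splitting of the slowly converging periodic 1/r sum into a short-range piece summed directly in real space and a smooth piece summed over the reciprocal lattice (standard form, quoted for context):

    \[
    \frac{1}{r} \;=\; \frac{\operatorname{erfc}(\alpha r)}{r} \;+\; \frac{\operatorname{erf}(\alpha r)}{r},
    \]

    where α controls how the work is divided between the two sums. In tree codes the resulting difference between the exact periodic force and the single nearest-image force is typically tabulated once on a grid and applied as a correction to each particle or cell interaction.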

  18. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
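
    A minimal sketch of the bookkeeping behind local time stepping: each element receives a stable local step from a CFL-type condition, and elements are then binned into power-of-two levels of the global step so that neighbouring levels stay synchronized. This is illustrative only and is not the implementation described above; the element sizes and wave speed are assumed values.

```python
import math

def lts_levels(element_sizes, wave_speed, cfl=0.5):
    """Assign each element a power-of-two time-step level.

    dt_local = cfl * h / c; level p means the element is advanced with
    dt_global / 2**p, so small elements take more, smaller sub-steps."""
    dt_local = [cfl * h / wave_speed for h in element_sizes]
    dt_global = max(dt_local)
    levels = [math.ceil(math.log2(dt_global / dt)) for dt in dt_local]
    return dt_global, levels

sizes = [100.0, 80.0, 50.0, 10.0, 2.0]   # element sizes in metres (assumed)
dt_g, lv = lts_levels(sizes, wave_speed=3000.0)
print("global dt = %.2e s" % dt_g)
print("levels    =", lv)                 # [0, 1, 1, 4, 6]: refined elements sub-cycle
```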

  19. Mathematical cosmology

    International Nuclear Information System (INIS)

    Landsberg, P.T.; Evans, D.A.

    1977-01-01

    The subject is dealt with in chapters, entitled: cosmology - some fundamentals; Newtonian gravitation - some fundamentals; the cosmological differential equation - the particle model and the continuum model; some simple Friedmann models; the classification of the Friedmann models; the steady-state model; universe with pressure; optical effects of the expansion according to various theories of light; optical observations and cosmological models. (U.K.)

  20. Formation of globular cluster candidates in merging proto-galaxies at high redshift: a view from the FIRE cosmological simulations

    Science.gov (United States)

    Kim, Ji-hoon; Ma, Xiangcheng; Grudić, Michael Y.; Hopkins, Philip F.; Hayward, Christopher C.; Wetzel, Andrew; Faucher-Giguère, Claude-André; Kereš, Dušan; Garrison-Kimmel, Shea; Murray, Norman

    2018-03-01

    Using a state-of-the-art cosmological simulation of merging proto-galaxies at high redshift from the FIRE project, with explicit treatments of star formation and stellar feedback in the interstellar medium, we investigate the formation of star clusters and examine one of the formation hypotheses of present-day metal-poor globular clusters. We find that frequent mergers in high-redshift proto-galaxies could provide a fertile environment to produce long-lasting bound star clusters. The violent merger event disturbs the gravitational potential and pushes a large gas mass of ≳ 10^5-6 M_⊙ collectively to high density, at which point it rapidly turns into stars before stellar feedback can stop star formation. The high dynamic range of the reported simulation is critical in realizing such dense star-forming clouds with a small dynamical time-scale, t_ff ≲ 3 Myr, shorter than most stellar feedback time-scales. Our simulation then allows us to trace how clusters could become virialized and tightly bound to survive for up to ∼420 Myr till the end of the simulation. Because the cluster's tightly bound core was formed in one short burst, and the nearby older stars originally grouped with the cluster tend to be preferentially removed, at the end of the simulation the cluster has a small age spread.
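
    The quoted t_ff ≲ 3 Myr translates directly into a minimum gas density through the standard free-fall time t_ff = sqrt(3π / (32 G ρ)); a quick check with standard constants (an arithmetic illustration, assuming a pure-hydrogen gas for the number density):

```python
import math

G   = 6.674e-11   # m^3 kg^-1 s^-2
MYR = 3.156e13    # seconds per Myr
M_H = 1.67e-27    # kg, hydrogen atom mass

def density_for_tff(t_ff):
    """Invert t_ff = sqrt(3*pi / (32*G*rho)) for the density (kg/m^3)."""
    return 3.0 * math.pi / (32.0 * G * t_ff**2)

rho = density_for_tff(3.0 * MYR)
print("rho ~ %.1e kg/m^3" % rho)
print("n_H ~ %.0f cm^-3 (pure hydrogen)" % (rho / M_H / 1e6))
```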

  1. High resolution Neutron and Synchrotron Powder Diffraction

    International Nuclear Information System (INIS)

    Hewat, A.W.

    1986-01-01

    The use of high-resolution powder diffraction has grown rapidly in the past years, with the development of Rietveld (1967) methods of data analysis and new high-resolution diffractometers and multidetectors. The number of publications in this area has increased from a handful per year until 1973 to 150 per year in 1984, with a ten-year total of over 1000. These papers cover a wide area of solid-state chemistry, physics and materials science, and have been grouped under 20 subject headings, ranging from catalysts to zeolites, and from battery electrode materials to pre-stressed superconducting wires. In 1985 two new high-resolution diffractometers are being commissioned, one at the SNS laboratory near Oxford, and one at the ILL in Grenoble. In different ways these machines represent perhaps the ultimate that can be achieved with neutrons and will permit refinement of complex structures with about 250 parameters and unit cell volumes of about 2500 Å³. The new European Synchrotron Facility will complement the Grenoble neutron diffractometers, and extend the role of high-resolution powder diffraction to the direct solution of crystal structures, pioneered in Sweden

  2. High resolution CT in diffuse lung disease

    International Nuclear Information System (INIS)

    Webb, W.R.

    1995-01-01

    High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.)

  3. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st-order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.

  4. High resolution CT in diffuse lung disease

    Energy Technology Data Exchange (ETDEWEB)

    Webb, W R [California Univ., San Francisco, CA (United States). Dept. of Radiology

    1996-12-31

    High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.).

  5. High-resolution clean-sc

    NARCIS (Netherlands)

    Sijtsma, P.; Snellen, M.

    2016-01-01

    In this paper a high-resolution extension of CLEAN-SC is proposed: HR-CLEAN-SC. Where CLEAN-SC uses peak sources in “dirty maps” to define so-called source components, HR-CLEAN-SC takes advantage of the fact that source components can likewise be derived from points at some distance from the peak,

  6. A High-Resolution Stopwatch for Cents

    Science.gov (United States)

    Gingl, Z.; Kopasz, K.

    2011-01-01

    A very low-cost, easy-to-make stopwatch is presented to support various experiments in mechanics. The high-resolution stopwatch is based on two photodetectors connected directly to the microphone input of a sound card. Dedicated free open-source software has been developed and made available to download. The efficiency is demonstrated by a free…

  7. Planning for shallow high resolution seismic surveys

    CSIR Research Space (South Africa)

    Fourie, CJS

    2008-11-01

    Full Text Available of the input wave. This information can be used in conjunction with this spreadsheet to aid the geophysicist in designing shallow high resolution seismic surveys to achieve maximum resolution and penetration. This Excel spreadsheet is available free from...

  8. Observable cosmology and cosmological models

    International Nuclear Information System (INIS)

    Kardashev, N.S.; Lukash, V.N.; Novikov, I.D.

    1987-01-01

    The modern state of observational cosmology is briefly discussed. Among other things, the problem of determining the Hubble constant and the deceleration parameter is considered. Within the 'pancake' theory, a hot (neutrino) cosmological model explains the large-scale structure of the Universe well, but does not explain galaxy formation. A cold cosmological model explains the formation of light objects well, but contradicts data on the large-scale structure

  9. Rotation curves of high-resolution LSB and SPARC galaxies with fuzzy and multistate (ultralight boson) scalar field dark matter

    Science.gov (United States)

    Bernal, T.; Fernández-Hernández, L. M.; Matos, T.; Rodríguez-Meza, M. A.

    2018-04-01

    Cold dark matter (CDM) has been shown to be an excellent candidate for the dark matter (DM) of the Universe at large scales; however, it presents some challenges at the galactic level. The scalar field dark matter (SFDM), also called fuzzy, wave, Bose-Einstein condensate, or ultralight axion DM, is identical to CDM at cosmological scales but different at galactic ones. SFDM forms core haloes, it has a natural cut-off in its matter power spectrum, and it predicts well-formed galaxies at high redshifts. In this work we reproduce the rotation curves of high-resolution low surface brightness (LSB) and SPARC galaxies with two SFDM profiles: (1) the soliton+NFW profile in the fuzzy DM (FDM) model, arising empirically from cosmological simulations of a real, non-interacting scalar field (SF) at zero temperature, and (2) the multistate SFDM (mSFDM) profile, an exact solution to the Einstein-Klein-Gordon equations for a real, self-interacting SF, with finite temperature in the SF potential, introducing several quantum states as a realistic model for an SFDM halo. From the fits with the soliton+NFW profile, we obtained for the boson mass 0.212 motivated framework additional or alternative to the FDM profile.
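
    A rough sketch of how a soliton+NFW rotation curve of the kind fitted here can be evaluated: the soliton core uses the commonly quoted (1 + 0.091 (r/r_c)^2)^-8 shape from FDM simulations, the outer halo is NFW, and v_c(r) = sqrt(G M(<r)/r). All parameter values below are placeholders, not the paper's fits.

```python
import numpy as np

G = 4.30091e-6   # kpc (km/s)^2 / M_sun

def rho_soliton(r, rho_c, r_c):
    """FDM soliton core: rho_c / (1 + 0.091 (r/r_c)^2)^8 (common fitting form)."""
    return rho_c / (1.0 + 0.091 * (r / r_c) ** 2) ** 8

def m_soliton(r, rho_c, r_c, n=2000):
    """Enclosed soliton mass by simple shell summation."""
    rr = np.linspace(1e-4, r, n)
    dr = rr[1] - rr[0]
    return float(np.sum(4.0 * np.pi * rr**2 * rho_soliton(rr, rho_c, r_c)) * dr)

def m_nfw(r, rho_s, r_s):
    """Analytic NFW enclosed mass."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

# Placeholder halo parameters (not fitted values from the paper):
rho_c, r_c = 1.0e9, 0.8    # M_sun/kpc^3, kpc
rho_s, r_s = 8.0e6, 15.0   # M_sun/kpc^3, kpc

for r in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0):   # kpc
    m = m_soliton(r, rho_c, r_c) + m_nfw(r, rho_s, r_s)
    print(f"r = {r:5.1f} kpc   v_c = {np.sqrt(G * m / r):6.1f} km/s")
```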

  10. Initial conditions for cosmological N-body simulations of the scalar sector of theories of Newtonian, Relativistic and Modified Gravity

    International Nuclear Information System (INIS)

    Valkenburg, Wessel; Hu, Bin

    2015-01-01

    We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology
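
    For context, the standard Zel'dovich Approximation being generalized here maps particles from Lagrangian coordinates q to Eulerian positions x with a single, scale-independent growth factor D(t) (textbook form, in one common convention):

    \[
    \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\psi}(\mathbf{q}),
    \qquad
    \mathbf{v} = a\,\dot{D}\,\boldsymbol{\psi}(\mathbf{q}),
    \qquad
    \nabla_{q}\cdot\boldsymbol{\psi} = -\,\delta(\mathbf{q},t)/D(t),
    \]

    so particle trajectories are straight lines in comoving coordinates. The point made above is that once the growth becomes scale dependent, as in the modified-gravity models considered, the displacement field acquires its own time dependence and the paths curve even at linear order.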

  11. Galactic Angular Momentum in Cosmological Zoom-in Simulations. I. Disk and Bulge Components and the Galaxy-Halo Connection

    Science.gov (United States)

    Sokołowska, Aleksandra; Capelo, Pedro R.; Fall, S. Michael; Mayer, Lucio; Shen, Sijing; Bonoli, Silvia

    2017-02-01

    We investigate the angular momentum evolution of four disk galaxies residing in Milky-Way-sized halos formed in cosmological zoom-in simulations with various sub-grid physics and merging histories. We decompose these galaxies, kinematically and photometrically, into their disk and bulge components. The simulated galaxies and their components lie on the observed sequences in the j_*-M_* diagram, relating the specific angular momentum and mass of the stellar component. We find that galaxies in low-density environments follow the relation j_* ∝ M_*^α past major mergers, with α ∼ 0.6 in the case of strong feedback, when bulge-to-disk ratios are relatively constant, and α ∼ 1.4 in the other cases, when secular processes operate on shorter timescales. We compute the retention factors (i.e., the ratio of the specific angular momenta of stars and dark matter) for both disks and bulges and show that they vary relatively slowly after averaging over numerous but brief fluctuations. For disks, the retention factors are usually close to unity, while for bulges, they are a few times smaller. Our simulations therefore indicate that galaxies and their halos grow in a quasi-homologous way.
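
    The two quantities at the heart of this analysis are straightforward to evaluate from particle data: the specific angular momentum j = |Σ m_i r_i × v_i| / Σ m_i of a component, and the retention factor as the ratio of the stellar to the dark-matter value. A minimal sketch on made-up placeholder particles (not simulation data):

```python
import numpy as np

rng = np.random.default_rng(1)

def specific_j(mass, pos, vel):
    """|sum_i m_i (r_i x v_i)| / sum_i m_i for one particle component."""
    L = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
    return np.linalg.norm(L) / mass.sum()

def fake_component(n, scale_r, sigma_v, v_rot=0.0):
    """Placeholder particles: Gaussian blob plus optional rotation about z."""
    pos = rng.normal(0.0, scale_r, size=(n, 3))
    vel = rng.normal(0.0, sigma_v, size=(n, 3))
    R = np.hypot(pos[:, 0], pos[:, 1]) + 1e-6
    vel[:, 0] += -v_rot * pos[:, 1] / R      # add tangential (rotational) motion
    vel[:, 1] += +v_rot * pos[:, 0] / R
    return np.full(n, 1.0), pos, vel

j_star = specific_j(*fake_component(5000, 5.0, 30.0, v_rot=200.0))    # "stars"
j_dm   = specific_j(*fake_component(20000, 50.0, 150.0, v_rot=30.0))  # "dark matter"

print(f"j_* ~ {j_star:7.1f} kpc km/s,  j_DM ~ {j_dm:7.1f} kpc km/s,"
      f"  retention ~ {j_star / j_dm:.2f}")
```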

  12. Smartphone microendoscopy for high resolution fluorescence imaging

    Directory of Open Access Journals (Sweden)

    Xiangqian Hong

    2016-09-01

    Full Text Available High resolution optical endoscopes are increasingly used in the diagnosis of various medical conditions of internal organs, such as the cervix and gastrointestinal (GI) tract, but they are too expensive for use in resource-poor settings. On the other hand, smartphones with high resolution cameras and Internet access have become more affordable, enabling them to diffuse into most rural areas and developing countries in the past decade. In this paper, we describe a smartphone microendoscope that can take fluorescence images with a spatial resolution of 3.1 μm. Images collected from ex vivo, in vitro and in vivo samples using the device are also presented. The compact and cost-effective smartphone microendoscope may be envisaged as a powerful tool for detecting pre-cancerous lesions of internal organs in low- and middle-income countries (LMICs).

  13. High resolution NMR theory and chemical applications

    CERN Document Server

    Becker, Edwin D

    2012-01-01

    High Resolution NMR: Theory and Chemical Applications discusses the principles and theory of nuclear magnetic resonance and how this concept is used in the chemical sciences. This book is written at an intermediate level, with mathematics used to augment verbal descriptions of the phenomena. This text pays attention to developing and interrelating four approaches - the steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The style of this book is based on the assumption that the reader has an acquaintance with the general principles of quantum mechanics, but no extensive background in quantum theory or proficiency in mathematics is required. This book begins with a description of the basic physics, together with a brief account of the historical development of the field. It looks at the study of NMR in liquids, including high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. This book is intended to assis...

  14. High resolution NMR theory and chemical applications

    CERN Document Server

    Becker, Edwin D

    1999-01-01

    High Resolution NMR provides a broad treatment of the principles and theory of nuclear magnetic resonance (NMR) as it is used in the chemical sciences. It is written at an "intermediate" level, with mathematics used to augment, rather than replace, clear verbal descriptions of the phenomena. The book is intended to allow a graduate student, advanced undergraduate, or researcher to understand NMR at a fundamental level, and to see illustrations of the applications of NMR to the determination of the structure of small organic molecules and macromolecules, including proteins. Emphasis is on the study of NMR in liquids, but the treatment also includes high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. Careful attention is given to developing and interrelating four approaches - steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The presentation is based on the assumption that the reader has an acquaintan...

  15. High resolution imaging of boron carbide microstructures

    International Nuclear Information System (INIS)

    MacKinnon, I.D.R.; Aselage, T.; Van Deusen, S.B.

    1986-01-01

    Two samples of boron carbide have been examined using high resolution transmission electron microscopy (HRTEM). A hot-pressed B 13 C 2 sample shows a high density of variable width twins normal to (10*1). Subtle shifts or offsets of lattice fringes along the twin plane and normal to approx.(10*5) were also observed. A B 4 C powder showed little evidence of stacking disorder in crystalline regions

  16. High-Resolution MRI in Rectal Cancer

    International Nuclear Information System (INIS)

    Dieguez, Adriana

    2010-01-01

    High-resolution MRI is the best method of assessing the relation of the rectal tumor to the potential circumferential resection margin (CRM). Therefore it is currently considered the method of choice for local staging of rectal cancer. The primary surgery for rectal cancer is total mesorectal excision (TME), whose plane of dissection is formed by the mesorectal fascia surrounding the mesorectal fat and rectum. This fascia will determine the circumferential margin of resection. At the same time, high resolution MRI allows adequate pre-operative identification of important prognostic risk factors, improving the selection and indication of therapy for each patient. This information includes, besides the circumferential margin of resection, tumor and lymph node staging, extramural vascular invasion and the description of lower rectal tumors. All of these should be described in detail in the report and form part of the discussion in the multidisciplinary team, where the decisions involving the patient with rectal cancer are made. The aim of this study is to provide the information necessary to understand the use of high resolution MRI in the identification of prognostic risk factors in rectal cancer. The technical requirements and a standardized report for this study are described, as well as the anatomical landmarks of importance for total mesorectal excision (TME), which, as noted above, is the surgery of choice for rectal cancer. (authors)

  17. Ultra-high resolution protein crystallography

    International Nuclear Information System (INIS)

    Takeda, Kazuki; Hirano, Yu; Miki, Kunio

    2010-01-01

    Many protein structures have been determined by X-ray crystallography and deposited with the Protein Data Bank. However, these structures at usual resolution (1.5 < d < 3.0 Å) are insufficient in their precision and quantity for elucidating the molecular mechanism of protein functions directly from structural information. Several studies at ultra-high resolution (d < 0.8 Å) have been performed with synchrotron radiation in the last decade. The highest resolution of the protein crystals was achieved at 0.54 Å resolution for a small protein, crambin. In such high resolution crystals, almost all of the hydrogen atoms of proteins and some hydrogen atoms of bound water molecules are experimentally observed. In addition, outer-shell electrons of proteins can be analyzed by the multipole refinement procedure. However, the influence of X-rays should be precisely estimated in order to derive meaningful information from the crystallographic results. In this review, we summarize refinement procedures, current status and perspectives for ultra-high resolution protein crystallography. (author)

  18. High resolution, high speed ultrahigh vacuum microscopy

    International Nuclear Information System (INIS)

    Poppa, Helmut

    2004-01-01

    The history and future of transmission electron microscopy (TEM) is discussed as it refers to the eventual development of instruments and techniques applicable to the real time in situ investigation of surface processes with high resolution. To reach this objective, it was necessary to transform conventional high resolution instruments so that an ultrahigh vacuum (UHV) environment at the sample site was created, that access to the sample by various in situ sample modification procedures was provided, and that in situ sample exchanges with other integrated surface analytical systems became possible. Furthermore, high resolution image acquisition systems had to be developed to take advantage of the high speed imaging capabilities of projection imaging microscopes. These changes to conventional electron microscopy and its uses were slowly realized in a few international laboratories over a period of almost 40 years by a relatively small number of researchers crucially interested in advancing the state of the art of electron microscopy and its applications to diverse areas of interest; often concentrating on the nucleation, growth, and properties of thin films on well defined material surfaces. A part of this review is dedicated to the recognition of the major contributions to surface and thin film science by these pioneers. Finally, some of the important current developments in aberration corrected electron optics and eventual adaptations to in situ UHV microscopy are discussed. As a result of all the path breaking developments that have led to today's highly sophisticated UHV-TEM systems, integrated fundamental studies are now possible that combine many traditional surface science approaches. Combined investigations to date have involved in situ and ex situ surface microscopies such as scanning tunneling microscopy/atomic force microscopy, scanning Auger microscopy, and photoemission electron microscopy, and area-integrating techniques such as x-ray photoelectron

  19. USGS High Resolution Orthoimagery Collection - Historical - National Geospatial Data Asset (NGDA) High Resolution Orthoimagery

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS high resolution orthorectified images from The National Map combine the image characteristics of an aerial photograph with the geometric qualities of a map. An...

  20. Keeping it real: revisiting a real-space approach to running ensembles of cosmological N-body simulations

    International Nuclear Information System (INIS)

    Orban, Chris

    2013-01-01

    In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. the box-scale overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using power-law and ''power law times a bump'' models, the latter inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly clustered regime (σ₈ ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral-constraint correction to the correlation function in results from the conventional approach, which can be important in ΛCDM simulations with L_box ≲ 1 h⁻¹ Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful epoch-independent analytic formula for estimating the importance of the integral-constraint bias on correlation-function measurements in ΛCDM simulations
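
    For reference, the integral-constraint correction referred to above takes the standard form (the textbook expression is quoted here; it is not necessarily the exact formula derived in the paper's appendix):

        \[
        \hat{\xi}(r) \;\simeq\; \xi_{\rm true}(r) \;-\; \frac{1}{V^{2}}\int_{V}\!\!\int_{V} \xi_{\rm true}(|\mathbf{x}_{1}-\mathbf{x}_{2}|)\, d^{3}x_{1}\, d^{3}x_{2},
        \qquad V = L_{\rm box}^{3},
        \]

    i.e. the correlation function estimated in a finite box is biased low by roughly the volume average of ξ over the box, a correction that becomes non-negligible once r approaches L_box/10.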

  1. Precision Cosmology

    Science.gov (United States)

    Jones, Bernard J. T.

    2017-04-01

    Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.

  2. Dissipative N-body simulations of the formation of single galaxies in a cold dark-matter cosmology

    International Nuclear Information System (INIS)

    Ewell, M.W. Jr.

    1988-01-01

    The details of an N-body code designed specifically to study the collapse of a single protogalaxy are presented. This code uses a spherical harmonic expansion to model the gravity and a sticky-particle algorithm to model the gas physics. It includes external tides and cosmologically realistic boundary conditions. The results of twelve simulations using this code are given. The initial conditions for these runs use mean-density profiles and r.m.s. quadrupoles and tides taken from the CDM power spectrum. The simulations start when the center of the perturbation first goes nonlinear, and continue until a redshift z ∼ 1-2. The resulting rotation curves are approximately flat out to 100 kpc, but do show some structure. The circular velocity is 200 km/sec around a 3σ peak. The final systems have λ ≈ 0.03. The angular momentum per unit mass of the baryons implies disk scale lengths of 1-3 kpc. The tidal forces are strong enough to profoundly influence the collapse geometry. In particular, the usual assumption that tidal torques produce a system approximately in solid-body rotation is shown to be seriously in error

  3. A Particular Appetite: Cosmological Hydrodynamic Simulations of Preferential Accretion in the Supermassive Black Holes of Milky Way Size Galaxies

    Science.gov (United States)

    Sanchez, Natalie; Bellovary, Jillian M.; Holley-Bockelmann, Kelly

    2016-01-01

    With the use of cosmological hydrodynamic simulations of Milky Way-type galaxies, we identify the preferential source of gas that is accreted by the supermassive black holes (SMBHs) they host. We examine simulations of two Milky Way analogs, each distinguished by a differing merger history. One galaxy is characterized by several major mergers and the other has a more quiescent history. By examining and comparing these two galaxies, which have a similar structure at z=0, we assess the importance of merger history on black hole accretion. This study is an extension of Bellovary et al. 2013, which studied accretion onto SMBHs in massive, high redshift galaxies. Bellovary found that the fraction of gas accreted by the galaxy was proportional to that which was accreted by its SMBH. Contrary to Bellovary's previous results, we find that though the gas accreted by a quiescent galaxy mirrors the accretion of its central SMBH, a galaxy characterized by an active merger history has a SMBH that preferentially accretes gas gained through mergers. We move forward by examining the angular momentum of the gas accreted by these Milky Way-type galaxies to better understand the mechanisms fueling their central SMBHs.

  4. On estimating cosmology-dependent covariance matrices

    International Nuclear Information System (INIS)

    Morrison, Christopher B.; Schneider, Michael D.

    2013-01-01

    We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulations campaign for analyzing the next generation of cosmological surveys
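
    A minimal sketch of the kind of "statistical emulator" described above, assuming Gaussian-process regression of the covariance over cosmological parameters (scikit-learn is used purely for illustration, and the training covariances below are synthetic stand-ins rather than simulation measurements):

        # Illustrative sketch, not the authors' code: emulate a two-point-function
        # covariance matrix across cosmological parameters with Gaussian processes.
        # `params` holds hypothetical (Omega_m, sigma_8) values for each simulation
        # set; `covs` stands in for the covariance estimated from that set.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        n_train, n_bins = 12, 5
        params = rng.uniform([0.25, 0.7], [0.35, 0.9], size=(n_train, 2))
        covs = np.array([np.diag(1e-3 * (1 + p.sum()) * np.arange(1, n_bins + 1)) + 1e-5
                         for p in params])                     # stand-in "measurements"

        # Emulate each element of the Cholesky factor so predictions stay positive definite.
        tri = np.tril_indices(n_bins)
        y = np.array([np.linalg.cholesky(c)[tri] for c in covs])   # (n_train, n_elements)

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), normalize_y=True)
        gp.fit(params, y)

        def predict_cov(theta):
            """Return the emulated covariance matrix at cosmological parameters `theta`."""
            L = np.zeros((n_bins, n_bins))
            L[tri] = gp.predict(np.atleast_2d(theta))[0]
            return L @ L.T

        print(predict_cov([0.3, 0.8]).shape)   # -> (5, 5)

    Emulating the Cholesky factor rather than the covariance elements themselves is one simple way to keep the predicted matrix positive definite between training points.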

  5. High resolution extremity CT for biomechanics modeling

    International Nuclear Information System (INIS)

    Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.

    1995-01-01

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling

  6. Ultra-high resolution coded wavefront sensor

    KAUST Repository

    Wang, Congli

    2017-06-08

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.

  7. High resolution extremity CT for biomechanics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.

    1995-09-23

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.

  8. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
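
    The two-dimensional Fourier transform extraction mentioned above follows the standard carrier-fringe procedure (isolate the +1 diffraction order of the grid, inverse-transform, and read the displacement off the residual phase). A minimal sketch with a synthetic fringe pattern standing in for a real moire image, and with all parameter values chosen purely for illustration:

        # Minimal sketch of FFT-based fringe analysis for a moire grid; illustrative
        # only, not the authors' implementation.
        import numpy as np

        def displacement_phase(img, carrier_freq, half_width=5):
            """Recover the phase (proportional to in-plane displacement) from a fringe
            pattern `img` modulated at `carrier_freq` cycles per pixel along x."""
            ny, nx = img.shape
            F = np.fft.fftshift(np.fft.fft2(img))
            # isolate the +1 diffraction order around the carrier peak
            cx, cy = nx // 2 + int(round(carrier_freq * nx)), ny // 2
            mask = np.zeros_like(F)
            mask[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = 1
            sideband = np.fft.ifft2(np.fft.ifftshift(F * mask))
            # remove the carrier, leaving only the displacement-induced phase
            x = np.arange(nx)
            return np.angle(sideband * np.exp(-2j * np.pi * carrier_freq * x))

        # synthetic test: uniform strain of 1e-3 applied to a grid of pitch 8 pixels
        x = np.arange(256)
        u = 1e-3 * x                                   # imposed displacement field
        img = 0.5 + 0.5 * np.cos(2 * np.pi * (x + u)[None, :] / 8.0) * np.ones((256, 1))
        phi = displacement_phase(img, carrier_freq=1 / 8.0)
        print(phi[128, 200] * 8.0 / (2 * np.pi), u[200])   # recovered vs true displacement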

  9. Laboratory of High resolution gamma spectrometry

    International Nuclear Information System (INIS)

    Mendez G, A.; Giber F, J.; Rivas C, I.; Reyes A, B.

    1992-01-01

    The Department of Nuclear Experimentation of the Nuclear Systems Management requested the collaboration of the Engineering unit to supervise the execution of the work on the high-resolution gamma spectrometry and low-background laboratory, using the hut of the subcritical reactor of the Nuclear Center of Mexico. The purpose of this laboratory is to determine the activity of special materials irradiated in nuclear power plants. In this report the architectural development, concepts, materials and diagrams for the realization of this type of work are presented. (Author)

  10. High resolution neutron spectroscopy for helium isotopes

    International Nuclear Information System (INIS)

    Abdel-Wahab, M.S.; Klages, H.O.; Schmalz, G.; Haesner, B.H.; Kecskemeti, J.; Schwarz, P.; Wilczynski, J.

    1992-01-01

    A high resolution fast neutron time-of-flight spectrometer is described. Neutron time-of-flight spectra are taken using a specially designed TDC connected to an on-line computer. The high time-of-flight resolution of 5 ps/m enabled the study of the total cross section of ⁴He for neutrons near the 3/2⁺ resonance in the ⁵He nucleus. The resonance parameters were determined by a single-level Breit-Wigner fit to the data. (orig.)

  11. Observational cosmology

    NARCIS (Netherlands)

    Sanders, RH; Papantonopoulos, E

    2005-01-01

    I discuss the classical cosmological tests, i.e., angular size-redshift, flux-redshift, and galaxy number counts, in the light of the cosmology prescribed by the interpretation of the CMB anisotropies. The discussion is somewhat of a primer for physicists, with emphasis upon the possible systematic

  12. Effects of the Size of Cosmological N-body Simulations on Physical ...

    Indian Academy of Sciences (India)

    Apart from N-body simulations, an analytical prescription given by Press & ...

  13. The shape of dark matter haloes in the Aquarius simulations : Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C.A.; Sales, L. V.; Helmi, A.; Reyle, C; Robin, A; Schultheis, M

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  14. The shape of dark matter haloes in the Aquarius simulations: Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C. A.; Sales, L. V.; Helmi, A.

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  15. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and in the positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  16. High-Resolution Scintimammography: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Rachel F. Brem; Joelle M. Schoonjans; Douglas A. Kieper; Stan Majewski; Steven Goodman; Cahid Civelek

    2002-07-01

    This study evaluated a novel high-resolution breast-specific gamma camera (HRBGC) for the detection of suggestive breast lesions. Methods: Fifty patients (with 58 breast lesions) for whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a novel HRBGC prototype. The results of conventional and high-resolution nuclear studies were prospectively classified as negative (normal or benign) or positive (suggestive or malignant) by 2 radiologists who were unaware of the mammographic and histologic results. All of the included lesions were confirmed by pathology. Results: There were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity with both systems was 93.3% (28/30). For the 18 nonpalpable lesions, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. For lesions ≤1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four lesions (median size, 8.5 mm) were detected only with the HRBGC and were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with an HRBGC results in improved sensitivity for the detection of cancer, with greater improvement shown for nonpalpable and ≤1-cm lesions.

  17. High resolution studies of barium Rydberg states

    International Nuclear Information System (INIS)

    Eliel, E.R.

    1982-01-01

    The subtle structure of Rydberg states of barium with orbital angular momentum 0, 1, 2 and 3 is investigated. Some aspects of atomic theory for a configuration with two valence electrons are reviewed. The Multi Channel Quantum Defect Theory (MQDT) is concisely introduced as a convenient way to describe interactions between Rydberg series. Three high-resolution UV studies are presented. The first two, presenting results on a transition in indium and europium, serve as an illustration of the frequency doubling technique. The third study is of hyperfine structure and isotope shifts in low-lying p states in Sr and Ba. An extensive study of the 6snp and 6snf Rydberg states of barium is presented, with particular emphasis on the 6snf states. It is shown that the level structure cannot be fully explained with the model introduced earlier. Rather, an effective two-body spin-orbit interaction has to be introduced to account for the observed splittings, illustrating that high resolution studies of Rydberg states offer a unique opportunity to determine the importance of such effects. Finally, the 6sns and 6snd series are considered. The hyperfine-induced isotope shift in the simple excitation spectra to 6sns ¹S₀ is discussed and attention is paid to series perturbers. It is shown that level-mixing parameters can easily be extracted from the experimental data. (Auth.)

  18. Principles of high resolution NMR in solids

    CERN Document Server

    Mehring, Michael

    1983-01-01

    The field of Nuclear Magnetic Resonance (NMR) has developed at a fascinating pace during the last decade. It always has been an extremely valuable tool to the organic chemist by supplying molecular "finger print" spectra at the atomic level. Unfortunately the high resolution achievable in liquid solutions could not be obtained in solids, and physicists and physical chemists had to live with unresolved lines open to a wealth of curve fitting procedures and a vast amount of speculations. High resolution NMR in solids seemed to be a paradoxon. Broad structureless lines are usually encountered when dealing with NMR in solids. Only with the recent advent of multiple pulse, magic angle, cross-polarization, two-dimensional and multiple-quantum spectroscopy and other techniques during the last decade it became possible to resolve finer details of nuclear spin interactions in solids. I have felt that graduate students, researchers and others beginning to get involved with these techniques needed a book which trea...

  19. High-Resolution PET Detector. Final report

    International Nuclear Information System (INIS)

    Karp, Joel

    2014-01-01

    The objective of this project was to develop an understanding of the limits of performance for a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators using relatively large, individual photomultipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photosensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm-thick LYSO continuous detector using a maximum-likelihood position algorithm and shallow slots cut into the entrance surface

  20. Simulations and cosmological inference: A statistical model for power spectra means and covariances

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles

    2008-01-01

    We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.
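
    The core of such a framework is a band-power likelihood in which both the mean and the covariance depend on the cosmological parameters. A toy sketch (the power-law band powers and the simple mode-counting covariance below are illustrative stand-ins for the Gaussian-process emulators described above):

        # Sketch of a multivariate-normal band-power likelihood with a
        # parameter-dependent mean and covariance; all models are toy stand-ins.
        import numpy as np
        from scipy.stats import multivariate_normal

        k = np.logspace(-2, 0, 10)                     # band-power wavenumbers

        def mean_power(amp, slope):
            return amp * k ** slope                    # toy power-law band powers

        def cov_power(amp, slope, n_modes=200):
            # Gaussian (mode-counting) covariance as a simple parameter-dependent model
            return np.diag(2.0 * mean_power(amp, slope) ** 2 / n_modes)

        def log_like(theta, data):
            amp, slope = theta
            return multivariate_normal.logpdf(data,
                                              mean=mean_power(amp, slope),
                                              cov=cov_power(amp, slope))

        true = (1.0e4, -1.5)
        data = np.random.default_rng(1).multivariate_normal(mean_power(*true),
                                                            cov_power(*true))
        print(log_like(true, data), log_like((1.2e4, -1.4), data))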

  1. Cosmological observations with a wide field telescope in space: Pixel simulations of EUCLID spectrometer

    International Nuclear Information System (INIS)

    Zoubian, Julien

    2012-01-01

    The observations of supernovae, the cosmic microwave background, and more recently the measurement of baryon acoustic oscillations and weak lensing effects, converge on a Lambda CDM model with an accelerating expansion of the present-day Universe. This model needs two dark components to fit the observations: dark matter and dark energy. Two approaches seem particularly promising for measuring both the geometry of the Universe and the growth of dark matter structures: the analysis of the weak distortions of distant galaxies by gravitational lensing and the study of baryon acoustic oscillations. Both methods require very large sky surveys of several thousand square degrees. In the context of the spectroscopic survey of the space mission EUCLID, dedicated to the study of the dark side of the universe, I developed a pixel simulation tool for analyzing instrumental performance. The proposed method can be summarized in three steps. The first step is to simulate the observables, i.e. mainly the sources on the sky. I developed a new method, adapted for spectroscopic simulations, which allows an existing galaxy survey to be mocked while ensuring that the distribution of the spectral properties of the galaxies is representative of current observations, in particular the distribution of the emission lines. The second step is to simulate the instrument and produce images equivalent to the expected real images. Based on the pixel simulator of the HST, I developed a new tool to compute the images of the spectroscopic channel of EUCLID. The new simulator is able to simulate PSFs with various energy distributions and detectors with different pixels. The last step is the estimation of the performance of the instrument. Based on existing tools, I set up a pipeline for image processing and performance measurement. My main results were: 1) to validate the method by simulating an existing galaxy survey, the WISP survey, 2) to determine the

  2. High Resolution Powder Diffraction and Structure Determination

    International Nuclear Information System (INIS)

    Cox, D. E.

    1999-01-01

    It is clear that high-resolution synchrotron X-ray powder diffraction is a very powerful and convenient tool for material characterization and structure determination. Most investigations to date have been carried out under ambient conditions and have focused on structure solution and refinement. The application of high-resolution techniques to increasingly complex structures will certainly represent an important part of future studies, and it has been seen how ab initio solution of structures with perhaps 100 atoms in the asymmetric unit is within the realms of possibility. However, the ease with which temperature-dependence measurements can be made, combined with improvements in the technology of position-sensitive detectors, will undoubtedly stimulate precise in situ structural studies of phase transitions and related phenomena. One challenge in this area will be to develop high-resolution techniques for ultra-high pressure investigations in diamond anvil cells. This will require highly focused beams and very precise collimation in front of the cell, down to dimensions of 50 µm or less. Anomalous scattering offers many interesting possibilities as well. As a means of enhancing scattering contrast it has applications not only to the determination of cation distribution in mixed systems such as the superconducting oxides discussed in Section 9.5.3, but also to the location of specific cations in partially occupied sites, such as the extra-framework positions in zeolites, for example. Another possible application is to provide phasing information for ab initio structure solution. Finally, the precise determination of f as a function of energy through an absorption edge can provide useful information about cation oxidation states, particularly in conjunction with XANES data. In contrast to many experiments at a synchrotron facility, powder diffraction is a relatively simple and user-friendly technique, and most of the procedures and software for data analysis

  3. Achieving Extreme Resolution in Numerical Cosmology Using Adaptive Mesh Refinement: Resolving Primordial Star Formation

    Directory of Open Access Journals (Sweden)

    Greg L. Bryan

    2002-01-01

    As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10¹² in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.
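
    The need for extended precision can be seen directly: at a refinement factor of 10¹², a sub-cell offset expressed relative to an O(1) root-grid coordinate retains only a few significant digits in 64-bit arithmetic. A small demonstration (illustrative only; the numbers are arbitrary and this is not Enzo's actual scheme):

        # Why 64-bit positions are marginal at a 10^12 dynamic range: machine epsilon
        # for double precision is ~2.2e-16, so an offset of ~1e-13 added to an O(1)
        # coordinate is recovered with only a few significant digits.
        import numpy as np

        root = np.float64(0.5)                     # position in root-grid units
        offset = np.float64(1e-13)                 # deep-subgrid offset
        recovered = (root + offset) - root
        print(f"true offset     : {offset:.16e}")
        print(f"recovered (f64) : {recovered:.16e}")
        print(f"relative error  : {abs(recovered - offset) / offset:.2e}")

        # np.longdouble is 80-bit extended on most x86 platforms (plain 64-bit on some),
        # which already reduces the error by several orders of magnitude.
        recovered_ld = (np.longdouble(0.5) + np.longdouble(offset)) - np.longdouble(0.5)
        print("recovered (longdouble):", recovered_ld)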

  4. Supernova cosmology

    International Nuclear Information System (INIS)

    Leibundgut, B.

    2005-01-01

    Supernovae have developed into a versatile tool for cosmology. Their impact on the cosmological model has been profound and led to the discovery of the accelerated expansion. The current status of the cosmological model as perceived through supernova observations will be presented. Supernovae are currently the only astrophysical objects that can measure the dynamics of the cosmic expansion during the past eight billion years. Ongoing experiments are trying to determine the characteristics of the accelerated expansion and give insight into what might be the physical explanation for the acceleration. (author)

  5. High resolution CT of the lung

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, Harumi (Kyoto Univ. (Japan). Faculty of Medicine)

    1991-02-01

    The emergence of computed tomography (CT) in the early 1970s has greatly contributed to diagnostic radiology. The brain was the first organ examined with CT, followed by the abdomen. For the chest, CT also came into use shortly after its introduction, for examination of the thoracic cavity and mediastinum. CT techniques were, however, of limited significance in the evaluation of pulmonary diseases, especially diffuse pulmonary diseases. High-resolution CT (HRCT) has been introduced in clinical investigations of the lung field. This article presents chest radiographic and conventional tomographic interpretations, introduces the corresponding HRCT findings for the same shadows, and summarizes the significance of HRCT and issues in diagnostic imaging. Materials outlined are tuberculosis, pneumoconiosis, bronchopneumonia, mycoplasma pneumonia, lymphangitic carcinomatosis, sarcoidosis, diffuse panbronchiolitis, interstitial pneumonia, and pulmonary emphysema. Finally, an overview of basic investigations evolved from HRCT is given. (N.K.) 140 refs.

  6. Constructing a WISE High Resolution Galaxy Atlas

    Science.gov (United States)

    Jarrett, T. H.; Masci, F.; Tsai, C. W.; Petty, S.; Cluver, M.; Assef, Roberto J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.

    2012-01-01

    After eight months of continuous observations, the Wide-field Infrared Survey Explorer (WISE) mapped the entire sky at 3.4 micron, 4.6 micron, 12 micron, and 22 micron. We have begun a dedicated WISE High Resolution Galaxy Atlas project to fully characterize large, nearby galaxies and produce a legacy image atlas and source catalog. Here we summarize the deconvolution techniques used to significantly improve the spatial resolution of WISE imaging, specifically designed to study the internal anatomy of nearby galaxies. As a case study, we present results for the galaxy NGC 1566, comparing the WISE enhanced-resolution image processing to that of Spitzer, Galaxy Evolution Explorer, and ground-based imaging. This is the first paper in a two-part series; results for a larger sample of nearby galaxies are presented in the second paper.

  7. A high resolution jet analysis for LEP

    International Nuclear Information System (INIS)

    Hariri, S.

    1992-11-01

    A high resolution multijet analysis of hadronic events produced in e⁺e⁻ annihilation at a C.M.S. energy of 91.2 GeV is described. Hadronic events produced in e⁺e⁻ annihilations are generated using the Monte Carlo program JETSET 7.3 with its two options: Matrix Element (M.E.) and Parton Showers (P.S.). The shower option is used with its default parameter values while the M.E. option is used with an invariant mass cut Y_cut = 0.01 instead of 0.02. This choice ensures a better continuity in the evolution of the event shape variables. (K.A.) 3 refs.; 26 figs.; 1 tab

  8. High Resolution Displays Using NCAP Liquid Crystals

    Science.gov (United States)

    Macknick, A. Brian; Jones, Phil; White, Larry

    1989-07-01

    Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.

  9. High resolution VUV facility at INDUS-1

    International Nuclear Information System (INIS)

    Krishnamurty, G.; Saraswathy, P.; Rao, P.M.R.; Mishra, A.P.; Kartha, V.B.

    1993-01-01

    Synchrotron radiation (SR) generated in electron storage rings is a unique source for the study of atomic and molecular spectroscopy, especially in the vacuum ultraviolet region. Realizing the potential of this light source, efforts are in progress to develop a beamline facility at INDUS-1 to carry out high resolution atomic and molecular spectroscopy. This beam line consists of a fore-optic which is a combination of three cylindrical mirrors. The mirrors are so chosen that the SR beam, having a divergence of 60 mrad (horizontal) x 6 mrad (vertical), is focussed onto the slit of a 6.65 metre off-plane spectrometer in Eagle mount, equipped with a horizontal slit and vertical dispersion. The design of the various components of the beam line is completed. It has been decided to build the spectrometer as per the requirements of the user community. Details of the various aspects of the beam line will be presented. (author). 3 figs

  10. High-resolution CT of airway reactivity

    International Nuclear Information System (INIS)

    Herold, C.J.; Brown, R.H.; Hirshman, C.A.; Mitzner, W.; Zerhouni, E.A.

    1990-01-01

    Assessment of airway reactivity has generally been limited to experimental nonimaging models. The authors of this paper used high-resolution CT (HRCT) to evaluate airway reactivity and to calculate airway resistance (Raw) compared with lung resistance (RL). Ten anesthetized and ventilated dogs were investigated with HRCT (10 contiguous 2-mm sections through the lower lung lobes) in the control state, following aerosol histamine challenge, and following post-histamine hyperinflation. The HRCT scans were digitized, and the areas of 10 airways per dog (diameter, 1-10 mm) were measured with a computer edging process. Changes in airway area and Raw (calculated as 1/[area]²) were measured. RL was assessed separately, following the same protocol. Data were analyzed by use of a paired t-test with significance at p < .05

  11. High resolution crystal calorimetry at LHC

    International Nuclear Information System (INIS)

    Schneegans, M.; Ferrere, D.; Lebeau, M.; Vivargent, M.

    1991-01-01

    The search for Higgs bosons beyond the LEP200 reach could be one of the main tasks of the future pp and ee colliders. In the intermediate mass region, and in particular in the range 80-140 GeV/c², only the 2-photon decay mode of a Higgs produced inclusively or in association with a W gives a good chance of observation. A 'dedicated' very high resolution calorimeter with photon angle reconstruction and pion identification capability should detect a Higgs signal with high probability. A crystal calorimeter can be considered as a conservative approach to such a detector, since a large design and operation experience already exists. The extensive R and D needed for finding a dense, fast and radiation-hard crystal is under way. Guidelines for designing an optimum calorimeter for LHC are discussed and preliminary configurations are given. (author) 7 refs., 3 figs., 2 tabs

  12. High resolution tomography using analog coding

    International Nuclear Information System (INIS)

    Brownell, G.L.; Burnham, C.A.; Chesler, D.A.

    1985-01-01

    As part of a 30-year program in the development of positron instrumentation, the authors have developed a high resolution bismuth germanate (BGO) ring tomograph (PCR) employing 360 detectors and 90 photomultiplier tubes for one plane. The detectors are shaped as trapezoids and are 4 mm wide at the front end. When assembled, they form an essentially continuous cylindrical detector. Light from a scintillation in a detector is viewed through a cylindrical light pipe by the photomultiplier tubes. By use of an analog coding scheme, the detector emitting light is identified from the phototube signals. In effect, each phototube can identify four crystals. PCR is designed as a static device and does not use interpolative motion. This results in considerable advantage when performing dynamic studies. PCR is the positron tomography analog of the γ-camera widely used in nuclear medicine
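
    The light-sharing ("analog coding") idea can be illustrated with a signal-weighted centroid, as in Anger logic; the geometry, light-spread and count values below are hypothetical and are not the PCR design parameters:

        # Toy Anger-type decoding: light from one scintillation is shared between
        # neighbouring phototubes, and the signal-weighted centroid resolves crystals
        # finer than the tube pitch.  All numbers are hypothetical.
        import numpy as np

        tube_positions = np.arange(0.0, 40.0, 10.0)     # 4 tubes, 10 mm pitch
        crystal_pitch = 2.5                             # 4 crystals per tube

        def tube_signals(event_x, light_spread=6.0, photons=2000, rng=None):
            """Share scintillation light among the tubes with a Gaussian light spread."""
            rng = rng or np.random.default_rng()
            w = np.exp(-0.5 * ((tube_positions - event_x) / light_spread) ** 2)
            return rng.poisson(photons * w / w.sum())

        def decoded_crystal(signals):
            """Signal-weighted centroid, quantised to the crystal pitch."""
            x = np.sum(signals * tube_positions) / np.sum(signals)
            return int(round(x / crystal_pitch))

        rng = np.random.default_rng(2)
        hits = [decoded_crystal(tube_signals(x, rng=rng)) for x in (5.0, 7.5, 10.0, 12.5)]
        print(hits)   # ideally [2, 3, 4, 5]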

  13. High-resolution CT of otosclerosis

    International Nuclear Information System (INIS)

    Dewen, Yang; Kodama, Takao; Tono, Tetsuya; Ochiai, Reiji; Kiyomizu, Kensuke; Suzuki, Yukiko; Yano, Takanori; Watanabe, Katsushi

    1997-01-01

    High-resolution CT (HRCT) scans of thirty-two patients (60 ears) with the clinical diagnosis of fenestral otosclerosis were evaluated retrospectively. HRCT was performed with 1-mm-thick targeted sections and 1-mm (36 ears) or 0.5-mm (10 ears) intervals in the semiaxial projection. Seven patients (14 ears) underwent helical scanning with a 1-mm slice thickness and 1-mm/sec table speed. Forty-five ears (75%) were found to have one or more otospongiotic or otosclerotic foci on HRCT. In most instances (30 ears), the otospongiotic foci were found in the region of the fissula ante fenestram. No significant correlations between CT findings and air conduction threshold were observed. We found a significant relationship between lesions of the labyrinthine capsule and sensorineural hearing loss. We conclude that HRCT is a valuable modality for diagnosing otosclerosis, especially when an otospongiotic focus is detected. (author)

  14. High resolution CT in pulmonary sarcoidosis

    International Nuclear Information System (INIS)

    Spina, Juan C.; Curros, Marisela L.; Gomez, M.; Gonzalez, A.; Chacon, Carolina; Guerendiain, G.

    2000-01-01

    Objectives: To establish the particular advantages of high-resolution CT (HRCT) for the diagnosis of pulmonary sarcoidosis. Material and Methods: A series of fourteen patients (4 men and 10 women; mean age 44.5 years) with thoracic sarcoidosis. All patients were studied using HRCT and the diagnosis was confirmed in each case. Confidence intervals were obtained for the different disease manifestations. Results: The most common findings were: lymph node enlargement (n=14 patients), pulmonary nodules (n=13), thickening of septa (n=6), peribronchovascular thickening (n=5), pulmonary pseudo-mass (n=5) and signs of fibrosis (n=4). The stage most commonly observed was stage II. It is worth noting that no cases of pleural effusion or cavitation of pulmonary lesions were observed. Conclusions: In this series, the overlap of the confidence intervals for lymph node enlargement, single pulmonary nodules and septal thickening allows one to infer that their presence in a young adult with few clinical symptoms should prompt ruling out sarcoidosis first. (author)

  15. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

    Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal-reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam-induced movement of monolamellar crystals of C₄₄H₉₀ paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well-ordered monolamellar paraffin crystals on carbon films, are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site-specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain-cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  16. High resolution infrared spectroscopy of symbiotic stars

    International Nuclear Information System (INIS)

    Bensammar, S.

    1989-01-01

    We report here very early results of high resolution (5×10³ - 4×10⁴) infrared spectroscopy (1 - 2.5 μm) of different symbiotic stars (T CrB, RW Hya, CI Cyg, PU Vul) observed with the Fourier Transform Spectrometer of the 3.6-m Canada-France-Hawaii Telescope. These stars are usually considered as interacting binaries, and little is known about the nature of their cool component. CO absorption lines are detected for all four stars. Very different profiles of the hydrogen Brackett γ and helium 10830 Å lines are shown for CI Cyg observed at different phases, while PU Vul shows very intense emission lines

  17. GRANULOMETRIC MAPS FROM HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    Catherine Mering

    2011-05-01

    A new method of land cover mapping from satellite images using granulometric analysis is presented here. Discontinuous landscapes, such as the steppe bushes of semi-arid regions and recently growing urban settlements, are especially concerned by this study. Spatial organisations of the land cover are quantified by means of the size distribution analysis of the land cover units extracted from high resolution remotely sensed images. A granulometric map is built by automatic classification of every pixel of the image according to the granulometric density inside a sliding neighbourhood. Granulometric mapping brings some advantages over traditional thematic mapping by remote sensing by focusing on fine spatial events and small changes in one particular category of the landscape.

  18. The origin of ICM enrichment in the outskirts of present-day galaxy clusters from cosmological hydrodynamical simulations

    Science.gov (United States)

    Biffi, V.; Planelles, S.; Borgani, S.; Rasia, E.; Murante, G.; Fabjan, D.; Gaspari, M.

    2018-05-01

    The uniformity of the intracluster medium (ICM) enrichment level in the outskirts of nearby galaxy clusters suggests that chemical elements were deposited and widely spread into the intergalactic medium before the cluster formation. This observational evidence is supported by numerical findings from cosmological hydrodynamical simulations, as presented in Biffi et al., including the effect of thermal feedback from active galactic nuclei. Here, we further investigate this picture, by tracing back in time the spatial origin and metallicity evolution of the gas residing at z = 0 in the outskirts of simulated galaxy clusters. In these regions, we find a large distribution of iron abundances, including a component of highly enriched gas, already present at z = 2. At z > 1, the gas in the present-day outskirts was distributed over tens of virial radii from the main cluster and had been already enriched within high-redshift haloes. At z = 2, about 40 per cent of the most Fe-rich gas at z = 0 was not residing in any halo more massive than 10¹¹ h⁻¹ M_⊙ in the region, and yet its average iron abundance was already 0.4 with respect to the solar value of Anders & Grevesse. This confirms that in situ enrichment of the ICM in the outskirts of present-day clusters does not play a significant role, and that its uniform metal abundance is rather the consequence of the accretion of both low-metallicity and pre-enriched (at z > 2) gas, from the diffuse component and through merging substructures. These findings do not depend on the mass of the cluster nor on its core properties.

  19. Neutrino cosmology

    International Nuclear Information System (INIS)

    Berstein, J.

    1984-01-01

    These lectures offer a self-contained review of the role of neutrinos in cosmology. The first part deals with the question 'What is a neutrino?' and describes in a historical context the theoretical ideas and experimental discoveries related to the different types of neutrinos and their properties. The basic differences between the Dirac neutrino and the Majorana neutrino are pointed out and the evidence for different neutrino 'flavours', neutrino mass, and neutrino oscillations is discussed. The second part summarizes current views on cosmology, particularly as they are affected by recent theoretical and experimental advances in high-energy particle physics. Finally, the close relationship between neutrino physics and cosmology is brought out in more detail, to show how cosmological constraints can limit the various theoretical possibilities for neutrinos and, more particularly, how increasing knowledge of neutrino properties can contribute to our understanding of the origin, history, and future of the Universe. The level is that of the beginning graduate student. (orig.)

  20. Qualitative cosmology

    International Nuclear Information System (INIS)

    Khalatnikov, I.M.; Belinskij, V.A.

    1984-01-01

    The application of the qualitative theory of dynamical systems to the analysis of homogeneous cosmological models is described. Together with the well-known cases involving an ideal fluid, the properties of cosmological evolution of matter with dissipative processes due to viscosity are considered. New cosmological effects occur when the viscosity terms are of the same order as the other terms in the gravitational equations, or even exceed them. In these cases the description of the dissipative process by means of only two viscosity coefficients (bulk and shear) may become inapplicable, because all the remaining terms in the expansion of the dissipative contribution to the energy-momentum tensor in velocity gradients can be large; the application of equations with hydrodynamic viscosity should then be considered as a model of dissipative effects in cosmology

  1. Neutrino cosmology

    CERN Document Server

    Lesgourgues, Julien; Miele, Gennaro; Pastor, Sergio

    2013-01-01

    The role that neutrinos have played in the evolution of the Universe is the focus of one of the most fascinating research areas that has stemmed from the interplay between cosmology, astrophysics and particle physics. In this self-contained book, the authors bring together all aspects of the role of neutrinos in cosmology, spanning from leptogenesis to primordial nucleosynthesis, their role in CMB and structure formation, to the problem of their direct detection. The book starts by guiding the reader through aspects of fundamental neutrino physics, such as the standard cosmological model and the statistical mechanics in the expanding Universe, before discussing the history of neutrinos in chronological order from the very early stages until today. This timely book will interest graduate students and researchers in astrophysics, cosmology and particle physics, who work with either a theoretical or experimental focus.

  2. Modern cosmology

    International Nuclear Information System (INIS)

    Zeldovich, Y.B.

    1983-01-01

    This paper gives a general review of modern cosmology. The following subjects are discussed: the hot big bang and the periodization of the evolution; Hubble expansion; the structure of the universe (pancake theory); baryon asymmetry; the inflationary universe. (Auth.)

  3. Cosmological Simulations with Scale-Free Initial Conditions. I. Adiabatic Hydrodynamics

    International Nuclear Information System (INIS)

    Owen, J.M.; Weinberg, D.H.; Evrard, A.E.; Hernquist, L.; Katz, N.

    1998-01-01

    We analyze hierarchical structure formation based on scale-free initial conditions in an Einstein-de Sitter universe, including a baryonic component with Ω_bary = 0.05. We present three independent, smoothed particle hydrodynamics (SPH) simulations, performed at two resolutions (32³ and 64³ dark matter and baryonic particles) and with two different SPH codes (TreeSPH and P3MSPH). Each simulation is based on identical initial conditions, which consist of Gaussian-distributed initial density fluctuations that have a power spectrum P(k) ∝ k⁻¹. The baryonic material is modeled as an ideal gas subject only to shock heating and adiabatic heating and cooling; radiative cooling and photoionization heating are not included. The evolution is expected to be self-similar in time, and under certain restrictions we identify the expected scalings for many properties of the distribution of collapsed objects in all three realizations. The distributions of dark matter masses, baryon masses, and mass- and emission-weighted temperatures scale quite reliably. However, the density estimates in the central regions of these structures are determined by the degree of numerical resolution. As a result, mean gas densities and Bremsstrahlung luminosities obey the expected scalings only when calculated within a limited dynamic range in density contrast. The temperatures and luminosities of the groups show tight correlations with the baryon masses, which we find can be well represented by power laws. The Press-Schechter (PS) approximation predicts the distribution of group dark matter and baryon masses fairly well, though it tends to overestimate the baryon masses. Combining the PS mass distribution with the measured relations for T(M) and L(M) predicts the temperature and luminosity distributions fairly accurately, though there are some discrepancies at high temperatures/luminosities. In general the three simulations agree well for the properties of resolved groups, where a group
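
    The self-similar scalings invoked above follow from the standard argument, stated here in compact form (not quoted from the paper): for Gaussian initial conditions with P(k) ∝ kⁿ in an Einstein-de Sitter background, the linear variance on mass scale M grows as

        \[
        \sigma^{2}(M,a) \propto a^{2}\, M^{-(n+3)/3}
        \quad\Longrightarrow\quad
        M_{*}(a) \propto a^{6/(n+3)},
        \]

    so for the n = −1 spectrum used here the characteristic nonlinear mass grows as M_* ∝ a³, and the properties of collapsed objects are expected to depend only on M/M_*(a).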

  4. Reconstructing the distribution of haloes and mock galaxies below the resolution limit in cosmological simulations

    OpenAIRE

    de la Torre, Sylvain; Peacock, John A.

    2012-01-01

    We present a method for populating dark matter simulations with haloes of mass below the resolution limit. It is based on stochastically sampling a field derived from the density field of the halo catalogue, using constraints from the conditional halo mass function n(m|δ). We test the accuracy of the method and show its application in the context of building mock galaxy samples. We find that this technique allows precise reproduction of the two-point statistics of galaxies in mock samp...
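
    A toy version of such a sampling step might look as follows; note that the actual method constrains the sampling with the conditional mass function n(m|δ) measured from the simulation, whereas the exponential, bias-like weighting of the cell overdensity below is only an illustrative stand-in:

        # Toy sketch: stochastically add sub-resolution haloes to a gridded density
        # field by Poisson-sampling each cell with a density-dependent mean.
        import numpy as np

        rng = np.random.default_rng(3)
        ngrid, boxsize = 64, 500.0                        # cells per side, Mpc/h
        delta = rng.normal(0.0, 0.5, size=(ngrid,) * 3)   # stand-in overdensity field

        nbar = 1e-4          # target number density of added haloes [h^3 Mpc^-3]
        bias = 1.5           # assumed effective bias of the added haloes

        cell_vol = (boxsize / ngrid) ** 3
        weight = np.exp(bias * delta)
        lam = nbar * cell_vol * weight / weight.mean()    # Poisson mean per cell
        counts = rng.poisson(lam)

        # place each halo at a uniform random position within its cell
        cells = np.argwhere(counts > 0)
        positions = np.concatenate([(c + rng.random((n, 3))) * (boxsize / ngrid)
                                    for c, n in zip(cells, counts[counts > 0])])
        print(counts.sum(), "haloes added;", positions.shape)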

  5. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    To accelerate spectroscopic imaging using sparse sampling of (k,t)-space and subspace (or low-rank) modeling to enable high-resolution metabolic imaging with good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace and enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
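
    The partial-separability (low-rank) property exploited above can be illustrated with a toy Casorati (space × time) matrix: a spatiotemporal signal built from a few spatial maps times a few temporal basis functions has low rank and is therefore recoverable from far fewer samples than voxels × time points. The decomposition below only sketches that idea and is not the proposed acquisition or reconstruction:

        # Toy partial-separability demonstration with a truncated SVD.
        import numpy as np

        rng = np.random.default_rng(4)
        n_vox, n_t, rank = 500, 256, 3

        U_true = rng.normal(size=(n_vox, rank))                  # spatial coefficient maps
        t = np.arange(n_t)
        V_true = np.exp(-t / 80.0)[None, :] * np.cos(
            2 * np.pi * np.array([[0.02], [0.05], [0.11]]) * t)  # FID-like temporal bases
        C = U_true @ V_true + 0.01 * rng.normal(size=(n_vox, n_t))

        # rank-L truncation of the Casorati matrix recovers the signal subspace
        U, s, Vh = np.linalg.svd(C, full_matrices=False)
        C_lr = (U[:, :rank] * s[:rank]) @ Vh[:rank]
        rel_err = np.linalg.norm(C - C_lr) / np.linalg.norm(C)
        print(f"relative residual after rank-{rank} truncation: {rel_err:.3f}")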

  6. Geological survey by high resolution electrical survey on granite areas

    International Nuclear Information System (INIS)

    Sugimoto, Yoshihiro; Yamada, Naoyuki

    2002-03-01

    As an integral part of the geological survey in 'The study of the regional groundwater flow system' that we are carrying out with the Tono Geoscience Center, and in order to confirm the efficacy of high-resolution electrical surveying as a geological survey method, we carried out a high-resolution electrical survey in a granite area and examined the relation between discontinuous structures in the bedrock, such as lineaments, and the resistivity structure (resistivity distribution). By comparing the resistivity distribution with the existing geological survey, lineament analysis and investigative drilling, we obtained the following results. 1. The resistivity structure of this survey area can be classified into roughly the following four ranges: 1) a low-resistivity range of 50-800 Ωm, 2) an intermediate-resistivity range of 200-2000 Ωm, 3) a high-resistivity range above 2000 Ωm, 4) a low-resistivity range at depth in the 400-550 section of the survey line. 2. No feature corresponding to the low-resistivity range of 4) is recognized in the established geological data. 3. It was confirmed that the resistivity structure largely corresponds to the geological structure by comparison with the established data. 4. A small-scale low-resistivity area is found at the position corresponding to a previously mapped lineament. 5. A simulation was carried out for the low-resistivity range of 4); as a result, it was found that a narrow low-resistivity zone may appear as a broad low-resistivity range in the analysed section. In this survey, the resistivity distribution is considered to clearly reflect possible inhomogeneous and discontinuous structures in the bedrock, and the efficacy of high-resolution resistivity surveying as a geological survey method on granite was demonstrated. (author)

  7. Current cosmology

    International Nuclear Information System (INIS)

    Zeldovich, Ya.

    1984-01-01

    The knowledge of contemporary cosmology on the universe and its development, resulting from a great number of highly sensitive observations and the application of contemporary physical theories to the entire universe, is summarized. The questions of mass density in the universe, the structure and origin of the universe, its baryon asymmetry, and the quantum explanation of the origin of the universe are assessed. Physical problems which should be resolved for the future development of cosmology are presented. (Ha)

  8. Particle cosmology

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    The understanding of the Universe at the largest and smallest scales traditionally has been the subject of cosmology and particle physics, respectively. Studying the evolution of the Universe connects today's large scales with the tiny scales in the very early Universe and provides the link between the physics of particles and of the cosmos. This series of five lectures aims at a modern and critical presentation of the basic ideas, methods, models and observations in today's particle cosmology.

  9. High-resolution RCMs as pioneers for future GCMs

    Science.gov (United States)

    Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.

    2017-12-01

    Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data

  10. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding. This leads to high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using very high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with

  11. Quasars at the Cosmic Dawn: effects on Reionization properties in cosmological simulations

    Science.gov (United States)

    Garaldi, Enrico; Compostella, Michele; Porciani, Cristiano

    2018-05-01

    We study a model of cosmic reionization where quasars (QSOs) are the dominant source of ionizing photons at all relevant epochs. We employ a suite of adaptive hydrodynamical simulations post-processed with a multi-wavelength Monte Carlo radiative-transfer code and calibrate them in order to accurately reproduce the observed quasar luminosity function and emissivity evolution. Our results show that the QSO-only model fails to reproduce key observables linked to helium reionization, such as the temperature evolution of the inter-galactic medium (IGM) and the HeII effective optical depth in synthetic Lyα spectra. Nevertheless, we find hints that an increased quasar contribution can explain recent measurements of a large inhomogeneity in the IGM at redshift z ~ 5. Finally, we devise a method capable of constraining the QSO contribution to reionization from the properties of the HeII Lyα forest at z ~ 3.5.

  12. Integrated High Resolution Monitoring of Mediterranean vegetation

    Science.gov (United States)

    Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo; Mereu, Simone

    2017-04-01

    The study of vegetation features in complex and highly vulnerable ecosystems, such as Mediterranean maquis, calls for continuous monitoring systems at high spatial and temporal resolution, for a better interpretation of the mechanisms of phenological and eco-physiological processes. Near-surface remote sensing techniques are used to quantify, at high temporal resolution and with a certain degree of spatial integration, the seasonal variations of the surface optical and radiometric properties. In recent decades, the design and implementation of global monitoring networks has involved the use of non-destructive and/or cheaper approaches such as (i) continuous surface flux measurement stations, (ii) phenological observation networks, and (iii) measurement of temporal and spatial variations of the vegetation spectral properties. In this work preliminary results from the ECO-SCALE (Integrated High Resolution Monitoring of Mediterranean vegetation) project are reported. The project was mainly aimed at developing an integrated system for environmental monitoring based on digital photography, hyperspectral radiometry, and micrometeorological techniques during three years of experimentation (2013-2016) at a Mediterranean site in Italy (Capo Caccia, Alghero). The main results concerned the analysis of chromatic coordinate indices from digital images, used to characterize the phenological patterns of typical shrubland species, determining the start and duration of the growing season and the physiological status under different environmental drought conditions; the seasonal patterns of canopy phenology were then compared to NEE (Net Ecosystem Exchange) patterns, showing similarities. However, maximum values of NEE and ER (Ecosystem Respiration), and their short-term variation, seemed mainly tuned by the inter-annual pattern of meteorological variables, in particular of the temperature recorded in the months preceding the vegetation green-up. Finally, green signals

  13. High-Resolution Integrated Optical System

    Science.gov (United States)

    Prakapenka, V. B.; Goncharov, A. F.; Holtgrewe, N.; Greenberg, E.

    2017-12-01

    Raman and optical spectroscopy in situ at extreme high pressure and temperature conditions relevant to the planets' deep interiors is a versatile tool for characterizing a wide range of properties of minerals essential for understanding the structure, composition, and evolution of terrestrial and giant planets. Optical methods, greatly complementing X-ray diffraction and spectroscopy techniques, become crucial when dealing with light elements. The study of vibrational and optical properties of minerals and volatiles has been the topic of many research efforts in past decades. A great deal of information on materials' properties under extreme pressure and temperature has been acquired, including that related to structural phase changes, electronic transitions, and chemical transformations. These provide an important insight into the physical and chemical states of planetary interiors (e.g. the nature of deep reservoirs) and their dynamics, including heat and mass transport (e.g. the deep carbon cycle). Optical and vibrational spectroscopy can also be very instrumental for elucidating the nature of materials' molten states, such as those related to the Earth's volatiles (CO2, CH4, H2O), aqueous fluids and silicate melts, planetary ices (H2O, CH4, NH3), noble gases, and H2. Optical spectroscopy performed concomitantly with X-ray diffraction and spectroscopy measurements at the GSECARS beamlines on the same sample and at the same P-T conditions would greatly enhance the quality of this research and, moreover, would provide unique new information on the chemical state of matter. The advanced high-resolution user-friendly integrated optical system is currently under construction and expected to be completed by 2018. In our conceptual design we have implemented Raman spectroscopy with five excitation wavelengths (266, 473, 532, 660, 946 nm), confocal imaging, double sided IR laser heating combined with high temperature Raman (including coherent anti-Stokes Raman scattering) and

  14. Galactic r-process enrichment by neutron star mergers in cosmological simulations of a Milky Way-mass galaxy

    Science.gov (United States)

    van de Voort, Freeke; Quataert, Eliot; Hopkins, Philip F.; Kereš, Dušan; Faucher-Giguère, Claude-André

    2015-02-01

    We quantify the stellar abundances of neutron-rich r-process nuclei in cosmological zoom-in simulations of a Milky Way-mass galaxy from the Feedback In Realistic Environments project. The galaxy is enriched with r-process elements by binary neutron star (NS) mergers and with iron and other metals by supernovae. These calculations include key hydrodynamic mixing processes not present in standard semi-analytic chemical evolution models, such as galactic winds and hydrodynamic flows associated with structure formation. We explore a range of models for the rate and delay time of NS mergers, intended to roughly bracket the wide range of models consistent with current observational constraints. We show that NS mergers can produce [r-process/Fe] abundance ratios and scatter that appear reasonably consistent with observational constraints. At low metallicity, [Fe/H] ≲ -2, we predict there is a wide range of stellar r-process abundance ratios, with both supersolar and subsolar abundances. Low-metallicity stars or stars that are outliers in their r-process abundance ratios are, on average, formed at high redshift and located at large galactocentric radius. Because NS mergers are rare, our results are not fully converged with respect to resolution, particularly at low metallicity. However, the uncertain rate and delay time distribution of NS mergers introduce an uncertainty in the r-process abundances comparable to that due to finite numerical resolution. Overall, our results are consistent with NS mergers being the source of most of the r-process nuclei in the Universe.
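
    As a small illustration of the kind of merger-rate model being bracketed here, the sketch below draws neutron-star-merger delay times from a power-law delay-time distribution, DTD ∝ 1/t above a minimum delay, using inverse-transform sampling. The functional form, minimum delay, and maximum delay are assumptions chosen for illustration, not the specific models explored in the paper.

```python
import numpy as np

def sample_ns_merger_delays(n, t_min=3e7, t_max=1.4e10, seed=0):
    """Draw delay times (yr) between star formation and NS-NS merger from a
    DTD proportional to 1/t on [t_min, t_max], via inverse-transform sampling."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    # CDF of 1/t on [t_min, t_max]: F(t) = ln(t/t_min) / ln(t_max/t_min)
    return t_min * (t_max / t_min) ** u

delays = sample_ns_merger_delays(100_000)
print("median delay [Gyr]:", np.median(delays) / 1e9)
print("fraction merging within 1 Gyr:", np.mean(delays < 1e9))
```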

  15. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down, therefore requiring higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials.
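
    The momentum dependence of the multiple-scattering broadening mentioned above is commonly estimated with the Highland/PDG formula for the RMS scattering angle; the snippet below evaluates it for muons of a few momenta crossing a copper slab. The material, thickness, and momenta are illustrative choices, not parameters from the study.

```python
import math

def highland_theta0(p_mev, x_over_X0, beta=1.0, z=1):
    """RMS plane-projected multiple-scattering angle (radians) for a particle of
    momentum p (MeV/c), charge z, traversing x/X0 radiation lengths (PDG form)."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0))

# Example: muons crossing 10 cm of copper (radiation length X0 ~ 1.436 cm)
for p in (1_000, 3_000, 10_000):            # momenta in MeV/c
    theta0 = highland_theta0(p, 10.0 / 1.436)
    print(f"p = {p/1000:.0f} GeV/c : theta0 ~ {theta0*1e3:.2f} mrad")
```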

  16. High resolution computed tomography of positron emitters

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Budinger, T.F.; Cahoon, J.L.; Huesman, R.H.; Jackson, H.G.

    1976-10-01

    High resolution computed transaxial radionuclide tomography has been performed on phantoms containing positron-emitting isotopes. The imaging system consisted of two opposing groups of eight NaI(Tl) crystals 8 mm x 30 mm x 50 mm deep and the phantoms were rotated to measure coincident events along 8960 projection integrals as they would be measured by a 280-crystal ring system now under construction. The spatial resolution in the reconstructed images is 7.5 mm FWHM at the center of the ring and approximately 11 mm FWHM at a radius of 10 cm. We present measurements of imaging and background rates under various operating conditions. Based on these measurements, the full 280-crystal system will image 10,000 events per sec with 400 μCi in a section 1 cm thick and 20 cm in diameter. We show that 1.5 million events are sufficient to reliably image 3.5-mm hot spots with 14-mm center-to-center spacing and isolated 9-mm diameter cold spots in phantoms 15 to 20 cm in diameter

  17. High speed, High resolution terahertz spectrometers

    International Nuclear Information System (INIS)

    Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook

    2008-01-01

    A variety of sources and methods have been developed for terahertz spectroscopy over almost two decades. Terahertz time domain spectroscopy (THz TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz TDS has been demonstrated, featuring rapid data acquisition and a high spectral resolution. Also, terahertz frequency comb spectroscopy (TFCS) possesses attractive features for high precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high speed, high resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies, without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured using the cross-correlation method to be 270 fs. AOS THz TDS is presented in Fig. 1, which shows a time domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal within a 100 ps time window. The spectrum obtained by fast Fourier transformation (FFT) of the time domain waveform has a frequency resolution of 100 MHz. The dependence of the signal-to-noise ratio (SNR) on the measurement time is also investigated
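
    The basic timing relations behind asynchronous optical sampling follow directly from the two repetition rates: with lasers at f_rep and f_rep + Δf, the pump-probe delay sweeps a window of 1/f_rep in steps of Δf/(f_rep(f_rep + Δf)), and an FFT over that window gives a frequency resolution of roughly f_rep. The snippet below evaluates these for illustrative numbers consistent with the 10 ns window and 100 MHz resolution quoted above; the offset frequency is an assumed value, not taken from the instrument.

```python
def aos_parameters(f_rep_hz, delta_f_hz):
    """Basic timing relations for asynchronous optical sampling (AOS) THz-TDS
    with two mode-locked lasers at f_rep and f_rep + delta_f."""
    window = 1.0 / f_rep_hz                                        # scanned time window
    time_step = delta_f_hz / (f_rep_hz * (f_rep_hz + delta_f_hz))  # delay increment per pulse pair
    scan_rate = delta_f_hz                                         # full scans per second
    freq_resolution = 1.0 / window                                 # after FFT of one window
    return window, time_step, scan_rate, freq_resolution

# Illustrative values: 100 MHz oscillators offset by 2 kHz (assumed)
window, step, rate, df = aos_parameters(100e6, 2e3)
print(f"time window      : {window*1e9:.1f} ns")
print(f"delay step       : {step*1e15:.0f} fs")
print(f"scans per second : {rate:.0f}")
print(f"freq. resolution : {df/1e6:.0f} MHz")
```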

  18. High resolution CT of temporal bone trauma

    International Nuclear Information System (INIS)

    Youn, Eun Kyung

    1986-01-01

    Radiographic studies of the temporal bone following head trauma are indicated when there is cerebrospinal fluid otorrhea or rhinorrhea, hearing loss, or facial nerve paralysis. Plain radiography displays only 17-30% of temporal bone fractures and pluridirectional tomography is both difficult to perform, particularly in the acutely ill patient, and less satisfactory for the demonstration of fine fractures. Consequently, high resolution CT is the imaging method of choice for the investigation of suspected temporal bone trauma and allows spatial resolution of fine bony detail comparable to that attainable by conventional tomography. Eight cases of temporal bone trauma were examined at Korea General Hospital from April 1985 through May 1986. The results were as follows: Seven patients (87%) suffered longitudinal fractures. In six patients who had purely conductive hearing loss, CT revealed various ossicular chain abnormalities. In one patient who had sensorineural hearing loss, CT demonstrated an intact ossicular chain with a fracture near the lateral wall of the lateral semicircular canal. In one patient who had mixed hearing loss, CT showed a complex fracture.

  19. High resolution SETI: Experiences and prospects

    Science.gov (United States)

    Horowitz, Paul; Clubok, Ken

    Megachannel spectroscopy with sub-Hertz resolution constitutes an attractive strategy for a microwave search for extraterrestrial intelligence (SETI), assuming the transmission of a narrowband radiofrequency beacon. Such resolution matches the properties of the interstellar medium, and the necessary Doppler corrections provide a high degree of interference rejection. We have constructed a frequency-agile receiver with an FFT-based 8 megachannel digital spectrum analyzer, on-line signal recognition, and multithreshold archiving. We are using it to conduct a meridian transit search of the northern sky at the Harvard-Smithsonian 26-m antenna, with a second identical system scheduled to begin observations in Argentina this month. Successive 400 kHz spectra, at 0.05 Hz resolution, are searched for features characteristic of an intentional narrowband beacon transmission. These spectra are centered on guessable frequencies (such as λ21 cm), referenced successively to the local standard of rest, the galactic barycenter, and the cosmic blackbody rest frame. This search has rejected interference admirably, but is greatly limited both in total frequency coverage and sensitivity to signals other than carriers. We summarize five years of high resolution SETI at Harvard, in the context of answering the questions "How useful is narrowband SETI, how serious are its limitations, what can be done to circumvent them, and in what direction should SETI evolve?" Increasingly powerful signal processing hardware, combined with ever-higher memory densities, are particularly relevant, permitting the construction of compact and affordable gigachannel spectrum analyzers covering hundreds of megahertz of instantaneous bandwidth.
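
    The channel counts and integration times implied above follow from simple relations between bandwidth, resolution, and FFT length; the snippet below reproduces them (400 kHz at 0.05 Hz resolution indeed requires 8 × 10⁶ channels and 20 s of data per spectrum). The "gigachannel" example at the end uses assumed numbers purely to illustrate the scaling.

```python
def fft_channelizer(bandwidth_hz, resolution_hz):
    """Number of spectral channels and integration time per spectrum for an
    FFT-based spectrum analyzer covering `bandwidth_hz` at `resolution_hz`."""
    n_channels = int(round(bandwidth_hz / resolution_hz))
    t_integrate = 1.0 / resolution_hz          # seconds of data per FFT
    return n_channels, t_integrate

# The Harvard system described above: 400 kHz of bandwidth at 0.05 Hz resolution
print(fft_channelizer(400e3, 0.05))            # -> (8000000, 20.0)

# A hypothetical "gigachannel" analyzer covering 200 MHz at 0.2 Hz resolution
print(fft_channelizer(200e6, 0.2))             # -> (1000000000, 5.0)
```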

  20. High-resolution CCD imaging alternatives

    Science.gov (United States)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Furthermore, in still image systems electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. 2. EXTENDING CCD TECHNOLOGY BEYOND BROADCAST: Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.

  1. High resolution simultaneous measurements of airborne radionuclides

    International Nuclear Information System (INIS)

    Abe, T.; Yamaguchi, Y.; Tanaka, K.; Komura, K.

    2006-01-01

    High resolution (2-3 hrs) simultaneous measurements of the airborne radionuclides ²¹²Pb, ²¹⁰Pb and ⁷Be have been performed using extremely low background Ge detectors at the Ogoya Underground Laboratory. We measured the above radionuclides at three monitoring points: 1) the Low Level Radioactivity Laboratory (LLRL) of Kanazawa University, 2) the Shishiku Plateau (640 m MSL), located about 8 km from LLRL, to investigate vertical differences in activity levels, and 3) Hegura Island (10 m MSL), located about 50 km from the Noto Peninsula in the Sea of Japan, to evaluate the influence of the Asian continent and the Japanese mainland on the variations of the activity levels. Variations of the short-lived ²¹²Pb concentration showed noticeable time lags between LLRL and the Shishiku Plateau. These time lags might be caused by changes in the height of the planetary boundary layer. On the contrary, variations of the long-lived ²¹⁰Pb and ⁷Be showed simultaneity at the three locations because these concentrations are homogeneous over the whole area. (author)

  2. The IRX-β dust attenuation relation in cosmological galaxy formation simulations

    Science.gov (United States)

    Narayanan, Desika; Davé, Romeel; Johnson, Benjamin D.; Thompson, Robert; Conroy, Charlie; Geach, James

    2018-02-01

    We utilize a series of galaxy formation simulations to investigate the relationship between the ultraviolet (UV) slope, β, and the infrared excess (IRX) in the spectral energy distributions (SEDs) of galaxies. Our main goals are to understand the origin of and scatter in the IRX-β relation; to assess the efficacy of simplified stellar population synthesis screen models in capturing the essential physics in the IRX-β relation; and to understand systematic deviations from the canonical local IRX-β relations in particular populations of high-redshift galaxies. Our main results follow. Young galaxies with relatively cospatial UV and IR emitting regions and a Milky Way-like extinction curve fall on or near the standard Meurer relation. This behaviour is well captured by simplified screen models. Scatter in the IRX-β relation is dominated by three major effects: (i) older stellar populations drive galaxies below the relations defined for local starbursts due to a reddening of their intrinsic UV SEDs; (ii) complex geometries in high-z heavily star-forming galaxies drive galaxies towards blue UV slopes owing to optically thin UV sightlines; (iii) shallow extinction curves drive galaxies downwards in the IRX-β plane due to lowered near-ultraviolet/far-ultraviolet extinction ratios. We use these features of the UV slopes of galaxies to derive a fitting relation that reasonably collapses the scatter back towards the canonical local relation. Finally, we use these results to develop an understanding for the location of two particularly enigmatic populations of galaxies in the IRX-β plane: z ˜ 2-4 dusty star-forming galaxies and z > 5 star-forming galaxies.

  3. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is visually determined, depends on the dose rate and the inertia of the television pick-up tube. This connection is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of function patterns of high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical band width and number of lines on the resolution in the X-ray television image stored is explained in more detail. (orig.) [de

  4. Is the cosmological singularity compulsory

    International Nuclear Information System (INIS)

    Bekenstein, J.D.; Meisels, A.

    1980-01-01

    The cosmological singularity is inherent in all conventional general relativistic cosmological models. There can be no question that it is an unphysical feature; yet there does not seem to be any conservative way of eliminating it. Here we present singularity-free isotropic cosmological models which are indistinguishable from general relativistic ones at late times. They are based on the general theory of variable rest masses that we developed recently. Outside cosmology this theory simulates general relativity well. Thus it provides a framework incorporating those features which have made general relativity so successful while providing a way out of the singularity dilemma. The cosmological models can be made to incorporate Dirac's large numbers hypothesis: G(now)/G(0) ≈ 10⁻³⁸.

  5. Higgs cosmology.

    Science.gov (United States)

    Rajantie, Arttu

    2018-03-06

    The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available. This article is part of the Theo Murphy meeting issue 'Higgs cosmology'. © 2018 The Author(s).

  6. Cosmological principle

    International Nuclear Information System (INIS)

    Wesson, P.S.

    1979-01-01

    The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed. It is called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution.

  7. Higgs cosmology

    Science.gov (United States)

    Rajantie, Arttu

    2018-01-01

    The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available. This article is part of the Theo Murphy meeting issue `Higgs cosmology'.

  8. Using High Resolution Simulations with WRF/SSiB Regional Climate Model Constrained by In Situ Observations to Assess the Impacts of Dust in Snow in the Upper Colorado River Basin

    Science.gov (United States)

    Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.

    2015-12-01

    The mountain snowpack is an essential resource for both the environment and society. Observational and energy balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). In order to accurately estimate the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to be able to (1) adequately simulate the snowpack (accumulation), and (2) realistically represent DOS processes in models. Energy balance models do not capture the impact on a broader local or regional scale, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoother terrain. All this implies the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. The recent increase in computing power allows RCMs to be used at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain modeling simulations. Therefore, in the work presented here, we take advantage of these latest resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution. This scale has been shown by others to be adequate in capturing orographic processes over WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB. Furthermore, we adjust the dust absorptive properties based on observed values at these sites, as opposed to generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.

  9. Processing method for high resolution monochromator

    International Nuclear Information System (INIS)

    Kiriyama, Koji; Mitsui, Takaya

    2006-12-01

    A processing method for high resolution monochromators (HRM) has been developed at the Japan Atomic Energy Agency/Quantum Beam Science Directorate/Synchrotron Radiation Research Unit at SPring-8. For manufacturing an HRM, a sophisticated slicing machine and an X-ray diffractometer have been installed for shaping a crystal ingot and for precisely orienting its surface, respectively. The specifications of the slicing machine are as follows. The diamond blade is up to φ350 mm in diameter, with a spindle diameter of φ38.1 mm and a thickness of 2 mm. A large crystal ingot of 100 mm diameter and 200 mm length can be cut, and thin crystal samples such as wafers can also be cut using a separate sample holder. The working distance of the main shaft perpendicular to the working table is 350 mm at maximum, the smallest step of the main shaft in the front-back and top-bottom directions is 0.001 mm as read by a digital encoder, and a feed rate of 2 mm/min can be set for cutting samples in the forward direction. For orienting crystal faces relative to the blade, a one-circle goniometer and a two-circle segment are mounted on the working table; rotation and tilt of the stage are adjusted manually, and the digital encoder of the rotation stage has an angular resolution of better than 0.01 degrees. In addition, a hand drill is available as a supporting device for detailed processing of crystals. With this setup, an ideal crystal face can be cut from crystal samples within an accuracy of about 0.01 degrees. With these devices, high energy resolution monochromator crystals for inelastic X-ray scattering and beam collimators are now available and are expected to be used for nanotechnology studies. (author)

  10. Toward high-resolution optoelectronic retinal prosthesis

    Science.gov (United States)

    Palanker, Daniel; Huie, Philip; Vankov, Alexander; Asher, Alon; Baccus, Steven

    2005-04-01

    It has been already demonstrated that electrical stimulation of retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. Current retinal implants provide very low resolution (just a few electrodes), while several thousand pixels are required for functional restoration of sight. We present a design of the optoelectronic retinal prosthetic system that can activate a retinal stimulating array with pixel density up to 2,500 pix/mm2 (geometrically corresponding to a visual acuity of 20/80), and allows for natural eye scanning rather than scanning with a head-mounted camera. The system operates similarly to "virtual reality" imaging devices used in military and medical applications. An image from a video camera is projected by a goggle-mounted infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Such a system provides a broad field of vision by allowing for natural eye scanning. The goggles are transparent to visible light, thus allowing for simultaneous utilization of remaining natural vision along with prosthetic stimulation. Optical control of the implant allows for simple adjustment of image processing algorithms and for learning. A major prerequisite for high resolution stimulation is the proximity of neural cells to the stimulation sites. This can be achieved with sub-retinal implants constructed in a manner that directs migration of retinal cells to target areas. Two basic implant geometries are described: perforated membranes and protruding electrode arrays. Possibility of the tactile neural stimulation is also examined.

  11. High-resolution phylogenetic microbial community profiling

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Esther; Coleman-Derr, Devin; Bowman, Brett; Schwientek, Patrick; Clum, Alicia; Copeland, Alex; Ciobanu, Doina; Cheng, Jan-Fang; Gies, Esther; Hallam, Steve; Tringe, Susannah; Woyke, Tanja

    2014-03-17

    The representation of bacterial and archaeal genome sequences is strongly biased towards cultivated organisms, which belong to merely four phylogenetic groups. Functional information and inter-phylum level relationships are still largely underexplored for candidate phyla, which are often referred to as microbial dark matter. Furthermore, a large portion of the 16S rRNA gene records in the GenBank database are labeled as environmental samples and unclassified, which is in part due to low read accuracy, potential chimeric sequences produced during PCR amplifications and the low resolution of short amplicons. In order to improve the phylogenetic classification of novel species and advance our knowledge of the ecosystem function of uncultivated microorganisms, high-throughput full-length 16S rRNA gene sequencing methodologies with reduced biases are needed. We evaluated the performance of PacBio single-molecule real-time (SMRT) sequencing in high-resolution phylogenetic microbial community profiling. For this purpose, we compared PacBio and Illumina metagenomic shotgun and 16S rRNA gene sequencing of a mock community as well as of an environmental sample from Sakinaw Lake, British Columbia. Sakinaw Lake is known to contain a large number of microbial species from candidate phyla. Sequencing results show that community structure based on PacBio shotgun and 16S rRNA gene sequences is highly similar in both the mock and the environmental communities. Resolution power and community representation accuracy from SMRT sequencing data appeared to be independent of GC content of microbial genomes and was higher when compared to Illumina-based metagenome shotgun and 16S rRNA gene (iTag) sequences, e.g. full-length sequencing resolved all 23 OTUs in the mock community, while iTags did not resolve closely related species. SMRT sequencing hence offers various potential benefits when characterizing uncharted microbial communities.

  12. KiDS-450: cosmological constraints from weak-lensing peak statistics - II: Inference from shear peaks using N-body simulations

    Science.gov (United States)

    Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko

    2018-02-01

    We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg² of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S_8 = σ_8 √(Ω_m/0.3), which probes the (Ω_m, σ_8) plane perpendicularly to its main degeneracy. We estimate S_8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high significance peaks alone (3 ≤ S/N ≤ 4) which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S_8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.
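
    As a minimal illustration of the quantities involved, the sketch below evaluates S_8 = σ_8 √(Ω_m/0.3) for a trial cosmology and counts peaks (local maxima) of a toy signal-to-noise map in S/N bins. The Gaussian noise-only map and the binning are placeholders and do not reproduce the aperture-mass maps or the KiDS-450 analysis.

```python
import numpy as np

def s8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * np.sqrt(omega_m / 0.3)

def count_peaks(snr_map, bins):
    """Count local maxima of a 2D S/N map in the given S/N bins
    (a pixel is a peak if it exceeds all 8 of its neighbours)."""
    m = snr_map
    core = m[1:-1, 1:-1]
    neighbours = np.stack([m[1 + di:m.shape[0] - 1 + di, 1 + dj:m.shape[1] - 1 + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0)])
    is_peak = np.all(core[None] > neighbours, axis=0)
    return np.histogram(core[is_peak], bins=bins)[0]

print("S_8 =", s8(0.83, 0.29))
rng = np.random.default_rng(1)
toy_map = rng.standard_normal((256, 256))          # noise-only toy S/N map
print("peak counts per S/N bin:", count_peaks(toy_map, bins=[0, 1, 2, 3, 4]))
```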

  13. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.

    Science.gov (United States)

    Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen

    2010-11-01

    In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with resolution of 10,000-15,000 full width at half-maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with experts' visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological condition.
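
    The abstract does not give the parametric peak model or the sampler details, so the sketch below only illustrates the general idea of MCMC-based peak detection: a single Gaussian peak with unknown location, amplitude, and width is fit to a noisy one-dimensional spectrum with a random-walk Metropolis sampler, and the posterior samples of the location summarize the detection. The priors, proposal widths, and noise model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic spectrum: one Gaussian peak plus white noise
mz = np.linspace(0.0, 100.0, 1000)
true_loc, true_amp, true_width, noise_sd = 42.0, 5.0, 0.8, 0.5
spectrum = true_amp * np.exp(-0.5 * ((mz - true_loc) / true_width) ** 2)
spectrum += rng.normal(0.0, noise_sd, mz.size)

def log_posterior(theta):
    """Gaussian likelihood with flat priors inside broad bounds."""
    loc, amp, width = theta
    if not (0 < loc < 100 and 0 < amp < 50 and 0.1 < width < 5):
        return -np.inf
    model = amp * np.exp(-0.5 * ((mz - loc) / width) ** 2)
    return -0.5 * np.sum((spectrum - model) ** 2) / noise_sd ** 2

def metropolis(theta0, n_steps=20000, step=(0.05, 0.1, 0.02)):
    """Random-walk Metropolis sampler; returns the chain of parameter samples."""
    theta, logp = np.array(theta0, float), log_posterior(theta0)
    chain = np.empty((n_steps, len(theta0)))
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step)
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

# Initialize at the spectrum maximum, discard burn-in, summarize the peak location
chain = metropolis([mz[np.argmax(spectrum)], spectrum.max(), 1.0])[5000:]
loc_samples = chain[:, 0]
print(f"peak location: {loc_samples.mean():.2f} +/- {loc_samples.std():.2f}")
```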

  14. Large angle cosmic microwave background fluctuations from cosmic strings with a cosmological constant

    International Nuclear Information System (INIS)

    Landriau, M.; Shellard, E.P.S.

    2004-01-01

    In this paper, we present results for large-angle cosmic microwave background anisotropies generated from high resolution simulations of cosmic string networks in a range of flat Friedmann-Robertson-Walker universes with a cosmological constant. Using an ensemble of all-sky maps, we compare with the Cosmic Background Explorer data to infer a normalization (or upper bound) on the string linear energy density μ. For a flat matter-dominated model (Ω_M = 1) we find Gμ/c² ≅ 0.7×10⁻⁶, which is lower than previous constraints probably because of the more accurate inclusion of string small-scale structure. For a cosmological constant within an observationally acceptable range, we find a relatively weak dependence, with Gμ/c² less than 10% higher

  15. Fractal cosmology

    International Nuclear Information System (INIS)

    Dickau, Jonathan J.

    2009-01-01

    The use of fractals and fractal-like forms to describe or model the universe has had a long and varied history, which begins long before the word fractal was actually coined. Since the introduction of mathematical rigor to the subject of fractals, by Mandelbrot and others, there have been numerous cosmological theories and analyses of astronomical observations which suggest that the universe exhibits fractality or is by nature fractal. In recent years, the term fractal cosmology has come into usage, as a description for those theories and methods of analysis whereby a fractal nature of the cosmos is shown.

  16. HIGH RESOLUTION AIRBORNE SHALLOW WATER MAPPING

    Directory of Open Access Journals (Sweden)

    F. Steinbacher

    2012-07-01

    In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40 km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution areal images were taken to provide an optical reference, offering a wide range of possibilities on further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye, the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. By carefully selecting days with optimum water quality, satisfactory penetration down to the river

  17. High Resolution Airborne Shallow Water Mapping

    Science.gov (United States)

    Steinbacher, F.; Pfennigbauer, M.; Aufleger, M.; Ullrich, A.

    2012-07-01

    In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution areal images were taken to provide an optical reference, offering a wide range of possibilities on further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye, the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. By carefully selecting days with optimum water quality, satisfactory penetration down to the river bed was achieved

  18. EVOLUTION OF THE MASS-METALLICITY RELATIONS IN PASSIVE AND STAR-FORMING GALAXIES FROM SPH-COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Romeo Velonà, A. D.; Gavignaud, I.; Meza, A.; Sommer-Larsen, J.; Napolitano, N. R.; Antonuccio-Delogu, V.; Cielo, S.

    2013-01-01

    We present results from SPH-cosmological simulations, including self-consistent modeling of supernova feedback and chemical evolution, of galaxies belonging to two clusters and 12 groups. We reproduce the mass-metallicity (ZM) relation of galaxies classified in two samples according to their star-forming (SF) activity, as parameterized by their specific star formation rate (sSFR), across a redshift range up to z = 2. The overall ZM relation for the composite population evolves according to a redshift-dependent quadratic functional form that is consistent with other empirical estimates, provided that the highest mass bin of the brightest central galaxies is excluded. Its slope shows irrelevant evolution in the passive sample, being steeper in groups than in clusters. However, the subsample of high-mass passive galaxies only is characterized by a steep increase of the slope with redshift, from which it can be inferred that the bulk of the slope evolution of the ZM relation is driven by the more massive passive objects. The scatter of the passive sample is dominated by low-mass galaxies at all redshifts and keeps constant over cosmic times. The mean metallicity is highest in cluster cores and lowest in normal groups, following the same environmental sequence as that previously found in the red sequence building. The ZM relation for the SF sample reveals an increasing scatter with redshift, indicating that it is still being built at early epochs. The SF galaxies make up a tight sequence in the SFR-M * plane at high redshift, whose scatter increases with time alongside the consolidation of the passive sequence. We also confirm the anti-correlation between sSFR and stellar mass, pointing at a key role of the former in determining the galaxy downsizing, as the most significant means of diagnostics of the star formation efficiency. Likewise, an anti-correlation between sSFR and metallicity can be established for the SF galaxies, while on the contrary more active galaxies

  19. High Resolution Modeling of Hurricanes in a Climate Context

    Science.gov (United States)

    Knutson, T. R.

    2007-12-01

    Modeling of tropical cyclone activity in a climate context initially focused on simulation of relatively weak tropical storm-like disturbances as resolved by coarse grid (200 km) global models. As computing power has increased, multi-year simulations with global models of grid spacing 20-30 km have become feasible. Increased resolution also allowed for simulation of storms of increasing intensity, and some global models generate storms of hurricane strength, depending on their resolution and other factors, although detailed hurricane structure is not simulated realistically. Results from some recent high resolution global model studies are reviewed. An alternative for hurricane simulation is regional downscaling. An early approach was to embed an operational (GFDL) hurricane prediction model within a global model solution, either for 5-day case studies of particular model storm cases, or for "idealized experiments" where an initial vortex is inserted into an idealized environment derived from global model statistics. Using this approach, hurricanes up to category five intensity can be simulated, owing to the model's relatively high resolution (9 km grid) and refined physics. Variants on this approach have been used to provide modeling support for theoretical predictions that greenhouse warming will increase the maximum intensities of hurricanes. These modeling studies also simulate increased hurricane rainfall rates in a warmer climate. The studies do not address hurricane frequency issues, and vertical shear is neglected in the idealized studies. A recent development is the use of regional model dynamical downscaling for extended (e.g., season-length) integrations of hurricane activity. In a study for the Atlantic basin, a non-hydrostatic model with grid spacing of 18 km is run without convective parameterization, but with internal spectral nudging toward observed large-scale (basin wavenumbers 0-2) atmospheric conditions from reanalyses. Using this approach, our

  20. Cosmological inflation

    CERN Document Server

    Enqvist, K

    2012-01-01

    The very basics of cosmological inflation are discussed. We derive the equations of motion for the inflaton field, introduce the slow-roll parameters, and present the computation of the inflationary perturbations and their connection to the temperature fluctuations of the cosmic microwave background.
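
    For reference, the basic equations these lectures start from can be written compactly: the inflaton equation of motion and Friedmann equation in a flat FRW background, the potential slow-roll parameters, and the lowest-order expression for the scalar spectral index (with the reduced Planck mass M_Pl = (8πG)^(-1/2)).

```latex
% Inflaton equation of motion and Friedmann equation (flat FRW, scalar field phi):
\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0 , \qquad
H^2 = \frac{1}{3 M_{\mathrm{Pl}}^2}\left( \tfrac{1}{2}\dot{\phi}^2 + V(\phi) \right) .

% Potential slow-roll parameters (inflation requires \epsilon \ll 1, |\eta| \ll 1):
\epsilon = \frac{M_{\mathrm{Pl}}^2}{2}\left( \frac{V'}{V} \right)^2 , \qquad
\eta = M_{\mathrm{Pl}}^2 \, \frac{V''}{V} .

% Scalar spectral index to lowest order in slow roll:
n_s - 1 = 2\eta - 6\epsilon .
```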

  1. Mathematical cosmology

    CERN Document Server

    Ellis, G F R

    1993-01-01

    Many topics were covered in the submitted papers, showing much life in this subject at present. They ranged from conventional calculations in specific cosmological models to provocatively speculative work. Space and time restrictions required selecting from them, for summarisation here; the book of Abstracts should be consulted for a full overview.

  2. Galileon cosmology

    International Nuclear Information System (INIS)

    Chow, Nathan; Khoury, Justin

    2009-01-01

    We study the cosmology of a galileon scalar-tensor theory, obtained by covariantizing the decoupling Lagrangian of the Dvali-Gabadadze-Porrati (DGP) model. Despite being local in 3+1 dimensions, the resulting cosmological evolution is remarkably similar to that of the full 4+1-dimensional DGP framework, both for the expansion history and the evolution of density perturbations. As in the DGP model, the covariant galileon theory yields two branches of solutions, depending on the sign of the galileon velocity. Perturbations are stable on one branch and ghostlike on the other. An interesting effect uncovered in our analysis is a cosmological version of the Vainshtein screening mechanism: at early times, the galileon dynamics are dominated by self-interaction terms, resulting in its energy density being suppressed compared to matter or radiation; once the matter density has redshifted sufficiently, the galileon becomes an important component of the energy density and contributes to dark energy. We estimate conservatively that the resulting expansion history is consistent with the observed late-time cosmology, provided that the scale of modification satisfies r_c ≳ 15 Gpc.

  3. Analysis of high-resolution simulations for the Black Forest region from a point of view of tourism climatology - a comparison between two regional climate models (REMO and CLM)

    Science.gov (United States)

    Endler, Christina; Matzarakis, Andreas

    2011-03-01

    An analysis of climate simulations from the point of view of tourism climatology, based on two regional climate models, namely REMO and CLM, was performed for a regional domain in the southwest of Germany, the Black Forest region, for two time frames: 1971-2000, representing the twentieth century climate, and 2021-2050, representing the future climate. In that context, the Intergovernmental Panel on Climate Change (IPCC) scenarios A1B and B1 are used. The analysis focuses on human-biometeorological and applied climatological issues, especially for tourism purposes - that is, parameters belonging to the thermal (physiologically equivalent temperature, PET), physical (precipitation, snow, wind), and aesthetic (fog, cloud cover) facets of climate in tourism. In general, both models reveal similar trends, but differ in their extent. The trends of thermal comfort are contradictory: thermal comfort tends to decrease in REMO, while it shows a slight increase in CLM. Moreover, REMO reveals a wider range of future climate trends than CLM, especially for sunshine, dry days, and heat stress. Both models are driven by the same global coupled atmosphere-ocean model, ECHAM5/MPI-OM. Because both models are not able to resolve meso- and micro-scale processes such as cloud microphysics, differences between model results and discrepancies in the development of even those parameters (e.g., cloud formation and cover) are due to different model parameterization and formulation. Climatic changes expected by 2050 are small compared to 2100, but may have major impacts on tourism as, for example, snow cover and its duration are highly vulnerable to a warmer climate, directly affecting winter tourism. Beyond that, indirect impacts are of high relevance as they influence tourism as well. Thus, changes in climate, natural environment, demography, and tourists' demands, among other things, affect the economy in general. The analysis of the CLM results and its comparison with the REMO results complete the analysis performed

  4. High resolution time integration for SN radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2009-01-01

    First-order, second-order, and high-resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high-resolution method achieves a rate of convergence better than first order, but also suppresses the artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. The high-resolution method achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high-resolution method. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high-resolution schemes converged to the same solution as the first-order method, with better convergence rates. The high-resolution method is more accurate than the first-order method and matches or exceeds the second-order method.
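    As a rough illustration of the idea of a limited time discretization, the sketch below blends a first-order and a second-order explicit step for a method-of-lines advection problem, dropping back to first order where successive solution differences change sign; it is a generic example, not the scheme or transport solver of the paper:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero wherever a and b differ in sign."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_ab2_step(u_nm1, u_n, f, dt):
    """One 'high-resolution' style time step for du/dt = f(u).

    Blends forward Euler (first order) with the Adams-Bashforth-2 correction
    (second order). The blend factor is limited using successive solution
    differences, so the update reverts to first order where the solution is
    not smooth in time. A generic sketch only.
    """
    du_old = u_n - u_nm1
    f_n, f_nm1 = f(u_n), f(u_nm1)
    du_new = dt * f_n                                   # forward-Euler predictor increment
    # Limiter in [0, 1]: 1 keeps the full second-order correction, 0 drops it.
    phi = np.abs(minmod(du_new, du_old)) / (np.abs(du_new) + 1e-30)
    return u_n + dt * f_n + phi * 0.5 * dt * (f_n - f_nm1)

# Example: upwind-discretized linear advection u_t + a u_x = 0 (periodic domain)
a, nx = 1.0, 200
dx, dt = 1.0 / nx, 0.2 / nx                             # CFL = 0.2
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)           # discontinuous initial data

def rhs(u):
    return -a * (u - np.roll(u, 1)) / dx                # first-order upwind in space

u_prev = u.copy()
u_curr = u + dt * rhs(u)                                # bootstrap with forward Euler
for _ in range(400):
    u_prev, u_curr = u_curr, limited_ab2_step(u_prev, u_curr, rhs, dt)
```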

  5. Quantifying and containing the curse of high resolution coronal imaging

    Directory of Open Access Journals (Sweden)

    V. Delouille

    2008-10-01

    Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe aim at approaching the Sun closer than ever before, with on board some high resolution imagers (HRI) having a subsecond cadence and a pixel area of about (80 km)² at the Sun during perihelion. In order to guarantee their scientific success, it is necessary to evaluate whether the photon counts available at this resolution and cadence will provide a sufficient signal-to-noise ratio (SNR). For example, if the inhomogeneities in the Quiet Sun emission prevail at higher resolution, one may hope to locally have more photon counts than in the case of a uniform source. It is relevant to quantify how inhomogeneous the quiet corona will be for a pixel pitch that is about 20 times smaller than in the case of SoHO/EIT, and 5 times smaller than that of TRACE. We perform a first step in this direction by analyzing and characterizing the spatial intermittency of Quiet Sun images by means of a multifractal analysis. We identify the parameters that specify the scale-invariance behavior. This identification then allows us to select a family of multifractal processes, namely the Compound Poisson Cascades, that can synthesize artificial images having some of the scale-invariance properties observed in the recorded images. The prevalence of self-similarity in Quiet Sun coronal images makes it relevant to study the ratio between the SNR present in SoHO/EIT images and in coarsened images. SoHO/EIT images thus play the role of "high resolution" images, whereas the "low-resolution" coarsened images are rebinned so as to simulate a smaller angular resolution and/or a larger distance to the Sun. For a fixed difference in angular resolution and in spacecraft-Sun distance, we determine the proportion of pixels having an SNR preserved at high resolution given a particular increase in effective area. If scale-invariance continues to prevail at smaller scales, the conclusion reached with SoHO/EIT images can be transposed
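    A simple way to picture the trade-off between pixel size and photon SNR is to rebin a synthetic intermittent image and count the pixels that retain a chosen Poisson SNR; the lognormal field and photon budget below are stand-ins, not the Compound Poisson Cascade synthesis or the instrument parameters used in the study:

```python
import numpy as np

def rebin_sum(image, factor):
    """Sum photon counts in factor x factor blocks (coarser pixels collect more photons)."""
    h, w = (s - s % factor for s in image.shape)
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

def fraction_above_snr(counts, snr_min=10.0):
    """Fraction of pixels whose Poisson SNR = sqrt(counts) exceeds snr_min."""
    return float(np.mean(np.sqrt(counts) >= snr_min))

# Synthetic 'intermittent' quiet-Sun intensity: lognormal field with Poisson photon noise.
rng = np.random.default_rng(1)
mean_counts_per_pixel = 25.0                      # assumed photon budget at high resolution
intensity = rng.lognormal(mean=0.0, sigma=0.8, size=(512, 512))
intensity *= mean_counts_per_pixel / intensity.mean()
counts_hi = rng.poisson(intensity)

print("high resolution:", fraction_above_snr(counts_hi))
print("rebinned 4x4:   ", fraction_above_snr(rebin_sum(counts_hi, 4)))
```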

  6. A high resolution solar atlas for fluorescence calculations

    Science.gov (United States)

    A'Hearn, M. F.; Ohlmacher, J. T.; Schleicher, D. G.

    1983-01-01

    The characteristics required of a solar atlas to be used for studying the fluorescence process in comets are examined. Several sources of low resolution data were combined to provide an absolutely calibrated spectrum from 2250 Å to 7000 Å. Three different sources of high resolution data were also used to cover this same spectral range. The low resolution data were then used to put each high resolution spectrum on an absolute scale. The three high resolution spectra were then combined in their overlap regions to produce a single, absolutely calibrated high resolution spectrum over the entire spectral range.
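    The splicing step described above, putting a relative high-resolution spectrum on an absolute scale using a calibrated low-resolution spectrum, can be sketched as follows; the Gaussian degradation and smooth ratio scaling are generic assumptions rather than the authors' exact procedure, and the synthetic spectra are placeholders:

```python
import numpy as np

def calibrate_high_res(wl_hi, flux_hi, wl_lo, flux_lo_abs, smooth_width):
    """Put a relative high-resolution spectrum on an absolute scale.

    Degrade the high-resolution spectrum to the low-resolution scale by
    Gaussian smoothing, then rescale it with the ratio to the absolutely
    calibrated low-resolution spectrum. Generic sketch only.
    """
    dx = np.median(np.diff(wl_hi))
    half = int(4 * smooth_width / dx)
    kx = np.arange(-half, half + 1) * dx
    kernel = np.exp(-0.5 * (kx / smooth_width) ** 2)
    kernel /= kernel.sum()
    # 'same' convolution leaves small edge artifacts, ignored in this sketch
    flux_hi_degraded = np.convolve(flux_hi, kernel, mode="same")

    # Interpolate the absolute low-resolution spectrum onto the high-resolution grid
    flux_lo_on_hi = np.interp(wl_hi, wl_lo, flux_lo_abs)

    # Slowly varying scale factor transfers the absolute calibration
    scale = flux_lo_on_hi / np.clip(flux_hi_degraded, 1e-12, None)
    return flux_hi * scale

# Example with synthetic spectra (wavelengths in angstroms, arbitrary flux units)
wl_hi = np.linspace(3000.0, 3100.0, 20000)
flux_hi = 1.0 + 0.3 * np.sin(wl_hi / 0.7)                 # relative high-resolution spectrum
wl_lo = np.linspace(3000.0, 3100.0, 50)
flux_lo_abs = np.full(wl_lo.size, 5.0)                    # absolutely calibrated, low resolution
flux_hi_abs = calibrate_high_res(wl_hi, flux_hi, wl_lo, flux_lo_abs, smooth_width=2.0)
```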

  7. High resolution time integration for Sn radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2008-01-01

    First-order, second-order, and high-resolution time discretization schemes are implemented and studied for the S_N equations. The high-resolution method achieves a rate of convergence better than first order, but also suppresses the artificial oscillations introduced by second-order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high-resolution schemes converged to the same solution as the first-order method, with better convergence rates. The high-resolution method is more accurate than the first-order method and matches or exceeds the second-order method. (authors)

  8. Evaluation of a High-Resolution Regional Reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.; Wahl, S.; Keller, J. D.; Bollmeyer, C.

    2014-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution of only 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers 6 years (2007-2012) and is currently being extended to 16 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small-scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.
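    One standard way to quantify such an added value for precipitation is a categorical skill score computed against independent station observations; the Equitable Threat Score sketch below uses synthetic data and is only a generic illustration of the kind of evaluation described, not the HErZ verification procedure:

```python
import numpy as np

def equitable_threat_score(model_precip, obs_precip, threshold=1.0):
    """Equitable Threat Score for precipitation exceeding a threshold (e.g. 1 mm/day).

    A standard categorical verification score, used here as a generic example of
    evaluating reanalysis precipitation against independent observations.
    """
    model_event = model_precip >= threshold
    obs_event = obs_precip >= threshold
    hits = np.sum(model_event & obs_event)
    misses = np.sum(~model_event & obs_event)
    false_alarms = np.sum(model_event & ~obs_event)
    n = model_precip.size
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return float((hits - hits_random) / denom) if denom > 0 else np.nan

# Example with synthetic daily series at a set of stations
rng = np.random.default_rng(2)
obs = rng.gamma(shape=0.4, scale=5.0, size=(365, 50))        # stand-in for station data
rea = obs * rng.lognormal(0.0, 0.3, size=obs.shape)          # stand-in for reanalysis output
print(equitable_threat_score(rea, obs, threshold=1.0))
```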

  9. The high-resolution regional reanalysis COSMO-REA6

    Science.gov (United States)

    Ohlwein, C.

    2016-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution of only 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small-scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  10. A High-resolution Reanalysis for the European CORDEX Region

    Science.gov (United States)

    Bentzien, Sabrina; Bollmeyer, Christoph; Crewell, Susanne; Friederichs, Petra; Hense, Andreas; Keller, Jan; Keune, Jessica; Kneifel, Stefan; Ohlwein, Christian; Pscheidt, Ieda; Redl, Stephanie; Steinke, Sandra

    2014-05-01

    Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution of only 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. The work presented here focuses on the regional reanalysis for Europe with a domain matching the CORDEX-EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km). The COSMO reanalysis system comprises the assimilation of observational data using the existing nudging scheme of COSMO and is complemented by a special soil moisture analysis and boundary conditions given by ERA-Interim data. The reanalysis data set currently covers 6 years (2007-2012). The evaluation of the reanalysis is done using independent observations with special emphasis on precipitation and high-impact weather situations. The development and evaluation of the COSMO-based reanalysis for the CORDEX-Euro domain can be seen as a preparation for joint European activities on the development of an ensemble system of regional reanalyses for Europe.

  11. A high-resolution regional reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.

    2015-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution of only 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small-scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  12. The implementation of sea ice model on a regional high-resolution scale

    Science.gov (United States)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data, and access to high-performance computing clusters has provided the capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one at low resolution and one at high resolution, were used to estimate the sensitivity of the model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. A visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity (SMOS) satellite agreed with the simulation for 2010-2011.
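    A minimal sketch of the concentration validation step, assuming simulated and satellite-derived ice concentration fields on a common grid with missing satellite cells flagged as NaN; the synthetic fields below are placeholders, not AMSR-E or OSI-SAF data:

```python
import numpy as np

def masked_correlation(sim, obs):
    """Pearson correlation between simulated and observed ice concentration,
    ignoring grid cells where either field is missing (NaN)."""
    valid = ~np.isnan(sim) & ~np.isnan(obs)
    return float(np.corrcoef(sim[valid], obs[valid])[0, 1])

# Example with synthetic concentration fields in [0, 1]
rng = np.random.default_rng(3)
obs = np.clip(rng.normal(0.6, 0.3, size=(200, 200)), 0.0, 1.0)
sim = np.clip(obs + rng.normal(0.0, 0.1, size=obs.shape), 0.0, 1.0)
obs[rng.random(obs.shape) < 0.05] = np.nan               # simulate satellite data gaps
print(masked_correlation(sim, obs))
```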

  13. High-resolution gravity model of Venus

    Science.gov (United States)

    Reasenberg, R. D.; Goldberg, Z. M.

    1992-01-01

    The anomalous gravity field of Venus shows high correlation with surface features revealed by radar. We extract gravity models from the Doppler tracking data from the Pioneer Venus Orbiter by means of a two-step process. In the first step, we solve the nonlinear spacecraft state estimation problem using a Kalman filter-smoother. The Kalman filter has been evaluated through simulations. This evaluation and some unusual features of the filter are discussed. In the second step, we perform a geophysical inversion using a linear Bayesian estimator. To allow an unbiased comparison between gravity and topography, we use a simulation technique to smooth and distort the radar topographic data so as to yield maps having the same characteristics as our gravity maps. The maps presented cover 2/3 of the surface of Venus and display the strong topography-gravity correlation previously reported. The topography-gravity scatter plots show two distinct trends.
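    For readers unfamiliar with the first step, a textbook linear Kalman filter is sketched below on a toy constant-velocity tracking problem; the spacecraft state estimation described in the record is nonlinear and uses a filter-smoother, which this sketch does not reproduce:

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Minimal linear Kalman filter (predict/update) over a sequence of measurements zs."""
    x, P = x0.copy(), P0.copy()
    states = []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)

# Example: constant-velocity state tracked from noisy 1-D position measurements
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 1e-4 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance
rng = np.random.default_rng(4)
truth = np.array([[t, 1.0] for t in np.arange(0.0, 50.0, dt)])
zs = truth[:, :1] + rng.normal(0.0, 0.5, size=(truth.shape[0], 1))
estimates = kalman_filter(zs, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
```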

  14. High Resolution Representation and Simulation of Braiding Patterns

    DEFF Research Database (Denmark)

    Zwierzycki, Mateusz; Vestartas, Petras; Heinrich, Mary Katherine

    2017-01-01

    a contemporary architectural context. Within the flora robotica project, complex braided structures are a core element of the architectural vision, driving a need for generalized braid design modeling tools that can support fabrication. Due to limited availability of existing suitable tools, this interest...

  15. Updated vegetation information in high resolution WRF simulations

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.

    2013-01-01

    modify the energy distribution at the land surface. In weather and climate models it is important to represent the vegetation variability accurately to obtain reliable results. The Weather Research and Forecasting (WRF) model uses green vegetation fraction (GVF) time series to represent vegetation...... seasonality. The GVF of each grid cell is additionally used to scale other parameters such as LAI, roughness, emissivity and albedo within predefined intervals. However, the default GVF used by WRF does not reflect recent climatic changes or changes in management practices since it was derived more than 20...
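    The scaling of surface parameters by GVF can be pictured as a simple interpolation between prescribed minimum and maximum values; the linear form and the numbers below are assumptions for illustration and may differ from the scheme actually used in WRF's land-surface parameterization:

```python
import numpy as np

def scale_by_gvf(gvf, value_min, value_max):
    """Interpolate a surface parameter (e.g. LAI, albedo, roughness) between its
    prescribed minimum and maximum as a linear function of green vegetation
    fraction. The linear form is an assumption made for this sketch."""
    gvf = np.clip(gvf, 0.0, 1.0)
    return value_min + gvf * (value_max - value_min)

# Example: monthly GVF climatology for one grid cell (values are illustrative)
gvf_monthly = np.array([0.2, 0.2, 0.3, 0.5, 0.7, 0.8, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2])
lai = scale_by_gvf(gvf_monthly, value_min=0.5, value_max=4.0)
albedo = scale_by_gvf(gvf_monthly, value_min=0.25, value_max=0.15)  # decreases with greenness
```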

  16. Medieval Cosmology

    Science.gov (United States)

    Grant, E.; Murdin, P.

    2000-11-01

    During the early Middle Ages (ca 500 to ca 1130) scholars with an interest in cosmology had little useful and dependable literature. They relied heavily on a partial Latin translation of PLATO's Timaeus by Chalcidius (4th century AD), and on a series of encyclopedic treatises associated with the names of Pliny the Elder (ca AD 23-79), Seneca (4 BC-AD 65), Macrobius (fl 5th century AD), Martianus ...

  17. Observational cosmology

    International Nuclear Information System (INIS)

    Partridge, R.B.

    1977-01-01

    Some sixty years after the development of relativistic cosmology by Einstein and his colleagues, observations are finally beginning to have an important impact on our views of the Universe. The available evidence seems to support one of the simplest cosmological models, the hot Big Bang model. The aim of this paper is to assess the observational support for certain assumptions underlying the hot Big Bang model. These are that the Universe is isotropic and homogeneous on a large scale; that it is expanding from an initial state of high density and temperature; and that the proper theory to describe the dynamics of the Universe is unmodified General Relativity. The properties of the cosmic microwave background radiation and recent observations of the abundance of light elements, in particular, support these assumptions. Also examined here are the data bearing on the related questions of the geometry and the future of the Universe (is it ever-expanding, or fated to recollapse). Finally, some difficulties and faults of the standard model are discussed, particularly various aspects of the 'initial condition' problem. It appears that the simplest Big Bang cosmological model calls for a highly specific set of initial conditions to produce the presently observed properties of the Universe. (Auth.)

  18. Scalable Algorithms for Large High-Resolution Terrain Data

    DEFF Research Database (Denmark)

    Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan

    2010-01-01

    In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...

  19. High-resolution X-ray diffraction studies of multilayers

    DEFF Research Database (Denmark)

    Christensen, Finn Erland; Hornstrup, Allan; Schnopper, H. W.

    1988-01-01

    High-resolution X-ray diffraction studies of the perfection of state-of-the-art multilayers are presented. Data were obtained using a triple-axis perfect-crystal X-ray diffractometer. Measurements reveal large-scale figure errors in the substrate. A high-resolution triple-axis setup is required...

  20. Achieving sensitive, high-resolution laser spectroscopy at CRIS

    Energy Technology Data Exchange (ETDEWEB)

    Groote, R. P. de [Instituut voor Kern- en Stralingsfysica, KU Leuven (Belgium); Lynch, K. M., E-mail: kara.marie.lynch@cern.ch [EP Department, CERN, ISOLDE (Switzerland); Wilkins, S. G. [The University of Manchester, School of Physics and Astronomy (United Kingdom); Collaboration: the CRIS collaboration

    2017-11-15

    The Collinear Resonance Ionization Spectroscopy (CRIS) experiment, located at the ISOLDE facility, has recently performed high-resolution laser spectroscopy, with linewidths down to 20 MHz. In this article, we present the modifications to the beam line and the newly-installed laser systems that have made sensitive, high-resolution measurements possible. Highlights of recent experimental campaigns are presented.

  1. High resolution UV spectroscopy and laser-focused nanofabrication

    NARCIS (Netherlands)

    Myszkiewicz, G.

    2005-01-01

    This thesis combines two at first glance different techniques: High Resolution Laser Induced Fluorescence Spectroscopy (LIF) of small aromatic molecules and Laser Focusing of atoms for Nanofabrication. The thesis starts with the introduction to the high resolution LIF technique of small aromatic

  2. Smoot Group Cosmology

    Science.gov (United States)

    Professor George Smoot's group conducts research on the early universe (cosmology) using the Cosmic Microwave Background (CMB) radiation.

  3. High resolution NMR spectroscopy of synthetic polymers in bulk

    International Nuclear Information System (INIS)

    Komorski, R.A.

    1986-01-01

    The contents of this book are: Overview of high-resolution NMR of solid polymers; High-resolution NMR of glassy amorphous polymers; Carbon-13 solid-state NMR of semicrystalline polymers; Conformational analysis of polymers by solid-state NMR; High-resolution NMR studies of oriented polymers; High-resolution solid-state NMR of protons in polymers; and Deuterium NMR of solid polymers. This work brings together the various approaches for high-resolution NMR studies of bulk polymers into one volume. Heavy emphasis is, of course, given to 13C NMR studies both above and below Tg. Standard high-power pulse and wide-line techniques are not covered.

  4. A high resolution global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc

    2014-05-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) do not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model so that it can equally be expanded to data poor environments. Based on our sensitivity analysis, in which we run the model with various hydrogeological parameter settings, we observed that most variance in groundwater
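    As a much-simplified analogue of such a groundwater model, the sketch below solves steady-state 2-D groundwater flow with uniform transmissivity, distributed recharge, and fixed-head boundary cells by Jacobi iteration; it is a toy finite-difference example under those stated assumptions, not the MODFLOW configuration described above:

```python
import numpy as np

def solve_heads(transmissivity, recharge, fixed_head, n_iter=20000):
    """Jacobi iteration for steady-state 2-D groundwater flow on a unit-spaced grid,
    T * laplacian(h) + R = 0, with fixed-head cells marked by non-NaN entries.

    A toy sketch with uniform transmissivity; real models use proper solvers,
    variable aquifer properties, and physically based boundary conditions.
    """
    solve = np.isnan(fixed_head)                  # cells whose head is unknown
    h = np.where(solve, 0.0, fixed_head)          # start from the fixed heads
    for _ in range(n_iter):
        neighbours = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                      np.roll(h, 1, 1) + np.roll(h, -1, 1))
        h_new = 0.25 * (neighbours + recharge / transmissivity)
        h = np.where(solve, h_new, h)             # keep fixed-head cells unchanged
    return h

# Example: 50 x 50 aquifer with fixed heads (0 m) on all edges and uniform recharge
n = 50
fixed = np.full((n, n), np.nan)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = 0.0   # surrounding surface water
heads = solve_heads(transmissivity=100.0,
                    recharge=np.full((n, n), 0.001),
                    fixed_head=fixed)
```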

  5. Modeling fire behavior on tropical islands with high-resolution weather data

    Science.gov (United States)

    John W. Benoit; Francis M. Fujioka; David R. Weise

    2009-01-01

    In this study, we consider fire behavior simulation in tropical island scenarios such as Hawaii and Puerto Rico. The development of a system to provide real-time fire behavior prediction in Hawaii is discussed. This involves obtaining fuels and topography information at a fine scale, as well as supplying daily high-resolution weather forecast data for the area of...

  6. High-resolution climate modelling of Antarctica and the Antarctic Peninsula

    NARCIS (Netherlands)

    van Wessem, J.M.|info:eu-repo/dai/nl/413533085

    2016-01-01

    In this thesis we have used a high-resolution regional atmospheric climate model (RACMO2.3) to simulate the present-day climate (1979-2014) of Antarctica and the Antarctic Peninsula. We have evaluated the model results with several observations, such as in situ surface energy balance (SEB)

  7. The SLUGGS survey: a comparison of total-mass profiles of early-type galaxies from observations and cosmological simulations, to ˜4 effective radii

    Science.gov (United States)

    Bellstedt, Sabine; Forbes, Duncan A.; Romanowsky, Aaron J.; Remus, Rhea-Silvia; Stevens, Adam R. H.; Brodie, Jean P.; Poci, Adriano; McDermid, Richard; Alabi, Adebusola; Chevalier, Leonie; Adams, Caitlin; Ferré-Mateu, Anna; Wasserman, Asher; Pandya, Viraj

    2018-06-01

    We apply the Jeans Anisotropic Multi-Gaussian Expansion dynamical modelling method to SAGES Legacy Unifying Globulars and GalaxieS (SLUGGS) survey data of early-type galaxies in the stellar mass range 10^10 physical processes shaping the mass distributions of galaxies in cosmological simulations are still incomplete. For galaxies with M* > 10^10.7 M⊙ in the Magneticum simulations, we identify a significant anticorrelation between total-mass density profile slopes and the fraction of stellar mass formed ex situ (i.e. accreted), whereas this anticorrelation is weaker for lower stellar masses, implying that the measured total-mass density slopes for low-mass galaxies are less likely to be determined by merger activity.
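    A total-mass density slope γ' of the kind discussed here can be illustrated by a straight-line fit in log-log space to a density profile; the sketch below uses a synthetic profile and is not the Jeans-modelling measurement performed in the paper:

```python
import numpy as np

def total_mass_density_slope(r, rho):
    """Best-fit power-law slope gamma' of a total-mass density profile,
    rho_tot proportional to r**(-gamma'), via a linear fit in log-log space."""
    slope, _ = np.polyfit(np.log10(r), np.log10(rho), 1)
    return -slope

# Example: noisy synthetic profile with a true slope of 2.1 out to 4 effective radii
rng = np.random.default_rng(5)
r = np.logspace(-1, np.log10(4.0), 30)                 # in units of effective radii
rho = r ** -2.1 * rng.lognormal(0.0, 0.05, size=r.size)
print(total_mass_density_slope(r, rho))                # approximately 2.1
```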

  8. Defect testing of large aperture optics based on high resolution CCD camera

    International Nuclear Information System (INIS)

    Cheng Xiaofeng; Xu Xu; Zhang Lin; He Qun; Yuan Xiaodong; Jiang Xiaodong; Zheng Wanguo

    2009-01-01

    A fast method for inspecting defects in large-aperture optics is introduced. With uniform illumination by an LED source at grazing incidence, the images of defects on the surface of and inside large-aperture optics are enlarged by scattering. The defect images were acquired with a high-resolution CCD camera and a microscope, and the approximate mathematical relation between the apparent (viewing) dimension and the real dimension of the defects was obtained by simulation. Thus the approximate real dimensions and locations of all defects can be calculated from the high-resolution images. (authors)
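    The calibration between apparent (scattering-enlarged) and real defect size can be illustrated with a simple fitted relation; the linear form and the calibration pairs below are hypothetical, whereas the paper derives its relation from simulation:

```python
import numpy as np

def fit_size_calibration(apparent_px, real_um):
    """Fit an approximate linear relation real = a * apparent + b between the
    apparent defect size measured on the CCD image (pixels) and the real
    defect size from microscopy (micrometres). Hypothetical calibration sketch."""
    a, b = np.polyfit(apparent_px, real_um, 1)
    return a, b

# Hypothetical calibration pairs (apparent size in pixels, microscope size in micrometres)
apparent = np.array([4.0, 6.0, 9.0, 14.0, 22.0])
real = np.array([10.0, 18.0, 30.0, 52.0, 85.0])
a, b = fit_size_calibration(apparent, real)
estimated_real = a * 12.0 + b          # estimate the real size of a newly detected defect
```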

  9. High resolution multi-scalar drought indices for Iberia

    Science.gov (United States)

    Russo, Ana; Gouveia, Célia; Trigo, Ricardo; Jerez, Sonia

    2014-05-01

    The Iberian Peninsula has been recurrently affected by drought episodes and their adverse effects (Gouveia et al., 2009), ranging from severe water shortages and losses of hydroelectric production to an increased risk of forest fires, forest decline, and the triggering of land degradation and desertification. Moreover, Iberia is one of the areas most sensitive to current and future climate change and is now considered a climate change hot spot, with a high probability of an increase in extreme events (Giorgi and Lionello, 2008). The spatial and temporal behavior of climatic droughts at different time scales was analyzed using spatially distributed time series of multi-scalar drought indicators, such as the Standardized Precipitation Evapotranspiration Index (SPEI) (Vicente-Serrano et al., 2010). This climatic drought index is based on the simultaneous use of precipitation and temperature fields, with the advantage of combining a multi-scalar character with the capacity to include the effects of temperature variability on drought assessment. Moreover, reanalysis data and the higher-resolution hindcast databases obtained from them are valuable surrogates for the sparse observations and are widely used for in-depth characterizations of the present-day climate. Accordingly, this work aims to enhance the knowledge of high-resolution drought patterns in the Iberian Peninsula, taking advantage of high-resolution (10 km) regional MM5 simulations of the recent past (1959-2007) over Iberia. It should be stressed that these high-resolution meteorological fields (e.g., temperature, precipitation) have been validated for various purposes (Jerez et al., 2013). A detailed characterization of droughts since the 1960s using the 10 km resolution hindcast simulation was performed with the aim of exploring the conditions favoring drought onset, duration, and ending, as well as the subsequent short-, medium-, and long-term impacts affecting the environment and the
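    A simplified stand-in for a multi-scalar index such as the SPEI can be computed from the monthly water balance P - PET, accumulated over a chosen time scale and standardized per calendar month; the z-score standardization below replaces the log-logistic distribution fit used in the operational SPEI, and the input series are synthetic:

```python
import numpy as np

def simple_drought_index(precip, pet, scale=3):
    """Simplified multi-scalar drought index from the monthly climatic water
    balance D = P - PET, accumulated over `scale` months and standardized per
    calendar month. A plain z-score is used instead of the SPEI's distribution fit."""
    d = precip - pet
    # Trailing sum over the chosen time scale (first scale-1 months undefined)
    d_acc = np.concatenate([np.full(scale - 1, np.nan),
                            np.convolve(d, np.ones(scale), mode="valid")])
    # Standardize each calendar month separately to remove the seasonal cycle
    index = np.full_like(d_acc, np.nan)
    months = np.arange(d.size) % 12
    for m in range(12):
        sel = (months == m) & ~np.isnan(d_acc)
        index[sel] = (d_acc[sel] - d_acc[sel].mean()) / d_acc[sel].std()
    return index

# Example with synthetic monthly precipitation and PET (mm) for 49 years
rng = np.random.default_rng(6)
n_months = 12 * 49
precip = rng.gamma(shape=2.0, scale=30.0, size=n_months)
pet = 60.0 + 40.0 * np.sin(2 * np.pi * np.arange(n_months) / 12) + rng.normal(0.0, 5.0, n_months)
drought_index = simple_drought_index(precip, pet, scale=3)
```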

  10. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    Science.gov (United States)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-