WorldWideScience

Sample records for high-resolution n-body simulations

  1. High Resolution N-Body Simulations of Terrestrial Planet Growth

    Science.gov (United States)

    Clark Wallace, Spencer; Quinn, Thomas R.

    2018-04-01

    We investigate planetesimal accretion with a direct N-body simulation of an annulus at 1 AU around a 1 M_sun star. The planetesimal ring, which initially contains N = 10^6 bodies, is evolved through the runaway growth stage into the phase of oligarchic growth. We find that the mass distribution of planetesimals develops a bump around 10^22 g shortly after the oligarchs form. This feature is absent in previous lower resolution studies. We find that this bump marks a boundary between growth modes. Below the bump mass, planetesimals are packed tightly enough together to populate first order mean motion resonances with the oligarchs. These resonances act to heat the tightly packed, low mass planetesimals, inhibiting their growth. We examine the eccentricity evolution of a dynamically hot planetary embryo embedded in an annulus of planetesimals and find that dynamical friction acts more strongly on the embryo when the planetesimals are finely resolved. This effect disappears when the annulus is made narrow enough to exclude most of the mean motion resonances. Additionally, we find that the 10^22 g bump is significantly less prominent when we follow planetesimal growth with a narrow annulus. This feature, which is reminiscent of the power law break seen in the size distribution of asteroid belt objects, may be an important clue for constraining the initial size of planetesimals in planet formation models.
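    The resonance argument above can be made concrete with a short sketch. The snippet below (illustrative Python, not from the paper) lists the nominal semi-major axes of the first-order (j+1):j interior mean motion resonances of an oligarch placed at 1 AU, using only Kepler's third law; the oligarch location and the range of j are assumptions chosen for illustration.

      # Illustrative sketch (not from the paper): locations of first-order
      # (j+1):j interior mean motion resonances of an oligarch at a_olig = 1 AU.
      # By Kepler's third law, a_res / a_olig = (P_res / P_olig)**(2/3),
      # and a first-order interior resonance has P_res / P_olig = j / (j + 1).

      a_olig = 1.0  # semi-major axis of the oligarch in AU (assumed)

      for j in range(1, 11):  # j range chosen only for illustration
          a_res = a_olig * (j / (j + 1)) ** (2.0 / 3.0)
          print(f"{j+1}:{j} resonance at a = {a_res:.3f} AU")

    Closely spaced planetesimals interior to the oligarch therefore always sit near one of these resonance locations, which is the packing effect the abstract describes.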

  2. GLOBAL HIGH-RESOLUTION N-BODY SIMULATION OF PLANET FORMATION. I. PLANETESIMAL-DRIVEN MIGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Kominami, J. D. [Earth-Life Science Institute, Tokyo Institute of Technology, Meguro-Ku, Tokyo (Japan); Daisaka, H. [Hitotsubashi University, Kunitachi-shi, Tokyo (Japan); Makino, J. [RIKEN Advanced Institute for Computational Science, Chuo-ku, Kobe, Hyogo (Japan); Fujimoto, M., E-mail: kominami@mail.jmlab.jp, E-mail: daisaka@phys.science.hit-u.ac.jp, E-mail: makino@mail.jmlab.jp, E-mail: fujimoto.masaki@jaxa.jp [Japan Aerospace Exploration Agency, Sagamihara-shi, Kanagawa (Japan)

    2016-03-01

    We investigated whether outward planetesimal-driven migration (PDM) takes place in simulations when the self-gravity of planetesimals is included. We performed N-body simulations of planetesimal disks with a large width (0.7–4 au) that spans the ice line. The simulations consisted of two stages. The first-stage simulations were carried out to follow the runaway growth phase, using planetesimals of initially equal mass. Runaway growth took place both at the inner edge of the disk and in the region just outside the ice line. This result was used for the initial setup of the second-stage simulations, in which the runaway bodies just outside the ice line were replaced by protoplanets of about the isolation mass. In the second-stage simulations, the outward migration of the protoplanet was followed by a stalling of the migration due to the increase of the random velocities of the planetesimals. Owing to this increase of random velocities, one of the PDM criteria derived by Minton and Levison was violated. In the current simulations, the effect of the gas disk is not considered. It is likely that the gas disk plays an important role in PDM, and we plan to study its effect in future papers.

  3. Cosmological N-body simulations including radiation perturbations

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Rampf, Cornelius; Tram, Thomas

    2017-01-01

    Cosmological N-body simulations are the standard tools to study the emergence of the observed large-scale structure of the Universe. Such simulations usually solve for the gravitational dynamics of matter within the Newtonian approximation, thus discarding general relativistic effects such as the ...

  4. Particle Number Dependence of the N-body Simulations of Moon Formation

    Science.gov (United States)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number of particles N limited to between 10^4 and 10^5. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to deal with a large number of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, in which one particle corresponds to a satellitesimal about 10 km in size. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). As a consequence of this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  5. FORMING CIRCUMBINARY PLANETS: N-BODY SIMULATIONS OF KEPLER-34

    International Nuclear Information System (INIS)

    Lines, S.; Leinhardt, Z. M.; Paardekooper, S.; Baruteau, C.; Thebault, P.

    2014-01-01

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary should be very strong on the primordial protoplanetary disk. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors to high energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution, inter-particle gravity enabled N-body simulations to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on the conclusions that we have made for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation location with the possible exception of Kepler-47(AB)c.

  6. FORMING CIRCUMBINARY PLANETS: N-BODY SIMULATIONS OF KEPLER-34

    Energy Technology Data Exchange (ETDEWEB)

    Lines, S.; Leinhardt, Z. M. [School of Physics, University of Bristol, H. H. Wills Physics Laboratory, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Paardekooper, S.; Baruteau, C. [DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Thebault, P., E-mail: stefan.lines@bristol.ac.uk [LESIA-Observatoire de Paris, UPMC Univ. Paris 06, Univ. Paris-Diderot, F-92195 Meudon Cedex (France)

    2014-02-10

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary should be very strong on the primordial protoplanetary disk. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors to high energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution, inter-particle gravity enabled N-body simulations to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on the conclusions that we have made for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation location with the possible exception of Kepler-47(AB)c.

  7. Forming Circumbinary Planets: N-body Simulations of Kepler-34

    Science.gov (United States)

    Lines, S.; Leinhardt, Z. M.; Paardekooper, S.; Baruteau, C.; Thebault, P.

    2014-02-01

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary should be very strong on the primordial protoplanetary disk. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors to high energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution, inter-particle gravity enabled N-body simulations to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on the conclusions that we have made for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation location with the possible exception of Kepler-47(AB)c.

  8. Post-Newtonian N-body simulations

    Science.gov (United States)

    Aarseth, Sverre J.

    2007-06-01

    We report on the first fully consistent conventional cluster simulation which includes terms up to the third-order post-Newtonian approximation. Numerical problems for treating extremely energetic binaries orbiting a single massive object are circumvented by employing the special `wheel-spoke' regularization method of Zare which has not been used in large-N simulations before. Idealized models containing N = 1 × 10^5 particles of mass 1 M_solar with a central black hole (BH) of 300 M_solar have been studied on GRAPE-type computers. An initial half-mass radius of r_h ≈ 0.1 pc is sufficiently small to yield examples of relativistic coalescence. This is achieved by significant binary shrinkage within a density cusp environment, followed by the generation of extremely high eccentricities which are induced by Kozai cycles and/or resonant relaxation. More realistic models with white dwarfs and 10 times larger half-mass radii also show evidence of general relativity effects before disruption. Experimentation with the post-Newtonian terms suggests that reducing the time-scales for activating the different orders progressively may be justified for obtaining qualitatively correct solutions without aiming for precise predictions of the final gravitational radiation wave form. The results obtained suggest that the standard loss-cone arguments underestimate the swallowing rate in globular clusters containing a central BH.
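    To see why the extreme eccentricities generated by Kozai cycles and resonant relaxation matter for relativistic coalescence, the following sketch (illustrative Python, not part of the cluster code described above) evaluates the standard Peters (1964) gravitational-wave coalescence-time estimate with an approximate eccentricity factor; the binary parameters are assumptions chosen to echo the masses quoted in the record.

      # Illustrative sketch (not from the paper): Peters (1964) estimate of the
      # gravitational-wave coalescence time for a compact binary, showing how the
      # extreme eccentricities produced by Kozai cycles shorten the merger time.
      G = 6.674e-11        # m^3 kg^-1 s^-2
      c = 2.998e8          # m/s
      Msun = 1.989e30      # kg
      AU = 1.496e11        # m

      def coalescence_time(a, e, m1, m2):
          """Approximate GW coalescence time in years for semi-major axis a (m),
          eccentricity e and masses m1, m2 (kg)."""
          beta = (64.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / c**5
          t_circ = a**4 / (4.0 * beta)                 # circular-orbit result
          return t_circ * (1.0 - e**2)**3.5 / 3.156e7  # rough eccentricity factor, in yr

      # Assumed example: a 1 Msun star bound to a 300 Msun black hole at 1 AU.
      for e in (0.0, 0.9, 0.999):
          print(e, coalescence_time(1.0 * AU, e, 1.0 * Msun, 300.0 * Msun), "yr")

    With these assumed numbers the circular orbit would not merge within a Hubble time, while an eccentricity near unity shortens the estimate by many orders of magnitude.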

  9. Cosmological N-body simulations with generic hot dark matter

    DEFF Research Database (Denmark)

    Brandbyge, Jacob; Hannestad, Steen

    2017-01-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses...

  10. Relativistic initial conditions for N-body simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fidler, Christian [Catholic University of Louvain—Center for Cosmology, Particle Physics and Phenomenology (CP3) 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve (Belgium); Tram, Thomas; Crittenden, Robert; Koyama, Kazuya; Wands, David [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Rampf, Cornelius, E-mail: christian.fidler@uclouvain.be, E-mail: thomas.tram@port.ac.uk, E-mail: rampf@thphys.uni-heidelberg.de, E-mail: robert.crittenden@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: david.wands@port.ac.uk [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D–69120 Heidelberg (Germany)

    2017-06-01

    Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple 'N-body gauge' for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable 'forwards approach' for such cases.
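    The back-scaling step itself is simple to illustrate. The sketch below (illustrative Python, not the authors' pipeline) rescales a z = 0 linear power spectrum amplitude to an initial redshift using the standard ΛCDM linear growth factor; the cosmological parameters and starting redshift are assumed.

      # Illustrative sketch (not from the paper): the "back-scaling" idea, i.e.
      # rescaling the z = 0 linear power spectrum to the starting redshift with
      # the ratio of linear growth factors.  Cosmological parameters are assumed.
      import math
      from scipy.integrate import quad

      Om, OL = 0.31, 0.69   # assumed flat LCDM parameters

      def E(a):                       # H(a) / H0
          return math.sqrt(Om / a**3 + OL)

      def growth(a):                  # unnormalised linear growth factor D(a)
          integral, _ = quad(lambda x: 1.0 / (x * E(x))**3, 1e-6, a)
          return 2.5 * Om * E(a) * integral

      def backscale(P0, z_ini):
          """Scale a z = 0 linear power spectrum value P0 to z_ini."""
          ratio = growth(1.0 / (1.0 + z_ini)) / growth(1.0)
          return P0 * ratio**2

      print(backscale(1.0, 49.0))     # growth suppression factor squared at z = 49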

  11. ZENO: N-body and SPH Simulation Codes

    Science.gov (United States)

    Barnes, Joshua E.

    2011-02-01

    The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: Structured data file utilities facilitate basic operations on binary data, including import/export of ZENO data to other systems.Snapshot generation routines create particle distributions with various properties. Systems with user-specified density profiles can be realized in collisionless or gaseous form; multiple spherical and disk components may be set up in mutual equilibrium.Snapshot manipulation routines permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle.Simulation codes include both pure N-body and combined N-body/SPH programs: Pure N-body codes are available in both uniprocessor and parallel versions.SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models. Snapshot analysis programs calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions.Visualization programs generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.

  12. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
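    For readers unfamiliar with SPPT, the following sketch (illustrative Python, not ECMWF code) shows the multiplicative idea it is built on: a parametrised tendency is multiplied by (1 + r), where r is a temporally correlated random pattern, here generated as a simple AR(1) process; the amplitude, decorrelation time, and time step are assumptions.

      # Illustrative sketch (not from the paper): the multiplicative idea behind
      # SPPT.  The parametrised tendency is multiplied by (1 + r), where r is a
      # temporally correlated (AR(1)) random pattern.  All parameters are assumed.
      import numpy as np

      rng = np.random.default_rng(0)
      sigma = 0.5          # standard deviation of the perturbation pattern (assumed)
      tau = 6.0            # decorrelation time in hours (assumed)
      dt = 0.25            # model time step in hours (assumed)
      phi = np.exp(-dt / tau)                   # AR(1) autocorrelation per step
      eps = sigma * np.sqrt(1.0 - phi**2)       # innovation amplitude

      r = 0.0
      for step in range(10):
          r = phi * r + eps * rng.standard_normal()
          tendency = 1.0                        # placeholder parametrised tendency
          perturbed = (1.0 + r) * tendency      # multiplicative SPPT-style perturbation
          print(step, round(r, 3), round(perturbed, 3))

    Coarse-grained high-resolution output of the kind described above can then be used to test whether the "error" in the tendency really scales with the tendency itself, which is the multiplicative assumption this sketch encodes.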

  13. Relativistic N-body simulations with massive neutrinos

    Science.gov (United States)

    Adamek, Julian; Durrer, Ruth; Kunz, Martin

    2017-11-01

    Some of the dark matter in the Universe is made up of massive neutrinos. Their impact on the formation of large scale structure can be used to determine their absolute mass scale from cosmology, but to this end accurate numerical simulations have to be developed. Due to their relativistic nature, neutrinos pose additional challenges when one tries to include them in N-body simulations that are traditionally based on Newtonian physics. Here we present the first numerical study of massive neutrinos that uses a fully relativistic approach. Our N-body code, gevolution, is based on a weak-field formulation of general relativity that naturally provides a self-consistent framework for relativistic particle species. This allows us to model neutrinos from first principles, without invoking any ad-hoc recipes. Our simulation suite comprises some of the largest neutrino simulations performed to date. We study the effect of massive neutrinos on the nonlinear power spectra and the halo mass function, focusing on the interesting mass range between 0.06 eV and 0.3 eV and including a case for an inverted mass hierarchy.
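    A rough sense of the mass range quoted above can be obtained from two standard back-of-the-envelope relations, Ω_ν h² ≈ Σm_ν / 93.14 eV and the linear-theory suppression ΔP/P ≈ −8 f_ν; the short sketch below (illustrative Python, not from the paper) evaluates them for assumed values of h and Ω_m.

      # Illustrative sketch (not from the paper): the standard rough estimates
      # relating the summed neutrino mass to its energy density and to the
      # small-scale linear power suppression, Delta P / P ~ -8 f_nu.
      h = 0.67                      # assumed reduced Hubble parameter
      Omega_m = 0.31                # assumed total matter density parameter

      for sum_mnu in (0.06, 0.15, 0.30):          # eV, the mass range quoted above
          Omega_nu = sum_mnu / (93.14 * h**2)     # standard relation
          f_nu = Omega_nu / Omega_m
          print(f"Sum m_nu = {sum_mnu} eV: f_nu = {f_nu:.4f}, "
                f"linear suppression ~ {-8 * f_nu:.1%}")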

  14. Validation of High-resolution Climate Simulations over Northern Europe.

    Science.gov (United States)

    Muna, R. A.

    2005-12-01

    Two AMIP2-type (Gates 1992) experiments have been performed with climate versions of the ARPEGE/IFS model for the North Atlantic, Northern Europe, and Norwegian regions to analyze the effect of increasing resolution on the simulated biases. The ECMWF reanalysis (ERA-15) has been used to validate the simulations. Each simulation is an integration over the period 1979 to 1996. The global simulations used observed monthly mean sea surface temperatures (SST) as the lower boundary condition. All aspects but the horizontal resolution are the same in the two simulations. The first simulation has a uniform horizontal resolution of T63L. The second has a variable resolution (T106Lc3) with the highest resolution in the Norwegian Sea. Both simulations have 31 vertical layers in the same locations. For each simulation the results were divided into two seasons: winter (DJF) and summer (JJA). The parameters investigated were mean sea level pressure, geopotential, and temperature at 850 hPa and 500 hPa. To find the causes of the temperature bias during summer, latent and sensible heat flux, total cloud cover, and total precipitation were analyzed. The high-resolution simulation exhibits a more or less realistic climate over the Nordic, Arctic, and European regions. The overall performance of the simulations shows improvements in generally all fields investigated with increasing resolution over the target area, both in winter (DJF) and summer (JJA).

  15. Cosmological N-body simulations with generic hot dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK–8000 Aarhus C (Denmark)

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  16. N-Body simulations of tidal encounters between stellar systems

    International Nuclear Information System (INIS)

    Rao, P.D.; Ramamani, N.; Alladin, S.M.

    1985-10-01

    N-body simulations have been performed to study the tidal effects of a primary stellar system on a secondary stellar system of density close to the Roche density. Two hyperbolic encounters, one parabolic encounter, and one elliptic encounter have been simulated. The changes in energy, angular momentum, mass distribution, and shape of the secondary system have been determined in each case. The inner region containing about 40% of the mass was found to be practically unchanged, and the mass exterior to the tidal radius was found to escape. The intermediate region showed tidal distension. The thickness of this region decreased as we went from the hyperbolic encounters to the elliptic encounter, keeping the distance of closest approach constant. The numerical results for the fractional change in energy have been compared with the predictions of the available analytic formulae, and the usefulness and limitations of the formulae have been discussed. (author)

  17. N-body simulations for coupled scalar-field cosmology

    International Nuclear Information System (INIS)

    Li Baojiu; Barrow, John D.

    2011-01-01

    We describe in detail the general methodology and numerical implementation of consistent N-body simulations for coupled-scalar-field models, including background cosmology and the generation of initial conditions (with the different couplings to different matter species taken into account). We perform fully consistent simulations for a class of coupled-scalar-field models with an inverse power-law potential and negative coupling constant, for which the chameleon mechanism does not work. We find that in such cosmological models the scalar-field potential plays a negligible role except in the background expansion, and the fifth force that is produced is proportional to gravity in magnitude, justifying the use of a rescaled gravitational constant G in some earlier N-body simulation works for similar models. We then study the effects of the scalar coupling on the nonlinear matter power spectra and compare with linear perturbation calculations to see the agreement and places where the nonlinear treatment deviates from the linear approximation. We also propose an algorithm to identify gravitationally virialized matter halos, trying to take account of the fact that the virialization itself is also modified by the scalar-field coupling. We use the algorithm to measure the mass function and study the properties of dark-matter halos. We find that the net effect of the scalar coupling helps produce more heavy halos in our simulation boxes and suppresses the inner (but not the outer) density profile of halos compared with the ΛCDM prediction, while the suppression weakens as the coupling between the scalar field and dark-matter particles increases in strength.

  18. Monte Carlo simulation for a new high-resolution elemental analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion

    1996-12-31

    Spectra generated by binary, ternary and multielement matrices when irradiated by a variable-energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy is just above the absorption edge associated with each element, because of the emission of characteristic X rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan, considering all the radiation emitted by the sample into a 2π solid angle and associating a single multichannel spectrometer channel with each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high-resolution spectroscopy methodology, for which a synchrotron would be an ideal source, owing to its high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  19. Monte Carlo simulation for a new high-resolution elemental analysis methodology

    International Nuclear Information System (INIS)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Spectra generated by binary, ternary and multielement matrices when irradiated by a variable-energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy is just above the absorption edge associated with each element, because of the emission of characteristic X rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan, considering all the radiation emitted by the sample into a 2π solid angle and associating a single multichannel spectrometer channel with each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high-resolution spectroscopy methodology, for which a synchrotron would be an ideal source, owing to its high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  20. Evaluation of clustering statistics with N-body simulations

    International Nuclear Information System (INIS)

    Quinn, T.R.

    1986-01-01

    Two series of N-body simulations are used to determine the effectiveness of various clustering statistics in revealing initial conditions from evolved models. All the simulations contained 16384 particles and were integrated with the PPPM code. One series is a family of models with power at only one wavelength. The family contains five models with the wavelength of the power separated by factors of √2. The second series is a family of all equal-power combinations of two wavelengths taken from the first series. The clustering statistics examined are the two-point correlation function, the multiplicity function, the nearest neighbor distribution, the void probability distribution, the distribution of counts in cells, and the peculiar velocity distribution. It is found that the covariance function, the nearest neighbor distribution, and the void probability distribution are relatively insensitive to the initial conditions. The distribution of counts in cells shows a little more sensitivity, but the multiplicity function is the best of the statistics considered for revealing the initial conditions.

  1. Numerical techniques for large cosmological N-body simulations

    International Nuclear Information System (INIS)

    Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

    1985-01-01

    We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle-mesh (PM) codes and P³M codes in which a higher resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r² particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P³M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important.
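    The particle-mesh idea described in this record can be summarised in a few lines. The sketch below (illustrative Python, not the original code) assigns particles to a grid, solves Poisson's equation with an FFT, and differences the potential to get a mesh force; the grid size, units, and nearest-grid-point assignment are assumptions chosen to keep the example short.

      # Illustrative sketch (not this paper's code): the core of a particle-mesh
      # (PM) force calculation -- assign particles to a grid, solve Poisson's
      # equation with an FFT, and difference the potential.
      import numpy as np

      N_grid = 32
      L = 1.0                                   # box size (arbitrary units, assumed)
      rng = np.random.default_rng(1)
      pos = rng.random((1000, 3)) * L           # particle positions

      # 1. Nearest-grid-point mass assignment.
      cells = np.floor(pos / L * N_grid).astype(int) % N_grid
      rho = np.zeros((N_grid, N_grid, N_grid))
      np.add.at(rho, (cells[:, 0], cells[:, 1], cells[:, 2]), 1.0)
      rho -= rho.mean()                         # density contrast

      # 2. Poisson solve in Fourier space: phi_k = -4*pi*G*rho_k / k^2 (G = 1).
      k = 2.0 * np.pi * np.fft.fftfreq(N_grid, d=L / N_grid)
      kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
      k2 = kx**2 + ky**2 + kz**2
      k2[0, 0, 0] = 1.0                         # avoid division by zero; zero mode dropped
      phi_k = -4.0 * np.pi * np.fft.fftn(rho) / k2
      phi_k[0, 0, 0] = 0.0
      phi = np.real(np.fft.ifftn(phi_k))

      # 3. Mesh force from a centred difference of the potential.
      dx = L / N_grid
      gx = -(np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
      print("rms mesh acceleration (x):", gx.std())

    A P³M code adds a direct-summation short-range correction to such a mesh force for particle pairs closer than a few grid cells, which is the higher-resolution force the abstract refers to.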

  2. N-body simulations of terrestrial planet formation under the influence of a hot Jupiter

    International Nuclear Information System (INIS)

    Ogihara, Masahiro; Kobayashi, Hiroshi; Inutsuka, Shu-ichiro

    2014-01-01

    We investigate the formation of multiple-planet systems in the presence of a hot Jupiter (HJ) using extended N-body simulations that are performed simultaneously with semianalytic calculations. Our primary aims are to describe the planet formation process starting from planetesimals using high-resolution simulations, and to examine the dependence of the architecture of planetary systems on input parameters (e.g., disk mass, disk viscosity). We observe that protoplanets that arise from oligarchic growth and undergo type I migration stop migrating when they join a chain of resonant planets outside the orbit of the HJ. The formation of a resonant chain is almost independent of our model parameters, and is thus a robust process. At the end of our simulations, several terrestrial planets remain at around 0.1 AU. The planets that form are not of equal mass; the largest planet accounts for more than 50% of the total mass in the close-in region, a result that is also only weakly dependent on the parameters. In previous work, we found a new physical mechanism of induced migration of the HJ, which we call crowding-out. If the HJ opens up a wide gap in the disk (e.g., owing to low disk viscosity), crowding-out becomes less efficient and the HJ remains. We also discuss angular momentum transfer between the planets and the disk.

  3. A New Signal Model for Axion Cavity Searches from N-body Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lentz, Erik W.; Rosenberg, Leslie J. [Physics Department, University of Washington, Seattle, WA 98195-1580 (United States); Quinn, Thomas R.; Tremmel, Michael J., E-mail: lentze@phys.washington.edu, E-mail: ljrosenberg@phys.washington.edu, E-mail: trq@astro.washington.edu, E-mail: mjt29@astro.washington.edu [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2017-08-20

    Signal estimates for direct axion dark matter (DM) searches have used the isothermal sphere halo model for the last several decades. While insightful, the isothermal model does not capture effects of a halo's infall history or the influence of baryonic matter, which has been shown to significantly influence a halo's inner structure. The high resolution of cavity axion detectors can make use of modern cosmological structure-formation simulations, which begin from realistic initial conditions, incorporate a wide range of baryonic physics, and are capable of resolving detailed structure. This work uses a state-of-the-art cosmological N-body plus smoothed-particle hydrodynamics simulation to develop an improved signal model for axion cavity searches. Signal shapes from a class of galaxies encompassing the Milky Way are found to depart significantly from the isothermal sphere. A new signal model for axion detectors is proposed, and projected sensitivity bounds on the Axion DM eXperiment (ADMX) data are presented.

  4. Effects of the initial conditions on cosmological $N$-body simulations

    OpenAIRE

    L'Huillier, Benjamin; Park, Changbom; Kim, Juhan

    2014-01-01

    Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the infl...

  5. HNBody: A Simulation Package for Hierarchical N-Body Systems

    Science.gov (United States)

    Rauch, Kevin P.

    2018-04-01

    HNBody (http://www.hnbody.org/) is an extensible software package for integrating the dynamics of N-body systems. Although general purpose, it incorporates several features and algorithms particularly well-suited to systems containing a hierarchy (wide dynamic range) of masses. HNBody version 1 focused heavily on symplectic integration of nearly-Keplerian systems. Here I describe the capabilities of the redesigned and expanded package version 2, which includes: symplectic integrators up to eighth order (both leap frog and Wisdom-Holman type methods), with symplectic corrector and close encounter support; variable-order, variable-timestep Bulirsch-Stoer and Störmer integrators; post-Newtonian and multipole physics options; advanced round-off control for improved long-term stability; multi-threading and SIMD vectorization enhancements; seamless availability of extended precision arithmetic for all calculations; and extremely flexible configuration and output. Tests of the physical correctness of the algorithms are presented using JPL Horizons ephemerides (https://ssd.jpl.nasa.gov/?horizons) and previously published results for reference. The features and performance of HNBody are also compared to several other freely available N-body codes, including MERCURY (Chambers), SWIFT (Levison & Duncan) and WHFAST (Rein & Tamayo).
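    As a minimal illustration of the symplectic-integrator family HNBody implements, the sketch below (illustrative Python, not HNBody itself) advances a two-body Kepler problem with a kick-drift-kick leapfrog step, the lowest-order member of that family; the units (G*M = 1) and step size are assumptions.

      # Illustrative sketch (not HNBody code): a kick-drift-kick leapfrog step,
      # the simplest symplectic integrator, applied to a two-body Kepler problem.
      import numpy as np

      def accel(r):
          return -r / np.linalg.norm(r)**3      # acceleration for G*M = 1 (assumed units)

      def leapfrog(r, v, dt, n_steps):
          for _ in range(n_steps):
              v = v + 0.5 * dt * accel(r)       # kick
              r = r + dt * v                    # drift
              v = v + 0.5 * dt * accel(r)       # kick
          return r, v

      r0 = np.array([1.0, 0.0, 0.0])
      v0 = np.array([0.0, 1.0, 0.0])            # circular orbit
      r, v = leapfrog(r0, v0, dt=0.01, n_steps=int(2 * np.pi / 0.01))
      print("after one orbit:", r)              # returns close to (1, 0, 0)

    Wisdom-Holman type methods replace the plain drift with an exact Kepler propagation about the central mass, which is what makes them efficient for the hierarchical systems the package targets.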

  6. Operational High Resolution Chemical Kinetics Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are critical to addressing urgent issues in both the developed and developing world. Ongoing demand for higher resolution...

  7. Effects of the Size of Cosmological N-body Simulations on Physical ...

    Indian Academy of Sciences (India)

    Apart from N-body simulations, an analytical prescription given by Press & ...

  8. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  9. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  10. SPECTRA OF STRONG MAGNETOHYDRODYNAMIC TURBULENCE FROM HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Beresnyak, Andrey

    2014-01-01

    Magnetohydrodynamic (MHD) turbulence is present in a variety of solar and astrophysical environments. Solar wind fluctuations with frequencies lower than 0.1 Hz are believed to be mostly governed by Alfvénic turbulence, with particle transport depending on the power spectrum and the anisotropy of such turbulence. Recently, conflicting spectral slopes for the inertial range of MHD turbulence have been reported by different groups. Spectral shapes from earlier simulations showed that MHD turbulence is less scale-local compared with hydrodynamic turbulence. This is why higher-resolution simulations and careful, rigorous numerical analysis are especially needed for the MHD case. In this Letter, we present two groups of simulations with resolution up to 4096³, which are numerically well-resolved and have been analyzed with an exact and well-tested method of scaling study. Our results from both simulation groups indicate that the asymptotic power spectral slope for all energy-related quantities, such as total energy and residual energy, is around –1.7, close to Kolmogorov's –5/3. This suggests that residual energy is a constant fraction of the total energy and that in the asymptotic regime of Alfvénic turbulence magnetic and kinetic spectra have the same scaling. The –1.5 slope for energy and the –2 slope for residual energy, which have been suggested earlier, are incompatible with our numerics.

  11. The AGORA High-resolution Galaxy Simulations Comparison Project

    OpenAIRE

    Kim Ji-hoon; Abel Tom; Agertz Oscar; Bryan Greg L.; Ceverino Daniel; Christensen Charlotte; Conroy Charlie; Dekel Avishai; Gnedin Nickolay Y.; Goldbaum Nathan J.; Guedes Javiera; Hahn Oliver; Hobbs Alexander; Hopkins Philip F.; Hummels Cameron B.

    2014-01-01

    The Astrophysical Journal Supplement Series 210.1 (2014): 14, reproduced by permission of the AAS. We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle o...

  12. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT

    International Nuclear Information System (INIS)

    Kim, Ji-hoon; Conroy, Charlie; Goldbaum, Nathan J.; Krumholz, Mark R.; Abel, Tom; Agertz, Oscar; Gnedin, Nickolay Y.; Kravtsov, Andrey V.; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte; Hummels, Cameron B.; Dekel, Avishai; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly

    2014-01-01

    We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of eight galaxies with halo masses M_vir ≅ 10^10, 10^11, 10^12, and 10^13 M_☉ at z = 0 and two different ('violent' and 'quiescent') assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback prescriptions will be tuned to provide a realistic interstellar and circumgalactic medium using a non-cosmological disk galaxy simulation. Cosmological runs will be systematically compared with each other using a common analysis toolkit and validated against observations to verify that the solutions are robust, i.e., that the astrophysical assumptions are responsible for any success, rather than artifacts of particular implementations. The goals of the AGORA project are, broadly speaking, to raise the realism and predictive power of galaxy simulations and the understanding of the feedback processes that regulate galaxy 'metabolism'. The initial conditions for the AGORA galaxies as well as simulation outputs at various epochs will be made publicly available to the community. The proof-of-concept dark-matter-only test of the formation of a galactic halo with a z = 0 mass of M ...

  13. The effect of early radiation in N-body simulations of cosmic structure formation

    DEFF Research Database (Denmark)

    Adamek, Julian; Brandbyge, Jacob; Fidler, Christian

    2017-01-01

    Newtonian N-body simulations have been employed successfully over the past decades for the simulation of the cosmological large-scale structure. Such simulations usually ignore radiation perturbations (photons and massless neutrinos) and the impact of general relativity (GR) beyond the background...

  14. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  15. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  16. Dark matter direct detection signals inferred from a cosmological N-body simulation with baryons

    International Nuclear Information System (INIS)

    Ling, F.-S.; Nezri, E.; Athanassoula, E.; Teyssier, R.

    2010-01-01

    We extract at redshift z = 0 a Milky Way sized object including gas, stars and dark matter (DM) from a recent, high-resolution cosmological N-body simulation with baryons. Its resolution is sufficient to witness the formation of a rotating disk and bulge at the center of the halo potential, therefore providing a realistic description of the birth and the evolution of galactic structures in the ΛCDM cosmology paradigm. The phase-space structure of the central galaxy reveals that, throughout a thick region, the dark halo is co-rotating on average with the stellar disk. At the Earth's location, the rotating component, sometimes called dark disk in the literature, is characterized by a minimum lag velocity v_lag ≅ 75 km/s, in which case it contributes to around 25% of the total DM local density, whose value is ρ_DM ≅ 0.37 GeV/cm³. The velocity distributions also show strong deviations from pure Gaussian and Maxwellian distributions, with a sharper drop of the high velocity tail. We give a detailed study of the impact of these features on the predictions for DM signals in direct detection experiments. In particular, the question of whether the modulation signal observed by DAMA is or is not excluded by limits set by other experiments (CDMS, XENON and CRESST...) is re-analyzed and compared to the case of a standard Maxwellian halo. We consider spin-independent interactions for both the elastic and the inelastic scattering scenarios. For the first time, we calculate the allowed regions for DAMA and the exclusion limits of other null experiments directly from the velocity distributions found in the simulation. We then compare these results with the predictions of various analytical distributions. We find that the compatibility between DAMA and the other experiments is improved. In the elastic scenario, the DAMA modulation signal is slightly enhanced in the so-called channeling region, as a result of several effects that include a departure from a Maxwellian ...
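    The way such a velocity distribution feeds into direct-detection predictions can be sketched with a Monte Carlo estimate of the mean inverse speed η(v_min). The code below (illustrative Python, not the analysis of the paper) compares a standard Maxwellian halo with a halo that includes a co-rotating, lagged component; all velocity parameters are assumptions loosely inspired by the numbers quoted above.

      # Illustrative sketch (not from the paper): the mean inverse speed eta(v_min)
      # that enters direct-detection rates, estimated by Monte Carlo for a standard
      # Maxwellian halo and for a Maxwellian plus a co-rotating "dark disk".
      import numpy as np

      rng = np.random.default_rng(2)
      v_earth = np.array([0.0, 230.0, 0.0])       # Earth velocity, km/s (assumed)

      def sample(sigma, v_lag, n):
          """Galactic-frame velocities for a component co-rotating with lag v_lag."""
          v = rng.normal(0.0, sigma, size=(n, 3))
          v[:, 1] += (230.0 - v_lag)               # rotation speed minus lag (assumed)
          return v

      def eta(v_gal, v_min):
          speeds = np.linalg.norm(v_gal - v_earth, axis=1)   # Earth-frame speeds
          kept = speeds[speeds > v_min]
          return np.mean(1.0 / kept) * len(kept) / len(speeds)

      halo = sample(sigma=160.0, v_lag=230.0, n=200_000)     # non-rotating halo
      disk = sample(sigma=50.0, v_lag=75.0, n=200_000)       # lagged dark disk
      mix = np.vstack([halo[:150_000], disk[:50_000]])       # ~25% dark disk, as quoted
      for v_min in (100.0, 300.0, 500.0):
          print(v_min, eta(halo, v_min), eta(mix, v_min))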

  17. Very high-resolution regional climate simulations over Scandinavia-present climate

    DEFF Research Database (Denmark)

    Christensen, Ole B.; Christensen, Jens H.; Machenhauer, Bennert

    1998-01-01

    ... realistically simulated. It is found in particular that in mountainous regions the high-resolution simulation shows improvements in the simulation of hydrologically relevant fields such as runoff and snow cover. Also, the distribution of precipitation on different intensity classes is most realistically simulated in the high-resolution simulation. It does, however, inherit certain large-scale systematic errors from the driving GCM. In many cases these errors increase with increasing resolution. Model verification of near-surface temperature and precipitation is made using a new gridded climatology based on a high-density station network for the Scandinavian countries compiled for the present study. The simulated runoff is compared with observed data from Sweden extracted from a Swedish climatological atlas. These runoff data indicate that the precipitation analyses are underestimating the true ...

  18. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data ...

  19. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large as or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4 km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, mean that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  20. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
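    The core resampling operation can be illustrated on a toy field. The sketch below (illustrative Python, not the authors' algorithm, which additionally models the nonlinear coupling to small scales) replaces the largest-scale Fourier modes of a periodic field with fresh Gaussian draws of the same amplitude; the grid size and cutoff are assumptions.

      # Illustrative sketch (not the authors' algorithm): the basic resampling
      # operation -- replace the large-scale (small-k) Fourier modes of a periodic
      # field with new Gaussian realisations of the same power, keeping the
      # small-scale modes fixed.
      import numpy as np

      rng = np.random.default_rng(3)
      N = 64
      delta = rng.normal(size=(N, N, N))          # stand-in for a simulated field

      d_k = np.fft.rfftn(delta)
      k = np.fft.fftfreq(N)
      kx, ky, kz = np.meshgrid(k, k, np.fft.rfftfreq(N), indexing="ij")
      kk = np.sqrt(kx**2 + ky**2 + kz**2)

      k_cut = 4.0 / N                             # resample only the largest scales (assumed)
      large = (kk > 0) & (kk < k_cut)

      # New Gaussian phases and amplitudes with the same mode amplitude |d_k|.
      power = np.abs(d_k[large])
      new_modes = power * (rng.normal(size=power.shape) +
                           1j * rng.normal(size=power.shape)) / np.sqrt(2.0)
      d_k_new = d_k.copy()
      d_k_new[large] = new_modes

      delta_new = np.fft.irfftn(d_k_new, s=delta.shape)
      print("field rms before/after:", delta.std(), delta_new.std())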

  1. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short simulations. Such hindcast tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in ...

  2. The Matter Bispectrum in N-body Simulations with non-Gaussian Initial Conditions

    OpenAIRE

    Sefusatti, Emiliano; Crocce, Martin; Desjacques, Vincent

    2010-01-01

    We present measurements of the dark matter bispectrum in N-body simulations with non-Gaussian initial conditions of the local kind for a large variety of triangular configurations and compare them with predictions from Eulerian perturbation theory up to one-loop corrections. We find that the effects of primordial non-Gaussianity at large scales, when compared to perturbation theory, are well described by the initial component of the matter bispectrum, linearly extrapolated at the redshift of ...

  3. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    Science.gov (United States)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

    Flooding is a significant threat that causes considerable disruption to societies worldwide. Ongoing climate change further increases flooding risk, which is an ever more substantial menace to societies and their economies. Improvements in the spatial resolution and accuracy of topography and land-use data due to remote sensing techniques enable integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) takes place. High-resolution satellite images are elaborated. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of Land Use/Land Cover maps. In addition, soil maps are updated by means of radar satellite images. These high-resolution data are used in a novel way to simulate and validate several historical flood events in Mediterranean watersheds that have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resources Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data accordingly provides detailed, high-precision validation results. Furthermore, meteorological forecasting data, combined with the simulation model results, support the development of an integrated flood forecasting and early warning system tool capable of confronting or even preventing this imminent risk. The research reported in this paper was fully supported by the
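
    For reference, the NRCS curve-number step mentioned above reduces to two algebraic relations converting an event rainfall depth into direct runoff. The sketch below uses the metric form with the conventional initial-abstraction ratio of 0.2; these are textbook defaults, not necessarily the configuration used in the study.

    ```python
    def scs_runoff_mm(rainfall_mm: float, curve_number: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff depth (mm) from the NRCS/SCS curve-number method.

        S  = 25400 / CN - 254          potential maximum retention (mm)
        Ia = ia_ratio * S              initial abstraction (0.2 * S by convention)
        Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
        """
        s = 25400.0 / curve_number - 254.0
        ia = ia_ratio * s
        if rainfall_mm <= ia:
            return 0.0
        return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

    # Example: a 60 mm storm on a catchment with CN = 85 yields roughly 27 mm of runoff.
    print(round(scs_runoff_mm(60.0, 85.0), 1))
    ```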

  4. High Resolution Numerical Simulations of Primary Atomization in Diesel Sprays with Single Component Reference Fuels

    Science.gov (United States)

    2015-09-01

    A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel fuel ... for diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios.

  5. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations

    Science.gov (United States)

    Rino, C. L.; Carrano, C. S.; Yokoyama, T.

    2017-12-01

    In a recent paper, under review, equatorial-plasma-bubble (EPB) simulations were used to conduct a comparative analysis of the EPB spectral characteristics with high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high and low-density regions. The average C/NOFS structure in highly disturbed regions showed two-component inverse-power-law spectral characteristics nearly identical to those of the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path integration increases the one-dimensional spectral index by one. This relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total-electron-content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently
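
    As a point of reference, a generic two-component inverse-power-law spectrum of the kind referred to above can be written as two power-law branches joined continuously at a break wavenumber; the parameter names below are illustrative and do not correspond to the IPE diagnostic's internal parameterization.

    ```python
    import numpy as np

    def two_component_power_law(q, c1, p1, p2, q_break):
        """Generic two-component inverse power-law spectrum (valid for q > 0).

        Below the break wavenumber the spectrum falls as q**(-p1); above it,
        as q**(-p2). The second branch is scaled so the two pieces join
        continuously at q_break.
        """
        q = np.asarray(q, dtype=float)
        c2 = c1 * q_break ** (p2 - p1)      # continuity at the break
        return np.where(q < q_break, c1 * q ** (-p1), c2 * q ** (-p2))
    ```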

  6. Studying Tidal Effects In Planetary Systems With Posidonius. A N-Body Simulator Written In Rust.

    Science.gov (United States)

    Blanco-Cuaresma, Sergi; Bolmont, Emeline

    2017-10-01

    Planetary systems with several planets in compact orbital configurations, such as TRAPPIST-1, are surely affected by tidal effects. Studying these effects provides important insight into the evolution of such systems. We developed a second-generation N-body code based on the tidal model used in Mercury-T, re-implementing and improving its functionality using Rust as the programming language (including a Python interface for easy use) and the WHFAST integrator. The new open-source code ensures memory safety and reproducibility of numerical N-body experiments, improves the spin integration compared to Mercury-T, and allows a new prescription for the dissipation of tidal inertial waves in the convective envelope of stars to be taken into account. Posidonius is also suitable for binary system simulations with evolving stars.

  7. On the evolution of galaxy clustering and cosmological N-body simulations

    International Nuclear Information System (INIS)

    Fall, S.M.

    1978-01-01

    Some aspects of the problem of simulating the evolution of galaxy clustering by N-body computer experiments are discussed. The results of four 1000-body experiments are presented and interpreted on the basis of simple scaling arguments for the gravitational condensation of bound aggregates. They indicate that the internal dynamics of condensed aggregates are negligible in determining the form of the pair-correlation function xi. On small scales the form of xi is determined by discreteness effects in the initial N-body distribution and is not sensitive to this distribution. The experiments discussed here test the simple scaling arguments effectively for only one value of the cosmological density parameter (Ω = 1) and one form of the initial fluctuation spectrum (n = 0). (author)

  8. N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

    Science.gov (United States)

    Londrillo, Pasquale; Nipoti, Carlo

    2011-02-01

    N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.
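
    For context, the nonlinear field equation that such a MOND potential solver must handle is the standard Bekenstein–Milgrom formulation (the choice of interpolation function μ is model dependent):

    ```latex
    \nabla \cdot \left[ \mu\!\left(\frac{\lvert \nabla \Phi \rvert}{a_0}\right) \nabla \Phi \right] = 4 \pi G \rho,
    \qquad
    \mu(x) \simeq
    \begin{cases}
    1, & x \gg 1 \ \text{(Newtonian regime)}\\
    x, & x \ll 1 \ \text{(deep-MOND regime)}
    \end{cases}
    ```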

  9. The simulation of a data acquisition system for a proposed high resolution PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Rotolo, C.; Larwill, M.; Chappa, S. [Fermi National Accelerator Lab., Batavia, IL (United States); Ordonez, C. [Chicago Univ., IL (United States)

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs.
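
    A minimal example of the kind of stochastic modeling described above — Poisson-distributed event arrivals passed through a circuit with a fixed dead time — is sketched below. The singles rate, the dead-time value, and the non-paralyzable behaviour are illustrative assumptions, not the architecture analysed in the paper.

    ```python
    import numpy as np

    def simulate_dead_time_losses(rate_hz, dead_time_s, duration_s, seed=0):
        """Count how many Poisson-arriving events survive a non-paralyzable dead time."""
        rng = np.random.default_rng(seed)

        # Poisson arrivals: exponential inter-arrival times with mean 1/rate.
        n_expected = int(rate_hz * duration_s * 1.5) + 100
        arrivals = np.cumsum(rng.exponential(1.0 / rate_hz, size=n_expected))
        arrivals = arrivals[arrivals < duration_s]

        accepted = 0
        busy_until = -np.inf
        for t in arrivals:
            if t >= busy_until:          # circuit is idle: accept the event and go busy
                accepted += 1
                busy_until = t + dead_time_s
        return len(arrivals), accepted

    # Example: 1 MHz singles rate with 500 ns dead time over 10 ms.
    total, kept = simulate_dead_time_losses(1e6, 500e-9, 10e-3)
    print(total, kept, kept / total)   # accepted fraction ~ 1 / (1 + rate * dead_time) ~ 0.67
    ```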

  10. The simulation of a data acquisition system for a proposed high resolution PET scanner

    International Nuclear Information System (INIS)

    Rotolo, C.; Larwill, M.; Chappa, S.; Ordonez, C.

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs.

  11. Speeding up N -body simulations of modified gravity: chameleon screening models

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Barreira, Alexandre [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Hellwing, Wojciech A.; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Zhao, Gong-Bo, E-mail: sownak.bose@durham.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: barreira@mpa-garching.mpg.de, E-mail: jianhua.he@durham.ac.uk, E-mail: wojciech.hellwing@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: claudio.llinares@durham.ac.uk, E-mail: gbzhao@nao.cas.cn [National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012 (China)

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
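
    For orientation, the standard Newton–Gauss–Seidel iteration referred to above can be illustrated on a simple nonlinear 1D model equation; the sketch shows only the generic relaxation pattern, not the variable-redefined f(R) equation or the multigrid machinery of the actual simulation codes.

    ```python
    import numpy as np

    def newton_gauss_seidel_1d(f, dfdu, rhs, u, h, sweeps=2000):
        """Relax u'' - f(u) = rhs on a 1D grid with fixed boundary values.

        At each interior node the discretized residual
            R_i = (u[i-1] - 2 u[i] + u[i+1]) / h^2 - f(u[i]) - rhs[i]
        is driven toward zero with one Newton step per visit, sweeping left to
        right (Gauss-Seidel: updated neighbours are used immediately).
        """
        for _ in range(sweeps):
            for i in range(1, len(u) - 1):
                residual = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2 - f(u[i]) - rhs[i]
                slope = -2.0 / h**2 - dfdu(u[i])        # dR_i / du_i
                u[i] -= residual / slope                # Newton update
        return u

    # Example: solve u'' = exp(u) on [0, 1] with u(0) = u(1) = 0.
    n = 33
    x = np.linspace(0.0, 1.0, n)
    u = newton_gauss_seidel_1d(np.exp, np.exp, rhs=np.zeros(n), u=np.zeros(n), h=x[1] - x[0])
    print(u[n // 2])   # central value of the relaxed solution
    ```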

  12. Speeding up N-body simulations of modified gravity: chameleon screening models

    Science.gov (United States)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  13. Speeding up N -body simulations of modified gravity: chameleon screening models

    International Nuclear Information System (INIS)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio; Barreira, Alexandre; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo

    2017-01-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  14. High resolution real time capable combustion chamber simulation; Zeitlich hochaufloesende echtzeitfaehige Brennraumsimulation

    Energy Technology Data Exchange (ETDEWEB)

    Piewek, J. [Volkswagen AG, Wolfsburg (Germany)

    2008-07-01

    The article describes a zero-dimensional model for real-time-capable combustion chamber pressure calculation with analogue pressure sensor output. Closed-loop operation of an Engine Control Unit is demonstrated on a hardware-in-the-loop simulator (HiL simulator) for a 4-cylinder common-rail diesel engine. The presentation of the model focuses on the simulation of the load variation, which does not depend on the injection system, and thus on the simulated heat release rate. Particular attention is paid to the simulation and the resulting test possibilities regarding fully variable valve gear. It is shown that black-box models contained in the HiL mean value model for the aspirated gas mass, the exhaust gas temperature after the outlet valve and the mean indicated pressure can be replaced by calculations from the high-resolution combustion chamber model. (orig.)
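
    A textbook single-zone illustration of what such a zero-dimensional chamber model computes — cylinder pressure from a prescribed Wiebe-type heat-release curve and a simplified volume function — is sketched below. The parameter values, the constant ratio of specific heats, and the neglect of heat losses and gas exchange are simplifying assumptions and do not represent the model described in the article.

    ```python
    import numpy as np

    def cylinder_pressure(theta_deg, p_ivc_bar=1.2, gamma=1.33, q_total_j=900.0,
                          vd_m3=0.5e-3, cr=16.5, soc_deg=-5.0, dur_deg=50.0, a=5.0, m=2.0):
        """Single-zone cylinder pressure from a Wiebe heat-release law (no heat losses).

        dp/dtheta = (gamma - 1) / V * dQ/dtheta - gamma * p / V * dV/dtheta
        """
        theta = np.radians(theta_deg)
        vc = vd_m3 / (cr - 1.0)                              # clearance volume
        # Simplified volume curve (crank-slider without the connecting-rod term).
        vol = vc + 0.5 * vd_m3 * (1.0 - np.cos(theta))
        dvol = np.gradient(vol, theta)

        # Wiebe burn fraction and heat-release rate.
        x = np.clip((theta_deg - soc_deg) / dur_deg, 0.0, 1.0)
        burn = 1.0 - np.exp(-a * x ** (m + 1.0))
        dq = q_total_j * np.gradient(burn, theta)

        # Integrate the pressure ODE with explicit Euler over the crank-angle grid.
        p = np.empty_like(theta)
        p[0] = p_ivc_bar * 1e5
        for i in range(len(theta) - 1):
            dp = (gamma - 1.0) / vol[i] * dq[i] - gamma * p[i] / vol[i] * dvol[i]
            p[i + 1] = p[i] + dp * (theta[i + 1] - theta[i])
        return p / 1e5   # bar

    theta_deg = np.linspace(-180.0, 180.0, 721)
    print(cylinder_pressure(theta_deg).max())   # peak cylinder pressure in bar
    ```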

  15. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

    Directory of Open Access Journals (Sweden)

    J. Tao

    2012-09-01

    Due to their all-weather data acquisition capabilities, high resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications like change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. An application of the concept is presented for the city centre of Munich, where the comparison of the simulation to the TerraSAR-X data shows good similarity. Based on the result of simulation and matching, special features (e.g. double-bounce lines, shadow areas) can be automatically indicated in the SAR image.

  16. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
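
    The all-pairs force evaluation that such GPU implementations accelerate is conceptually simple; the vectorized CPU sketch below (Plummer softening, G = 1 units) is included purely to illustrate the O(N^2) kernel being offloaded and is not the CUDA implementation of the paper.

    ```python
    import numpy as np

    def direct_accelerations(pos, mass, softening=1e-2):
        """All-pairs gravitational accelerations with Plummer softening (G = 1).

        pos  : (N, 3) particle positions
        mass : (N,) particle masses
        """
        dx = pos[None, :, :] - pos[:, None, :]               # pairwise separations r_j - r_i
        r2 = np.sum(dx * dx, axis=-1) + softening**2         # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                        # no self-force
        return np.einsum('ijk,ij,j->ik', dx, inv_r3, mass)   # a_i = sum_j m_j (r_j - r_i) / |r|^3

    # Example: accelerations for 1000 randomly placed unit-mass particles.
    rng = np.random.default_rng(1)
    acc = direct_accelerations(rng.standard_normal((1000, 3)), np.ones(1000))
    print(acc.shape)
    ```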

  17. MODELING AND SIMULATION OF HIGH RESOLUTION OPTICAL REMOTE SENSING SATELLITE GEOMETRIC CHAIN

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2018-04-01

    High resolution satellites with longer focal lengths and larger apertures have been widely used in recent years for georeferencing the observed scene. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform including attitude and position information, the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for the geolocation accuracy is put forward by introducing a new index, the angle between the camera and the star tracker. The model is rigorously validated by simulating the geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.

  18. Statistics of Deep Convection in the Congo Basin Derived From High-Resolution Simulations.

    Science.gov (United States)

    White, B.; Stier, P.; Kipling, Z.; Gryspeerdt, E.; Taylor, S.

    2016-12-01

    Convection transports moisture, momentum, heat and aerosols through the troposphere, and so the temporal variability of convection is a major driver of global weather and climate. The Congo basin is home to some of the most intense convective activity on the planet and is under strong seasonal influence of biomass burning aerosol. However, deep convection in the Congo basin remains understudied compared to other regions of tropical storm systems, especially when compared to the neighbouring, relatively well-understood West African climate system. We use the WRF model to perform a high-resolution, cloud-system resolving simulation to investigate convective storm systems in the Congo. Our setup pushes the boundaries of current computational resources, using a 1 km grid length over a domain covering millions of square kilometres and for a time period of one month. This allows us to draw statistical conclusions on the nature of the simulated storm systems. Comparing data from satellite observations and the model enables us to quantify the diurnal variability of deep convection in the Congo basin. This approach allows us to evaluate our simulations despite the lack of in-situ observational data. This provides a more comprehensive analysis of the diurnal cycle than has previously been shown. Further, we show that high-resolution convection-permitting simulations performed over near-seasonal timescales can be used in conjunction with satellite observations as an effective tool to evaluate new convection parameterisations.

  19. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray-tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope beamline U7A of National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for the transmission X-ray microscope (TXM) with optical elements and may also be useful for the alignment of optical components in other modes of microscopy.
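
    The paper combines ray tracing with Rayleigh–Sommerfeld diffraction; as an illustration of the wave-optical propagation step involved, the snippet below implements the closely related angular-spectrum method for free-space propagation of a sampled complex field. The function and parameter names are illustrative, not those of the authors' algorithm.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
        """Propagate a sampled complex field through free space by `distance`.

        Uses the angular-spectrum method: multiply the field's 2D Fourier
        transform by the free-space transfer function exp(i * kz * z) and
        invert. Evanescent components (kx^2 + ky^2 > k^2) are suppressed.
        """
        ny, nx = field.shape
        k = 2.0 * np.pi / wavelength
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
        kz2 = k**2 - kx[None, :]**2 - ky[:, None]**2
        kz = np.sqrt(np.maximum(kz2, 0.0))
        transfer = np.where(kz2 > 0.0, np.exp(1j * kz * distance), 0.0)
        return np.fft.ifft2(np.fft.fft2(field) * transfer)
    ```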

  20. Evaluating Galactic Habitability Using High Resolution Cosmological Simulations of Galaxy Formation

    OpenAIRE

    Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam

    2015-01-01

    D. F. acknowledges support from STFC consolidated grant ST/J001422/1, and the ‘ECOGAL’ ERC Advanced Grant. P. D. acknowledges the support of the Addison Wheeler Fellowship awarded by the Institute of Advanced Study at Durham University. N. I. L. is supported by the Deutsche Forschungs Gemeinschaft (DFG). We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which...

  1. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping

    2013-12-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water resource related issues and also emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered from a number of flash floods over the last decade, following short-intense rainfall events. The research presented here focuses on examining four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution inner domain (1-km) around Jeddah. A range of different convective closure and microphysics parameterization, together with high-resolution (4-km) sea surface temperature data are employed. Through examining comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanism producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  2. Can High-resolution WRF Simulations Be Used for Short-term Forecasting of Lightning?

    Science.gov (United States)

    Goodman, S. J.; Lapenta, W.; McCaul, E. W., Jr.; LaCasse, K.; Petersen, W.

    2006-01-01

    A number of research teams have begun to make quasi-operational forecast simulations at high resolution with models such as the Weather Research and Forecast (WRF) model. These model runs have used horizontal meshes of 2-4 km grid spacing, and thus resolved convective storms explicitly. In the light of recent global satellite-based observational studies that reveal robust relationships between total lightning flash rates and integrated amounts of precipitation-size ice hydrometeors in storms, it is natural to inquire about the capabilities of these convection-resolving models in representing the ice hydrometeor fields faithfully. If they do, this might make operational short-term forecasts of lightning activity feasible. We examine high-resolution WRF simulations from several Southeastern cases for which either NLDN or LMA lightning data were available. All the WRF runs use a standard microphysics package that depicts only three ice species, cloud ice, snow and graupel. The realism of the WRF simulations is examined by comparisons with both lightning and radar observations and with additional even higher-resolution cloud-resolving model runs. Preliminary findings are encouraging in that they suggest that WRF often makes convective storms of the proper size in approximately the right location, but they also indicate that higher resolution and better hydrometeor microphysics would be helpful in improving the realism of the updraft strengths, reflectivity and ice hydrometeor fields.

  3. High-resolution simulation and forecasting of Jeddah floods using WRF version 3.5

    KAUST Repository

    Deng, Liping; McCabe, Matthew; Stenchikov, Georgiy L.; Evans, Jason; Kucera, Paul

    2013-01-01

    Modeling flash flood events in arid environments is a difficult but important task that has impacts on both water resource related issues and also emergency management and response. The challenge is often related to adequately describing the precursor intense rainfall events that cause these flood responses, as they are generally poorly simulated and forecast. Jeddah, the second largest city in the Kingdom of Saudi Arabia, has suffered from a number of flash floods over the last decade, following short-intense rainfall events. The research presented here focuses on examining four historic Jeddah flash floods (Nov. 25-26 2009, Dec. 29-30 2010, Jan. 14-15 2011 and Jan. 25-26 2011) and investigates the feasibility of using numerical weather prediction models to achieve a more realistic simulation of these flood-producing rainfall events. The Weather Research and Forecasting (WRF) model (version 3.5) is used to simulate precipitation and meteorological conditions via a high-resolution inner domain (1-km) around Jeddah. A range of different convective closure and microphysics parameterization, together with high-resolution (4-km) sea surface temperature data are employed. Through examining comparisons between the WRF model output and in-situ, radar and satellite data, the characteristics and mechanism producing the extreme rainfall events are discussed and the capacity of the WRF model to accurately forecast these rainstorms is evaluated.

  4. An Accelerating Solution for N-Body MOND Simulation with FPGA-SoC

    Directory of Open Access Journals (Sweden)

    Bo Peng

    2016-01-01

    As a modified-gravity proposal to handle the dark matter problem on galactic scales, Modified Newtonian Dynamics (MOND) has shown great success. However, N-body MOND simulation is quite challenging in its computational complexity, which calls for acceleration of the simulation calculation. In this paper, we present a highly integrated accelerating solution for N-body MOND simulations. By using the FPGA-SoC, which integrates both an FPGA and an SoC (system on chip) in one chip, our solution offers potential for better performance, higher integration, and lower power consumption. To handle the calculation bottleneck of potential summation, on one hand we develop a strategy to simplify the pipeline, in which the squaring task is carried out by the DSP48E1 slices of Xilinx 7-series FPGAs so as to reduce the logic resource utilization of each pipeline; on the other hand, advantages of the particle-mesh scheme are exploited to overcome the bandwidth bottleneck. Our experimental results show that 2 more pipelines can be integrated in a Zynq-7020 FPGA-SoC with the simplified pipeline, and the bandwidth requirement is reduced significantly. Furthermore, our accelerating solution has a full range of advantages over different processors. Compared with a GPU, our work is about 10 times better in performance per watt and 50% better in performance per cost.
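
    A minimal Newtonian particle-mesh potential solve — nearest-grid-point mass assignment followed by an FFT-based Poisson inversion on a periodic grid — is sketched below to illustrate the particle-mesh scheme referred to above; the MOND nonlinearity and the FPGA pipelining are, of course, not represented.

    ```python
    import numpy as np

    def particle_mesh_potential(positions, masses, n_grid, box_size, G=1.0):
        """Gravitational potential on a periodic grid from point particles.

        1. Deposit particle masses onto the grid (nearest grid point).
        2. Solve the Poisson equation  lap(phi) = 4 pi G rho  with FFTs:
           phi_k = -4 pi G rho_k / k^2.
        """
        cell = box_size / n_grid
        idx = np.floor(positions / cell).astype(int) % n_grid

        rho = np.zeros((n_grid,) * 3)
        np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), masses)
        rho /= cell**3

        k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=cell)
        k2 = k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2
        k2[0, 0, 0] = 1.0                       # avoid division by zero for the mean mode

        phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
        phi_k[0, 0, 0] = 0.0                    # zero mode carries no force; discard it
        return np.real(np.fft.ifftn(phi_k))
    ```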

  5. Quantification of discreteness effects in cosmological N-body simulations: Initial conditions

    International Nuclear Information System (INIS)

    Joyce, M.; Marcos, B.

    2007-01-01

    The relation between the results of cosmological N-body simulations, and the continuum theoretical models they simulate, is currently not understood in a way which allows a quantification of N dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004).] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e. theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological type power spectra, one obtains as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological type spectra the theoretical mass variance in spheres and two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude (or, equivalently, the initial redshift of the cosmological simulation) goes to zero, at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular, with respect to the definition of the continuum limit of N-body simulations

  6. Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations

    Science.gov (United States)

    Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.

    2018-05-01

    We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are more accurate than ~5% for haloes with masses of 10^12.0-10^14.5 h^-1 M_⊙ for radii 0.05 ... baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.

  7. The Abacus Cosmos: A Suite of Cosmological N-body Simulations

    Science.gov (United States)

    Garrison, Lehman H.; Eisenstein, Daniel J.; Ferrer, Douglas; Tinker, Jeremy L.; Pinto, Philip A.; Weinberg, David H.

    2018-06-01

    We present a public data release of halo catalogs from a suite of 125 cosmological N-body simulations from the ABACUS project. The simulations span 40 wCDM cosmologies centered on the Planck 2015 cosmology at two mass resolutions, 4 × 10^10 h^-1 M_⊙ and 1 × 10^10 h^-1 M_⊙, in 1.1 h^-1 Gpc and 720 h^-1 Mpc boxes, respectively. The boxes are phase-matched to suppress sample variance and isolate cosmology dependence. Additional volume is available via 16 boxes of fixed cosmology and varied phase; a few boxes of single-parameter excursions from Planck 2015 are also provided. Catalogs spanning z = 1.5 to 0.1 are available for friends-of-friends and ROCKSTAR halo finders and include particle subsamples. All data products are available at https://lgarrison.github.io/AbacusCosmos.

  8. Verification of high resolution simulation of precipitation and wind in Portugal

    Science.gov (United States)

    Menezes, Isilda; Pereira, Mário; Moreira, Demerval; Carvalheiro, Luís; Bugalho, Lourdes; Corte-Real, João

    2017-04-01

    Demand for energy and freshwater continues to grow as the global population increases. Precipitation feeds the freshwater ecosystems that provide a wealth of goods and services for society, and sustains the river flow needed by native species and natural ecosystem functions. The adoption of wind and hydro-electric power supplies can sustain energy demands and services without restricting economic growth under accelerated-policy scenarios. However, the international meteorological observation network is not sufficiently dense to directly support high resolution climatic research. In this sense, coupled global and regional atmospheric models constitute the most appropriate physical and numerical tool for weather forecasting and downscaling onto high resolution grids, with the capacity to solve problems resulting from the lack of observed data and from measurement errors. Thus, this study aims to calibrate and validate the WRF regional model for the simulation of precipitation and wind fields on a high-spatial-resolution grid covering Portugal. The simulations were performed with two-way nesting on three grids of increasing resolution (60 km, 20 km and 5 km), and the model performance was assessed for summer and winter months (January and July), using input variables from two different reanalysis and forecast databases (ERA-Interim and NCEP-FNL) and different forcing schemes. The verification procedure included: (i) the use of several statistical error estimators, correlation-based measures and relative error descriptors; and (ii) an observed dataset composed of time series of hourly precipitation, wind speed and direction provided by the Portuguese meteorological institute for a comprehensive set of weather stations. The main results suggest the good ability of WRF to: (i) reproduce the spatial patterns of the mean and total observed fields; (ii) with relatively small values of bias and other errors; and (iii) with good temporal correlation. These findings are in good

  9. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    Science.gov (United States)

    Li, Dan; Bou-Zeid, Elie

    2014-05-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduce the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not alter the performance of WRF in reproducing bulk boundary layer temperature profiles significantly. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).

  10. Kinetic Energy from Supernova Feedback in High-resolution Galaxy Simulations

    Science.gov (United States)

    Simpson, Christine M.; Bryan, Greg L.; Hummels, Cameron; Ostriker, Jeremiah P.

    2015-08-01

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells’ mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (~10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10^9 M_⊙ dwarf halo. We find that in high-density media (≳50 cm^-3) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.

  11. Simulation study for high resolution alpha particle spectrometry with mesh type collimator

    International Nuclear Information System (INIS)

    Park, Seunghoon; Kwak, Sungwoo; Kang, Hanbyeol; Shin, Jungki; Park, Iljin

    2014-01-01

    Alpha particle spectrometry with a mesh type collimator plays a crucial role in identifying specific radionuclides in a radioactive source collected from the atmosphere or environment. Without collimation the energy resolution is degraded, because particles emitted at high angles have a longer path to travel in the air, so collisions along the path increase the background. The collimator can cut out particles travelling at high angles, and as a result an energy distribution with high resolution can be obtained. Therefore, the mesh type collimator is simulated for high resolution alpha particle spectrometry. In conclusion, the collimator can improve resolution: by cutting out particles emitted at high angles, the low-energy tail and the broadening of the energy distribution are reduced. The mesh diameter is found to be an important factor controlling resolution and counting efficiency. Therefore, a target nuclide, for example ²³⁵U, can be distinguished by a detector with a collimator in a mixture of various nuclides, for example ²³²U, ²³⁸U, and ²³²Th.

  12. Air quality high resolution simulations of Italian urban areas with WRF-CHIMERE

    Science.gov (United States)

    Falasca, Serena; Curci, Gabriele

    2017-04-01

    The new European Directive on ambient air quality and cleaner air for Europe (2008/50/EC) encourages the use of modeling techniques to support the observations in the assessment and forecasting of air quality. The modelling system based on the combination of the WRF meteorological model and the CHIMERE chemistry-transport model is used to perform simulations at high resolution over the main Italian cities (e.g. Milan, Rome). Three domains covering Europe, Italy and the urban areas are nested with a decreasing grid size up to 1 km. Numerical results are produced for a winter month and a summer month of the year 2010 and are validated using ground-based observations (e.g. from the European air quality database AirBase). A sensitivity study is performed using different physics options, domain resolution and grid ratio; different urban parameterization schemes are tested using also characteristic morphology parameters for the cities considered. A spatial reallocation of anthropogenic emissions derived from international (e.g. EMEP, TNO, HTAP) and national (e.g. CTN-ACE) emissions inventories and based on the land cover datasets (Global Land Cover Facility and GlobCover) and the OpenStreetMap tool is also included. Preliminary results indicate that the introduction of the spatial redistribution at high-resolution allows a more realistic reproduction of the distribution of the emission flows and thus the concentrations of the pollutants, with significant advantages especially for the urban environments.

  13. Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities

    Science.gov (United States)

    Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.

    2015-12-01

    In this rapidly urbanizing world, the unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. Primary limitations of past attempts have been the availability of input data, of useful "energy and behavior focused" models, of validation data, and of adequate computational capability that allows understanding of the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap road network and have integrated high resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScan™ population, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. The consistent data and simulation platform allows quick adaptation to various geographic areas, which has been demonstrated for multiple cities across the world. We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation

  14. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-hoon [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Agertz, Oscar [Department of Physics, University of Surrey, Guildford, Surrey, GU2 7XH (United Kingdom); Teyssier, Romain; Feldmann, Robert [Centre for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Zurich, 8057 (Switzerland); Butler, Michael J. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, D-69120 Heidelberg (Germany); Choi, Jun-Hwan [Department of Astronomy, University of Texas, Austin, TX 78712 (United States); Keller, Ben W. [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada); Lupi, Alessandro [Institut d’Astrophysique de Paris, Sorbonne Universites, UPMC Univ Paris 6 et CNRS, F-75014 Paris (France); Quinn, Thomas; Wallace, Spencer [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Revaz, Yves [Institute of Physics, Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leitner, Samuel N. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Shen, Sijing [Kavli Institute for Cosmology, University of Cambridge, Cambridge, CB3 0HA (United Kingdom); Smith, Britton D., E-mail: me@jihoonkim.org [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Collaboration: AGORA Collaboration; and others

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  15. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. Visualization of the simulation is portrayed as well. At redshifts z = 0−7 halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 − 10^14 M_⊙/h. [176021 'Visible and invisible matter in nearby galaxies: theory and observations']

  16. Simulation of the oxidation pathway on Si(100) using high-resolution EELS

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Conor [Consiglio Nazionale delle Ricerche, Istituto di Struttura della Materia (CNR-ISM), Rome (Italy); Dipartimento di Fisica, Universita di Roma ' ' Tor Vergata' ' , Roma (Italy); European Theoretical Spectroscopy Facility (ETSF), Roma (Italy); Caramella, Lucia; Onida, Giovanni [Dipartimento di Fisica, Universita degli Studi di Milano (Italy); European Theoretical Spectroscopy Facility (ETSF), Milano (Italy)

    2012-06-15

    We compute high-resolution electron energy loss spectra (HREELS) of possible structural motifs that form during the dynamic oxidation process on Si(100), including the important metastable precursor silanone and an adjacent-dimer bridge (ADB) structure that may seed oxide formation. Spectroscopic fingerprints of single site, silanone, and ''seed'' structures are identified and related to changes in the surface bandstructure of the clean surface. Incorporation of oxygen into the silicon lattice through adsorption and dissociation of water is also examined. Results are compared to available HREELS spectra and surface optical data, which are closely related. Our simulations confirm that HREELS offers complementary evidence to surface optical spectroscopy, and show that its high sensitivity allows it to distinguish between energetically and structurally similar oxidation models.

  17. Updated vegetation information in high resolution regional climate simulations using WRF

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.

    Climate studies show that the frequency of heat wave events and above-average high temperatures during the summer months over Europe will increase in the coming decades. Such climatic changes and long-term meteorological conditions will impact the seasonal development of vegetation and ultimately...... modify the energy distribution at the land surface. In weather and climate models it is important to represent the vegetation variability accurately to obtain reliable results. The weather research and forecasting (WRF) model uses a green vegetation fraction (GVF) climatology to represent the seasonal...... or changes in management practice since it is derived more than twenty years ago. In this study, a new high resolution, high quality GVF product is applied in a WRF climate simulation over Denmark during the 2006 heat wave year. The new GVF product reflects the year 2006 and it was previously tested...

  18. The shape of the invisible halo: N-body simulations on parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Warren, M.S.; Zurek, W.H. (Los Alamos National Lab., NM (USA)); Quinn, P.J. (Australian National Univ., Canberra (Australia). Mount Stromlo and Siding Spring Observatories); Salmon, J.K. (California Inst. of Tech., Pasadena, CA (USA))

    1990-01-01

    We study the shapes of halos and the relationship to their angular momentum content by means of N-body (N ≈ 10^6) simulations. Results indicate that in relaxed halos with no apparent substructure: (i) the shape and orientation of the isodensity contours tends to persist throughout the virialised portion of the halo; (ii) most (~70%) of the halos are prolate; (iii) the approximate direction of the angular momentum vector tends to persist throughout the halo; (iv) for spherical shells centered on the core of the halo the magnitude of the specific angular momentum is approximately proportional to their radius; (v) the shortest axis of the ellipsoid which approximates the shape of the halo tends to align with the rotation axis of the halo. This tendency is strongest in the fastest rotating halos. 13 refs., 4 figs.

  19. High resolution simulations of orographic flow over a complex terrain on the Southeast coast of Brazil

    Science.gov (United States)

    Chou, S. C.; Zolino, M. M.; Gomes, J. L.; Bustamante, J. F.; Lima-e-Silva, P. P.

    2012-04-01

    The Eta Model has been used operationally by CPTEC to produce weather forecasts over South America since 1997, and has gone through several upgrades. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The Eta Model was configured with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by the Eta-15km, which in turn is driven by ERA-Interim reanalyses. The model domain includes the two Brazilian cities of Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain and coastline. Mountains rise up to about 700 m. The region suffers frequent floods and landslides. The objective of this work is to evaluate high resolution simulations of wind and temperature in this complex area. Verification of the model runs uses observations taken from the nuclear power plant. Accurate near-surface wind direction and magnitude are needed for the plant emergency plan, and winds are highly sensitive to model spatial resolution and atmospheric stability. Verification of two cases during summer shows that the model has a clear diurnal cycle signal for wind in that region. The area is characterized by weak winds, which makes the simulation more difficult. The simulated wind magnitude is about 1.5 m/s, which is close to the observed value of about 2 m/s; however, the observed change of wind direction with the sea breeze is fast, whereas it is slow in the simulations. Nighttime katabatic flow is captured by the simulations. Comparison against Eta-5km runs shows that the valley circulation is better described in the 2-km resolution run. Simulated temperatures closely follow the observed diurnal cycle. Experiments improving some surface conditions, such as the surface temperature and land cover, show reduced simulation error and an improved diurnal cycle.

  20. Quantifying uncertainty in Transcranial Magnetic Stimulation - A high resolution simulation study in ICBM space.

    Science.gov (United States)

    Toschi, Nicola; Keck, Martin E; Welt, Tobias; Guerrisi, Maria

    2012-01-01

    Transcranial Magnetic Stimulation offers enormous potential for noninvasive brain stimulation. While it is known that brain tissue significantly "reshapes" induced field and charge distributions, most modeling investigations to date have focused on single-subject data with limited generality. Further, the effects of the significant uncertainties which exist in the simulation (i.e. brain conductivity distributions) and stimulation (e.g. coil positioning and orientations) setup have not been quantified. In this study, we construct a high-resolution anisotropic head model in standard ICBM space, which can be used as a population-representative standard for bioelectromagnetic simulations. Further, we employ Monte-Carlo simulations in order to quantify how uncertainties in conductivity values propagate all the way to induced fields and currents, demonstrating significant, regionally dependent dispersions in values which are commonly assumed "ground truth". This framework can be leveraged in order to quantify the effect of any type of uncertainty in noninvasive brain stimulation and bears relevance in all applications of TMS, both investigative and therapeutic.
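
    The Monte-Carlo propagation of conductivity uncertainty amounts to repeatedly sampling tissue conductivities from assumed distributions, running the forward field computation, and summarizing the spread of the results. The loop below sketches that pattern with a placeholder `forward_model` standing in for the (much heavier) electromagnetic solve; the prior means and standard deviations shown are illustrative, not the values used in the study.

    ```python
    import numpy as np

    def propagate_conductivity_uncertainty(forward_model, conductivity_priors, n_samples=1000, seed=0):
        """Monte-Carlo propagation of tissue-conductivity uncertainty.

        forward_model       : callable mapping a dict of conductivities (S/m) to a
                              scalar or array quantity of interest (a placeholder
                              here for the full field computation).
        conductivity_priors : dict tissue -> (mean, standard deviation) in S/m.
        """
        rng = np.random.default_rng(seed)
        results = []
        for _ in range(n_samples):
            sample = {tissue: rng.normal(mu, sigma)
                      for tissue, (mu, sigma) in conductivity_priors.items()}
            results.append(forward_model(sample))
        results = np.asarray(results)
        return results.mean(axis=0), results.std(axis=0)

    # Toy usage with a stand-in forward model (NOT a real field solver).
    priors = {"grey_matter": (0.33, 0.05), "white_matter": (0.14, 0.03)}
    mean, spread = propagate_conductivity_uncertainty(
        lambda c: c["grey_matter"] + 0.5 * c["white_matter"], priors)
    print(mean, spread)
    ```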

  1. Experimental Investigation and High Resolution Simulation of In-Situ Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen; Tony Kovscek

    2008-04-30

    This final technical report describes work performed for the project 'Experimental Investigation and High Resolution Numerical Simulator of In-Situ Combustion Processes', DE-FC26-03NT15405. In summary, this work improved our understanding of in-situ combustion (ISC) process physics and oil recovery. This understanding was translated into improved conceptual models and a suite of software algorithms that extended predictive capabilities. We pursued experimental, theoretical, and numerical tasks during the performance period. The specific project objectives were (i) experimental identification of chemical additives/injectants that improve combustion performance and delineation of the physics of the improved performance, (ii) establishment of a benchmark one-dimensional experimental data set for verification of in-situ combustion dynamics computed by simulators, (iii) development of improved numerical methods that describe in-situ combustion more accurately, and (iv) laying the underpinnings of a highly efficient, 3D, in-situ combustion simulator using adaptive mesh refinement techniques and parallelization. We believe that project goals were met and exceeded, as discussed.

  2. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    Science.gov (United States)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D capsule-only ignition simulation with over one billion zones, run on 262,000 processors, which resolves surface perturbations through mode l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation, using 262,000 processors, which resolves surface perturbations on the ignition capsule through mode l = 70. These aim to be the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  3. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    International Nuclear Information System (INIS)

    Li, Dan; Bou-Zeid, Elie

    2014-01-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)

  4. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    Science.gov (United States)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

    Over the past few decades, there have been many process-based crop models developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale, climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional-scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and
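
    As a rough illustration of the Python pipeline described above, daily RCM output for one grid cell can be converted into an APSIM-style weather file. The .met header layout, variable names and sample data below are all assumptions made for this sketch; the actual pipeline in the study links full RCM output fields to APSIM and is not reproduced here.

      import numpy as np
      import pandas as pd

      # Hypothetical daily RCM output for one grid cell (units assumed):
      # radn in MJ/m^2/day, tmax/tmin in deg C, rain in mm.
      dates = pd.date_range("2000-01-01", "2000-12-31", freq="D")
      rng = np.random.default_rng(1)
      doy = dates.dayofyear.to_numpy()
      tmax = 25 + 8 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, doy.size)
      tmin = tmax - rng.uniform(8, 14, doy.size)
      rain = rng.gamma(0.3, 6.0, doy.size).round(1)
      radn = 18 + 6 * np.sin(2 * np.pi * doy / 365)

      def write_apsim_weather(path, dates, radn, tmax, tmin, rain, lat=36.5):
          # Write an APSIM-style .met weather file (header layout assumed, not verified).
          df = pd.DataFrame({"year": dates.year, "day": dates.dayofyear,
                             "radn": radn.round(1), "maxt": tmax.round(1),
                             "mint": tmin.round(1), "rain": rain})
          with open(path, "w") as f:
              f.write("[weather.met.weather]\n")
              f.write(f"latitude = {lat} (DECIMAL DEGREES)\n")
              f.write("year day radn maxt mint rain\n")
              f.write("()   ()  (MJ/m^2) (oC) (oC) (mm)\n")
              df.to_string(f, index=False, header=False)

      write_apsim_weather("cell_042.met", dates, radn, tmax, tmin, rain)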

  5. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gu Songxiang; Kyprianou, Iacovos [Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD (United States); Gupta, Rajiv, E-mail: songxiang.gu@fda.hhs.gov, E-mail: rgupta1@partners.org, E-mail: iacovos.kyprianou@fda.hhs.gov [Massachusetts General Hospital, Boston, MA (United States)

    2011-09-21

    Cardiovascular disease in general, and coronary artery disease (CAD) in particular, is the leading cause of death worldwide. These conditions are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  6. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    International Nuclear Information System (INIS)

    Gu Songxiang; Kyprianou, Iacovos; Gupta, Rajiv

    2011-01-01

    Cardiovascular disease in general, and coronary artery disease (CAD) in particular, is the leading cause of death worldwide. These conditions are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  7. Use of High-Resolution WRF Simulations to Forecast Lightning Threat

    Science.gov (United States)

    McCaul, E. W., Jr.; LaCasse, K.; Goodman, S. J.; Cecil, D. J.

    2008-01-01

    Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors aloft in storms. This relationship is exploited, in conjunction with the capabilities of cloud-resolving forecast models such as WRF, to forecast explicitly the threat of lightning from convective storms using selected output fields from the model forecasts. The simulated vertical flux of graupel at -15 °C and the shape of the simulated reflectivity profile are tested in this study as proxies for charge separation processes and their associated lightning risk. Our lightning forecast method differs from others in that it is entirely based on high-resolution simulation output, without reliance on any climatological data. Short (6-8 h) simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity fields, and METAR and ACARS data yields satisfactory simulations. Analyses of the lightning threat fields suggest that both the graupel flux and reflectivity profile approaches, when properly calibrated, can yield reasonable lightning threat forecasts, although an ensemble approach is probably desirable in order to reduce the tendency for misplacement of modeled storms to hurt the accuracy of the forecasts. Our lightning threat forecasts are also compared to other, more traditional means of forecasting thunderstorms, such as those based on inspection of the convective available potential energy field.
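
    The graupel-flux proxy mentioned above lends itself to a compact post-processing calculation. The sketch below assumes hypothetical 3D model-output arrays (vertical velocity, graupel mixing ratio, air density, temperature) and an uncalibrated scale factor, so it shows the structure of the proxy rather than the calibrated forecast fields of the study.

      import numpy as np

      def graupel_flux_threat(w, q_graupel, rho, temp_c, scale=1.0):
          # Lightning-threat proxy from the vertical graupel flux near -15 deg C.
          # All inputs have shape (nz, ny, nx); scale is an assumed calibration
          # constant that would have to be fitted against lightning-mapping data.
          k15 = np.abs(temp_c + 15.0).argmin(axis=0)          # level nearest -15 C
          jj, ii = np.meshgrid(np.arange(w.shape[1]), np.arange(w.shape[2]), indexing="ij")
          flux = rho[k15, jj, ii] * w[k15, jj, ii] * q_graupel[k15, jj, ii]
          return scale * np.maximum(flux, 0.0)                # arbitrary units

      # Tiny synthetic example
      nz, ny, nx = 20, 4, 4
      rng = np.random.default_rng(0)
      temp = np.linspace(20, -60, nz)[:, None, None] * np.ones((nz, ny, nx))
      threat = graupel_flux_threat(rng.uniform(0, 10, (nz, ny, nx)),
                                   rng.uniform(0, 5e-3, (nz, ny, nx)),
                                   np.full((nz, ny, nx), 0.6), temp)
      print(threat.round(3))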

  8. High-resolution simulations of galaxy formation in a cold dark matter scenario

    International Nuclear Information System (INIS)

    Kates, R.E.; Klypin, A.A.

    1990-01-01

    We present the results of our numerical simulations of galaxy clustering in a two-dimensional model. Our simulations allowed better resolution than could be obtained in three-dimensional simulations. We used a spectrum of initial perturbations corresponding to a cold dark matter (CDM) model and followed the history of each particle by modelling the shocking and subsequent cooling of matter. We took into account cooling processes in a hot plasma with primeval cosmic abundances of H and He as well as Compton cooling. (However, the influence of these processes on the trajectories of ordinary matter particles was not simulated in the present code.) As a result of the high resolution, we were able to observe a network of chains on all scales down to the limits of resolution. This network extends out from dense clusters and superclusters and penetrates into voids (with decreasing density). In addition to the dark matter network structure, a definite prediction of our simulations is the existence of a connected filamentary structure consisting of hot gas with a temperature of 10^6 K and extending over 100-150 Mpc. (Throughout this paper, we assume the Hubble constant H_0 = 50 km/sec/Mpc.) These structures trace high-density filaments of the dark matter distribution and should be searched for in soft X-ray observations. In contrast to common assumptions, we found that peaks of the linearized density distribution were not reliable tracers of the eventual galaxy distribution. We were also able to demonstrate that the influence of small-scale fluctuations on the structure at larger scales is always small, even at the late nonlinear stage. (orig.)

  9. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
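
    A common ingredient of the hybrid and artificial-diffusivity approaches compared above is a sensor that separates dilatation-dominated (shock) regions from vorticity-dominated turbulence. The sketch below implements a generic Ducros-type sensor on a 2D periodic grid; it illustrates the idea rather than the specific sensors evaluated in the paper.

      import numpy as np

      def ducros_sensor(u, v, dx, dy, eps=1e-30):
          # Ducros-type shock sensor on a 2D periodic grid: values near 1 in
          # dilatation-dominated (shock) regions, near 0 where vorticity dominates.
          dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
          dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dy)
          dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dy)
          dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
          div = dudx + dvdy
          vort = dvdx - dudy
          return div**2 / (div**2 + vort**2 + eps)

      # Example: a weak vortex plus a sharp compression front in u
      n = 64
      x, y = np.meshgrid(np.linspace(0, 2 * np.pi, n, endpoint=False),
                         np.linspace(0, 2 * np.pi, n, endpoint=False))
      u = np.sin(x) * np.cos(y) - 0.5 * np.tanh((x - np.pi) * 10)
      v = -np.cos(x) * np.sin(y)
      sensor = ducros_sensor(u, v, x[0, 1] - x[0, 0], y[1, 0] - y[0, 0])
      print(sensor.max().round(3), sensor.mean().round(3))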

  10. The simulation of medicanes in a high-resolution regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Cavicchia, Leone [Centro Euro-Mediterraneo per i Cambiamenti Climatici, Bologna (Italy); Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); Ca' Foscari University, Venice (Italy); Storch, Hans von [Helmholtz-Zentrum Geesthacht, Institute of Coastal Research, Geesthacht (Germany); University of Hamburg, Meteorological Institute, Hamburg (Germany)

    2012-11-15

    Medicanes, strong mesoscale cyclones with tropical-like features, develop occasionally over the Mediterranean Sea. Due to the scarcity of observations over the sea and the coarse resolution of the long-term reanalysis datasets, it is difficult to study systematically the multidecadal statistics of sub-synoptic medicanes. Our goal is to assess the long-term variability and trends of medicanes by obtaining a long-term climatology through dynamical downscaling of the NCEP/NCAR reanalysis data. In this paper, we examine the robustness of this method and investigate the value added for the study of medicanes. To do so, we performed several climate-mode simulations with a high-resolution regional atmospheric model (CCLM) for a number of test cases described in the literature. We find that medicanes form in the simulations, with deeper pressures and stronger winds than in the driving global NCEP reanalysis. The tracks are adequately reproduced. We conclude that our methodology is suitable for constructing multi-decadal statistics and scenarios of current and possible future medicane activity. (orig.)

  11. The 2010 Pakistan floods: high-resolution simulations with the WRF model

    Science.gov (United States)

    Viterbo, Francesca; Parodi, Antonio; Molini, Luca; Provenzale, Antonello; von Hardenberg, Jost; Palazzi, Elisa

    2013-04-01

    Estimating current and future water resources in high mountain regions with complex orography is a difficult but crucial task. In particular, the French-Italian project PAPRIKA is focused on two specific regions in the Hindu-Kush -- Himalaya -- Karakorum (HKKH) region: the Shigar basin in Pakistan, at the foot of K2, and the Khumbu valley in Nepal, at the foot of Mount Everest. In this framework, we use the WRF model to simulate precipitation and meteorological conditions at high resolution in areas with extreme orographic slopes, comparing the model output with station and satellite data. Once the model is validated, we will run a set of three future time-slices at very high spatial resolution, for the periods 2046-2050, 2071-2075 and 2096-2100, nested in different climate change scenarios (EXtreme PREcipitation and Hydrological climate Scenario Simulations - EXPRESS-Hydro project). As a prelude to this study, here we discuss the simulation of specific, high-intensity rainfall events in this area. In this paper we focus on the 2010 Pakistan floods, which began in late July 2010, produced heavy monsoon rains in the Khyber Pakhtunkhwa, Sindh, Punjab and Balochistan regions of Pakistan, and affected the Indus River basin. Approximately one-fifth of Pakistan's total land area was underwater, with a death toll of about 2000 people. This event has been simulated with the WRF model (version 3.3) in cloud-permitting mode (d01 at 14 km and d02 at 3.5 km); different convective closures and microphysics parameterizations have been used. A deeper understanding of the processes responsible for this event has been gained through comparison with rainfall depth observations, radiosounding data and geostationary/polar satellite images.

  12. Achieving accurate simulations of urban impacts on ozone at high resolution

    International Nuclear Information System (INIS)

    Li, J; Georgescu, M; Mahalov, A; Moustaoui, M; Hyde, P

    2014-01-01

    The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O3] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and, (2) the urban land cover is replaced with the region’s native shrubland. Impacts due to the presence of the built environment on [O3] are highly heterogeneous across the metropolitan area. Increased near surface [O3] due to urbanization of 10–20 ppb is predominantly a nighttime phenomenon while simulated impacts during daytime are negligible. Urbanization narrows the daily [O3] range (by virtue of increasing nighttime minima), an impact largely due to the region’s urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas. (letter)
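
    Shape-preserving monotone cubic interpolation of the kind used for the emissions downscaling is available off the shelf; the sketch below downscales a hypothetical coarse (4 km) one-dimensional emission profile to 1 km with SciPy's PCHIP interpolator, as a simplified stand-in for the 2-D MCI scheme of the paper.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      # Hypothetical coarse (4 km) emission profile along one row of the grid.
      x_coarse = np.arange(0, 40, 4.0)                     # km
      emis_coarse = np.array([0.1, 0.2, 0.2, 1.5, 6.0, 7.2, 3.0, 0.8, 0.3, 0.1])

      # Monotone (shape-preserving) cubic interpolation onto a 1 km grid:
      # unlike a plain cubic spline, PCHIP does not overshoot between coarse-grid
      # values, which matters for strictly non-negative emissions.
      x_fine = np.arange(0, 36.01, 1.0)
      emis_fine = PchipInterpolator(x_coarse, emis_coarse)(x_fine)

      assert emis_fine.min() >= 0.0          # no spurious negative emissions
      print(emis_fine.round(2))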

  13. Detailed high-resolution three-dimensional simulations of OMEGA separated reactants inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Haines, Brian M., E-mail: bmhaines@lanl.gov; Fincke, James R.; Shah, Rahul C.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B. [Los Alamos National Laboratory, MS T087, Los Alamos, New Mexico 87545 (United States); Grim, Gary P. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)

    2016-07-15

    We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. Through these processes, the long

  14. Simulation studies for a high resolution time projection chamber at the international linear collider

    Energy Technology Data Exchange (ETDEWEB)

    Muennich, A.

    2007-03-26

    The International Linear Collider (ILC) is planned to be the next large accelerator. The ILC will be able to perform high-precision measurements that are only possible in the clean environment of electron-positron collisions. In order to reach this high accuracy, the requirements for the detector performance are challenging. Several detector concepts are currently under study. The understanding of the detector and its performance will be crucial to extract the desired physics results from the data. To optimise the detector design, simulation studies are needed. Simulation packages like GEANT4 allow one to model the detector geometry and simulate the energy deposit in the different materials. However, the detector response, taking into account the transport of the produced charge to the readout devices and the effects of the readout electronics, cannot be described in detail. These processes in the detector will change the measured position of the energy deposit relative to the point of origin. The determination of this detector response is the task of detailed simulation studies, which have to be carried out for each subdetector. A high-resolution Time Projection Chamber (TPC) with gas amplification based on micro-pattern gas detectors is one of the options for the main tracking system at the ILC. In the present thesis a detailed simulation tool to study the performance of a TPC was developed. Its goal is to find the optimal settings to reach an excellent momentum and spatial resolution. After an introduction to the present status of particle physics and the ILC project, with special focus on the TPC as central tracker, the simulation framework is presented. The basic simulation methods and implemented processes are introduced. Within this stand-alone simulation framework each electron produced by primary ionisation is transported through the gas volume and amplified using Gas Electron Multipliers (GEMs). The output format of the simulation is identical to the raw data from a
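
    The single-electron transport and GEM amplification described above can be caricatured with a short Monte Carlo. The parameters below (drift length, transverse diffusion coefficient, mean gain, exponential gain fluctuations) are illustrative assumptions, not the values or physics models of the thesis.

      import numpy as np

      rng = np.random.default_rng(7)

      # Assumed (illustrative) transport parameters for a TPC gas.
      DRIFT_LENGTH_CM = 50.0          # drift distance to the readout plane
      DT_TRANSVERSE = 0.010           # transverse diffusion, cm / sqrt(cm)
      GEM_GAIN_MEAN = 2000.0          # effective gain of the GEM stack

      def transport_and_amplify(x0, y0, n_electrons):
          # Toy single-electron transport: Gaussian transverse diffusion over the
          # drift length, then an exponentially distributed avalanche gain.
          sigma = DT_TRANSVERSE * np.sqrt(DRIFT_LENGTH_CM)
          x = x0 + rng.normal(0.0, sigma, n_electrons)
          y = y0 + rng.normal(0.0, sigma, n_electrons)
          gain = rng.exponential(GEM_GAIN_MEAN, n_electrons)
          return x, y, gain

      # One ionisation cluster of 40 primary electrons released at (0, 0):
      x, y, q = transport_and_amplify(0.0, 0.0, 40)
      # Charge-weighted centroid as a crude reconstructed hit position:
      print("reconstructed x, y:", np.average(x, weights=q), np.average(y, weights=q))
      print("cluster rms (cm):", x.std())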

  15. Aerosol midlatitude cyclone indirect effects in observations and high-resolution simulations

    Directory of Open Access Journals (Sweden)

    D. T. McCoy

    2018-04-01

    Aerosol–cloud interactions are a major source of uncertainty in inferring the climate sensitivity from the observational record of temperature. The adjustment of clouds to aerosol is a poorly constrained aspect of these aerosol–cloud interactions. Here, we examine the response of midlatitude cyclone cloud properties to a change in cloud droplet number concentration (CDNC). Idealized experiments in high-resolution, convection-permitting global aquaplanet simulations with constant CDNC are compared to 13 years of remote-sensing observations. Observations and idealized aquaplanet simulations agree that increased warm conveyor belt (WCB) moisture flux into cyclones is consistent with higher cyclone liquid water path (CLWP). When CDNC is increased, a larger LWP is needed to give the same rain rate: the LWP adjusts to allow the rain rate to equal the moisture flux into the cyclone along the WCB. This results in an increased CLWP for higher CDNC at a fixed WCB moisture flux in both observations and simulations. If observed cyclones in the top and bottom terciles of CDNC are contrasted, the high-CDNC cyclones have not only a higher CLWP but also higher cloud cover and albedo. The difference in cyclone albedo between the cyclones in the top and bottom third of CDNC is observed by CERES to be between 0.018 and 0.032, which is consistent with a 4.6–8.3 W m^-2 in-cyclone enhancement in upwelling shortwave when scaled by annual-mean insolation. Based on a regression model of observed cyclone properties, roughly 60 % of the observed variability in CLWP can be explained by CDNC and WCB moisture flux.
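
    The closing sentence refers to a regression of cyclone liquid water path on CDNC and WCB moisture flux. The minimal version below fits such a model to synthetic data purely to show the mechanics; the 60 % figure quoted above comes from the observations, not from this toy fit.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 500

      # Synthetic stand-ins for cyclone-mean predictors (arbitrary units).
      cdnc = rng.lognormal(np.log(50), 0.4, n)        # cloud droplet number conc.
      wcb_flux = rng.gamma(4.0, 2.0, n)               # warm-conveyor-belt moisture flux

      # Synthetic "observed" cyclone liquid water path with noise.
      clwp = 0.8 * np.log(cdnc) + 5.0 * wcb_flux + rng.normal(0, 4.0, n)

      # Ordinary least squares: CLWP ~ a*log(CDNC) + b*WCB + c
      X = np.column_stack([np.log(cdnc), wcb_flux, np.ones(n)])
      coef, *_ = np.linalg.lstsq(X, clwp, rcond=None)
      pred = X @ coef
      r2 = 1 - np.sum((clwp - pred) ** 2) / np.sum((clwp - clwp.mean()) ** 2)
      print("coefficients:", coef.round(2), " explained variance R^2:", round(r2, 2))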

  16. High-resolution nested model simulations of the climatological circulation in the southeastern Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    S. Brenner

    2003-01-01

    As part of the Mediterranean Forecasting System Pilot Project (MFSPP) we have implemented a high-resolution (2 km horizontal grid, 30 sigma levels) version of the Princeton Ocean Model for the southeastern corner of the Mediterranean Sea. The domain extends 200 km offshore and includes the continental shelf and slope and part of the open sea. The model is nested in an intermediate-resolution (5.5 km grid) model that covers the entire Levantine, Ionian, and Aegean Seas. The nesting is one way, so that velocity, temperature, and salinity along the boundaries are interpolated from the relevant intermediate model variables. An integral constraint is applied so that the net mass flux across the open boundaries is identical to the net flux in the intermediate model. The model is integrated for three perpetual years with surface forcing specified from monthly mean climatological wind stress and heat fluxes. The model is stable and spins up within the first year to produce a repeating seasonal cycle throughout the three-year integration period. While there is some internal variability evident in the results, it is clear that, due to the relatively small domain, the results are strongly influenced by the imposed lateral boundary conditions. The results closely follow the simulation of the intermediate model. The main improvement is in the simulation over the narrow shelf region, which is not adequately resolved by the coarser grid model. Comparisons with direct current measurements over the shelf and slope show reasonable agreement despite the limitations of the climatological forcing. The model correctly simulates the direction and the typical speeds of the flow over the shelf and slope, but has difficulty properly reproducing the seasonal cycle in the speed. Key words. Oceanography: general (continental shelf processes; numerical modelling; ocean prediction)
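
    The integral constraint mentioned above, which forces the net mass flux through the open boundaries of the nest to match the intermediate model, can be written as a small correction to the interpolated normal velocities. The sketch below applies a uniform velocity correction over the open-boundary area; arrays and numbers are hypothetical, and the actual model may distribute the correction differently.

      import numpy as np

      def constrain_net_flux(v_normal, cell_area, target_net_flux):
          # Adjust interpolated open-boundary normal velocities so that the net
          # volume flux equals the value diagnosed from the intermediate model.
          # v_normal: normal velocity per boundary cell (m/s), positive inward
          # cell_area: cross-sectional area per boundary cell (m^2)
          # target_net_flux: net inward flux required by the coarse model (m^3/s)
          current = np.sum(v_normal * cell_area)
          correction = (target_net_flux - current) / np.sum(cell_area)
          return v_normal + correction          # uniform velocity correction

      # Toy example: 100 boundary cells, interpolated velocities slightly imbalanced.
      rng = np.random.default_rng(5)
      v = rng.normal(0.0, 0.1, 100)
      area = np.full(100, 2.0e6)                 # e.g. 2 km wide x 1000 m deep cells
      v_adj = constrain_net_flux(v, area, target_net_flux=0.0)
      print("net flux before:", np.sum(v * area), " after:", np.sum(v_adj * area))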

  17. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    Science.gov (United States)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, we also explored the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolution (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of the atmospheric circulation from 2.5 deg resolution NCEP reanalyses at 6-hourly intervals, with SSTs from the NCEP reanalysis data used as lower boundary forcing. The CCAM 60 km resolution run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was then downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high-resolution model runs was to obtain a deeper understanding of the linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  18. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    Science.gov (United States)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.

  19. Selecting ultra-faint dwarf candidate progenitors in cosmological N-body simulations at high redshifts

    Science.gov (United States)

    Safarzadeh, Mohammadtaher; Ji, Alexander P.; Dooley, Gregory A.; Frebel, Anna; Scannapieco, Evan; Gómez, Facundo A.; O'Shea, Brian W.

    2018-06-01

    The smallest satellites of the Milky Way ceased forming stars during the epoch of reionization and thus provide archaeological access to galaxy formation at z > 6. Numerical studies of these ultrafaint dwarf galaxies (UFDs) require expensive cosmological simulations with high mass resolution that are carried out down to z = 0. However, if we are able to statistically identify UFD host progenitors at high redshifts with relatively high probabilities, we can avoid this high computational cost. To find such candidates, we analyse the merger trees of Milky Way type haloes from the high-resolution Caterpillar suite of dark matter only simulations. Satellite UFD hosts at z = 0 are identified based on four different abundance matching (AM) techniques. All the haloes at high redshifts are traced forward in time in order to compute the probability of surviving as satellite UFDs today. Our results show that selecting potential UFD progenitors based solely on their mass at z = 12 (8) results in a 10 per cent (20 per cent) chance of obtaining a surviving UFD at z = 0 in three of the AM techniques we adopted. We find that the progenitors of surviving satellite UFDs have lower virial ratios (η), and are preferentially located at large distances from the main MW progenitor, while they show no correlation with concentration parameter. Haloes with favorable locations and virial ratios are ≈3 times more likely to survive as satellite UFD candidates at z = 0.

  20. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    Science.gov (United States)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the end of 60-meter wire booms to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if the oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we discuss the implementation of a high-resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position, and data processing algorithms subsequently yielded excellent spatial and temporal resolution. This method was repeated in a parametric study for various lengths, root tensions and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, Coulomb, and combinations of these. Our data analysis and dynamics models have shown that the intrinsic damping of the baseline boom is insufficient, forcing project management to explore mitigation strategies.
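
    A common first step in this kind of damping analysis is to estimate an equivalent viscous damping ratio from the decay of successive oscillation peaks (the logarithmic decrement). The sketch below applies that idea to a synthetic angular-position record; it is a stand-in for, not a reproduction of, the regression models named in the abstract.

      import numpy as np

      # Synthetic decaying pendulum record (angle in degrees vs. time in seconds).
      t = np.linspace(0.0, 200.0, 20001)
      f0, zeta_true = 0.12, 0.004                     # natural frequency (Hz), damping ratio
      omega_d = 2 * np.pi * f0 * np.sqrt(1 - zeta_true**2)
      theta = 5.0 * np.exp(-zeta_true * 2 * np.pi * f0 * t) * np.cos(omega_d * t)

      # Locate successive positive peaks (simple local-maximum test).
      peaks = np.where((theta[1:-1] > theta[:-2]) & (theta[1:-1] > theta[2:]))[0] + 1
      amps = theta[peaks]

      # Logarithmic decrement from a linear fit of log(amplitude) vs. peak index,
      # then the equivalent viscous damping ratio.
      slope = np.polyfit(np.arange(amps.size), np.log(amps), 1)[0]
      delta = -slope
      zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
      print(f"true zeta = {zeta_true}, estimated zeta = {zeta_est:.4f}")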

  1. Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)

    2017-04-15

    It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds and that winds are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In those studies, the surface drag generated by the subgrid-scale orography is represented by introducing a sink term in the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill for low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. Increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.
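
    The sink term in the momentum equation can be illustrated schematically: a drag whose strength grows with the subgrid-orography variance damps the near-surface wind. The sketch below captures that general idea with an assumed coefficient and an implicit update; it is not the exact Mass and Ovens (2010) formulation used in the study.

      import numpy as np

      def apply_sso_drag(u, v, var_sso, dt, c_drag=1.0e-6):
          # Damp near-surface winds with a drag proportional to subgrid-scale
          # orography variance (schematic, not the operational WRF formulation).
          # u, v: 10-m wind components (m/s); var_sso: orography variance (m^2);
          # dt: time step (s); c_drag: tunable coefficient (assumed value).
          speed = np.hypot(u, v)
          sink = c_drag * var_sso * speed          # 1/s, larger over rough terrain
          factor = 1.0 / (1.0 + dt * sink)         # implicit update keeps it stable
          return u * factor, v * factor

      # Example: identical winds over a smooth and a mountainous grid cell.
      u = np.array([6.0, 6.0]); v = np.array([2.0, 2.0])
      var_sso = np.array([50.0, 5000.0])
      print(apply_sso_drag(u, v, var_sso, dt=60.0))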

  2. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David; Ott, Christian D. [TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Abdikamalov, Ernazar [Department of Physics, School of Science and Technology, Nazarbayev University, Astana 010000 (Kazakhstan); Couch, Sean M. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Haas, Roland [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, D-14476 Golm (Germany); Schnetter, Erik, E-mail: dradice@caltech.edu [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada)

    2016-03-20

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased.

  3. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Radice, David; Ott, Christian D.; Abdikamalov, Ernazar; Couch, Sean M.; Haas, Roland; Schnetter, Erik

    2016-01-01

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased

  4. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    Science.gov (United States)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system in order to enable global high-resolution weather simulation and forecasting at CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. Full representation of the three-dimensional Coriolis force is considered in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region is set to ensure computation of cross-boundary transport and updating of Dirichlet-type boundary conditions in the solution process of the elliptic equations with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, and the development of a Rossby-Haurwitz wave and a baroclinic wave, are carried out, and excellent computational stability and accuracy of the dynamical core have been confirmed. After implementation of the physical processes of long- and short-wave radiation, cumulus convection, microphysical transformation of water substances and turbulent processes in the planetary boundary layer, including a surface-layer vertical flux parameterization, a long-term run of the model is then carried out under an idealized aqua-planet configuration to test the model physics and the model's ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like structure of the circulation. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical zonal westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The specified SST and solar insolation, being symmetric about the equator, enhance the ITCZ and tropical precipitation, which is concentrated in the tropics. Additional analysis and

  5. Sixth- and eighth-order Hermite integrator for N-body simulations

    Science.gov (United States)

    Nitadori, Keigo; Makino, Junichiro

    2008-10-01

    We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for the force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme for most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
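
    For reference, the predictor-corrector structure that the sixth- and eighth-order schemes extend is easiest to see in the traditional fourth-order Hermite integrator, which uses only accelerations and jerks. The shared-timestep sketch below (G = 1, direct summation with a small softening) illustrates that baseline scheme; the higher-order versions of the paper additionally use snap and crackle and are not implemented here.

      import numpy as np

      def acc_jerk(x, v, m, eps2=1e-12):
          # Pairwise Newtonian accelerations and jerks (G = 1).
          n = len(m)
          a = np.zeros_like(x); j = np.zeros_like(v)
          for i in range(n):
              dx = x - x[i]; dv = v - v[i]
              r2 = np.einsum("ij,ij->i", dx, dx) + eps2
              r2[i] = 1.0                                   # avoid self-interaction
              inv_r3 = r2 ** -1.5
              inv_r3[i] = 0.0
              rv = np.einsum("ij,ij->i", dx, dv)
              a[i] = np.sum((m * inv_r3)[:, None] * dx, axis=0)
              j[i] = np.sum((m * inv_r3)[:, None] * (dv - (3 * rv / r2)[:, None] * dx), axis=0)
          return a, j

      def hermite_step(x, v, m, dt):
          # One shared-timestep fourth-order Hermite step: predict, evaluate, correct.
          a0, j0 = acc_jerk(x, v, m)
          xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6   # predictor
          vp = v + a0 * dt + j0 * dt**2 / 2
          a1, j1 = acc_jerk(xp, vp, m)                        # evaluation
          v1 = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
          x1 = x + (v + v1) * dt / 2 + (a0 - a1) * dt**2 / 12
          return x1, v1

      # Two-body circular orbit test (total mass 1, separation 1, period 2*pi).
      m = np.array([0.5, 0.5])
      x = np.array([[-0.5, 0, 0], [0.5, 0, 0]], dtype=float)
      v = np.array([[0, -0.5, 0], [0, 0.5, 0]], dtype=float)
      for _ in range(1000):
          x, v = hermite_step(x, v, m, dt=2 * np.pi / 1000)
      print("separation after one orbit:", np.linalg.norm(x[1] - x[0]))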

  6. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    International Nuclear Information System (INIS)

    Grimm, Simon L.; Stadel, Joachim G.

    2014-01-01

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.

  7. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Simon L.; Stadel, Joachim G., E-mail: sigrimm@physik.uzh.ch [Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2014-11-20

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.

  8. Simulations of collisions between N-body classical systems in interaction

    International Nuclear Information System (INIS)

    Morisseau, Francois

    2006-05-01

    Classical N-Body Dynamics (CNBD) is dedicated to the simulation of collisions between classical systems. The 2-body interaction used here has the properties of the Van der Waals potential and depends on just a few parameters. This work has two main goals. First, some theoretical approaches assume that the dynamical stage of the collisions plays an important role; moreover, colliding nuclei are expected to present a first-order liquid-gas phase transition. Several signals have been introduced to identify this transition. We have searched for two of them: bimodality of the mass asymmetry and negative heat capacity. We have found both, and we give an explanation for their presence in our calculations. Second, we have improved the interaction by adding a Coulomb-like potential and by taking into account the stronger proton-neutron interaction in nuclei. We have then worked out the relations that exist between the parameters of the 2-body interaction and the properties of the systems. These studies allow us to fit the properties of the classical systems to those of nuclei. In this manuscript the first results of this fit are shown. (author)
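
    The two-body interaction described above has Van der Waals-like behaviour: a repulsive core and an attractive tail controlled by a few parameters. The Lennard-Jones form below is a standard stand-in used here purely for illustration; it is not the exact potential of the thesis.

      import numpy as np

      def lj_potential_and_force(r, epsilon=1.0, sigma=1.0):
          # Lennard-Jones pair potential and radial force (Van der Waals-like:
          # repulsive core, attractive tail). epsilon sets the well depth, sigma the range.
          sr6 = (sigma / r) ** 6
          u = 4 * epsilon * (sr6**2 - sr6)
          f = 24 * epsilon * (2 * sr6**2 - sr6) / r     # -dU/dr, positive = repulsive
          return u, f

      r = np.linspace(0.9, 3.0, 8)
      u, f = lj_potential_and_force(r)
      print(np.round(u, 3))
      print(np.round(f, 3))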

  9. N-body simulations of planet formation: understanding exoplanet system architectures

    Science.gov (United States)

    Coleman, Gavin; Nelson, Richard

    2015-12-01

    Observations have demonstrated the existence of a significant population of compact systems comprised of super-Earths and Neptune-mass planets, and a population of gas giants that appear to occur primarily in either short-period (100 days) orbits. The broad diversity of system architectures raises the question of whether or not the same formation processes operating in standard disc models can explain these planets, or if different scenarios are required instead to explain the widely differing architectures. To explore this issue, we present the results from a comprehensive suite of N-body simulations of planetary system formation that include the following physical processes: gravitational interactions and collisions between planetary embryos and planetesimals; type I and II migration; gas accretion onto planetary cores; self-consistent viscous disc evolution and disc removal through photo-evaporation. Our results indicate that the formation and survival of compact systems of super-Earths and Neptune-mass planets occur commonly in disc models where a simple prescription for the disc viscosity is assumed, but such models never lead to the formation and survival of gas giant planets due to migration into the star. Inspired in part by the ALMA observations of HL Tau, and by MHD simulations that display the formation of long-lived zonal flows, we have explored the consequences of assuming that the disc viscosity varies in both time and space. We find that the radial structuring of the disc leads to conditions in which systems of giant planets are able to form and survive. Furthermore, these giants generally occupy those regions of the mass-period diagram that are densely populated by the observed gas giants, suggesting that the planet traps generated by radial structuring of protoplanetary discs may be a necessary ingredient for forming giant planets.

  10. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high-resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibetan Plateau (TP) and compared to the Global Land Data Assimilation System (GLDAS) product. The high-resolution simulation resolves precipitation changes better than its coarse-resolution forcing, which is the dominant contribution to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights into the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. The high-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. It also enables new and substantial findings regarding the role of thermodynamics and transient eddies in the P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis reveals the contrasting convergence/divergence changes between the northwestern and southeastern TP, and feedback through latent heat release, as an important mechanism leading to the mean P-E changes in the TP.
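
    The decomposition referred to above is commonly written by splitting the change in vertically integrated moisture flux convergence into contributions from changes in mean humidity (thermodynamic), changes in the mean circulation (dynamic), and changes in the transient eddy fluxes. One standard form is sketched below, with δ the change between the two periods, overbars monthly means and primes departures from them; the exact formulation used in the paper may differ in detail.

      \[
      \delta(P - E) \;\approx\;
      \underbrace{-\nabla \cdot \frac{1}{g}\int_{0}^{p_s} \bar{\mathbf{u}}\,\delta\bar{q}\,dp}_{\text{thermodynamic}}
      \;\underbrace{-\,\nabla \cdot \frac{1}{g}\int_{0}^{p_s} \delta\bar{\mathbf{u}}\,\bar{q}\,dp}_{\text{dynamic}}
      \;\underbrace{-\,\nabla \cdot \frac{1}{g}\int_{0}^{p_s} \delta\,\overline{\mathbf{u}'q'}\,dp}_{\text{transient eddy}}
      \]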

  11. High-resolution simulations of the thermophysiological effects of human exposure to 100 MHz RF energy

    International Nuclear Information System (INIS)

    Nelson, David A; Curran, Allen R; Nyberg, Hans A; Marttila, Eric A; Mason, Patrick A; Ziriax, John M

    2013-01-01

    Human exposure to radio frequency (RF) electromagnetic energy is known to result in tissue heating and can raise temperatures substantially in some situations. Standards for safe exposure to RF, however, do not reflect bio-heat transfer considerations. Thermoregulatory function (vasodilation, sweating) may mitigate RF heating effects in some environments and exposure scenarios. Conversely, a combination of an extreme environment (high temperature, high humidity), high activity levels and thermally insulating garments may exacerbate RF exposure and pose a risk of unsafe temperature elevation, even for power densities which might be acceptable in a normothermic environment. A high-resolution thermophysiological model, incorporating a heterogeneous tissue model of a seated adult, has been developed and used to replicate a series of whole-body exposures at a frequency (100 MHz) which approximates that of human whole-body resonance. Exposures were simulated at three power densities (4, 6 and 8 mW/cm^2) plus a sham exposure, and at three different ambient temperatures (24, 28 and 31 °C). The maximum hypothalamic temperature increase over the course of a 45 min exposure was 0.28 °C and occurred in the most extreme conditions (T_amb = 31 °C, PD = 8 mW/cm^2). Skin temperature increases attributable to RF exposure were modest, with the exception of a 'hot spot' in the vicinity of the ankle, where skin temperatures exceeded 39 °C. Temperature increases in internal organs and tissues were small, except for connective tissue and bone in the lower leg and foot. Temperature elevation was also noted in the spinal cord, consistent with a hot spot previously identified in the literature. (paper)
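
    Thermophysiological models of this kind are typically built around a bioheat balance. The Pennes form below is the standard starting point, shown here for orientation rather than as the specific equation set of the model used in the paper; the SAR term is the deposited RF power per unit mass.

      \[
      \rho c \,\frac{\partial T}{\partial t}
        = \nabla \cdot \left( k \nabla T \right)
        + \rho_b c_b\, \omega_b \left( T_a - T \right)
        + Q_{\mathrm{met}}
        + \rho\, \mathrm{SAR}
      \]

    Here ρ, c and k are the tissue density, specific heat and thermal conductivity; ρ_b, c_b and ω_b are the blood density, specific heat and perfusion rate; T_a is the arterial blood temperature; and Q_met is the metabolic heat production.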

  12. High-resolution WRF-LES simulations for real episodes: A case study for prealpine terrain

    Science.gov (United States)

    Hald, Cornelius; Mauder, Matthias; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    While in most large or regional scale weather and climate models turbulence is parametrized, LES (Large Eddy Simulation) allows for the explicit modeling of turbulent structures in the atmosphere. With the exponential growth in available computing power the technique has become more and more applicable, yet it has mostly been used to model idealized scenarios. We investigate how well WRF-LES can represent small-scale weather patterns, evaluating the results against different hydrometeorological measurements. We use WRF-LES to model the diurnal cycle for a 48 hour episode in summer over moderately complex terrain in southern Germany. The model setup uses a high resolution digital elevation model and land use and vegetation maps. The atmospheric boundary conditions are set by reanalysis data. Schemes for radiation and microphysics and a land-surface model are employed. The biggest challenge in the modeling arises from the high horizontal resolution of dx = 30 m, since the subgrid-scale model then requires a vertical resolution of dz ≈ 10 m for optimal results. We observe model instabilities and present solutions such as smoothing of the surface input data, careful positioning of the model domain and shortening of the model time step down to a twentieth of a second. Model results are compared to an array of instruments including eddy covariance stations, LIDAR, RASS, SODAR, weather stations and unmanned aerial vehicles. All instruments are part of the TERENO pre-Alpine area and were employed in the orchestrated measurement campaign ScaleX in July 2015. Examination of the results shows reasonable agreement between model and measurements in temperature and moisture profiles. Modeled wind profiles are highly dependent on the vertical resolution and are in accordance with measurements only at higher wind speeds. A direct comparison of turbulence is made difficult by the purely statistical character of turbulent motions in the model.
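
    The time-step constraint mentioned above is essentially the Courant-Friedrichs-Lewy (CFL) condition. The sketch below is only a back-of-the-envelope check: the 30 m grid spacing and the time step of a twentieth of a second are quoted from the abstract, while the wind speed is an illustrative assumption:

    ```python
    def max_stable_timestep(dx_m, u_max_ms, courant=1.0):
        """Advective CFL limit: dt <= C * dx / u_max."""
        return courant * dx_m / u_max_ms

    # With dx = 30 m and winds of ~15 m/s the advective limit is ~2 s; acoustic
    # modes and the ~10 m vertical spacing push the usable step far lower, which
    # is consistent with the reported time step of 0.05 s.
    print(max_stable_timestep(30.0, 15.0))
    ```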

  13. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report the creation of the digital elevation model of the Olkiluoto area, incorporating a large area of seabed, is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data such as contour lines and irregular elevation measurements were used as source data in the process. The precision and reliability of the available source data varied considerably. A digital elevation model (DEM) is a representation of the elevation of the surface of the earth in a particular area in digital format. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data, and is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points were removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape in the time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
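
    As a concrete illustration of the procedure described above (perturb the source elevations according to their assumed error distributions, re-interpolate, and read off percentiles), here is a minimal sketch using SciPy's thin-plate-spline interpolator. The Gaussian error model, the 95% interval, and the function names are illustrative assumptions rather than the report's actual implementation:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def dem_with_confidence(xy, z, z_sigma, grid_xy, n_draws=1000, seed=0):
        """Monte Carlo confidence bounds for a thin-plate-spline DEM.
        xy: (N, 2) source point coordinates, z: (N,) elevations,
        z_sigma: (N,) assumed per-point measurement std. dev.,
        grid_xy: (M, 2) target grid coordinates."""
        rng = np.random.default_rng(seed)
        draws = np.empty((n_draws, len(grid_xy)))
        for i in range(n_draws):
            # perturb the source elevations according to their error model
            z_pert = z + rng.normal(0.0, z_sigma)
            tps = RBFInterpolator(xy, z_pert, kernel="thin_plate_spline")
            draws[i] = tps(grid_xy)
        mean = draws.mean(axis=0)
        lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)  # ~95% interval
        return mean, lo, hi
    ```

    With 1 000 draws per point, as in the report, the spread of the realizations at each grid cell gives the required confidence interval.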

  14. Clusters of galaxies compared with N-body simulations: masses and mass segregation

    International Nuclear Information System (INIS)

    Struble, M.F.; Bludman, S.A.

    1979-01-01

    With three virially stable N-body simulations of Wielen, it is shown that use of the expression for the total mass derived from averaged quantities (velocity dispersion and mean harmonic radius) yields an overestimate of the mass by as much as a factor of 2-3, and use of the heaviest mass sample gives an underestimate by a factor of 2-3. The estimate of the mass using mass-weighted quantities (i.e., derived from the customary definition of kinetic and potential energies) yields a better value irrespective of the mass sample, as applied to late time intervals of the models (≥ three two-body relaxation times). The uncertainty is at most approximately 50%. This suggests that it is better to employ the mass-weighted expression for the mass when determining cluster masses. The virial ratio, which is a ratio of the mass-weighted to averaged expressions for the potential energy, is found to vary between 1 and 2. It is concluded that virial ratios of approximately 4-10 for observed clusters cannot be explained even by the imprecision of the expression for the mass using averaged quantities, and certainly imply the presence of unseen matter. Total masses via customary application of the virial theorem are calculated for 39 clusters, and total masses for 12 clusters are calculated by a variant of the usual application. The distribution of cluster masses is also presented and briefly discussed. Mass segregation in Wielen's models is studied in terms of the binding energy per unit mass of the 'heavy' sample compared with the 'light' sample. The general absence of mass segregation in relaxed clusters and the large virial discrepancies are attributed to a population of many low-mass objects that may constitute the bulk mass of clusters of galaxies. (Auth.)

  15. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A., E-mail: bharath@u.northwestern.edu [Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, Evanston, IL (United States)

    2013-02-15

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ≈ 10⁷ particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10⁵ to 10⁷. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲ 0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10⁵, 128 for N = 10⁶ and 256 for N = 10⁷. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.
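
    The Plummer initial conditions used for the validation runs can be generated with the standard rejection recipe. The sketch below is a simple serial sampler in N-body units (G = M = 1, scale radius a = 1); it illustrates the initial distribution function only and is not the parallel Hénon Monte Carlo code described here:

    ```python
    import numpy as np

    def random_unit_vectors(n, rng):
        """Isotropically distributed unit vectors."""
        z = rng.uniform(-1.0, 1.0, n)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        s = np.sqrt(1.0 - z**2)
        return np.column_stack((s * np.cos(phi), s * np.sin(phi), z))

    def plummer_sphere(n, seed=0):
        """Sample positions and velocities from a Plummer model in N-body units
        using the standard rejection recipe."""
        rng = np.random.default_rng(seed)
        # radii from inverting the cumulative mass profile M(r) = r^3 / (1 + r^2)^(3/2)
        x = rng.uniform(1e-12, 1.0, n)
        r = (x ** (-2.0 / 3.0) - 1.0) ** (-0.5)
        pos = r[:, None] * random_unit_vectors(n, rng)
        # speed fraction q = v / v_esc, rejection-sampled from q^2 (1 - q^2)^(7/2)
        q = np.empty(n)
        filled = 0
        while filled < n:
            q_try = rng.random(n - filled)
            u = rng.random(n - filled)
            keep = 0.1 * u < q_try**2 * (1.0 - q_try**2) ** 3.5
            kept = q_try[keep]
            q[filled:filled + kept.size] = kept
            filled += kept.size
        v_esc = np.sqrt(2.0) * (1.0 + r**2) ** (-0.25)
        vel = (q * v_esc)[:, None] * random_unit_vectors(n, rng)
        return pos, vel
    ```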

  16. Structure formation by a fifth force: N-body versus linear simulations

    Science.gov (United States)

    Li, Baojiu; Zhao, Hongsheng

    2009-08-01

    We lay out the frameworks to numerically study the structure formation in both linear and nonlinear regimes in general dark-matter-coupled scalar field models, and give an explicit example where the scalar field serves as a dynamical dark energy. Adopting parameters of the scalar field which yield a realistic cosmic microwave background (CMB) spectrum, we generate the initial conditions for our N-body simulations, which follow the spatial distributions of the dark matter and the scalar field by solving their equations of motion using the multilevel adaptive grid technique. We show that the spatial configuration of the scalar field tracks well the voids and clusters of dark matter. Indeed, the propagation of the scalar degree of freedom effectively acts as a fifth force on dark matter particles, whose range and magnitude are determined by the two model parameters (μ,γ), the local dark matter density, as well as the background value of the scalar field. The model behaves like the ΛCDM paradigm on scales relevant to the CMB spectrum, which are well beyond the probe of the local fifth force and thus not significantly affected by the matter-scalar coupling. On scales comparable to or shorter than the range of the local fifth force, the fifth force is perfectly parallel to gravity and their strengths have a fixed ratio 2γ² determined by the matter-scalar coupling, provided that the chameleon effect is weak; if on the other hand there is a strong chameleon effect (i.e., the scalar field almost resides at its effective potential minimum everywhere in space), the fifth force indeed has suppressed effects in high density regions and shows no obvious correlation with gravity, which means that the dark-matter-scalar-field coupling is not simply equivalent to a rescaling of the gravitational constant or the mass of the dark matter particles. We show these spatial distributions and (lack of) correlations at typical redshifts (z = 0, 1, 5.5) in our multigrid million-particle simulations.

  17. Structure formation by a fifth force: N-body versus linear simulations

    International Nuclear Information System (INIS)

    Li Baojiu; Zhao Hongsheng

    2009-01-01

    We lay out the frameworks to numerically study the structure formation in both linear and nonlinear regimes in general dark-matter-coupled scalar field models, and give an explicit example where the scalar field serves as a dynamical dark energy. Adopting parameters of the scalar field which yield a realistic cosmic microwave background (CMB) spectrum, we generate the initial conditions for our N-body simulations, which follow the spatial distributions of the dark matter and the scalar field by solving their equations of motion using the multilevel adaptive grid technique. We show that the spatial configuration of the scalar field tracks well the voids and clusters of dark matter. Indeed, the propagation of the scalar degree of freedom effectively acts as a fifth force on dark matter particles, whose range and magnitude are determined by the two model parameters (μ,γ), the local dark matter density, as well as the background value of the scalar field. The model behaves like the ΛCDM paradigm on scales relevant to the CMB spectrum, which are well beyond the probe of the local fifth force and thus not significantly affected by the matter-scalar coupling. On scales comparable to or shorter than the range of the local fifth force, the fifth force is perfectly parallel to gravity and their strengths have a fixed ratio 2γ² determined by the matter-scalar coupling, provided that the chameleon effect is weak; if on the other hand there is a strong chameleon effect (i.e., the scalar field almost resides at its effective potential minimum everywhere in space), the fifth force indeed has suppressed effects in high density regions and shows no obvious correlation with gravity, which means that the dark-matter-scalar-field coupling is not simply equivalent to a rescaling of the gravitational constant or the mass of the dark matter particles. We show these spatial distributions and (lack of) correlations at typical redshifts (z = 0, 1, 5.5) in our multigrid million-particle simulations.

  18. N-body simulations with a cosmic vector for dark energy

    Science.gov (United States)

    Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.

    2012-07-01

    We present the results of a series of cosmological N-body simulations of a vector dark energy (VDE) model, performed using a suitably modified version of the publicly available GADGET-2 code. The set-ups of our simulations were calibrated pursuing a twofold aim: (1) to analyse the large-scale distribution of massive objects and (2) to determine the properties of halo structure in this different framework. We observe that structure formation is enhanced in VDE, since the mass function at high redshift is boosted by up to a factor of 10 with respect to Λ cold dark matter (ΛCDM), possibly alleviating tensions with the observations of massive clusters at high redshifts and an early reionization epoch. Significant differences can also be found for the value of the growth factor, which in VDE shows a completely different behaviour, and in the distribution of voids, which in this cosmology are on average smaller and less abundant. We further studied the structure of dark matter haloes more massive than 5 × 10¹³ h⁻¹ M⊙, finding that no substantial difference emerges when comparing the spin parameter, shape, triaxiality and profiles of structures evolved under the different cosmological pictures. Nevertheless, minor differences can be found in the concentration-mass relation and the two-point correlation function, both showing different amplitudes and steeper slopes. Using an additional series of simulations of a ΛCDM scenario with the same Ωm and σ8 used in the VDE cosmology, we have been able to establish whether the modifications induced in the new cosmological picture were due to the particular nature of the dynamical dark energy or a straightforward consequence of the cosmological parameters. On large scales, the dynamical effects of the cosmic vector field can be seen in the peculiar evolution of the cluster number density function with redshift, in the shape of the mass function, in the distribution of voids and in the characteristic form of the growth index γ(z). On

  19. A high resolution hydrodynamic 3-D model simulation of the Malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    2003-01-01

    The seasonal variability of the water masses and transport in the Malta Channel and in the proximity of the Maltese Islands has been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy-resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle. The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along the Sicilian shore, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation; and a westward winter-intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by internal dynamics, to be followed in

  20. A high resolution hydrodynamic 3-D model simulation of the Malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    The seasonal variability of the water masses and transport in the Malta Channel and in the proximity of the Maltese Islands has been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy-resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle.

    The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along the Sicilian shore, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation; and a westward winter-intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by

  1. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of the MM5 mesoscale model's sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of the atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very

  2. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Science.gov (United States)

    Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Olusegun, Christiana; Klein, Cornelia; Hamann, Ilse; Salack, Seyni; Bliefernicht, Jan; Kunstmann, Harald

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ), with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980-2010 and the two future periods 2020-2050 and 2070-2100. A brief comparison to observations and to two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess the climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent in the north and

  3. PDF added value of a high resolution climate simulation for precipitation

    Science.gov (United States)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high resolution simulation with the WRF model. Results are from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used range from rain gauge precipitation records to regular gridded datasets of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese precipitation grid at 0.2° × 0.2°, developed from observed daily rain gauge precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher resolution simulation, regarding both the full PDF and the extremes. The method shows strong potential for application to other simulation exercises and to the evaluation of other variables.

  4. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  5. Identifying added value in high-resolution climate simulations over Scandinavia

    DEFF Research Database (Denmark)

    Mayer, Stephania; Fox Maule, Cathrine; Sobolowski, Stefan

    2015-01-01

    High-resolution data are needed in order to assess potential impacts of extreme events on infrastructure in the mid-latitudes. Dynamical downscaling offers one way to obtain this information. However, prior to implementation in any impacts assessment scheme, model output must be validated and det...

  6. The gravitational interaction between N-body (star clusters) and hydrodynamic (ISM) codes in disk galaxy simulations

    International Nuclear Information System (INIS)

    Schroeder, M.C.; Comins, N.F.

    1986-01-01

    During the past twenty years, three approaches to numerical simulations of the evolution of galaxies have been developed. The first approach, N-body programs, models the motion of clusters of stars as point particles which interact via their gravitational potentials to determine the system dynamics. Some N-body codes model molecular clouds as colliding, inelastic particles. The second approach, hydrodynamic models of galactic dynamics, simulates the activity of the interstellar medium as a compressible gas. These models presently do not include stars or the effect of gravitational fields, nor do they allow for stellar evolution or the exchange of mass and angular momentum between stars and the interstellar medium. The third approach, stochastic star formation simulations of disk galaxies, allows for the interaction between stars and interstellar gas, but does not allow the star particles to move under the influence of gravity.

  7. Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita

    Science.gov (United States)

    Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.

    2009-12-01

    Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions

  8. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, e.g., global soil carbon stocks. However, they do not allow for simulation runs with, e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: all 148 soil types are ordered on the basis of their expected topographic position using, e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: the ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: a model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property

  9. Distribution function approach to redshift space distortions. Part II: N-body simulations

    International Nuclear Information System (INIS)

    Okumura, Teppei; Seljak, Uroš; McDonald, Patrick; Desjacques, Vincent

    2012-01-01

    Measurement of redshift-space distortions (RSD) offers an attractive method to directly probe the cosmic growth history of density perturbations. A distribution function approach, in which RSD can be written as a sum over density-weighted velocity moment correlators, has recently been developed. In this paper we use results of N-body simulations to investigate the individual contributions and convergence of this expansion for dark matter. If the series is expanded in powers of μ, the cosine of the angle between the Fourier mode and the line of sight, then a finite number of terms contribute at each order. We present these terms and investigate their contribution to the total as a function of wavevector k. For the μ² term the correlation between density and momentum dominates on large scales. Higher order corrections, which act as a Finger-of-God (FoG) term, contribute 1% at k ∼ 0.015 h Mpc⁻¹ and 10% at k ∼ 0.05 h Mpc⁻¹ at z = 0, while for k > 0.15 h Mpc⁻¹ they dominate and make the total negative. These higher order terms are dominated by density-energy density correlations, which contribute negatively to the power, while the vorticity part of the momentum density auto-correlation adds to the total power, but is an order of magnitude lower. For the μ⁴ term the dominant contribution on large scales is the scalar part of the momentum density auto-correlation, while higher order terms dominate for k > 0.15 h Mpc⁻¹. The μ⁶ and μ⁸ terms have very little power on large scales, shooting up by 2-3 orders of magnitude toward smaller scales. We also compare the expansion to the full 2D P_ss(k,μ), as well as to the monopole, quadrupole, and hexadecapole integrals of P_ss(k,μ). For these statistics an infinite number of terms contribute, and we find that the expansion achieves percent-level accuracy at 6th order for small kμ, but breaks down on smaller scales because the series is no longer perturbative. We also explore resummation of the higher order terms into a FoG form.
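
    For reference, the monopole, quadrupole, and hexadecapole mentioned above are the standard Legendre multipoles of the 2D redshift-space power spectrum. This definition is generic and not specific to the paper:

    ```latex
    % Legendre multipoles of P_ss(k, mu); ell = 0, 2, 4 give the monopole,
    % quadrupole, and hexadecapole, with L_ell the Legendre polynomials.
    P_\ell(k) = \frac{2\ell + 1}{2} \int_{-1}^{1} P_{ss}(k,\mu)\,
                \mathcal{L}_\ell(\mu)\, d\mu
    ```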

  10. Principles and simulations of high-resolution STM imaging with a flexible tip apex

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Ondřej; Hapala, Prokop; Ondráček, Martin; Jelínek, Pavel

    2017-01-01

    Vol. 95, No. 4 (2017), 1-9, article No. 045407. ISSN 2469-9950. R&D Projects: GA ČR(CZ) GC14-16963J. Institutional support: RVO:68378271. Keywords: STM * AFM * high-resolution. Subject RIV: BM - Solid Matter Physics; Magnetism. OECD field: Condensed matter physics (including formerly solid state physics, supercond.). Impact factor: 3.836, year: 2016

  11. Changes in Moisture Flux Over the Tibetan Plateau During 1979-2011: Insights from a High Resolution Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-01

    Net precipitation (precipitation minus evapotranspiration, P-E) changes from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibetan Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. The high-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. Improvement in simulating precipitation changes at high elevations contributes most to the improved P-E changes. The high-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes, reflected in observed changes in major river basins fed by runoff from the TP. The analysis reveals the contrasting convergence/divergence changes between the northwestern and southeastern TP, and identifies feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.

  12. Testing lowered isothermal models with direct N-body simulations of globular clusters - II. Multimass models

    Science.gov (United States)

    Peuten, M.; Zocchi, A.; Gieles, M.; Hénault-Brunet, V.

    2017-09-01

    Lowered isothermal models, such as the multimass Michie-King models, have been successful in describing observational data of globular clusters. In this study, we assess whether such models are able to describe the phase space properties of evolutionary N-body models. We compare the multimass models as implemented in limepy (Gieles & Zocchi) to N-body models of star clusters with different retention fractions for the black holes and neutron stars evolving in a tidal field. We find that multimass models successfully reproduce the density and velocity dispersion profiles of the different mass components in all evolutionary phases and for different remnants retention. We further use these results to study the evolution of global model parameters. We find that over the lifetime of clusters, radial anisotropy gradually evolves from the low- to the high-mass components and we identify features in the properties of observable stars that are indicative of the presence of stellar-mass black holes. We find that the model velocity scale depends on mass as m^-δ, with δ ≃ 0.5 for almost all models, but the dependence of central velocity dispersion on m can be shallower, depending on the dark remnant content, and agrees well with that of the N-body models. The reported model parameters, and correlations amongst them, can be used as theoretical priors when fitting these types of mass models to observational data.
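
    For context, the mass scaling quoted above can be written as below; the identification of δ = 1/2 with full energy equipartition between mass components is a standard result, not a claim specific to this paper:

    ```latex
    % Velocity scale of mass component j; delta = 1/2 corresponds to full
    % energy equipartition, m_j * s_j^2 = constant across components.
    s_j \propto m_j^{-\delta}, \qquad
    \delta = \tfrac{1}{2} \;\Longleftrightarrow\; m_j\, s_j^{2} = \mathrm{const}
    ```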

  13. Local-scale high-resolution atmospheric dispersion model using large-eddy simulation. LOHDIM-LES

    International Nuclear Information System (INIS)

    Nakayama, Hiromasa; Nagai, Haruyasu

    2016-03-01

    We developed the LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation (LOHDIM-LES). This dispersion model is based on LES, which is effective for reproducing the unsteady behavior of turbulent flows and plume dispersion. The basic equations are the continuity equation, the Navier-Stokes equations, and the scalar conservation equation. Buildings and local terrain variability are resolved by high-resolution grids of a few meters, and their turbulent effects are represented by an immersed boundary method. In simulating atmospheric turbulence, boundary layer flows are generated by a recycling turbulent inflow technique in a driver region set up upstream of the main analysis region. These turbulent inflow data are imposed at the inlet of the main analysis region. By this approach, LOHDIM-LES can provide detailed information on wind velocities and plume concentrations in the investigated area. (author)

  14. The effect of thermal velocities on structure formation in N-body simulations of warm dark matter

    Science.gov (United States)

    Leo, Matteo; Baugh, Carlton M.; Li, Baojiu; Pascoli, Silvia

    2017-11-01

    We investigate the impact of thermal velocities in N-body simulations of structure formation in warm dark matter models. Adopting the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations, in the latter case with and without thermal velocities. This prescription for adding thermal velocities introduces numerical noise into the initial conditions, which influences structure formation. At early times, the noise affects dramatically the power spectra measured from simulations with thermal velocities, with deviations of the order of O(10) (in the matter power spectra) and of the order of O(10²) (in the velocity power spectra) compared to those extracted from simulations without thermal velocities. At late times, these effects are less pronounced with deviations of less than a few percent. Increasing the resolution of the N-body simulation shifts these discrepancies to higher wavenumbers. We also find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is ~3 times larger than in simulations without thermal velocities.
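
    A minimal sketch of the commonly used prescription described above: draw a thermal speed from a Fermi-Dirac-like distribution and add it, with a random isotropic direction, to each particle's gravitationally induced velocity. The characteristic speed v0, the non-relativistic speed variable, and the simple rejection sampler are illustrative assumptions, not the authors' code:

    ```python
    import numpy as np

    def sample_fd_speeds(n, v0, v_max_factor=20.0, seed=0):
        """Draw particle speeds from f(v) dv ∝ v^2 / (exp(v/v0) + 1) dv by
        rejection sampling; v0 sets the characteristic thermal speed."""
        rng = np.random.default_rng(seed)
        v_max = v_max_factor * v0
        v_grid = np.linspace(1e-6, v_max, 4096)
        f_max = (v_grid**2 / (np.exp(v_grid / v0) + 1.0)).max()
        speeds = np.empty(n)
        filled = 0
        while filled < n:
            v_try = rng.uniform(0.0, v_max, n - filled)
            f_try = v_try**2 / (np.exp(v_try / v0) + 1.0)
            keep = rng.uniform(0.0, f_max, n - filled) < f_try
            kept = v_try[keep]
            speeds[filled:filled + kept.size] = kept
            filled += kept.size
        return speeds

    def add_thermal_velocities(velocities, v0, seed=0):
        """Add isotropic thermal kicks to (N, 3) gravitationally induced velocities."""
        rng = np.random.default_rng(seed)
        n = len(velocities)
        speeds = sample_fd_speeds(n, v0, seed=seed)
        z = rng.uniform(-1.0, 1.0, n)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        s = np.sqrt(1.0 - z**2)
        directions = np.column_stack((s * np.cos(phi), s * np.sin(phi), z))
        return velocities + speeds[:, None] * directions
    ```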

  15. Application of the Ewald method to cosmological N-body simulations

    International Nuclear Information System (INIS)

    Hernquist, L.; Suto, Yasushi; Bouchet, F.R.

    1990-03-01

    Fully periodic boundary conditions are incorporated into a gridless cosmological N-body code using the Ewald method. It is shown that the linear evolution of density fluctuations agrees well with analytic calculations, contrary to the case of quasi-periodic boundary conditions where the fundamental mode grows too rapidly. The implementation of fully periodic boundaries is of particular importance to relative comparisons of methods based on hierarchical tree algorithms and more traditional schemes using Fourier techniques such as PM and P³M codes. (author)
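
    The idea behind the Ewald method mentioned above is to split the 1/r kernel into a short-range piece, summed directly over nearby periodic images, and a smooth long-range piece, summed as a rapidly converging Fourier series. A minimal sketch of the split (the splitting parameter α is a tuning choice, not a value from the paper):

    ```latex
    % Ewald split of the pairwise kernel: erfc(alpha r)/r decays quickly and is
    % summed in real space over nearby images, while erf(alpha r)/r is smooth
    % and is handled in Fourier space over the periodic replicas.
    \frac{1}{r} \;=\; \frac{\operatorname{erfc}(\alpha r)}{r}
                \;+\; \frac{\operatorname{erf}(\alpha r)}{r}
    ```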

  16. A new gravitational N-body simulation algorithm for investigation of Lagrangian turbulence in astrophysical and cosmological systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Reinaldo Roberto; Gomes, Vitor; Araujo, Amarisio [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil); Clua, Esteban [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil)

    2011-07-01

    Full text: Turbulent-like behaviour is an important and recent ingredient in the investigation of large-scale structure formation in the observable universe. Recently, an established statistical method was used to demonstrate the importance of considering chaotic advection (or Lagrange turbulence) in combination with gravitational instabilities in the Λ-CDM simulations performed from the Virgo Consortium (VC). However, the Hubble volumes simulated from the GADGET-VC algorithm have some limitations for direct Lagrangian data analysis due to the large amount of data and no real time computation for particle kinetic velocity along the dark matter structure evolution. Hence, the Lab for Computing and Applied Mathematics at INPE, Brazil, has been working for the past two years in computational environments to achieve the so-called COsmic LAgrangian TUrbulence Simulator (COLATUS) allowing N-body simulation from a Lagrangian perspective. The COLATUS prototype, as usual packages, computes gravitational forces with a hierarchical tree algorithm in combination with a local particle kinetic velocity vector in a particle-mesh scheme for long-range gravitational forces. In the present work we show preliminary simulations for 10⁶ particles showing Lagrangian power spectra for individual particles converging to a stable power-law of S(v) ∼ v⁵. The code may be run on an arbitrary number of processors, with a restriction to powers of two. COLATUS has a potential to evaluate complex kinematics of a single particle in a simulated N-body gravitational system. However, to introduce this method as a GNU software further improvements and investigations are necessary. Then, the mapping techniques for the N-body problem incorporating radiation pressure and fluid characteristics by means of smoothed particle hydrodynamics (SPH) are discussed. Finally, we focus on the all-pairs computational kernel and its future GPU implementation using the NVIDIA CUDA programming model.

  17. A new gravitational N-body simulation algorithm for investigation of Lagrangian turbulence in astrophysical and cosmological systems

    International Nuclear Information System (INIS)

    Rosa, Reinaldo Roberto; Gomes, Vitor; Araujo, Amarisio; Clua, Esteban

    2011-01-01

    Full text: Turbulent-like behaviour is an important and recent ingredient in the investigation of large-scale structure formation in the observable universe. Recently, an established statistical method was used to demonstrate the importance of considering chaotic advection (or Lagrange turbulence) in combination with gravitational instabilities in the Λ-CDM simulations performed from the Virgo Consortium (VC). However, the Hubble volumes simulated from the GADGET-VC algorithm have some limitations for direct Lagrangian data analysis due to the large amount of data and no real time computation for particle kinetic velocity along the dark matter structure evolution. Hence, the Lab for Computing and Applied Mathematics at INPE, Brazil, has been working for the past two years in computational environments to achieve the so-called COsmic LAgrangian TUrbulence Simulator (COLATUS) allowing N-body simulation from a Lagrangian perspective. The COLATUS prototype, as usual packages, computes gravitational forces with a hierarchical tree algorithm in combination with a local particle kinetic velocity vector in a particle-mesh scheme for long-range gravitational forces. In the present work we show preliminary simulations for 10⁶ particles showing Lagrangian power spectra for individual particles converging to a stable power-law of S(v) ∼ v⁵. The code may be run on an arbitrary number of processors, with a restriction to powers of two. COLATUS has a potential to evaluate complex kinematics of a single particle in a simulated N-body gravitational system. However, to introduce this method as a GNU software further improvements and investigations are necessary. Then, the mapping techniques for the N-body problem incorporating radiation pressure and fluid characteristics by means of smoothed particle hydrodynamics (SPH) are discussed. Finally, we focus on the all-pairs computational kernel and its future GPU implementation using the NVIDIA CUDA programming model. (author)

  18. Simulation of high-resolution MFM tip using exchange-spring magnet

    Energy Technology Data Exchange (ETDEWEB)

    Saito, H. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan)]. E-mail: hsaito@ipc.akita-u.ac.jp; Yatsuyanagi, D. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ishio, S. [Faculty of Resource Science and Engineering, Akita University, Akita 010-8502 (Japan); Ito, A. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Kawamura, H. [Nitto Optical Co. Ltd., Misato, Akita 019-1403 (Japan); Ise, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Taguchi, K. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan); Takahashi, S. [Research Institute of Advanced Technology Akita, Akita 010-1623 (Japan)

    2007-03-15

    The transfer function of magnetic force microscope (MFM) tips using an exchange-spring trilayer composed of a central soft magnetic layer and two hard magnetic layers was calculated, and the resolution was estimated by considering the thermodynamic noise limit of an MFM cantilever. It was found that reducing the thickness of the central soft magnetic layer and the magnetization of the hard magnetic layers is important for obtaining high resolution. Tips using an exchange-spring trilayer with a very thin FeCo layer and isotropic hard magnetic layers, such as CoPt and FePt, are found to be suitable for obtaining a resolution of less than 10 nm at room temperature.

  19. Simulation and Prediction of Weather Radar Clutter Using a Wave Propagator on High Resolution NWP Data

    DEFF Research Database (Denmark)

    Benzon, Hans-Henrik; Bovith, Thomas

    2008-01-01

    Weather radars are essential sensors for observation of precipitation in the troposphere and play a major part in weather forecasting and hydrological modelling. Clutter caused by non-standard wave propagation is a common problem in weather radar applications, and in this paper a method for prediction of this type of weather radar clutter is presented. The method uses a wave propagator to identify areas of potential non-standard propagation. The wave propagator uses a three dimensional refractivity field derived from the geophysical parameters temperature, humidity, and pressure, obtained from a high-resolution Numerical Weather Prediction (NWP) model. The wave propagator is based on the parabolic equation approximation to the electromagnetic wave equation. The parabolic equation is solved using the well-known Fourier split-step method. Finally, the radar clutter prediction technique is used...
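
    A minimal sketch of the Fourier split-step solution of the (narrow-angle) parabolic equation mentioned above, alternating a free-space diffraction step in the spectral domain with a refractivity phase screen. The discretisation and variable names are illustrative and not the authors' propagator:

    ```python
    import numpy as np

    def split_step_pe(u0, n_profile, k0, dz_grid, dx, n_steps):
        """Narrow-angle split-step Fourier solver for the standard parabolic
        equation. u0: complex field vs height, n_profile: refractive index vs
        height (range-independent in this sketch), k0: free-space wavenumber,
        dz_grid: height spacing, dx: range step."""
        kz = 2.0 * np.pi * np.fft.fftfreq(u0.size, d=dz_grid)   # vertical wavenumbers
        diffraction = np.exp(-1j * kz**2 * dx / (2.0 * k0))     # free-space step (spectral)
        refraction = np.exp(1j * k0 * (n_profile - 1.0) * dx)   # refractivity phase screen
        u = u0.astype(complex)
        for _ in range(n_steps):
            u = np.fft.ifft(diffraction * np.fft.fft(u))
            u = refraction * u
        return u
    ```

    Regions where the refractivity gradient produces ducting then show anomalously strong fields near the surface, which is the signature used to flag potential clutter areas.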

  20. An efficient non hydrostatic dynamical core for high-resolution simulations down to the urban scale

    International Nuclear Information System (INIS)

    Bonaventura, L.; Cesari, D.

    2005-01-01

    Numerical simulations of idealized stratified flows over obstacles at different spatial scales demonstrate the very general applicability and the parallel efficiency of a new non hydrostatic dynamical core for the simulation of mesoscale flows over complex terrain.

  1. Appending High-Resolution Elevation Data to GPS Speed Traces for Vehicle Energy Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wood, E.; Burton, E.; Duran, A.; Gonder, J.

    2014-06-01

    Accurate and reliable global positioning system (GPS)-based vehicle use data are highly valuable for many transportation, analysis, and automotive considerations. Model-based design, real-world fuel economy analysis, and the growing field of autonomous and connected technologies (including predictive powertrain control and self-driving cars) all have a vested interest in high-fidelity estimation of powertrain loads and vehicle usage profiles. Unfortunately, road grade can be a difficult property to extract from GPS data with consistency. In this report, we present a methodology for appending high-resolution elevation data to GPS speed traces via a static digital elevation model. Anomalous data points in the digital elevation model are addressed during a filtration/smoothing routine, resulting in an elevation profile that can be used to calculate road grade. This process is evaluated against a large, commercially available height/slope dataset from the Navteq/Nokia/HERE Advanced Driver Assistance Systems product. Results show good agreement with the Advanced Driver Assistance Systems data in the ability to estimate road grade between any two consecutive points in the contiguous United States.
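
    A minimal sketch of the general workflow described above: look up DEM elevations along the trace, smooth the profile to tame anomalous points, and compute grade as rise over run. The Savitzky-Golay filter stands in for the report's filtration/smoothing routine, and the distance approximation and parameter values are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def grade_from_gps(lat, lon, elevation_m, window=21, polyorder=3):
        """Smooth raw DEM elevations appended to a GPS trace and compute road
        grade. lat/lon in degrees; window must be odd and <= trace length."""
        # approximate along-track distance between consecutive fixes (equirectangular)
        r_earth = 6_371_000.0
        dlat = np.radians(np.diff(lat))
        dlon = np.radians(np.diff(lon)) * np.cos(np.radians(lat[:-1]))
        step_m = r_earth * np.hypot(dlat, dlon)
        dist_m = np.concatenate(([0.0], np.cumsum(step_m)))
        # filter/smooth the raw elevation profile to suppress DEM artifacts
        z_smooth = savgol_filter(elevation_m, window_length=window, polyorder=polyorder)
        # grade (rise over run) between consecutive points; last value repeated
        grade = np.diff(z_smooth) / np.maximum(np.diff(dist_m), 1e-3)
        return dist_m, z_smooth, np.append(grade, grade[-1])
    ```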

  2. Hygromorphic characterization of softwood under high resolution X-ray tomography for hygrothermal simulation

    Science.gov (United States)

    El Hachem, Chady; Abahri, Kamilia; Vicente, Jérôme; Bennacer, Rachid; Belarbi, Rafik

    2018-03-01

    Because of their complex hygromorphic behavior, the microstructure of wooden materials has recently attracted considerable research interest. The first part of this study characterizes the microstructural properties of spruce wood by high resolution X-ray tomography. In the second part, the resulting geometrical parameters are incorporated into an evaluation of hygrothermal transfer in wood. To this end, three-dimensional (3D) volume reconstructions with a voxel size of 0.5 μm were achieved. Post-treatment of the corresponding volumes gave access to the averages and standard deviations of lumen diameters and cell wall thicknesses. These results were obtained for both earlywood and latewood. Further, a segmentation approach for individualizing wood lumens was developed, which addresses an important challenge in understanding localized physical properties. In this context, 3D heat and mass transfers within the real reconstructed geometries were simulated in order to highlight the effect of wood directions on the equivalent conductivity and moisture diffusion coefficients. The results confirm that the softwood cellular structure has a critical impact on the reliability of the studied physical parameters.

  3. K-means clustering for optimal partitioning and dynamic load balancing of parallel hierarchical N-body simulations

    International Nuclear Information System (INIS)

    Marzouk, Youssef M.; Ghoniem, Ahmed F.

    2005-01-01

    A number of complex physical problems can be approached through N-body simulation, from fluid flow at high Reynolds number to gravitational astrophysics and molecular dynamics. In all these applications, direct summation is prohibitively expensive for large N and thus hierarchical methods are employed for fast summation. This work introduces new algorithms, based on k-means clustering, for partitioning parallel hierarchical N-body interactions. We demonstrate that the number of particle-cluster interactions and the order at which they are performed are directly affected by partition geometry. Weighted k-means partitions minimize the sum of clusters' second moments and create well-localized domains, and thus reduce the computational cost of N-body approximations by enabling the use of lower-order approximations and fewer cells. We also introduce compatible techniques for dynamic load balancing, including adaptive scaling of cluster volumes and adaptive redistribution of cluster centroids. We demonstrate the performance of these algorithms by constructing a parallel treecode for vortex particle simulations, based on the serial variable-order Cartesian code developed by Lindsay and Krasny [Journal of Computational Physics 172 (2) (2001) 879-907]. The method is applied to vortex simulations of a transverse jet. Results show outstanding parallel efficiencies even at high concurrencies, with velocity evaluation errors maintained at or below their serial values; on a realistic distribution of 1.2 million vortex particles, we observe a parallel efficiency of 98% on 1024 processors. Excellent load balance is achieved even in the face of several obstacles, such as an irregular, time-evolving particle distribution containing a range of length scales and the continual introduction of new vortex particles throughout the domain. Moreover, results suggest that k-means yields a more efficient partition of the domain than a global oct-tree
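
    A minimal sketch of the weighted k-means partitioning idea described above: centroids move to the weighted mean of their member particles, which minimizes the clusters' weighted second moments and yields well-localized domains. This is a plain serial illustration, not the parallel treecode's implementation:

    ```python
    import numpy as np

    def weighted_kmeans(points, weights, k, n_iter=50, seed=0):
        """Partition N-body particles into k clusters by weighted k-means.
        points: (N, 3) particle positions, weights: (N,) e.g. particle strengths."""
        rng = np.random.default_rng(seed)
        centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            # assign each particle to its nearest centroid
            d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # move each centroid to the weighted mean of its members,
            # which minimizes the cluster's weighted second moment
            for j in range(k):
                members = labels == j
                if members.any():
                    w = weights[members]
                    centroids[j] = (points[members] * w[:, None]).sum(axis=0) / w.sum()
        return labels, centroids
    ```

    Dynamic load balancing can then be layered on top, for example by rescaling cluster volumes or redistributing centroids as the particle distribution evolves, as the abstract describes.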

  4. Monte-Carlo simulation of a high-resolution inverse geometry spectrometer on the SNS. Long Wavelength Target Station

    International Nuclear Information System (INIS)

    Bordallo, H.N.; Herwig, K.W.

    2001-01-01

    Using the Monte-Carlo simulation program McStas, we present the design principles of the proposed high-resolution inverse geometry spectrometer on the SNS-Long Wavelength Target Station (LWTS). The LWTS will provide the high flux of long wavelength neutrons at the requisite pulse rate required by the spectrometer design. The resolution of this spectrometer lies between that routinely achieved by spin echo techniques and the design goal of the high power target station backscattering spectrometer. Covering this niche in energy resolution will allow systematic studies over the large dynamic range required by many disciplines, such as protein dynamics. (author)

  5. BOOSTED TIDAL DISRUPTION BY MASSIVE BLACK HOLE BINARIES DURING GALAXY MERGERS FROM THE VIEW OF N-BODY SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shuo; Berczik, Peter; Spurzem, Rainer [National Astronomical Observatories and Key Laboratory of Computational Astrophysics, Chinese Academy of Sciences, 20A Datun Rd., Chaoyang District, Beijing 100012 (China); Liu, F. K., E-mail: lishuo@nao.cas.cn [Department of Astronomy, School of Physics, Peking University, Yiheyuan Lu 5, Haidian Qu, Beijing 100871 (China)

    2017-01-10

    Supermassive black hole binaries (SMBHBs) are products of the hierarchical galaxy formation model. There are many close connections between a central SMBH and its host galaxy, because the former plays very important roles in galaxy formation and evolution. For this reason, the evolution of SMBHBs in merging galaxies is a fundamental challenge. Since there are many discussions of SMBHB evolution in gas-rich environments, we focus on quiescent galaxies, using tidal disruption (TD) as a diagnostic tool. Our study is based on a series of numerical, large particle number, direct N-body simulations of dry major mergers. According to the simulation results, the evolution can be divided into three phases. In phase I, the TD rate for two well separated SMBHs in a merging system is similar to that for a single SMBH in an isolated galaxy. After the two SMBHs approach closely enough to form a bound binary in phase II, the disruption rate can be enhanced by ∼2 orders of magnitude within a short time. This “boosted” disruption stage finishes after the SMBHB evolves into a compact binary system in phase III, corresponding to a reduction in the disruption rate back to a level a few times higher than in phase I. We also discuss how to correctly extrapolate our N-body simulation results to reality, and the implications of our results for observations.

  6. Very high resolution regional climate model simulations over Greenland: Identifying added value

    DEFF Research Database (Denmark)

    Lucas-Picher, P.; Wulff-Nielsen, M.; Christensen, J.H.

    2012-01-01

    This study presents two simulations of the climate over Greenland with the regional climate model (RCM) HIRHAM5 at 0.05° and 0.25° resolution, driven at the lateral boundaries by the ERA-Interim reanalysis for the period 1989–2009. These simulations are validated against the available observations. However, the bias between the simulations and the few available observations does not reduce with higher resolution. This is partly explained by the lack of observations in regions where the higher resolution is expected to improve the simulated climate. The RCM simulations show that the temperature has increased the most in the northern part of Greenland and at lower elevations over the period 1989–2009. Higher resolution increases the relief variability in the model topography and causes the simulated precipitation to be larger on the coast and smaller over the main ice sheet compared to the coarser-resolution simulation.

  7. Initialization of high resolution surface wind simulations using NWS gridded data

    Science.gov (United States)

    J. Forthofer; K. Shannon; Bret Butler

    2010-01-01

    WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady state. It is currently being modified to allow the user to initialize the flow calculation using the National Digital Forecast Database. This essentially allows the user to downscale the coarse-scale simulations from mesoscale models to...

  8. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    Science.gov (United States)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a 0.45 M⊙ helium (He) white dwarf (WD) by an intermediate mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonations reported in previous studies resulted from unphysical heating due to low resolution, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, as well as 1D mesh simulations whose initial conditions are taken from the flow structure of the 3D SPH simulations. The 1D mesh simulations have higher resolution than the 3D SPH simulations. We show that tidal detonation occurs, and confirm that this result is converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves arise independently in the leading parts of the WD and yield large amounts of 56Ni. Although detonation waves are not generated in the trailing parts of the WD, the trailing parts would be swept by detonation waves generated in the leading parts and would yield large amounts of Si-group elements. Eventually, this He WD TDE would synthesize 0.30 M⊙ of 56Ni and 0.08 M⊙ of Si-group elements, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  9. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    International Nuclear Information System (INIS)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.

    2013-01-01

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme, as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce, along with our choice of decomposition scheme, minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse, with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. We also observe good total energy conservation, and near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6, and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.
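    As background on why a sort by radius is central to such Hénon-type codes, the sketch below shows the standard serial bookkeeping for a spherically symmetric system: once stars are ordered by radius, the gravitational potential at every star follows from two cumulative sums. This is a generic illustration under that assumption, not the authors' parallel implementation; the toy cluster and units are placeholders.

        import numpy as np

        G = 1.0  # N-body units

        def spherical_potential(r, m):
            """phi(r_i) = -G [ M(<r_i)/r_i + sum over exterior stars of m_j/r_j ]."""
            order = np.argsort(r)
            r_s, m_s = r[order], m[order]
            m_enc = np.cumsum(m_s) - m_s                              # mass strictly interior
            outer = np.cumsum((m_s / r_s)[::-1])[::-1] - m_s / r_s    # sum over exterior stars
            phi_sorted = -G * (m_enc / r_s + outer)
            phi = np.empty_like(phi_sorted)
            phi[order] = phi_sorted                                   # back to original ordering
            return phi

        # Toy cluster of 10^5 equal-mass stars with an arbitrary radial distribution.
        rng = np.random.default_rng(0)
        r = rng.pareto(3.0, 100_000) + 0.1
        m = np.full_like(r, 1.0 / r.size)
        phi = spherical_potential(r, m)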

  10. Comparison of Explicitly Simulated and Downscaled Tropical Cyclone Activity in a High-Resolution Global Climate Model

    Directory of Open Access Journals (Sweden)

    Hirofumi Tomita

    2010-01-01

    The response of tropical cyclone activity to climate change is a matter of great inherent interest and practical importance. Most current global climate models are not, however, capable of adequately resolving tropical cyclones; this has led to the development of downscaling techniques designed to infer tropical cyclone activity from the large-scale fields produced by climate models. Here we compare the statistics of tropical cyclones simulated explicitly in a very high resolution (~14 km grid mesh) global climate model to the results of one such downscaling technique driven by the same global model. This is done for a simulation of the current climate and also for a simulation of a climate warmed by the addition of carbon dioxide. The explicitly simulated and downscaled storms are similarly distributed in space, but the intensity distribution of the downscaled events has a somewhat longer high-intensity tail, owing to the higher resolution of the downscaling model. Both explicitly simulated and downscaled events show large increases in the frequency of events at the high-intensity ends of their respective intensity distributions, but the downscaled storms also show increases in low-intensity events, whereas the explicitly simulated weaker events decline in number. On the regional scale, there are large differences in the responses of the explicitly simulated and downscaled events to global warming. In particular, the power dissipation of downscaled events shows a 175% increase in the Atlantic, while the power dissipation of explicitly simulated events declines there.

  11. Evaluation of a high-resolution regional climate simulation over Greenland

    Energy Technology Data Exchange (ETDEWEB)

    Lefebre, Filip [Universite catholique de Louvain, Institut d' Astronomie et de Geophysique G. Lemaitre, Louvain-la-Neuve (Belgium); Vito - Flemish Institute for Technological Research, Integral Environmental Studies, Mol (Belgium); Fettweis, Xavier; Ypersele, Jean-Pascal van; Marbaix, Philippe [Universite catholique de Louvain, Institut d' Astronomie et de Geophysique G. Lemaitre, Louvain-la-Neuve (Belgium); Gallee, Hubert [Laboratoire de Glaciologie et de Geophysique de l' Environnement, Grenoble (France); Greuell, Wouter [Utrecht University, Institute for Marine and Atmospheric Research, Utrecht (Netherlands); Calanca, Pierluigi [Swiss Federal Research Station for Agroecology and Agriculture, Zurich (Switzerland)

    2005-07-01

    A simulation of the 1991 summer has been performed over south Greenland with a coupled atmosphere-snow regional climate model (RCM) forced by the ECMWF re-analysis. The simulation is evaluated with in-situ coastal and ice-sheet atmospheric and glaciological observations. Modelled air temperature, specific humidity, wind speed and radiative fluxes are in good agreement with the available observations, although uncertainties in the radiative transfer scheme need further investigation to improve the model's performance. In the sub-surface snow-ice model, surface albedo is calculated from the simulated snow grain shape and size, snow depth, meltwater accumulation, cloudiness and ice albedo. The use of snow metamorphism processes allows a realistic modelling of the temporal variations in the surface albedo during both melting periods and accumulation events. Concerning the surface albedo, the main finding is that an accurate albedo simulation during the melting season strongly depends on a proper initialization of the surface conditions which mainly result from winter accumulation processes. Furthermore, in a sensitivity experiment with a constant 0.8 albedo over the whole ice sheet, the average amount of melt decreased by more than 60%, which highlights the importance of a correctly simulated surface albedo. The use of this coupled atmosphere-snow RCM offers new perspectives in the study of the Greenland surface mass balance due to the represented feedback between the surface climate and the surface albedo, which is the most sensitive parameter in energy-balance-based ablation calculations. (orig.)

  12. Geant4 simulation of a 3D high resolution gamma camera

    International Nuclear Information System (INIS)

    Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.

    2015-01-01

    The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity, relying on both distance/absorption and Compton scattering techniques, without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance, taking full advantage of Geant4 features that allow construction of the required detector geometry, full control of the incident gamma particles, and study of the detector response in order to test the suggested geometries. Three different geometries are simulated, and each configuration is tested with three different scintillation materials (LaBr3, LYSO and CeBr3).

  13. Surface Wind Regionalization over Complex Terrain: Evaluation and Analysis of a High-Resolution WRF Simulation

    NARCIS (Netherlands)

    Jiménez, P.A.; González-Rouco, J.F.; García-Bustamante, E.; Navarro, J.; Montávez, J.P.; Vilà-Guerau de Arellano, J.; Dudhia, J.; Muñoz-Roldan, A.

    2010-01-01

    This study analyzes the daily-mean surface wind variability over an area characterized by complex topography by comparing observations with a 2-km-spatial-resolution simulation performed with the Weather Research and Forecasting (WRF) model for the period 1992–2005. The evaluation focuses on the...

  14. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985–2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  15. High resolution geodynamo simulations with strongly-driven convection and low viscosity

    Science.gov (United States)

    Schaeffer, Nathanael; Fournier, Alexandre; Jault, Dominique; Aubert, Julien

    2015-04-01

    Numerical simulations have been successful at explaining the magnetic field of the Earth for 20 years. However, the regime in which these simulations operate is in many respects very far from what is expected in the Earth's core. By reviewing previous work, we find that it appears difficult to have both low viscosity (low magnetic Prandtl number) and strong magnetic fields in numerical models (large ratio of magnetic over kinetic energy, a.k.a. inverse squared Alfvén number). In order to better understand the dynamics and turbulence of the core, we have run a series of three simulations with increasingly demanding parameters. The last simulation is at the limit of what today's codes can do on current supercomputers, with a resolution of 2688 grid points in longitude, 1344 in latitude, and 1024 radial levels. We will show various features of these numerical simulations, including what appear as trends when pushing the parameters toward those of the Earth. The dynamics is very rich. From short to long time scales, we observe at large scales: inertial waves, torsional Alfvén waves, columnar convective overturn dynamics and long-term thermal winds. In addition, the dynamics inside and outside the tangent cylinder seem to follow different routes. We find that the ohmic dissipation largely dominates the viscous one and that the magnetic energy dominates the kinetic energy. The magnetic field seems to play an ambiguous role: despite the large magnetic field, which has an important impact on the flow, we find that the force balance for the mean flow is a thermal wind balance, and that the scale of the convective cells is still dominated by viscous effects.

  16. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher-dimensional flood modeling (2D+), as well as for correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e., grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Similar to photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed of data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with the bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance. The ideal application of this high-resolution topography...

  17. A high-resolution code for large eddy simulation of incompressible turbulent boundary layer flows

    KAUST Repository

    Cheng, Wan

    2014-03-01

    We describe a framework for large eddy simulation (LES) of incompressible turbulent boundary layers over a flat plate. This framework uses a fractional-step method with fourth-order finite difference on a staggered mesh. We present several laminar examples to establish the fourth-order accuracy and energy conservation property of the code. Furthermore, we implement a recycling method to generate turbulent inflow. We use the stretched spiral vortex subgrid-scale model and virtual wall model to simulate the turbulent boundary layer flow. We find that the case with Reθ ≈ 2.5 × 10^5 agrees well with available experimental measurements of wall friction, streamwise velocity profiles and turbulent intensities. We demonstrate that for cases with extremely large Reynolds numbers (Reθ = 10^12), the present LES can reasonably predict the flow with a coarse mesh. The parallel implementation of the LES code demonstrates reasonable scaling on O(10^3) cores. © 2013 Elsevier Ltd.
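    As a small illustration of the kind of scheme mentioned above, the snippet below implements the standard fourth-order first-derivative stencil on a uniform staggered mesh and checks its accuracy on a smooth function. It is a generic sketch, not code from the solver described in the abstract.

        import numpy as np

        def ddx_staggered_4th(f_faces, h):
            """Fourth-order d/dx at cell centres from values stored on faces:
            f'(x) ~ 9/8*(f(x+h/2)-f(x-h/2))/h - 1/24*(f(x+3h/2)-f(x-3h/2))/h."""
            fm3, fm1 = f_faces[:-3], f_faces[1:-2]
            fp1, fp3 = f_faces[2:-1], f_faces[3:]
            return (9.0 / 8.0) * (fp1 - fm1) / h - (1.0 / 24.0) * (fp3 - fm3) / h

        # Quick accuracy check on f(x) = sin(x); the error should scale as h^4.
        h = 0.01
        x_faces = np.arange(0.0, 1.0, h) + h / 2
        d = ddx_staggered_4th(np.sin(x_faces), h)
        x_centres = x_faces[1:-2] + h / 2
        print(np.max(np.abs(d - np.cos(x_centres))))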

  18. Air-Sea Interaction Processes in Low and High-Resolution Coupled Climate Model Simulations for the Southeast Pacific

    Science.gov (United States)

    Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.

    2017-12-01

    The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients and extensive but geometrically thin stratocumulus decks, turns the Southeast Pacific (SEP) into a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1° oceanic and 0.5° atmospheric) and (1.125° oceanic and 0.9° atmospheric). The SST error growth in the first six days of integration (fast errors) and that resulting from model drift (saturated errors) are assessed and compared towards evaluating the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3 °C) near the continental borders and negative offshore (−0.1 °C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and more spatially concentrated. This suggests that the processes driving the errors already become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one due to a higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values up to +4 °C. These are accompanied by a damping of the upwelling and a decrease in cloud cover. The high- and low-resolution models presented notable differences in how SST...

  19. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X.

    2011-01-01

    This work presents a computational process for converting patients' tomographic and MRI medical images into voxel structures written to an input file, which is then used in a Monte Carlo simulation code for radiotherapy treatment of tumors. The scenario inherent to the patient is simulated by this process, using the volume element (voxel) as the unit of computational tracing. The head's voxel structure geometry has voxels with volumetric dimensions of around 1 mm^3 and a population of millions, which allows a realistic simulation and reduces the digital image processing needed for adjustments and equalizations. With such additional data from the code, a more critical analysis can be developed to determine the volume of the tumor and the required protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed planning. To execute this computational process, the SAPDI computational system is used for digital image optimization; the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel structures for the input files; and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  1. Phase I and phase II reductive metabolism simulation of nitro aromatic xenobiotics with electrochemistry coupled with high resolution mass spectrometry.

    Science.gov (United States)

    Bussy, Ugo; Chung-Davidson, Yu-Wen; Li, Ke; Li, Weiming

    2014-11-01

    Electrochemistry combined with (liquid chromatography) high resolution mass spectrometry was used to simulate the general reductive metabolism of three biologically important nitro aromatic molecules: 3-trifluoromethyl-4-nitrophenol (TFM), niclosamide, and nilutamide. TFM is a pesticide used in the Laurentian Great Lakes, while niclosamide and nilutamide are used in cancer therapy. First, a flow-through electrochemical cell was directly connected to a high resolution mass spectrometer to evaluate the ability of electrochemistry to produce the main reduction metabolites of the nitro aromatic group: the nitroso, hydroxylamine, and amine functional groups. Electrochemical experiments were then carried out at a constant potential of -2.5 V before analysis of the reduction products by LC-HRMS, which confirmed the presence of the nitroso, hydroxylamine, and amine species as well as dimers. Dimer identification illustrates the reactivity of the nitroso species with the amine and hydroxylamine species. To investigate xenobiotic metabolism, the reactivity of the nitroso species toward biomolecules was also examined. Binding of the nitroso metabolite to glutathione was demonstrated by the observation of adducts by LC-ESI(+)-HRMS and the characteristics of their MS/MS fragmentation. In conclusion, electrochemistry produces the main reductive metabolites of nitro aromatics and supports the observation of nitroso reactivity through dimer or glutathione adduct formation.

  2. Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM

    Science.gov (United States)

    Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou

    2017-04-01

    The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on the provision of future climate change scenario data of high spatial and temporal resolution and quality for West Africa. Such information is strongly required for impact studies in water resources and agriculture and for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO-CLM (COSMO model in CLimate Mode). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS), including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics, including the West African Monsoon, within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.

  3. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    International Nuclear Information System (INIS)

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-01

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple-ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower-speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.
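    For readers unfamiliar with the turbulence diagnostics quoted above, the sketch below computes an rms turbulent intensity and a longitudinal integral length scale from a model one-dimensional energy spectrum E(k), using the standard isotropic-turbulence relations. The spectrum itself is a toy placeholder, not data from the MAESTRO simulations.

        import numpy as np

        def trapezoid(y, x):
            """Simple trapezoidal integral of y(x)."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        k = np.logspace(-3.0, 0.0, 400)                     # wavenumber (1/km), illustrative
        E = k ** (-5.0 / 3.0) * np.exp(-(0.002 / k) ** 2)   # toy Kolmogorov-like E(k)

        kinetic_energy = trapezoid(E, k)                    # K = integral of E(k) dk
        u_rms = np.sqrt(2.0 * kinetic_energy / 3.0)         # per-component turbulent intensity
        L_integral = 3.0 * np.pi / 4.0 * trapezoid(E / k, k) / kinetic_energy

        print(u_rms, L_integral)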

  4. Kinetic energy spectra, vertical resolution and dissipation in high-resolution atmospheric simulations.

    Science.gov (United States)

    Skamarock, W. C.

    2017-12-01

    We have performed week-long full-physics simulations with the MPAS global model at 15 km cell spacing using vertical mesh spacings of 800, 400, 200 and 100 meters in the mid-troposphere through the mid-stratosphere. We find that the horizontal kinetic energy spectra in the upper troposphere and stratosphere do not converge with increasing vertical resolution until we reach 200 meter level spacing. Examination of the solutions indicates that significant inertia-gravity waves are not vertically resolved at the lower vertical resolutions. Diagnostics from the simulations indicate that the primary kinetic energy dissipation results from the vertical mixing within the PBL parameterization and from the gravity-wave drag parameterization, with smaller but significant contributions from damping in the vertical transport scheme and from the horizontal filters in the dynamical core. Most of the kinetic energy dissipation in the free atmosphere occurs within breaking mid-latitude baroclinic waves. We will briefly review these results and their implications for atmospheric model configuration and for atmospheric dynamics, specifically the dynamics associated with the mesoscale kinetic energy spectrum.
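    A minimal sketch of how a horizontal kinetic energy spectrum of the kind discussed above can be computed from doubly periodic gridded velocity components via the FFT, binning spectral energy by total horizontal wavenumber. This is illustrative only and is not the MPAS diagnostic; the random input fields are placeholders.

        import numpy as np

        def ke_spectrum(u, v):
            """Return (k, E(k)) with k in integer grid-wavenumber units."""
            ny, nx = u.shape
            uk = np.fft.fft2(u) / (nx * ny)
            vk = np.fft.fft2(v) / (nx * ny)
            ke2d = 0.5 * (np.abs(uk) ** 2 + np.abs(vk) ** 2)
            kx = np.fft.fftfreq(nx) * nx
            ky = np.fft.fftfreq(ny) * ny
            kmag = np.rint(np.hypot(kx[None, :], ky[:, None])).astype(int)
            spec = np.bincount(kmag.ravel(), weights=ke2d.ravel())
            kmax = min(nx, ny) // 2
            return np.arange(spec.size)[1:kmax], spec[1:kmax]

        rng = np.random.default_rng(0)
        u = rng.standard_normal((256, 256))   # placeholder velocity components
        v = rng.standard_normal((256, 256))
        k, E = ke_spectrum(u, v)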

  5. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    Science.gov (United States)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

    The world's fastest supercomputer, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in the global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is done in a Multi Program Multi Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (only 14% of the full resources of the Earth Simulator).

  6. Changes in snow cover over China in the 21st century as simulated by a high resolution regional climate model

    International Nuclear Information System (INIS)

    Shi Ying; Gao Xuejie; Wu Jia; Giorgi, Filippo

    2011-01-01

    On the basis of climate change simulations conducted using a high resolution regional climate model, the Abdus Salam International Centre for Theoretical Physics (ICTP) Regional Climate Model RegCM3, at 25 km grid spacing, future changes in snow cover over China are analyzed. The simulations are carried out for the period 1951–2100 following the IPCC SRES A1B emission scenario. The results show that the model performs well in simulating the number of snow cover days and the snow cover depth, as well as the starting and ending dates of snow cover, for the present day (1981–2000). Their spatial distributions and amounts show fair consistency between the simulation and observations, although with some discrepancies. In general, decreases in the number of snow cover days and the snow cover depth, together with postponed snow starting dates and advanced snow ending dates, are simulated for the future, except in some places where the opposite appears. The most dramatic changes are found over the Tibetan Plateau among the three major snow cover areas of Northeast China, Northwest China and the Tibetan Plateau.

  7. VAST PLANES OF SATELLITES IN A HIGH-RESOLUTION SIMULATION OF THE LOCAL GROUP: COMPARISON TO ANDROMEDA

    International Nuclear Information System (INIS)

    Gillet, N.; Ocvirk, P.; Aubert, D.; Knebe, A.; Yepes, G.; Libeskind, N.; Gottlöber, S.; Hoffman, Y.

    2015-01-01

    We search for vast planes of satellites (VPoS) in a high-resolution simulation of the Local Group performed by the CLUES project, which significantly improves on the resolution of previous similar studies. We use a simple method for detecting planar configurations of satellites, and validate it on the known plane of M31. We implement a range of prescriptions for modeling the satellite populations, roughly reproducing the variety of recipes used in the literature, and investigate the occurrence and properties of planar structures in these populations. The structure of the simulated satellite systems is strongly non-random and contains planes of satellites, predominantly co-rotating, with, in some cases, sizes comparable to the plane observed in M31 by Ibata et al. However, the latter is slightly richer in satellites, slightly thinner, and has stronger co-rotation, which makes it stand out as overall more exceptional than the simulated planes when compared to a random population. Although the simulated planes we find are generally dominated by one real structure forming their backbone, they are also partly fortuitous and are thus not kinematically coherent structures as a whole. Provided that the simulated and observed planes of satellites are indeed of the same nature, our results suggest that the VPoS of M31 is not a coherent disk and that one-third to one-half of its satellites must have large proper motions perpendicular to the plane.
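    As an illustration of one simple way to quantify a planar configuration of satellites (not necessarily the detection method used in the paper), the sketch below fits a plane through satellite positions using the eigenvectors of their second-moment tensor and reports the rms thickness perpendicular to it. The mock satellite sample is a placeholder.

        import numpy as np

        def plane_thickness(pos):
            """pos: (N, 3) satellite positions relative to the host galaxy."""
            x = pos - pos.mean(axis=0)
            tensor = x.T @ x / len(x)              # 3x3 second-moment tensor
            evals, evecs = np.linalg.eigh(tensor)  # eigenvalues in ascending order
            normal = evecs[:, 0]                   # minor axis = plane normal
            rms_thickness = np.sqrt(evals[0])      # rms extent along the normal
            return normal, rms_thickness

        # Mock flattened satellite system: 27 satellites, ~10x thinner in z.
        rng = np.random.default_rng(2)
        sats = rng.normal(size=(27, 3)) * np.array([150.0, 150.0, 15.0])  # kpc
        normal, thickness = plane_thickness(sats)
        print(normal, thickness)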

  8. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher-resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), but merely as a case-study approach. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events used for numerical simulations, in addition to extending the integration time and the horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  9. Air-sea exchange over Black Sea estimated from high resolution regional climate simulations

    Science.gov (United States)

    Velea, Liliana; Bojariu, Roxana; Cica, Roxana

    2013-04-01

    The Black Sea is an important factor influencing the climate of the bordering countries, showing cyclogenetic activity (Trigo et al., 1999) and influencing Mediterranean cyclones passing over. As for other seas, standard observations of the atmosphere are limited in time and space, and available observation-based estimations of the air-sea exchange terms present quite large ranges of uncertainty. The reanalysis datasets (e.g., ERA produced by ECMWF) provide promising validation estimates of climatic characteristics against those in available climatic data (Schrum et al., 2001), but cannot reproduce some local features due to their relatively coarse horizontal resolution. Detailed and realistic information on smaller-scale processes is expected to be provided by regional climate models, owing to continuous improvements in physical parameterizations and numerical solutions that afford simulations at high spatial resolution. The aim of this study is to assess the potential of three regional climate models in reproducing known climatological characteristics of the air-sea exchange over the Black Sea, as well as to explore the added value of the models compared to the input (reanalysis) data. We employ the results of long-term (1961-2000) simulations performed within the ENSEMBLES project (http://ensemblesrt3.dmi.dk/) using the models ETHZ-CLM, CNRM-ALADIN and METO-HadCM, for which the integration domain covers the whole area of interest. The analysis is performed for the entire basin for several variables entering the heat and water budget terms and available as direct output from the models, at seasonal and annual scales. A comparison with independent data (ERA-Interim) and findings from other studies (e.g., Schrum et al., 2001) is also presented. References: Schrum, C., Staneva, J., Stanev, E. and Ozsoy, E., 2001: Air-sea exchange in the Black Sea estimated from atmospheric analysis for the period 1979-1993, J. Marine Systems, 31, 3-19. Trigo, I. F., T. D. Davies, and G. R. Bigg (1999): Objective...

  10. Development of numerical simulation technology for high resolution thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.

    2012-04-01

    A realistic simulation of two-phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi-dimensional analysis of thermal hydraulics in nuclear reactor components is further increasing with advanced design features, such as a direct vessel injection system, a gravity-driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three-dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi-dimensional, two-phase flows in nuclear reactor components. The code was designed for use as a component-scale code and/or as a three-dimensional component that can be coupled with a system code. This report presents an overview of the CUPID code development and preliminary assessment, mainly focusing on the numerical solution method and its verification and validation. The CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach for the validation and improvement of the physical models is still needed.

  11. Interactive desktop analysis of high resolution simulations: application to turbulent plume dynamics and current sheet formation

    International Nuclear Information System (INIS)

    Clyne, John; Mininni, Pablo; Norton, Alan; Rast, Mark

    2007-01-01

    The ever increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, have resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504^2 by 2048 points for the first and 1536^3 points for the second, to overwhelm the interactive capabilities of typically available analysis resources.

  12. Forecasting wildland fire behavior using high-resolution large-eddy simulations

    Science.gov (United States)

    Munoz-Esparza, D.; Kosovic, B.; Jimenez, P. A.; Anderson, A.; DeCastro, A.; Brown, B.

    2017-12-01

    Wildland fires are responsible for large socio-economic impacts. Fires affect the environment, damage structures, threaten lives, cause health issues, and involve large suppression costs. These impacts can be mitigated via accurate fire spread forecasts that inform the incident management team. To this end, the state of Colorado is funding the development of the Colorado Fire Prediction System (CO-FPS). The system is based on the Weather Research and Forecasting (WRF) model enhanced with a fire behavior module (WRF-Fire). Realistic representation of wildland fire behavior requires explicit representation of small-scale weather phenomena to properly account for coupled atmosphere-wildfire interactions. Moreover, the transport and dispersion of biomass burning emissions from wildfires is controlled by turbulent processes in the atmospheric boundary layer, which are difficult to parameterize and typically lead to large errors when simplified source estimation and injection height methods are used. Therefore, we utilize turbulence-resolving large-eddy simulations at a resolution of 111 m to forecast fire spread and smoke distribution using a coupled atmosphere-wildfire model. This presentation will describe our improvements to the level-set-based fire-spread algorithm in WRF-Fire and an evaluation of the operational system using 12 wildfire events that occurred in Colorado in 2016, as well as other historical fires. In addition, the benefits of explicit representation of turbulence for smoke transport and dispersion will be demonstrated.
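    To illustrate the idea behind a level-set description of the fire perimeter (a generic sketch, not the WRF-Fire implementation), the snippet below advances a signed-distance field phi, whose zero contour is the fire front, with a first-order Godunov upwind scheme at a constant rate of spread. The grid spacing, rate of spread and time step are illustrative values.

        import numpy as np

        def level_set_step(phi, R, dx, dt):
            """One forward-Euler step of d(phi)/dt = -R |grad phi| (expanding front, R >= 0)."""
            dpx = (np.roll(phi, -1, axis=1) - phi) / dx   # forward differences
            dmx = (phi - np.roll(phi, 1, axis=1)) / dx    # backward differences
            dpy = (np.roll(phi, -1, axis=0) - phi) / dx
            dmy = (phi - np.roll(phi, 1, axis=0)) / dx
            # Godunov upwinding for an outward-propagating front.
            grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2 +
                           np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
            return phi - dt * R * grad

        # Toy example: circular ignition spreading at 0.5 m/s on a 111 m grid.
        n, dx = 200, 111.0
        y, x = np.mgrid[0:n, 0:n] * dx
        phi = np.hypot(x - x.mean(), y - y.mean()) - 300.0   # signed distance to ignition
        for _ in range(100):
            phi = level_set_step(phi, R=0.5, dx=dx, dt=60.0)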

  13. Flooding Simulation of Extreme Event on Barnegat Bay by High-Resolution Two Dimensional Hydrodynamic Model

    Science.gov (United States)

    Wang, Y.; Ramaswamy, V.; Saleh, F.

    2017-12-01

    Barnegat Bay is located on the east coast of New Jersey, United States, and is separated from the Atlantic Ocean by the narrow Barnegat Peninsula, which acts as a barrier island. The bay is fed by several rivers which empty through small estuaries along the inner shore. In terms of vulnerability to flooding, the Barnegat Peninsula is under the influence of both coastal storm surge and riverine flooding. Barnegat Bay was hit by Hurricane Sandy, which caused flood damage with extensive cross-island flow along many streets perpendicular to the shoreline. The objective of this work is to identify and quantify the sources of flooding using a two-dimensional inland hydrodynamic model. The hydrodynamic model was forced by three observed coastal boundary conditions and one hydrologic boundary condition from the United States Geological Survey (USGS). The model reliability was evaluated with both the FEMA spatial flooding extent and USGS high water marks. The simulated flooding extent showed good agreement with the reanalysis spatial inundation extents. The results offered important perspectives on the flow of water into the bay and on the velocity and depth of the inundated areas. Such information can enable emergency managers and decision makers to plan evacuation and deploy flood defenses.

  14. Estimating non-circular motions in barred galaxies using numerical N-body simulations

    Science.gov (United States)

    Randriamampandry, T. H.; Combes, F.; Carignan, C.; Deg, N.

    2015-12-01

    The observed velocities of the gas in barred galaxies are a combination of the azimuthally averaged circular velocity and non-circular motions, primarily caused by gas streaming along the bar. These non-circular flows must be accounted for before the observed velocities can be used in mass modelling. In this work, we examine the performance of the tilted-ring method and the DISKFIT algorithm for transforming velocity maps of barred spiral galaxies into rotation curves (RCs) using simulated data. We find that the tilted-ring method, which does not account for streaming motions, under-/overestimates the circular motions when the bar is parallel/perpendicular to the projected major axis. DISKFIT, which does include streaming motions, is limited to orientations where the bar is not aligned with either the major or minor axis of the image. Therefore, we propose a method of correcting RCs based on numerical simulations of galaxies. We correct the RC derived from the tilted-ring method based on a numerical simulation of a galaxy with similar properties and projections as the observed galaxy. Using observations of NGC 3319, which has a bar aligned with the major axis, as a test case, we show that the inferred mass models from the uncorrected and corrected RCs are significantly different. These results show the importance of correcting for the non-circular motions and demonstrate that new methods of accounting for these motions are necessary as current methods fail for specific bar alignments.
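    For reference, the sketch below evaluates the purely circular, axisymmetric line-of-sight velocity model that underlies the tilted-ring method, and notes in a comment how a radial streaming term of the kind driven by a bar would enter. It is a generic illustration, not the DISKFIT or tilted-ring fitting code itself.

        import numpy as np

        def los_velocity(vc, theta, inc, vsys=0.0):
            """Project a circular speed vc at disk azimuth theta (rad, measured from
            the major axis) and inclination inc (rad) onto the line of sight:
            V_los = V_sys + V_c cos(theta) sin(i)."""
            return vsys + vc * np.cos(theta) * np.sin(inc)

        # A bar-driven radial streaming term v_r would add non-circular motion, e.g.
        # V_los = V_sys + (V_c cos(theta) + v_r sin(theta)) sin(i).
        theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
        print(los_velocity(vc=180.0, theta=theta, inc=np.radians(60.0)))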

  15. Impact of irrigations on simulated convective activity over Central Greece: A high resolution study

    Science.gov (United States)

    Kotsopoulos, S.; Tegoulias, I.; Pytharoulis, I.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    The aim of this research is to investigate the impact of irrigation on the characteristics of convective activity simulated by the non-hydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW, version 3.5.1) under different upper-air synoptic conditions in central Greece. To this end, 42 cases, equally distributed among the six most frequent upper-air synoptic conditions associated with convective activity in the region of interest, were utilized considering two different soil moisture scenarios. In the first scenario, the model was initialized with the surface soil moisture of the ECMWF analysis data, which usually does not take into account the modification of soil moisture due to agricultural activity in the area of interest. In the second scenario, the soil moisture in the upper soil layers of the study area was raised to field capacity for the irrigated cropland. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - the Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. The model results indicate a strong dependence of the convective spatiotemporal characteristics on the soil moisture difference between the two scenarios. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
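    A minimal sketch of the second scenario's initialization step as described above (illustrative only, not the actual WRF preprocessing): the upper-layer soil moisture is raised to field capacity wherever a land-use mask marks irrigated cropland. The array shapes, field-capacity value and mask are placeholder assumptions.

        import numpy as np

        # Mock soil moisture (m3/m3) on 4 soil layers over a 200 x 200 grid.
        rng = np.random.default_rng(0)
        soil_moisture = rng.uniform(0.10, 0.25, size=(4, 200, 200))
        field_capacity = 0.32                      # illustrative value
        irrigated = np.zeros((200, 200), dtype=bool)
        irrigated[80:140, 60:160] = True           # mock irrigated-cropland mask

        # Raise the two upper layers to at least field capacity over irrigated cells.
        soil_moisture[:2, irrigated] = np.maximum(soil_moisture[:2, irrigated],
                                                  field_capacity)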

  16. Coating Thickness Measurement of the Simulated TRISO-Coated Fuel Particles using an Image Plate and a High Resolution Scanner

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Kim, Yeon Ku; Jeong, Kyung Chai; Lee, Young Woo; Kim, Bong Goo; Eom, Sung Ho; Kim, Young Min; Yeo, Sung Hwan; Cho, Moon Sung

    2014-01-01

    In this study, the thickness of the coating layers of 196 coated particles was measured using an Image Plate detector, a high resolution scanner and digital image processing techniques. The experimental results are as follows. - An X-ray image was acquired for 196 simulated TRISO-coated fuel particles with ZrO2 kernels using a high-resolution Image Plate in a reduced amount of time. - We could observe clear boundaries between the coating layers for all 196 particles. - The geometric distortion error was compensated for in the calculation. - The coating thickness of TRISO-coated fuel particles can be nondestructively measured using X-ray radiography and digital image processing technology. - We can increase the number of TRISO-coated particles to be inspected by increasing the number of Image Plate detectors. A TRISO-coated fuel particle for an HTGR (high temperature gas-cooled reactor) is composed of a nuclear fuel kernel and outer coating layers. The coating layers consist of a buffer PyC (pyrolytic carbon) layer and inner PyC (I-PyC), SiC, and outer PyC (O-PyC) layers. The coating thickness is measured to evaluate the soundness of the coating layers. X-ray radiography is one of the nondestructive alternatives for measuring the coating thickness without generating radioactive waste. Several billion particles are to be loaded in a reactor, so as many sample particles as possible should be tested. Previously acquired X-ray images for the measurement of coating thickness included only a small number of particles because of the restricted resolution and size of the X-ray detector. We tried to test many particles per X-ray exposure to reduce the measurement time. In this experiment, an X-ray image was acquired for 196 simulated TRISO-coated fuel particles using an image plate and a high resolution scanner with a pixel size of 25 × 25 μm^2. The coating thickness of the particles could be measured from the image.
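    As an illustration of how layer thicknesses can be read off a radial X-ray intensity profile (a generic sketch, not the authors' processing chain), the snippet below locates layer boundaries as the strongest intensity jumps and converts pixel separations to micrometres using the 25 μm pixel size quoted above. The synthetic profile is a placeholder.

        import numpy as np

        PIXEL_UM = 25.0   # scanner pixel pitch quoted in the abstract

        def layer_thicknesses(profile, n_edges=4):
            """profile: 1-D intensity sampled outward from the kernel centre.
            Returns the thicknesses (um) of the layers between the detected edges."""
            grad = np.abs(np.diff(profile.astype(float)))
            edges = np.sort(np.argsort(grad)[-n_edges:])   # strongest n_edges jumps
            return np.diff(edges) * PIXEL_UM

        # Toy profile: kernel, buffer, I-PyC, SiC, O-PyC as flat segments with noise.
        rng = np.random.default_rng(0)
        profile = np.concatenate([np.full(20, v) for v in (200, 120, 90, 40, 80)]) \
                  + rng.normal(0.0, 2.0, 100)
        print(layer_thicknesses(profile))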

  17. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL; McManamay, Ryan A [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; Surendran Nair, Sujithkumar [ORNL

    2016-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification. Acknowledgment: This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
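    A minimal sketch of the general Monte-Carlo idea described above (not the ORNL model itself): uncertainty in the housing-unit count of a grid cell and in per-household consumption is propagated by random sampling, and the cell's annual electricity demand is summarized with a mean and a 90% interval. All distributions and parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        n_draws = 10_000

        households = rng.poisson(lam=350, size=n_draws)              # uncertain housing count
        kwh_per_household = rng.lognormal(mean=np.log(10_500.0),
                                          sigma=0.25, size=n_draws)  # uncertain intensity
        cell_demand_mwh = households * kwh_per_household / 1000.0

        print("mean: %.0f MWh" % cell_demand_mwh.mean())
        print("90%% interval: %.0f to %.0f MWh"
              % tuple(np.percentile(cell_demand_mwh, [5, 95])))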

  18. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations

    Energy Technology Data Exchange (ETDEWEB)

    Di Luca, Alejandro; Laprise, Rene [Universite du Quebec a Montreal (UQAM), Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Departement des Sciences de la Terre et de l' Atmosphere, PK-6530, Succ. Centre-ville, B.P. 8888, Montreal, QC (Canada); De Elia, Ramon [Universite du Quebec a Montreal, Ouranos Consortium, Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal (Canada)

    2012-03-15

    Regional Climate Models (RCMs) constitute the most often used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order for the RCM technique to generate added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of the fine-scale precipitation variance required to adequately describe a given climate statistic is then used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season compared to the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing evaluation of the sensitivity to changing resolution in the real climate system. The results show that RCMs tend to reproduce the PAV relatively well compared to observations, although with an overestimation of the PAV in the warm season and in mountainous regions. (orig.)
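    The sketch below illustrates the kind of scale decomposition that underlies such a potential-added-value diagnostic (not the authors' exact estimator): a fine-grid precipitation field is aggregated to a coarse grid, re-expanded, and the share of variance carried by the unresolved fine scales is reported. The synthetic field and block size are placeholders.

        import numpy as np

        def fine_scale_variance_ratio(field, block=8):
            """Fraction of the field's variance not representable on a grid
            coarsened by 'block' in each direction."""
            ny, nx = field.shape
            ny, nx = ny - ny % block, nx - nx % block
            f = field[:ny, :nx]
            coarse = f.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
            background = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
            fine = f - background
            return fine.var() / f.var()

        # Toy precipitation-like field (gamma-distributed, spatially uncorrelated).
        precip = np.random.default_rng(3).gamma(shape=0.8, scale=4.0, size=(256, 256))
        print(fine_scale_variance_ratio(precip, block=8))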

  19. Satellite alignment. I. Distribution of substructures and their dependence on assembly history from n-body simulations

    International Nuclear Information System (INIS)

    Wang, Yang Ocean; Lin, W. P.; Yu, Yu; Kang, X.; Dutton, Aaron; Macciò, Andrea V.

    2014-01-01

    Observations have shown that the spatial distribution of satellite galaxies is not random, but aligned with the major axes of central galaxies. This alignment is dependent on galaxy properties, such that red satellites are more strongly aligned than blue satellites. Theoretical work conducted to interpret this phenomenon has found that it is due to the non-spherical nature of dark matter halos. However, most studies overpredict the alignment signal under the assumption that the central galaxy shape follows the shape of the host halo. It is also not clear whether the color dependence of alignment is due to an assembly bias or an evolution effect. In this paper we study these problems using a cosmological N-body simulation. Subhalos are used to trace the positions of satellite galaxies. It is found that the shapes of dark matter halos are misaligned at different radii. If the central galaxy shares the same shape as the inner host halo, then the alignment effect is weaker and agrees with observational data. However, it predicts almost no dependence of alignment on the color of satellite galaxies, even though the late-accreted subhalos show stronger alignment with the outer layer of the host halo than their early-accreted counterparts. We find that this is due to the limitation of pure N-body simulations, where satellite galaxies without associated subhalos ('orphan galaxies') are not resolved. These orphan (mostly red) satellites often reside in the inner region of host halos and should follow the shape of the host halo in the inner region.

  20. High resolution crop growth simulation for identification of potential adaptation strategies under climate change

    Science.gov (United States)

    Kim, K. S.; Yoo, B. H.

    2016-12-01

    Impact assessment of climate change on crop production would facilitate planning of adaptation strategies. Because socio-environmental conditions differ by local area, it would be advantageous to assess potential adaptation measures for a specific area. The objectives of this study were to develop a crop growth simulation system at a very high spatial resolution, e.g., 30 m, and to assess different adaptation options including shifts of planting date and use of different cultivars. The Decision Support System for Agrotechnology Transfer (DSSAT) model was used to predict yields of soybean and maize in Korea. Gridded data for climate and soil were used to prepare input data for the DSSAT model. Weather input data were prepared at the resolution of 30 m using bilinear interpolation from gridded climate scenario data. Those climate data were obtained from the Korean Meteorology Administration. The spatial resolution of temperature and precipitation was 1 km, whereas that of solar radiation was 12.5 km. Soil series data at the 30 m resolution were obtained from the soil database operated by the Rural Development Administration, Korea. The SOL file, which is a soil input file for the DSSAT model, was prepared using physical and chemical properties of a given soil series, which were available from the soil database. Crop yields were predicted for potential adaptation options based on planting date and cultivar. For example, 10 planting dates and three cultivars were used to identify ideal management options for climate change adaptation. In prediction of maize yield, a combination of 20 planting dates and two cultivars was used as management options. Predicted crop yields differed by site even within a relatively small region. For example, the maximum of the average yields for the 2001-2010 seasons differed by site within a county whose area is 520 km2 (Fig. 1). There was also spatial variation in the ideal management option in the region (Fig. 2). These results suggested that local
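
The weather-downscaling step mentioned above, bilinear interpolation from the 1 km grid to 30 m sites, is a standard operation; the snippet below is a generic sketch of it under an assumed toy temperature tile, not the study's preprocessing code.

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation of a 2-D field defined on integer grid indices
    to a fractional position (x, y); a generic sketch of the downscaling step."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, grid.shape[1] - 1), min(y0 + 1, grid.shape[0] - 1)
    tx, ty = x - x0, y - y0
    top = (1 - tx) * grid[y0, x0] + tx * grid[y0, x1]
    bottom = (1 - tx) * grid[y1, x0] + tx * grid[y1, x1]
    return (1 - ty) * top + ty * bottom

tmax_1km = np.array([[21.0, 22.5],      # toy 2 x 2 tile of daily Tmax (deg C)
                     [20.0, 23.0]])
print(f"interpolated Tmax at fractional cell (0.3, 0.7): "
      f"{bilinear(tmax_1km, x=0.3, y=0.7):.2f} deg C")
```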

  1. GRAPE-5: A Special-Purpose Computer for N-body Simulation

    OpenAIRE

    Kawai, Atsushi; Fukushige, Toshiyuki; Makino, Junichiro; Taiji, Makoto

    1999-01-01

    We have developed a special-purpose computer for gravitational many-body simulations, GRAPE-5. GRAPE-5 is the successor of GRAPE-3. Both consist of eight custom pipeline chips (the G5 chip and the GRAPE chip, respectively). The differences between GRAPE-5 and GRAPE-3 are: (1) The G5 chip contains two pipelines operating at 80 MHz, while the GRAPE chip had one at 20 MHz. Thus, the calculation speed of the G5 chip and of the GRAPE-5 board is 8 times that of the GRAPE chip and the GRAPE-3 board. (2) The GRAPE-5 ...
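
For orientation, what a GRAPE pipeline evaluates in hardware is the softened pairwise gravitational interaction, summed over all particles for each target particle. A plain software version of that inner loop (with G = 1; the softening length and particle data are arbitrary illustrative choices) could look like this:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-2):
    """Direct-summation gravitational accelerations with Plummer softening eps,
    i.e. the pairwise pipeline that GRAPE-class hardware evaluates (G = 1)."""
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]     # (N, N, 3) separations
    r2 = (dx ** 2).sum(axis=-1) + eps ** 2                 # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                          # exclude self-interaction
    return (dx * (mass[np.newaxis, :] * inv_r3)[:, :, np.newaxis]).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.standard_normal((256, 3))                        # toy particle positions
mass = np.full(256, 1.0 / 256)                             # equal-mass particles
print(accelerations(pos, mass)[:2])                        # accelerations of first two
```

The appeal of such hardware is that this O(N^2) pair loop, the dominant cost of a direct N-body integration, runs on dedicated pipelines while the host computer handles the rest of the time integration.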

  2. North Atlantic Tropical Cyclones: historical simulations and future changes with the new high-resolution Arpege AGCM.

    Science.gov (United States)

    Pilon, R.; Chauvin, F.; Palany, P.; Belmadani, A.

    2017-12-01

    A new version of the variable high-resolution Meteo-France Arpege atmospheric general circulation model (AGCM) has been developed for tropical cyclone (TC) studies, with a focus on the North Atlantic basin, where the model horizontal resolution is 15 km. Ensemble historical AMIP (Atmospheric Model Intercomparison Project)-type simulations (1965-2014) and future projections (2020-2080) under the IPCC (Intergovernmental Panel on Climate Change) representative concentration pathway (RCP) 8.5 scenario have been produced. A TC-like vortex tracking algorithm is used to investigate TC activity and variability. TC frequency, genesis, geographical distribution and intensity are examined. Historical simulations are compared to best-track and reanalysis datasets. Model TC frequency is generally realistic but tends to be too high during the first decade of the historical simulations. Biases appear to originate from both the tracking algorithm and the model climatology. Nevertheless, the model is able to simulate extremely well the intense TCs corresponding to category 5 hurricanes in the North Atlantic, where grid resolution is highest. The interaction between developing TCs and vertical wind shear is shown to be a contributing factor to TC variability. Future changes in TC activity and properties are also discussed.

  3. High-resolution simulations of cylindrical void collapse in energetic materials: Effect of primary and secondary collapse on initiation thresholds

    Science.gov (United States)

    Rai, Nirmal Kumar; Schmidt, Martin J.; Udaykumar, H. S.

    2017-04-01

    Void collapse in energetic materials leads to hot spot formation and enhanced sensitivity. Much recent work has been directed towards simulation of collapse-generated reactive hot spots. The resolution of voids in calculations to date has varied, as have the resulting predictions of hot spot intensity. Here we determine the resolution required for reliable cylindrical void collapse calculations leading to initiation of chemical reactions. High-resolution simulations of collapse provide new insights into the mechanism of hot spot generation. It is found that initiation can occur in two different modes depending on the loading intensity: either the initiation occurs due to jet impact at the first collapse instant, or it occurs at secondary lobes at the periphery of the collapsed void. A key observation is that secondary lobe collapse leads to large local temperatures that initiate reactions. This is due to a combination of a strong blast wave from the site of primary void collapse and strong colliding jets and vortical flows generated during the collapse of the secondary lobes. The secondary lobe collapse results in a significant lowering of the predicted threshold for ignition of the energetic material. The results suggest that mesoscale simulations of void fields may suffer from significant uncertainty in threshold predictions because unresolved calculations cannot capture the secondary lobe collapse phenomenon. The implications of this uncertainty for mesoscale simulations are discussed in this paper.

  4. Scalable streaming tools for analyzing N-body simulations: Finding halos and investigating excursion sets in one pass

    Science.gov (United States)

    Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.

    2018-04-01

    Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis of large simulation datasets, e.g., finding halos or obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the prohibitively large datasets of modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaled to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU acceleration, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ∼10^4-10^5, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10^12 particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
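
The streaming approach makes a single pass over the particle stream while keeping only a compact sketch of cell occupancies, from which the apparently densest cells (proxies for the largest halo centres) can be read off. The toy below uses a Count-Min sketch over mesh-cell IDs; the specific sketch, mesh size, candidate bookkeeping and all parameters are illustrative assumptions, not the paper's algorithm.

```python
import heapq
import numpy as np

class CountMin:
    """Minimal Count-Min sketch for streaming counts of grid-cell IDs."""
    def __init__(self, width=2048, depth=4, seed=0):
        rng = np.random.default_rng(seed)
        self.width = width
        self.salts = [int(s) for s in rng.integers(1, 2**61 - 1, size=depth)]
        self.table = np.zeros((depth, width), dtype=np.int64)

    def add(self, key):
        for row, salt in enumerate(self.salts):
            self.table[row, hash((salt, key)) % self.width] += 1

    def estimate(self, key):
        return min(int(self.table[row, hash((salt, key)) % self.width])
                   for row, salt in enumerate(self.salts))

rng = np.random.default_rng(2)
particles = rng.normal(0.5, 0.1, size=(100_000, 3)) % 1.0   # toy clustered stream
cm, candidates = CountMin(), {}
for p in particles:                                          # single pass over the stream
    cell = tuple(int(c) for c in p * 64)                     # 64^3 mesh cell ID
    cm.add(cell)
    candidates[cell] = cm.estimate(cell)                     # toy candidate bookkeeping
top = heapq.nlargest(5, candidates.items(), key=lambda kv: kv[1])
print("densest cells (proxy halo centres):", top)
```

A production tool also bounds the candidate list (e.g. with a fixed-size heap) so that total memory stays independent of the number of occupied cells; the dictionary above is kept only for brevity.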

  5. Feasibility of High-Resolution Soil Erosion Measurements by Means of Rainfall Simulations and SfM Photogrammetry

    Directory of Open Access Journals (Sweden)

    Phoebe Hänsel

    2016-11-01

    The silty soils of the intensively used agricultural landscape of the Saxon loess province, eastern Germany, are very prone to soil erosion, mainly caused by water. Rainfall simulations, and increasingly also structure-from-motion (SfM) photogrammetry, are used in soil erosion research not only to assess soil erosion by water but also to quantify soil loss. This study aims to validate SfM-photogrammetry-derived soil loss estimates against rainfall simulation measurements. Rainfall simulations were performed at three agricultural sites in central Saxony. Besides the measured runoff and soil loss by sampling (in mm), terrestrial images of the plots were taken with digital cameras before and after each rainfall simulation. Subsequently, SfM photogrammetry was used to reconstruct soil surface changes due to soil erosion as high-resolution digital elevation models (DEMs) for the pre- and post-event states (resolution 1 × 1 mm). Multi-temporal change detection yields the digital elevation model of difference (DoD) and an averaged soil loss (in mm), which was compared to the soil loss by sampling. Soil loss by DoD was higher than soil loss by sampling. The SfM-based soil loss estimation also included a comparison of three different ground control point (GCP) approaches, revealing that the most complex one delivers the most reliable soil loss by DoD. Additionally, soil bulk density changes and splash erosion beyond the plot were measured during the rainfall simulation experiments in order to separate these processes and their associated surface changes from the soil loss by DoD. Splash was negligibly small, whereas higher soil densities after the rainfall simulations indicated soil compaction. After accounting for the soil surface changes due to compaction, the soil loss by DoD reached approximately the same value as the soil loss measured by rainfall simulation.
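
The DoD step itself reduces to differencing the pre- and post-event DEMs and converting the mean surface lowering into a soil-loss depth, optionally corrected for compaction as described above. A minimal sketch with synthetic millimetre-scale grids follows; the synthetic DEMs and the constant compaction correction are assumptions for illustration only.

```python
import numpy as np

def soil_loss_from_dod(dem_pre, dem_post, compaction_mm=0.0):
    """Averaged soil loss (mm) from a DEM of difference, optionally corrected
    for an assumed uniform compaction-induced lowering of the surface."""
    dod = dem_pre - dem_post                 # positive where the surface was lowered
    return float(dod.mean() - compaction_mm)

rng = np.random.default_rng(3)
pre = rng.normal(100.0, 0.5, size=(500, 500))        # synthetic pre-event DEM (mm)
post = pre - rng.gamma(2.0, 0.15, size=pre.shape)    # erosion lowers the surface
print(f"soil loss by DoD: {soil_loss_from_dod(pre, post, compaction_mm=0.1):.2f} mm")
```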

  6. Cosmological N-body simulations with a tree code - Fluctuations in the linear and nonlinear regimes

    International Nuclear Information System (INIS)

    Suginohara, Tatsushi; Suto, Yasushi; Bouchet, F.R.; Hernquist, L.

    1991-01-01

    The evolution of gravitational systems is studied numerically in a cosmological context using a hierarchical tree algorithm with fully periodic boundary conditions. The simulations employ 262,144 particles, which are initially distributed according to scale-free power spectra. The subsequent evolution is followed in both flat and open universes. With this large number of particles, the discretized system can accurately model the linear phase. It is shown that the dynamics in the nonlinear regime depends on both the spectral index n and the density parameter Ω. In Ω = 1 universes, the evolution of the two-point correlation function ξ agrees well with similarity solutions for ξ greater than about 100, but its slope is steeper in open models with the same n. 28 refs
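
For reference, the two-point correlation function ξ(r) measured in such periodic boxes is simply an excess of pair counts over the random (Poisson) expectation. The brute-force sketch below uses the natural estimator DD/RR - 1 on a small toy point set; both the estimator choice and the toy catalogue are illustrative, not the paper's measurement pipeline.

```python
import numpy as np

def xi_periodic(pos, box, edges):
    """Natural estimator xi(r) = DD/RR - 1 for points in a periodic box,
    using brute-force pair counting (fine for the small toy set below)."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                        # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(len(pos), k=1)]
    dd, _ = np.histogram(r, bins=edges)
    n = len(pos)
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(edges ** 3)
    rr = 0.5 * n * (n - 1) * shell_vol / box ** 3       # expected random pair counts
    return dd / rr - 1.0

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 50.0, size=(400, 3))             # unclustered toy points
print(np.round(xi_periodic(pts, box=50.0, edges=np.linspace(1.0, 10.0, 10)), 3))
```

For an unclustered (Poisson) set the estimator scatters around zero; clustering shows up as positive ξ on the corresponding scales.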

  7. The halo bispectrum in N-body simulations with non-Gaussian initial conditions

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Desjacques, V.

    2012-10-01

    We present measurements of the bispectrum of dark matter haloes in numerical simulations with non-Gaussian initial conditions of local type. We show, in the first place, that the overall effect of primordial non-Gaussianity on the halo bispectrum is larger than on the halo power spectrum when all measurable configurations are taken into account. We then compare our measurements with a tree-level perturbative prediction, finding good agreement at large scales when the constant Gaussian bias parameters, both linear and quadratic, and their constant non-Gaussian corrections are fitted for. The best-fitting values of the Gaussian bias factors and their non-Gaussian, scale-independent corrections are in qualitative agreement with the peak-background split expectations. In particular, we show that the effect of non-Gaussian initial conditions on squeezed configurations is fairly large (up to 30 per cent for f_NL = 100 at redshift z = 0.5) and results from contributions of similar amplitude induced by the initial matter bispectrum, scale-dependent bias corrections, as well as non-linear matter bispectrum corrections. We show, in addition, that effects at second order in f_NL are irrelevant for the range of values allowed by cosmic microwave background and galaxy power spectrum measurements, at least on the scales probed by our simulations (k > 0.01 h Mpc^-1). Finally, we present a Fisher matrix analysis to assess the possibility of constraining primordial non-Gaussianity with future measurements of the galaxy bispectrum. We find that a survey with a volume of about 10 h^-3 Gpc^3 at mean redshift z ≃ 1 could provide an error on f_NL of the order of a few. This shows the relevance of a joint analysis of the galaxy power spectrum and bispectrum in future redshift surveys.

  8. Keeping it real: revisiting a real-space approach to running ensembles of cosmological N-body simulations

    International Nuclear Information System (INIS)

    Orban, Chris

    2013-01-01

    In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using a power law and a 'power law times a bump' model inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, which was originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly-clustered regime (σ_8 ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral constraint correction to the correlation function in results from the conventional approach that can be important in ΛCDM simulations with L_box ∼ h^-1 Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful independent-of-epoch analytic formula for estimating the importance of the integral constraint bias on correlation function measurements in ΛCDM simulations.

  9. Comparing Results of SPH/N-body Impact Simulations Using Both Solid and Rubble-pile Target Asteroids

    Science.gov (United States)

    Durda, Daniel D.; Bottke, W. F.; Enke, B. L.; Nesvorný, D.; Asphaug, E.; Richardson, D. C.

    2006-09-01

    We have been investigating the properties of satellites and the morphology of size-frequency distributions (SFDs) resulting from a suite of 160 SPH/N-body simulations of impacts into 100-km diameter parent asteroids (Durda et al. 2004, Icarus 170, 243-257; Durda et al. 2006, Icarus, in press). These simulations have produced many valuable insights into the outcomes of cratering and disruptive impacts but were limited to monolithic basalt targets. As a natural consequence of collisional evolution, however, many asteroids have undergone a series of battering impacts that have likely left their interiors substantially fractured, if not completely rubblized. In light of this, we have re-mapped the matrix of simulations using rubble-pile target objects. We constructed the rubble-pile targets by filling the interior of the 100-km diameter spherical shell (the target envelope) with randomly sized solid spheres in mutual contact. We then assigned full damage (which reduces tensile and shear stresses to zero) to SPH particles in the contacts between the components; the remaining volume is void space. The internal spherical components have a power-law distribution of sizes, simulating fragments of a pre-shattered parent object. First-look analysis of the rubble-pile results indicates some general similarities to the simulations with the monolithic targets (e.g., similar trends in the number of small, gravitationally bound satellite systems as a function of impact conditions) and some significant differences (e.g., the size of the largest remnants and the smaller debris affecting the size-frequency distributions of the resulting families). We will report details of a more thorough analysis and the implications for collisional models of the main asteroid belt. This work is supported by the National Science Foundation, grant number AST0407045.

  10. Transients from initial conditions based on Lagrangian perturbation theory in N-body simulations II: the effect of the transverse mode

    International Nuclear Information System (INIS)

    Tatekawa, Takayuki

    2014-01-01

    We study the initial conditions for cosmological N-body simulations for precision cosmology. In general, the Zel'dovich approximation has long been applied to set up the initial conditions of N-body simulations. These initial conditions provide incorrect higher-order growth. The errors caused by setting up the initial conditions with perturbation theory are called transients. In a previous paper, we investigated the impact of transients on the non-Gaussianity of the density field by performing cosmological N-body simulations with initial conditions based on first-, second-, and third-order Lagrangian perturbation theory. In this paper, we evaluate the effect of the transverse mode in third-order Lagrangian perturbation theory on several statistical quantities such as the power spectrum and non-Gaussianity. We find that the effect of the transverse mode in third-order Lagrangian perturbation theory is quite small.
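
For context, the first-order (Zel'dovich) scheme referred to above displaces particles from a regular grid by a displacement field obtained from the linear density field. The 1-D sketch below shows that single step under an assumed power-law toy spectrum and an assumed growth factor; it is not the paper's initial-condition generator, and the higher-order (2LPT, 3LPT) terms under discussion are omitted.

```python
import numpy as np

def zeldovich_1d(n, box, spectral_index=-1.0, growth=0.02, seed=0):
    """1-D Zel'dovich initial conditions: x = q + D * Psi(q), with the
    displacement Psi derived from a toy power-law density spectrum."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=box / n)
    delta_k = np.zeros(k.size, dtype=complex)
    delta_k[1:] = (k[1:] ** (spectral_index / 2.0)
                   * (rng.standard_normal(k.size - 1)
                      + 1j * rng.standard_normal(k.size - 1)))
    psi_k = np.zeros_like(delta_k)
    psi_k[1:] = 1j * delta_k[1:] / k[1:]        # in 1-D, delta = -dPsi/dq
    psi = np.fft.irfft(psi_k, n=n)
    q = np.linspace(0.0, box, n, endpoint=False)
    return (q + growth * psi) % box             # displaced particle positions

print(zeldovich_1d(16, box=100.0)[:4])
```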

  11. Analysis of a high-resolution regional climate simulation for Alpine temperature. Validation and influence of the NAO

    Energy Technology Data Exchange (ETDEWEB)

    Proemmel, K. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung

    2008-11-06

    To determine whether the increase in resolution of climate models improves the representation of climate is a crucial topic in regional climate modelling. An improvement over coarser-scale models is expected especially in areas with complex orography or along coastlines. However, some studies have shown no clear added value for regional climate models. In this study a high-resolution regional climate model simulation performed with REMO over the period 1958-1998 is analysed for 2m temperature over the orographically complex European Alps and their surroundings, called the Greater Alpine Region (GAR). The model setup is in hindcast mode, meaning that the simulation is driven with perfect boundary conditions by the ERA40 reanalysis through prescribing the values at the lateral boundaries and spectral nudging of the large-scale wind field inside the model domain. The added value is analysed between the regional climate simulation with a resolution of 1/6° and the driving reanalysis with a resolution of 1.125°. Before analysing the added value, both the REMO simulation and the ERA40 reanalysis are validated against different station datasets of monthly and daily mean 2m temperature. The largest dataset is the dense, homogenised and quality-controlled HISTALP dataset covering the whole GAR, which gave the opportunity for the validation undertaken in this study. The temporal variability of temperature, as quantified by correlation, is well represented by both REMO and ERA40. However, both show considerable biases. The REMO bias reaches 3 K in summer in regions known to experience a problem with summer drying in a number of regional models. In winter the bias is strongly influenced by the choice of the temperature lapse rate, which is applied to compare grid box and station data at different altitudes, and has the strongest influence on inner Alpine subregions where the altitude differences are largest. By applying a constant lapse rate the REMO bias in winter in the high

  12. MODELING PLANETARY SYSTEM FORMATION WITH N-BODY SIMULATIONS: ROLE OF GAS DISK AND STATISTICS COMPARED TO OBSERVATIONS

    International Nuclear Information System (INIS)

    Liu Huigen; Zhou Jilin; Wang Su

    2011-01-01

    During the late stage of planet formation, when Mars-sized cores appear, interactions among planetary cores can excite their orbital eccentricities, accelerate their merging, and thus sculpt their final orbital architecture. This study contributes to the final assembly of planetary systems with N-body simulations, including the type I or II migration of planets and gas accretion by massive cores in a viscous disk. Statistics on the final distributions of planetary masses, semimajor axes, and eccentricities are derived and are comparable to those of the observed systems. Our simulations predict some new orbital signatures of planetary systems around solar-mass stars: 36% of the surviving planets are giant planets (>10 M⊕). Most of the massive giant planets (>30 M⊕) are located at 1-10 AU. Terrestrial planets are distributed more or less evenly at J in highly eccentric orbits (e > 0.3-0.4). The average eccentricity (∼0.15) of the giant planets (>10 M⊕) is greater than that (∼0.05) of the terrestrial planets (<10 M⊕). A planetary system with more planets tends to have smaller planet masses and orbital eccentricities on average.

  13. Dissipative N-body simulations of the formation of single galaxies in a cold dark-matter cosmology

    International Nuclear Information System (INIS)

    Ewell, M.W. Jr.

    1988-01-01

    The details of an N-body code designed specifically to study the collapse of a single protogalaxy are presented. This code uses a spherical harmonic expansion to model the gravity and a sticky-particle algorithm to model the gas physics. It includes external tides and cosmologically realistic boundary conditions. The results of twelve simulations using this code are given. The initial conditions for these runs use mean-density profiles and r.m.s. quadrupoles and tides taken from the CDM power spectrum. The simulations start when the center of the perturbation first goes nonlinear, and continue until a redshift Z ∼ 1-2. The resulting rotation curves are approximately flat out to 100 kpc, but do show some structure. The circular velocity is 200 km/sec around a 3σ peak. The final systems have λ ≈ 0.03. The angular momentum per unit mass of the baryons implies disk scale lengths of 1-3 kpc. The tidal forces are strong enough to profoundly influence the collapse geometry. In particular, the usual assumption that tidal torques produce a system approximately in solid-body rotation is shown to be seriously in error

  14. High-resolution simulations of the final assembly of Earth-like planets. 2. Water delivery and planetary habitability.

    Science.gov (United States)

    Raymond, Sean N; Quinn, Thomas; Lunine, Jonathan I

    2007-02-01

    The water content and habitability of terrestrial planets are determined during their final assembly, from perhaps 100 1,000-km "planetary embryos" and a swarm of billions of 1-10-km "planetesimals." During this process, we assume that water-rich material is accreted by terrestrial planets via impacts of water-rich bodies that originate in the outer asteroid region. We present analysis of water delivery and planetary habitability in five high-resolution simulations containing about 10 times more particles than previous simulations. These simulations formed 15 terrestrial planets from 0.4 to 2.6 Earth masses, including five planets in the habitable zone. Every planet from each simulation accreted at least the Earth's current water budget; most accreted several times that amount (assuming no impact depletion). Each planet accreted at least five water-rich embryos and planetesimals from beyond 2.5 astronomical units; most accreted 10-20 water-rich bodies. We present a new model for water delivery to terrestrial planets in dynamically calm systems, with low-eccentricity or low-mass giant planets; such systems may be very common in the Galaxy. We suggest that water is accreted in comparable amounts from a few planetary embryos in a "hit or miss" way and from millions of planetesimals in a statistically robust process. Variations in water content are likely to be caused by fluctuations in the number of water-rich embryos accreted, as well as by systematic effects, such as planetary mass and location, and giant planet properties.

  15. Simulating the formation and evolution of galaxies with EvoL, the Padova N-body Tree-SPH code

    International Nuclear Information System (INIS)

    Merlin, E.; Chiosi, C.; Grassi, T.; Buonomo, U.; Chinellato, S.

    2009-01-01

    The importance of numerical simulations in astrophysics is constantly growing, because of the complexity, the multi-scaling properties and the non-linearity of many physical phenomena. In particular, cosmological and galaxy-sized simulations of structure formation have cast light on different aspects, giving answers to many questions but raising a number of new issues to be investigated. Over the last decade, great effort has been devoted in Padova to developing a tool explicitly designed to study the problem of galaxy formation and evolution, with particular attention to early-type galaxies. To this aim, many simulations have been run on CINECA supercomputers (see the publications list below). The next step is the new release of EvoL, a Fortran N-body code capable of following in great detail many different aspects of stellar, interstellar and cosmological physics. In particular, special care has been paid to the properties of stars and their interplay with the surrounding interstellar medium (ISM), as well as to the multiphase nature of the ISM, to the setting of the initial and boundary conditions, and to the correct description of gas physics via modern formulations of the classical Smoothed Particle Hydrodynamics algorithms. Moreover, a powerful tool to compare numerical predictions with observables has been developed, self-consistently closing the whole package. A library of new simulations, run with EvoL on CINECA supercomputers, is to be built in the next years, while new physics, including magnetic properties of matter and more exotic energy feedback effects, is to be added.

  16. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have also reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory (WENO) scheme, and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order schemes' results match the Buckley-Leverett (BL) analytical solution well without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in visual
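
As a point of reference for the SUPERBEE limiter named above, it has the closed form phi(r) = max(0, min(2r, 1), min(r, 2)), where r is the ratio of consecutive solution gradients. The sketch below applies it in a generic 1-D limited upwind advection step to show the non-oscillatory behaviour such limiters provide; the advection setup is an illustrative stand-in, not the paper's reservoir solver.

```python
import numpy as np

def superbee(r):
    """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

def advect_step(u, c):
    """One limited upwind step for u_t + a u_x = 0 on a periodic grid,
    with Courant number c = a*dt/dx (0 < c <= 1)."""
    du = np.roll(u, -1) - u                                    # forward differences
    safe = np.where(du == 0.0, 1.0, du)
    r = np.where(du != 0.0, (u - np.roll(u, 1)) / safe, 0.0)   # gradient ratio
    face = u + 0.5 * superbee(r) * (1.0 - c) * du              # limited face values
    return u - c * (face - np.roll(face, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)                  # sharp step profile
for _ in range(100):
    u = advect_step(u, c=0.5)
print(f"after 100 steps: min {u.min():.3f}, max {u.max():.3f}")  # stays within [0, 1]
```

A first-order upwind step smears such a front over many cells, while an unlimited second-order step overshoots; the limiter keeps the profile sharp without creating new extrema, which is the behaviour relied on above for capturing Buckley-Leverett-type saturation shocks.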

  17. Incorporation of Three-dimensional Radiative Transfer into a Very High Resolution Simulation of Horizontally Inhomogeneous Clouds

    Science.gov (United States)

    Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.

    2016-12-01

    A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in a very high resolution (on the order of 10 m grid spacing) simulation of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds is thought to cause local heating/cooling in the atmosphere at fine spatial scales. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer often requires large computational resources compared to a plane-parallel approximation. This study attempts to incorporate a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because this scheme has an advantage when calculating a sequence of time steps (i.e., the scene at one time differs little from that at the previous time step). This scheme is also appropriate for calculating radiation with strong absorption, such as in the infrared regions. For efficient computation, the scheme utilizes several techniques, e.g., the multigrid method for the iterative solution, and a correlated-k distribution method refined for efficient approximation of the wavelength integration. For a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results elucidate that the horizontal divergences and convergences of infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that the radiative cooling at lateral boundaries of clouds may reduce infrared radiative heating in clouds. In a future work, the 3D effects on radiative heating/cooling will be able to be

  18. Variability of wet troposphere delays over inland reservoirs as simulated by a high-resolution regional climate model

    Science.gov (United States)

    Clark, E.; Lettenmaier, D. P.

    2014-12-01

    Satellite radar altimetry is widely used for measuring global sea level variations and, increasingly, water height variations of inland water bodies. Existing satellite radar altimeters measure water surfaces directly below the spacecraft (approximately at nadir). Over the ocean, most of these satellites use radiometry to measure the delay of radar signals caused by water vapor in the atmosphere (also known as the wet troposphere delay (WTD)). However, radiometry can only be used to estimate this delay over the largest inland water bodies, such as the Great Lakes, due to spatial resolution issues. As a result, atmospheric models are typically used to simulate and correct for the WTD at the time of observations. The resolutions of these models are quite coarse, at best about 5000 km2 at 30˚N. The upcoming NASA- and CNES-led Surface Water and Ocean Topography (SWOT) mission, on the other hand, will use interferometric synthetic aperture radar (InSAR) techniques to measure a 120-km-wide swath of the Earth's surface. SWOT is expected to make useful measurements of water surface elevation and extent (and storage change) for inland water bodies at spatial scales as small as 250 m, which is much smaller than current altimetry targets and several orders of magnitude smaller than the models used for wet troposphere corrections. Here, we calculate WTD from very high-resolution (4/3-km to 4-km) simulations of the Weather Research and Forecasting (WRF) regional climate model, and use the results to evaluate spatial variations in WTD. We focus on six U.S. reservoirs: Lake Elwell (MT), Lake Pend Oreille (ID), Upper Klamath Lake (OR), Elephant Butte (NM), Ray Hubbard (TX), and Sam Rayburn (TX). The reservoirs vary in climate, shape, use, and size. Because evaporation from open water impacts local water vapor content, we compare time series of WTD over land and water in the vicinity of each reservoir. To account for resolution effects, we examine the difference in WRF-simulated

  19. High-resolution simulations of unstable cylindrical gravity currents undergoing wandering and splitting motions in a rotating system

    Science.gov (United States)

    Dai, Albert; Wu, Ching-Sen

    2018-02-01

    High-resolution simulations of unstable cylindrical gravity currents undergoing wandering and splitting motions in a rotating system are reported. In this study, our attention is focused on unstable rotating cylindrical gravity currents for which the ratio of Coriolis to inertia forces is larger, namely 0.5 ≤ C ≤ 2.0, in comparison to the stable ones with C ≤ 0.3 investigated previously by the authors. The simulations reproduce the major features of the unstable rotating cylindrical gravity currents observed in the laboratory, i.e., vortex-wandering or vortex-splitting following the contraction-relaxation motion, and good agreement is found when compared with the experimental results on the outrush radius of the advancing front and on the number of bulges. Furthermore, the simulations provide energy budget information which could not be attained in the laboratory. After the heavy fluid is released, it collapses and a contraction-relaxation motion is at work for approximately 2-3 revolutions of the system. During the contraction-relaxation motion of the heavy fluid, the unstable rotating cylindrical gravity currents behave similarly to the stable ones. Towards the end of the contraction-relaxation motion, the dissipation rate in the system reaches a local minimum and a quasi-geostrophic equilibrium state is reached. After the quasi-geostrophic equilibrium state, vortex-wandering or vortex-splitting may occur depending on the ratio of Coriolis to inertia forces. The vortex-splitting process begins with non-axisymmetric bulges and, as the bulges grow, the kinetic energy increases at the expense of decreasing potential energy in the system. The completion of vortex-splitting is accompanied by a local maximum of the dissipation rate and a local maximum of the kinetic energy in the system. A striking feature of the unstable rotating cylindrical gravity currents is the persistent upwelling and downwelling motions, which are observed for both the

  20. Mediterranean Thermohaline Response to Large-Scale Winter Atmospheric Forcing in a High-Resolution Ocean Model Simulation

    Science.gov (United States)

    Cusinato, Eleonora; Zanchettin, Davide; Sannino, Gianmaria; Rubino, Angelo

    2018-04-01

    Large-scale circulation anomalies over the North Atlantic and Euro-Mediterranean regions described by dominant climate modes, such as the North Atlantic Oscillation (NAO), the East Atlantic pattern (EA), the East Atlantic/Western Russia pattern (EAWR) and the Mediterranean Oscillation Index (MOI), significantly affect interannual-to-decadal climatic and hydroclimatic variability in the Euro-Mediterranean region. However, whereas previous studies assessed the impact of such climate modes on air-sea heat and freshwater fluxes in the Mediterranean Sea, the propagation of these atmospheric forcing signals from the surface toward the interior and the abyss of the Mediterranean Sea remains unexplored. Here, we use a high-resolution ocean model simulation covering the 1979-2013 period to investigate spatial patterns and time scales of the Mediterranean thermohaline response to winter forcing from the NAO, EA, EAWR and MOI. We find that these modes significantly imprint on the thermohaline properties in key areas of the Mediterranean Sea through a variety of mechanisms. Typically, density anomalies induced by all modes remain confined to the upper 600 m and remain significant for up to 18-24 months. One of the clearest propagation signals refers to the EA in the Adriatic and northern Ionian seas: there, negative EA anomalies are associated with an extensive positive density response, with anomalies that sink to the bottom of the South Adriatic Pit within about 2 years. Other strong responses are the thermally driven responses to the EA in the Gulf of Lions and to the EAWR in the Aegean Sea. MOI and EAWR forcing of thermohaline properties in the Eastern Mediterranean sub-basins seems to be determined by reinforcement processes linked to the persistence of these modes in multiannual anomalous states. Our study also suggests that the NAO, EA, EAWR and MOI could critically interfere with internal, deep and abyssal ocean dynamics and variability in the Mediterranean Sea.

  1. Development of local-scale high-resolution atmospheric dispersion model using large-eddy simulation. Part 3: turbulent flow and plume dispersion in building arrays

    Czech Academy of Sciences Publication Activity Database

    Nakayama, H.; Jurčáková, Klára; Nagai, H.

    2013-01-01

    Vol. 50, No. 5 (2013), pp. 503-519 ISSN 0022-3131 Institutional support: RVO:61388998 Keywords: local-scale high-resolution dispersion model * nuclear emergency response system * large-eddy simulation * spatially developing turbulent boundary layer flow Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.452, year: 2013

  2. Structure formation from non-Gaussian initial conditions: Multivariate biasing, statistics, and comparison with N-body simulations

    International Nuclear Information System (INIS)

    Giannantonio, Tommaso; Porciani, Cristiano

    2010-01-01

    We study structure formation in the presence of primordial non-Gaussianity of the local type with parameters f_NL and g_NL. We show that the distribution of dark-matter halos is naturally described by a multivariate bias scheme where the halo overdensity depends not only on the underlying matter density fluctuation δ but also on the Gaussian part of the primordial gravitational potential φ. This corresponds to a non-local bias scheme in terms of δ only. We derive the coefficients of the bias expansion as a function of the halo mass by applying the peak-background split to common parametrizations for the halo mass function in the non-Gaussian scenario. We then compute the halo power spectrum and halo-matter cross spectrum in the framework of Eulerian perturbation theory up to third order. Comparing our results against N-body simulations, we find that our model accurately describes the numerical data for wave numbers k ≤ 0.1-0.3 h Mpc^-1 depending on redshift and halo mass. In our multivariate approach, perturbations in the halo counts trace φ on large scales, and this explains why the halo and matter power spectra show different asymptotic trends for k → 0. This strongly scale-dependent bias originates from terms at leading order in our expansion. This is different from what happens using the standard univariate local bias where the scale-dependent terms come from badly behaved higher-order corrections. On the other hand, our biasing scheme reduces to the usual local bias on smaller scales, where |φ| is typically much smaller than the density perturbations. We finally discuss the halo bispectrum in the context of multivariate biasing and show that, due to its strong scale and shape dependence, it is a powerful tool for the detection of primordial non-Gaussianity from future galaxy surveys.

  3. COUNTS-IN-CYLINDERS IN THE SLOAN DIGITAL SKY SURVEY WITH COMPARISONS TO N-BODY SIMULATIONS

    International Nuclear Information System (INIS)

    Berrier, Heather D.; Barton, Elizabeth J.; Bullock, James S.; Berrier, Joel C.; Zentner, Andrew R.; Wechsler, Risa H.

    2011-01-01

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments, and a vital test of models of galaxy formation within the prevailing hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey Data Release 4 (SDSS DR4). We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations and data from SDSS DR4, to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent empirical models of galaxy clustering, that match observed two- and three-point clustering statistics well, fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h^-1 Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h^-1 Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h^-1 Mpc cylinder than the galaxies in any of the models we use. Simple phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
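
The counts-in-cylinders statistic itself is straightforward to compute: for each galaxy, count the neighbours that fall within a chosen projected radius and line-of-sight (redshift or velocity) window. The flat-sky toy below illustrates the bookkeeping with Cartesian mock data; the radii, velocity window and mock catalogue are assumptions for illustration, not the SDSS selection used in the paper.

```python
import numpy as np

def counts_in_cylinders(xy, v_los, r_proj, dv):
    """For each object, the number of companions within projected separation
    r_proj and line-of-sight velocity difference dv (flat-sky toy version)."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1)
    in_cyl = (d2 < r_proj ** 2) & (np.abs(v_los[:, None] - v_los[None, :]) < dv)
    np.fill_diagonal(in_cyl, False)              # an object is not its own companion
    return in_cyl.sum(axis=1)

rng = np.random.default_rng(4)
xy = rng.uniform(0.0, 50.0, size=(2000, 2))      # projected positions (h^-1 Mpc)
v = rng.normal(0.0, 500.0, size=2000)            # line-of-sight velocities (km/s)
counts = counts_in_cylinders(xy, v, r_proj=3.0, dv=1000.0)
print("fraction with no companions:", float(np.mean(counts == 0)))
```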

  4. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    Directory of Open Access Journals (Sweden)

    D. Heinzeller

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations, as well as detailed information on the experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ), with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980–2010 and the two future periods 2020–2050 and 2070–2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent to the north, and almost no change in

  5. Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil

    Science.gov (United States)

    Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo

    2013-04-01

    The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997. The model has gone through upgrades over these years. In order to prepare the model for operational higher-resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilian cities of Rio de Janeiro and Sao Paulo, with urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. In addition, the region suffers from frequent floods and landslides; therefore, accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high-resolution simulations in this complex area. Verification of the model runs uses observations taken from the nuclear power plant and higher-resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by the Eta-15km, which in turn is driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface-layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive

  6. Immersed boundary methods for high-resolution simulation of atmospheric boundary-layer flow over complex terrain

    Science.gov (United States)

    Lundquist, Katherine Ann

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  7. Immersed Boundary Methods for High-Resolution Simulation of Atmospheric Boundary-Layer Flow Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, K A [Univ. of California, Berkeley, CA (United States)

    2010-05-12

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  8. High-resolution numerical simulation of summer wind field comparing WRF boundary-layer parametrizations over complex Arctic topography: case study from central Spitsbergen

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Hošek, Jiří

    2017-01-01

    Vol. 26, No. 4 (2017), pp. 391-408 ISSN 0941-2948 Institutional support: RVO:68378289 Keywords: surface wind field * model evaluation * topographic effect * circulation pattern * Svalbard Subject RIV: DG - Atmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 1.989, year: 2016 http://www.schweizerbart.de/papers/metz/detail/prepub/87659/High_resolution_numerical_simulation_of_summer_wind_field_comparing_WRF_boundary_layer_parametrizations_over_complex_Arctic_topography_case_study_from_central_Spitsbergen

  9. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and to advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better, providing new details on the properties and morphology of the ice pack across basin scales. For example, the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  10. High-resolution simulation of link-level vehicle emissions and concentrations for air pollutants in a traffic-populated eastern Asian city

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-08-01

    Vehicle emissions of air pollutants create substantial environmental impacts on air quality in many traffic-populated cities in eastern Asia. A high-resolution emission inventory is a useful tool, compared with traditional approaches (e.g., the registration-data-based approach), to accurately evaluate real-world traffic dynamics and their environmental burden. In this study, Macau, one of the most densely populated cities in the world, is selected to demonstrate a high-resolution simulation of vehicular emissions and their contribution to air pollutant concentrations by coupling multiple models. First, traffic volumes by vehicle category on 47 typical roads were investigated during weekdays in 2010 and further applied in a network demand simulation with the TransCAD model to establish hourly profiles of link-level vehicle counts. Local vehicle driving speed and vehicle age distribution data were also collected in Macau. Second, based on a localized vehicle emission model (the emission factor model for the Beijing vehicle fleet – Macau, EMBEV–Macau), this study established a link-based vehicle emission inventory in Macau with high resolution meshed in a temporal and spatial framework. Furthermore, we employed the AERMOD (AMS/EPA Regulatory Model) model to map concentrations of CO and primary PM2.5 contributed by local vehicle emissions during weekdays in November 2010. This study has discerned the strong impact of traffic flow dynamics on the temporal and spatial patterns of vehicle emissions, such as a geographic discrepancy in spatial allocation of up to 26 % between THC and PM2.5 emissions owing to spatially heterogeneous vehicle-use intensity between the motorcycle and diesel fleets. We also identified that the estimated CO2 emissions from gasoline vehicles agreed well with the statistical fuel consumption in Macau. Therefore, this paper provides a case study and a solid framework for developing high-resolution environmental assessment tools for other
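
At its core, a link-based inventory like the one described above multiplies, for every road link and hour, the traffic volume by the link length and a fleet-averaged emission factor, then grids the result. A minimal sketch of that bookkeeping follows; every number is an illustrative placeholder, not an EMBEV-Macau value.

```python
# Link-level hourly emissions: E_link = volume (veh/h) * length (km) * EF (g/veh-km).
# All figures below are illustrative placeholders.
links = [
    # (vehicles per hour, link length in km, fleet-averaged CO emission factor g/km)
    (1200.0, 0.8, 2.5),
    (450.0, 1.5, 2.5),
    (3000.0, 0.4, 2.5),
]
hourly_co_g = sum(volume * length * ef for volume, length, ef in links)
print(f"hourly CO from these links: {hourly_co_g / 1000.0:.1f} kg")
```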

  11. High-Resolution Mesoscale Simulations of the 6-7 May 2000 Missouri Flash Flood: Impact of Model Initialization and Land Surface Treatment

    Science.gov (United States)

    Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.

    2004-01-01

    High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on the timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40-km grid spacing and 3-hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (MCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture best reproduces the intensity and location of observed rainfall.

  12. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    Science.gov (United States)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic region following the resolution increase. More specifically, the well-known negative atmospheric blocking bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  13. Simulations of collisions between N-body classical systems in interaction; Simulations de collisions entre systemes classiques a n-corps en interaction

    Energy Technology Data Exchange (ETDEWEB)

    Morisseau, Francois [Laboratoire de Physique Corpusculaire de CAEN, ENSICAEN, Universite de Caen Basse-Normandie, UFR des Sciences, 6 bd Marechal Juin, 14050 Caen Cedex (France)

    2006-05-15

    The Classical N-Body Dynamics (CNBD) code is dedicated to the simulation of collisions between classical systems. The 2-body interaction used here has the properties of the Van der Waals potential and depends on just a few parameters. This work has two main goals. First, some theoretical approaches assume that the dynamical stage of the collisions plays an important role. Moreover, colliding nuclei are supposed to present a first-order liquid-gas phase transition. Several signals have been introduced to show this transition. We have searched for two of them: the bimodality of the mass asymmetry and negative heat capacity. We have found them and we give an explanation of their presence in our calculations. Second, we have improved the interaction by adding a Coulomb-like potential and by taking into account the stronger proton-neutron interaction in nuclei. We have then worked out the relations that exist between the parameters of the 2-body interaction and the properties of the systems. These studies allow us to fit the properties of the classical systems to those of the nuclei. In this manuscript the first results of this fit are shown. (author)
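
    The exact form of the CNBD 2-body interaction is not given in the record above; as a stand-in, the sketch below uses a generic Lennard-Jones 12-6 potential, which has the qualitative Van der Waals properties mentioned (short-range repulsion, attractive tail) and depends on just two parameters.

```python
import numpy as np

def pair_potential(r, epsilon=1.0, sigma=1.0):
    """Generic Lennard-Jones 12-6 potential: V(r) = 4*eps*((s/r)**12 - (s/r)**6).
    Only an illustration of a Van der Waals-like 2-body interaction, not the
    actual CNBD potential."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)

r = np.linspace(0.95, 3.0, 6)
print(pair_potential(r))   # minimum of depth ~ -epsilon near r = 2**(1/6) * sigma
```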

  14. AMM15: a new high-resolution NEMO configuration for operational simulation of the European north-west shelf

    Science.gov (United States)

    Graham, Jennifer A.; O'Dea, Enda; Holt, Jason; Polton, Jeff; Hewitt, Helene T.; Furner, Rachel; Guihou, Karen; Brereton, Ashley; Arnold, Alex; Wakelin, Sarah; Castillo Sanchez, Juan Manuel; Mayorga Adame, C. Gabriela

    2018-02-01

    This paper describes the next-generation ocean forecast model for the European north-west shelf, which will become the basis of operational forecasts in 2018. This new system will provide a step change in resolution and therefore our ability to represent small-scale processes. The new model has a resolution of 1.5 km compared with a grid spacing of 7 km in the current operational system. AMM15 (Atlantic Margin Model, 1.5 km) is introduced as a new regional configuration of NEMO v3.6. Here we describe the technical details behind this configuration, with modifications appropriate for the new high-resolution domain. Results from a 30-year non-assimilative run using the AMM15 domain demonstrate the ability of this model to represent the mean state and variability of the region. Overall, there is an improvement in the representation of the mean state across the region, suggesting similar improvements may be seen in the future operational system. However, the reduction in seasonal bias is greater off-shelf than on-shelf. In the North Sea, biases are largely unchanged. Since there has been no change to the vertical resolution or parameterization schemes, performance improvements are not expected in regions where stratification is dominated by vertical processes rather than advection. This highlights the fact that increased horizontal resolution will not lead to domain-wide improvements. Further work is needed to target bias reduction across the north-west shelf region.

  15. PROBING THE ROLE OF DYNAMICAL FRICTION IN SHAPING THE BSS RADIAL DISTRIBUTION. I. SEMI-ANALYTICAL MODELS AND PRELIMINARY N-BODY SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Miocchi, P.; Lanzoni, B.; Ferraro, F. R.; Dalessandro, E.; Alessandrini, E. [Dipartimento di Fisica e Astronomia, Università di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna (Italy); Pasquato, M.; Lee, Y.-W. [Department of Astronomy and Center for Galaxy Evolution Research, Yonsei University, Seoul 120-749 (Korea, Republic of); Vesperini, E. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States)

    2015-01-20

    We present semi-analytical models and simplified N-body simulations with 10^4 particles aimed at probing the role of dynamical friction (DF) in determining the radial distribution of blue straggler stars (BSSs) in globular clusters. The semi-analytical models show that DF (which is the only evolutionary mechanism at work) is responsible for the formation of a bimodal distribution with a dip progressively moving toward the external regions of the cluster. However, these models fail to reproduce the formation of the long-lived central peak observed in all dynamically evolved clusters. The results of the N-body simulations confirm the formation of a sharp central peak, which remains a stable feature over time regardless of the initial concentration of the system. In spite of noisy behavior, a bimodal distribution forms in many cases, with the size of the dip increasing as a function of time. In the most advanced stages, the distribution becomes monotonic. These results are in agreement with the observations. Also, the shape of the peak and the location of the minimum (which, in most cases, is within 10 core radii) turn out to be consistent with observational results. For a more detailed and closer comparison with observations, including a proper calibration of the timescales of the dynamical processes driving the evolution of the BSS spatial distribution, more realistic simulations will be necessary.
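
    For reference, semi-analytical treatments of this kind commonly rest on the Chandrasekhar dynamical-friction formula; the form below is the textbook expression for a test mass in a Maxwellian background and is quoted here only as context, not as the exact prescription used in the paper.

```latex
% Chandrasekhar dynamical-friction deceleration of a test star of mass M moving
% with velocity v_M through field stars of density rho and (Maxwellian) velocity
% dispersion sigma, with X = v_M / (sqrt(2) sigma) and Coulomb logarithm ln(Lambda):
\frac{d\mathbf{v}_M}{dt} \simeq
  -\,\frac{4\pi G^{2} M \rho \ln\Lambda}{v_M^{3}}
  \left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\,e^{-X^{2}}\right]\mathbf{v}_M .
```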

  16. High Resolution Elevation Contours

    Data.gov (United States)

    Minnesota Department of Natural Resources — This dataset contains contours generated from high resolution data sources such as LiDAR. Generally speaking, these data have a contour interval of 2 feet or less.

  17. Vertical Rise Velocity of Equatorial Plasma Bubbles Estimated from Equatorial Atmosphere Radar Observations and High-Resolution Bubble Model Simulations

    Science.gov (United States)

    Yokoyama, T.; Ajith, K. K.; Yamamoto, M.; Niranjan, K.

    2017-12-01

    Equatorial plasma bubble (EPB) is a well-known phenomenon in the equatorial ionospheric F region. As it causes severe scintillation in the amplitude and phase of radio signals, it is important to understand and forecast the occurrence of EPBs from a space weather point of view. The development of EPBs is presently understood as an evolution of the generalized Rayleigh-Taylor instability. We have already developed a 3D high-resolution bubble (HIRB) model with a grid spacing as small as 1 km and presented nonlinear growth of EPBs that shows very turbulent internal structures such as bifurcation and pinching. As EPBs have field-aligned structures, the latitude range that is affected by EPBs depends on the apex altitude of EPBs over the dip equator. However, it has not been easy to observe the apex altitude and vertical rise velocity of EPBs. The Equatorial Atmosphere Radar (EAR) in Indonesia is capable of steering radar beams quickly so that the growth phase of EPBs can be captured clearly. The vertical rise velocities of the EPBs observed around the midnight hours are significantly smaller than those observed in the postsunset hours. Further, the vertical growth of the EPBs around midnight hours ceases at relatively lower altitudes, whereas the majority of EPBs at postsunset hours are found to have grown beyond the maximum detectable altitude of the EAR. The HIRB model with varying background conditions is employed to investigate the possible factors that control the vertical rise velocity and maximum attainable altitudes of EPBs. The estimated rise velocities from EAR observations at both postsunset and midnight hours are, in general, consistent with the nonlinear evolution of EPBs from the HIRB model.
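
    For context, a commonly quoted local linear growth rate of the generalized Rayleigh-Taylor instability is reproduced below; the HIRB model solves the full nonlinear problem, so this expression is only indicative and is not taken from the record above.

```latex
% Local linear growth rate of the generalized Rayleigh-Taylor instability,
% with L_n the bottomside density-gradient scale length, E/B the zonal-electric-
% field-driven vertical drift, g the gravitational acceleration, nu_in the
% ion-neutral collision frequency and R_T an effective recombination rate:
\gamma \;\simeq\; \frac{1}{L_n}\left(\frac{E}{B} + \frac{g}{\nu_{in}}\right) - R_T .
```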

  18. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B.

    2013-01-01

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for approval by the domestic regulatory authority. Thus, highly detailed modeling efforts are demanded in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and the complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g. a surface source or reflective surfaces. In the framework of the existing research a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  19. High-resolution model for the simulation of the activity distribution and radiation field at the German FRJ-2 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Winter, D.; Haeussler, A.; Abbasi, F.; Simons, F.; Nabbi, R.; Thomauske, B. [RWTH Aachen Univ. (Germany). Inst. of Nuclear Fuel Cycle; Damm, G. [Research Center Juelich (Germany)

    2013-11-15

    For the decommissioning of nuclear facilities in Germany, activity and dose rate atlases (ADAs) are required for approval by the domestic regulatory authority. Thus, highly detailed modeling efforts are demanded in order to optimize the quantification and the characterization of nuclear waste as well as to realize optimum radiation protection. For the generation of ADAs, computer codes based on the Monte-Carlo method are increasingly employed because of their potential for high resolution simulation of the neutron and gamma transport for activity and dose rate predictions, respectively. However, the demand on the modeling effort and the simulation time increases with the size and the complexity of the whole model, which becomes a limiting factor. For instance, the German FRJ-2 research reactor, consisting of a complex reactor core, the graphite reflector, and the adjacent thermal and biological shielding structures, represents such a case. To overcome this drawback, various techniques such as variance reduction methods are applied. A further simple but effective approach is the modeling of the regions of interest with appropriate boundary conditions, e.g. a surface source or reflective surfaces. In the framework of the existing research a highly sophisticated simulation tool is developed which is characterized by: - CAD-based model generation for Monte-Carlo transport simulations; - Production and 3D visualization of high resolution activity and dose rate atlases; - Application of coupling routines and interface structures for optimum and automated simulations. The whole simulation system is based on the Monte-Carlo code MCNP5 and the depletion/activation code ORIGEN2. The numerical and computational efficiency of the proposed methods is discussed in this paper on the basis of the simulation and CAD-based model of the FRJ-2 research reactor with emphasis on the effect of variance reduction methods. (orig.)

  20. Atlantic hurricanes and associated insurance loss potentials in future climate scenarios: limitations of high-resolution AGCM simulations

    Directory of Open Access Journals (Sweden)

    Thomas F. Stocker

    2012-01-01

    Potential future changes in tropical cyclone (TC) characteristics are among the more serious regional threats of global climate change. Therefore, a better understanding of how anthropogenic climate change may affect TCs and how these changes translate into socio-economic impacts is required. Here, we apply a TC detection and tracking method that was developed for ERA-40 data to time-slice experiments of two atmospheric general circulation models, namely the fifth version of the European Centre Hamburg model (MPI, Hamburg, Germany; T213) and the Japan Meteorological Agency/Meteorological Research Institute model (MRI, Tsukuba City, Japan; TL959). For each model, two climate simulations are available: a control simulation for present-day conditions to evaluate the model against observations, and a scenario simulation to assess future changes. The evaluation of the control simulations shows that the number of intense storms is underestimated due to the model resolution. To overcome this deficiency, simulated cyclone intensities are scaled to the best track data, leading to a better representation of the TC intensities. Both models project an increased number of major hurricanes and modified trajectories in their scenario simulations. These changes have an effect on the projected loss potentials. However, these state-of-the-art models still yield contradicting results, and therefore they are not yet suitable to provide robust estimates of losses due to uncertainties in simulated hurricane intensity, location and frequency.

  1. Validation of high-resolution aerosol optical thickness simulated by a global non-hydrostatic model against remote sensing measurements

    Science.gov (United States)

    Goto, Daisuke; Sato, Yousuke; Yashiro, Hisashi; Suzuki, Kentaroh; Nakajima, Teruyuki

    2017-02-01

    A high-performance computing resource allows us to conduct numerical simulations with a horizontal grid spacing that is sufficiently high to resolve cloud systems. The cutting-edge computational capability provided by the K computer at RIKEN in Japan enabled the authors to perform long-term, global simulations of air pollution and clouds with unprecedentedly high horizontal resolution. In this study, a next-generation model capable of simulating global air pollution with O(10 km) grid spacing was developed by coupling an atmospheric chemistry model to the Non-hydrostatic Icosahedral Atmospheric Model (NICAM). Using the newly developed model, month-long simulations for July were conducted with 14 km grid spacing on the K computer. Regarding the global distributions of aerosol optical thickness (AOT), it was found that the correlation coefficient (CC) between the simulation and AERONET measurements was approximately 0.7, and the normalized mean bias was -10%. The simulated AOT was also compared with satellite-retrieved values; the CC was approximately 0.6. The radiative effects due to each chemical species (dust, sea salt, organics, and sulfate) were also calculated and compared with multiple measurements. As a result, the simulated fluxes of upward shortwave radiation at the top of the atmosphere and at the surface compared well with the observed values, whereas those of downward shortwave radiation at the surface were underestimated, even if all aerosol components were considered. However, the aerosol radiative effects on the downward shortwave flux at the surface were found to be as high as 10 W/m2 on a global scale; thus, simulated aerosol distributions can strongly affect the simulated air temperature and dynamic circulation.
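
    The two skill scores quoted above (correlation coefficient and normalized mean bias) can be computed as in the sketch below; the NMB convention sum(model - obs)/sum(obs) is an assumption, since definitions vary slightly between studies, and the numbers are illustrative only.

```python
import numpy as np

def evaluation_scores(model, obs):
    """Pearson correlation coefficient (CC) and normalized mean bias (NMB),
    with NMB = sum(model - obs) / sum(obs) -- one common convention."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    cc = np.corrcoef(model, obs)[0, 1]
    nmb = (model - obs).sum() / obs.sum()
    return cc, nmb

# Illustrative AOT values only (not AERONET or NICAM data).
obs = np.array([0.12, 0.30, 0.25, 0.45, 0.18])
model = np.array([0.10, 0.28, 0.20, 0.40, 0.17])
cc, nmb = evaluation_scores(model, obs)
print("CC = %.2f, NMB = %.0f%%" % (cc, 100 * nmb))
```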

  2. Simulation of synoptic and sub-synoptic phenomena over East Africa and Arabian Peninsula for current and future climate using a high resolution AGCM

    KAUST Repository

    Raj, Jerry

    2015-04-01

    Climate regimes of East Africa and Arabia are complex and poorly understood. East Africa has large-scale tropical controls like major convergence zones and air streams. The region is in the proximity of two monsoons, north-east and south-west, and the humid and thermally unstable Congo air stream. The domain comprises regions with one, two, and three rainfall maxima, and the rainfall pattern over this region has high spatial variability. To explore the synoptic and sub-synoptic phenomena that drive the climate of the region we conducted climate simulations using a high resolution Atmospheric General Circulation Model (AGCM), GFDL's High Resolution Atmospheric Model (HiRAM). Historic simulations (1975-2004) and future projections (2007-2050), with both RCP 4.5 and RCP 8.5 pathways, were performed according to the CORDEX standard. The sea surface temperature (SST) was prescribed from the 2°x2.5° latitude-longitude resolution GFDL Earth System Model runs of IPCC AR5 as the bottom boundary condition over the ocean. Our simulations were conducted at a horizontal grid spacing of 25 km, which is an ample resolution for regional climate simulation. In comparison with regional models, global HiRAM has the advantage of accounting for two-way interaction between regional and global scale processes. Our initial results show that the HiRAM simulations for the historic period reproduce well the regional climate in East Africa and the Arabian Peninsula with its complex interplay of regional and global processes. Our future projections indicate warming and increased precipitation over the Ethiopian highlands and the Greater Horn of Africa. We found significant regional differences between the RCP 4.5 and RCP 8.5 projections; e.g., the west coast of the Arabian Peninsula shows anomalies of opposite sign in these two simulations.

  3. The Impact of High-Resolution Sea Surface Temperatures on the Simulated Nocturnal Florida Marine Boundary Layer

    Science.gov (United States)

    LaCasse, Katherine M.; Splitt, Michael E.; Lazarus, Steven M.; Lapenta, William M.

    2008-01-01

    High- and low-resolution sea surface temperature (SST) analysis products are used to initialize the Weather Research and Forecasting (WRF) Model for May 2004 for short-term forecasts over Florida and surrounding waters. Initial and boundary conditions for the simulations were provided by a combination of observations, large-scale model output, and analysis products. The impact of using a 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) SST composite on subsequent evolution of the marine atmospheric boundary layer (MABL) is assessed through simulation comparisons and limited validation. Model results are presented for individual simulations, as well as for aggregates of easterly- and westerly-dominated low-level flows. The simulation comparisons show that the use of MODIS SST composites results in enhanced convergence zones, earlier and more intense horizontal convective rolls, and an increase in precipitation as well as a change in precipitation location. Validation of 10-m winds with buoys shows a slight improvement in wind speed. The most significant results of this study are that 1) vertical wind stress divergence and pressure gradient accelerations across the Florida Current region vary in importance as a function of flow direction and stability and 2) the warmer Florida Current in the MODIS product transports heat vertically and downwind of this heat source, modifying the thermal structure and the MABL wind field primarily through pressure gradient adjustments.

  4. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  5. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    Science.gov (United States)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  6. High resolution solar observations

    International Nuclear Information System (INIS)

    Title, A.

    1985-01-01

    Currently there is a world-wide effort to develop the optical technology required for large diffraction-limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations, it is essential to keep in mind that a diffraction-limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter-class and larger diffraction-limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an earth-based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself.
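
    The point that aperture sets the achievable resolution can be made concrete with the Rayleigh criterion; the sketch below compares a 30 cm, a 1 m and a 4 m aperture at 500 nm (numbers chosen for illustration only).

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh diffraction limit theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

for d in (0.3, 1.0, 4.0):   # 30 cm, 1 m and 4 m apertures
    print("D = %.1f m -> %.3f arcsec at 500 nm"
          % (d, rayleigh_limit_arcsec(500e-9, d)))
```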

  7. A High-resolution Simulation of the Transport of Gaseous Pollutants from a Severe Effusive Volcanic Eruption

    Science.gov (United States)

    Durand, J.; Tulet, P.; Filippi, J. B.; Leriche, M.

    2014-12-01

    Reunion Island experienced the biggest eruption of the Piton de la Fournaise volcano during April 2007. Known as "the eruption of the century", this event degassed more than 230 kt of SO2. These emissions led to important health issues, accompanied by environmental and infrastructure degradation. We present a modeling study that uses the mesoscale chemical model MesoNH to simulate the transport of gaseous SO2 between April 2nd and 7th, with a focus on the influence of heat fluxes from the lava. Three nested domains, with horizontal grid spacing from 2 km down to 100 m, allow us to better represent the phenomenology of this eruption. This modelling study couples on-line (i) the MesoNH mesoscale dynamics, (ii) a gas and aqueous chemical scheme, and (iii) a surface scheme that integrates a new scheme for the lava heat flux and its surface propagation. Thus, all fluxes (sensible and latent heat, water vapor, SO2, CO2, CO) are triggered depending on the lava dynamics. Our simulations reproduce quite faithfully the surface field observations of SO2. Various sensitivity analyses show that the volcanic sulfur distribution was mainly controlled by the lava heat flow. Without the heat flow parameterization, the surface concentrations are multiplied by a factor of 30 compared to the reference simulation. Numerical modeling allows us to distinguish acid rain produced by the emission of water vapor and chloride when the lava flows into seawater from that formed by the mixing of volcanic SO2 into the raindrops of convective clouds.

  8. Intramolecular diffusive motion in alkane monolayers studied by high-resolution quasielastic neutron scattering and molecular dynamics simulations

    DEFF Research Database (Denmark)

    Hansen, Flemming Yssing; Criswell, L.; Fuhrmann, D

    2004-01-01

    Molecular dynamics simulations of a tetracosane (n-C24H50) monolayer adsorbed on a graphite basal-plane surface show that there are diffusive motions associated with the creation and annihilation of gauche defects occurring on a time scale of ~0.1-4 ns. We present evidence that these relatively slow motions are observable by high-energy-resolution quasielastic neutron scattering (QNS), thus demonstrating QNS as a technique, complementary to nuclear magnetic resonance, for studying conformational dynamics on a nanosecond time scale in molecular monolayers.

  9. High resolution temperature mapping of gas turbine combustor simulator exhaust with femtosecond laser induced fiber Bragg gratings

    Science.gov (United States)

    Walker, Robert B.; Yun, Sangsig; Ding, Huimin; Charbonneau, Michel; Coulas, David; Lu, Ping; Mihailov, Stephen J.; Ramachandran, Nanthan

    2017-04-01

    Femtosecond infrared (fs-IR) laser written fiber Bragg gratings (FBGs) have demonstrated great potential for extreme sensing. Such conditions are inherent in advanced gas turbine engines under development to reduce greenhouse gas emissions, and the ability to measure temperature gradients in these harsh environments is currently limited by the lack of sensors and controls capable of withstanding the high temperature, pressure and corrosive conditions present. This paper discusses the fabrication and deployment of several fs-IR written FBG arrays for monitoring exhaust temperature gradients of a gas turbine combustor simulator. Results include contour plots of measured temperature gradients contrasted with thermocouple data.

  10. Vegetation and Carbon Cycle Dynamics in the High-Resolution Transient Holocene Simulations Using the MPI Earth System Model

    Science.gov (United States)

    Brovkin, V.; Lorenz, S.; Raddatz, T.; Claussen, M.; Dallmeyer, A.

    2017-12-01

    One of the interesting periods in which to investigate the climatic role of the terrestrial biosphere is the Holocene, when, despite the relatively steady global climate, atmospheric CO2 grew by about 20 ppm from 7 kyr BP to pre-industrial times. We use a new setup of the Max Planck Institute Earth System Model MPI-ESM1 consisting of the latest version of the atmospheric model ECHAM6, including the land surface model JSBACH3 with carbon cycle and vegetation dynamics, coupled to the ocean circulation model MPI-OM, which includes the HAMOCC model of ocean biogeochemistry. The model has been run for several simulations over the Holocene period of the last 8000 years under forcing data sets of orbital insolation, atmospheric greenhouse gases, volcanic aerosols, solar irradiance and stratospheric ozone, as well as land-use changes. In response to this forcing, the land carbon storage increased by about 60 PgC between 8 and 4 kyr BP, stayed relatively constant until 2 kyr BP, and decreased by about 90 PgC by 1850 AD due to land use changes. At 8 kyr BP, vegetation cover was much denser in Africa, mainly due to increased rainfall in response to the orbital forcing. Boreal forests moved northward in both North America and Eurasia. The boreal forest expansion in North America is much less pronounced than in Eurasia. Simulated physical ocean fields, including surface temperatures and meridional overturning, do not change substantially over the Holocene. The carbonate ion concentration in the deep ocean decreases in both prescribed and interactive CO2 simulations. A comparison with available proxies for terrestrial vegetation and for ocean carbonate chemistry will be presented. Vegetation and soil carbon changes significantly affected atmospheric CO2 during the periods of strong volcanic eruptions. In response to the eruption-caused cooling, the land initially stores more carbon as respiration decreases, but then it releases even more carbon due to the decrease in productivity. This decadal

  11. Chirality in MoS2 nanotubes studied by molecular dynamics simulation and high resolution microscopy images

    International Nuclear Information System (INIS)

    Perez A, M.

    2003-01-01

    Nanotubes are a new class of material, intensely studied since 1991 because of characteristics that result from their nanometric size and the associated quantum effects. A large part of these investigations has focused on characterization, modelling and computer simulation, in order to study their properties and possible behavior without the need for real manipulation of the material. Obtaining the structural properties of the different forms of particles of nanometric dimensions observed in the transmission electron microscope is of great help in studying the mesoscopic characteristics of the material. (Author)

  12. Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model

    International Nuclear Information System (INIS)

    Reed, K. A.

    2015-01-01

    Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid flow component of any general circulation model (GCM), is often overlooked in the analysis of a model's ability to simulate TCs compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages, despite the latter having a slightly more favorable TC environment. Furthermore, these results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key Points: the impact of the GCM dynamical core is often overlooked in TC assessments; the CAM5 dynamical core has a significant impact on TC frequency and intensity; a larger effort is needed to better understand this uncertainty.

  13. Star formation in N-body simulations. I. The impact of the stellar ultraviolet radiation on star formation

    NARCIS (Netherlands)

    Gerritsen, JPE; Icke, V

    We present numerical simulations of isolated disk galaxies including gas dynamics and star formation. The gas is allowed to cool to 10 K, while heating of the gas is provided by the far-ultraviolet flux of all stars. Stars are allowed to form from the gas according to a Jeans instability criterion:
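
    The (truncated) criterion mentioned above is a Jeans instability condition; for reference, the standard Jeans length and mass against which gas regions are typically tested are given below. The exact criterion used in the paper is not reproduced here.

```latex
% Standard Jeans length and mass for gas of density rho and sound speed c_s;
% a region is roughly eligible for star formation when its mass exceeds M_J:
\lambda_J = c_s \sqrt{\frac{\pi}{G\rho}}, \qquad
M_J = \frac{4\pi}{3}\,\rho\left(\frac{\lambda_J}{2}\right)^{3} .
```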

  14. High-resolution fast temperature mapping of a gas turbine combustor simulator with femtosecond infrared laser written fiber Bragg gratings

    Science.gov (United States)

    Walker, Robert B.; Yun, Sangsig; Ding, Huimin; Charbonneau, Michel; Coulas, David; Ramachandran, Nanthan; Mihailov, Stephen J.

    2017-02-01

    Femtosecond infrared (fs-IR) written fiber Bragg gratings (FBGs), have demonstrated great potential for extreme sensing. Such conditions are inherent to the advanced gas turbine engines under development to reduce greenhouse gas emissions; and the ability to measure temperature gradients in these harsh environments is currently limited by the lack of sensors and controls capable of withstanding the high temperature, pressure and corrosive conditions present. This paper discusses fabrication and deployment of several fs-IR written FBG arrays, for monitoring the sidewall and exhaust temperature gradients of a gas turbine combustor simulator. Results include: contour plots of measured temperature gradients contrasted with thermocouple data, discussion of deployment strategies and comments on reliability.

  15. Mesoscale spiral vortex embedded within a Lake Michigan snow squall band - High resolution satellite observations and numerical model simulations

    Science.gov (United States)

    Lyons, Walter A.; Keen, Cecil S.; Hjelmfelt, Mark; Pease, Steven R.

    1988-01-01

    It is known that Great Lakes snow squall convection occurs in a variety of different modes depending on various factors such as air-water temperature contrast, boundary-layer wind shear, and geostrophic wind direction. An exceptional and often neglected source of data for mesoscale cloud studies is the ultrahigh resolution multispectral data produced by Landsat satellites. On October 19, 1972, a clearly defined spiral vortex was noted in a Landsat-1 image near the southern end of Lake Michigan during an exceptionally early cold air outbreak over a still very warm lake. In a numerical simulation using a three-dimensional Eulerian hydrostatic primitive equation mesoscale model with an initially uniform wind field, a definite analog to the observed vortex was generated. This suggests that intense surface heating can be a principal cause in the development of a low-level mesoscale vortex.

  16. Vegetation and land carbon feedbacks in the high-resolution transient Holocene simulations using the MPI Earth system model

    Science.gov (United States)

    Brovkin, Victor; Lorenz, Stephan; Raddatz, Thomas

    2017-04-01

    Plants influence climate through changes in land surface biophysics (albedo, transpiration) and in the concentrations of atmospheric greenhouse gases. One of the interesting periods in which to investigate the climatic role of the terrestrial biosphere is the Holocene, when, despite the relatively steady global climate, atmospheric CO2 grew by about 20 ppm from 7 kyr BP to pre-industrial times. We use a new setup of the Max Planck Institute Earth System Model MPI-ESM1 consisting of the latest version of the atmospheric model ECHAM6, including the land surface model JSBACH3 with carbon cycle and vegetation dynamics, coupled to the ocean circulation model MPI-OM, which includes the HAMOCC model of ocean biogeochemistry. The model has been run for several simulations over the Holocene period of the last 8000 years under forcing data sets of orbital insolation, atmospheric greenhouse gases, volcanic aerosols, solar irradiance and stratospheric ozone, as well as land-use changes. In response to this forcing, the land carbon storage increased by about 60 PgC between 8 and 4 kyr BP, stayed relatively constant until 2 kyr BP, and decreased by about 90 PgC by 1850 AD due to land use changes. Vegetation and soil carbon changes significantly affected atmospheric CO2 during the periods of strong volcanic eruptions. In response to the eruption-caused cooling, the land initially stores more carbon as respiration decreases, but then it releases even more carbon due to the decrease in productivity. This decadal-scale variability helps to quantify the vegetation and land carbon feedbacks during past periods when the temporal resolution of the ice-core CO2 record is not sufficient to capture fast CO2 variations. From a set of Holocene simulations with prescribed or interactive atmospheric CO2, we obtain estimates of the climate-carbon feedback useful for future climate studies. Members of the Hamburg Holocene Team: Jürgen Bader, Sebastian Bathiany, Victor Brovkin, Martin Claussen, Traute Cr

  17. Covariability of seasonal temperature and precipitation over the Iberian Peninsula in high-resolution regional climate simulations (1001-2099)

    Science.gov (United States)

    Fernández-Montes, S.; Gómez-Navarro, J. J.; Rodrigo, F. S.; García-Valero, J. A.; Montávez, J. P.

    2017-04-01

    Precipitation and surface temperature are interdependent variables, both as a response to atmospheric dynamics and due to intrinsic thermodynamic relationships and feedbacks between them. This study analyzes the covariability of seasonal temperature (T) and precipitation (P) across the Iberian Peninsula (IP) using regional climate paleosimulations for the period 1001-1990, driven by reconstructions of external forcings. Future climate (1990-2099) was simulated according to SRES scenarios A2 and B2. These simulations enable exploring, at high spatial resolution, robust and physically consistent relationships. In winter, positive P-T correlations dominate west-central IP (Pearson correlation coefficient ρ = + 0.43, for 1001-1990), due to the prevalence of cold-dry and warm-wet conditions, while this relationship weakens and becomes negative towards mountainous, northern and eastern regions. In autumn, negative correlations appear in similar regions as in winter, whereas for summer they extend also to the N/NW of the IP. In spring, the whole IP depicts significant negative correlations, strongest for eastern regions (ρ = - 0.51). This is due to the prevalent frequency of warm-dry and cold-wet modes in these regions and seasons. At the temporal scale, regional correlation series between seasonal anomalies of temperature and precipitation (assessed in 31-year running windows over 1001-1990) show very large multidecadal variability. For winter and spring, periodicities of about 50-60 years arise. The frequency of warm-dry and cold-wet modes appears correlated with the North Atlantic Oscillation (NAO), explaining mainly co-variability changes in spring. For winter and some regions in autumn, maximum and minimum P-T correlations appear in periods with enhanced meridional or easterly circulation (low or high pressure anomalies in the Mediterranean and Europe). In spring and summer, the Atlantic Multidecadal Oscillation shows some fingerprint on the frequency of warm/cold modes. For

  18. Initial conditions for cosmological N-body simulations of the scalar sector of theories of Newtonian, Relativistic and Modified Gravity

    International Nuclear Information System (INIS)

    Valkenburg, Wessel; Hu, Bin

    2015-01-01

    We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology
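
    For reference, the standard Zel'dovich approximation that the paper generalizes is summarized below; in the modified-gravity case the growth factor becomes scale dependent, D = D(k, t), which is what curves the particle paths even at linear order.

```latex
% Standard Zel'dovich approximation with a scale-independent growth factor D(t):
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q}),
\qquad
\nabla_{\mathbf{q}}\cdot\boldsymbol{\Psi}(\mathbf{q}) = -\,\frac{\delta(\mathbf{q},t)}{D(t)} .
```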

  19. Idealized climate change simulations with a high-resolution physical model: HadGEM3-GC2

    Science.gov (United States)

    Senior, Catherine A.; Andrews, Timothy; Burton, Chantelle; Chadwick, Robin; Copsey, Dan; Graham, Tim; Hyder, Pat; Jackson, Laura; McDonald, Ruth; Ridley, Jeff; Ringer, Mark; Tsushima, Yoko

    2016-06-01

    Idealized climate change simulations with a new physical climate model, HadGEM3-GC2, from the Met Office Hadley Centre are presented and contrasted with the earlier MOHC model, HadGEM2-ES. The role of atmospheric resolution is also investigated. The Transient Climate Response (TCR) is 1.9 K/2.1 K at N216/N96 and the Effective Climate Sensitivity (ECS) is 3.1 K/3.2 K at N216/N96. These are substantially lower than HadGEM2-ES (TCR: 2.5 K; ECS: 4.6 K), arising from a combination of changes in the size of climate feedbacks. While the change in the net cloud feedback between HadGEM3 and HadGEM2 is relatively small, there is a change in sign of its longwave component and a strengthening of its shortwave component. At a global scale, there is little impact of the increase in atmospheric resolution on the future climate change signal, and even at a broad regional scale many features are robust, including tropical rainfall changes; however, there are some significant exceptions. For the North Atlantic and western Europe, the tripolar pattern of winter storm changes found in most CMIP5 models is little impacted by resolution, but for the most intense storms there is a larger percentage increase in number at higher resolution than at lower resolution. Arctic sea-ice sensitivity shows a larger dependence on resolution than on atmospheric physics.
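
    The TCR and ECS values quoted above are standard diagnostics of idealized CO2 experiments; for context, the effective climate sensitivity is commonly estimated from a Gregory-style regression of the top-of-atmosphere imbalance against warming, as summarized below (a generic definition, not a description of the specific HadGEM3-GC2 analysis).

```latex
% Gregory-style energy-balance diagnostic: regress the net TOA imbalance N
% against the global-mean warming \Delta T in an abrupt CO2-increase run,
%   N = F - \lambda\,\Delta T ,
% and estimate the effective climate sensitivity from the regression,
%   \mathrm{ECS} \approx F_{2\times} / \lambda ,
% with F_{2x} the forcing for CO2 doubling and \lambda the net feedback
% parameter; TCR is the warming at the time of CO2 doubling in a 1% per year
% CO2-increase run.
```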

  20. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    Science.gov (United States)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to the combined effects of tides, surges and river discharges. Cork City on Ireland's southwest coast is the case study. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells, combined with a tailored adaptive interpolation technique, creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. Through dynamic downscaling, the nested MSN_Flood model facilitates significant improvements in the accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.

  1. Interfaces and strain in InGaAsP/InP heterostructures assessed with dynamical simulations of high-resolution x-ray diffraction curves

    International Nuclear Information System (INIS)

    Vandenberg, J.M.

    1995-01-01

    The interfacial structure of a lattice-matched InGaAs/InP/(100)InP superlattice with a long period of ∼630 Angstrom has been studied by fully dynamical simulations of high-resolution x-ray diffraction curves. This structure exhibits a very symmetrical x-ray pattern enveloping a large number of closely spaced satellite intensities with pronounced maxima and minima. It appears in the dynamical analysis that the position and shape of these maxima and minima are extremely sensitive to the number N of molecular layers and the atomic spacing d of the InGaAs and InP layers, and in particular to the presence of strained interfacial layers. The structural model of strained interfaces was also applied to an epitaxial lattice-matched 700 Angstrom InP/400 Angstrom InGaAsP/(100)InP heterostructure. 9 refs., 3 figs
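
    The ~630 Angstrom period quoted above is related to the angular spacing of the superlattice satellites by the standard kinematic relation below; the fully dynamical simulations described in the record go beyond this estimate and additionally capture interface strain.

```latex
% Kinematic relation between the superlattice period Lambda and the angular
% position theta_m of the m-th order satellite around the zeroth-order peak
% theta_0, for x-ray wavelength lambda:
\sin\theta_m = \sin\theta_0 + \frac{m\,\lambda}{2\Lambda}
\quad\Longrightarrow\quad
\Lambda \approx \frac{|m|\,\lambda}{2\,\Delta\theta_m \cos\theta_0} .
```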

  2. X-ray clusters from a high-resolution hydrodynamic PPM simulation of the cold dark matter universe

    Science.gov (United States)

    Bryan, Greg L.; Cen, Renyue; Norman, Michael L.; Ostriker, Jeremiah P.; Stone, James M.

    1994-01-01

    A new three-dimensional hydrodynamic code based on the piecewise parabolic method (PPM) is utilized to compute the distribution of hot gas in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) universe. Utilizing periodic boundary conditions, a box with size 85 h^-1 Mpc, having cell size 0.31 h^-1 Mpc, is followed in a simulation with 270^3 ≈ 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, sigma_8 = 1.05, Omega_b = 0.06, we find the X-ray-emitting clusters, compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. The results, which are compared with those obtained in the preceding paper (Kang et al. 1994a), may be used in conjunction with ROSAT and other observational data sets. Overall, the results of the two computations are qualitatively very similar with regard to the trends of cluster properties, i.e., how the number density, radius, and temperature depend on luminosity and redshift. The total luminosity from clusters is approximately a factor of 2 higher using the PPM code (as compared to the 'total variation diminishing' (TVD) code used in the previous paper) with the number of bright clusters higher by a similar factor. The primary conclusions of the prior paper, with regard to the power spectrum of the primeval density perturbations, are strengthened: the standard CDM model, normalized to the COBE microwave detection, predicts too many bright X-ray emitting clusters, by a factor probably in excess of 5. The comparison between observations and theoretical predictions for the evolution of cluster properties, luminosity functions, and size and temperature distributions should provide an important discriminator among competing scenarios for the development of structure in the universe.
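
    In simulations of this kind the cluster X-ray emission is dominated by thermal bremsstrahlung from the hot intracluster gas; the scaling below is quoted only as context and is not the emissivity treatment used in the paper.

```latex
% Thermal bremsstrahlung (free-free) emissivity and the resulting bolometric
% X-ray luminosity of a cluster of volume V, electron/ion densities n_e, n_i
% and gas temperature T:
\epsilon^{\mathrm{ff}} \;\propto\; n_e\, n_i\, T^{1/2},
\qquad
L_X \;\sim\; \int_V \epsilon^{\mathrm{ff}}\, dV \;\propto\; \langle n_e^{2}\rangle\, T^{1/2}\, V .
```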

  3. High-Resolution Biogeochemical Simulation Identifies Practical Opportunities for Bioenergy Landscape Intensification Across Diverse US Agricultural Regions

    Science.gov (United States)

    Field, J.; Adler, P. R.; Evans, S.; Paustian, K.; Marx, E.; Easter, M.

    2015-12-01

    The sustainability of biofuel expansion is strongly dependent on the environmental footprint of feedstock production, including both direct impacts within feedstock-producing areas and potential leakage effects due to disruption of existing food, feed, or fiber production. Assessing and minimizing these impacts requires novel methods compared to traditional supply chain lifecycle assessment. When properly validated and applied at appropriate spatial resolutions, biogeochemical process models are useful for simulating how the productivity and soil greenhouse gas fluxes of cultivating both conventional crops and advanced feedstock crops respond across gradients of land quality and management intensity. In this work we use the DayCent model to assess the biogeochemical impacts of agricultural residue collection, establishment of perennial grasses on marginal cropland or conservation easements, and intensification of existing cropping at high spatial resolution across several real-world case study landscapes in diverse US agricultural regions. We integrate the resulting estimates of productivity, soil carbon changes, and nitrous oxide emissions with crop production budgets and lifecycle inventories, and perform a basic optimization to generate landscape cost/GHG frontiers and determine the most practical opportunities for low-impact feedstock provisioning. The optimization is constrained to assess the minimum combined impacts of residue collection, land use change, and intensification of existing agriculture necessary for the landscape to supply a commercial-scale biorefinery while maintaining existing food, feed, and fiber production levels. These techniques can be used to assess how different feedstock provisioning strategies perform on both economic and environmental criteria, and sensitivity of performance to environmental and land use factors. The included figure shows an example feedstock cost-GHG mitigation tradeoff frontier for a commercial-scale cellulosic

  4. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    Science.gov (United States)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute the gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar, space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
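
    The abstract does not give PHoToNs' exact force-splitting kernel; the sketch below illustrates the generic TreePM-style idea with the error-function split used in many codes (e.g. Gadget-2): the short-range part decays quickly beyond a splitting scale and is handled by the Tree/PP kernels, while the smooth long-range remainder is suited to the PM mesh.

```python
import numpy as np
from scipy.special import erfc

def force_split(r, G=1.0, m=1.0, r_s=1.0):
    """Split the pairwise Newtonian force G*m/r^2 into a short-range part
    (Tree/PP) and a long-range remainder (PM), using the error-function
    cutoff common in TreePM codes. Illustrative only, not PHoToNs' kernel."""
    newton = G * m / r**2
    short = newton * (erfc(r / (2.0 * r_s))
                      + (r / (r_s * np.sqrt(np.pi))) * np.exp(-r**2 / (4.0 * r_s**2)))
    long_range = newton - short
    return short, long_range

r = np.array([0.1, 1.0, 5.0, 10.0])
short, long_range = force_split(r)
print(np.allclose(short + long_range, 1.0 / r**2))   # parts sum to the full force
```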

  5. Coupled atmosphere ocean climate model simulations in the Mediterranean region: effect of a high-resolution marine model on cyclones and precipitation

    Directory of Open Access Journals (Sweden)

    A. Sanna

    2013-06-01

    In this study we investigate the importance of an eddy-permitting Mediterranean Sea circulation model for the simulation of atmospheric cyclones and precipitation in a climate model. This is done by analyzing the results of two fully coupled GCM (general circulation model) simulations, differing only in the presence/absence of an interactive marine module, at very high resolution (~1/16°), for the simulation of the 3-D circulation of the Mediterranean Sea. Cyclones are tracked by applying an objective Lagrangian algorithm to the MSLP (mean sea level pressure) field. On an annual basis, we find a statistically significant difference in vast cyclogenesis regions (northern Adriatic, Sirte Gulf, Aegean Sea and southern Turkey) and in lifetime, giving evidence of the effect of both land–sea contrast and surface heat flux intensity and spatial distribution on cyclone characteristics. Moreover, annual mean convective precipitation changes significantly in the two model climatologies as a consequence of differences in both air–sea interaction strength and frequency of cyclogenesis in the two analyzed simulations.

  6. Hot gas in the cold dark matter scenario: X-ray clusters from a high-resolution numerical simulation

    Science.gov (United States)

    Kang, Hyesung; Cen, Renyue; Ostriker, Jeremiah P.; Ryu, Dongsu

    1994-01-01

    A new, three-dimensional, shock-capturing hydrodynamic code is utilized to determine the distribution of hot gas in a standard cold dark matter (CDM) model of the universe. Periodic boundary conditions are assumed: a box with size 85 h^-1 Mpc having cell size 0.31 h^-1 Mpc is followed in a simulation with 270^3 ≈ 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, sigma_8 = 1.05, Omega_b = 0.06, and assuming h = 0.5, we find the X-ray-emitting clusters and compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. We find that most of the total X-ray emissivity in our box originates in a relatively small number of identifiable clusters which occupy approximately 10^-3 of the box volume. This standard CDM model, normalized to COBE, produces approximately 5 times too much emission from clusters having L_x > 10^43 erg/s, a not-unexpected result. If all other parameters were unchanged, we would expect adequate agreement for sigma_8 = 0.6. This provides a new and independent argument for lower small-scale power than standard CDM at the 8 h^-1 Mpc scale. The background radiation field at 1 keV due to clusters in this model is approximately one-third of the observed background, which, after correction for numerical effects, again indicates approximately 5 times too much emission and the appropriateness of sigma_8 = 0.6. If we had used the observed ratio of gas to total mass in clusters, rather than basing the mean density on light-element nucleosynthesis, then the computed luminosity of each cluster would have increased still further, by a factor of approximately 10. The number density of clusters increases to z approximately 1, but the luminosity per typical cluster decreases, with the result that evolution in the number density of bright

  7. High resolution data acquisition

    Science.gov (United States)

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
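    The record above describes combining a coarse clock-pulse count with a fine fraction read off a synchronous triangular wave at the start and end of the event. The sketch below is only one way to read that arithmetic (clock rate, peak amplitude, and the sampled values are hypothetical illustration choices, not the patented circuit):

```python
# Sketch of high-resolution interval timing: coarse counter plus fine phase
# interpolation from a triangular wave sampled at event start and stop.
# All numerical values are hypothetical illustration values.

CLOCK_PERIOD = 10e-9  # assumed 100 MHz clock -> 10 ns per pulse

def fine_fraction(amplitude, slope, peak=1.0):
    """Convert a sampled triangle-wave amplitude and slope sign into the
    fraction of a clock period elapsed since the last clock edge."""
    # Rising half of the triangle covers the first half-period,
    # falling half covers the second half-period.
    if slope > 0:
        return 0.5 * amplitude / peak
    return 0.5 + 0.5 * (peak - amplitude) / peak

def event_interval(gross_counts, start_sample, stop_sample):
    """Combine the gross pulse count with start/stop fine fractions."""
    f_start = fine_fraction(*start_sample)
    f_stop = fine_fraction(*stop_sample)
    return (gross_counts + f_stop - f_start) * CLOCK_PERIOD

# Example: 42 whole clock pulses, event starts 30% into a rising ramp
# and stops 80% into a falling ramp.
print(event_interval(42, (0.6, +1), (0.4, -1)))  # ~4.25e-07 s
```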

  8. ANL high resolution injector

    International Nuclear Information System (INIS)

    Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.

    1985-01-01

    The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs

  9. Ultra high resolution tomography

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, W.S.

    1994-11-15

Recent work and results on ultra-high-resolution three-dimensional imaging with soft x-rays will be presented. This work is aimed at determining the microscopic three-dimensional structure of biological and material specimens. Three-dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.

  10. High resolution (transformers.

    Science.gov (United States)

    Garcia-Souto, Jose A; Lamela-Rivera, Horacio

    2006-10-16

A novel fiber-optic interferometric sensor is presented for vibration measurements and analysis. Here it is applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using the direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique. Applications to vibrations within power transformers are also highlighted.

  11. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    Science.gov (United States)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating-point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
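    The fourth-order Hermite scheme mentioned above requires, for every particle, both the gravitational acceleration and its time derivative (the "jerk"); it is this pairwise kernel that is vectorized with AVX in the paper. A plain NumPy sketch of the kernel is given below (direct summation, no SIMD intrinsics; the softening parameter and units are arbitrary illustration choices, not part of the original code):

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-8):
    """Direct-summation gravitational acceleration and jerk (G = 1).

    This is only a NumPy illustration of the pairwise kernel a Hermite
    integrator needs; the AVX-accelerated code in the paper evaluates the
    same quantities with hand-written SIMD intrinsics.
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    jerk = np.zeros_like(vel)
    for i in range(n):
        dr = pos - pos[i]                      # (n, 3) separations r_j - r_i
        dv = vel - vel[i]
        r2 = np.einsum('ij,ij->i', dr, dr) + eps2
        r2[i] = 1.0                            # placeholder for the self term
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                        # remove self interaction
        rv = np.einsum('ij,ij->i', dr, dv)     # r . v per pair
        acc[i] = np.sum(mass[:, None] * dr * inv_r3[:, None], axis=0)
        jerk[i] = np.sum(mass[:, None] * (dv * inv_r3[:, None]
                         - 3.0 * rv[:, None] * dr * (inv_r3 / r2)[:, None]),
                         axis=0)
    return acc, jerk

# Tiny example: two equal-mass bodies on the x axis.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
mass = np.array([1.0, 1.0])
print(acc_jerk(pos, vel, mass))
```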

  12. COINCIDENCES BETWEEN O VI AND O VII LINES: INSIGHTS FROM HIGH-RESOLUTION SIMULATIONS OF THE WARM-HOT INTERGALACTIC MEDIUM

    International Nuclear Information System (INIS)

    Cen Renyue

    2012-01-01

With high-resolution (0.46 h^-1 kpc), large-scale, adaptive mesh-refinement Eulerian cosmological hydrodynamic simulations we compute properties of O VI and O VII absorbers from the warm-hot intergalactic medium (WHIM) at z = 0. Our new simulations are in broad agreement with previous simulations, with ~40% of the intergalactic medium being in the WHIM. Our simulations are in agreement with observed properties of O VI absorbers with respect to the line incidence rate and the Doppler-width-column-density relation. It is found that the amount of gas in the WHIM below and above 10^6 K is roughly equal. Strong O VI absorbers are found to be predominantly collisionally ionized. It is found that (61%, 57%, 39%) of O VI absorbers with log N(O VI) (cm^-2) = (12.5-13, 13-14, > 14) have T < 10^5 K. Cross correlations between galaxies and strong [N(O VI) > 10^14 cm^-2] O VI absorbers on ~100-300 kpc scales are suggested as a potential differentiator between collisional ionization and photoionization models. A quantitative prediction is made for the presence of broad and shallow O VI lines that are largely missed by current observations but will be detectable by Cosmic Origins Spectrograph observations. The reported 3σ upper limit on the mean column density of coincidental O VII lines at the location of detected O VI lines by Yao et al. is above our predicted value by a factor of 2.5-4. The claimed observational detection of O VII lines by Nicastro et al., if true, is 2σ above what our simulations predict.

  13. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using High Resolution Lyon-Feddor-Mobarry (LFM) Code

    Science.gov (United States)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

Test particle simulations of electron injection by the bursty bulk flows (BBFs) have been done using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run with high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward-propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch angle scattering of electrons resulting in the violation of conservation of the first adiabatic invariant has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of > 10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) Global MHD Magnetospheric Simulation Code, J. Atmos. Solar-Terr. Phys., 66, Issue 15-16, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.
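    Test-particle tracing of this kind integrates charged-particle motion through electric and magnetic fields taken from the MHD output. As a hedged illustration of one standard integration step (a non-relativistic Boris push with prescribed uniform fields, not the Kress et al. tracer itself; field values and time step are arbitrary), consider:

```python
import numpy as np

Q_E = -1.602e-19   # electron charge [C]
M_E = 9.109e-31    # electron mass [kg]

def boris_push(x, v, E, B, dt, q=Q_E, m=M_E):
    """One Boris step for a charged particle in given E and B fields.

    E(x) and B(x) are callables returning field vectors [V/m], [T]; in a
    real tracer they would be interpolated from the MHD fields.
    """
    v_minus = v + 0.5 * q * dt / m * E(x)
    t = 0.5 * q * dt / m * B(x)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q * dt / m * E(x)
    return x + v_new * dt, v_new

# Uniform-field example: gyration about a 100 nT field with a weak E field.
E_field = lambda x: np.array([0.0, 1e-3, 0.0])
B_field = lambda x: np.array([0.0, 0.0, 100e-9])
x, v = np.zeros(3), np.array([1e5, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_push(x, v, E_field, B_field, dt=1e-5)
print(x, v)
```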

  14. Simulations of Cyclone Sidr in the Bay of Bengal with a High-Resolution Model: Sensitivity to Large-Scale Boundary Forcing

    Science.gov (United States)

    Kumar, Anil; Done, James; Dudhia, Jimy; Niyogi, Dev

    2011-01-01

The predictability of Cyclone Sidr in the Bay of Bengal was explored in terms of track and intensity using the Advanced Research Hurricane Weather Research and Forecasting (AHW) model. This constitutes the first application of the AHW over an area that lies outside the region of the North Atlantic for which this model was developed and tested. Several experiments were conducted to understand the possible contributing factors that affected Sidr's intensity and track simulation by varying the initial start time and domain size. Results show that Sidr's track was strongly controlled by the synoptic flow at the 500-hPa level, especially by the strong mid-latitude westerly over north-central India. A 96-h forecast produced westerly winds over north-central India at the 500-hPa level that were notably weaker; this likely caused the modeled cyclone track to drift from the observed track. Reducing the model domain size reduced model error in the synoptic-scale winds at 500 hPa and produced an improved cyclone track. Specifically, the cyclone track appeared to be sensitive to the upstream synoptic flow, and was, therefore, sensitive to the location of the western boundary of the domain. However, cyclone intensity remained largely unaffected by this synoptic wind error at the 500-hPa level. Comparison of the high-resolution, moving nested domain with a single coarser-resolution domain showed little difference in tracks, but resulted in significantly different intensities. Experiments on the domain size with regard to the total precipitation simulated by the model showed that precipitation patterns and 10-m surface winds were also different. This was mainly due to the mid-latitude westerly flow across the west side of the model domain. The analysis also suggested that the total precipitation pattern and track were unchanged when the domain was extended toward the east, north, and south. Furthermore, this highlights our conclusion that Sidr was influenced from the west

  15. From Modeling of Plasticity in Single-Crystal Superalloys to High-Resolution X-rays Three-Crystal Diffractometer Peaks Simulation

    Science.gov (United States)

    Jacques, Alain

    2016-12-01

    The dislocation-based modeling of the high-temperature creep of two-phased single-crystal superalloys requires input data beyond strain vs time curves. This may be obtained by use of in situ experiments combining high-temperature creep tests with high-resolution synchrotron three-crystal diffractometry. Such tests give access to changes in phase volume fractions and to the average components of the stress tensor in each phase as well as the plastic strain of each phase. Further progress may be obtained by a new method making intensive use of the Fast Fourier Transform, and first modeling the behavior of a representative volume of material (stress fields, plastic strain, dislocation densities…), then simulating directly the corresponding diffraction peaks, taking into account the displacement field within the material, chemical variations, and beam coherence. Initial tests indicate that the simulated peak shapes are close to the experimental ones and are quite sensitive to the details of the microstructure and to dislocation densities at interfaces and within the soft γ phase.

  16. Overview of Proposal on High Resolution Climate Model Simulations of Recent Hurricane and Typhoon Activity: The Impact of SSTs and the Madden Julian Oscillation

    Science.gov (United States)

    Schubert, Siegfried; Kang, In-Sik; Reale, Oreste

    2009-01-01

This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on inter-annual and longer time scales, the modulation of that activity by the Madden Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser-resolution runs to help assess resolution dependence and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.

  17. A Mass-Flux Scheme View of a High-Resolution Simulation of a Transition from Shallow to Deep Cumulus Convection.

    Science.gov (United States)

    Kuang, Zhiming; Bretherton, Christopher S.

    2006-07-01

    In this paper, an idealized, high-resolution simulation of a gradually forced transition from shallow, nonprecipitating to deep, precipitating cumulus convection is described; how the cloud and transport statistics evolve as the convection deepens is explored; and the collected statistics are used to evaluate assumptions in current cumulus schemes. The statistical analysis methodologies that are used do not require tracing the history of individual clouds or air parcels; instead they rely on probing the ensemble characteristics of cumulus convection in the large model dataset. They appear to be an attractive way for analyzing outputs from cloud-resolving numerical experiments. Throughout the simulation, it is found that 1) the initial thermodynamic properties of the updrafts at the cloud base have rather tight distributions; 2) contrary to the assumption made in many cumulus schemes, nearly undiluted air parcels are too infrequent to be relevant to any stage of the simulated convection; and 3) a simple model with a spectrum of entraining plumes appears to reproduce most features of the cloudy updrafts, but significantly overpredicts the mass flux as the updrafts approach their levels of zero buoyancy. A buoyancy-sorting model was suggested as a potential remedy. The organized circulations of cold pools seem to create clouds with larger-sized bases and may correspondingly contribute to their smaller lateral entrainment rates. Our results do not support a mass-flux closure based solely on convective available potential energy (CAPE), and are in general agreement with a convective inhibition (CIN)-based closure. The general similarity in the ensemble characteristics of shallow and deep convection and the continuous evolution of the thermodynamic structure during the transition provide justification for developing a single unified cumulus parameterization that encompasses both shallow and deep convection.

  18. Very high resolution regional climate simulations on the 4 km scale as a basis for carbon balance assessments in northeast European Russia

    Science.gov (United States)

    Stendel, Martin; Hesselbjerg Christensen, Jens; Adalgeirsdottir, Gudfinna; Rinke, Annette; Matthes, Heidrun; Marchenko, Sergej; Daanen, Ronald; Romanovsky, Vladimir

    2010-05-01

    Simulations with global circulation models (GCMs) clearly indicate that major climate changes in polar regions can be expected during the 21st century. Model studies have shown that the area of the Northern Hemisphere underlain by permafrost could be reduced substantially in a warmer climate. However, thawing of permafrost, in particular if it is ice-rich, is subject to a time lag due to the large latent heat of fusion. State-of-the-art GCMs are unable to adequately model these processes because (a) even the most advanced subsurface schemes rarely treat depths below 5 m explicitly, and (b) soil thawing and freezing processes cannot be dealt with directly due to the coarse resolution of present GCMs. Any attempt to model subsurface processes needs information about soil properties, vegetation and snow cover, which are hardly realistic on a typical GCM grid. Furthermore, simulated GCM precipitation is often underestimated and the proportion of rain and snow is incorrect. One possibility to overcome resolution-related problems is to use regional climate models (RCMs). Such an RCM, HIRHAM, has until now been the only one used for the entire circumpolar domain, and its most recent version, HIRHAM5, has also been used in the high resolution study described here. Instead of the traditional approach via a degree-day based frost index from observations or model data, we use the regional model to create boundary conditions for an advanced permafrost model. This approach offers the advantage that the permafrost model can be run on the grid of the regional model, i.e. in a considerably higher resolution than in previous approaches. We here present results from a new time-slice integration with an unprecedented horizontal resolution of only 4 km, covering northeast European Russia. This model simulation has served as basis for an assessment of the carbon balance for a region in northeast European Russia within the EU-funded Carbo-North project.

  19. High resolution ultrasonic densitometer

    International Nuclear Information System (INIS)

    Dress, W.B.

    1983-01-01

    The velocity of torsional stress pulses in an ultrasonic waveguide of non-circular cross section is affected by the temperature and density of the surrounding medium. Measurement of the transit times of acoustic echoes from the ends of a sensor section are interpreted as level, density, and temperature of the fluid environment surrounding that section. This paper examines methods of making these measurements to obtain high resolution, temperature-corrected absolute and relative density and level determinations of the fluid. Possible applications include on-line process monitoring, a hand-held density probe for battery charge state indication, and precise inventory control for such diverse fluids as uranium salt solutions in accountability storage and gasoline in service station storage tanks

  20. The morphological evolution and internal convection of ExB-drifting plasma clouds: Theory, dielectric-in-cell simulations, and N-body dielectric simulations

    International Nuclear Information System (INIS)

    Borovsky, J.E.; Hansen, P.J.

    1998-01-01

The evolution of ExB-drifting plasma clouds is investigated with the aid of a computational technique denoted here as "dielectric-in-cell." Many of the familiar phenomena associated with clouds of collisionless plasma are seen and explained, and less-well-known phenomena associated with convection patterns, with the stripping of cloud material, and with the evolution of plasma clouds composed of differing ion species are investigated. The effects of spatially uniform diffusion are studied with the dielectric-in-cell technique and with another computational technique denoted as "N-body dielectric"; the suppression of convection, the suppression of structure growth, the increase in material stripping, and the evolution of cloud anisotropy are examined. Copyright 1998 American Institute of Physics.

  1. High resolution numerical simulation (WRF V3) of an extreme rainy event over the Guadeloupe archipelago: Case of 3-5 january 2011.

    Science.gov (United States)

    Bernard, Didier C.; Cécé, Raphaël; Dorville, Jean-François

    2013-04-01

During the dry season, the Guadeloupe archipelago may be affected by extreme rainy disturbances which may induce floods in a very short time. C. Brévignon (2003) defined a heavy rain event for this tropical region as rainfall above 100 mm per day (outside mountainous areas). During a cold front passage (3-5 January 2011), torrential rainfall caused floods, major damage, landslides and five deaths. This phenomenon has put into question the current warning system based on large-scale numerical models. This low-resolution forecasting (around the 50-km scale) has proved unsuitable for a small tropical island like Guadeloupe (1600 km2). The most affected area was the middle of Grande-Terre island, which is the main flat island of the archipelago (area of 587 km2, peak at 136 m). It is the most populated sector of Guadeloupe. In this area, observed rainfall reached 100-160 mm in 24 hours (an amount equivalent to two months of rain for January (C. Brévignon, 2003)); in less than 2 hours drainage systems were saturated, and five people died in a ravine. For the past two years, the atmospheric model WRF ARW V3 (Skamarock et al., 2008) has been used to model the meteorological variable fields observed over the Guadeloupe archipelago at a high-resolution 1-km scale (Cécé et al., 2011). The model error estimators show that meteorological variables seem to be properly simulated for standard types of weather: undisturbed, strong or weak trade winds. These simulations indicate that for weak to moderate synoptic winds, a small island like Grande-Terre is able to generate inland convergence zones during daytime. In this presentation, we apply this high-resolution model to simulate the extreme rainy disturbance of 3-5 January 2011. The evolution of the modeled meteorological variable fields is analyzed in the most affected area of Grande-Terre (city of Les Abymes). The main goal is to examine local quasi-stationary updraft systems and highlight their convective mechanisms. The

  2. Can small island mountains provide relief from the Subtropical Precipitation Decline? Simulating future precipitation regimes for small island nations using high resolution Regional Climate Models.

    Science.gov (United States)

    Bowden, J.; Terando, A. J.; Misra, V.; Wootten, A.

    2017-12-01

    Small island nations are vulnerable to changes in the hydrologic cycle because of their limited water resources. This risk to water security is likely even higher in sub-tropical regions where anthropogenic forcing of the climate system is expected to lead to a drier future (the so-called `dry-get-drier' pattern). However, high-resolution numerical modeling experiments have also shown an enhancement of existing orographically-influenced precipitation patterns on islands with steep topography, potentially mitigating subtropical drying on windward mountain sides. Here we explore the robustness of the near-term (25-45 years) subtropical precipitation decline (SPD) across two island groupings in the Caribbean, Puerto Rico and the U.S. Virgin Islands. These islands, forming the boundary between the Greater and Lesser Antilles, significantly differ in size, topographic relief, and orientation to prevailing winds. Two 2-km horizontal resolution regional climate model simulations are used to downscale a total of three different GCMs under the RCP8.5 emissions scenario. Results indicate some possibility for modest increases in precipitation at the leading edge of the Luquillo Mountains in Puerto Rico, but consistent declines elsewhere. We conclude with a discussion of potential explanations for these patterns and the attendant risks to water security that subtropical small island nations could face as the climate warms.

  3. Visualizing astrophysical N-body systems

    International Nuclear Information System (INIS)

    Dubinski, John

    2008-01-01

    I begin with a brief history of N-body simulation and visualization and then go on to describe various methods for creating images and animations of modern simulations in cosmology and galactic dynamics. These techniques are incorporated into a specialized particle visualization software library called MYRIAD that is designed to render images within large parallel N-body simulations as they run. I present several case studies that explore the application of these methods to animations in star clusters, interacting galaxies and cosmological structure formation.

  4. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    Science.gov (United States)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers wide, and grid cells often cover areas that include combinations of land, ocean and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of the scales of atmospheric CO2 variability and the representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition, allowing new insight into small-scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high-resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half-degree resolution that have been down-scaled to 10 km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse-resolution models to represent these small-scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement
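    The sub-grid variability analysis described above amounts to collecting the distribution of 10-km column CO2 values inside each coarse (100-400 km) model cell. A minimal sketch of that aggregation on synthetic data (hypothetical array shapes and mole-fraction values, not GEOS-5 output) could be:

```python
import numpy as np

def subgrid_pdfs(xco2_fine, block=10, bins=np.arange(390.0, 410.0, 0.25)):
    """Histogram the fine-grid column CO2 inside each coarse block.

    xco2_fine : 2-D array of high-resolution XCO2 [ppm]
    block     : number of fine cells per coarse cell in each direction
    Returns a dict mapping coarse-cell index -> (histogram, mean, std).
    """
    ny, nx = xco2_fine.shape
    stats = {}
    for j in range(0, ny - block + 1, block):
        for i in range(0, nx - block + 1, block):
            tile = xco2_fine[j:j + block, i:i + block].ravel()
            hist, _ = np.histogram(tile, bins=bins, density=True)
            stats[(j // block, i // block)] = (hist, tile.mean(), tile.std())
    return stats

# Synthetic 100 x 100 field of 10-km XCO2 values around 400 ppm.
rng = np.random.default_rng(0)
field = 400.0 + rng.normal(scale=1.5, size=(100, 100))
pdfs = subgrid_pdfs(field)
print(len(pdfs), pdfs[(0, 0)][1:])   # number of coarse cells, mean/std of one
```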

  5. A Lagrangian trajectory view on transport and mixing processes between the eye, eyewall, and environment using a high resolution simulation of Hurricane Bonnie (1998)

    Science.gov (United States)

    Cram, Thomas A.; Persing, John; Montgomery, Michael T.; Braun, Scott A.

    2006-01-01

The transport and mixing characteristics of a large sample of air parcels within a mature and vertically sheared hurricane vortex are examined. Data from a high-resolution (2 km grid spacing) numerical simulation of "real-case" Hurricane Bonnie (1998) are used to calculate Lagrangian trajectories of air parcels in various subdomains of the hurricane (namely, the eye, eyewall, and near-environment) to study the degree of interaction (transport and mixing) between these subdomains. It is found that 1) there is transport and mixing from the low-level eye to the eyewall that carries high-θe air which can enhance the efficiency of the hurricane heat engine; 2) a portion of the low-level inflow of the hurricane bypasses the eyewall to enter the eye, which both replaces the mass of the low-level eye and lingers for a sufficient time (order 1 hour) to acquire enhanced entropy characteristics through interaction with the ocean beneath the eye; 3) air in the mid- to upper-level eye is exchanged with the eyewall such that more than half the air of the eye is exchanged in five hours in this case of a sheared hurricane; and 4) one-fifth of the mass in the eyewall at a height of 5 km has an origin in the mid- to upper-level environment where θe is much less than in the eyewall, which ventilates the ensemble-average eyewall θe by about 1 K. Implications of these findings for the problem of hurricane intensity forecasting are discussed.

  6. SACRA - global data sets of satellite-derived crop calendars for agricultural simulations: an estimation of a high-resolution crop calendar using satellite-sensed NDVI

    Science.gov (United States)

    Kotsuki, S.; Tanaka, K.

    2015-01-01

To date, many studies have performed numerical estimations of food production and agricultural water demand to understand the present and future supply-demand relationship. A crop calendar (CC) is an essential input datum for estimating food production and agricultural water demand accurately with such numerical estimations. A CC defines the date or month when farmers plant and harvest in cropland. This study aims to develop a new global data set of a satellite-derived crop calendar for agricultural simulations (SACRA) and to reveal advantages and disadvantages of the satellite-derived CC compared to other global products. We estimate the global CC at a spatial resolution of 5 min (≈10 km) using satellite-sensed NDVI data, which correspond well to vegetation growth and death on the land surface. We first demonstrate that SACRA shows a similar spatial pattern in planting date compared to a census-based product. Moreover, SACRA reflects a variety of CCs in the same administrative unit, since it uses high-resolution satellite data. However, a disadvantage is that the mixture of several crops in a grid is not considered in SACRA. We also show that the cultivation period of SACRA clearly corresponds to the time series of NDVI. Therefore, the accuracy of SACRA depends on the accuracy of the NDVI used for the CC estimation. Although SACRA shows a different CC from a census-based product in some regions, multiple usage of the two products is useful for taking into consideration the uncertainty of the CC. An advantage of SACRA compared to the census-based products is that SACRA provides not only planting/harvesting dates but also a peak date from the time series of NDVI data.
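    Although the SACRA algorithm itself is more elaborate, the core idea of reading planting and harvesting dates off an NDVI time series can be sketched as a threshold-crossing exercise. The threshold fraction and the synthetic NDVI curve below are purely illustrative assumptions, not the published method:

```python
import numpy as np

def crop_calendar(ndvi, dates, frac=0.3):
    """Estimate planting/peak/harvest dates from an NDVI time series.

    Planting is taken as the last crossing of a green-up threshold before
    the NDVI peak, harvest as the first crossing after the peak; the
    threshold is a fraction `frac` of the seasonal NDVI amplitude.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    peak = int(np.argmax(ndvi))
    thresh = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    before = np.where(ndvi[:peak] < thresh)[0]
    after = np.where(ndvi[peak:] < thresh)[0]
    plant = dates[before[-1] + 1] if before.size else dates[0]
    harvest = dates[peak + after[0]] if after.size else dates[-1]
    return plant, dates[peak], harvest

# Synthetic 16-day composite NDVI for one growing season (day of year).
doy = np.arange(1, 366, 16)
ndvi = 0.2 + 0.5 * np.exp(-0.5 * ((doy - 200) / 40.0) ** 2)
print(crop_calendar(ndvi, doy))   # (planting DOY, peak DOY, harvest DOY)
```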

  7. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

This report evaluates progress toward testing the feasibility and initiating the design of a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed

  8. Climatology of Tibetan Plateau Vortices and connection to upper-level flow in reanalysis data and a high-resolution model simulation

    Science.gov (United States)

    Curio, Julia; Schiemann, Reinhard; Hodges, Kevin; Turner, Andrew

    2017-04-01

The Tibetan Plateau (TP) and surrounding high mountain ranges constitute an important forcing of the atmospheric circulation over Asia due to their height and extent. Therefore, the TP impacts weather and climate in downstream regions of East Asia, especially precipitation. Mesoscale Tibetan Plateau Vortices (TPVs) are known to be one of the major precipitation-bearing systems on the TP. They are mainly present at the 500 hPa level and have a vertical extent of 2-3 km, while their horizontal scale is around 500 km. Their average lifetime is 18 hours. There are two types of TPVs: the largest number originate and stay on the TP, while a smaller number are able to move off the plateau to the east. The latter category can cause extreme precipitation events and severe flooding in large parts of eastern and southern China downstream of the TP, e.g. the Yangtze River valley. The first aim of the study is to identify and track TPVs in reanalysis data, to connect TPV activity to the position and strength of the upper-level subtropical jet stream, and to determine favourable conditions for TPV development and maintenance. We identify and track TPVs using the TRACK algorithm developed by Hodges et al. (1994). Relative vorticity at the 500 hPa level from the ERA-Interim and NCEP-CFSR reanalyses is used as input data. TPVs are retained which originate on the TP and which persist for at least two days, since these are more likely to move off the TP to the east. The second aim is to identify TPVs in a high-resolution, present-day climate model simulation of the Met Office Unified Model (UPSCALE, HadGEM3 GA3.0) to assess how well the model represents the TPV climatology and variability. We find that the reanalysis data sets and the model show similar results for the statistical measures of TPVs (genesis, track, and lysis density). The TPV genesis region is small and stable at a specific region of the TP throughout the year. The reason for this seems to be the convergence
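    Conceptually, the selection step above reduces to keeping only those vorticity-feature tracks that originate over the Plateau and persist for at least two days. A toy filter over pre-computed tracks is sketched below; the track structure and the lon/lat box for the Plateau are assumptions for illustration, not the actual TRACK output format:

```python
from datetime import datetime, timedelta

# A track here is a list of (time, lon, lat) points of a vorticity maximum.
# The Tibetan Plateau is crudely approximated by a lon/lat box for the sketch.
TP_BOX = dict(lon=(75.0, 105.0), lat=(27.5, 40.0))   # assumed bounds

def keep_track(track, min_hours=48):
    """Keep a vortex track if it starts over the TP and lasts >= 2 days."""
    t0, lon0, lat0 = track[0]
    duration = track[-1][0] - t0
    on_tp = (TP_BOX['lon'][0] <= lon0 <= TP_BOX['lon'][1]
             and TP_BOX['lat'][0] <= lat0 <= TP_BOX['lat'][1])
    return on_tp and duration >= timedelta(hours=min_hours)

# Two illustrative tracks, 6-hourly points.
start = datetime(2010, 7, 1, 0)
long_track = [(start + timedelta(hours=6 * k), 90.0 + k, 33.0) for k in range(9)]
short_track = [(start + timedelta(hours=6 * k), 95.0, 34.0) for k in range(3)]
print([keep_track(t) for t in (long_track, short_track)])  # [True, False]
```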

  9. Berkeley High-Resolution Ball

    International Nuclear Information System (INIS)

    Diamond, R.M.

    1984-10-01

Criteria for a high-resolution γ-ray system are discussed. Desirable properties are high resolution, a good response function, and moderate solid angle so as to achieve not only double- but triple-coincidences with good statistics. The Berkeley High-Resolution Ball involved the first use of bismuth germanate (BGO) for anti-Compton shields for Ge detectors. The resulting compact shield permitted rather close packing of 21 detectors around a target. In addition, a small central BGO ball gives the total γ-ray energy and multiplicity, as well as the angular pattern of the γ rays. The 21-detector array is nearly complete, and the central ball has been designed, but not yet constructed. First results taken with 9 detector modules are shown for the nucleus 156Er. The complex decay scheme indicates a transition from collective rotation (prolate shape) to single-particle states (possibly oblate) near spin 30 ħ, and has other interesting features

  10. Feasibility of a CdTe-based SPECT for high-resolution low-dose small animal imaging: a Monte Carlo simulation study

    International Nuclear Information System (INIS)

    Park, S-J; Yu, A R; Lee, Y-J; Kim, Y-S; Kim, H-J

    2014-01-01

Dedicated single-photon emission computed tomography (SPECT) systems based on pixelated semiconductors such as cadmium telluride (CdTe) are in development to study small animal models of human disease. In an effort to develop a high-resolution, low-dose system for small animal imaging, we compared a CdTe-based SPECT system and a conventional NaI(Tl)-based SPECT system in terms of spatial resolution, sensitivity, contrast, and contrast-to-noise ratio (CNR). In addition, we investigated the radiation absorbed dose and calculated a figure of merit (FOM) for both SPECT systems. Using the conventional NaI(Tl)-based SPECT system, we achieved a spatial resolution of 1.66 mm at a 30 mm source-to-collimator distance and resolved 2.4-mm hot rods. Using the newly developed CdTe-based SPECT system, we achieved a spatial resolution of 1.32 mm FWHM at a 30 mm source-to-collimator distance and resolved 1.7-mm hot rods. The sensitivities at a 30 mm source-to-collimator distance were 115.73 counts/sec/MBq and 83.38 counts/sec/MBq for the CdTe-based SPECT and conventional NaI(Tl)-based SPECT systems, respectively. To compare quantitative measurements in the mouse brain, we calculated the CNR for images from both systems. The CNR from the CdTe-based SPECT system was 4.41, while that from the conventional NaI(Tl)-based SPECT system was 3.11 when the injected striatal dose was 160 Bq/voxel. The CNR increased as a function of injected dose in both systems. The FOM of the CdTe-based SPECT system was superior to that of the conventional NaI(Tl)-based SPECT system, and the highest FOM was achieved with the CdTe-based SPECT at a dose of 40 Bq/voxel injected into the striatum. Thus, a CdTe-based SPECT system showed significant improvement in performance compared with a conventional system in terms of spatial resolution, sensitivity, and CNR, while reducing the radiation dose to the small animal subject. Herein, we discuss the feasibility of a CdTe-based SPECT system for high-resolution
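    The image-quality comparison above rests on standard figures of merit; the contrast-to-noise ratio, for instance, is simply the signal-to-background contrast divided by the background noise. A short sketch of that calculation on two regions of interest (synthetic voxel values, not the study's data) is:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a target ROI and a background ROI."""
    signal_roi = np.asarray(signal_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

# Synthetic striatum vs. background counts per voxel.
rng = np.random.default_rng(1)
striatum = rng.poisson(160, size=500)      # e.g. a 160 Bq/voxel uptake region
background = rng.poisson(40, size=5000)    # surrounding tissue
print(round(cnr(striatum, background), 2))
```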

  11. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

Breast radiation therapy is typically delivered to the patient in either the supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device and a control module for the device during everyday positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU-based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the springs modeled to represent hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high-resolution nature. The subject-specific elasticity is then estimated from a CT scan in the prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer-designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
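    The deformable model described above treats each CT voxel as a mass node connected to its neighbours by damped springs. A much-reduced, CPU-only sketch of one explicit integration step for such a mass-spring-damper system is given below; it uses a linear 1-D chain with arbitrary stiffness and damping values, not the hyperelastic, GPU-based model of the paper:

```python
import numpy as np

def step(x, v, masses, springs, k=50.0, c=0.5, dt=1e-3, gravity=-9.81):
    """One semi-implicit Euler step of a 1-D mass-spring-damper chain.

    springs is a list of (i, j, rest_length) tuples connecting node indices.
    """
    f = masses * gravity                       # external load on every node
    for i, j, rest in springs:
        d = x[j] - x[i]
        stretch = d - np.sign(d) * rest
        rel_v = v[j] - v[i]
        fij = k * stretch + c * rel_v          # spring + damper force on node i
        f[i] += fij
        f[j] -= fij
    v_new = v + dt * f / masses
    x_new = x + dt * v_new
    return x_new, v_new

# Three nodes hanging in a chain; the top node is pinned by resetting it.
x = np.array([0.0, -1.0, -2.0]); v = np.zeros(3)
m = np.array([0.1, 0.1, 0.1])
springs = [(0, 1, 1.0), (1, 2, 1.0)]
for _ in range(2000):
    x, v = step(x, v, m, springs)
    x[0], v[0] = 0.0, 0.0                      # pin the top node
print(x)   # nodes settle slightly below their rest positions
```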

  12. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of Earth observation systems able to provide military and government users with metric images from space. This leadership allowed ALCATEL to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

  13. Simulation of synoptic and sub-synoptic phenomena over East Africa and Arabian Peninsula for current and future climate using a high resolution AGCM

    KAUST Repository

    Raj, Jerry; Bangalath, Hamza Kunhu; Stenchikov, Georgiy L.

    2015-01-01

between regional- and global-scale processes. Our initial results show that HiRAM simulations for the historic period reproduce well the regional climate in East Africa and the Arabian Peninsula, with their complex interplay of regional and global processes. Our

  14. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    Science.gov (United States)

    Park, Jun; Hwang, Seung-On

    2017-11-01

The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn indirectly helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
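    Spectral nudging, as used above, relaxes only the largest-scale part of the regional model state toward the driving analysis while leaving the small scales free. A one-dimensional toy version of that idea (an FFT low-pass of the increment; the wavenumber cutoff, relaxation time, and synthetic fields are arbitrary illustration values) is:

```python
import numpy as np

def spectral_nudge(field, driving, dt=600.0, tau=6 * 3600.0, n_keep=3):
    """Relax only wavenumbers <= n_keep of `field` toward `driving`.

    Both inputs are periodic 1-D arrays (e.g. a latitude circle of a model
    variable); tau is the nudging time scale and dt the model time step.
    """
    inc = np.fft.rfft(driving - field)
    mask = np.zeros_like(inc)
    mask[:n_keep + 1] = 1.0                    # keep only the large scales
    large_scale_inc = np.fft.irfft(inc * mask, n=field.size)
    return field + (dt / tau) * large_scale_inc

# Toy example: the model has drifted in both large and small scales.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
driving = 10.0 * np.sin(x)
model = 8.0 * np.sin(x) + 1.5 * np.sin(12 * x)   # biased large scale + local detail
for _ in range(288):                              # two days of 10-minute steps
    model = spectral_nudge(model, driving)

amp = lambda f, k: 2.0 * abs(np.fft.rfft(f)[k]) / f.size
print(round(amp(model, 1), 2), round(amp(model, 12), 2))  # ~10.0 (nudged), ~1.5 (untouched)
```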

  15. Simulating single-phase and two-phase non-Newtonian fluid flow of a digital rock scanned at high resolution

    Science.gov (United States)

    Tembely, Moussa; Alsumaiti, Ali M.; Jouini, Mohamed S.; Rahimov, Khurshed; Dolatabadi, Ali

    2017-11-01

Most digital rock physics (DRP) simulations focus on Newtonian fluids and overlook the detailed description of rock-fluid interaction. A better understanding of multiphase non-Newtonian fluid flow at the pore scale is crucial for optimizing enhanced oil recovery (EOR). The Darcy-scale properties of reservoir rocks, such as the capillary pressure curves and the relative permeability, are controlled by the pore-scale behavior of the multiphase flow. In the present work, a volume of fluid (VOF) method coupled with an adaptive meshing technique is used to perform pore-scale simulations on 3D X-ray micro-tomography (CT) images of rock samples. The numerical model is based on the resolution of the Navier-Stokes equations along with a phase fraction equation incorporating a dynamic contact model. The simulation of single-phase flow for the absolute permeability showed good agreement with the literature benchmark. Subsequently, the code is used to simulate a two-phase flow consisting of a polymer solution displaying a shear-thinning power-law viscosity. The simulations enable assessment of the impact of the consistency factor (K) and the behavior index (n), along with the two contact angles (advancing and receding), on the relative permeability.
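    The shear-thinning rheology mentioned above enters the flow solver through a power-law apparent viscosity, mu = K * gamma_dot**(n - 1). A small sketch of that constitutive relation follows; the K and n values and the low-shear cutoff are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def power_law_viscosity(shear_rate, K=0.05, n=0.6, gamma_min=1e-3, mu_max=10.0):
    """Apparent viscosity [Pa.s] of a power-law (shear-thinning if n < 1) fluid.

    mu = K * gamma_dot**(n - 1), clipped at low shear rates where the
    expression would diverge; K is the consistency factor, n the behavior index.
    """
    gamma = np.maximum(np.asarray(shear_rate, dtype=float), gamma_min)
    return np.minimum(K * gamma ** (n - 1.0), mu_max)

# Viscosity of a polymer solution over typical pore-scale shear rates.
shear = np.logspace(-1, 4, 6)            # 0.1 to 10,000 1/s
print(np.round(power_law_viscosity(shear), 5))
```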

  16. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  17. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)

    2014-06-15

Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm^3. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the several isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
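    The MIRD-style bookkeeping behind S-factors combines the cumulated activity in each source organ with the S-factor coupling it to a target organ: D(target) = sum over sources of A_tilde(source) * S(target <- source). A minimal sketch of that sum is shown below; the organ list, cumulated activities, and S-values are hypothetical placeholders, not the paper's results:

```python
# Sketch of MIRD-style absorbed dose: D(target) = sum over source organs of
# cumulated activity A_tilde [MBq*s] times S(target <- source) [Gy/(MBq*s)].
# All numerical values below are hypothetical placeholders.

s_factors = {            # Gy per (MBq*s), keyed as (target, source)
    ("kidneys", "kidneys"): 9.3e-10,
    ("kidneys", "liver"):   1.1e-11,
    ("liver",   "kidneys"): 1.0e-11,
    ("liver",   "liver"):   4.0e-10,
}
cumulated_activity = {   # MBq*s in each source organ
    "kidneys": 2.5e5,
    "liver":   4.0e5,
}

def absorbed_dose(target):
    return sum(s_factors.get((target, src), 0.0) * a_tilde
               for src, a_tilde in cumulated_activity.items())

for organ in ("kidneys", "liver"):
    print(organ, f"{absorbed_dose(organ) * 1e3:.3f} mGy")
```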

  18. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    International Nuclear Information System (INIS)

    Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-01-01

Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm^3. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the several isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the

  19. Investigation into the Formation, Structure, and Evolution of an EF4 Tornado in East China Using a High-Resolution Numerical Simulation

    Science.gov (United States)

    Yao, Dan; Xue, Haile; Yin, Jinfang; Sun, Jisong; Liang, Xudong; Guo, Jianping

    2018-04-01

Devastating tornadoes in China have received growing attention in recent years, but little is known about their formation, structure, and evolution on the tornadic scale. Most of these tornadoes develop within the East Asian monsoon regime, in an environment quite different from that of tornadoes in the U.S. In this study, we used an idealized, high-resolution (25-m grid spacing) numerical simulation to investigate the deadly EF4 (Enhanced Fujita scale category 4) tornado that occurred on 23 June 2016 and claimed 99 lives in Yancheng, Jiangsu Province. A tornadic supercell developed in the simulation that had striking similarities to radar observations. The violent tornado in Funing County was reproduced, exceeding EF4 intensity (74 m s^-1), consistent with the on-site damage survey. It was accompanied by a funnel cloud that extended to the surface, and exhibited a double-helix vorticity structure. The signal of tornado genesis was found first at the cloud base in the pressure perturbation field, and then developed both upward and downward in terms of maximum vertical velocity overlapping with the intense vertical vorticity centers. The tornado's demise was found to accompany strong downdrafts overlapping with the intense vorticity centers. One of the interesting findings of this work is that a violent surface vortex could be generated and maintained even though the simulation employed a free-slip lower boundary condition. The success of this simulation, despite using an idealized numerical approach, provides a means to investigate more historical tornadoes in China.

  20. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    Science.gov (United States)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

Regional climate models (RCMs) are used at increasingly fine resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacings approaching the convection-permitting regime are representative examples. The climate runs are computationally very demanding and do not always show improvements; these depend on the region, variable and object of study. The gain or loss associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different-resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer-resolution simulations in comparison to their forcing data or coarser-resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between the different-resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons for precipitation. When both RCM resolutions are directly compared, the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher-resolution improvement is generally greater than the low resolution for seasons
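    One common way to build such a PDF matching score is the Perkins-type skill score, i.e. the overlap between modelled and observed histograms; an added-value measure can then be written as the normalised difference between the high- and low-resolution scores. The sketch below assumes that specific construction purely for illustration (the published DAV definition should be consulted for the exact formula), with synthetic precipitation samples:

```python
import numpy as np

def pdf_score(model, obs, bins):
    """Perkins-type skill score: overlap of normalised model/obs histograms."""
    pm, _ = np.histogram(model, bins=bins, density=True)
    po, _ = np.histogram(obs, bins=bins, density=True)
    width = np.diff(bins)
    return np.sum(np.minimum(pm, po) * width)

def dav(hi_res, lo_res, obs, bins):
    """Distribution added value of the high- vs. low-resolution simulation."""
    s_hi, s_lo = pdf_score(hi_res, obs, bins), pdf_score(lo_res, obs, bins)
    return (s_hi - s_lo) / s_lo

# Synthetic daily precipitation: obs, a good high-res run, a poorer low-res run.
rng = np.random.default_rng(2)
obs = rng.gamma(shape=0.8, scale=6.0, size=5000)
hi = rng.gamma(shape=0.8, scale=6.5, size=5000)
lo = rng.gamma(shape=1.6, scale=3.0, size=5000)
bins = np.arange(0.0, 100.0, 1.0)
print(f"DAV = {dav(hi, lo, obs, bins):+.2f}")     # positive => added value
```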

  1. SPIRAL2/DESIR high resolution mass separator

    Energy Technology Data Exchange (ETDEWEB)

    Kurtukian-Nieto, T., E-mail: kurtukia@cenbg.in2p3.fr [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Baartman, R. [TRIUMF, 4004 Wesbrook Mall, Vancouver B.C., V6T 2A3 (Canada); Blank, B.; Chiron, T. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Davids, C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Delalee, F. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Duval, M. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); El Abbeir, S.; Fournier, A. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Lunney, D. [CSNSM-IN2P3-CNRS, Université de Paris Sud, F-91405 Orsay (France); Méot, F. [BNL, Upton, Long Island, New York (United States); Serani, L. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Stodel, M.-H.; Varenne, F. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); and others

    2013-12-15

    DESIR is the low-energy part of the SPIRAL2 ISOL facility under construction at GANIL. DESIR includes a high-resolution mass separator (HRS) with a designed resolving power m/Δm of 31,000 for a 1 π-mm-mrad beam emittance, obtained using a high-intensity beam cooling device. The proposed design consists of two 90-degree magnetic dipoles, complemented by electrostatic quadrupoles, sextupoles, and a multipole, arranged in a symmetric configuration to minimize aberrations. A detailed description of the design and results of extensive simulations are given.
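    For orientation, a resolving power of m/Δm = 31,000 means that neighbouring masses closer than roughly m/31,000 cannot be separated. The short calculation below illustrates this for a hypothetical A = 100 beam (the mass number is an arbitrary example, not a DESIR specification):

```python
# Resolving power R = m / delta_m: smallest separable mass difference at A = 100.
R = 31_000
m = 100.0                       # mass number of interest [u]
delta_m = m / R                 # smallest resolvable mass difference [u]
print(f"delta_m = {delta_m:.2e} u  (~{delta_m * 931.494:.1f} MeV/c^2)")
```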

  2. Evaluation of high-resolution GRAMM-GRAL (v15.12/v14.8) NOx simulations over the city of Zürich, Switzerland

    Science.gov (United States)

    Berchet, Antoine; Zink, Katrin; Oettl, Dietmar; Brunner, Jürg; Emmenegger, Lukas; Brunner, Dominik

    2017-09-01

    Hourly NOx concentrations were simulated for the city of Zürich, Switzerland, at 10 m resolution for the years 2013-2014. The simulations were generated with the nested mesoscale meteorology and micro-scale dispersion model system GRAMM-GRAL (versions v15.12 and v14.8) by applying a catalogue-based approach. This approach was specifically designed to enable long-term city-wide building-resolving simulations with affordable computation costs. It relies on a discrete set of possible weather situations and corresponding steady-state flow and dispersion patterns that are pre-computed and then matched hourly with actual meteorological observations. The modelling system was comprehensively evaluated using eight sites continuously monitoring NOx concentrations and 65 passive samplers measuring NO2 concentrations on a 2-weekly basis all over the city. The system was demonstrated to fulfil the European Commission standards for air pollution modelling at nearly all sites. The average spatial distribution was very well represented, despite a general tendency to overestimate the observed concentrations, possibly due to a crude representation of traffic-induced turbulence and to underestimated dispersion in the vicinity of buildings. The temporal variability of concentrations explained by varying emissions and weather situations was accurately reproduced on different timescales. The seasonal cycle of concentrations, mostly driven by stronger vertical dispersion in summer than in winter, was very well captured in the 2-year simulation period. Short-term events, such as episodes of particularly high and low concentrations, were detected in most cases by the system, although some unrealistic pollution peaks were occasionally generated, pointing at some limitations of the steady-state approximation. The different patterns of the diurnal cycle of concentrations observed in the city were generally well captured as well. The evaluation confirmed the adequacy of the catalogue
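    The catalogue-based approach boils down to pre-computing flow and dispersion fields for a finite set of weather classes and then, every hour, picking the class closest to the observed meteorology. A toy nearest-neighbour matcher over a hypothetical catalogue keyed by wind speed, wind direction, and a stability class is sketched below; the catalogue entries, cost weights, and class definitions are all assumptions, not the GRAMM-GRAL implementation:

```python
import numpy as np

# Hypothetical catalogue entries: (wind speed [m/s], wind direction [deg], stability class)
catalogue = [
    (1.0,  45.0, 1), (1.0, 225.0, 1), (3.0,  45.0, 2),
    (3.0, 225.0, 2), (6.0, 315.0, 3), (6.0, 135.0, 3),
]

def match_situation(obs_speed, obs_dir, obs_stab, weights=(1.0, 1.0 / 45.0, 2.0)):
    """Return the index of the catalogue class closest to the observed hour.

    The direction difference is wrapped to [-180, 180) degrees; the weights
    (arbitrary here) set the relative importance of speed, direction, stability.
    """
    best, best_cost = None, np.inf
    for k, (spd, wdir, stab) in enumerate(catalogue):
        ddir = (obs_dir - wdir + 180.0) % 360.0 - 180.0
        cost = (weights[0] * abs(obs_speed - spd)
                + weights[1] * abs(ddir)
                + weights[2] * abs(obs_stab - stab))
        if cost < best_cost:
            best, best_cost = k, cost
    return best

# Observed hour: 2.4 m/s from 60 deg, slightly stable -> catalogue class 2 is selected.
print(match_situation(2.4, 60.0, 2))
```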

  3. Evaluation of high-resolution GRAMM–GRAL (v15.12/v14.8) NOx simulations over the city of Zürich, Switzerland

    Directory of Open Access Journals (Sweden)

    A. Berchet

    2017-09-01

    Full Text Available Hourly NOx concentrations were simulated for the city of Zürich, Switzerland, at 10 m resolution for the years 2013–2014. The simulations were generated with the nested mesoscale meteorology and micro-scale dispersion model system GRAMM–GRAL (versions v15.12 and v14.8) by applying a catalogue-based approach. This approach was specifically designed to enable long-term city-wide building-resolving simulations with affordable computation costs. It relies on a discrete set of possible weather situations and corresponding steady-state flow and dispersion patterns that are pre-computed and then matched hourly with actual meteorological observations. The modelling system was comprehensively evaluated using eight sites continuously monitoring NOx concentrations and 65 passive samplers measuring NO2 concentrations on a 2-weekly basis all over the city. The system was demonstrated to fulfil the European Commission standards for air pollution modelling at nearly all sites. The average spatial distribution was very well represented, despite a general tendency to overestimate the observed concentrations, possibly due to a crude representation of traffic-induced turbulence and to underestimated dispersion in the vicinity of buildings. The temporal variability of concentrations explained by varying emissions and weather situations was accurately reproduced on different timescales. The seasonal cycle of concentrations, mostly driven by stronger vertical dispersion in summer than in winter, was very well captured in the 2-year simulation period. Short-term events, such as episodes of particularly high and low concentrations, were detected in most cases by the system, although some unrealistic pollution peaks were occasionally generated, pointing at some limitations of the steady-state approximation. The different patterns of the diurnal cycle of concentrations observed in the city were generally well captured as well. The evaluation confirmed the

  4. A simulation study of high-resolution x-ray computed tomography imaging using irregular sampling with a photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2013-01-01

    The purpose of this study was to improve the spatial resolution of x-ray computed tomography (CT) imaging with a photon-counting detector using an irregular sampling method. A geometric shift-model of the detector was proposed to produce the irregular sampling pattern and increase the number of samplings in the radial direction. The conventional micro-x-ray CT system and the novel system with the geometric shift-model of the detector were simulated using analytic and Monte Carlo simulations. The projections were reconstructed using filtered back-projection (FBP), algebraic reconstruction technique (ART), and total variation (TV) minimization algorithms, and the reconstructed images were compared in terms of normalized root-mean-square error (NRMSE), full-width at half-maximum (FWHM), and coefficient-of-variation (COV). The results showed that image quality improved in the novel system with the geometric shift-model of the detector, and that the NRMSE, FWHM, and COV were lower for the images reconstructed using the TV minimization technique in the novel system. The irregular sampling method produced by the geometric shift-model of the detector can improve the spatial resolution and reduce artifacts and noise in reconstructed images obtained from an x-ray CT system with a photon-counting detector. -- Highlights: • We proposed a novel sampling method based on a spiral pattern to improve the spatial resolution. • The novel sampling method increased the number of samplings in the radial direction. • The spatial resolution was improved by the novel sampling method
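
    The image-quality metrics quoted above are simple to reproduce once a reconstruction and a reference phantom are available. The sketch below shows one common definition of NRMSE (normalized by the reference dynamic range) and of COV (over a uniform region of interest); the exact normalizations used in the study may differ, so treat these definitions as assumptions.

```python
import numpy as np

def nrmse(recon, reference):
    """Root-mean-square error normalized by the dynamic range of the reference."""
    rmse = np.sqrt(np.mean((recon - reference) ** 2))
    return rmse / (reference.max() - reference.min())

def cov(recon, roi_mask):
    """Coefficient of variation (noise measure) inside a uniform region of interest."""
    roi = recon[roi_mask]
    return roi.std() / roi.mean()

# Toy example: a uniform disc phantom and a noisy "reconstruction" of it.
y, x = np.mgrid[-64:64, -64:64]
phantom = (x**2 + y**2 < 40**2).astype(float)
recon = phantom + 0.05 * np.random.randn(*phantom.shape)
mask = x**2 + y**2 < 20**2   # uniform interior region

print(f"NRMSE = {nrmse(recon, phantom):.4f}, COV = {cov(recon, mask):.4f}")
```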

  5. X-ray clusters in a cold dark matter + lambda universe: A direct, large-scale, high-resolution, hydrodynamic simulation

    Science.gov (United States)

    Cen, Renyue; Ostriker, Jeremiah P.

    1994-01-01

    A new, three-dimensional, shock-capturing, hydrodynamic code is utilized to determine the distribution of hot gas in a cold dark matter (CDM) + lambda model universe. Periodic boundary conditions are assumed: a box with size 85/h Mpc, having cell size 0.31/h Mpc, is followed in a simulation with 270^3 ≈ 10^7.3 cells. We adopt Ω = 0.45, Λ = 0.55, h ≡ H/(100 km/s/Mpc) = 0.6, and then, from the Cosmic Background Explorer (COBE) and light-element nucleosynthesis, σ8 = 0.77, Ωb = 0.043. We identify the X-ray emitting clusters in the simulation box, compute the luminosity function at several wavelength bands, the temperature function and estimated sizes, as well as the evolution of these quantities with redshift. This open model succeeds in matching local observations of clusters, in contrast to the standard Ω = 1 CDM model, which fails. It predicts an order-of-magnitude decline in the number density of bright (hν = 2-10 keV) clusters from z = 0 to z = 2, in contrast to a slight increase in the number density for the standard Ω = 1 CDM model. This COBE-normalized CDM + lambda model produces approximately the same number of X-ray clusters having Lx > 10^43 erg/s as observed. The background radiation field at 1 keV due to clusters is approximately the observed background which, after correction for numerical effects, again indicates that the model is consistent with observations.

  6. Feasibility of performing high resolution cloud-resolving simulations of historic extreme events: The San Fruttuoso (Liguria, Italy) case of 1915.

    Science.gov (United States)

    Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco

    2017-04-01

    Recent studies show that highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in frequency or intensity of these types of events as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since these may be likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations with 1 km horizontal grid spacing of a historic extreme event that occurred over Liguria (Italy): The San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs, as they are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields that are consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the Reanalysis members may not be adequately representing the amount of cool air over the Po Plain outflowing into the Liguria Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to Reanalysis products, unconventional data, such as historical meteorological bulletins, newspapers and even photographs can be very valuable sources of knowledge in the reconstruction of past extreme events.

  7. High-resolution ultrasonic spectroscopy

    Directory of Open Access Journals (Sweden)

    V. Buckin

    2018-03-01

    Full Text Available High-resolution ultrasonic spectroscopy (HR-US is an analytical technique for direct and non-destructive monitoring of molecular and micro-structural transformations in liquids and semi-solid materials. It is based on precision measurements of ultrasonic velocity and attenuation in analysed samples. The application areas of HR-US in research, product development, and quality and process control include analysis of conformational transitions of polymers, ligand binding, molecular self-assembly and aggregation, crystallisation, gelation, characterisation of phase transitions and phase diagrams, and monitoring of chemical and biochemical reactions. The technique does not require optical markers or optical transparency. The HR-US measurements can be performed in small sample volumes (down to droplet size, over broad temperature range, at ambient and elevated pressures, and in various measuring regimes such as automatic temperature ramps, titrations and measurements in flow.

  8. High Resolution Thermometry for EXACT

    Science.gov (United States)

    Panek, J. S.; Nash, A. E.; Larson, M.; Mulders, N.

    2000-01-01

    High Resolution Thermometers (HRTs) based on SQUID detection of the magnetization of a paramagnetic salt or a metal alloy have been commonly used for sub-nanokelvin temperature resolution in low temperature physics experiments. The main applications to date have been for temperature ranges near the lambda point of He-4 (2.177 K). These thermometers made use of materials such as Cu(NH4)2Br4·2H2O, GdCl3, or PdFe. None of these materials are suitable for EXACT, which will explore the region of the He-3/He-4 tricritical point at 0.87 K. The experiment requirements and properties of several candidate paramagnetic materials will be presented, as well as preliminary test results.

  9. High resolution tomographic instrument development

    International Nuclear Information System (INIS)

    1992-01-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational

  10. High resolution tomographic instrument development

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.

  11. High resolution tomographic instrument development

    Energy Technology Data Exchange (ETDEWEB)

    1992-01-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.

  12. KiDS-450: cosmological constraints from weak-lensing peak statistics - II: Inference from shear peaks using N-body simulations

    Science.gov (United States)

    Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko

    2018-02-01

    We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg2 of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S_8 = σ_8 √(Ω_m/0.3), which probes the (Ωm, σ8) plane perpendicularly to its main degeneracy. We estimate S8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high significance peaks alone (3 ≤ S/N ≤ 4) which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including peaks with S/N < 0 does not add further information for KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.
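
    The derived parameter is a simple combination of Ωm and σ8. A one-line check of the quoted definition, with purely illustrative input values rather than the KiDS-450 best fit:

```python
import numpy as np

def s8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3), the combination constrained by peak counts."""
    return sigma8 * np.sqrt(omega_m / 0.3)

# Illustrative values only (not the KiDS-450 best fit).
print(f"S_8 = {s8(0.80, 0.29):.3f}")
```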

  13. High-Resolution Mass Spectrometers

    Science.gov (United States)

    Marshall, Alan G.; Hendrickson, Christopher L.

    2008-07-01

    Over the past decade, mass spectrometry has been revolutionized by access to instruments of increasingly high mass-resolving power. For small molecules up to ˜400 Da (e.g., drugs, metabolites, and various natural organic mixtures ranging from foods to petroleum), it is possible to determine elemental compositions (CcHhNnOoSsPp…) of thousands of chemical components simultaneously from accurate mass measurements (the same can be done up to 1000 Da if additional information is included). At higher mass, it becomes possible to identify proteins (including posttranslational modifications) from proteolytic peptides, as well as lipids, glycoconjugates, and other biological components. At even higher mass (˜100,000 Da or higher), it is possible to characterize posttranslational modifications of intact proteins and to map the binding surfaces of large biomolecule complexes. Here we review the principles and techniques of the highest-resolution analytical mass spectrometers (time-of-flight and Fourier transform ion cyclotron resonance and orbitrap mass analyzers) and describe some representative high-resolution applications.
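
    Assigning an elemental composition from an accurate mass comes down to comparing the measured mass with the exact monoisotopic masses of candidate formulas and keeping those within a small ppm tolerance. The sketch below illustrates this for a few hypothetical CcHhNnOo candidates; the candidate list, the measured value and the 2 ppm tolerance are assumptions chosen only for the example.

```python
# Monoisotopic atomic masses (Da), rounded to 6 decimals.
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}

def exact_mass(formula):
    """Exact (monoisotopic) mass of a composition given as {'C': 6, 'H': 12, 'O': 6}."""
    return sum(MASS[el] * n for el, n in formula.items())

def ppm_error(measured, theoretical):
    return 1e6 * (measured - theoretical) / theoretical

def match_compositions(measured_mass, candidates, tol_ppm=2.0):
    """Return candidate formulas whose exact mass lies within tol_ppm of the measurement."""
    hits = []
    for name, formula in candidates.items():
        err = ppm_error(measured_mass, exact_mass(formula))
        if abs(err) <= tol_ppm:
            hits.append((name, err))
    return hits

candidates = {
    "C6H12O6": {"C": 6, "H": 12, "O": 6},
    "C7H16O5": {"C": 7, "H": 16, "O": 5},
    "C5H8N2O5": {"C": 5, "H": 8, "N": 2, "O": 5},
}
measured = 180.06340   # hypothetical neutral monoisotopic mass measurement
print(match_compositions(measured, candidates))
```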

  14. Holocene climate change in North Africa and the end of the African humid period - results of new high-resolution transient simulations with the MPI-ESM 1.3

    Science.gov (United States)

    Dallmeyer, Anne; Claussen, Martin; Lorenz, Stephan

    2017-04-01

    The Max-Planck-Institute for Meteorology has recently undertaken high-resolution transient Holocene simulations using the fully-coupled Earth System Model MPI-ESM 1.3. The simulations cover the last 8000 years and are forced not only by reconstructed Holocene orbital variations and atmospheric greenhouse gas concentrations, but also by recent compilations of Holocene volcanic aerosol distributions, variations in spectral solar irradiance, stratospheric ozone and land-use change. The simulations reveal the ubiquitous "Holocene conundrum": simulated global mean temperatures increase during the mid-Holocene and stay constant during the late Holocene. Simulated mid-Holocene near-surface temperatures are too cold in large parts of the world. Simulated precipitation, however, agrees much better with reconstruction than temperatures do. Likewise simulated global biome pattern fit reconstructions nicely, except for North Western America. First results of these simulations are presented with the main focus on the North African monsoon region. The amplitude of the mid-Holocene African Humid Period (AHP) is well captured in terms of precipitation and vegetation cover, so is the south-ward transgression of the termination of the AHP seen in reconstructions. The Holocene weakening and southward retreat of the North African monsoon as well as changes in the monsoon dynamic including shifts in the seasonal cycle and their relation to the locally varying termination of the AHP are discussed in detail. Members of the Hamburg Holocene Team: Jürgen Bader (1), Sebastian Bathiany (2), Victor Brovkin (1), Martin Claussen (1,3), Traute Crüger (1), Roberta D'agostino (1), Anne Dallmeyer (1), Sabine Egerer (1), Vivienne Groner (1), Matthias Heinze (1), Tatiana Ilyina (1), Johann Jungclaus (1), Thomas Kleinen (1), Alexander Lemburg (1), Stephan Lorenz (1), Thomas Raddatz (1), Hauke Schmidt (1), Gerhard Schmiedl (3), Bjorn Stevens (1), Claudia Timmreck (1), Matthew Toohey (4) (1) Max

  15. High-resolution intravital microscopy.

    Directory of Open Access Journals (Sweden)

    Volker Andresen

    Full Text Available Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy--the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and

  16. High-Resolution Intravital Microscopy

    Science.gov (United States)

    Andresen, Volker; Pollok, Karolin; Rinnenthal, Jan-Leo; Oehme, Laura; Günther, Robert; Spiecker, Heinrich; Radbruch, Helena; Gerhard, Jenny; Sporbert, Anje; Cseresnyes, Zoltan; Hauser, Anja E.; Niesner, Raluca

    2012-01-01

    Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy - the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology

  17. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications. It is still a challenge, however, to improve spatial resolution and temporal frequency simultaneously due to the technical limits of current satellite observation systems. To this end, much R&D effort has been ongoing for years and has led to some successes, roughly in two areas: on one side, super-resolution, pan-sharpening and similar methods can effectively enhance the spatial resolution and generate good visual effects, but they hardly preserve spectral signatures and therefore have limited analytical value; on the other side, time interpolation is a straightforward way to increase temporal frequency, but it adds little new information. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is done by introducing an LDA model to map high and low resolution pixels to each other. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to predefined temporal change patterns for each type of ground object. Finally the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the usage of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build up an economically operational monitoring solution for agriculture, forest, land use investigation
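
    The core of the method is to take the temporal change observed in the low-resolution series and redistribute it over the corresponding high-resolution pixels. The sketch below is a much-simplified version of that projection step: it uses nearest-neighbour upsampling and a per-class mean change instead of the LDA-based pixel mapping and the per-object change patterns described above, so it should be read only as an illustration of the idea.

```python
import numpy as np

def simulate_high_res(high_t0, low_t0, low_t1, class_map, scale):
    """Project low-resolution temporal change onto a high-resolution image.

    high_t0   : high-res image at time t0, shape (H, W)
    low_t0/t1 : low-res images at t0 and t1, shape (H//scale, W//scale)
    class_map : per-high-res-pixel class labels (e.g. land-cover types), shape (H, W)
    scale     : integer resolution ratio between the two sensors
    """
    # Nearest-neighbour upsampling of the low-resolution change signal.
    change = np.kron(low_t1 - low_t0, np.ones((scale, scale)))
    simulated = high_t0.astype(float).copy()
    # Assign each class its mean low-resolution change (crude "change pattern").
    for c in np.unique(class_map):
        m = class_map == c
        simulated[m] += change[m].mean()
    return simulated

# Tiny synthetic example: 8x8 high-res scene, 2x2 low-res scene (scale = 4).
high_t0 = np.random.rand(8, 8)
low_t0 = np.random.rand(2, 2)
low_t1 = low_t0 + 0.1                           # uniform brightening between dates
classes = np.arange(64).reshape(8, 8) // 32     # two broad classes
high_t1_sim = simulate_high_res(high_t0, low_t0, low_t1, classes, scale=4)
print(high_t1_sim.mean() - high_t0.mean())      # ~0.1, the injected change
```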

  18. Section on High Resolution Optical Imaging (HROI)

    Data.gov (United States)

    Federal Laboratory Consortium — The Section on High Resolution Optical Imaging (HROI) develops novel technologies for studying biological processes at unprecedented speed and resolution. Research...

  19. High Resolution Orientation Distribution Function

    DEFF Research Database (Denmark)

    Schmidt, Søren; Gade-Nielsen, Nicolai Fog; Høstergaard, Martin

    2012-01-01

    from the deformed material. The underlying mathematical formalism supports all crystallographic space groups and reduces the problem to solving a (large) set of linear equations. An implementation on multi-core CPUs and Graphical Processing Units (GPUs) is discussed along with an example on simulated...

  20. Development of AMS high resolution injector system

    International Nuclear Information System (INIS)

    Bao Yiwen; Guan Xialing; Hu Yueming

    2008-01-01

    The Beijing HI-13 tandem accelerator AMS high resolution injector system was developed. The high resolution energy achromatic system consists of an electrostatic analyzer and a magnetic analyzer, whose mass resolution can reach 600 with a transmission better than 80%. (authors)

  1. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finitie Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flow on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases, an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), mass conservation equation, and total energy equation. An operator splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme are cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson
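
    The defining feature of the scheme is its three-phase structure: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure correction. The sketch below shows that structure in the simplest possible setting, a first-order Chorin-style projection step for incompressible flow on a periodic finite-difference grid; it is not the PCICE-FEM scheme itself (no finite elements, no compressibility, no flux-corrected transport), only an illustration of how a predictor / Poisson solve / corrector step is organized.

```python
import numpy as np

def projection_step(u, v, dt, dx, nu=1e-3):
    """One explicit-predictor / pressure-Poisson / corrector step (Chorin projection).

    Periodic 2D grid, centred finite differences, FFT-based Poisson solve.
    This mimics only the three-phase structure of pressure-based schemes.
    """
    def ddx(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    def lap(f): return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                        np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

    # 1) explicit predictor: advance momentum without the new pressure gradient
    u_star = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
    v_star = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))

    # 2) elliptic pressure Poisson equation: lap(p) = div(u*) / dt
    rhs = (ddx(u_star) + ddy(v_star)) / dt
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero for the mean mode
    p_hat = -np.fft.fft2(rhs) / k2
    p_hat[0, 0] = 0.0                    # pressure defined only up to a constant
    p = np.real(np.fft.ifft2(p_hat))

    # 3) corrector: subtract the pressure gradient to (approximately) enforce
    #    a divergence-free velocity field
    u_new = u_star - dt * ddx(p)
    v_new = v_star - dt * ddy(p)
    return u_new, v_new, p

# Minimal usage: a Taylor-Green-like initial field on a 64x64 periodic grid.
n, L = 64, 2 * np.pi
dx = L / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)
u, v, p = projection_step(u, v, dt=1e-2, dx=dx)
```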

  2. Ultra-high resolution AMOLED

    Science.gov (United States)

    Wacyk, Ihor; Prache, Olivier; Ghosh, Amal

    2011-06-01

    AMOLED microdisplays continue to show improvement in resolution and optical performance, enhancing their appeal for a broad range of near-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming. eMagin's latest development of an HDTV+ resolution technology integrates an OLED pixel of 3.2 × 9.6 microns in size on a 0.18 micron CMOS backplane to deliver significant new functionality as well as the capability to implement a 1920×1200 microdisplay in a 0.86" diagonal area. In addition to the conventional matrix addressing circuitry, the HDTV+ display includes a very low-power, low-voltage-differential-signaling (LVDS) serialized interface to minimize cable and connector size as well as electromagnetic emissions (EMI), an on-chip set of look-up-tables for digital gamma correction, and a novel pulse-width modulation (PWM) scheme that together with the standard analog control provides a total dimming range of 0.05 cd/m2 to 2000 cd/m2 in the monochrome version. The PWM function also enables an impulse drive mode of operation that significantly reduces motion artifacts in high speed scene changes. An internal 10-bit DAC ensures that a full 256 gamma-corrected gray levels are available across the entire dimming range, resulting in a measured dynamic range exceeding 20-bits. This device has been successfully tested for operation at frame rates ranging from 30Hz up to 85Hz. This paper describes the operational features and detailed optical and electrical test results for the new AMOLED WUXGA resolution microdisplay.

  3. Dynamic high resolution imaging of rats

    International Nuclear Information System (INIS)

    Miyaoka, R.S.; Lewellen, T.K.; Bice, A.N.

    1990-01-01

    A positron emission tomography with the sensitivity and resolution to do dynamic imaging of rats would be an invaluable tool for biological researchers. In this paper, the authors determine the biological criteria for dynamic positron emission imaging of rats. To be useful, 3 mm isotropic resolution and 2-3 second time binning were necessary characteristics for such a dedicated tomograph. A single plane in which two objects of interest could be imaged simultaneously was considered acceptable. Multi-layered detector designs were evaluated as a possible solution to the dynamic imaging and high resolution imaging requirements. The University of Washington photon history generator was used to generate data to investigate a tomograph's sensitivity to true, scattered and random coincidences for varying detector ring diameters. Intrinsic spatial uniformity advantages of multi-layered detector designs over conventional detector designs were investigated using a Monte Carlo program. As a result, a modular three layered detector prototype is being developed. A module will consist of a layer of five 3.5 mm wide crystals and two layers of six 2.5 mm wide crystals. The authors believe adequate sampling can be achieved with a stationary detector system using these modules. Economical crystal decoding strategies have been investigated and simulations have been run to investigate optimum light channeling methods for block decoding strategies. An analog block decoding method has been proposed and will be experimentally evaluated to determine whether it can provide the desired performance

  4. Explicit Cloud Nucleation from Arbitrary Mixtures of Aerosol Types and Sizes Using an Ultra-Efficient In-Line Aerosol Bin Model in High-Resolution Simulations of Hurricanes

    Science.gov (United States)

    Walko, R. L.; Ashby, T.; Cotton, W. R.

    2017-12-01

    The fundamental role of atmospheric aerosols in the process of cloud droplet nucleation is well known, and there is ample evidence that the concentration, size, and chemistry of aerosols can strongly influence microphysical, thermodynamic, and ultimately dynamic properties and evolution of clouds and convective systems. With the increasing availability of observation- and model-based environmental representations of different types of anthropogenic and natural aerosols, there is increasing need for models to be able to represent which aerosols nucleate and which do not in supersaturated conditions. However, this is a very complex process that involves competition for water vapor between multiple aerosol species (chemistries) and different aerosol sizes within each species. Attempts have been made to parameterize the nucleation properties of mixtures of different aerosol species, but it is very difficult or impossible to represent all possible mixtures that may occur in practice. As part of a modeling study of the impact of anthropogenic and natural aerosols on hurricanes, we developed an ultra-efficient aerosol bin model to represent nucleation in a high-resolution atmospheric model that explicitly represents cloud- and subcloud-scale vertical motion. The bin model is activated at any time and location in a simulation where supersaturation occurs and is potentially capable of activating new cloud droplets. The bins are populated from the aerosol species that are present at the given time and location and by multiple sizes from each aerosol species according to a characteristic size distribution, and the chemistry of each species is represented by its absorption or adsorption characteristics. The bin model is integrated in time increments that are smaller than that of the atmospheric model in order to temporally resolve the peak supersaturation, which determines the total nucleated number. Even though on the order of 100 bins are typically utilized, this leads only
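
    A toy version of the inner loop of such a bin model is sketched below: each bin carries a number concentration and a critical supersaturation, the supersaturation is integrated with sub-steps of the host-model time step, and the number activated is set by the peak supersaturation reached. The critical supersaturations and the supersaturation forcing are placeholders (in the real model they follow from the aerosol chemistry, size distribution and resolved vertical motion), so this is an illustration of the sub-time-stepping idea only.

```python
import numpy as np

def activated_number(number_conc, s_crit, s_forcing, dt_host, n_sub=20):
    """Toy aerosol-bin activation driven by a resolved supersaturation peak.

    number_conc : number concentration per bin [1/cm^3]
    s_crit      : critical supersaturation per bin [%] (placeholder values;
                  in reality these follow from Koehler theory per species/size)
    s_forcing   : prescribed supersaturation tendency [%/s] (placeholder for the
                  balance between adiabatic cooling and condensational depletion)
    """
    dt = dt_host / n_sub
    s = 0.0
    s_peak = 0.0
    for i in range(n_sub):
        t = (i + 1) * dt
        s += s_forcing(t) * dt                      # sub-step the supersaturation
        s_peak = max(s_peak, s)
    activated = number_conc[s_crit <= s_peak]       # bins whose S_c was exceeded
    return activated.sum(), s_peak

# Hypothetical bins: larger/more hygroscopic particles have lower critical S.
number_conc = np.array([200.0, 150.0, 100.0, 50.0, 10.0])   # 1/cm^3
s_crit = np.array([0.05, 0.10, 0.20, 0.40, 0.80])           # %

# Tendency that turns negative once condensation outpaces cooling (arbitrary numbers).
forcing = lambda t: 0.05 - 0.012 * t

n_act, s_peak = activated_number(number_conc, s_crit, forcing, dt_host=10.0)
print(f"peak supersaturation {s_peak:.2f} %, activated {n_act:.0f} cm^-3")
```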

  5. High resolution sequence stratigraphy in China

    International Nuclear Information System (INIS)

    Zhang Shangfeng; Zhang Changmin; Yin Yanshi; Yin Taiju

    2008-01-01

    Since high resolution sequence stratigraphy was introduced into China by DENG Hong-wen in 1995, it has passed through two development stages in China: an initial stage of theoretical research, followed by a stage in which the theory was developed further and applied; a stage of theoretical maturity and wide application is now beginning. Practice has shown that high resolution sequence stratigraphy plays an increasingly important role in the exploration and development of oil and gas in Chinese continental oil-bearing basins, and its field of application has spread to the exploration of coal, uranium and other stratified deposits. However, the theory of high resolution sequence stratigraphy still has some shortcomings and should be improved in many respects. The authors point out that high resolution sequence stratigraphy should be characterized quantitatively and modelled using computer techniques. (authors)

  6. High resolution CT of the chest

    Energy Technology Data Exchange (ETDEWEB)

    Barneveld Binkhuysen, F H [Eemland Hospital (Netherlands), Dept. of Radiology

    1996-12-31

    Compared to conventional CT, high resolution CT (HRCT) shows several extra anatomical structures which might affect both diagnosis and therapy. These extra anatomical structures are discussed briefly in this article. (18 refs.).

  7. High-resolution spectrometer at PEP

    International Nuclear Information System (INIS)

    Weiss, J.M.; HRS Collaboration.

    1982-01-01

    A description is presented of the High Resolution Spectrometer experiment (PEP-12) now running at PEP. The advanced capabilities of the detector are demonstrated with first physics results expected in the coming months

  8. High-resolution downscaling for hydrological management

    Science.gov (United States)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then only downscaling to the kilometre-scale (convection permitting) those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  9. Structure of high-resolution NMR spectra

    CERN Document Server

    Corio, PL

    2012-01-01

    Structure of High-Resolution NMR Spectra provides the principles, theories, and mathematical and physical concepts of high-resolution nuclear magnetic resonance spectra. The book presents the elementary theory of magnetic resonance; the quantum mechanical theory of angular momentum; the general theory of steady state spectra; and multiple quantum transitions, double resonance and spin echo experiments. Physicists, chemists, and researchers will find the book a valuable reference text.

  10. An N-body Integrator for Planetary Rings

    Science.gov (United States)

    Hahn, Joseph M.

    2011-04-01

    A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitude of these disturbances is sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However a brute force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of the particle's radial distance to all streamlines. And because particles are responding to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges are executed in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
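
    The key simplification is that each streamline is treated as a straight wire of constant linear density, so its pull on a particle depends only on the radial offset (for an infinite wire the attraction is 2Gλ/Δr). The sketch below evaluates that acceleration sum for a toy set of streamlines; the numbers and the softening length are illustrative, and this is a schematic of the idea rather than the actual integrator.

```python
import numpy as np

G = 6.674e-8   # gravitational constant in cgs units [cm^3 g^-1 s^-2]

def radial_acceleration(r_particles, r_streamlines, lambda_streamlines, softening=1e5):
    """Radial acceleration on each particle from ring streamlines treated as wires.

    For an infinite straight wire of linear density lambda, the attraction at
    radial offset dr is 2*G*lambda/dr, directed toward the wire. A small
    softening length avoids the singularity when a particle sits on a streamline.
    """
    dr = r_particles[:, None] - r_streamlines[None, :]            # shape (Np, Ns)
    accel = -2.0 * G * lambda_streamlines[None, :] * dr / (dr**2 + softening**2)
    return accel.sum(axis=1)

# Toy ring: 5 streamlines spread over 100 km in radius, with placeholder densities.
r_stream = 1.0e10 + np.linspace(0.0, 1.0e7, 5)        # cm
lam = np.full(5, 1.0e7)                                # g/cm (illustrative)
r_part = np.array([1.0e10 + 1.2e6, 1.0e10 + 6.3e6])    # two test particles
print(radial_acceleration(r_part, r_stream, lam))
```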

  11. Zeolites - a high resolution electron microscopy study

    International Nuclear Information System (INIS)

    Alfredsson, V.

    1994-10-01

    High resolution transmission electron microscopy (HRTEM) has been used to investigate a number of zeolites (EMT, FAU, LTL, MFI and MOR) and a member of the mesoporous M41S family. The electron optical artefact, manifested as a dark spot in the projected centre of the large zeolite channels and caused by insufficient transfer of certain reflections in the objective lens, has been explained. The artefact severely hinders observation of materials confined in the zeolite channels and cavities. It is shown how to circumvent the artefact problem and how to image confined materials in spite of the disturbance caused by the artefact. Image processing by means of a Wiener filter has been applied for removal of the artefact. The detailed surface structure of FAU has been investigated. Comparison of experimental micrographs with images simulated using different surface models indicates that the surface can be terminated in different ways depending on synthesis methods. The dealuminated form of FAU (USY) is covered by an amorphous region. Platinum incorporated in FAU has a tendency to aggregate in the (111) twin planes, probably due to a local difference in cage structure with more spacious cages. It is shown that platinum is intra-zeolitic as opposed to being located on the external surface of the zeolite crystal. This could be deduced from tomography of ultra-thin sections, among other observations. HRTEM studies of the mesoporous MCM-41 show that the pores have a hexagonal shape and also support the mechanistic model proposed, which involves a cooperative formation of a mesophase including the silicate species as well as the surfactant. 66 refs, 24 figs

  12. High-Resolution Numerical Simulation and Assessment of the Offshore Wind Energy Resource in Tianjin

    Institute of Scientific and Technical Information of China (English)

    杨艳娟; 李明财; 任雨; 熊明明

    2011-01-01

    Wind energy is a rapidly growing alternative energy source and has been widely developed around the world over the last 10 years. Offshore wind power generation is becoming a new trend in the development of future wind power because wind tends to blow faster and more uniformly over offshore areas than over land. Accurate assessment of the wind energy resource is fundamental and valuable for wind energy developers and potential users because it allows them to choose a general area of estimated high wind resource for more detailed examination. However, direct observations of meteorological variables over offshore areas are difficult to obtain, which calls for high-resolution numerical simulation to derive the availability and potential of wind energy. The distribution of wind energy resources in the Tianjin coastal areas was simulated at 1 km horizontal resolution and 10 m vertical resolution using the numerical models MM5 and Calmet to derive the wind energy potential over the offshore areas. In addition, the simulation accuracy was assessed by comparing the simulated values with observations from three wind-measurement towers over the same period. Results show that the annual mean wind speed and the trend of daily mean wind speed were simulated well, with relative deviations between observations and simulated values at the three towers of 7.11%, 12.99%, and 6.14%, respectively. This suggests that the models are effective in assessing the offshore wind energy resource in Tianjin. The long-term wind energy resource was obtained by comparing the mean wind speed of the simulated year with that of the most recent 20 years. It was found that the annual mean wind speed is 6.6-7.0 m/s and the annual mean wind power density is above 340 W/m2, which indicates that the offshore wind energy resource in Tianjin is exploitable and could be used for grid-connected power generation. The assessment shows that the MM5/Calmet model is capable of providing reasonable wind status
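
    Wind power density follows directly from the wind-speed record via P = 0.5 ρ ⟨v³⟩. The sketch below computes the annual mean speed and power density from an hourly series; the air density and the synthetic Weibull-distributed series are placeholder assumptions.

```python
import numpy as np

def wind_power_density(speeds, rho=1.225):
    """Mean wind power density [W/m^2] from a wind-speed time series [m/s].

    P = 0.5 * rho * <v^3>; rho defaults to standard sea-level air density.
    """
    return 0.5 * rho * np.mean(np.asarray(speeds) ** 3)

# Synthetic "annual" hourly series with a mean around 6.8 m/s (Weibull-like).
rng = np.random.default_rng(0)
v = rng.weibull(2.0, size=8760) * 7.7        # scale chosen to give ~6.8 m/s mean
print(f"mean speed    = {v.mean():.2f} m/s")
print(f"power density = {wind_power_density(v):.0f} W/m^2")
```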

  13. A high resolution portable spectroscopy system

    International Nuclear Information System (INIS)

    Kulkarni, C.P.; Vaidya, P.P.; Paulson, M.; Bhatnagar, P.V.; Pande, S.S.; Padmini, S.

    2003-01-01

    Full text: This paper describes the system details of a High Resolution Portable Spectroscopy System (HRPSS) developed at the Electronics Division, BARC. The system can be used for laboratory-class, high-resolution nuclear spectroscopy applications. The HRPSS consists of a specially designed compact NIM bin, with built-in power supplies, accommodating a low power, high resolution MCA, and an on-board embedded computer for spectrum building and communication. A NIM-based spectroscopy amplifier and an HV module for detector bias are integrated (plug-in) in the bin. The system communicates with a host PC via a serial link. Along with a laptop PC and a portable HPGe detector, the HRPSS offers laboratory-class performance for portable applications

  14. The shape of dark matter haloes in the Aquarius simulations : Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C.A.; Sales, L. V.; Helmi, A.; Reyle, C; Robin, A; Schultheis, M

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  15. The shape of dark matter haloes in the Aquarius simulations: Evolution and memory

    NARCIS (Netherlands)

    Vera-Ciro, C. A.; Sales, L. V.; Helmi, A.

    We use the high resolution cosmological N-body simulations from the Aquarius project to investigate in detail the mechanisms that determine the shape of Milky Way-type dark matter haloes. We find that, when measured at the instantaneous virial radius, the shape of individual haloes changes with

  16. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
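
    In an LTS scheme each element advances with a time step adapted to its own stability limit, usually quantized to the global step divided by a power of two so that neighbouring refinement levels stay synchronized. A small sketch of that level assignment is shown below; the CFL form and the element sizes are illustrative.

```python
import numpy as np

def lts_levels(element_sizes, wave_speed, cfl=0.5):
    """Assign power-of-two local-time-stepping levels from per-element CFL limits.

    Element k runs with dt_global / 2**level[k], where dt_global is the largest
    stable step in the mesh; small elements get higher levels (smaller steps).
    """
    dt_local = cfl * np.asarray(element_sizes) / wave_speed   # per-element stable step
    dt_global = dt_local.max()
    levels = np.ceil(np.log2(dt_global / dt_local)).astype(int)
    return levels, dt_global

# Mesh with a local refinement: most elements ~100 m, a few down to 10 m.
h = np.array([100.0, 100.0, 80.0, 40.0, 20.0, 10.0])
levels, dt0 = lts_levels(h, wave_speed=3000.0)
for hk, lk in zip(h, levels):
    print(f"h = {hk:6.1f} m  ->  level {lk}  (dt = {dt0 / 2**lk * 1e3:.2f} ms)")
```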

  17. High resolution Neutron and Synchrotron Powder Diffraction

    International Nuclear Information System (INIS)

    Hewat, A.W.

    1986-01-01

    The use of high-resolution powder diffraction has grown rapidly in the past years, with the development of Rietveld (1967) methods of data analysis and new high-resolution diffractometers and multidetectors. The number of publications in this area has increased from a handful per year until 1973 to 150 per year in 1984, with a ten-year total of over 1000. These papers cover a wide area of solid-state chemistry, physics and materials science, and have been grouped under 20 subject headings, ranging from catalysts to zeolites, and from battery electrode materials to pre-stressed superconducting wires. In 1985 two new high-resolution diffractometers are being commissioned, one at the SNS laboratory near Oxford, and one at the ILL in Grenoble. In different ways these machines represent perhaps the ultimate that can be achieved with neutrons and will permit refinement of complex structures with about 250 parameters and unit cell volumes of about 2500 cubic Angstroms. The new European Synchrotron Facility will complement the Grenoble neutron diffractometers, and extend the role of high-resolution powder diffraction to the direct solution of crystal structures, pioneered in Sweden

  18. High resolution CT in diffuse lung disease

    International Nuclear Information System (INIS)

    Webb, W.R.

    1995-01-01

    High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and to define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.)

  19. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.
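
    A minimal sketch of such a classification step with scikit-learn is given below; the features (simple first-order statistics over a window) and the synthetic patches stand in for the texture and contextual measures evaluated in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(patch):
    """First-order statistics of a small image window (stand-in for texture measures)."""
    return [patch.mean(), patch.std(), patch.min(), patch.max()]

# Synthetic "satellite" patches from two classes with different brightness/texture.
rng = np.random.default_rng(1)
patches_a = rng.normal(0.3, 0.05, size=(200, 9, 9))   # e.g. water: dark, smooth
patches_b = rng.normal(0.6, 0.15, size=(200, 9, 9))   # e.g. urban: bright, textured
X = np.array([window_features(p) for p in np.concatenate([patches_a, patches_b])])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```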

  20. High resolution CT in diffuse lung disease

    Energy Technology Data Exchange (ETDEWEB)

    Webb, W R [California Univ., San Francisco, CA (United States). Dept. of Radiology

    1996-12-31

    High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and to define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.).

  1. High-resolution clean-sc

    NARCIS (Netherlands)

    Sijtsma, P.; Snellen, M.

    2016-01-01

    In this paper a high-resolution extension of CLEAN-SC is proposed: HR-CLEAN-SC. Where CLEAN-SC uses peak sources in “dirty maps” to define so-called source components, HR-CLEAN-SC takes advantage of the fact that source components can likewise be derived from points at some distance from the peak,

  2. A High-Resolution Stopwatch for Cents

    Science.gov (United States)

    Gingl, Z.; Kopasz, K.

    2011-01-01

    A very low-cost, easy-to-make stopwatch is presented to support various experiments in mechanics. The high-resolution stopwatch is based on two photodetectors connected directly to the microphone input of a sound card. Dedicated free open-source software has been developed and made available to download. The efficiency is demonstrated by a free…
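
    In such a stopwatch the two photodetector pulses arrive as edges in the recorded audio stream, and the elapsed time is simply the distance in samples between the first edge on each channel divided by the sample rate. A sketch of that timing step on a synthetic stereo recording (the threshold and pulse shapes are assumptions):

```python
import numpy as np

def first_crossing(signal, threshold):
    """Index of the first sample where the signal rises above the threshold."""
    above = np.flatnonzero(signal > threshold)
    return int(above[0]) if above.size else None

def elapsed_time(left, right, fs, threshold=0.5):
    """Time between the first pulse on each channel of a stereo sound-card recording."""
    i0 = first_crossing(left, threshold)
    i1 = first_crossing(right, threshold)
    return (i1 - i0) / fs

# Synthetic recording: 44.1 kHz, pulses about 0.1234 s apart on the two channels.
fs = 44100
t = np.arange(int(0.5 * fs))
left = np.where((t >= int(0.100 * fs)) & (t < int(0.100 * fs) + 50), 1.0, 0.0)
right = np.where((t >= int(0.2234 * fs)) & (t < int(0.2234 * fs) + 50), 1.0, 0.0)
left += 0.01 * np.random.randn(t.size)     # a little noise, as from a real mic input
right += 0.01 * np.random.randn(t.size)
print(f"measured interval: {elapsed_time(left, right, fs) * 1e3:.3f} ms")
```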

  3. Planning for shallow high resolution seismic surveys

    CSIR Research Space (South Africa)

    Fourie, CJS

    2008-11-01

    Full Text Available of the input wave. This information can be used in conjunction with this spreadsheet to aid the geophysicist in designing shallow high resolution seismic surveys to achieve maximum resolution and penetration. This Excel spreadsheet is available free from...

  4. Constructing high-quality bounding volume hierarchies for N-body computation using the acceptance volume heuristic

    Science.gov (United States)

    Olsson, O.

    2018-01-01

    We present a novel heuristic derived from a probabilistic cost model for approximate N-body simulations. We show that this new heuristic can be used to guide tree construction towards higher quality trees with improved performance over current N-body codes. This represents an important step beyond the current practice of using spatial partitioning for N-body simulations, and enables adoption of a range of state-of-the-art algorithms developed for computer graphics applications to yield further improvements in N-body simulation performance. We outline directions for further developments and review the most promising such algorithms.
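
    The abstract does not spell out the acceptance volume heuristic itself, so the sketch below only shows the machinery such a heuristic would plug into: a top-down BVH build over particle positions in which a cost function chooses between candidate splits. The cost used here (child bounding-box volume weighted by particle count) is a placeholder, not the heuristic proposed in the paper.

```python
import numpy as np

def bbox_volume(points):
    ext = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(ext))

def split_cost(left, right):
    """Placeholder cost: volume of each child box weighted by its particle count.
    The actual acceptance volume heuristic would replace this function."""
    return len(left) * bbox_volume(left) + len(right) * bbox_volume(right)

def build_bvh(points, leaf_size=8):
    """Top-down BVH: at each node, try a median split on every axis and keep the
    split with the lowest cost. Returns a nested (bbox, children-or-points) tuple."""
    bbox = (points.min(axis=0), points.max(axis=0))
    if len(points) <= leaf_size:
        return (bbox, points)
    best = None
    for axis in range(points.shape[1]):
        order = np.argsort(points[:, axis])
        mid = len(points) // 2
        left, right = points[order[:mid]], points[order[mid:]]
        c = split_cost(left, right)
        if best is None or c < best[0]:
            best = (c, left, right)
    _, left, right = best
    return (bbox, [build_bvh(left, leaf_size), build_bvh(right, leaf_size)])

# Example: a small clustered particle set in 3D.
rng = np.random.default_rng(2)
pts = np.concatenate([rng.normal(0, 1, (500, 3)), rng.normal(5, 0.5, (500, 3))])
tree = build_bvh(pts)
print("root bbox:", tree[0])
```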

  5. An Advanced N-body Model for Interacting Multiple Stellar Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brož, Miroslav [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-18000 Praha 8 (Czech Republic)

    2017-06-01

    We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distribution, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one's disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).
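
    The joint χ² idea can be sketched independently of the N-body machinery: heterogeneous data sets (here hypothetical radial velocities and eclipse minima timings) are folded into one metric that is then minimized with a downhill simplex (Nelder-Mead). The forward model below is a trivial placeholder for the full N-body integration, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations: radial velocities and eclipse minima timings.
rv_t = np.linspace(0.0, 10.0, 20)
rv_obs = 30.0 * np.sin(2 * np.pi * rv_t / 3.1) + np.random.normal(0, 2.0, rv_t.size)
rv_err = np.full(rv_t.size, 2.0)
ttv_epoch = np.arange(5)
ttv_obs = 3.1 * ttv_epoch + 0.40 + np.random.normal(0, 0.01, ttv_epoch.size)
ttv_err = np.full(ttv_epoch.size, 0.01)

def model(params):
    """Placeholder forward model standing in for the N-body integration:
    params = (K, P, T0) -> predicted radial velocities and minima times."""
    K, P, T0 = params
    rv_pred = K * np.sin(2 * np.pi * rv_t / P)
    min_pred = P * ttv_epoch + T0
    return rv_pred, min_pred

def joint_chi2(params):
    """Sum of chi^2 contributions from all observation types, as in a joint metric."""
    rv_pred, min_pred = model(params)
    chi2_rv = np.sum(((rv_obs - rv_pred) / rv_err) ** 2)
    chi2_ttv = np.sum(((ttv_obs - min_pred) / ttv_err) ** 2)
    return chi2_rv + chi2_ttv

result = minimize(joint_chi2, x0=[25.0, 3.0, 0.3], method="Nelder-Mead")
print("best-fit (K, P, T0):", result.x, " chi2 =", result.fun)
```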

  6. Smartphone microendoscopy for high resolution fluorescence imaging

    Directory of Open Access Journals (Sweden)

    Xiangqian Hong

    2016-09-01

    Full Text Available High resolution optical endoscopes are increasingly used in diagnosis of various medical conditions of internal organs, such as the cervix and gastrointestinal (GI) tracts, but they are too expensive for use in resource-poor settings. On the other hand, smartphones with high resolution cameras and Internet access have become more affordable, enabling them to diffuse into most rural areas and developing countries in the past decade. In this paper, we describe a smartphone microendoscope that can take fluorescence images with a spatial resolution of 3.1 μm. Images collected from ex vivo, in vitro and in vivo samples using the device are also presented. The compact and cost-effective smartphone microendoscope may be envisaged as a powerful tool for detecting pre-cancerous lesions of internal organs in low- and middle-income countries (LMICs).

  7. High resolution NMR theory and chemical applications

    CERN Document Server

    Becker, Edwin D

    2012-01-01

    High Resolution NMR: Theory and Chemical Applications discusses the principles and theory of nuclear magnetic resonance and how this concept is used in the chemical sciences. This book is written at an intermediate level, with mathematics used to augment verbal descriptions of the phenomena. This text pays attention to developing and interrelating four approaches - the steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The style of this book is based on the assumption that the reader has an acquaintance with the general principles of quantum mechanics, but no extensive background in quantum theory or proficiency in mathematics is required. This book begins with a description of the basic physics, together with a brief account of the historical development of the field. It looks at the study of NMR in liquids, including high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. This book is intended to assis...

  8. High resolution NMR theory and chemical applications

    CERN Document Server

    Becker, Edwin D

    1999-01-01

    High Resolution NMR provides a broad treatment of the principles and theory of nuclear magnetic resonance (NMR) as it is used in the chemical sciences. It is written at an "intermediate" level, with mathematics used to augment, rather than replace, clear verbal descriptions of the phenomena. The book is intended to allow a graduate student, advanced undergraduate, or researcher to understand NMR at a fundamental level, and to see illustrations of the applications of NMR to the determination of the structure of small organic molecules and macromolecules, including proteins. Emphasis is on the study of NMR in liquids, but the treatment also includes high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. Careful attention is given to developing and interrelating four approaches - steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The presentation is based on the assumption that the reader has an acquaintan...

  9. High resolution imaging of boron carbide microstructures

    International Nuclear Information System (INIS)

    MacKinnon, I.D.R.; Aselage, T.; Van Deusen, S.B.

    1986-01-01

    Two samples of boron carbide have been examined using high resolution transmission electron microscopy (HRTEM). A hot-pressed B₁₃C₂ sample shows a high density of variable-width twins normal to (10*1). Subtle shifts or offsets of lattice fringes along the twin plane and normal to approx. (10*5) were also observed. A B₄C powder showed little evidence of stacking disorder in crystalline regions

  10. High-Resolution MRI in Rectal Cancer

    International Nuclear Information System (INIS)

    Dieguez, Adriana

    2010-01-01

    High-resolution MRI is the best method of assessing the relation of the rectal tumor to the potential circumferential resection margin (CRM) and is therefore currently considered the method of choice for local staging of rectal cancer. The primary surgery for rectal cancer is total mesorectal excision (TME), whose plane of dissection is formed by the mesorectal fascia surrounding the mesorectal fat and rectum; this fascia determines the circumferential resection margin. At the same time, high-resolution MRI allows adequate pre-operative identification of important prognostic risk factors, improving the selection and indication of therapy for each patient. This information includes, besides the circumferential resection margin, tumor and lymph node staging, extramural vascular invasion and the description of low rectal tumors. All of these should be described in detail in the report and form part of the discussion in the multidisciplinary team, where the decisions involving the patient with rectal cancer are taken. The aim of this study is to provide the information necessary to understand the use of high-resolution MRI in the identification of prognostic risk factors in rectal cancer. The technical requirements and the standardized report for this study are described, as well as the anatomical landmarks of importance for total mesorectal excision (TME), which, as noted, is the surgery of choice for rectal cancer. (authors) [es

  11. Ultra-high resolution protein crystallography

    International Nuclear Information System (INIS)

    Takeda, Kazuki; Hirano, Yu; Miki, Kunio

    2010-01-01

    Many protein structures have been determined by X-ray crystallography and deposited with the Protein Data Bank. However, these structures at usual resolution (1.5 Å < d < 3.0 Å) are insufficient in their precision and quantity for elucidating the molecular mechanism of protein functions directly from structural information. Several studies at ultra-high resolution (d < 0.8 Å) have been performed with synchrotron radiation in the last decade. The highest resolution of the protein crystals was achieved at 0.54 Å resolution for a small protein, crambin. In such high resolution crystals, almost all of the hydrogen atoms of proteins and some hydrogen atoms of bound water molecules are experimentally observed. In addition, outer-shell electrons of proteins can be analyzed by the multipole refinement procedure. However, the influence of X-rays should be precisely estimated in order to derive meaningful information from the crystallographic results. In this review, we summarize refinement procedures, current status and perspectives for ultra-high resolution protein crystallography. (author)

  12. High resolution, high speed ultrahigh vacuum microscopy

    International Nuclear Information System (INIS)

    Poppa, Helmut

    2004-01-01

    The history and future of transmission electron microscopy (TEM) is discussed as it refers to the eventual development of instruments and techniques applicable to the real time in situ investigation of surface processes with high resolution. To reach this objective, it was necessary to transform conventional high resolution instruments so that an ultrahigh vacuum (UHV) environment at the sample site was created, that access to the sample by various in situ sample modification procedures was provided, and that in situ sample exchanges with other integrated surface analytical systems became possible. Furthermore, high resolution image acquisition systems had to be developed to take advantage of the high speed imaging capabilities of projection imaging microscopes. These changes to conventional electron microscopy and its uses were slowly realized in a few international laboratories over a period of almost 40 years by a relatively small number of researchers crucially interested in advancing the state of the art of electron microscopy and its applications to diverse areas of interest; often concentrating on the nucleation, growth, and properties of thin films on well defined material surfaces. A part of this review is dedicated to the recognition of the major contributions to surface and thin film science by these pioneers. Finally, some of the important current developments in aberration corrected electron optics and eventual adaptations to in situ UHV microscopy are discussed. As a result of all the path breaking developments that have led to today's highly sophisticated UHV-TEM systems, integrated fundamental studies are now possible that combine many traditional surface science approaches. Combined investigations to date have involved in situ and ex situ surface microscopies such as scanning tunneling microscopy/atomic force microscopy, scanning Auger microscopy, and photoemission electron microscopy, and area-integrating techniques such as x-ray photoelectron

  13. USGS High Resolution Orthoimagery Collection - Historical - National Geospatial Data Asset (NGDA) High Resolution Orthoimagery

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS high resolution orthorectified images from The National Map combine the image characteristics of an aerial photograph with the geometric qualities of a map. An...

  14. Sampling general N-body interactions with auxiliary fields

    Science.gov (United States)

    Körber, C.; Berkowitz, E.; Luu, T.

    2017-09-01

    We present a general auxiliary field transformation which generates effective interactions containing all possible N-body contact terms. The strength of the induced terms can analytically be described in terms of general coefficients associated with the transformation and thus are controllable. This transformation provides a novel way for sampling 3- and 4-body (and higher) contact interactions non-perturbatively in lattice quantum Monte Carlo simulations. As a proof of principle, we show that our method reproduces the exact solution for a two-site quantum mechanical problem.
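
    The general transformation itself is not reproduced here; as a hedged illustration of the auxiliary-field idea it builds on, the snippet below numerically verifies the standard discrete Hubbard–Stratonovich identity for a single two-body contact term (occupation numbers 0 or 1), which is the special case such N-body constructions generalize.

```python
# Numerical check of the standard two-body discrete auxiliary-field
# (Hubbard-Stratonovich) identity that N-body generalizations build on:
#   exp(-dtau*U*n_up*n_dn) = 1/2 * exp(-dtau*U*(n_up+n_dn)/2)
#                              * sum_{s=+-1} exp(lam*s*(n_up-n_dn)),
#   with cosh(lam) = exp(dtau*U/2), U > 0, and occupations n in {0,1}.
import numpy as np

dtau, U = 0.05, 4.0
lam = np.arccosh(np.exp(dtau * U / 2.0))

for n_up in (0, 1):
    for n_dn in (0, 1):
        lhs = np.exp(-dtau * U * n_up * n_dn)
        rhs = 0.5 * np.exp(-dtau * U * (n_up + n_dn) / 2.0) * sum(
            np.exp(lam * s * (n_up - n_dn)) for s in (+1, -1))
        assert np.isclose(lhs, rhs), (n_up, n_dn, lhs, rhs)
print("two-body auxiliary-field identity verified on all occupation states")
```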

  15. High resolution extremity CT for biomechanics modeling

    International Nuclear Information System (INIS)

    Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.

    1995-01-01

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.

  16. Ultra-high resolution coded wavefront sensor

    KAUST Repository

    Wang, Congli

    2017-06-08

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.

  17. High resolution extremity CT for biomechanics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.

    1995-09-23

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.

  18. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer-assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of the displacement field from the sampled values of the grid intensity are discussed. A two-dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of the application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
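
    The record only outlines the two-dimensional Fourier transform method; the sketch below shows the same carrier-phase extraction in one dimension (isolate the first-order lobe, inverse transform, unwrap the phase, remove the carrier), with a synthetic grid whose pitch and displacement amplitude are arbitrary assumptions rather than values from the paper.

```python
# 1D sketch of Fourier-transform fringe analysis: recover a displacement field
# from a deformed grid image. The 2D method of the record works analogously.
import numpy as np

N, pitch = 1024, 16.0                               # samples, grid pitch (pixels)
x = np.arange(N)
u_true = 0.3 * np.sin(2.0 * np.pi * x / N)          # imposed displacement (pixels)
intensity = 1.0 + np.cos(2.0 * np.pi * x / pitch + 2.0 * np.pi * u_true / pitch)

F = np.fft.fft(intensity)
k = np.fft.fftfreq(N)
carrier = 1.0 / pitch
keep = np.abs(k - carrier) < 0.5 * carrier          # isolate the +1st-order lobe
analytic = np.fft.ifft(np.where(keep, F, 0.0))

phase = np.unwrap(np.angle(analytic)) - 2.0 * np.pi * carrier * x
u_rec = (phase - phase.mean()) * pitch / (2.0 * np.pi)
print("max recovery error (pixels):", float(np.abs(u_rec - u_true).max()))
```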

  19. Laboratory of High resolution gamma spectrometry

    International Nuclear Information System (INIS)

    Mendez G, A.; Giber F, J.; Rivas C, I.; Reyes A, B.

    1992-01-01

    The Department of Nuclear Experimentation of the Nuclear Systems Management requests the collaboration of the Engineering unit for the supervision of the execution of the work on the high resolution gamma spectrometry and low-background laboratory, using the hut of the subcritical reactor of the Nuclear Center of Mexico. This laboratory has the purpose of determining the activity of special materials irradiated in nuclear power plants. In this report the architectural development, concepts, materials and diagrams for the realization of this type of work are presented. (Author)

  20. High resolution neutron spectroscopy for helium isotopes

    International Nuclear Information System (INIS)

    Abdel-Wahab, M.S.; Klages, H.O.; Schmalz, G.; Haesner, B.H.; Kecskemeti, J.; Schwarz, P.; Wilczynski, J.

    1992-01-01

    A high resolution fast neutron time-of-flight spectrometer is described. Neutron time-of-flight spectra are taken using a specially designed TDC connected to an on-line computer. The high time-of-flight resolution of 5 ps/m enabled the study of the total cross section of ⁴He for neutrons near the 3/2⁺ resonance in the ⁵He nucleus. The resonance parameters were determined by a single-level Breit-Wigner fit to the data. (orig.)

  1. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as `evidence ready`, even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and still be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and in the positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to be set free due to lack of evidence.

  2. High-Resolution Scintimammography: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Rachel F. Brem; Joelle M. Schoonjans; Douglas A. Kieper; Stan Majewski; Steven Goodman; Cahid Civelek

    2002-07-01

    This study evaluated a novel high-resolution breast-specific gamma camera (HRBGC) for the detection of suggestive breast lesions. Methods: Fifty patients (with 58 breast lesions) for whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a novel HRBGC prototype. The results of conventional and high-resolution nuclear studies were prospectively classified as negative (normal or benign) or positive (suggestive or malignant) by 2 radiologists who were unaware of the mammographic and histologic results. All of the included lesions were confirmed by pathology. Results: There were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity with both systems was 93.3% (28/30). For the 18 nonpalpable lesions, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. For lesions ≤1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four lesions (median size, 8.5 mm) were detected only with the HRBGC and were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with an HRBGC results in improved sensitivity for the detection of cancer, with greater improvement shown for nonpalpable and ≤1-cm lesions.

  3. High resolution studies of barium Rydberg states

    International Nuclear Information System (INIS)

    Eliel, E.R.

    1982-01-01

    The subtle structure of Rydberg states of barium with orbital angular momentum 0, 1, 2 and 3 is investigated. Some aspects of atomic theory for a configuration with two valence electrons are reviewed. The Multi Channel Quantum Defect Theory (MQDT) is concisely introduced as a convenient way to describe interactions between Rydberg series. Three high-resolution UV studies are presented. The first two, presenting results on a transition in indium and europium, serve as an illustration of the frequency doubling technique. The third study is of hyperfine structure and isotope shifts in low-lying p states in Sr and Ba. An extensive study of the 6snp and 6snf Rydberg states of barium is presented with particular emphasis on the 6snf states. It is shown that the level structure cannot be fully explained with the model introduced earlier. Rather, an effective two-body spin-orbit interaction has to be introduced to account for the observed splittings, illustrating that high resolution studies on Rydberg states offer a unique opportunity to determine the importance of such effects. Finally, the 6sns and 6snd series are considered. The hyperfine-induced isotope shift in the simple excitation spectra to 6sns ¹S₀ is discussed and attention is paid to series perturbers. It is shown that level mixing parameters can easily be extracted from the experimental data. (Auth.)

  4. Principles of high resolution NMR in solids

    CERN Document Server

    Mehring, Michael

    1983-01-01

    The field of Nuclear Magnetic Resonance (NMR) has developed at a fascinating pace during the last decade. It always has been an extremely valuable tool to the organic chemist by supplying molecular "finger print" spectra at the atomic level. Unfortunately the high resolution achievable in liquid solutions could not be obtained in solids, and physicists and physical chemists had to live with unresolved lines open to a wealth of curve fitting procedures and a vast amount of speculation. High resolution NMR in solids seemed to be a paradox. Broad structureless lines are usually encountered when dealing with NMR in solids. Only with the recent advent of multiple pulse, magic angle, cross-polarization, two-dimensional and multiple-quantum spectroscopy and other techniques during the last decade did it become possible to resolve finer details of nuclear spin interactions in solids. I have felt that graduate students, researchers and others beginning to get involved with these techniques needed a book which trea...

  5. High-Resolution PET Detector. Final report

    International Nuclear Information System (INIS)

    Karp, Joel

    2014-01-01

    The objective of this project was to develop an understanding of the limits of performance for a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators using relatively large, individual photomultipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photo-sensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm-thick LYSO continuous detector using a maximum likelihood position algorithm and shallow slots cut into the entrance surface.
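
    The report's own positioning algorithm is not reproduced here; the sketch below only illustrates the generic maximum-likelihood idea for a continuous detector: given calibrated mean light-response functions of the photosensors, choose the event position that maximizes the Poisson log-likelihood of the observed signals. The Gaussian response, sensor layout and light yield are illustrative assumptions.

```python
# Generic sketch of maximum-likelihood positioning in a continuous scintillator.
# Gaussian light-response functions (LRFs) stand in for calibrated ones.
import numpy as np

sensor_x = np.linspace(-20, 20, 9)                    # sensor centres (mm)

def lrf(x, total=2000.0, sigma=8.0):
    """Expected counts in each sensor for an event at position x (mm)."""
    w = np.exp(-0.5 * ((sensor_x - x) / sigma) ** 2)
    return total * w / w.sum()

def ml_position(counts, grid=np.linspace(-20, 20, 801)):
    # Poisson log-likelihood up to a constant: sum_i (n_i*log(mu_i) - mu_i)
    ll = [np.sum(counts * np.log(lrf(g)) - lrf(g)) for g in grid]
    return grid[int(np.argmax(ll))]

rng = np.random.default_rng(1)
x_true = 3.7
counts = rng.poisson(lrf(x_true))                     # one simulated event
print("true position:", x_true, "mm, estimated:", ml_position(counts), "mm")
```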

  6. High Resolution Powder Diffraction and Structure Determination

    International Nuclear Information System (INIS)

    Cox, D. E.

    1999-01-01

    It is clear that high-resolution synchrotron X-ray powder diffraction is a very powerful and convenient tool for material characterization and structure determination. Most investigations to date have been carried out under ambient conditions and have focused on structure solution and refinement. The application of high-resolution techniques to increasingly complex structures will certainly represent an important part of future studies, and it has been seen how ab initio solution of structures with perhaps 100 atoms in the asymmetric unit is within the realms of possibility. However, the ease with which temperature-dependence measurements can be made, combined with improvements in the technology of position-sensitive detectors, will undoubtedly stimulate precise in situ structural studies of phase transitions and related phenomena. One challenge in this area will be to develop high-resolution techniques for ultra-high pressure investigations in diamond anvil cells. This will require highly focused beams and very precise collimation in front of the cell down to dimensions of 50 μm or less. Anomalous scattering offers many interesting possibilities as well. As a means of enhancing scattering contrast it has applications not only to the determination of cation distribution in mixed systems such as the superconducting oxides discussed in Section 9.5.3, but also to the location of specific cations in partially occupied sites, such as the extra-framework positions in zeolites, for example. Another possible application is to provide phasing information for ab initio structure solution. Finally, the precise determination of f′ as a function of energy through an absorption edge can provide useful information about cation oxidation states, particularly in conjunction with XANES data. In contrast to many experiments at a synchrotron facility, powder diffraction is a relatively simple and user-friendly technique, and most of the procedures and software for data analysis

  7. High resolution CT of the lung

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, Harumi (Kyoto Univ. (Japan). Faculty of Medicine)

    1991-02-01

    The emergence of computed tomography (CT) in the early 1970s has greatly contributed to diagnostic radiology. The brain was the first organ examined with CT, followed by the abdomen. For the chest, CT also came into use shortly after its introduction, in the examination of the thoracic cavity and mediastinum. CT techniques were, however, of limited significance in the evaluation of pulmonary diseases, especially diffuse pulmonary diseases. High-resolution CT (HRCT) has since been introduced in clinical investigations of the lung field. This article presents chest radiographic and conventional tomographic interpretations and introduces the corresponding HRCT findings for the same shadows, with a summary of the significance of HRCT and of issues in diagnostic imaging. The materials outlined are tuberculosis, pneumoconiosis, bronchopneumonia, mycoplasma pneumonia, lymphangitic carcinomatosis, sarcoidosis, diffuse panbronchiolitis, interstitial pneumonia, and pulmonary emphysema. Finally, an overview of basic investigations evolved from HRCT is given. (N.K.) 140 refs.

  8. Constructing a WISE High Resolution Galaxy Atlas

    Science.gov (United States)

    Jarrett, T. H.; Masci, F.; Tsai, C. W.; Petty, S.; Cluver, M.; Assef, Roberto J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.; et al.

    2012-01-01

    After eight months of continuous observations, the Wide-field Infrared Survey Explorer (WISE) mapped the entire sky at 3.4 micron, 4.6 micron, 12 micron, and 22 micron. We have begun a dedicated WISE High Resolution Galaxy Atlas project to fully characterize large, nearby galaxies and produce a legacy image atlas and source catalog. Here we summarize the deconvolution techniques used to significantly improve the spatial resolution of WISE imaging, specifically designed to study the internal anatomy of nearby galaxies. As a case study, we present results for the galaxy NGC 1566, comparing the WISE enhanced-resolution image processing to that of Spitzer, Galaxy Evolution Explorer, and ground-based imaging. This is the first paper in a two-part series; results for a larger sample of nearby galaxies are presented in the second paper.

  9. A high resolution jet analysis for LEP

    International Nuclear Information System (INIS)

    Hariri, S.

    1992-11-01

    A high resolution multijet analysis of hadronic events produced in e⁺e⁻ annihilation at a C.M.S. energy of 91.2 GeV is described. Hadronic events produced in e⁺e⁻ annihilations are generated using the Monte Carlo program JETSET 7.3 with its two options: Matrix Element (M.E.) and Parton Showers (P.S.). The shower option is used with its default parameter values, while the M.E. option is used with an invariant mass cut Y_cut = 0.01 instead of 0.02. This choice ensures a better continuity in the evolution of the event shape variables. (K.A.) 3 refs.; 26 figs.; 1 tab

  10. High Resolution Displays Using NCAP Liquid Crystals

    Science.gov (United States)

    Macknick, A. Brian; Jones, Phil; White, Larry

    1989-07-01

    Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.

  11. High resolution VUV facility at INDUS-1

    International Nuclear Information System (INIS)

    Krishnamurty, G.; Saraswathy, P.; Rao, P.M.R.; Mishra, A.P.; Kartha, V.B.

    1993-01-01

    Synchrotron radiation (SR) generated in electron storage rings is a unique source for the study of atomic and molecular spectroscopy, especially in the vacuum ultraviolet region. Realizing the potential of this light source, efforts are in progress to develop a beamline facility at INDUS-1 to carry out high resolution atomic and molecular spectroscopy. This beamline consists of a fore-optic which is a combination of three cylindrical mirrors. The mirrors are so chosen that the SR beam, having a divergence of 60 mrad (horizontal) x 6 mrad (vertical), is focussed onto the slit of a 6.65 metre off-plane spectrometer in Eagle mount, equipped with a horizontal slit and vertical dispersion. The design of the various components of the beamline is completed. It has been decided to build the spectrometer as per the requirements of the user community. Details of the various aspects of the beamline will be presented. (author). 3 figs

  12. High-resolution CT of airway reactivity

    International Nuclear Information System (INIS)

    Herold, C.J.; Brown, R.H.; Hirshman, C.A.; Mitzner, W.; Zerhouni, E.A.

    1990-01-01

    Assessment of airway reactivity has generally been limited to experimental nonimaging models. The authors of this paper used high-resolution CT (HRCT) to evaluate airway reactivity and to calculate airway resistance (Raw) compared with lung resistance (RL). Ten anesthetized and ventilated dogs were investigated with HRCT (10 contiguous 2-mm sections through the lower lung lobes) in the control state, following aerosol histamine challenge, and following post-histamine hyperinflation. The HRCT scans were digitized, and the areas of 10 airways per dog (diameter, 1-10 mm) were measured with a computer edging process. Changes in airway area and Raw (calculated as 1/[area]²) were measured. RL was assessed separately, following the same protocol. Data were analyzed by use of a paired t-test with significance at p < .05
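
    Since the record states that Raw was calculated as 1/[area]², the fold change in airway resistance follows directly from the measured areas; a brief illustration (the area values are made-up numbers, not data from the study):

```python
# Relative airway resistance from HRCT areas, using the record's Raw ~ 1/area^2
# relation. Areas are hypothetical example values in mm^2.
import numpy as np

area_control = np.array([12.5, 7.1, 3.0])
area_histamine = np.array([8.0, 4.4, 1.9])

raw_fold_increase = (area_control / area_histamine) ** 2   # Raw_hist / Raw_control
print("fold increase in airway resistance per airway:", raw_fold_increase)
```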

  13. High resolution crystal calorimetry at LHC

    International Nuclear Information System (INIS)

    Schneegans, M.; Ferrere, D.; Lebeau, M.; Vivargent, M.

    1991-01-01

    The search for Higgs bosons beyond the LEP200 reach could be one of the main tasks of the future pp and ee colliders. In the intermediate mass region, and in particular in the range 80-140 GeV/c², only the 2-photon decay mode of a Higgs produced inclusively or in association with a W gives a good chance of observation. A 'dedicated' very high resolution calorimeter with photon angle reconstruction and pion identification capability should detect a Higgs signal with high probability. A crystal calorimeter can be considered a conservative approach to such a detector, since a large body of design and operation experience already exists. The extensive R and D needed to find a dense, fast and radiation-hard crystal is under way. Guidelines for designing an optimum calorimeter for LHC are discussed and preliminary configurations are given. (author) 7 refs., 3 figs., 2 tabs

  14. High resolution tomography using analog coding

    International Nuclear Information System (INIS)

    Brownell, G.L.; Burnham, C.A.; Chesler, D.A.

    1985-01-01

    As part of a 30-year program in the development of positron instrumentation, the authors have developed a high resolution bismuth germanate (BGO) ring tomograph (PCR) employing 360 detectors and 90 photomultiplier tubes for one plane. The detectors are shaped as trapezoids and are 4 mm wide at the front end. When assembled, they form an essentially continuous cylindrical detector. Light from a scintillation in the detector is viewed through a cylindrical light pipe by the photomultiplier tubes. By use of an analog coding scheme, the detector emitting light is identified from the phototube signals. In effect, each phototube can identify four crystals. PCR is designed as a static device and does not use interpolative motion. This results in a considerable advantage when performing dynamic studies. PCR is the positron tomography analog of the γ-camera widely used in nuclear medicine

  15. High-resolution CT of otosclerosis

    International Nuclear Information System (INIS)

    Dewen, Yang; Kodama, Takao; Tono, Tetsuya; Ochiai, Reiji; Kiyomizu, Kensuke; Suzuki, Yukiko; Yano, Takanori; Watanabe, Katsushi

    1997-01-01

    High-resolution CT (HRCT) scans of thirty-two patients (60 ears) with the clinical diagnosis of fenestral otosclerosis were evaluated retrospectively. HRCT was performed with 1-mm-thick targeted sections and 1-mm (36 ears) or 0.5-mm (10 ears) intervals in the semiaxial projection. Seven patients (14 ears) underwent helical scanning with a 1-mm slice thickness and 1-mm/sec table speed. Forty-five ears (75%) were found to have one or more otospongiotic or otosclerotic foci on HRCT. In most instances (30 ears), the otospongiotic foci were found in the region of the fissula ante fenestram. No significant correlations between CT findings and air conduction threshold were observed. We found a significant relationship between lesions of the labyrinthine capsule and sensorineural hearing loss. We conclude that HRCT is a valuable modality for diagnosing otosclerosis, especially when an otospongiotic focus is detected. (author)

  16. High resolution CT in pulmonary sarcoidosis

    International Nuclear Information System (INIS)

    Spina, Juan C.; Curros, Marisela L.; Gomez, M.; Gonzalez, A.; Chacon, Carolina; Guerendiain, G.

    2000-01-01

    Objectives: To establish the particular advantages of high resolution CT (HRCT) for the diagnosis of pulmonary sarcoidosis. Material and Methods: A series of fourteen patients (4 men and 10 women; mean age 44.5 years) with thoracic sarcoidosis. All patients were studied using HRCT and the diagnosis was confirmed in each case. Confidence intervals were obtained for the different disease manifestations. Results: The most common findings were: lymph node enlargement (n=14 patients), pulmonary nodules (n=13), thickening of septa (n=6), peribronchial vascular thickening (n=5), pulmonary pseudo mass (n=5) and signs of fibrosis (n=4). The stage most commonly observed was stage II. It is worth noting that no cases of pleural effusion or cavitation of pulmonary lesions were observed. Conclusions: In this series, the overlapping of confidence intervals for lymph node enlargement, single pulmonary nodules and septal thickening allows one to infer that their presence in a young adult with few clinical symptoms makes it necessary to rule out sarcoidosis first. (author)

  17. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

    Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal-reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam-induced movement of monolamellar crystals of C₄₄H₉₀ paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well-ordered monolamellar paraffin crystals on carbon films, are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site-specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain-cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  18. High resolution infrared spectroscopy of symbiotic stars

    International Nuclear Information System (INIS)

    Bensammar, S.

    1989-01-01

    We report here very early results of high resolution (5×10³ – 4×10⁴) infrared spectroscopy (1 - 2.5 μm) of different symbiotic stars (T CrB, RW Hya, CI Cyg, PU Vul) observed with the Fourier Transform Spectrometer of the 3.60 m Canada-France-Hawaii Telescope. These stars are usually considered to be interacting binaries, and only few details are known about the nature of their cool component. CO absorption lines are detected for the four stars. Very different profiles of the hydrogen Brackett γ and helium 10830 Å lines are shown for CI Cyg observed at different phases, while PU Vul shows very intense emission lines

  19. GRANULOMETRIC MAPS FROM HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    Catherine Mering

    2011-05-01

    Full Text Available A new method of land cover mapping from satellite images using granulometric analysis is presented here. Discontinuous landscapes such as the steppe bushes of semi-arid regions and recently growing urban settlements are especially concerned by this study. Spatial organisations of the land cover are quantified by means of the size distribution analysis of the land cover units extracted from high resolution remotely sensed images. A granulometric map is built by automatic classification of every pixel of the image according to the granulometric density inside a sliding neighbourhood. Granulometric mapping brings some advantages over traditional thematic mapping by remote sensing by focusing on fine spatial events and small changes in one particular category of the landscape.
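
    The record does not spell out the algorithm; the sketch below shows a standard granulometry by successive morphological openings of a binary land-cover mask, i.e. the size-density information from which a granulometric map would be built, leaving out the sliding-neighbourhood classification step. The synthetic mask and disk radii are assumptions.

```python
# Sketch of a granulometry (size distribution by successive openings) of a
# binary land-cover mask. The per-pixel sliding-window classification of the
# record is not reproduced here.
import numpy as np
from scipy import ndimage

def disk(r):
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def granulometry(mask, radii):
    """Remaining foreground area after opening with disks of increasing radius."""
    return np.array([ndimage.binary_opening(mask, structure=disk(r)).sum()
                     for r in radii])

# Synthetic "discontinuous landscape": sparse seeds dilated into small patches.
rng = np.random.default_rng(2)
mask = ndimage.binary_dilation(rng.random((256, 256)) > 0.995, structure=disk(4))

areas = granulometry(mask, radii=range(1, 9))
size_density = -np.diff(areas)        # pattern spectrum: area lost per size class
print("size density:", size_density)
```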

  20. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    To accelerate spectroscopic imaging using sparse sampling of (k,t)-space and subspace (or low-rank) modeling to enable high-resolution metabolic imaging with good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace and enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
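
    As a hedged sketch of the partial-separability (low-rank) idea described above, the snippet below estimates a low-dimensional temporal subspace from a small fully sampled training set and then recovers sparsely sampled voxels by fitting their subspace coefficients; Fourier encoding, field-inhomogeneity correction and all acquisition details of the actual method are omitted, and the dimensions and sampling pattern are arbitrary assumptions.

```python
# Minimal sketch of partial-separability (low-rank) recovery: the Casorati
# matrix C (voxels x time) is assumed to have low rank L, so each voxel's
# signal lies in a shared L-dimensional temporal subspace.
import numpy as np

rng = np.random.default_rng(3)
Nx, Nt, L = 64, 128, 4                         # voxels, time points, model order

# Ground truth: a rank-L spatiotemporal signal C = U @ V.
C = rng.normal(size=(Nx, L)) @ rng.normal(size=(L, Nt))

# Step 1: temporal subspace from a few fully sampled "training" voxels.
V = np.linalg.svd(C[:8, :], full_matrices=False)[2][:L, :]

# Step 2: every voxel is observed at only a few random time points; fit its
# subspace coefficients by least squares and synthesize the full signal.
C_hat = np.zeros_like(C)
for i in range(Nx):
    keep = rng.choice(Nt, size=16, replace=False)
    coeffs, *_ = np.linalg.lstsq(V[:, keep].T, C[i, keep], rcond=None)
    C_hat[i] = coeffs @ V

print("relative reconstruction error:",
      np.linalg.norm(C_hat - C) / np.linalg.norm(C))
```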

  1. Geological survey by high resolution electrical survey on granite areas

    International Nuclear Information System (INIS)

    Sugimoto, Yoshihiro; Yamada, Naoyuki

    2002-03-01

    As an integral part of the geological survey in 'The study of the regional groundwater flow system' that we are carrying out with the Tono Geoscience Center, we examined the relation between discontinuity structures, such as lineaments in the basement rock, and the resistivity structure (resistivity distribution). In order to confirm the efficacy of high resolution electrical surveying as a geological survey method, we carried out a high resolution electrical survey in a granite area. By comparing the resistivity distribution with the existing geological survey, lineament analysis and investigative drilling, we obtained the following results. 1. The resistivity structure of the survey area can broadly be classified into the following four ranges: 1) a low resistivity range of 50-800 Ωm, 2) an intermediate resistivity range of 200-2000 Ωm, 3) a high resistivity range of over 2000 Ωm, and 4) a low resistivity range in the section at 400-550 m along the survey line. 2. No feature corresponding to the low resistivity range of 4) is recognized in the existing geological data. 3. Comparison with the existing data confirmed that the resistivity structure largely corresponds to the geological structure. 4. A small-scale low resistivity area is recognized at the location corresponding to the previously mapped lineament. 5. A simulation was carried out for the low resistivity range of 4). As a result, it was found that a narrow low resistivity zone may appear as a broad low resistivity range in the analysed section. In the present survey, the resistivity distribution appears to clearly reflect the inhomogeneous and discontinuous structure of the basement rock, demonstrating the efficacy of high resolution electrical surveying as a geological survey method on granite. (author)

  2. High-resolution RCMs as pioneers for future GCMs

    Science.gov (United States)

    Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.

    2017-12-01

    Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data

  3. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    The present day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding. This leads to high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight in the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of using the high resolution information and the subgrid based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extreme high resolution (0.5m x 0.5m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with

  4. Integrated High Resolution Monitoring of Mediterranean vegetation

    Science.gov (United States)

    Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo; Mereu, Simone

    2017-04-01

    The study of vegetation features in complex and highly vulnerable ecosystems, such as the Mediterranean maquis, leads to the need for continuous monitoring systems at high spatial and temporal resolution, for a better interpretation of the mechanisms of phenological and eco-physiological processes. Near-surface remote sensing techniques are used to quantify, at high temporal resolution and with a certain degree of spatial integration, the seasonal variations of the surface optical and radiometric properties. In recent decades, the design and implementation of global monitoring networks has involved the use of non-destructive and/or cheaper approaches such as (i) continuous surface flux measurement stations, (ii) phenological observation networks, and (iii) measurement of temporal and spatial variations of the vegetation spectral properties. In this work preliminary results from the ECO-SCALE (Integrated High Resolution Monitoring of Mediterranean vegetation) project are reported. The project was mainly aimed at developing an integrated system for environmental monitoring based on digital photography, hyperspectral radiometry, and micrometeorological techniques during three years of experimentation (2013-2016) at a Mediterranean site in Italy (Capo Caccia, Alghero). The main results concern the analysis of chromatic coordinate indices from digital images, used to characterize the phenological patterns of typical shrubland species, determining the start and duration of the growing season and the physiological status under different environmental drought conditions; the seasonal patterns of canopy phenology were then compared to NEE (Net Ecosystem Exchange) patterns, showing similarities. However, the maximum values of NEE and ER (Ecosystem Respiration), and their short-term variation, seemed mainly tuned by the inter-annual pattern of meteorological variables, in particular the temperature recorded in the months preceding the vegetation green-up. Finally, green signals
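
    A common choice for the chromatic coordinate indices mentioned above is the green chromatic coordinate, GCC = G/(R+G+B), averaged over a region of interest; whether this exact index was used in the project is an assumption, and the sketch below only illustrates the computation on a synthetic frame.

```python
# Green chromatic coordinate (GCC) of the kind used in repeat-photography
# phenology: GCC = G / (R + G + B), averaged over a region of interest.
import numpy as np

def gcc(rgb_image, roi=None):
    """rgb_image: HxWx3 array; roi: boolean mask of the vegetation target."""
    img = rgb_image.astype(float)
    if roi is None:
        roi = np.ones(img.shape[:2], dtype=bool)
    r, g, b = (img[..., i][roi] for i in range(3))
    return float(np.mean(g / (r + g + b + 1e-9)))

# Synthetic example: a frame with a boosted green channel gives a higher GCC.
rng = np.random.default_rng(4)
frame = rng.integers(0, 80, size=(240, 320, 3)).astype(float)
frame[..., 1] += 60.0
print("GCC =", round(gcc(frame), 3))
```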

  5. High-Resolution Integrated Optical System

    Science.gov (United States)

    Prakapenka, V. B.; Goncharov, A. F.; Holtgrewe, N.; Greenberg, E.

    2017-12-01

    Raman and optical spectroscopy in situ at the extreme high pressure and temperature conditions relevant to planetary deep interiors is a versatile tool for characterization of a wide range of properties of minerals essential for understanding the structure, composition, and evolution of terrestrial and giant planets. Optical methods, greatly complementing X-ray diffraction and spectroscopy techniques, become crucial when dealing with light elements. The study of the vibrational and optical properties of minerals and volatiles has been the topic of many research efforts in past decades. A great deal of information on materials properties under extreme pressure and temperature has been acquired, including that related to structural phase changes, electronic transitions, and chemical transformations. These provide an important insight into the physical and chemical states of planetary interiors (e.g. the nature of deep reservoirs) and their dynamics, including heat and mass transport (e.g. the deep carbon cycle). Optical and vibrational spectroscopy can also be very instrumental for elucidating the nature of molten states such as those related to the Earth's volatiles (CO2, CH4, H2O), aqueous fluids and silicate melts, planetary ices (H2O, CH4, NH3), noble gases, and H2. An optical spectroscopy study performed concomitantly with X-ray diffraction and spectroscopy measurements at the GSECARS beamlines, on the same sample and at the same P-T conditions, would greatly enhance the quality of this research and, moreover, will provide unique new information on the chemical state of matter. The advanced high-resolution user-friendly integrated optical system is currently under construction and expected to be completed by 2018. In our conceptual design we have implemented Raman spectroscopy with five excitation wavelengths (266, 473, 532, 660, 946 nm), confocal imaging, double-sided IR laser heating combined with high temperature Raman (including coherent anti-Stokes Raman scattering) and

  6. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials
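
    The momentum dependence of the scattering-induced broadening mentioned above can be made concrete with the standard Highland (PDG) multiple-scattering formula; the 10 cm copper thickness and the momenta below are illustrative choices, not values from the record.

```python
# RMS multiple-scattering angle from the standard Highland (PDG) formula,
#   theta0 = (13.6 MeV / (beta*c*p)) * sqrt(x/X0) * (1 + 0.038*ln(x/X0)),
# evaluated for muons crossing 10 cm of copper (X0 ~ 1.436 cm).
import numpy as np

def theta0_mrad(p_mev, x_over_X0, m_mu=105.66):
    beta = p_mev / np.hypot(p_mev, m_mu)          # beta = p/E for a muon
    return 13.6 / (beta * p_mev) * np.sqrt(x_over_X0) \
           * (1 + 0.038 * np.log(x_over_X0)) * 1e3

for p in (1000.0, 3000.0, 10000.0):               # momentum in MeV/c
    print(f"p = {p/1000:.0f} GeV/c  ->  theta0 ~ {theta0_mrad(p, 10/1.436):.1f} mrad")
```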

  7. High resolution computed tomography of positron emitters

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Budinger, T.F.; Cahoon, J.L.; Huesman, R.H.; Jackson, H.G.

    1976-10-01

    High resolution computed transaxial radionuclide tomography has been performed on phantoms containing positron-emitting isotopes. The imaging system consisted of two opposing groups of eight NaI(Tl) crystals 8 mm x 30 mm x 50 mm deep and the phantoms were rotated to measure coincident events along 8960 projection integrals as they would be measured by a 280-crystal ring system now under construction. The spatial resolution in the reconstructed images is 7.5 mm FWHM at the center of the ring and approximately 11 mm FWHM at a radius of 10 cm. We present measurements of imaging and background rates under various operating conditions. Based on these measurements, the full 280-crystal system will image 10,000 events per sec with 400 μCi in a section 1 cm thick and 20 cm in diameter. We show that 1.5 million events are sufficient to reliably image 3.5-mm hot spots with 14-mm center-to-center spacing and isolated 9-mm diameter cold spots in phantoms 15 to 20 cm in diameter
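
    The reconstruction step itself is not described in the record; as a generic illustration of how such projection integrals are turned into an image, the sketch below runs a filtered backprojection on a toy two-hot-spot phantom using scikit-image (the phantom, grid size and number of views are assumptions).

```python
# Generic filtered-backprojection sketch for projection-integral data of this
# kind, using scikit-image's Radon / inverse-Radon transforms.
import numpy as np
from skimage.transform import radon, iradon

# Toy phantom on a 128x128 grid: a small and a larger "hot spot".
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(xx - 50) ** 2 + (yy - 64) ** 2 < 9] = 1.0
img[(xx - 80) ** 2 + (yy - 64) ** 2 < 25] = 0.5

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(img, theta=theta)       # projection integrals (sinogram)
recon = iradon(sino, theta=theta)    # ramp-filtered backprojection
print("max absolute reconstruction error:", float(np.abs(recon - img).max()))
```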

  8. High speed, High resolution terahertz spectrometers

    International Nuclear Information System (INIS)

    Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook

    2008-01-01

    A variety of sources and methods have been developed for terahertz spectroscopy during almost two decades. Terahertz time domain spectroscopy (THz TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz TDS has been demonstrated, featuring rapid data acquisition and a high spectral resolution. Also, terahertz frequency comb spectroscopy (TFCS) possesses attractive features for high precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high speed, high resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies, without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured using the cross correlation method to be 270 fs. AOS THz TDS is presented in Fig. 1, which shows a time domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal with a 100 ps time window. The spectrum obtained by the fast Fourier transformation (FFT) of the time domain waveform has a frequency resolution of 100 MHz. The dependence of the signal to noise ratio (SNR) on the measurement time is also investigated
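
    The timing relations behind asynchronous optical sampling follow directly from the two repetition rates; the short calculation below uses a 100 MHz repetition rate, consistent with the 10 ns window and 100 MHz resolution quoted above, while the 100 Hz offset is an assumed value (the measured 270 fs resolution is then set by timing jitter and pulse duration rather than by the sampling step).

```python
# Timing relations of asynchronous optical sampling (AOS), for illustration.
f_rep = 100e6          # laser 1 repetition rate (Hz)
df    = 100.0          # repetition-rate offset of laser 2 (Hz), assumed value

time_window  = 1.0 / f_rep                     # one scan spans a full period: 10 ns
time_step    = df / (f_rep * (f_rep + df))     # delay increment per pulse pair
scan_rate    = df                              # scans per second
spectral_res = 1.0 / time_window               # FFT resolution: 100 MHz

print(f"window {time_window*1e9:.1f} ns, step {time_step*1e15:.1f} fs, "
      f"{scan_rate:.0f} scans/s, resolution {spectral_res/1e6:.0f} MHz")
```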

  9. High resolution CT of temporal bone trauma

    International Nuclear Information System (INIS)

    Youn, Eun Kyung

    1986-01-01

    Radiographic studies of the temporal bone following head trauma are indicated when there is cerebrospinal fluid otorrhea or rhinorrhoea, hearing loss, or facial nerve paralysis. Plain radiography displays only 17-30% of temporal bone fractures, and pluridirectional tomography is both difficult to perform, particularly in the acutely ill patient, and less satisfactory for the demonstration of fine fractures. Consequently, high resolution CT is the imaging method of choice for the investigation of suspected temporal bone trauma and allows spatial resolution of fine bony detail comparable to that attainable by conventional tomography. Eight cases of temporal bone trauma were examined at Korea General Hospital from April 1985 through May 1986. The results were as follows: Seven patients (87%) suffered longitudinal fractures. In 6 patients who had purely conductive hearing loss, CT revealed various ossicular chain abnormalities. In one patient who had sensorineural hearing loss, CT demonstrated intact ossicles with a fracture nearing the lateral wall of the lateral semicircular canal. In one patient who had mixed hearing loss, CT showed a complex fracture.

  10. High resolution SETI: Experiences and prospects

    Science.gov (United States)

    Horowitz, Paul; Clubok, Ken

    Megachannel spectroscopy with sub-Hertz resolution constitutes an attractive strategy for a microwave search for extraterrestrial intelligence (SETI), assuming the transmission of a narrowband radiofrequency beacon. Such resolution matches the properties of the interstellar medium, and the necessary Doppler corrections provide a high degree of interference rejection. We have constructed a frequency-agile receiver with an FFT-based 8 megachannel digital spectrum analyzer, on-line signal recognition, and multithreshold archiving. We are using it to conduct a meridian transit search of the northern sky at the Harvard-Smithsonian 26-m antenna, with a second identical system scheduled to begin observations in Argentina this month. Successive 400 kHz spectra, at 0.05 Hz resolution, are searched for features characteristic of an intentional narrowband beacon transmission. These spectra are centered on guessable frequencies (such as λ21 cm), referenced successively to the local standard of rest, the galactic barycenter, and the cosmic blackbody rest frame. This search has rejected interference admirably, but is greatly limited both in total frequency coverage and sensitivity to signals other than carriers. We summarize five years of high resolution SETI at Harvard, in the context of answering the questions "How useful is narrowband SETI, how serious are its limitations, what can be done to circumvent them, and in what direction should SETI evolve?" Increasingly powerful signal processing hardware, combined with ever-higher memory densities, are particularly relevant, permitting the construction of compact and affordable gigachannel spectrum analyzers covering hundreds of megahertz of instantaneous bandwidth.

  11. High-resolution CCD imaging alternatives

    Science.gov (United States)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still image systems electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, and the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an inter-lined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
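
    The pixel-shift idea described in the last sentence can be illustrated schematically: sampling the same optical image on several sub-pixel-shifted grids and re-interleaving the samples fills the spatial gaps between sensor sites. The sketch below is an idealized, noise-free toy (the scene and the half-pixel shift pattern are invented) and is not FOR-A's actual implementation.

```python
import numpy as np

def sample_shifted(scene, shift_y, shift_x, step=2):
    """Sample every `step`-th pixel of a finely sampled scene, offset by (shift_y, shift_x)."""
    return scene[shift_y::step, shift_x::step]

def interleave(frames, step=2):
    """Re-interleave the shifted low-resolution frames onto the fine output grid."""
    h, w = frames[(0, 0)].shape
    out = np.zeros((h * step, w * step))
    for (dy, dx), frame in frames.items():
        out[dy::step, dx::step] = frame
    return out

# Toy "scene" standing in for the optical image on the sensor plane (values are arbitrary);
# the 750 x 580 low-resolution grid mirrors the pixel counts quoted in the abstract.
rng = np.random.default_rng(1)
scene = rng.random((580 * 2, 750 * 2))

# Four exposures, each with the sensor displaced by half a pixel pitch in y and/or x.
frames = {(dy, dx): sample_shifted(scene, dy, dx) for dy in (0, 1) for dx in (0, 1)}
mosaic = interleave(frames)

print(frames[(0, 0)].shape, "->", mosaic.shape)   # (580, 750) -> (1160, 1500)
assert np.allclose(mosaic, scene)                 # exact only in this ideal, noise-free toy
```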

  12. High resolution simultaneous measurements of airborne radionuclides

    International Nuclear Information System (INIS)

    Abe, T.; Yamaguchi, Y.; Tanaka, K.; Komura, K.

    2006-01-01

    High resolution (2-3 hrs) simultaneous measurements of the airborne radionuclides ²¹²Pb, ²¹⁰Pb and ⁷Be have been performed using extremely low background Ge detectors at the Ogoya Underground Laboratory. We have measured the above radionuclides at three monitoring points: 1) the Low Level Radioactivity Laboratory (LLRL), Kanazawa University; 2) Shishiku Plateau (640 m MSL), located about 8 km from LLRL, to investigate vertical differences in activity levels; and 3) Hegura Island (10 m MSL), located about 50 km from the Noto Peninsula in the Sea of Japan, to evaluate the influence of the Asian continent and the Japanese mainland on the variation of the activity levels. Variations of the short-lived ²¹²Pb concentration showed noticeable time lags between LLRL and Shishiku Plateau. These time lags might be caused by changes in the height of the planetary boundary layer. In contrast, variations of the long-lived ²¹⁰Pb and ⁷Be showed simultaneity at the three locations because these concentrations are homogeneous over the whole area. (author)

  13. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is assessed visually, depends on the dose rate and the inertia of the television pick-up tube; this connection is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of prototype high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical bandwidth and the number of lines on the resolution of the stored X-ray television image is explained in more detail. (orig.)

  14. Using High Resolution Simulations with WRF/SSiB Regional Climate Model Constrained by In Situ Observations to Assess the Impacts of Dust in Snow in the Upper Colorado River Basin

    Science.gov (United States)

    Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.

    2015-12-01

    The mountain snowpack is an essential resource for both the environment and society. Observational and energy balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). In order to accurately estimate the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to be able to (1) adequately simulate the snowpack (accumulation), and (2) realistically represent DOS processes in models. Energy balance models do not capture the impact on a broader local or regional scale, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoother terrain. All this implies that the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. Recent increases in computing power allow RCMs to be used at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain modeling simulations. Therefore, in the work presented here, we take advantage of these latest resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution. This scale has been shown by others to be adequate for capturing orographic processes over WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB. Furthermore, we adjust the dust absorptive properties based on observed values at these sites, as opposed to generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.

  15. Processing method for high resolution monochromator

    International Nuclear Information System (INIS)

    Kiriyama, Koji; Mitsui, Takaya

    2006-12-01

    A processing method for high resolution monochromators (HRM) has been developed at the Japan Atomic Energy Agency / Quantum Beam Science Directorate / Synchrotron Radiation Research Unit at SPring-8. For manufacturing an HRM, a sophisticated slicing machine and an X-ray diffractometer have been installed for shaping a crystal ingot and for precisely orienting the surface of a crystal ingot, respectively. The specifications of the slicing machine are as follows. The maximum diamond blade size is φ 350 mm in diameter, with a spindle diameter of φ 38.1 mm and a thickness of 2 mm. A large crystal, such as an ingot 100 mm in diameter and 200 mm in length, can be cut; thin crystal samples such as wafers can also be cut using another sample holder. The working distance of the main shaft in the direction perpendicular to the working table is 350 mm at maximum. The smallest resolution of the main shaft in the front-and-back and top-and-bottom directions is 0.001 mm, read by a digital encoder. A feed rate of 2 mm/min can be set for cutting samples in the forward direction. For orienting crystal faces relative to the blade direction, a one-circle goniometer and a two-circle segment are mounted on the working table of the machine. Rotation and tilt of the stage can be adjusted manually. The turn stage is equipped with a digital encoder with an angle resolution of better than 0.01 degrees. In addition, a hand drill is available as a supporting device for detailed processing of crystals. An ideal crystal face can thus be cut from crystal samples within an accuracy of about 0.01 degrees. With these devices in place, a high energy resolution monochromator crystal for inelastic X-ray scattering and a beam collimator have been obtained and are expected to be used for nanotechnology studies. (author)

  16. Toward high-resolution optoelectronic retinal prosthesis

    Science.gov (United States)

    Palanker, Daniel; Huie, Philip; Vankov, Alexander; Asher, Alon; Baccus, Steven

    2005-04-01

    It has already been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. Current retinal implants provide very low resolution (just a few electrodes), while several thousand pixels are required for functional restoration of sight. We present a design of an optoelectronic retinal prosthetic system that can activate a retinal stimulating array with pixel density up to 2,500 pix/mm² (geometrically corresponding to a visual acuity of 20/80), and allows for natural eye scanning rather than scanning with a head-mounted camera. The system operates similarly to "virtual reality" imaging devices used in military and medical applications. An image from a video camera is projected by a goggle-mounted infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Such a system provides a broad field of vision by allowing for natural eye scanning. The goggles are transparent to visible light, thus allowing for simultaneous utilization of remaining natural vision along with prosthetic stimulation. Optical control of the implant allows for simple adjustment of image processing algorithms and for learning. A major prerequisite for high resolution stimulation is the proximity of neural cells to the stimulation sites. This can be achieved with sub-retinal implants constructed in a manner that directs migration of retinal cells to target areas. Two basic implant geometries are described: perforated membranes and protruding electrode arrays. The possibility of tactile neural stimulation is also examined.

  17. High-resolution phylogenetic microbial community profiling

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Esther; Coleman-Derr, Devin; Bowman, Brett; Schwientek, Patrick; Clum, Alicia; Copeland, Alex; Ciobanu, Doina; Cheng, Jan-Fang; Gies, Esther; Hallam, Steve; Tringe, Susannah; Woyke, Tanja

    2014-03-17

    The representation of bacterial and archaeal genome sequences is strongly biased towards cultivated organisms, which belong to merely four phylogenetic groups. Functional information and inter-phylum level relationships are still largely underexplored for candidate phyla, which are often referred to as microbial dark matter. Furthermore, a large portion of the 16S rRNA gene records in the GenBank database are labeled as environmental samples and unclassified, which is in part due to low read accuracy, potential chimeric sequences produced during PCR amplifications, and the low resolution of short amplicons. In order to improve the phylogenetic classification of novel species and advance our knowledge of the ecosystem function of uncultivated microorganisms, high-throughput full-length 16S rRNA gene sequencing methodologies with reduced biases are needed. We evaluated the performance of PacBio single-molecule real-time (SMRT) sequencing in high-resolution phylogenetic microbial community profiling. For this purpose, we compared PacBio and Illumina metagenomic shotgun and 16S rRNA gene sequencing of a mock community as well as of an environmental sample from Sakinaw Lake, British Columbia. Sakinaw Lake is known to contain a large number of microbial species from candidate phyla. Sequencing results show that community structure based on PacBio shotgun and 16S rRNA gene sequences is highly similar in both the mock and the environmental communities. Resolution power and community representation accuracy from SMRT sequencing data appeared to be independent of the GC content of microbial genomes and were higher when compared to Illumina-based metagenome shotgun and 16S rRNA gene (iTag) sequences; e.g., full-length sequencing resolved all 23 OTUs in the mock community, while iTags did not resolve closely related species. SMRT sequencing hence offers various potential benefits when characterizing uncharted microbial communities.

  18. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.

    Science.gov (United States)

    Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen

    2010-11-01

    In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with a resolution of 10,000-15,000 full width at half maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with experts' visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological conditions.
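
    As a schematic of the kind of MCMC-based inference the abstract describes, the sketch below fits a single Gaussian peak to a synthetic spectrum segment with a random-walk Metropolis sampler. It is not the paper's parametric model or algorithm; the peak shape, priors, proposal steps and data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic m/z segment: one Gaussian peak plus Gaussian noise (all values invented).
mz = np.linspace(999.5, 1000.5, 400)
true_center, true_height, true_width, noise_sigma = 1000.02, 8.0, 0.02, 1.0
data = (true_height * np.exp(-0.5 * ((mz - true_center) / true_width) ** 2)
        + rng.normal(scale=noise_sigma, size=mz.size))

def log_posterior(theta):
    """Gaussian likelihood with flat priors inside loose bounds (a deliberately simple model)."""
    center, height, width = theta
    if not (mz[0] < center < mz[-1] and 0.0 < height < 100.0 and 0.001 < width < 0.2):
        return -np.inf
    model = height * np.exp(-0.5 * ((mz - center) / width) ** 2)
    return -0.5 * np.sum((data - model) ** 2) / noise_sigma ** 2

# Random-walk Metropolis over (center, height, width).
theta = np.array([1000.0, 5.0, 0.05])
step = np.array([0.005, 0.5, 0.005])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + step * rng.normal(size=3)
    logp_new = log_posterior(proposal)
    if np.log(rng.random()) < logp_new - logp:   # Metropolis acceptance rule
        theta, logp = proposal, logp_new
    samples.append(theta.copy())

posterior = np.array(samples[5000:])             # discard burn-in
print("posterior mean (center, height, width):", posterior.mean(axis=0))
```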

  19. HIGH RESOLUTION AIRBORNE SHALLOW WATER MAPPING

    Directory of Open Access Journals (Sweden)

    F. Steinbacher

    2012-07-01

    In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40 km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution aerial images were taken to provide an optical reference, offering a wide range of possibilities for further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye; the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. By carefully selecting days with optimum water quality, satisfactory penetration down to the river bed was achieved.

  20. High Resolution Airborne Shallow Water Mapping

    Science.gov (United States)

    Steinbacher, F.; Pfennigbauer, M.; Aufleger, M.; Ullrich, A.

    2012-07-01

    In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40 km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution aerial images were taken to provide an optical reference, offering a wide range of possibilities for further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye; the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. By carefully selecting days with optimum water quality, satisfactory penetration down to the river bed was achieved.

  1. High Resolution Modeling of Hurricanes in a Climate Context

    Science.gov (United States)

    Knutson, T. R.

    2007-12-01

    Modeling of tropical cyclone activity in a climate context initially focused on simulation of relatively weak tropical storm-like disturbances as resolved by coarse grid (200 km) global models. As computing power has increased, multi-year simulations with global models of grid spacing 20-30 km have become feasible. Increased resolution also allowed for simulation of storms of increasing intensity, and some global models generate storms of hurricane strength, depending on their resolution and other factors, although detailed hurricane structure is not simulated realistically. Results from some recent high resolution global model studies are reviewed. An alternative for hurricane simulation is regional downscaling. An early approach was to embed an operational (GFDL) hurricane prediction model within a global model solution, either for 5-day case studies of particular model storm cases, or for "idealized experiments" where an initial vortex is inserted into an idealized environment derived from global model statistics. Using this approach, hurricanes up to category five intensity can be simulated, owing to the model's relatively high resolution (9 km grid) and refined physics. Variants on this approach have been used to provide modeling support for theoretical predictions that greenhouse warming will increase the maximum intensities of hurricanes. These modeling studies also simulate increased hurricane rainfall rates in a warmer climate. The studies do not address hurricane frequency issues, and vertical shear is neglected in the idealized studies. A recent development is the use of regional model dynamical downscaling for extended (e.g., season-length) integrations of hurricane activity. In a study for the Atlantic basin, a non-hydrostatic model with grid spacing of 18 km is run without convective parameterization, but with internal spectral nudging toward observed large-scale (basin wavenumbers 0-2) atmospheric conditions from reanalyses. Using this approach, our

  2. Analysis of high-resolution simulations for the Black Forest region from a point of view of tourism climatology - a comparison between two regional climate models (REMO and CLM)

    Science.gov (United States)

    Endler, Christina; Matzarakis, Andreas

    2011-03-01

    An analysis of climate simulations from the point of view of tourism climatology, based on two regional climate models, namely REMO and CLM, was performed for a regional domain in the southwest of Germany, the Black Forest region, for two time frames: 1971-2000, representing the twentieth century climate, and 2021-2050, representing the future climate. In that context, the Intergovernmental Panel on Climate Change (IPCC) scenarios A1B and B1 are used. The analysis focuses on human-biometeorological and applied climatological issues, especially for tourism purposes - that means parameters belonging to the thermal (physiologically equivalent temperature, PET), physical (precipitation, snow, wind), and aesthetic (fog, cloud cover) facets of climate in tourism. In general, both models reveal similar trends, but differ in their extent. The trends in thermal comfort contradict each other: it tends to decrease in REMO, while it shows a slight increase in CLM. Moreover, REMO reveals a wider range of future climate trends than CLM, especially for sunshine, dry days, and heat stress. Both models are driven by the same global coupled atmosphere-ocean model ECHAM5/MPI-OM. Because both models are not able to resolve meso- and micro-scale processes such as cloud microphysics, differences between model results and discrepancies in the development of even those parameters (e.g., cloud formation and cover) are due to different model parameterization and formulation. Climatic changes expected by 2050 are small compared to 2100, but may have major impacts on tourism as, for example, snow cover and its duration are highly vulnerable to a warmer climate, directly affecting tourism in winter. Beyond that, indirect impacts are of high relevance as they influence tourism as well. Thus, changes in climate, the natural environment, demography, and tourists' demands, among other things, affect the economy in general. The analysis of the CLM results and its comparison with the REMO results complete the analysis performed

  3. High resolution time integration for SN radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2009-01-01

    First-order, second-order, and high resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high resolution method employs a rate of convergence better than first order, but also suppresses the artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. The high resolution method achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high resolution method. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high resolution methods converged to the same solution as the first-order method, with better convergence rates. High resolution is more accurate than first order and matches or exceeds the second-order method.
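
    High-resolution schemes of this kind are most familiar in space, where a nonlinear limiter falls back to a first-order stencil near steep gradients to avoid the oscillations of purely second-order discretizations. The sketch below shows that generic ingredient (a minmod-limited MUSCL update for 1D linear advection); it is only an analogy for the limiting concept, not the authors' time discretization or their quasi-linear solver.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero where the slopes disagree, otherwise the smaller-magnitude one."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def step_advection(u, c):
    """One step of upwind advection (speed > 0) with a minmod-limited second-order correction.

    c is the CFL number dt * speed / dx.  With the slope forced to zero this is the
    first-order upwind scheme; with an unlimited slope it is second order but oscillatory.
    """
    dl = u - np.roll(u, 1)            # u_i - u_{i-1}
    dr = np.roll(u, -1) - u           # u_{i+1} - u_i
    slope = minmod(dl, dr)            # nonlinear fallback to first order near steep gradients
    flux = u + 0.5 * (1.0 - c) * slope          # MUSCL-type upwind interface value
    return u - c * (flux - np.roll(flux, 1))

# Advect a square pulse around a periodic domain: the limited scheme keeps the edges
# sharp without the over/undershoots a plain second-order scheme would produce.
n, c = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
for _ in range(200):
    u = step_advection(u, c)
print("min/max after 200 steps:", float(u.min()), float(u.max()))   # stays within [0, 1]
```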

  4. Distribution-independent hierarchical N-body methods

    International Nuclear Information System (INIS)

    Aluru, S.

    1994-01-01

    The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N²) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log² N) in two dimensions and Ω(N log⁴ N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of the distribution of the particles. We show that both Greengard's and Barnes-Hut algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log² N) in three dimensions.
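
    For reference, a compact version of the hierarchical idea underlying both methods discussed above is a Barnes-Hut quadtree: particles are inserted into an adaptively subdivided square, and sufficiently distant cells are approximated by their total mass and centre of mass. The sketch below is a generic textbook 2D implementation (the opening angle, softening and particle set are arbitrary choices), not the distribution-independent structure proposed in the thesis.

```python
import numpy as np

class Cell:
    """Square cell of a 2D Barnes-Hut quadtree holding total mass and a running centre of mass."""
    def __init__(self, center, half_size):
        self.center = np.asarray(center, dtype=float)
        self.half = float(half_size)
        self.mass = 0.0
        self.msum = np.zeros(2)     # running mass-weighted sum of member positions
        self.children = None
        self.body = None            # position of the single particle, for leaf cells

    def _child_for(self, pos):
        if self.children is None:
            offsets = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
            self.children = [Cell(self.center + 0.5 * self.half * np.array(o), 0.5 * self.half)
                             for o in offsets]
        index = int(pos[0] > self.center[0]) + 2 * int(pos[1] > self.center[1])
        return self.children[index]

    def insert(self, pos, m):
        pos = np.asarray(pos, dtype=float)
        if self.mass == 0.0:                       # empty cell becomes a leaf
            self.body = pos
        elif self.body is not None:                # occupied leaf: push both bodies down
            old, self.body = self.body, None
            self._child_for(old).insert(old, self.mass)
            self._child_for(pos).insert(pos, m)
        else:                                      # internal cell
            self._child_for(pos).insert(pos, m)
        self.msum = self.msum + m * pos
        self.mass += m

    def accel(self, pos, theta=0.5, eps=1e-3):
        """Acceleration at pos (G = 1), opening a cell whenever cell_width / distance > theta."""
        if self.mass == 0.0:
            return np.zeros(2)
        d = self.msum / self.mass - pos
        r = np.sqrt(d @ d + eps ** 2)
        if self.children is None or 2.0 * self.half / r < theta:
            if self.body is not None and np.allclose(self.body, pos):
                return np.zeros(2)                 # skip the particle's own leaf
            return self.mass * d / r ** 3
        return sum(child.accel(pos, theta, eps) for child in self.children)

rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(2000, 2))
mass = 1.0 / len(positions)

root = Cell(center=(0.0, 0.0), half_size=1.0)
for p in positions:
    root.insert(p, mass)
print("acceleration on the first particle:", root.accel(positions[0]))
```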

  5. Quantifying and containing the curse of high resolution coronal imaging

    Directory of Open Access Journals (Sweden)

    V. Delouille

    2008-10-01

    Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe aim at approaching the Sun closer than ever before, with on board some high resolution imagers (HRI) having a subsecond cadence and a pixel area of about (80 km)² at the Sun during perihelion. In order to guarantee their scientific success, it is necessary to evaluate if the photon counts available at this resolution and cadence will provide a sufficient signal-to-noise ratio (SNR). For example, if the inhomogeneities in the Quiet Sun emission prevail at higher resolution, one may hope to locally have more photon counts than in the case of a uniform source. It is relevant to quantify how inhomogeneous the quiet corona will be for a pixel pitch that is about 20 times smaller than in the case of SoHO/EIT, and 5 times smaller than TRACE. We perform a first step in this direction by analyzing and characterizing the spatial intermittency of Quiet Sun images thanks to a multifractal analysis. We identify the parameters that specify the scale-invariance behavior. This identification then allows us to select a family of multifractal processes, namely the Compound Poisson Cascades, that can synthesize artificial images having some of the scale-invariance properties observed in the recorded images. The prevalence of self-similarity in Quiet Sun coronal images makes it relevant to study the ratio between the SNR present in SoHO/EIT images and in coarsened images. SoHO/EIT images thus play the role of "high resolution" images, whereas the "low-resolution" coarsened images are rebinned so as to simulate a smaller angular resolution and/or a larger distance to the Sun. For a fixed difference in angular resolution and in Spacecraft-Sun distance, we determine the proportion of pixels having a SNR preserved at high resolution given a particular increase in effective area. If scale-invariance continues to prevail at smaller scales, the conclusion reached with SoHO/EIT images can be transposed
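
    One ingredient of the study, rebinning an image to emulate a coarser instrument and asking how the photon-count SNR changes, is easy to illustrate. The sketch below block-sums a synthetic Poisson image and reports the Poisson SNR (square root of the counts) at several rebinning factors; the toy image and thresholds are invented, and the multifractal analysis itself is not implemented here.

```python
import numpy as np

def rebin(counts, factor):
    """Coarsen a photon-count image by summing factor x factor blocks of pixels."""
    h, w = counts.shape
    h2, w2 = h // factor * factor, w // factor * factor      # trim to a multiple of factor
    return counts[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor).sum(axis=(1, 3))

rng = np.random.default_rng(3)

# Toy image: a faint smooth background plus bright small-scale features, with Poisson noise.
ny, nx = 512, 512
expected = 4.0 * np.ones((ny, nx))
yy, xx = np.mgrid[0:ny, 0:nx]
for cy, cx in rng.integers(0, 512, size=(40, 2)):
    expected += 30.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * 5.0 ** 2))
counts = rng.poisson(expected)

# For photon-noise-limited data the per-pixel SNR is ~sqrt(counts); coarser pixels collect
# more photons, so rebinning trades angular resolution for SNR.
for factor in (1, 5, 20):
    coarse = rebin(counts, factor)
    typical_snr = np.sqrt(coarse.mean())
    frac_above_3 = np.mean(np.sqrt(coarse) > 3.0)
    print(f"rebin {factor:2d}x: typical SNR ~ {typical_snr:5.1f}, "
          f"fraction of pixels above SNR 3: {frac_above_3:.2f}")
```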

  6. A high resolution solar atlas for fluorescence calculations

    Science.gov (United States)

    Hearn, M. F.; Ohlmacher, J. T.; Schleicher, D. G.

    1983-01-01

    The characteristics required of a solar atlas to be used for studying the fluorescence process in comets are examined. Several sources of low resolution data were combined to provide an absolutely calibrated spectrum from 2250 Å to 7000 Å. Three different sources of high resolution data were also used to cover this same spectral range. The low resolution data were then used to put each high resolution spectrum on an absolute scale. The three high resolution spectra were then combined in their overlap regions to produce a single, absolutely calibrated high resolution spectrum over the entire spectral range.
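
    The calibration transfer described above (smoothing a high-resolution relative spectrum to the resolution of an absolutely calibrated low-resolution spectrum and scaling by their ratio) can be sketched as follows. Everything in the snippet, including the line list, continuum level and smoothing width, is synthetic and purely illustrative; edge bins are left untreated.

```python
import numpy as np

def smooth_to_low_resolution(wavelength, flux, fwhm):
    """Convolve a spectrum with a Gaussian of the given FWHM (uniform wavelength grid assumed)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    dlam = wavelength[1] - wavelength[0]
    half_width = int(4.0 * sigma / dlam)
    kx = np.arange(-half_width, half_width + 1) * dlam
    kernel = np.exp(-0.5 * (kx / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(flux, kernel, mode="same")     # edges are left untreated in this toy

rng = np.random.default_rng(7)

# Synthetic spectra in arbitrary units: a high-resolution *relative* spectrum with absorption
# lines, and a smooth, absolutely calibrated low-resolution continuum (both invented).
wl = np.arange(2250.0, 7000.0, 0.05)                          # wavelength grid, Angstrom
lines = rng.uniform(2300.0, 6900.0, 40)
relative = 1.0 - 0.5 * sum(np.exp(-0.5 * ((wl - c) / 0.2) ** 2) for c in lines)
calibrated_low = 1.0e-2 * (wl / 5000.0) ** 2

# Scale the high-resolution spectrum by the ratio of the absolute low-resolution spectrum
# to the high-resolution spectrum smoothed to comparable resolution.
smoothed = smooth_to_low_resolution(wl, relative, fwhm=20.0)
absolute_high = relative * (calibrated_low / smoothed)
i5000 = np.argmin(np.abs(wl - 5000.0))
print("calibrated high-resolution flux near 5000 A:", absolute_high[i5000])
```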

  7. High resolution time integration for Sn radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2008-01-01

    First-order, second-order, and high resolution time discretization schemes are implemented and studied for the S_N equations. The high resolution method employs a rate of convergence better than first order, but also suppresses artificial oscillations introduced by second-order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high resolution methods converged to the same solution as the first-order method, with better convergence rates. High resolution is more accurate than first order and matches or exceeds the second-order method. (authors)

  8. Evaluation of a High-Resolution Regional Reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.; Wahl, S.; Keller, J. D.; Bollmeyer, C.

    2014-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalyses projects (e.g., ERA, MERRA, CSFR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers 6 years (2007-2012) and is currently extended to 16 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  9. The high-resolution regional reanalysis COSMO-REA6

    Science.gov (United States)

    Ohlwein, C.

    2016-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalyses projects (e.g., ERA, MERRA, CSFR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  10. A High-resolution Reanalysis for the European CORDEX Region

    Science.gov (United States)

    Bentzien, Sabrina; Bollmeyer, Christoph; Crewell, Susanne; Friederichs, Petra; Hense, Andreas; Keller, Jan; Keune, Jessica; Kneifel, Stefan; Ohlwein, Christian; Pscheidt, Ieda; Redl, Stephanie; Steinke, Sandra

    2014-05-01

    A High-resolution Reanalysis for the European CORDEX Region Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalyses projects (e.g., ERA, MERRA, CSFR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. The work presented here focuses on the regional reanalysis for Europe with a domain matching the CORDEX-EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km). The COSMO reanalysis system comprises the assimilation of observational data using the existing nudging scheme of COSMO and is complemented by a special soil moisture analysis and boundary conditions given by ERA-interim data. The reanalysis data set currently covers 6 years (2007-2012). The evaluation of the reanalyses is done using independent observations with special emphasis on precipitation and high-impact weather situations. The development and evaluation of the COSMO-based reanalysis for the CORDEX-Euro domain can be seen as a preparation for joint European activities on the development of an ensemble system of regional reanalyses for Europe.

  11. A high-resolution regional reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.

    2015-12-01

    Reanalyses gain more and more importance as a source of meteorological information for many purposes and applications. Several global reanalyses projects (e.g., ERA, MERRA, CSFR, JMA9) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6km) instead of 0.11° (12km) and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  12. The implementation of sea ice model on a regional high-resolution scale

    Science.gov (United States)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data, and access to high-performance computing clusters has provided the capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with high resolution, were used to estimate the sensitivity of the model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) showed agreement with the simulation for the year 2010-2011.

  13. High-resolution gravity model of Venus

    Science.gov (United States)

    Reasenberg, R. D.; Goldberg, Z. M.

    1992-01-01

    The anomalous gravity field of Venus shows high correlation with surface features revealed by radar. We extract gravity models from the Doppler tracking data of the Pioneer Venus Orbiter by means of a two-step process. In the first step, we solve the nonlinear spacecraft state estimation problem using a Kalman filter-smoother. The Kalman filter has been evaluated through simulations. This evaluation and some unusual features of the filter are discussed. In the second step, we perform a geophysical inversion using a linear Bayesian estimator. To allow an unbiased comparison between gravity and topography, we use a simulation technique to smooth and distort the radar topographic data so as to yield maps having the same characteristics as our gravity maps. The maps presented cover 2/3 of the surface of Venus and display the strong topography-gravity correlation previously reported. The topography-gravity scatter plots show two distinct trends.
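
    The first step of the abstract, state estimation with a Kalman filter, can be illustrated with a generic one-dimensional constant-velocity example showing the predict/update cycle. The sketch below is not the Pioneer Venus Orbiter orbit-determination filter (which is nonlinear and far larger); the dynamics, noise levels and measurements are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Constant-velocity model: state x = [position, velocity]; we observe position only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])                                   # state transition
H = np.array([[1.0, 0.0]])                                              # measurement matrix
Q = 1e-4 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])          # process noise
R = np.array([[0.25]])                                                  # measurement variance

# Simulate a toy trajectory and noisy position measurements (all values invented).
n = 100
truth = np.zeros((n, 2))
truth[0] = [0.0, 1.0]
for k in range(1, n):
    truth[k] = F @ truth[k - 1] + rng.multivariate_normal(np.zeros(2), Q)
z = truth[:, 0] + rng.normal(scale=0.5, size=n)

# Kalman filter: predict with the dynamics, then update with each measurement.
x, P = np.array([0.0, 0.0]), np.eye(2)
estimates = []
for k in range(n):
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    innovation = z[k] - H @ x                 # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innovation
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x.copy())

print("final position error:", abs(estimates[-1][0] - truth[-1, 0]))
```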

  14. High Resolution Representation and Simulation of Braiding Patterns

    DEFF Research Database (Denmark)

    Zwierzycki, Mateusz; Vestartas, Petras; Heinrich, Mary Katherine

    2017-01-01

    a contemporary architectural context. Within the flora robotica project, complex braided structures are a core element of the architectural vision, driving a need for generalized braid design modeling tools that can support fabrication. Due to limited availability of existing suitable tools, this interest...

  15. Updated vegetation information in high resolution WRF simulations

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.

    2013-01-01

    modify the energy distribution at the land surface. In weather and climate models it is important to represent the vegetation variability accurately to obtain reliable results. The weather research and forecasting (WRF) model uses green vegetation fraction (GVF) time series to represent vegetation...... seasonality. The GVF of each grid cell is additionally used to scale other parameters such as LAI, roughness, emissivity and albedo within predefined intervals. However, the default GVF used by WRF does not reflect recent climatic changes or changes in management practices since it was derived more than 20

  16. Scalable Algorithms for Large High-Resolution Terrain Data

    DEFF Research Database (Denmark)

    Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan

    2010-01-01

    In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...

  17. High-resolution X-ray diffraction studies of multilayers

    DEFF Research Database (Denmark)

    Christensen, Finn Erland; Hornstrup, Allan; Schnopper, H. W.

    1988-01-01

    High-resolution X-ray diffraction studies of the perfection of state-of-the-art multilayers are presented. Data were obtained using a triple-axis perfect-crystal X-ray diffractometer. Measurements reveal large-scale figure errors in the substrate. A high-resolution triple-axis setup is required...

  18. Achieving sensitive, high-resolution laser spectroscopy at CRIS

    Energy Technology Data Exchange (ETDEWEB)

    Groote, R. P. de [Instituut voor Kern- en Stralingsfysica, KU Leuven (Belgium); Lynch, K. M., E-mail: kara.marie.lynch@cern.ch [EP Department, CERN, ISOLDE (Switzerland); Wilkins, S. G. [The University of Manchester, School of Physics and Astronomy (United Kingdom); Collaboration: the CRIS collaboration

    2017-11-15

    The Collinear Resonance Ionization Spectroscopy (CRIS) experiment, located at the ISOLDE facility, has recently performed high-resolution laser spectroscopy, with linewidths down to 20 MHz. In this article, we present the modifications to the beam line and the newly-installed laser systems that have made sensitive, high-resolution measurements possible. Highlights of recent experimental campaigns are presented.

  19. High resolution UV spectroscopy and laser-focused nanofabrication

    NARCIS (Netherlands)

    Myszkiewicz, G.

    2005-01-01

    This thesis combines two techniques that are at first glance different: High Resolution Laser Induced Fluorescence Spectroscopy (LIF) of small aromatic molecules and Laser Focusing of atoms for Nanofabrication. The thesis starts with an introduction to the high resolution LIF technique of small aromatic

  20. High resolution NMR spectroscopy of synthetic polymers in bulk

    International Nuclear Information System (INIS)

    Komorski, R.A.

    1986-01-01

    The contents of this book are: Overview of high-resolution NMR of solid polymers; High-resolution NMR of glassy amorphous polymers; Carbon-13 solid-state NMR of semicrystalline polymers; Conformational analysis of polymers by solid-state NMR; High-resolution NMR studies of oriented polymers; High-resolution solid-state NMR of protons in polymers; and Deuterium NMR of solid polymers. This work brings together the various approaches for high-resolution NMR studies of bulk polymers into one volume. Heavy emphasis is, of course, given to ¹³C NMR studies both above and below Tg. Standard high-power pulse and wide-line techniques are not covered.

  1. A high resolution global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc

    2014-05-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet the current generation of global scale hydrological models (GHMs) does not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. The aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output of the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model, so that it can equally be expanded to data-poor environments. Based on our sensitivity analysis, in which we ran the model with various hydrogeological parameter settings, we observed that most variance in groundwater
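
    A minimal illustration of the kind of lateral groundwater flow such a model resolves is a steady-state head calculation on a small grid: a five-point stencil balances recharge against transmissivity between fixed-head (river) boundaries. The sketch below uses a uniform toy aquifer and simple Jacobi iteration; it is not the MODFLOW/PCR-GLOBWB configuration described above, and every parameter is invented.

```python
import numpy as np

# Steady-state 2D head on a uniform toy aquifer: T * (d2h/dx2 + d2h/dy2) + R = 0,
# solved with a Jacobi-iterated five-point stencil (all parameters invented).
nx, ny = 100, 100
dx = 100.0                    # cell size, m (a toy domain, far finer than 5 arc-minutes)
T = 500.0                     # transmissivity, m^2/day
recharge = 0.0005             # net recharge, m/day

h = np.zeros((ny, nx))        # hydraulic head, m
fixed = np.zeros_like(h, dtype=bool)
fixed[:, 0] = True            # "river" held at 0 m along the left edge
fixed[:, -1] = True           # and a second fixed-head boundary on the right

for iteration in range(50000):
    h_new = h.copy()
    # Average of the four neighbours plus a recharge source term.
    h_new[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1] + h[1:-1, 2:] + h[1:-1, :-2]
                                + recharge * dx**2 / T)
    h_new[fixed] = 0.0
    h_new[0, :] = h_new[1, :]        # no-flow (zero-gradient) top boundary
    h_new[-1, :] = h_new[-2, :]      # no-flow bottom boundary
    if np.max(np.abs(h_new - h)) < 1e-6:
        h = h_new
        break
    h = h_new

# The water table mounds between the two rivers; the analytic 1D solution peaks at
# recharge * L**2 / (8 * T), roughly 12 m for these numbers.
print("maximum simulated head (m):", round(float(h.max()), 2))
```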

  2. Modeling fire behavior on tropical islands with high-resolution weather data

    Science.gov (United States)

    John W. Benoit; Francis M. Fujioka; David R. Weise

    2009-01-01

    In this study, we consider fire behavior simulation in tropical island scenarios such as Hawaii and Puerto Rico. The development of a system to provide real-time fire behavior prediction in Hawaii is discussed. This involves obtaining fuels and topography information at a fine scale, as well as supplying daily high-resolution weather forecast data for the area of...

  3. High-resolution climate modelling of Antarctica and the Antarctic Peninsula

    NARCIS (Netherlands)

    van Wessem, J.M.|info:eu-repo/dai/nl/413533085

    2016-01-01

    In this thesis we have used a high-resolution regional atmospheric climate model (RACMO2.3) to simulate the present-day climate (1979-2014) of Antarctica and the Antarctic Peninsula. We have evaluated the model results with several observations, such as in situ surface energy balance (SEB)

  4. Defect testing of large aperture optics based on high resolution CCD camera

    International Nuclear Information System (INIS)

    Cheng Xiaofeng; Xu Xu; Zhang Lin; He Qun; Yuan Xiaodong; Jiang Xiaodong; Zheng Wanguo

    2009-01-01

    A fast method for inspecting defects in large aperture optics is introduced. With uniform illumination by an LED source at grazing incidence, the images of defects on the surface of and inside large aperture optics are enlarged due to scattering. The images of the defects were acquired by a high resolution CCD camera and a microscope, and the approximate mathematical relation between the viewed dimension and the real dimension of the defects was obtained by simulation. Thus the approximate real dimension and location of all defects can be calculated from the high resolution pictures. (authors)

  5. N-Body Simulations of Tidal Encounters between Stellar Systems

    Indian Academy of Sciences (India)

    Alladin, Saleh Mohammed (International Centre for Theoretical Physics; International School for Advanced Studies, Trieste)

    ... concentrate on how the tidal field of the primary changes the mass distribution, energy and angular momentum ...

  6. High resolution multi-scalar drought indices for Iberia

    Science.gov (United States)

    Russo, Ana; Gouveia, Célia; Trigo, Ricardo; Jerez, Sonia

    2014-05-01

    The Iberian Peninsula has been recurrently affected by drought episodes and by the associated adverse effects (Gouveia et al., 2009), ranging from severe water shortages to losses of hydroelectricity production, increased risk of forest fires, forest decline, and the triggering of land degradation and desertification processes. Moreover, Iberia is one of the areas most sensitive to current and future climate change and is nowadays considered a climate change hot spot, with a high probability of an increase in extreme events (Giorgi and Lionello, 2008). The spatial and temporal behavior of climatic droughts at different time scales was analyzed using spatially distributed time series of multi-scalar drought indicators, such as the Standardized Precipitation Evapotranspiration Index (SPEI) (Vicente-Serrano et al., 2010). This climatic drought index is based on the simultaneous use of precipitation and temperature fields, with the advantage of combining a multi-scalar character with the capacity to include the effects of temperature variability on drought assessment. Moreover, reanalysis data and the higher resolution hindcast databases obtained from them are valuable surrogates for the sparse observations and are widely used for in-depth characterizations of the present-day climate. Accordingly, this work aims to enhance the knowledge of high resolution drought patterns in the Iberian Peninsula, taking advantage of high-resolution (10 km) regional MM5 simulations of the recent past (1959-2007) over Iberia. It should be stressed that these high resolution meteorological fields (e.g. temperature, precipitation) have been validated for various purposes (Jerez et al., 2013). A detailed characterization of droughts since the 1960s using the 10 km resolution hindcast simulation was performed with the aim of exploring the conditions favoring drought onset, duration and ending, as well as the subsequent short, medium and long-term impacts affecting the environment and the
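
    A simplified version of a multi-scalar standardized index can be sketched as follows: aggregate the monthly climatic water balance (precipitation minus PET) over the chosen time scale and map each calendar month's values to standard-normal scores. The published SPEI fits a log-logistic distribution instead of the empirical plotting-position transform used here, and the monthly series below is synthetic, so this is only a schematic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Synthetic monthly series (mm): precipitation and potential evapotranspiration, 1959-2007.
n_months = 49 * 12
precip = rng.gamma(shape=2.0, scale=30.0, size=n_months)
pet = 60.0 + 25.0 * np.sin(2.0 * np.pi * (np.arange(n_months) % 12) / 12.0)

def standardized_index(balance, scale_months):
    """Aggregate the water balance over `scale_months` and map it to standard-normal scores.

    The real SPEI fits a log-logistic distribution per calendar month; here an empirical
    plotting-position transform per calendar month is used as a simple stand-in.
    """
    kernel = np.ones(scale_months)
    aggregated = np.convolve(balance, kernel, mode="full")[:len(balance)]
    aggregated[:scale_months - 1] = np.nan          # not enough history for these months
    index = np.full_like(aggregated, np.nan)
    for month in range(12):
        sel = np.arange(month, len(aggregated), 12)
        values = aggregated[sel]
        ok = ~np.isnan(values)
        ranks = stats.rankdata(values[ok])
        prob = (ranks - 0.44) / (ok.sum() + 0.12)    # Gringorten plotting position
        index[sel[ok]] = stats.norm.ppf(prob)
    return index

spei3 = standardized_index(precip - pet, scale_months=3)
spei12 = standardized_index(precip - pet, scale_months=12)
print("months in moderate-or-worse drought (3-month index < -1):", int(np.sum(spei3 < -1.0)))
```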

  7. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    Science.gov (United States)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.

  8. Distributed Modeling with Parflow using High Resolution LIDAR Data

    Science.gov (United States)

    Barnes, M.; Welty, C.; Miller, A. J.

    2012-12-01

    Urban landscapes provide a challenging domain for the application of distributed surface-subsurface hydrologic models. Engineered water infrastructure and altered topography influence surface and subsurface flow paths, yet these effects are difficult to quantify. In this work, a parallel, distributed watershed model (ParFlow) is used to simulate urban watersheds using spatial data at the meter and sub-meter scale. An approach using GRASS GIS (Geographic Resources Analysis Support System) is presented that incorporates these data to construct inputs for the ParFlow simulation. LIDAR topography provides the basis for the fully coupled overland flow simulation. Methods to address real discontinuities in the urban land-surface for use with the grid-based kinematic wave approximation used in ParFlow are presented. The spatial distribution of impervious surface is delineated accurately from high-resolution land cover data; hydrogeological properties are specified from literature values. An application is presented for part of the Dead Run subwatershed of the Gwynns Falls in Baltimore County, MD. The domain is approximately 3 square kilometers, and includes a highly impacted urban stream, a major freeway, and heterogeneous urban development represented at a 10-m horizontal resolution and 1-m vertical resolution. This resolution captures urban features such as building footprints and highways at an appropriate scale. The Dead Run domain provides an effective test case for ParFlow application at the fine scale in an urban environment. Preliminary model runs employ a homogeneous subsurface domain with no-flow boundaries. Initial results reflect the highly articulated topography of the road network and the combined influence of surface runoff from impervious surfaces and subsurface flux toward the channel network. Subsequent model runs will include comparisons of the coupled surface-subsurface response of alternative versions of the Dead Run domain with and without impervious

  9. Artificial terraced field extraction based on high resolution DEMs

    Science.gov (United States)

    Na, Jiaming; Yang, Xin; Xiong, Liyang; Tang, Guoan

    2017-04-01

    With the increase of human activities, artificial landforms have become one of the main terrain features, with special geographical and hydrological value. Terraced fields, as the most important artificial landscape of the Loess Plateau, play an important role in conserving soil and water. With the development of digital terrain analysis (DTA), there is a current and future need for a robust, repeatable and cost-effective research methodology for terraced fields. In this paper, a novel method using a bidirectional DEM shaded relief is proposed for terraced field identification based on a high resolution DEM, taking the Zhifanggou watershed, Shaanxi province, as the study area. Firstly, a 1 m DEM is obtained by low altitude aerial photogrammetry using an Unmanned Aerial Vehicle (UAV), and a 0.1 m DOM is also obtained as the test data. Then, positive and negative terrain segmentation is performed to acquire the terraced field areas. Finally, a bidirectional DEM shaded relief is simulated to extract the ridges of each terraced field stage. The method in this paper can extract not only polygon features of the terraced field areas but also line features of the terraced field ridges. The accuracy is 89.7% compared with the artificial interpretation result from the DOM. An additional experiment shows that this method has strong robustness as well as high accuracy.
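
    A shaded relief of the kind used above can be computed directly from DEM gradients, and combining two opposite illumination azimuths makes terrace risers stand out regardless of which side faces the light. The sketch below applies a standard Lambertian hillshade to a synthetic terraced slope; the azimuths, the toy DEM and the simple gradient-based riser mask are illustrative choices, not the authors' exact workflow.

```python
import numpy as np

def hillshade(dem, cell_size, azimuth_deg, altitude_deg):
    """Lambertian hillshade of a DEM for a given illumination azimuth and altitude."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    shaded = np.sin(alt) * np.cos(slope) + np.cos(alt) * np.sin(slope) * np.cos(az - aspect)
    return np.clip(shaded, 0.0, 1.0)

# Toy 1 m DEM of a terraced hillslope: a regional gradient plus 1.5 m risers every 20 m.
cell = 1.0
x = np.arange(0.0, 200.0, cell)
y = np.arange(0.0, 200.0, cell)
xx, yy = np.meshgrid(x, y)
dem = 0.15 * xx + 1.5 * np.floor(xx / 20.0)

# "Bidirectional" relief: average the shading from two opposite azimuths so that terrace
# risers stand out regardless of which side faces the light.
relief = 0.5 * (hillshade(dem, cell, 315.0, 45.0) + hillshade(dem, cell, 135.0, 45.0))
riser_mask = np.abs(np.gradient(dem, cell, axis=1)) > 0.5    # crude riser/ridge indicator
print("relief range:", float(relief.min()), float(relief.max()),
      "| riser cells:", int(riser_mask.sum()))
```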

  10. Coded aperture subreflector array for high resolution radar imaging

    Science.gov (United States)

    Lynch, Jonathan J.; Herrault, Florian; Kona, Keerti; Virbila, Gabriel; McGuire, Chuck; Wetzel, Mike; Fung, Helen; Prophet, Eric

    2017-05-01

    HRL Laboratories has been developing a new approach for high resolution radar imaging on stationary platforms. High angular resolution is achieved by operating at 235 GHz and using a scalable tile phased array architecture that has the potential to realize thousands of elements at an affordable cost. HRL utilizes aperture coding techniques to minimize the size and complexity of the RF electronics needed for beamforming, and wafer-level fabrication and integration allow tiles containing 1024 elements to be manufactured at reasonable cost. This paper describes the results of an initial feasibility study for HRL's Coded Aperture Subreflector Array (CASA) approach for a 1024-element micromachined antenna array with integrated single-bit phase shifters. Two candidate electronic device technologies were evaluated over the 170-260 GHz range: GaN HEMT transistors and GaAs Schottky diodes. Array structures utilizing silicon micromachining and die bonding were evaluated for etch and alignment accuracy. Finally, the overall array efficiency was estimated to be about 37% (not including spillover losses) using full-wave array simulations and measured device performance, which is a reasonable value at 235 GHz. Based on the measured data, we selected GaN HEMT devices operated passively with 0 V drain bias due to their extremely low DC power dissipation.

  11. Precision cosmology with time delay lenses: high resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao-Lei; Liao, Kai [Department of Astronomy, Beijing Normal University, 19 Xinjiekouwai Street, Beijing, 100875 (China); Treu, Tommaso; Agnello, Adriano [Department of Physics, University of California, Broida Hall, Santa Barbara, CA 93106 (United States); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Marshall, Philip J., E-mail: xlmeng919@gmail.com, E-mail: tt@astro.ucla.edu, E-mail: aagnello@physics.ucsb.edu, E-mail: mauger@ast.cam.ac.uk, E-mail: liaokai@mail.bnu.edu.cn, E-mail: dr.phil.marshall@gmail.com [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305 (United States)

    2015-09-01

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation

  12. Precision cosmology with time delay lenses: High resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao-Lei [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Treu, Tommaso [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Agnello, Adriano [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Auger, Matthew W. [Univ. of Cambridge, Cambridge (United Kingdom); Liao, Kai [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Marshall, Philip J. [Stanford Univ., Stanford, CA (United States)

    2015-09-28

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive

  13. Minimal coupling schemes in N-body reaction theory

    International Nuclear Information System (INIS)

    Picklesimer, A.; Tandy, P.C.; Thaler, R.M.

    1982-01-01

    A new derivation of the N-body equations of Bencze, Redish, and Sloan is obtained through the use of Watson-type multiple scattering techniques. The derivation establishes an intimate connection between these partition-labeled N-body equations and the particle-labeled Rosenberg equations. This result yields new insight into the implicit role of channel coupling in, and the minimal dimensionality of, the partition-labeled equations

  14. A Forward-Looking High-Resolution GPR System

    National Research Council Canada - National Science Library

    Kositsky, Joel; Milanfar, Peyman

    1999-01-01

    A high-resolution ground penetrating radar (GPR) system was designed to help define the optimal radar parameters needed for the efficient standoff detection of buried and surface-laid antitank mines...

  15. Impact of high resolution land surface initialization in Indian summer ...

    Indian Academy of Sciences (India)

    The direct impact of high resolution land surface initialization on the forecast bias in a regional climate model in recent years ...

  16. Textural Segmentation of High-Resolution Sidescan Sonar Images

    National Research Council Canada - National Science Library

    Kalcic, Maria; Bibee, Dale

    1995-01-01

    The high resolution of the 455 kHz sonar imagery also provides much information about the surficial bottom sediments; however, their acoustic scattering properties are not well understood at high frequencies...

  17. NOAA High-Resolution Sea Surface Temperature (SST) Analysis Products

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This archive covers two high resolution sea surface temperature (SST) analysis products developed using an optimum interpolation (OI) technique. The analyses have a...

  18. Hurricane Satellite (HURSAT) from Advanced Very High Resolution Radiometer (AVHRR)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Hurricane Satellite (HURSAT)-Advanced Very High Resolution Radiometer (AVHRR) is used to extend the HURSAT data set such that applying the Objective Dvorak technique...

  19. NanoComposite Polymers for High Resolution Near Infrared Detectors

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop nanocomposite materials with tuned refractive index in the near-infrared spectral range as an index-matched immersion lens for high resolution infrared...

  20. Methodology of high-resolution photography for mural condition database

    Science.gov (United States)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques for recording the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because it is possible to show both general views of mural paintings and detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been impediments for researchers and conservators. However, the recent development of graphic software makes its operation simpler and less expensive. In this paper, we suggest a new approach to making digital heritage inventories without special instruments, based on our recent research project at the Üzümlü church in Cappadocia, Turkey. This method enables us to build a high-resolution image database at low cost, in a short time, and with limited human resources.

  1. High resolution integral holography using Fourier ptychographic approach.

    Science.gov (United States)

    Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Liu, Delian

    2014-12-29

    An innovative approach is proposed for calculating high resolution computer-generated integral holograms by using the Fourier Ptychographic (FP) algorithm. The approach initializes a high resolution complex hologram with a random guess, and then stitches together low resolution multi-view images, synthesized from the elemental images captured by integral imaging (II), to recover the high resolution hologram through an iterative retrieval with FP constraints. This paper begins with an analysis of the principle of hologram synthesis from multi-projections, followed by an accurate determination of the constraints required in the Fourier ptychographic integral-holography (FPIH). Next, the procedure of the approach is described in detail. Finally, optical reconstructions are performed and the results are demonstrated. Theoretical analysis and experiments show that our proposed approach can reconstruct 3D scenes with high resolution.
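
    A schematic of the generic Fourier-ptychographic update that this kind of approach builds on: a random high-resolution guess whose spectral sub-regions are repeatedly constrained by measured low-resolution amplitudes. The array names, the tiling given by `offsets` and the iteration count are assumptions for illustration; the paper's specific FPIH constraints and the synthesis of low-resolution views from elemental images are not reproduced here.

```python
import numpy as np

def fp_retrieve(low_res_amps, offsets, hi_shape, n_iter=30):
    """Schematic Fourier-ptychographic retrieval: each low-resolution view is
    treated as the inverse FFT of a sub-region of the high-resolution spectrum;
    its measured amplitude is enforced while the current phase estimate is kept.

    low_res_amps -- list of measured low-res amplitude images, each (ny, nx)
    offsets      -- list of (row, col) corners of each sub-region in the spectrum
    hi_shape     -- shape of the high-resolution complex field to recover
    """
    ny, nx = low_res_amps[0].shape
    # Random initial guess for the high-resolution complex field.
    field = np.exp(1j * 2 * np.pi * np.random.rand(*hi_shape))
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    for _ in range(n_iter):
        for amp, (r, c) in zip(low_res_amps, offsets):
            patch = spectrum[r:r + ny, c:c + nx]
            view = np.fft.ifft2(np.fft.ifftshift(patch))
            view = amp * np.exp(1j * np.angle(view))      # amplitude constraint
            spectrum[r:r + ny, c:c + nx] = np.fft.fftshift(np.fft.fft2(view))
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```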

  2. High-resolution MRI in detecting subareolar breast abscess.

    Science.gov (United States)

    Fu, Peifen; Kurihara, Yasuyuki; Kanemaki, Yoshihide; Okamoto, Kyoko; Nakajima, Yasuo; Fukuda, Mamoru; Maeda, Ichiro

    2007-06-01

    Because subareolar breast abscess has a high recurrence rate, a more effective imaging technique is needed to comprehensively visualize the lesions and guide surgery. We performed a high-resolution MRI technique using a microscopy coil to reveal the characteristics and extent of subareolar breast abscess. High-resolution MRI has potential diagnostic value in subareolar breast abscess. This technique can be used to guide surgery with the aim of reducing the recurrence rate.

  3. High-resolution esophageal pressure topography for esophageal motility disorders

    OpenAIRE

    Hashem Fakhre Yaseri; Gholamreza Hamsi; Tayeb Ramim

    2016-01-01

    Background: High-resolution manometry (HRM) of the esophagus has become the main diagnostic test in the evaluation of esophageal motility disorders. The development of high-resolution manometry catheters and software displays of manometry recordings in color-coded pressure plots has changed the diagnostic assessment of esophageal disease. The first step of the Chicago classification described abnormal esophagogastric junction deglutitive relaxation. The latest classification system, proposed...

  4. Volumetric expiratory high-resolution CT of the lung

    International Nuclear Information System (INIS)

    Nishino, Mizuki; Hatabu, Hiroto

    2004-01-01

    We developed a volumetric expiratory high-resolution CT (HRCT) protocol that provides combined inspiratory and expiratory volumetric imaging of the lung without increasing radiation exposure, and conducted a preliminary feasibility assessment of this protocol to evaluate diffuse lung disease with small airway abnormalities. The volumetric expiratory high-resolution CT increased the detectability of the conducting airway to the areas of air trapping (P<0.0001), and added significant information about extent and distribution of air trapping (P<0.0001)

  5. Developing Visual Editors for High-Resolution Haptic Patterns

    DEFF Research Database (Denmark)

    Cuartielles, David; Göransson, Andreas; Olsson, Tony

    2012-01-01

    In this article we give an overview of our iterative work in developing visual editors for creating high resolution haptic patterns to be used in wearable, haptic feedback devices. During the past four years we have found the need to address the question of how to represent, construct and edit high resolution haptic patterns so that they translate naturally to the user's haptic experience. To solve this question we have developed and tested several visual editors...

  6. High resolution estimates of the corrosion risk for cultural heritage in Italy

    International Nuclear Information System (INIS)

    De Marco, Alessandra; Screpanti, Augusto; Mircea, Mihaela; Piersanti, Antonio; Proietti, Chiara; Fornasier, M. Francesca

    2017-01-01

    Air pollution plays a pivotal role in the deterioration of many materials used in buildings and cultural monuments, causing inestimable damage. This study aims to estimate the impacts of air pollution (SO2, HNO3, O3, PM10) and meteorological conditions (temperature, precipitation, relative humidity) on limestone, copper and bronze, based on the high resolution air quality database produced with the AMS-MINNI modelling system over the Italian territory for the period 2003–2010. A comparison between high resolution data (AMS-MINNI grid, 4 × 4 km) and low resolution data (EMEP grid, 50 × 50 km) has been performed. Our results show that corrosion levels for limestone, copper and bronze decreased in Italy from 2003 to 2010, in line with the decrease in pollutant concentrations. However, some problems related to air pollution persist, especially in Northern and Southern Italy. In particular, PM10 and HNO3 are considered the main contributors to limestone corrosion. Moreover, the high resolution data (AMS-MINNI) allowed the identification of risk areas that are not visible with the low resolution data (EMEP modelling system) in all considered years and, especially, in the limestone case. Consequently, high resolution air quality simulations can provide concrete benefits by informing effective national policy against corrosion risk for cultural heritage, also in the context of the climate changes that are strongly affecting the Mediterranean basin. - Highlights: • Air pollution plays a pivotal role in the deterioration of cultural materials. • Limestone, copper and bronze corrosion levels decreased in Italy from 2003 to 2010. • PM10 is considered the main contributor to limestone corrosion in Northern Italy. • HNO3 is considered the main contributor to limestone corrosion in all analyzed years. • High-resolution data are particularly useful for defining areas at risk of corrosion. - Importance of the high-resolution

  7. A combined N-body and hydrodynamic code for modeling disk galaxies

    International Nuclear Information System (INIS)

    Schroeder, M.C.

    1989-01-01

    A combined N-body and hydrodynamic computer code for the modeling of two-dimensional galaxies is described. The N-body portion of the code is used to calculate the motion of the particle component of a galaxy, while the hydrodynamics portion of the code is used to follow the motion and evolution of the fluid component. A complete description of the numerical methods used for each portion of the code is given. Additionally, the proof tests of the separate and combined portions of the code are presented and discussed. Finally, a discussion of the topics researched with the code and the results obtained is presented. These include: the measurement of stellar relaxation times in disk galaxy simulations; the effects of two-armed spiral perturbations on stable axisymmetric disks; the effects of the inclusion of an interstellar medium (ISM) on the stability of disk galaxies; and the effect of the inclusion of stellar evolution on disk galaxy simulations

  8. EMODnet High Resolution Seabed Mapping - further developing a high resolution digital bathymetry for European seas

    Science.gov (United States)

    Schaap, D.; Schmitt, T.

    2017-12-01

    Access to marine data is a key issue for the EU Marine Strategy Framework Directive and the EU Marine Knowledge 2020 agenda, and includes the European Marine Observation and Data Network (EMODnet) initiative. EMODnet aims at assembling European marine data, data products and metadata from diverse sources in a uniform way. The EMODnet Bathymetry project has developed Digital Terrain Models (DTM) for the European seas. These have been produced from survey and aggregated data sets that are indexed with metadata by adopting the SeaDataNet Catalogue services. SeaDataNet is a network of major oceanographic data centres around the European seas that manage, operate and further develop a pan-European infrastructure for marine and ocean data management. The latest EMODnet Bathymetry DTM release has a grid resolution of 1/8 arcminute and covers all European sea regions. Use has been made of circa 7800 gathered survey datasets and composite DTMs. Catalogues and the EMODnet DTM are published at the dedicated EMODnet Bathymetry portal, including a versatile DTM viewing and downloading service. At the end of December 2016 the Bathymetry project was succeeded by EMODnet High Resolution Seabed Mapping (HRSM). This project continues the gathering of bathymetric in-situ data sets, with extra effort on near-coastal waters and coastal zones. In addition, Satellite Derived Bathymetry data are included to fill gaps in coverage of the coastal zones. The extra data and composite DTMs will increase the coverage of the European seas and their coastlines, and provide input for producing an EMODnet DTM with a common resolution of 1/16 arcminute. The Bathymetry Viewing and Download service will be upgraded to provide a multi-resolution map and to include 3D viewing. The higher resolution DTMs will also be used to determine best estimates of the European coastline for a range of tidal levels (HAT, MHW, MSL, Chart Datum, LAT), thereby making use of a tidal model for Europe. Extra challenges will be `moving to the

  9. Speeding up N-body Calculations on Machines without Hardware Square Root

    Directory of Open Access Journals (Sweden)

    Alan H. Karp

    1992-01-01

    The most time-consuming part of an N-body simulation is computing the components of the accelerations of the particles. On most machines the slowest part of computing the acceleration is evaluating r^(−3/2), which is especially true on machines that do the square root in software. This note shows how to cut the time for this part of the calculation by a factor of 3 or more using standard Fortran.
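
    The note is about Fortran, but the underlying trick is language-independent: seed an estimate of s^(−1/2) from the floating-point exponent and mantissa, refine it with a few Newton iterations (no square root ever called), and cube it to obtain the (r·r)^(−3/2) factor needed for each pairwise acceleration. The sketch below is a hedged Python transcription of that idea, not Karp's code; the seed coefficients are illustrative.

```python
import math

def inv_sqrt(s, iters=3):
    """Approximate s**-0.5 without calling sqrt: seed from the binary
    exponent/mantissa split, then refine with Newton's iteration
    y <- y * (1.5 - 0.5 * s * y * y), which converges quadratically."""
    m, e = math.frexp(s)                      # s = m * 2**e with 0.5 <= m < 1
    y = 1.8284271 - 0.8284271 * m             # linear first guess for m**-0.5
    y = math.ldexp(y, -(e // 2))              # fold in the 2**(-e/2) factor
    if e % 2:
        y *= 0.7071067811865476               # extra 2**-0.5 for odd exponents
    for _ in range(iters):
        y *= 1.5 - 0.5 * s * y * y
    return y

def inv_r3(dx, dy, dz):
    """|r|**-3 = (r.r)**-1.5, the factor needed in each pairwise acceleration."""
    s = dx * dx + dy * dy + dz * dz
    u = inv_sqrt(s)
    return u * u * u

# Usage inside an acceleration loop (schematic):
#   a_i += G * m_j * (r_j - r_i) * inv_r3(*(r_j - r_i))
```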

  10. Non-instantaneous gas recycling and chemical evolution in N-body disk galaxies

    Czech Academy of Sciences Publication Activity Database

    Jungwiert, Bruno; Carraro, G.; Dalla Vecchia, C.

    2004-01-01

    Vol. 289, No. 3-4 (2004), pp. 441-444. ISSN 0004-640X. [From observations to self-consistent modelling of the ISM in galaxies. Porto, 03.09.2002-05.09.2002] R&D Projects: GA ČR GP202/01/D075. Institutional research plan: CEZ:AV0Z1003909. Keywords: N-body simulations * galaxy evolution * gas recycling. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 0.597, year: 2004

  11. Insights of meso and micro-scale processes for the Caxiuanã forest region from high resolution simulation

    Directory of Open Access Journals (Sweden)

    Renata Leitão da Conceição Mesquita

    2012-07-01

    Meteorological data and high resolution numerical simulations were used to estimate spatial fields in eastern Amazonia, where the Caxiuanã Forest and Bay are located. The study was performed for November 2006, when the COBRA-PARÁ field experiment took place. Analysis of images from the MODIS sensor on the Terra satellite shows the occurrence of various local phenomena, such as cloud streets and precipitating convective systems, and an important influence of the interfaces between the forest and the water surfaces. Numerical simulations for November 7, 2006 showed that the model represented the major meteorological processes well. The results show that Caxiuanã Bay has a significant impact on the adjacent meteorological fields, mainly through advection by the northeast winds, which induces colder canopy temperatures to the west of the bay and convective rainfall. High-resolution (LES) simulations produced spatial patterns of temperature and humidity aligned with the winds during the daytime, while at nighttime the patterns are influenced mainly by the presence of the bay. Correlations between the mid-level winds and the latent heat fluxes show a change from negative correlations in the early hours to positive correlations in the afternoon and early evening.

  12. Efficient nonparametric n -body force fields from machine learning

    Science.gov (United States)

    Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro

    2018-05-01

    We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
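
    A minimal sketch of the lowest-order (two-body) kernel idea: two local atomic environments are compared through squared-exponential similarities between their neighbour distances. The function names, the (n_neighbours, 3) environment layout and the length scale sigma are assumptions for illustration, not the paper's exact expressions.

```python
import numpy as np

def pair_distances(env):
    """Distances from the central atom to each neighbour in a local environment,
    given as an (n_neighbours, 3) array of relative positions."""
    return np.linalg.norm(env, axis=1)

def two_body_kernel(env_a, env_b, sigma=0.5):
    """Lowest-order (2-body) similarity between two environments: sum of
    squared-exponential comparisons over all pairs of neighbour distances."""
    ra = pair_distances(env_a)[:, None]
    rb = pair_distances(env_b)[None, :]
    return np.sum(np.exp(-(ra - rb) ** 2 / (2.0 * sigma ** 2)))

# A GP prediction from a database of environments with known energies would
# then use the Gram matrix K[i, j] = two_body_kernel(db[i], db[j]).
```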

  13. Seychelles Dome variability in a high resolution ocean model

    Science.gov (United States)

    Nyadjro, E. S.; Jensen, T.; Richman, J. G.; Shriver, J. F.

    2016-02-01

    The Seychelles-Chagos Thermocline Ridge (SCTR; 5°S-10°S, 50°E-80°E) in the tropical Southwest Indian Ocean (SWIO) has been recognized as a region of prominence with regard to climate variability in the Indian Ocean. Convective activity in this region has regional consequences, as it affects the socio-economic livelihood of people, especially in the countries along the Indian Ocean rim. The SCTR is characterized by a quasi-permanent upwelling that is often associated with thermocline shoaling. This upwelling affects sea surface temperature (SST) variability. We present results on the variability and dynamics of the SCTR as simulated by the 1/12° high resolution HYbrid Coordinate Ocean Model (HYCOM). It is observed that, locally, wind stress affects SST via Ekman pumping of cooler subsurface waters, mixing, and anomalous zonal advection. Remotely, wind stress curl in the eastern equatorial Indian Ocean generates westward-propagating Rossby waves that impact the depth of the thermocline, which in turn affects SST variability in the SCTR region. The variability of the contributions of these processes, especially with regard to the Indian Ocean Dipole (IOD), is further examined. In a typical positive IOD (PIOD) year, the net vertical velocity in the SCTR is negative year-round, as the easterlies along the region are intensified, leading to a strong positive curl. This vertical velocity is caused mainly by anomalous local Ekman downwelling (with a peak during September-November), in direct contrast to the climatology, in which local Ekman pumping is positive (upwelling-favorable) year-round. The anomalous remote contribution to the vertical velocity changes is minimal, especially during the developing and peak stages of PIOD events. In a typical negative IOD (NIOD) year, anomalous vertical velocity is positive almost year-round, with peaks in May and October. The remote contribution is positive, in contrast to the climatology and most of the PIOD years.
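
    The local Ekman-pumping contribution mentioned above can be diagnosed with the classical curl(τ)/(ρf) formula; the sketch below assumes wind-stress components on a regular grid with metric spacing, and the constants and variable names are placeholders rather than the diagnostics actually used with HYCOM in the study.

```python
import numpy as np

def ekman_pumping(tau_x, tau_y, lat_deg, dx, dy, rho=1025.0):
    """Classical Ekman pumping diagnostic w_E = curl(tau) / (rho * f);
    positive values indicate upwelling of cooler subsurface water.
    tau_x, tau_y -- wind stress components (N/m^2) on a regular grid
    lat_deg      -- latitude grid (degrees), away from the equator
    dx, dy       -- grid spacing in metres (x: axis 1, y: axis 0)."""
    omega = 7.2921e-5                                   # Earth's rotation rate
    f = 2.0 * omega * np.sin(np.radians(lat_deg))       # Coriolis parameter
    dtauy_dx = np.gradient(tau_y, dx, axis=1)
    dtaux_dy = np.gradient(tau_x, dy, axis=0)
    curl = dtauy_dx - dtaux_dy
    return curl / (rho * f)
```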

  14. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
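
    The paper's reconstruction uses total-variation regularization with Bregman iterations; as a simpler stand-in for the same principle (recovering an image from sub-sampled data with a sparsity constraint), the sketch below runs plain ISTA with soft thresholding on a toy problem. The measurement matrix A and all parameters are hypothetical, not the FP-scanner forward model.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    A -- (m, n) sub-sampled measurement matrix, y -- measured data (m,)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, lam * step)
    return x

# Toy example: recover a sparse signal from 4x fewer random measurements.
rng = np.random.default_rng(0)
n, m = 256, 64
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_rec = ista(A, A @ x_true, lam=0.02)
```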

  15. Analysis of smear in high-resolution remote sensing satellites

    Science.gov (United States)

    Wahballah, Walid A.; Bazan, Taher M.; El-Tohamy, Fawzy; Fathy, Mahmoud

    2016-10-01

    High-resolution remote sensing satellites (HRRSS) that use time delay and integration (TDI) CCDs have the potential to introduce large amounts of image smear. Clocking smear and velocity mismatch smear are two of the key factors inducing image smear. Clocking smear is caused by the discrete manner in which the charge is clocked in the TDI-CCDs. The relative motion between the HRRSS and the observed object requires that the image motion velocity be strictly synchronized with the velocity of the charge packet transfer (line rate) throughout the integration time. When imaging an object off-nadir, the image motion velocity changes, resulting in a mismatch between the image velocity and the CCD's line rate. A model for estimating the image motion velocity in HRRSS is derived. The influence of this velocity mismatch combined with clocking smear on the modulation transfer function (MTF) is investigated using MATLAB simulations. The analysis is performed for cross-track and along-track imaging with different satellite attitude angles and TDI steps. The results reveal that the velocity mismatch ratio and the number of TDI steps have a serious impact on the smear MTF; a velocity mismatch ratio of 2% degrades the smear MTF by 32% at the Nyquist frequency when the TDI steps change from 32 to 96. In addition, the results show that to achieve the requirement of MTF_smear ≥ 0.95, for TDI steps of 16 and 64, the allowable roll angles are 13.7° and 6.85° and the permissible pitch angles are no more than 9.6° and 4.8°, respectively.
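
    A common first-order model treats the accumulated drift during the TDI integration as a uniform linear smear of roughly N_TDI × (mismatch ratio) pixels, whose MTF is a sinc in spatial frequency. This is an illustrative approximation only, not the paper's full clocking-plus-mismatch MATLAB model, so it will not reproduce the quoted numbers exactly.

```python
import numpy as np

def smear_mtf(f_cyc_per_px, n_tdi, mismatch_ratio):
    """First-order smear model: the image drifts by about
    d = n_tdi * mismatch_ratio pixels during integration, and a uniform
    linear smear of length d has MTF(f) = |sinc(d * f)|
    (numpy's sinc is sin(pi x)/(pi x))."""
    d = n_tdi * mismatch_ratio
    return np.abs(np.sinc(d * f_cyc_per_px))

# Smear MTF at the Nyquist frequency (0.5 cycles/pixel) for a 2% mismatch.
f_nyq = 0.5
for n in (16, 32, 64, 96):
    print(n, smear_mtf(f_nyq, n, 0.02))
```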

  16. IXO and the Missing Baryons: The Need High Resolution Spectroscopy

    Science.gov (United States)

    Nicastro, Fabrizio

    2009-01-01

    About half of the baryons in the Universe are currently eluding detection. Hydrodynamical simulations of the formation of Large Scale Structures (LSSs) predict that these baryons reside, at low redshift, in a warm-hot phase of diffuse matter: the Warm-Hot Intergalactic Medium (WHIM). The WHIM has probably been progressively enriched with metals, during phases of intense starburst and AGN activity, up to possibly solar metallicity (Cen & Ostriker, 2006), and should therefore shine and/or absorb in the soft X-ray band, via electronic transitions of the most abundant metals. The importance of detecting and studying the WHIM lies not only in the possibility of finally making a complete census of all baryons in the Universe, but also in the possibility of (a) directly measuring the metallicity history of the Universe, and so investigating metal transport in the Universe and galaxy-IGM, AGN-IGM feedback mechanisms, (b) directly measuring the heating history of the Universe, and so understanding the process of LSS formation and shocks, and (c) performing cosmological parameter measurements through a 3D two-point angular correlation function analysis of the WHIM filaments. Detecting and studying the WHIM with the current X-ray instrumentation, however, is extremely challenging, because of the low sensitivity and resolution of the Chandra and XMM-Newton gratings, and the very low 'grasp' of all currently available imaging spectrometers. IXO, instead, thanks to its large grating effective area (>1000 cm² at 0.5 keV) and high spectral resolution (R>2500 at 0.5 keV), will be perfectly suited to attack the problem in a systematic way. Here we demonstrate that high resolution gratings are crucial for this kind of study and show that the IXO gratings will be able to detect more than 300-700 OVII WHIM filaments along about 70 lines of sight, in less than 0.7.

  17. High Resolution Atmospheric Modeling for Wind Energy Applications

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, M; Bulaevskaya, V; Glascoe, L; Singer, M

    2010-03-18

    The ability of the WRF atmospheric model to forecast wind speed over the Nysted wind park was investigated as a function of time. It was found that in the time period we considered (August 1-19, 2008), the model is able to predict wind speeds reasonably accurately up to 48 hours ahead, but that its forecast skill deteriorates rapidly after 48 hours. In addition, a preliminary analysis was carried out to investigate the impact of vertical grid resolution on the forecast skill. Our preliminary finding is that increasing vertical grid resolution does not have a significant impact on the forecast skill of the WRF model over the Nysted wind park during the period we considered. Additional simulations during this period, as well as during other time periods, will be run in order to validate the results presented here. Wind speed is a difficult parameter to forecast due to the interaction of large- and small-scale forcing. To accurately forecast the wind speed at a given location, the model must correctly forecast the movement and strength of synoptic systems, as well as the local influence of topography and land use on the wind speed. For example, small deviations in the forecast track or strength of a large-scale low pressure system can result in significant forecast errors for local wind speeds. The purpose of this study is to provide a preliminary baseline of high-resolution limited-area model forecast performance against observations from the Nysted wind park. Validating the numerical weather prediction model performance on past forecasts gives a reasonable measure of the expected forecast skill over the Nysted wind park. Also, since the Nysted wind park is over water and some distance from the influence of terrain, the impact of higher vertical grid resolution on wind speed forecast skill is also investigated.

  18. High-resolution subgrid models: background, grid generation, and implementation

    Science.gov (United States)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids, allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with those of a comparably highly resolved classical unstructured-grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 minutes on standard PC-like hardware. The subgrid technique is therefore a promising framework for performing accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
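
    The core of the subgrid idea can be illustrated in a few lines: the volume and wet fraction of a coarse cell are evaluated from the high-resolution bed elevations inside it, so the cell can be wet, partially wet or dry without any drying threshold. This is a hedged sketch of the principle, not the Casulli scheme or the Elbe forecast model; names and the example bathymetry are placeholders.

```python
import numpy as np

def cell_volume(eta, subgrid_bed, pixel_area):
    """Water volume in one coarse cell for free-surface level eta, integrated
    over the high-resolution bed elevations inside that cell. Pixels with bed
    above eta contribute nothing, so the cell can be wet, partially wet or dry."""
    depth = np.maximum(eta - subgrid_bed, 0.0)
    return pixel_area * depth.sum()

def wet_fraction(eta, subgrid_bed):
    """Fraction of the cell's subgrid pixels that are under water."""
    return np.mean(subgrid_bed < eta)

# Example: a coarse cell containing 50 x 50 one-metre subgrid pixels.
bed = np.random.uniform(-2.0, 1.0, size=(50, 50))
print(cell_volume(0.3, bed, pixel_area=1.0), wet_fraction(0.3, bed))
```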

  19. High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models

    Science.gov (United States)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David

    2014-12-01

    High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with near-surface meteorological fields derived from short-range forecasts, operational analyses, and observed temperature and humidity. Results show that the outputs of the surface model obtained in the present study have the potential to improve the near-surface atmospheric fields in extended-range LAM integrations.
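
    The relaxation ("nudging") of a prognostic surface field toward the offline reference can be written as a simple Newtonian term; the sketch below assumes a fixed e-folding timescale and placeholder fields, and is only meant to illustrate the form of the update, not the LAM's actual implementation.

```python
import numpy as np

def nudge(field, reference, dt, tau):
    """Newtonian relaxation: pull the prognostic surface field toward the
    reference analysis with an e-folding timescale tau (same units as dt)."""
    return field + (dt / tau) * (reference - field)

# Example: relax model soil moisture toward an offline analysis once per hour
# with a 5-day timescale (both arrays are hypothetical placeholders).
soil_moisture = np.full((10, 10), 0.25)
analysis = np.full((10, 10), 0.30)
soil_moisture = nudge(soil_moisture, analysis, dt=3600.0, tau=5 * 86400.0)
```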

  20. High resolution manometry findings in patients with esophageal epiphrenic diverticula.

    Science.gov (United States)

    Vicentine, Fernando P P; Herbella, Fernando A M; Silva, Luciana C; Patti, Marco G

    2011-12-01

    The pathophysiology of esophageal epiphrenic diverticula is still uncertain even though a concomitant motility disorder is found in the majority of patients in different series. High resolution manometry may allow detection of motor abnormalities in a higher number of patients with esophageal epiphrenic diverticula compared with conventional manometry. This study aims to evaluate the high resolution manometry findings in patients with esophageal epiphrenic diverticula. Nine individuals (mean age 63 ± 10 years, 4 females) with esophageal epiphrenic diverticula underwent high resolution manometry. A single diverticulum was observed in eight patients and multiple diverticula in one. Visual analysis of conventional tracings and color pressure plots for identification of segmental abnormalities was performed by two researchers experienced in high resolution manometry. Upper esophageal sphincter was normal in all patients. Esophageal body was abnormal in eight patients; lower esophageal sphincter was abnormal in seven patients. Named esophageal motility disorders were found in seven patients: achalasia in six, diffuse esophageal spasm in one. In one patient, a segmental hypercontractile zone was noticed with pressure of 196 mm Hg. High resolution manometry demonstrated motor abnormalities in all patients with esophageal epiphrenic diverticula.

  1. Numerical solutions of the N-body problem

    International Nuclear Information System (INIS)

    Marciniak, A.

    1985-01-01

    Devoted to the study of numerical methods for solving the general N-body problem and related problems, this volume starts with an overview of the conventional numerical methods for solving the initial value problem. The major part of the book contains original work and features a presentation of special numerical methods conserving the constants of motion in the general N-body problem and methods conserving the Jacobi constant in the problem of motion of N bodies in a rotating frame, as well as an analysis of the applications of both (conventional and special) kinds of methods for solving these problems. For all the methods considered, the author presents algorithms which are easily programmable in any computer language. Moreover, the author compares various methods and presents adequate numerical results. The appendix contains PL/I procedures for all the special methods conserving the constants of motion. 91 refs.; 35 figs.; 41 tabs
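
    The special methods described in this volume conserve the classical first integrals of the N-body problem; the sketch below only evaluates those integrals (total energy, linear momentum, angular momentum) for a given state, which is what one would monitor to verify such a conserving scheme. It is an illustration under that assumption, not one of the book's algorithms.

```python
import numpy as np

def constants_of_motion(m, r, v, G=1.0):
    """Classical first integrals of the general N-body problem:
    total energy, linear momentum and angular momentum.
    m -- (N,) masses; r, v -- (N, 3) positions and velocities."""
    kinetic = 0.5 * np.sum(m * np.sum(v * v, axis=1))
    potential = 0.0
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            potential -= G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
    momentum = (m[:, None] * v).sum(axis=0)
    ang_momentum = np.sum(np.cross(r, m[:, None] * v), axis=0)
    return kinetic + potential, momentum, ang_momentum

# A conserving integrator should keep these values (to round-off) along the
# orbit; monitoring their drift is the usual sanity check on an N-body scheme.
```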

  2. Refinement procedure for the image alignment in high-resolution electron tomography

    International Nuclear Information System (INIS)

    Houben, L.; Bar Sadan, M.

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is particularly the case for ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction-based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over a large tilt-angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with marker-based reference procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. -- Highlights: → Alignment procedure for electron tomography based on iterative tomogram contrast optimisation. → Marker-free, independent of object, little user interaction. → Accuracy competitive with fiducial marker methods and suited for high-resolution tomography.

  3. High resolution estimates of the corrosion risk for cultural heritage in Italy.

    Science.gov (United States)

    De Marco, Alessandra; Screpanti, Augusto; Mircea, Mihaela; Piersanti, Antonio; Proietti, Chiara; Fornasier, M Francesca

    2017-07-01

    Air pollution plays a pivotal role in the deterioration of many materials used in buildings and cultural monuments, causing inestimable damage. This study aims to estimate the impacts of air pollution (SO2, HNO3, O3, PM10) and meteorological conditions (temperature, precipitation, relative humidity) on limestone, copper and bronze, based on the high resolution air quality database produced with the AMS-MINNI modelling system over the Italian territory for the period 2003-2010. A comparison between high resolution data (AMS-MINNI grid, 4 × 4 km) and low resolution data (EMEP grid, 50 × 50 km) has been performed. Our results show that corrosion levels for limestone, copper and bronze decreased in Italy from 2003 to 2010, in line with the decrease in pollutant concentrations. However, some problems related to air pollution persist, especially in Northern and Southern Italy. In particular, PM10 and HNO3 are considered the main contributors to limestone corrosion. Moreover, the high resolution data (AMS-MINNI) allowed the identification of risk areas that are not visible with the low resolution data (EMEP modelling system) in all considered years and, especially, in the limestone case. Consequently, high resolution air quality simulations can provide concrete benefits by informing effective national policy against corrosion risk for cultural heritage, also in the context of the climate changes that are strongly affecting the Mediterranean basin. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Monte Carlo treatment planning and high-resolution alpha-track autoradiography for neutron capture therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zamenhof, R.G.; Lin, K.; Ziegelmiller, D.; Clement, S.; Lui, C.; Harling, O.K.

    Monte Carlo simulations of thermal neutron flux distributions in a mathematical head model have been compared to experimental measurements in a corresponding anthropomorphic gelatin-based head phantom irradiated by a thermal neutron beam as presently available at the MITR-II Research Reactor. Excellent agreement between the Monte Carlo calculations and the experimental measurements has encouraged us to employ the Monte Carlo simulation technique to approach treatment planning problems in neutron capture therapy. We have also implemented a high-resolution alpha-track autoradiography technique originally developed in our laboratory at MIT. Initial autoradiograms produced by this technique meet our expectations in terms of the high resolution available and the ability to etch tracks without concomitant destruction of stained tissue. Our preliminary results with computer-aided track distribution analysis indicate that this approach is very promising in being able to quantify boron distributions in tissue at the subcellular level with a minimum of operator effort.

  5. Data Driven Approach for High Resolution Population Distribution and Dynamics Models

    Energy Technology Data Exchange (ETDEWEB)

    Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Rose, Amy N [ORNL; Liu, Cheng [ORNL; Urban, Marie L [ORNL; Stewart, Robert N [ORNL

    2014-01-01

    High resolution population distribution data are vital for successfully addressing critical issues ranging from energy and socio-environmental research to public health to human security. Commonly available census population data are constrained in both space and time and do not capture population dynamics as functions of space and time. This imposes a significant limitation on the fidelity of event-based simulation models with sensitive space-time resolution. This paper describes ongoing development of high-resolution population distribution and dynamics models at Oak Ridge National Laboratory through spatial data integration and modeling with behavioral or activity-based mobility datasets for representing the temporal dynamics of population. The model is resolved at 1 km resolution globally and describes the U.S. population for nighttime and daytime at 90 m. Integration of such population data provides the opportunity to develop simulations and applications in critical infrastructure management from local to global scales.

  6. Compact and high-resolution optical orbital angular momentum sorter

    Directory of Open Access Journals (Sweden)

    Chenhao Wan

    2017-03-01

    A compact and high-resolution optical orbital angular momentum (OAM) sorter is proposed and demonstrated. The sorter comprises a quadratic fan-out mapper and a dual-phase corrector positioned in the pupil plane and the Fourier plane, respectively. The optical system is greatly simplified compared to previous demonstrations of OAM sorting, while the performance in resolution and efficiency is maintained. A folded configuration is set up using a single reflective spatial light modulator (SLM) to demonstrate the validity of the scheme. The two phase elements are implemented on the left and right halves of the SLM and connected by a right-angle prism. Experimental results demonstrate the high resolution of the compact OAM sorter, and the current limit in efficiency can be overcome by switching to transmissive SLMs and removing the beam splitters. This novel scheme paves the way for the miniaturization and integration of high-resolution OAM sorters.

  7. High-resolution SPECT for small-animal imaging

    International Nuclear Information System (INIS)

    Qi Yujin

    2006-01-01

    This article presents a brief overview of the development of high-resolution SPECT for small-animal imaging. A pinhole collimator has been used for high-resolution animal SPECT to provide better spatial resolution and detection efficiency than a parallel-hole collimator. The theory of the imaging characteristics of the pinhole collimator is presented and designs of the pinhole aperture are discussed. The detector technologies used for the development of small-animal SPECT and the recent advances are presented. The evolving trend in small-animal SPECT is toward multi-pinhole, multi-detector systems that provide both high resolution and high detection efficiency. (authors)

  8. High-resolution spectroscopy of gases for industrial applications

    DEFF Research Database (Denmark)

    Fateev, Alexander; Clausen, Sønnik

    High-resolution spectroscopy of gases is a powerful technique with various fundamental and practical applications: in situ simultaneous measurement of gas temperature and gas composition, radiative transfer modeling, validation of existing databases and development of new ones, etc. Existing databases (e.g. HITRAN, HITEMP or CDSD) can normally be used for absorption spectra calculations over limited temperature/pressure ranges. Therefore, experimental measurements of absorption/transmission spectra of gases (e.g. CO2, H2O or SO2) at high resolution and elevated temperatures are essential both for the analysis of complex experimental data and for further development of the databases. High-temperature gas cell facilities available at DTU Chemical Engineering are presented and described. The gas cells and high-resolution spectrometers allow us to perform high-quality reference measurements of gases relevant...

  9. 1024 matrix image reconstruction: usefulness in high resolution chest CT

    International Nuclear Information System (INIS)

    Jeong, Sun Young; Chung, Myung Jin; Chong, Se Min; Sung, Yon Mi; Lee, Kyung Soo

    2006-01-01

    We evaluated whether high resolution chest CT with a 1024 matrix has a significant advantage in image quality over a 512 matrix. Sets of 512 and 1024 matrix high resolution chest CT scans with both 0.625 mm and 1.25 mm slice thickness were obtained from 26 patients. Seventy locations that contained twenty-four low-density lesions without sharp boundaries, such as emphysema, and forty-six sharp linear densities, such as linear fibrosis, were selected; these were randomly displayed on a five-megapixel LCD monitor. All the images were masked for information concerning the matrix size and slice thickness. Two chest radiologists scored the image quality of each arrowed lesion as follows: (1) undistinguishable, (2) poorly distinguishable, (3) fairly distinguishable, (4) well visible and (5) excellently visible. The scores were compared with respect to matrix size, slice thickness and observer by using ANOVA tests. The mean and standard deviation of image quality were 3.09 (± .92) for the 0.625 mm x 512 matrix, 3.16 (± .84) for the 0.625 mm x 1024 matrix, 2.49 (± 1.02) for the 1.25 mm x 512 matrix, and 2.35 (± 1.02) for the 1.25 mm x 1024 matrix, respectively. The image quality on both matrices of the high resolution chest CT scans with a 0.625 mm slice thickness was significantly better than with the 1.25 mm slice thickness (p < 0.001). However, the image quality on the 1024 matrix high resolution chest CT scans was not significantly different from that on the 512 matrix scans (p = 0.678). The interobserver variation between the two observers was not significant (p = 0.691). We think that 1024 matrix image reconstruction for high resolution chest CT may not be clinically useful

  10. High resolution electron microscopy and electron diffraction of YBa2Cu3O(7-x)

    International Nuclear Information System (INIS)

    Krakow, W.; Shaw, T.M.

    1988-01-01

    Experimental high resolution electron micrographs and computer simulation experiments have been used to evaluate the visibility of the atomic constituents of YBa2Cu3O(7-x). In practice, the detection of oxygen has not been possible, in contradiction to the predictions of modelling of perfect crystalline material. Preliminary computer simulations of the electron diffraction patterns when oxygen vacancies are introduced on the Cu-O sheets separating the Ba layers show the diffuse streaks characteristic of short-range ordering. 7 references

  11. High Resolution Topography Analysis on Threading Edge Dislocations in 4H-SiC Epilayers

    International Nuclear Information System (INIS)

    Kamata, I.; Nagano, M.; Tsuchida, H.; Chen, Y.; Dudley, M.

    2009-01-01

    Threading edge dislocations (TEDs) in a 4H-SiC epitaxial layer are investigated using high-resolution synchrotron topography. Six types of TED image are confirmed to correspond to the Burgers vector directions by a comparison of computer simulated images and observed topography images in crystal boundaries. Using a mapping method, a wide spatial distribution of the six types of TED is examined in a quarter section of a 2-inch wafer.

  12. Development of a procedure to model high-resolution wind profiles from smoothed or low-frequency data

    Science.gov (United States)

    Camp, D. W.

    1977-01-01

    The derivation of simulated Jimsphere wind profiles from low-frequency rawinsonde data and a generated set of white noise data is presented. A computer program is developed to model high-resolution wind profiles based on the statistical properties of data from the Kennedy Space Center, Florida. Comparison of the measured Jimsphere data, the rawinsonde data, and the simulated profiles shows excellent agreement.
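
    A hedged sketch of the general idea: interpolate the low-frequency rawinsonde profile onto a fine altitude grid and superpose correlated noise to stand in for the small-scale structure. The AR(1) filter, noise amplitude and correlation length below are illustrative placeholders, not the Kennedy Space Center statistics used in the report.

```python
import numpy as np

def simulate_profile(z_coarse, u_coarse, dz=25.0, sigma=1.0,
                     corr_length=200.0, seed=0):
    """High-resolution wind profile = linear interpolation of the low-frequency
    profile plus AR(1)-filtered white noise for small-scale fluctuations
    (sigma in m/s and corr_length in m are placeholders)."""
    rng = np.random.default_rng(seed)
    z_fine = np.arange(z_coarse[0], z_coarse[-1] + dz, dz)
    u_smooth = np.interp(z_fine, z_coarse, u_coarse)
    phi = np.exp(-dz / corr_length)                     # AR(1) coefficient
    noise = np.empty_like(z_fine)
    noise[0] = rng.normal(0.0, sigma)
    for k in range(1, len(z_fine)):
        noise[k] = phi * noise[k - 1] + rng.normal(0.0, sigma * np.sqrt(1 - phi**2))
    return z_fine, u_smooth + noise

# Example with a hypothetical coarse profile (heights in m, wind speed in m/s).
z, u = simulate_profile(np.array([0., 1000., 2000., 4000.]),
                        np.array([5., 9., 12., 18.]))
```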

  13. High-resolution observations of the near-surface wind field over an isolated mountain and in a steep river canyon

    Science.gov (United States)

    B. W. Butler; N. S. Wagenbrenner; J. M. Forthofer; B. K. Lamb; K. S. Shannon; D. Finn; R. M. Eckman; K. Clawson; L. Bradshaw; P. Sopko; S. Beard; D. Jimenez; C. Wold; M. Vosburgh

    2015-01-01

    A number of numerical wind flow models have been developed for simulating wind flow at relatively fine spatial resolutions (e.g., 100 m); however, there are very limited observational data available for evaluating these high-resolution models. This study presents high-resolution surface wind data sets collected from an isolated mountain and a steep river canyon. The...

  14. High-resolution urban flood modelling - a joint probability approach

    Science.gov (United States)

    Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen

    2017-04-01

    (Divoky et al., 2005). Nevertheless, such events occur, and in Ireland alone there are several cases of serious damage due to flooding resulting from a combination of high sea water levels and river flows driven by the same meteorological conditions (e.g. Olbert et al. 2015). The November 2009 fluvial-coastal flooding of Cork City, which brought €100m in losses, was one such incident. This event was used by Olbert et al. (2015) to determine the processes controlling urban flooding and is further explored in this study to elaborate on coastal and fluvial flood mechanisms and their roles in controlling water levels. The objective of this research is to develop a methodology to assess the combined effect of multiple-source flooding on flood probability and severity in urban areas and to establish a set of conditions that dictate urban flooding due to extreme climatic events. These conditions broadly combine physical flood drivers (such as coastal and fluvial processes), their mechanisms, and thresholds defining flood severity. The two main physical processes controlling urban flooding, high sea water levels (coastal flooding) and high river flows (fluvial flooding), and the threshold values at which flooding is likely to occur, are considered in this study. The contributions of coastal and fluvial drivers to flooding and their impacts are assessed in a two-step process. The first step involves frequency analysis and extreme value statistical modelling of storm surges, tides and river flows and, ultimately, the application of a joint probability method to estimate joint exceedance return periods for combinations of surge, tide and river flow. In the second step, a numerical model of Cork Harbour, MSN_Flood, comprising a cascade of four nested high-resolution models, is used to simulate flood inundation under numerous hypothetical coastal and fluvial flood scenarios. The risk of flooding is quantified based on a range of physical aspects such as the extent and depth of inundation (Apel et al
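
    A minimal empirical version of the first step, assuming a paired series of event surge and flow maxima: count joint exceedances of two thresholds and convert that probability into a return period using an average event rate. Real applications fit extreme-value distributions and model the dependence explicitly; all names and numbers below are hypothetical.

```python
import numpy as np

def joint_return_period(surge, flow, surge_threshold, flow_threshold,
                        events_per_year):
    """Empirical joint exceedance: probability that surge and river flow exceed
    their thresholds in the same event, converted to a return period in years.
    surge, flow -- paired event series; events_per_year -- average event rate."""
    joint = (surge > surge_threshold) & (flow > flow_threshold)
    p = joint.mean()
    if p == 0:
        return np.inf
    return 1.0 / (p * events_per_year)

# Hypothetical example: 40 years of storm events, about 12 events per year.
rng = np.random.default_rng(1)
surge = rng.gumbel(1.0, 0.3, size=480)
flow = rng.gumbel(150.0, 40.0, size=480) + 50.0 * surge   # some dependence
print(joint_return_period(surge, flow, 1.8, 320.0, events_per_year=12))
```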

  15. N-body scattering solution in coordinate space

    International Nuclear Information System (INIS)

    Cheng-Guang, B.

    1986-01-01

    The Schroedinger equation is transformed into a set of coupled partial differential equations with hyper-variables as arguments, and a procedure is proposed for embedding the boundary conditions into the N-body scattering solution by means of a set of homogeneous linear algebraic equations.
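
    The record does not state which hyper-variables are used; for orientation, hyperspherical treatments of the N-body problem commonly replace the mass-scaled Jacobi vectors by a single hyperradius plus 3N − 4 hyperangles, for example (this convention is an assumption for illustration, not taken from the record):

```latex
% A common hyperspherical convention (assumed here for illustration):
% mass-scaled Jacobi vectors \xi_i and a single hyperradius \rho.
\rho^{2} = \sum_{i=1}^{N-1} \boldsymbol{\xi}_{i}^{\,2},
\qquad
\boldsymbol{\xi}_{i} = \sqrt{\frac{\mu_{i}}{m}}\,\boldsymbol{x}_{i}
```

    Here the x_i are Jacobi vectors, the mu_i their associated reduced masses and m an arbitrary reference mass; in such treatments the coupled partial differential equations arise from expanding the wave function in the hyperangular variables at each value of the hyperradius.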

  16. Reproducible high-resolution multispectral image acquisition in dermatology

    Science.gov (United States)

    Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir

    2015-07-01

    Multispectral image acquisitions are increasingly popular in dermatology, owing to their improved spectral resolution, which enables better tissue discrimination. Most applications, however, focus on restricted regions of interest and image only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging of large regions of interest.

  17. Dye laser light for high-resolution classical photography

    International Nuclear Information System (INIS)

    Geissler, K.K.

    1982-01-01

    The test run with the bubble chamber HOLEBC in October 1981 offered the opportunity of checking the usefulness of de-speckled dye laser light for illumination purposes in high-resolution classical dark field photography of small bubble chambers. (orig./HSI)

  18. High-resolution seismic imaging of the Sohagpur Gondwana basin ...

    Indian Academy of Sciences (India)

    The quality of the high-resolution seismic data depends mainly on the data ... metric rift geometry. Based on the ... Biswas S K 2003 Regional tectonic framework of the ... Sheth H C, Ray J S, Ray R, Vanderkluysen L, Mahoney J J, Kumar A ...

  19. Pulmonary Gaucher's disease: high-resolution computed tomographic features

    International Nuclear Information System (INIS)

    Tunaci, A.; Berkmen, Y.M.; Goekmen, E.

    1995-01-01

    CT findings in pulmonary Gaucher's disease have not been previously reported. The chest radiograph of a patient with biopsy-proven pulmonary involvement in type I Gaucher's disease showed linear and reticulo-nodular opacities. High-resolution CT demonstrated thickening of the interlobular septa and between four and six small nodules within the secondary lobules, each probably corresponding to an acinus. (orig.)

  20. High resolution techniques using scanning proton microprobe (SPM)

    International Nuclear Information System (INIS)

    Cholewa, M.; Saint, A.; Prawer, S.; Laird, J.S.; Legge, G.J.F.; Bardos, R.A.; Moorhead, G.F.; Taylor, G.N.; Stuart, S.A.; Howard, J.

    1994-01-01

    The very high resolution (down to 50 nm) achieved with low beam currents (fA) in a scanning ion microprobe has led to many nondestructive microanalysis techniques. This paper discusses recent developments and applications in the use of 3-D STIM (scanning transmission ion microscopy) tomography, channeling STIM and IBIC (ion beam induced charge). (orig.)

  1. Structure Identification in High-Resolution Transmission Electron Microscopic Images

    DEFF Research Database (Denmark)

    Vestergaard, Jacob Schack; Kling, Jens; Dahl, Anders Bjorholm

    2014-01-01

    A connection between microscopic structure and macroscopic properties is expected for almost all material systems. High-resolution transmission electron microscopy is a technique offering insight into the atomic structure, but the analysis of large image series can be time consuming. The present ...

  2. Duchenne muscular dystrophy: High-resolution melting curve ...

    African Journals Online (AJOL)

    Duchenne muscular dystrophy: High-resolution melting curve analysis as an affordable diagnostic mutation scanning tool in a South African cohort. ... Genetic screening for D/BMD in South Africa currently includes multiple ligase-dependent probe amplification (MLPA) for exonic deletions and duplications and linkage ...

  3. Role of land state in a high resolution mesoscale model

    Indian Academy of Sciences (India)

    ... Land surface characteristics; high resolution mesoscale model; Uttarakhand ... to predict realistic location, timing, amount, intensity and distribution of rainfall ... region embedded within two low-pressure centers over the Arabian Sea and Bay of Bengal.

  4. A high resolution powder diffractometer using focusing optics

    Indian Academy of Sciences (India)

    In this paper, we describe the design, construction and performance of a new high resolution neutron powder diffractometer that has been installed at the Dhruva reactor, Trombay, India. The instrument employs novel design concepts like the use of bent, perfect crystal monochromator ...

  5. Application of high resolution SNP arrays in patients with congenital ...

    Indian Academy of Sciences (India)

    clinical experience in implementing whole-genome high-resolution SNP arrays to investigate 33 patients with syndromic and ... Online Mendelian Inheritance in Man database (OMIM, ... of damaged mitochondria through either autophagy or mitophagy ... malformations: associations with maternal and infant characteristics in a ...

  6. Workshop on high-resolution, large-acceptance spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Zeidman, B. (ed.)

    1981-01-01

    The purpose of the Workshop on High-Resolution, Large-Acceptance Spectrometers was to provide a means for exchange of information among those actively engaged in the design and construction of these new spectrometers. Thirty-seven papers were prepared for the data base.

  7. Yeast expression proteomics by high-resolution mass spectrometry

    DEFF Research Database (Denmark)

    Walther, Tobias C; Olsen, Jesper Velgaard; Mann, Matthias

    2010-01-01

    -translational controls contribute substantially to the regulation of protein abundance, for example in the heat-shock stress response. The development of new sample preparation methods, high-resolution mass spectrometry and novel bioinformatic tools closes this gap and allows global quantitation of the yeast proteome under different...

  8. High resolution satellite imagery : from spies to pipeline management

    Energy Technology Data Exchange (ETDEWEB)

    Adam, S. [Canadian Geomatic Solutions Ltd., Calgary, AB (Canada); Farrell, M. [TransCanada Transmission, Calgary, AB (Canada)

    2000-07-01

    The launch of Space Imaging's IKONOS satellite in September 1999 has opened the door for corridor applications. The technology has been successfully implemented by TransCanada PipeLines in mapping over 1500 km of their mainline. IKONOS is the world's first commercial high resolution satellite, collecting 1-meter panchromatic (black/white) and 4-meter multi-spectral data. Its use is regulated by the U.S. government. It is currently the best source of high resolution satellite image data. Other sources include the Indian Space Agency's IRS-1 C/D satellite and the Russian SPIN-2, which provides less reliable coverage. In addition, two more high resolution satellites may be launched this year, providing imagery every day of the year. IKONOS scenes as narrow as 5 km can be purchased. TransCanada conducted a pilot study to determine whether high resolution satellite imagery is as effective as ortho-photos for identifying population structures within a buffer of TransCanada's east line right-of-way. The study examined three unique segments in which residential, commercial, industrial and public features were compared. It was determined that IKONOS imagery is as good as digital ortho-photos for updating structures from low- to very high-density areas. The satellite imagery was also logistically easier than ortho-photos to acquire. This will become even more evident as the IKONOS image archive grows. 4 tabs., 3 figs.
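
    As an illustration of the kind of corridor/buffer check described (not TransCanada's or Canadian Geomatic Solutions' actual workflow), a minimal sketch using the open-source shapely library is shown below. The centreline coordinates, structure locations and the 200 m buffer width are all hypothetical, and a projected coordinate system in metres is assumed.

```python
# Illustrative buffer check: flag which digitized structure points fall within
# a fixed-width corridor around a pipeline centreline. All data are invented.
from shapely.geometry import LineString, Point

pipeline = LineString([(0, 0), (5000, 200), (12000, -150)])   # centreline, m
corridor = pipeline.buffer(200.0)                             # 200 m buffer polygon

structures = {
    "house_01": Point(2500, 90),
    "shop_02": Point(7000, 900),
    "barn_03": Point(11800, -80),
}

for name, pt in structures.items():
    inside = corridor.contains(pt)
    print(f"{name}: {'inside' if inside else 'outside'} corridor "
          f"(distance to centreline {pipeline.distance(pt):.0f} m)")
```

    In a real right-of-way study the structure points would be digitized from the IKONOS imagery or the ortho-photos, so the comparison in the pilot study amounts to running the same buffer query against the two data sources.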

  9. Novel techniques in VUV high-resolution spectroscopy

    NARCIS (Netherlands)

    Ubachs, W.M.G.; Salumbides, E.J.; Eikema, K.S.E.; de Oliveira, N.; Nahon, L.

    2014-01-01

    Novel VUV sources and techniques for VUV spectroscopy are reviewed. Laser-based VUV sources have been developed via non-linear upconversion of laser pulses in the nanosecond (ns), the picosecond (ps), and femtosecond (fs) domain, and are applied in high-resolution gas phase spectroscopic studies.

  10. High resolution X-ray diffraction studies on unirradiated

    Indian Academy of Sciences (India)

    A high-resolution X-ray diffraction technique, employing a three-crystal monochromator–collimator combination, is used to study the irradiation-induced defects in flux-grown Sr-hexaferrite crystals irradiated with 50 MeV Li³⁺ ion beams at room temperature to a fluence of 1 × 10¹⁴ ions/cm². The diffraction curves of the ...

  11. High resolution STEM of quantum dots and quantum wires

    DEFF Research Database (Denmark)

    Kadkhodazadeh, Shima

    2013-01-01

    This article reviews the application of high resolution scanning transmission electron microscopy (STEM) to semiconductor quantum dots (QDs) and quantum wires (QWRs). Different imaging and analytical techniques in STEM are introduced and key examples of their application to QDs and QWRs...

  12. Pattern of interstitial lung disease detected by high resolution ...

    African Journals Online (AJOL)

    Background: Diffuse lung diseases constitute a major cause of morbidity and mortality worldwide. High Resolution Computed Tomography (HRCT) is the recommended imaging technique in the diagnosis, assessment and follow-up of these diseases. Objectives: To describe the pattern of HRCT findings in patients with ...

  13. A multi-channel high-resolution time recorder system

    International Nuclear Information System (INIS)

    Zhang Lingyun; Yang Xiaojun; Song Kezhu; Wang Yanfang

    2004-01-01

    This paper introduces a multi-channel, high-speed time recorder system originally designed for quantum cryptography experiments. The novelty of the system is that all the hardware logic is implemented in a single FPGA. The system achieves several desirable features, such as simplicity, high resolution and high processing speed. (authors)
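
    The abstract does not describe how the FPGA reaches its time resolution; one common scheme in such recorders is a coarse counter running on the system clock combined with a fine interpolator that subdivides each clock period. The sketch below only illustrates that arithmetic; the clock frequency and number of fine bins are assumptions, not values from the paper.

```python
# Conceptual sketch of coarse-counter + fine-interpolator timestamping.
# Not the paper's FPGA design; all numbers are assumed for illustration.
CLK_PERIOD_NS = 10.0                       # 100 MHz coarse clock (assumed)
FINE_BINS = 64                             # fine bins per clock period (assumed)
FINE_LSB_NS = CLK_PERIOD_NS / FINE_BINS    # ~0.156 ns effective resolution

def timestamp_ns(coarse_count: int, fine_bin: int) -> float:
    """Reconstruct an event time from the coarse counter value and fine bin."""
    return coarse_count * CLK_PERIOD_NS + fine_bin * FINE_LSB_NS

# two events recorded on different channels within the same coarse period
t1 = timestamp_ns(coarse_count=1_000_000, fine_bin=12)
t2 = timestamp_ns(coarse_count=1_000_000, fine_bin=45)
print(f"interval = {t2 - t1:.3f} ns")      # 33 bins * 0.15625 ns = 5.156 ns
```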

  14. Remote parallel rendering for high-resolution tiled display walls

    KAUST Repository

    Nachbaur, Daniel

    2014-11-01

    © 2014 IEEE. We present a complete, robust and simple to use hardware and software stack delivering remote parallel rendering of complex geometrical and volumetric models to high resolution tiled display walls in a production environment. We describe the setup and configuration, present preliminary benchmarks showing interactive framerates, and describe our contributions for a seamless integration of all the software components.

  16. Human enamel structure studied by high resolution electron microscopy

    International Nuclear Information System (INIS)

    Wen, S.L.

    1989-01-01

    Human enamel structural features are characterized by high resolution electron microscopy. Human enamel consists of polycrystals with a structure similar to Ca₁₀(PO₄)₆(OH)₂. This article describes the structural features of human enamel crystal at the atomic and nanometer level. Besides the structural description, a great number of high resolution images are included. Research into the carious process in human enamel is very important for human beings. This article first describes the initiation of caries in enamel crystal at the atomic and unit-cell level and then describes the further steps of caries involving structural and chemical demineralization. This demineralization is, in fact, the origin of caries in human enamel. The remineralization of carious areas in human enamel has drawn more and more attention as its potential application is realized. This process is revealed in detail by high resolution electron microscopy in this article. In addition, the radiation effects on the structure of human enamel are also characterized by high resolution electron microscopy. In order to reveal this phenomenon clearly, a great number of electron micrographs are shown, and a physical mechanism is proposed. 26 references

  17. High resolution and high speed positron emission tomography data acquisition

    International Nuclear Information System (INIS)

    Burgiss, S.G.; Byars, L.G.; Jones, W.F.; Casey, M.E.

    1986-01-01

    High resolution positron emission tomography (PET) requires many detectors. Data collection systems for PET must therefore have high data rates, wide data paths, and large memories in which to histogram the events. This design uses the VMEbus to provide these features cost-effectively. It supports several modes of operation, including real-time sorting, list-mode data storage, and replay of stored list-mode data.
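
    A toy sketch of the two acquisition modes mentioned (real-time sorting into a histogram versus list-mode storage with later replay) is given below; the event format and array dimensions are invented and do not reflect the VMEbus system's actual data layout.

```python
# Toy illustration of real-time histogramming vs. list-mode storage and replay.
# Event layout and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_events = 100_000
# each coincidence event tagged by (radial bin, projection-angle bin)
events = np.column_stack([
    rng.integers(0, 128, n_events),    # radial position bin
    rng.integers(0, 96, n_events),     # projection angle bin
])

# Mode 1: real-time sorting -- accumulate events directly into a histogram
histo = np.zeros((128, 96), dtype=np.uint32)
np.add.at(histo, (events[:, 0], events[:, 1]), 1)

# Mode 2: list-mode storage, then replay -- store raw events, histogram later
np.save("listmode_events.npy", events)
replayed = np.load("listmode_events.npy")
histo_replay = np.zeros_like(histo)
np.add.at(histo_replay, (replayed[:, 0], replayed[:, 1]), 1)

assert np.array_equal(histo, histo_replay)   # replay reproduces the online sort
print("total counts:", histo.sum())
```

    The trade-off the abstract points to is visible even in this toy: the histogram is compact but discards event order, while the list-mode file keeps every event and can be re-sorted (replayed) with different binning after the acquisition.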

  18. High resolution wind turbine wake measurements with a scanning lidar

    DEFF Research Database (Denmark)

    Herges, T. G.; Maniaci, D. C.; Naughton, B. T.

    2017-01-01

    High-resolution lidar wake measurements are part of an ongoing field campaign being conducted at the Scaled Wind Farm Technology facility by Sandia National Laboratories and the National Renewable Energy Laboratory using a customized scanning lidar from the Technical University of Denmark. One...

  19. High resolution mid-infrared spectroscopy based on frequency upconversion

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Hu, Qi; Tidemand-Lichtenberg, Peter

    2013-01-01

    signals can be analyzed. The obtainable frequency resolution is usually in the nm range, whereas sub-nm resolution is preferred in many applications, such as gas spectroscopy. In this work we demonstrate how to obtain sub-nm resolution when using upconversion. In the presented realization one object point ... high-resolution spectral performance by observing emission from hot water vapor in a butane gas burner.

  20. Systematic high-resolution assessment of global hydropower potential

    NARCIS (Netherlands)

    Hoes, Olivier A C; Meijer, Lourens J J; Van Der Ent, Ruud J.|info:eu-repo/dai/nl/364164794; Van De Giesen, Nick C.

    2017-01-01

    Population growth, increasing energy demand and the depletion of fossil fuel reserves necessitate a search for sustainable alternatives for electricity generation. Hydropower could replace a large part of the contribution of gas and oil to the present energy mix. However, previous high-resolution

  1. High-Resolution Geologic Mapping of Martian Terraced Fan Deposits

    Science.gov (United States)

    Wolak, J. M.; Patterson, A. B.; Smith, S. D.; Robbins, N. N.

    2018-06-01

    This abstract documents our initial progress (year 1) mapping terraced fan features on Mars. Our objective is to investigate the role of fluids during fan formation and produce the first high-resolution geologic map (1:18k) of a terraced fan.

  2. High-resolution axial MR imaging of tibial stress injuries

    Directory of Open Access Journals (Sweden)

    Mammoto Takeo

    2012-05-01

    Purpose: To evaluate the relative involvement of tibial stress injuries using high-resolution axial MR imaging and the correlation with MR and radiographic images. Methods: A total of 33 patients with exercise-induced tibial pain were evaluated. All patients underwent radiography and high-resolution axial MR imaging. Radiographs were taken at initial presentation and 4 weeks later. High-resolution MR axial images were obtained using a microscopy surface coil with a 60 × 60 mm field of view on a 1.5 T MR unit. All images were evaluated for abnormal signals of the periosteum, cortex and bone marrow. Results: Nineteen patients showed no periosteal reaction at initial and follow-up radiographs. MR imaging showed abnormal signals in the periosteal tissue and partially abnormal signals in the bone marrow. In 7 patients, periosteal reaction was not seen at the initial radiograph but was detected at the follow-up radiograph. MR imaging showed abnormal signals in the periosteal tissue and the entire bone marrow. Abnormal signals in the cortex were found in 6 patients. The remaining 7 showed periosteal reactions at the initial radiograph. MR imaging showed abnormal signals in the periosteal tissue in 6 patients. Abnormal signals were seen in the partial and entire bone marrow in 4 and 3 patients, respectively. Conclusions: Bone marrow abnormalities on high-resolution axial MR imaging were related to periosteal reactions at follow-up radiograph. Bone marrow abnormalities might predict later periosteal reactions, suggesting shin splints or stress fractures. High-resolution axial MR imaging is useful in the early discrimination of tibial stress injuries.

  4. Performance evaluation of a high resolution dedicated breast PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    García Hernández, Trinitat, E-mail: mtrinitat@eresa.com; Vicedo González, Aurora; Brualla González, Luis; Granero Cabañero, Domingo [Department of Medical Physics, ERESA, Hospital General Universitario, Valencia 46014 (Spain); Ferrer Rebolleda, Jose; Sánchez Jurado, Raúl; Puig Cozar Santiago, Maria del [Department of Nuclear Medicine, ERESA, Hospital General Universitario, Valencia 46014 (Spain); Roselló Ferrando, Joan [Department of Medical Physics, ERESA, Hospital General Universitario, Valencia 46014 (Spain); Department of Physiology, University of Valencia, Valencia 46010 (Spain)

    2016-05-15

    Purpose: Early-stage breast cancers may not be visible on a whole-body PET scan. To overcome whole-body PET limitations, several dedicated breast positron emission tomography (DbPET) systems have recently emerged with the aim of improving spatial resolution. In this work the authors evaluate the performance of a high resolution dedicated breast PET scanner (Mammi-PET, Oncovision). Methods: Global status, uniformity, sensitivity, energy, and spatial resolution were measured. Spheres of different sizes (2.5, 4, 5, and 6 mm diameter) and various ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) activity concentrations were randomly inserted in a gelatine breast phantom developed at our institution. Several lesion-to-background ratios (LBR) were simulated: 5:1, 10:1, 20:1, 30:1, and 50:1. Images were reconstructed using different voxel sizes. The ability of experienced reporters to detect spheres was tested as a function of acquisition time, LBR, sphere size, and matrix reconstruction voxel size. For comparison, phantoms were scanned in the DbPET camera and in a wh