WorldWideScience

Sample records for highly parameterized global

  1. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    Science.gov (United States)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another, with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative: the order in which processes are called affects the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a substantial impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as large as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
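    The noncommutativity of sequential splitting described above can be seen in a toy sketch: two simplified "process" updates applied in opposite orders yield different final states. The operators below (a relaxation toward a warm reference and a linearized cooling) are invented purely for illustration and are not the actual E3SM parameterizations.

```python
def convection(T, dt=1.0):
    # relax temperature toward a warm reference state (illustrative only)
    return T + dt * 0.5 * (300.0 - T)

def radiation(T, dt=1.0):
    # simple linearized radiative cooling (illustrative only)
    return T - dt * 0.1 * (T - 250.0)

T0 = 280.0
a = radiation(convection(T0))   # convection first, then radiation
b = convection(radiation(T0))   # radiation first, then convection
print(a, b, abs(a - b))         # the two orderings disagree
```

Because each operator acts on the state left behind by the previous one, swapping the call order changes the answer, which is exactly why process ordering matters in sequentially split models.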

  2. Parameterized neural networks for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)

    2016-05-15

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)
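    The core idea of the parameterized classifier is simply to append the theory parameter to the measured features, so a single network can be evaluated at any parameter value, including values absent from training. The sketch below shows only this interface with a tiny fixed-weight network; the weights are random placeholders (a real application would train them), and the feature values and mass points are hypothetical.

```python
import numpy as np

# Random placeholder weights for a 2-layer network taking
# 4 measured features plus 1 physics parameter (a particle mass).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0

def classifier(features, mass):
    x = np.append(features, mass)                # augment inputs with the parameter
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # signal probability in (0, 1)

event = np.array([0.3, -1.2, 0.7, 2.1])
# One network, smoothly evaluable across a range of masses:
scores = [classifier(event, m) for m in (100.0, 125.0, 150.0)]
print(scores)
```

Training such a network on events generated at several discrete masses lets it interpolate smoothly to intermediate masses, replacing a set of single-mass classifiers.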

  3. Parameterized neural networks for high-energy physics

    International Nuclear Information System (INIS)

    Baldi, Pierre; Sadowski, Peter; Cranmer, Kyle; Faucett, Taylor; Whiteson, Daniel

    2016-01-01

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)

  4. Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization

    Science.gov (United States)

    Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.

    2018-01-01

    In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. First, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After generation of the topology information of the quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a high-quality patch partition. Finally, after the imposition of C1/G1-continuity constraints on the interfaces of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples, which are compared to results obtained by the skeleton-based parameterization approach.

  5. Improved parameterization of managed grassland in a global process-based vegetation model using Bayesian statistics

    Science.gov (United States)

    Rolinski, S.; Müller, C.; Lotze-Campen, H.; Bondeau, A.

    2010-12-01

    More than a quarter of the Earth's land surface is covered by grassland, which also makes up the major part (~70%) of the agricultural area. Most of this area is used for livestock production at varying degrees of intensity. The dynamic global vegetation model LPJmL (Sitch et al., Global Change Biology, 2003; Bondeau et al., Global Change Biology, 2007) is one of the few process-based models that simulate biomass production on managed grasslands at the global scale. The implementation of managed grasslands and its evaluation have received little attention so far, as reference data on grassland productivity are scarce and the definition of grassland extent and usage is highly uncertain. However, grassland productivity applies to large areas and strongly influences global estimates of carbon and water budgets, and should thus be improved. Plants are implemented in LPJmL in an aggregated form as plant functional types, assuming that processes concerning carbon and water fluxes are quite similar between species of the same type. Therefore, the parameterization of a functional type is possible with parameters in a physiologically meaningful range of values. The actual choice of the parameter values from the possible and reasonable phase space should satisfy the condition of the best fit between model results and measured data. In order to improve the parameterization of managed grass, we follow a combined procedure using model output and measured data of carbon and water fluxes. By comparing carbon and water fluxes simultaneously, we expect well-balanced refinements and avoid over-tuning the model in only one direction. The comparison of annual biomass from grassland with per-country data from the Food and Agriculture Organization of the United Nations (FAO) provides an overview of the order of magnitude and identifies deviations. The comparison of daily net primary productivity, soil respiration and water fluxes at specific sites (FluxNet data) provides

  6. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004) in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.

  7. A new simple parameterization of daily clear-sky global solar radiation including horizon effects

    International Nuclear Information System (INIS)

    Lopez, Gabriel; Javier Batlles, F.; Tovar-Pescador, Joaquin

    2007-01-01

    Estimation of clear-sky global solar radiation is usually an important preliminary step in calculating global solar radiation under all sky conditions; it is, for instance, a common procedure when deriving incoming solar radiation from remote sensing or digital elevation models. In this work, we present a new model to calculate daily values of clear-sky global solar irradiation. The main goal is a simple parameterization in terms of atmospheric temperature and relative humidity, Angstroem's turbidity coefficient, ground albedo and site elevation, including a factor to account for horizon obstructions. This allows us to obtain estimates even when a free horizon is not present, as is the case in mountainous locations. Comparisons of calculated daily values with measured data show that this model provides accurate estimates using either daily or mean monthly values of the input parameters. The new model is also shown to improve daily estimates over those obtained using the clear-sky model from the European Solar Radiation Atlas and other accurate parameterized daily irradiation models. The introduction of Angstroem's turbidity coefficient and ground albedo should allow us to use the increasing worldwide aerosol information available and to consider sites affected by snow cover in an easy and fast way. In addition, the proposed model is intended to be a useful tool to select clear-sky conditions

  8. Potential decadal predictability and its sensitivity to sea ice albedo parameterization in a global coupled model

    Energy Technology Data Exchange (ETDEWEB)

    Koenigk, Torben; Caian, Mihaela; Doescher, Ralf; Wyser, Klaus [Swedish Meteorological and Hydrological Institute, Rossby Centre, Norrkoeping (Sweden); Koenig Beatty, Christof [Universite Catholique de Louvain, Louvain-la-Neuve (Belgium)

    2012-06-15

    Decadal prediction is one focus of the upcoming 5th IPCC Assessment Report. To be able to interpret the results and to further improve the decadal predictions, it is important to investigate the potential predictability in the participating climate models. This study analyzes the upper limit of climate predictability on decadal time scales and its dependency on sea ice albedo parameterization by performing two perfect ensemble experiments with the global coupled climate model EC-Earth. In the first experiment, the standard albedo formulation of EC-Earth is used; in the second experiment, sea ice albedo is reduced. The potential prognostic predictability is analyzed for a set of oceanic and atmospheric parameters. The decadal predictability of the atmospheric circulation is small. The highest potential predictability was found in air temperature at 2 m height over the northern North Atlantic and the southern South Atlantic. Over land, only a few areas are significantly predictable. The predictability for continental-scale averages of air temperature is relatively good in all Northern Hemisphere regions. Sea ice thickness is highly predictable along the ice edges in the North Atlantic Arctic sector. The meridional overturning circulation is highly predictable in both experiments and governs most of the decadal climate predictability in the Northern Hemisphere. The experiments using reduced sea ice albedo show some important differences, such as generally higher predictability of atmospheric variables in the Arctic and higher predictability of air temperature in Europe. Furthermore, decadal variations are substantially smaller in the simulations with reduced ice albedo, which can be explained by the reduced sea ice thickness in these simulations. (orig.)

  9. The role of aerosols in cloud drop parameterizations and its applications in global climate models

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, C.C.; Penner, J.E. [Lawrence Livermore National Lab., CA (United States)

    1996-04-01

    The characteristics of the cloud drop size distribution near cloud base are initially determined by aerosols that serve as cloud condensation nuclei and the updraft velocity. We have developed parameterizations relating cloud drop number concentration to aerosol number and sulfate mass concentrations and used them in a coupled global aerosol/general circulation model (GCM) to estimate the indirect aerosol forcing. The global aerosol model made use of our detailed emissions inventories for the amount of particulate matter from biomass burning sources and from fossil fuel sources as well as emissions inventories of the gas-phase anthropogenic SO{sub 2}. This work is aimed at validating the coupled model with the Atmospheric Radiation Measurement (ARM) Program measurements and assessing the possible magnitude of the aerosol-induced cloud effects on climate.

  10. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    OpenAIRE

    Astitha, M.; Lelieveld, J.; Kader, M. Abdel; Pozzer, A.; de Meij, A.

    2012-01-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a global...

  11. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    Science.gov (United States)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition among shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over land. Also, a beta-pdf is now employed to represent the normalized mass flux profile. This opens up an additional avenue to apply stochasticity in the scheme.

  12. A polar stratospheric cloud parameterization for the global modeling initiative three-dimensional model and its response to stratospheric aircraft

    International Nuclear Information System (INIS)

    Considine, D. B.; Douglass, A. R.; Connell, P. S.; Kinnison, D. E.; Rotman, D. A.

    2000-01-01

    We describe a new parameterization of polar stratospheric clouds (PSCs) which was written for and incorporated into the three-dimensional (3-D) chemistry and transport model (CTM) developed for NASA's Atmospheric Effects of Aviation Project (AEAP) by the Global Modeling Initiative (GMI). The parameterization was designed to respond to changes in NOy and H2O produced by high-speed civilian transport (HSCT) emissions. The parameterization predicts surface area densities (SADs) of both Type 1 and Type 2 PSCs for use in heterogeneous chemistry calculations. Type 1 PSCs are assumed to have a supercooled ternary sulfate (STS) composition, and Type 2 PSCs are treated as water ice with a coexisting nitric acid trihydrate (NAT) phase. Sedimentation is treated by assuming that the PSC particles obey lognormal size distributions, resulting in a realistic mass flux of condensed-phase H2O and HNO3. We examine a simulation of the Southern Hemisphere high-latitude lower-stratosphere winter and spring seasons driven by temperature and wind fields from a modified version of the National Center for Atmospheric Research (NCAR) Middle Atmosphere Community Climate Model Version 2 (MACCM2). Predicted PSC SADs and median radii for both Type 1 and Type 2 PSCs are consistent with observations. Gas-phase HNO3 and H2O concentrations in the high-latitude lower stratosphere qualitatively agree with Cryogenic Limb Array Etalon Spectrometer (CLAES) HNO3 and Microwave Limb Sounder (MLS) H2O observations. The residual denitrification and dehydration of the model polar vortex after polar winter compare well with Atmospheric Trace Molecule Spectroscopy (ATMOS) observations taken during November 1994. When the NOx and H2O emissions of a standard 500-aircraft HSCT fleet with a NOx emission index of 5 are added, NOx and H2O concentrations in the Southern Hemisphere polar vortex before winter increase by up to 3%. This results in earlier onset of PSC formation, denitrification, and

  13. Global direct radiative forcing by process-parameterized aerosol optical properties

    Science.gov (United States)

    Kirkevåg, Alf; Iversen, Trond

    2002-10-01

    A parameterization of aerosol optical parameters is developed and implemented in an extended version of the community climate model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, an assumed distribution of cloud-droplet residuals from aqueous-phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated at -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.
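    The look-up-table strategy the abstract describes (precompute optical properties offline, interpolate at run time instead of repeating full Mie calculations) can be sketched in one dimension. The table values below are invented placeholders, not the scheme's actual numbers, and a real implementation interpolates over several dimensions, not just relative humidity.

```python
import numpy as np

# Hypothetical precomputed table: single-scattering albedo vs. relative humidity.
rh_grid   = np.array([0.0, 0.5, 0.8, 0.9, 0.95, 0.99])
ssa_table = np.array([0.88, 0.90, 0.93, 0.95, 0.97, 0.99])

def single_scattering_albedo(rh):
    # Linear interpolation in RH; np.interp clamps outside the table range.
    return np.interp(rh, rh_grid, ssa_table)

print(single_scattering_albedo(0.85))  # falls between the 0.8 and 0.9 entries
```

Replacing a per-timestep Mie calculation with a table lookup like this is what makes the scheme fast enough for a GCM while staying close to the full calculation.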

  14. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Science.gov (United States)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  15. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (at 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073°C-1 to 0.106°C-1 (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
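    The slope values quoted above come from ordinary least-squares fits of PWV against a moist parameter. A minimal sketch of that procedure, on synthetic data with an assumed slope of 0.09 mm per degree C (the real slopes and noise levels are those reported in the study, not these):

```python
import numpy as np

# Synthetic PWV-vs-dew-point data: assumed slope 0.09 mm/degC plus noise.
rng = np.random.default_rng(1)
td  = np.linspace(-20.0, 10.0, 200)                 # dew point (deg C)
pwv = 4.0 + 0.09 * td + rng.normal(0.0, 0.05, td.size)

# Least-squares linear fit and correlation coefficient, as in the study.
slope, intercept = np.polyfit(td, pwv, 1)
r = np.corrcoef(td, pwv)[0, 1]
print(round(slope, 3), round(r, 2))                 # slope recovered near 0.09
```

Repeating such fits at 2 h, daily, and monthly aggregation is how slope ranges like 0.073 to 0.106 °C-1 across time scales are obtained.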

  16. Ocean's response to Hurricane Frances and its implications for drag coefficient parameterization at high wind speeds

    KAUST Repository

    Zedler, S. E.

    2009-04-25

    The drag coefficient parameterization of wind stress is investigated for tropical storm conditions using model sensitivity studies. The Massachusetts Institute of Technology (MIT) Ocean General Circulation Model was run in a regional setting with realistic stratification and forcing fields representing Hurricane Frances, which in early September 2004 passed east of the Caribbean Leeward Island chain. The model was forced with a NOAA-HWIND wind speed product after converting it to wind stress using four different drag coefficient parameterizations. Respective model results were tested against in situ measurements of temperature profiles and velocity, available from an array of 22 surface drifters and 12 subsurface floats. Changing the drag coefficient parameterization from one that saturated at a value of 2.3 × 10-3 to a constant drag coefficient of 1.2 × 10-3 reduced the standard deviation difference between the simulated minus the measured sea surface temperature change from 0.8°C to 0.3°C. Additionally, the standard deviation in the difference between simulated minus measured high-pass-filtered 15-m current speed reduced from 15 cm/s to 5 cm/s. The maximum difference in sea surface temperature response when two different turbulent mixing parameterizations were implemented was 0.3°C, i.e., only 11% of the maximum change of sea surface temperature caused by the storm. Copyright 2009 by the American Geophysical Union.
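    The quantity being varied above is the drag coefficient in the bulk formula tau = rho_air * Cd(U) * U^2. A sketch comparing a saturating Cd with a constant one, at hurricane-strength wind, shows how large the resulting stress difference is; the saturation form below is an illustrative stand-in, not the exact formulations tested in the study.

```python
RHO_AIR = 1.2  # air density, kg m^-3

def cd_saturating(u):
    # grows roughly linearly with wind speed, capped at 2.3e-3 (illustrative)
    return min(2.3e-3, 1.0e-3 + 4.0e-5 * u)

def cd_constant(u):
    return 1.2e-3

def wind_stress(u, cd):
    # bulk formula: tau = rho_air * Cd(U) * U^2, in N m^-2
    return RHO_AIR * cd(u) * u * u

u = 50.0  # hurricane-strength wind speed, m/s
print(wind_stress(u, cd_saturating), wind_stress(u, cd_constant))
```

At 50 m/s the saturating formulation nearly doubles the stress relative to the constant one, which is why the choice of Cd parameterization so strongly affects the simulated sea surface temperature response.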

  17. Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN

    Directory of Open Access Journals (Sweden)

    P. Kumar

    2009-04-01

    Dust and black carbon aerosol have long been known to exert potentially important and diverse impacts on cloud droplet formation. Most studies to date focus on the soluble fraction of these particles, and overlook interactions of the insoluble fraction with water vapor (even if it is known to be hydrophilic). To address this gap, we developed a new parameterization that considers cloud droplet formation within an ascending air parcel containing insoluble (but wettable) particles externally mixed with aerosol containing an appreciable soluble fraction. Activation of particles with a soluble fraction is described through well-established Köhler theory, while the activation of hydrophilic insoluble particles is treated by "adsorption-activation" theory. In the latter, water vapor is adsorbed onto insoluble particles, the activity of which is described by a multilayer Frenkel-Halsey-Hill (FHH) adsorption isotherm modified to account for particle curvature. We further develop FHH activation theory to (i) find combinations of the adsorption parameters AFHH and BFHH which yield atmospherically relevant behavior, and (ii) express activation properties (critical supersaturation) that follow a simple power law with respect to dry particle diameter.

    The new parameterization is tested by comparing the parameterized cloud droplet number concentration against predictions with a detailed numerical cloud model, considering a wide range of particle populations, cloud updraft conditions, water vapor condensation coefficients and FHH adsorption isotherm characteristics. The agreement between parameterization and parcel model is excellent, with an average error of 10% and R2 ≈ 0.98. A preliminary sensitivity study suggests that the sublinear response of droplet number to Köhler particle concentration is not as strong for FHH particles.
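    The power-law result described above, critical supersaturation falling as a power of dry diameter (s_c proportional to D^-x, with x = 3/2 for classical Köhler particles and an FHH-dependent exponent for insoluble ones), can be illustrated numerically. Both coefficients and the FHH exponent below are invented placeholders, not fitted values from the paper.

```python
import numpy as np

def s_crit(d_dry_nm, coeff, exponent):
    # power law: critical supersaturation vs. dry diameter (nm)
    return coeff * d_dry_nm ** (-exponent)

diams = np.array([50.0, 100.0, 200.0])              # dry diameters (nm)
koehler = s_crit(diams, coeff=40.0, exponent=1.5)   # Koehler: x = 3/2
fhh     = s_crit(diams, coeff=10.0, exponent=1.0)   # hypothetical FHH x
print(koehler, fhh)   # s_c decreases with particle size in both cases
```

The different exponents mean the two particle types respond differently to a shift in the size distribution, which is the behavior the sensitivity study probes.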

  18. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
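    The Tikhonov scheme mentioned above can be sketched in its linear form: minimize ||J p - d||^2 + beta^2 ||p - p0||^2, where J is the sensitivity (Jacobian) matrix, d the observations, and p0 the preferred parameter values. The closed-form solve below is only a conceptual illustration with synthetic data; PEST handles the nonlinear, iterative version with many additional controls.

```python
import numpy as np

# Underdetermined synthetic problem: 10 observations, 30 parameters.
rng = np.random.default_rng(2)
J  = rng.normal(size=(10, 30))
p_true = rng.normal(size=30)
d  = J @ p_true                  # noise-free synthetic observations
p0 = np.zeros(30)                # preferred (prior) parameter values
beta = 0.1                       # regularization weight

# Regularized normal equations: (J^T J + beta^2 I) p = J^T d + beta^2 p0
A = J.T @ J + beta**2 * np.eye(30)
p_est = np.linalg.solve(A, J.T @ d + beta**2 * p0)
print(np.linalg.norm(J @ p_est - d))  # small data misfit despite 30 > 10
```

Without the beta^2 term the normal-equations matrix is singular here; the regularization is what makes the highly parameterized estimate stable, at the cost of pulling parameters toward p0.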

  19. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of the sensitivity of the MM5 mesoscale model to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding in boundary conditions regularly. An alternative approach is based on frequent re-initialization of the atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, 4 of which applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results show larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is scarcely captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very

  20. Real-time image parameterization in high energy gamma-ray astronomy using transputers

    International Nuclear Information System (INIS)

    Punch, M.; Fegan, D.J.

    1991-01-01

    Recently, significant advances in Very-High-Energy gamma-ray astronomy have been made by parameterization of the Cherenkov images arising from gamma-ray initiated showers in the Earth's atmosphere. A prototype system to evaluate the use of Transputers as parallel-processing elements for real-time analysis of data from a Cherenkov imaging camera is described in this paper. The operation of and benefits resulting from such a system are described, and the viability of an application of the prototype system is discussed

  1. Global Performance of a Fast Parameterization Scheme for Estimating Surface Solar Radiation from MODIS data

    Science.gov (United States)

    Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.

    2016-12-01

    A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gas in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling from the MODIS-based instantaneous SSR estimates, and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.
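The relative RMSE metric used in evaluations like this one can be sketched as follows. This is an illustrative calculation only, not the authors' code, and the station values below are hypothetical:

```python
import math

def relative_rmse(estimates, observations):
    """Relative RMSE (%): RMSE of the estimates normalized by the mean observation."""
    n = len(observations)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimates, observations)) / n)
    return 100.0 * rmse / (sum(observations) / n)

# Hypothetical instantaneous SSR values (W m^-2) at four stations
obs = [520.0, 610.0, 480.0, 700.0]
est = [500.0, 640.0, 470.0, 690.0]
print(round(relative_rmse(est, obs), 2))
```

Values under the 15-17% thresholds reported above would indicate comparable or better agreement than SUNFLUX achieves against SURFRAD, NCP, and BSRN observations.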

  2. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve optimization efficiency for industrial dynamic processes, where the costate gradient formula is employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustration. The research results show that the proposed fast approach performs well: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
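The core idea of CVP, independent of the fast scheme proposed in this record, is to reduce the infinite-dimensional control trajectory to a finite parameter vector. A minimal sketch using the simplest choice, piecewise-constant discretization (function name and data are hypothetical):

```python
def piecewise_constant_control(t, t0, tf, params):
    """CVP in its simplest form: the control trajectory u(t) is represented
    by piecewise-constant values over len(params) equal subintervals of
    [t0, tf], turning the dynamic optimization into a finite-dimensional NLP."""
    n = len(params)
    idx = min(int((t - t0) / (tf - t0) * n), n - 1)  # clamp t == tf to last interval
    return params[idx]

# Four-interval control over [0, 4]: u(t) = 1 on [0,1), 2 on [1,2), ...
print(piecewise_constant_control(0.5, 0.0, 4.0, [1.0, 2.0, 3.0, 4.0]))  # 1.0
```

Optimizing the entries of `params` with any NLP solver then yields an approximate optimal control; the efficiency issue the abstract targets arises because each evaluation of the NLP objective requires integrating the process dynamics.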

  3. A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes

    International Nuclear Information System (INIS)

    J. O. Pinto; A.H. Lynch

    2004-01-01

    The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles

  4. Human impact parameterizations in global hydrological models improve estimates of monthly discharges and hydrological extremes: a multi-model validation study

    NARCIS (Netherlands)

    Veldkamp, T I E; Zhao, F; Ward, P J; Moel, H de; Aerts, J C J H; Schmied, H Müller; Portmann, F T; Masaki, Y; Pokhrel, Y; Liu, X; Satoh, Yusuke; Gerten, Dieter; Gosling, S N; Zaherpour, J; Wada, Yoshihide

    2018-01-01

    Human activity has a profound influence on river discharges, hydrological extremes and water-related hazards. In this study, we compare the results of five state-of-the-art global hydrological models (GHMs) with observations to examine the role of human impact parameterizations (HIP) in the

  5. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    Science.gov (United States)

    Astitha, M.; Lelieveld, J.; Abdel Kader, M.; Pozzer, A.; de Meij, A.

    2012-11-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70-75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.
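The factor-of-2 agreement statistic quoted above (the share of modelled values lying within 0.5 to 2 times the observations) can be sketched as follows; this is purely illustrative, and the paired values below are hypothetical:

```python
def fraction_within_factor(model, obs, factor=2.0):
    """Share of paired model values within [obs/factor, obs*factor] --
    the 0.5x-2x agreement band used in dust deposition/AOD evaluations."""
    hits = sum(1 for m, o in zip(model, obs) if o / factor <= m <= o * factor)
    return hits / len(obs)

# Hypothetical modelled vs. observed AOD pairs
print(fraction_within_factor([1.0, 0.4, 3.0, 2.0], [1.0, 1.0, 1.0, 1.5]))
```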

  7. High precision tracking of a piezoelectric nano-manipulator with parameterized hysteresis compensation

    Science.gov (United States)

    Yan, Peng; Zhang, Yangming

    2018-06-01

    High performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as SPM (scanning probe microscopy), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high precision tracking of a piezoelectric nano-manipulator subjected to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive inverse-model-based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics can be well eliminated by on-line estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.

  8. Sensitivity of aerosol indirect forcing and autoconversion to cloud droplet parameterization: an assessment with the NASA Global Modeling Initiative.

    Science.gov (United States)

    Sotiropoulou, R. P.; Meshkhidze, N.; Nenes, A.

    2006-12-01

    The aerosol indirect forcing is one of the largest sources of uncertainty in assessments of anthropogenic climate change [IPCC, 2001]. Much of this uncertainty arises from the approach used for linking cloud droplet number concentration (CDNC) to precursor aerosol. Global Climate Models (GCM) use a wide range of cloud droplet activation mechanisms ranging from empirical [Boucher and Lohmann, 1995] to detailed physically-based formulations [e.g., Abdul-Razzak and Ghan, 2000; Fountoukis and Nenes, 2005]. The objective of this study is to assess the uncertainties in indirect forcing and autoconversion of cloud water to rain caused by the application of different cloud droplet parameterization mechanisms; this is an important step towards constraining the aerosol indirect effects (AIE). Here we estimate the uncertainty in indirect forcing and autoconversion rate using the NASA Global Model Initiative (GMI). The GMI allows easy interchange of meteorological fields, chemical mechanisms and the aerosol microphysical packages. Therefore, it is an ideal tool for assessing the effect of different parameters on aerosol indirect forcing. The aerosol module includes primary emissions, chemical production of sulfate in clear air and in-cloud aqueous phase, gravitational sedimentation, dry deposition, wet scavenging in and below clouds, and hygroscopic growth. Model inputs include SO2 (fossil fuel and natural), black carbon (BC), organic carbon (OC), mineral dust and sea salt. The meteorological data used in this work were taken from the NASA Data Assimilation Office (DAO) and two different GCMs: the NASA GEOS4 finite volume GCM (FVGCM) and the Goddard Institute for Space Studies version II' (GISS II') GCM. Simulations were carried out for "present day" and "preindustrial" emissions using different meteorological fields (i.e. DAO, FVGCM, GISS II'); cloud droplet number concentration is computed from the correlations of Boucher and Lohmann [1995], Abdul-Razzak and Ghan [2000

  9. A global lightning parameterization based on statistical relationships among environmental factors, aerosols, and convective clouds in the TRMM climatology

    Science.gov (United States)

    Stolz, Douglas C.; Rutledge, Steven A.; Pierce, Jeffrey R.; van den Heever, Susan C.

    2017-07-01

    The objective of this study is to determine the relative contributions of normalized convective available potential energy (NCAPE), cloud condensation nuclei (CCN) concentrations, warm cloud depth (WCD), vertical wind shear (SHEAR), and environmental relative humidity (RH) to the variability of lightning and radar reflectivity within convective features (CFs) observed by the Tropical Rainfall Measuring Mission (TRMM) satellite. Our approach incorporates multidimensional binned representations of observations of CFs and modeled thermodynamics, kinematics, and CCN as inputs to develop approximations for total lightning density (TLD) and the average height of 30 dBZ radar reflectivity (AVGHT30). The results suggest that TLD and AVGHT30 increase with increasing NCAPE, increasing CCN, decreasing WCD, increasing SHEAR, and decreasing RH. Multiple-linear approximations for lightning and radar quantities using the aforementioned predictors account for significant portions of the variance in the binned data set (R2 ≈ 0.69-0.81). The standardized weights attributed to CCN, NCAPE, and WCD are largest, the standardized weight of RH varies relative to other predictors, while the standardized weight for SHEAR is comparatively small. We investigate these statistical relationships for collections of CFs within various geographic areas and compare the aerosol (CCN) and thermodynamic (NCAPE and WCD) contributions to variations in the CF population in a partial sensitivity analysis based on multiple-linear regression approximations computed herein. A global lightning parameterization is developed; the average difference between predicted and observed TLD decreases from +21.6 to +11.6% when using a hybrid approach to combine separate approximations over continents and oceans, thus highlighting the need for regionally targeted investigations in the future.
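A multiple-linear approximation of the kind described (predicting TLD from predictors such as NCAPE and WCD) can be sketched with a tiny two-predictor normal-equations fit. The arrays below are synthetic stand-ins for the binned TRMM data, not the study's values:

```python
def fit_two_predictors(y, x1, x2):
    """Least-squares fit of y ~ b1*x1 + b2*x2 (no intercept) by solving
    the 2x2 normal equations directly."""
    a11 = sum(a * a for a in x1)
    a12 = sum(a * b for a, b in zip(x1, x2))
    a22 = sum(b * b for b in x2)
    c1 = sum(a * v for a, v in zip(x1, y))
    c2 = sum(b * v for b, v in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return (a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det

# Synthetic bins: lightning density rises with NCAPE and falls with WCD
ncape = [1.0, 2.0, 3.0, 4.0]
wcd = [2.0, 1.0, 4.0, 3.0]
tld = [2 * a - b for a, b in zip(ncape, wcd)]  # exact linear relation for the demo
b_ncape, b_wcd = fit_two_predictors(tld, ncape, wcd)
print(b_ncape, b_wcd)  # recovers 2.0 and -1.0
```

With predictors standardized first, the fitted coefficients become the standardized weights whose relative magnitudes the study compares (CCN, NCAPE, and WCD largest; SHEAR comparatively small).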

  10. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  11. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    Science.gov (United States)

    Suarex, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  12. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yanlian [Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wu, Xiaocui [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Ju, Weimin [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Jiangsu Center for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing China; Chen, Jing M. [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wang, Shaoqiang [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Wang, Huimin [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Yuan, Wenping [State Key Laboratory of Earth Surface Processes and Resource Ecology, Future Earth Research Institute, Beijing Normal University, Beijing China; Andrew Black, T. [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Jassal, Rachhpal [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Ibrom, Andreas [Department of Environmental Engineering, Technical University of Denmark (DTU), Kgs. Lyngby Denmark; Han, Shijie [Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang China; Yan, Junhua [South China Botanical Garden, Chinese Academy of Sciences, Guangzhou China; Margolis, Hank [Centre for Forest Studies, Faculty of Forestry, Geography and Geomatics, Laval University, Quebec City Quebec Canada; Roupsard, Olivier [CIRAD-Persyst, UMR Ecologie Fonctionnelle and Biogéochimie des Sols et Agroécosystèmes, SupAgro-CIRAD-INRA-IRD, Montpellier France; CATIE (Tropical Agricultural Centre for Research and Higher Education), Turrialba Costa Rica; Li, Yingnian [Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining China; Zhao, Fenghua [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Kiely, Gerard [Environmental Research Institute, Civil and Environmental Engineering Department, University College Cork, Cork Ireland; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa Alabama USA; Pavelka, Marian [Laboratory of Plants Ecological Physiology, Institute of Systems Biology and Ecology AS CR, Prague Czech Republic; Montagnani, Leonardo [Forest Services, Autonomous Province of Bolzano, Bolzano Italy; Faculty of Sciences and Technology, Free University of Bolzano, Bolzano Italy; Wohlfahrt, Georg [Institute for Ecology, University of Innsbruck, Innsbruck Austria; European Academy of Bolzano, Bolzano Italy; D'Odorico, Petra [Grassland Sciences Group, Institute of Agricultural Sciences, ETH Zurich Switzerland; Cook, David [Atmospheric and Climate Research Program, Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Arain, M. Altaf [McMaster Centre for Climate Change and School of Geography and Earth Sciences, McMaster University, Hamilton Ontario Canada; Bonal, Damien [INRA Nancy, UMR EEF, Champenoux France; Beringer, Jason [School of Earth and Environment, The University of Western Australia, Crawley Australia; Blanken, Peter D. [Department of Geography, University of Colorado Boulder, Boulder Colorado USA; Loubet, Benjamin [UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay, Thiverval-Grignon France; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Athens Georgia USA; Matteucci, Giorgio [Via San Camillo de Lellis, University of Tuscia, Viterbo Italy; Nagy, Zoltan [MTA-SZIE Plant Ecology Research Group, Szent Istvan University, Godollo Hungary; Olejnik, Janusz [Meteorology Department, Poznan University of Life Sciences, Poznan Poland; Department of Matter and Energy Fluxes, Global Change Research Center, Brno Czech Republic; Paw U, Kyaw Tha [Department of Land, Air and Water Resources, University of California, Davis California USA; Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, Cambridge USA; Varlagin, Andrej [A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow Russia

    2016-04-06

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at 6 FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems and it is more robust with regard to usual biases in input data than existing approaches which neglect the bi-modal within-canopy distribution of PAR.
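The two-leaf decomposition at the heart of TL-LUE can be written in one line: GPP is the sum of sunlit and shaded contributions, each with its own maximum light use efficiency. A schematic sketch (the values are hypothetical, and the single `stress` factor stands in for the full model's environmental scalars):

```python
def tl_lue_gpp(apar_sunlit, apar_shaded, eps_su, eps_sh, stress=1.0):
    """Two-leaf LUE: separate efficiencies for sunlit and shaded leaves.
    `stress` is a placeholder for temperature/water downregulation scalars."""
    return stress * (eps_su * apar_sunlit + eps_sh * apar_shaded)

# Hypothetical values, with shaded efficiency ~3x sunlit as in the fitted range above
print(tl_lue_gpp(8.0, 4.0, eps_su=0.5, eps_sh=1.5))  # 10.0
```

Because sunlit and shaded APAR respond differently to biases in incoming PAR, this split is also what makes the model less sensitive to PAR input errors than a big-leaf formulation.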

  13. The Empirical Canadian High Arctic Ionospheric Model (E-CHAIM): Bottomside Parameterization

    Science.gov (United States)

    Themens, D. R.; Jayachandran, P. T.

    2017-12-01

    It is well known that the International Reference Ionosphere (IRI) suffers reduced accuracy in its representation of monthly median ionospheric electron density at high latitudes. These inaccuracies are believed to stem, at least in part, from a historical lack of data from these regions. Now, roughly thirty and forty years after the development of the original URSI and CCIR foF2 maps, respectively, there exists a much larger dataset of high latitude observations of ionospheric electron density. These new measurements come in the form of new ionosonde deployments, such as those of the Canadian High Arctic Ionospheric Network, the CHAMP, GRACE, and COSMIC radio occultation missions, and the construction of the Poker Flat, Resolute, and EISCAT Incoherent Scatter Radar systems. These new datasets afford an opportunity to revise the IRI's representation of the high latitude ionosphere. Using a spherical cap harmonic expansion to represent horizontal and diurnal variability and a Fourier expansion in day of year to represent seasonal variations, we have developed a new model of the bottomside ionosphere's electron density for the high latitude ionosphere, above 50N geomagnetic latitude. For the peak heights of the E and F1 layers (hmE and hmF1, respectively), current standards use a constant value for hmE and either use a single-parameter model for hmF1 (IRI) or scale hmF1 with the F peak (NeQuick). For E-CHAIM, we have diverged from this convention to account for the greater variability seen in these characteristics at high latitudes, opting to use a full spherical harmonic model description for each of these characteristics. For the description of the bottomside vertical electron density profile, we present a single-layer model with altitude-varying scale height. The scale height function is taken as the sum of three scale height layer functions anchored to the F2 peak, hmF1, and hmE. This parameterization successfully reproduces the structure of the various bottomside
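The abstract does not give the functional forms, but the idea of an altitude-varying scale height built as a sum of three layer contributions anchored at hmF2, hmF1, and hmE can be sketched generically. The Gaussian shapes, anchor heights, widths, and amplitudes below are placeholders for illustration, not E-CHAIM's actual functions:

```python
import math

def scale_height(h, anchors, widths, amplitudes):
    """Illustrative altitude-varying scale height: the sum of three layer
    contributions, each anchored at a characteristic height (e.g. hmF2,
    hmF1, hmE). Gaussian shapes are placeholders only."""
    return sum(a * math.exp(-((h - h0) / w) ** 2)
               for a, h0, w in zip(amplitudes, anchors, widths))

# Placeholder anchors near typical hmF2, hmF1, hmE heights (km)
print(scale_height(300.0, [300.0, 180.0, 110.0], [60.0, 40.0, 20.0], [45.0, 25.0, 10.0]))
```

The point of such a construction is that one smooth function covers the whole bottomside, with each anchored term shaping the profile near its own layer.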

  14. Topside Electron Density Representations for Middle and High Latitudes: A Topside Parameterization for E-CHAIM Based On the NeQuick

    Science.gov (United States)

    Themens, David R.; Jayachandran, P. T.; Bilitza, Dieter; Erickson, Philip J.; Häggström, Ingemar; Lyashenko, Mykhaylo V.; Reid, Benjamin; Varney, Roger H.; Pustovalova, Ljubov

    2018-02-01

    In this study, we present a topside model representation to be used by the Empirical Canadian High Arctic Ionospheric Model (E-CHAIM). In the process of this, we also present a comprehensive evaluation of the NeQuick's, and by extension the International Reference Ionosphere's, topside electron density model for middle and high latitudes in the Northern Hemisphere. Using data gathered from all available incoherent scatter radars, topside sounders, and Global Navigation Satellite System Radio Occultation satellites, we show that the current NeQuick parameterization suboptimally represents the shape of the topside electron density profile at these latitudes and performs poorly in the representation of seasonal and solar cycle variations of the topside scale thickness. Despite this, the simple, one variable, NeQuick model is a powerful tool for modeling the topside ionosphere. By refitting the parameters that define the maximum topside scale thickness and the rate of increase of the scale height within the NeQuick topside model function, r and g, respectively, and refitting the model's parameterization of the scale height at the F region peak, H0, we find considerable improvement in the NeQuick's ability to represent the topside shape and behavior. Building on these results, we present a new topside model extension of the E-CHAIM based on the revised NeQuick function. Overall, root-mean-square errors in topside electron density are improved over the traditional International Reference Ionosphere/NeQuick topside by 31% for a new NeQuick parameterization and by 36% for a newly proposed topside for E-CHAIM.
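For reference, the semi-Epstein topside used by NeQuick (as described in the NeQuick 2 literature) depends on exactly the three parameters refitted here: H0, g, and r. A sketch of that profile function, with illustrative parameter values (the defaults g = 0.125 and r = 100 are commonly quoted for NeQuick and should be treated as assumptions here):

```python
import math

def nequick_topside(h, nmf2, hmf2, h0, g=0.125, r=100.0):
    """Semi-Epstein topside profile in the form used by NeQuick: the scale
    height grows with altitude above the F2 peak at a rate set by g,
    saturating at a level controlled by r."""
    dh = h - hmf2
    hscale = h0 * (1.0 + (r * g * dh) / (r * h0 + g * dh))
    z = dh / hscale
    ez = math.exp(z)
    return 4.0 * nmf2 * ez / (1.0 + ez) ** 2

# At the F2 peak the profile reduces to NmF2 exactly
print(nequick_topside(300.0, 1e12, 300.0, 40.0))
```

Refitting r, g, and the parameterization of H0, as done in this study, changes both the asymptotic scale thickness and how quickly it is reached above the peak.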

  15. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    Science.gov (United States)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  16. Improvement and implementation of a parameterization for shallow cumulus in the global climate model ECHAM5-HAM

    Science.gov (United States)

    Isotta, Francesco; Spichtinger, Peter; Lohmann, Ulrike; von Salzen, Knut

    2010-05-01

    Convection is a crucial component of weather and climate. Its parameterization in General Circulation Models (GCMs) is one of the largest sources of uncertainty. Convection redistributes moisture and heat, affects the radiation budget and transports tracers from the PBL to higher levels. Shallow convection is very common over the globe, in particular over the oceans in the trade wind regions. A recently developed shallow convection scheme by von Salzen and McFarlane (2002) is implemented in the ECHAM5-HAM GCM instead of the standard convection scheme by Tiedtke (1989). The scheme of von Salzen and McFarlane (2002) is a bulk parameterization for an ensemble of transient shallow cumuli. A life cycle is considered, as well as inhomogeneities in the horizontal distribution of in-cloud properties due to mixing. The shallow convection scheme is further developed to take the ice phase and precipitation in the form of rain and snow into account. The implemented double-moment microphysics scheme for cloud droplets and ice crystals is consistent with the stratiform scheme and with the other types of convective clouds. The ice phase makes it possible to alter the criterion used to distinguish between shallow convection and the other two types of convection, namely deep and mid-level, which are still calculated by the Tiedtke (1989) scheme. The launching layer of the test parcel in the shallow convection scheme is chosen as the one with maximum moist static energy in the three lowest levels. The latter is modified to the "frozen moist static energy" to account for the ice phase. Moreover, tracers (e.g. aerosols) are transported in the updraft and scavenged in and below clouds. 
As a first test of the performance of the new scheme and the interaction with the rest of the model, the Barbados Oceanographic and Meteorological EXperiment (BOMEX) and the Rain In Cumulus over the Ocean experiment (RICO) case are simulated with the single column model (SCM) and the results are compared with large eddy

  17. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    Science.gov (United States)

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used: a geostatistical autocorrelation function enforces structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed-parameter application.
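
    As a small illustration of the geostatistical prior mentioned above, the following numpy sketch builds a prior covariance matrix from an exponential autocorrelation function. The function name, variance, and correlation length are illustrative assumptions, not bgaPEST's actual interface or defaults.

```python
import numpy as np

def exponential_prior_covariance(x, variance=1.0, corr_length=100.0):
    """Q_ij = variance * exp(-|x_i - x_j| / corr_length)."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise separations
    return variance * np.exp(-d / corr_length)

x = np.linspace(0.0, 1000.0, 11)          # parameter locations [m]
Q = exponential_prior_covariance(x, variance=2.0, corr_length=250.0)

# Nearby parameters are strongly correlated; distant ones nearly independent,
# which is what enforces smooth, structured parameter fields in the inversion.
print(Q[0, 0], Q[0, 1], Q[0, -1])
```

    In a Bayesian geostatistical inversion, a matrix like Q plays the role of the prior covariance that penalizes rough, overfitted parameter fields.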

  18. Development and testing of an aerosol-stratus cloud parameterization scheme for middle and high latitudes

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, P.Q.; Meyers, M.P.; Kreidenweis, S.; Cotton, W.R. [Colorado State Univ., Fort Collins, CO (United States)]

    1996-04-01

    The aim of this new project is to develop an aerosol/cloud microphysics parameterization of mixed-phase stratus and boundary layer clouds. Our approach is to create, test, and implement a bulk-microphysics/aerosol model using data from Atmospheric Radiation Measurement (ARM) Cloud and Radiation Testbed (CART) sites and large-eddy simulation (LES) explicit bin-resolving aerosol/microphysics models. The primary objectives of this work are twofold. First, we need the prediction of number concentrations of activated aerosol which are transferred to the droplet spectrum, so that the aerosol population directly affects the cloud formation and microphysics. Second, we plan to couple the aerosol model to the gas and aqueous-chemistry module that will drive the aerosol formation and growth. We begin by exploring the feasibility of performing cloud-resolving simulations of Arctic stratus clouds over the North Slope CART site. These simulations using Colorado State University's regional atmospheric modeling system (RAMS) will be useful in designing the structure of the cloud-resolving model and in interpreting data acquired at the North Slope site.

  19. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for parameterization of global solar radiation. Data from twenty-one stations are used to train the neural network, and data from another ten stations are used to validate the model. The neural network uses latitude, longitude, altitude, sunshine duration, and period number to parameterize solar radiation values. The testing data were withheld from training to demonstrate the network's performance at stations not seen during training. The results indicate good agreement between the parameterized solar radiation values and the actual measured values.

  20. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in

  1. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    DEFF Research Database (Denmark)

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel

    2011-01-01

    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem. Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number...... of such a feature is the generic implementation of Laplace approximation of high-dimensional integrals for use in latent variable models. We also review the literature in which ADMB has been used, and discuss future development of ADMB as an open source project. Overall, the main advantages of ADMB are flexibility...

  2. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools that further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options for integrated TCP/IP-based parallel run management or external independent run management through a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers to entry for users and developers while providing efficient, optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
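
    A minimal numpy sketch of a damped Gauss-Marquardt-Levenberg iteration with a Tikhonov term, the algorithm family named above. The toy model, the damping schedule, and the regularization weight are illustrative assumptions, not PEST++ code.

```python
import numpy as np

def model(p, t):
    return p[0] * np.exp(-p[1] * t)        # toy exponential-decay forward model

def jacobian(p, t):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

t = np.linspace(0.0, 4.0, 20)
obs = model(np.array([2.0, 0.7]), t)       # synthetic, noise-free observations

p = np.array([1.0, 0.1])                   # initial parameter guess
p_prior = p.copy()                         # Tikhonov pulls gently toward this
lam, alpha = 1.0, 1e-6                     # LM damping, regularization weight
for _ in range(100):
    r = obs - model(p, t)
    J = jacobian(p, t)
    # Solve (J^T J + (lam + alpha) I) dp = J^T r + alpha (p_prior - p)
    A = J.T @ J + (lam + alpha) * np.eye(2)
    g = J.T @ r + alpha * (p_prior - p)
    p_try = p + np.linalg.solve(A, g)
    r_try = obs - model(p_try, t)
    if r_try @ r_try < r @ r:              # accept the step, relax damping
        p, lam = p_try, max(lam / 3.0, 1e-8)
    else:                                  # reject the step, damp harder
        lam *= 3.0

print(p)                                   # ~[2.0, 0.7]
```

    The adaptive damping is what lets the method behave like steepest descent far from the optimum and like Gauss-Newton near it.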

  3. CHEM2D-OPP: A new linearized gas-phase ozone photochemistry parameterization for high-altitude NWP and climate models

    Directory of Open Access Journals (Sweden)

    J. P. McCormack

    2006-01-01

    The new CHEM2D-Ozone Photochemistry Parameterization (CHEM2D-OPP) for high-altitude numerical weather prediction (NWP) systems and climate models specifies the net ozone photochemical tendency and its sensitivity to changes in ozone mixing ratio, temperature and overhead ozone column based on calculations from the CHEM2D interactive middle atmospheric photochemical transport model. We evaluate CHEM2D-OPP performance using both short-term (6-day) and long-term (1-year) stratospheric ozone simulations with the prototype high-altitude NOGAPS-ALPHA forecast model. An inter-comparison of NOGAPS-ALPHA 6-day ozone hindcasts for 7 February 2005 with ozone photochemistry parameterizations currently used in operational NWP systems shows that CHEM2D-OPP yields the best overall agreement with both individual Aura Microwave Limb Sounder ozone profile measurements and independent hemispheric (10°–90° N) ozone analysis fields. A 1-year free-running NOGAPS-ALPHA simulation using CHEM2D-OPP produces a realistic seasonal cycle in zonal mean ozone throughout the stratosphere. We find that the combination of a model cold temperature bias at high latitudes in winter and a warm bias in the CHEM2D-OPP temperature climatology can degrade the performance of the linearized ozone photochemistry parameterization over seasonal time scales, despite the fact that the parameterized temperature dependence is weak in these regions.
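
    A linearized photochemistry parameterization of this type is a first-order Taylor expansion of the net tendency about a reference state in ozone, temperature, and overhead column. The sketch below illustrates the form only; every coefficient value is invented for illustration and is not a CHEM2D-OPP coefficient.

```python
def ozone_tendency(o3, temp, col,
                   ref=(5.0e-6, 230.0, 300.0),   # reference (O3, T, column)
                   base=0.0,                     # net (P - L) at the reference
                   d_do3=-1.0e-7,                # d(P-L)/dO3     [1/s]
                   d_dt=2.0e-13,                 # d(P-L)/dT
                   d_dcol=-1.0e-14):             # d(P-L)/dcolumn
    """First-order (linearized) net ozone photochemical tendency."""
    o3_0, t_0, col_0 = ref
    return (base
            + d_do3 * (o3 - o3_0)
            + d_dt * (temp - t_0)
            + d_dcol * (col - col_0))

# At the reference state the tendency reduces to the base term; an ozone
# excess relaxes back (negative tendency) because d(P-L)/dO3 < 0.
print(ozone_tendency(5.0e-6, 230.0, 300.0))
print(ozone_tendency(6.0e-6, 230.0, 300.0))
```

    The negative ozone sensitivity is what makes such schemes self-stabilizing in long free-running simulations; the abstract's point is that biases in the reference temperature climatology can still degrade the expansion over seasonal time scales.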

  4. A high resolution global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc

    2014-05-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet the current generation of global scale hydrological models (GHMs) does not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model, so that it can equally be expanded to data-poor environments. Based on our sensitivity analysis, in which we ran the model with various hydrogeological parameter settings, we observed that most variance in groundwater
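
    As a toy illustration of the head computation a MODFLOW-based model performs, here is a minimal 1-D steady-state finite-difference solve with uniform recharge between two fixed-head boundaries. Grid size, transmissivity, and recharge values are illustrative, not the global model's parameterization.

```python
import numpy as np

n, dx = 51, 200.0                  # cells, spacing [m]
T, R = 500.0, 1e-3                 # transmissivity [m2/d], recharge [m/d]
h = np.zeros(n)
h[0], h[-1] = 10.0, 10.0           # fixed-head boundaries (e.g. rivers)

# Steady state of T h'' + R = 0, discretized and relaxed by Jacobi iteration:
# h_i = (h_{i-1} + h_{i+1}) / 2 + R dx^2 / (2 T)
for _ in range(20000):
    h[1:-1] = 0.5 * (h[:-2] + h[2:]) + 0.5 * R * dx * dx / T

# Recharge mounds the water table, peaking midway between the boundaries;
# the analytic solution is h(x) = 10 + (R / 2T) x (L - x) with L = 10 km.
print(h.max(), int(np.argmax(h)))
```

    A global model solves the same balance implicitly on millions of cells in two or three dimensions, but the structure (conductance-weighted neighbor coupling plus source terms) is the same.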

  5. Collaborative Project. 3D Radiative Transfer Parameterization Over Mountains/Snow for High-Resolution Climate Models. Fast physics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liou, Kuo-Nan [Univ. of California, Los Angeles, CA (United States)]

    2016-02-09

    Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intricate, inhomogeneous mountain/snow regions, and (2) an innovative stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding of and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct-reflected and diffuse-reflected fluxes, and the coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer programs readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis against topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted WRF and CCSM4 simulations to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transitions and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the
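
    The regression step described above can be sketched as a multiple linear fit of (3-D minus plane-parallel) flux deviations to topographic predictors. The predictors mirror those named in the abstract, but the synthetic "Monte Carlo" deviations and coefficients below are illustrative stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
cos_sza = rng.uniform(0.2, 1.0, n)       # cosine of solar incident angle
sky_view = rng.uniform(0.5, 1.0, n)      # sky view factor
terrain_cfg = rng.uniform(0.0, 0.5, n)   # terrain configuration factor

# Pretend these deviations came from 3-D Monte Carlo minus plane-parallel runs.
true_coef = np.array([12.0, -30.0, 45.0, 5.0])
X = np.column_stack([cos_sza, sky_view, terrain_cfg, np.ones(n)])
dev = X @ true_coef + rng.normal(0.0, 0.5, n)     # deviations [W/m^2] + noise

# Ordinary least squares recovers the regression equation for the deviation.
coef, *_ = np.linalg.lstsq(X, dev, rcond=None)
print(coef)
```

    Once fitted offline against exact Monte Carlo results, such regression equations are cheap enough to evaluate inside a climate model every radiation step, which is the point of the parameterization.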

  6. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  7. Neural network radiative transfer solvers for the generation of high resolution solar irradiance spectra parameterized by cloud and aerosol parameters

    International Nuclear Information System (INIS)

    Taylor, M.; Kosmopoulos, P.G.; Kazadzis, S.; Keramitsoglou, I.; Kiranoudis, C.T.

    2016-01-01

    This paper reports on the development of a neural network (NN) model for instantaneous and accurate estimation of solar radiation spectra and budgets geared toward satellite cloud data using a ≈2.4 M record, high-spectral resolution look up table (LUT) generated with the radiative transfer model libRadtran. Two NN solvers, one for clear sky conditions dominated by aerosol and one for cloudy skies, were trained on a normally-distributed and multiparametric subset of the LUT that spans a very broad class of atmospheric and meteorological conditions as inputs with corresponding high resolution solar irradiance target spectra as outputs. The NN solvers were tested by feeding them with a large (10 K record) “off-grid” random subset of the LUT spanning the training data space, and then comparing simulated outputs with target values provided by the LUT. The NN solvers demonstrated a capability to interpolate accurately over the entire multiparametric space. Once trained, the NN solvers allow for high-speed estimation of solar radiation spectra with high spectral resolution (1 nm) and for a quantification of the effect of aerosol and cloud optical parameters on the solar radiation budget without the need for a massive database. The cloudy sky NN solver was applied to high spatial resolution (54 K pixel) cloud data extracted from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard the geostationary Meteosat Second Generation 3 (MSG3) satellite and demonstrated that coherent maps of spectrally-integrated global horizontal irradiance at this resolution can be produced on the order of 1 min. - Highlights: • Neural network radiative transfer solvers for generation of solar irradiance spectra. • Sensitivity analysis of irradiance spectra with respect to aerosol and cloud parameters. • Regional maps of total global horizontal irradiance for cloudy sky conditions. • Regional solar radiation maps produced directly from MSG3/SEVIRI satellite inputs.

  8. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all the physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  9. Parameterized post-Newtonian cosmology

    International Nuclear Information System (INIS)

    Sanghai, Viraj A A; Clifton, Timothy

    2017-01-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)

  10. Parameterized post-Newtonian cosmology

    Science.gov (United States)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).

  11. Inheritance versus parameterization

    DEFF Research Database (Denmark)

    Ernst, Erik

    2013-01-01

    This position paper argues that inheritance and parameterization differ in their fundamental structure, even though they may emulate each other in many ways. Based on this, we claim that certain mechanisms, e.g., final classes, are in conflict with the nature of inheritance, and hence causes...

  12. Normalization of the parameterized Courant-Snyder matrix for symplectic factorization of a parameterized Taylor map

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1991-01-01

    The transverse motion of charged particles in a circular accelerator can be well represented by a one-turn high-order Taylor map. For particles without energy deviation, the one-turn Taylor map is a 4-dimensional polynomial map in four variables: the transverse canonical coordinates and their conjugate momenta. To include energy-deviation (off-momentum) effects, the map has to be parameterized with a smallness factor representing the off-momentum, so that the Taylor map becomes a 4-dimensional polynomial map in five variables. For this type of parameterized Taylor map, a method is presented for converting it into a parameterized Dragt-Finn factorization map. A parameterized nonlinear normal form and a parameterized kick factorization can thus be obtained with suitable modification of the existing technique.

  13. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    Czech Academy of Sciences Publication Activity Database

    Zhou, Y.; Wu, X.; Weiming, J.; Chen, J.; Wang, S.; Wang, H.; Wenping, Y.; Black, T. A.; Jassal, R.; Ibrom, A.; Han, S.; Yan, J.; Margolis, H.; Roupsard, O.; Li, Y.; Zhao, F.; Kiely, G.; Starr, G.; Pavelka, Marian; Montagnani, L.; Wohlfahrt, G.; D'Odorico, P.; Cook, D.; Altaf Arain, M.; Bonal, D.; Beringer, J.; Blanken, P. D.; Loubet, B.; Leclerc, M. Y.; Matteucci, G.; Nagy, Z.; Olejnik, Janusz; U., K. T. P.; Varlagin, A.

    2016-01-01

    Vol. 36, No. 7 (2016), pp. 2743-2760 ISSN 2169-8953 Institutional support: RVO:67179843 Keywords: global parameterization * predicting model * FLUXNET Subject RIV: EH - Ecology, Behaviour Impact factor: 3.395, year: 2016

  14. Improving the temperature predictions of subsurface thermal models by using high-quality input data. Part 1: Uncertainty analysis of the thermal-conductivity parameterization

    DEFF Research Database (Denmark)

    Fuchs, Sven; Balling, Niels

    2016-01-01

    The subsurface temperature field and the geothermal conditions in sedimentary basins are frequently examined by using numerical thermal models. For those models, detailed knowledge of rock thermal properties are paramount for a reliable parameterization of layer properties and boundary conditions...

  15. Cumulus parameterizations in chemical transport models

    Science.gov (United States)

    Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.

    1995-12-01

    Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parameterization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and to point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasting) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992)). These convective schemes vary in the closure used to derive the mass fluxes, as well as in the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the

  16. The dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component AM3 of the GFDL global coupled model CM3

    Science.gov (United States)

    Donner, L.J.; Wyman, B.L.; Hemler, R.S.; Horowitz, L.W.; Ming, Y.; Zhao, M.; Golaz, J.-C.; Ginoux, P.; Lin, S.-J.; Schwarzkopf, M.D.; Austin, J.; Alaka, G.; Cooke, W.F.; Delworth, T.L.; Freidenreich, S.M.; Gordon, C.T.; Griffies, S.M.; Held, I.M.; Hurlin, W.J.; Klein, S.A.; Knutson, T.R.; Langenhorst, A.R.; Lee, H.-C.; Lin, Y.; Magi, B.I.; Malyshev, S.L.; Milly, P.C.D.; Naik, V.; Nath, M.J.; Pincus, R.; Ploshay, J.J.; Ramaswamy, V.; Seman, C.J.; Shevliakova, E.; Sirutis, J.J.; Stern, W.F.; Stouffer, R.J.; Wilson, R.J.; Winton, M.; Wittenberg, A.T.; Zeng, F.

    2011-01-01

    The Geophysical Fluid Dynamics Laboratory (GFDL) has developed a coupled general circulation model (CM3) for the atmosphere, oceans, land, and sea ice. The goal of CM3 is to address emerging issues in climate change, including aerosol-cloud interactions, chemistry-climate interactions, and coupling between the troposphere and stratosphere. The model is also designed to serve as the physical system component of earth system models and models for decadal prediction in the near-term future, for example through improved simulation of tropical land precipitation relative to earlier-generation GFDL models. This paper describes the dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component (AM3) of this model. Relative to GFDL AM2, AM3 includes new treatments of deep and shallow cumulus convection, cloud droplet activation by aerosols, subgrid variability of stratiform vertical velocities for droplet activation, and atmospheric chemistry driven by emissions with advective, convective, and turbulent transport. AM3 employs a cubed-sphere implementation of a finite-volume dynamical core and is coupled to LM3, a new land model with ecosystem dynamics and hydrology. Its horizontal resolution is approximately 200 km, and its vertical resolution ranges from approximately 70 m near the earth's surface to 1 to 1.5 km near the tropopause and 3 to 4 km in much of the stratosphere. Most basic circulation features in AM3 are simulated as realistically as, or more realistically than, in AM2. In particular, dry biases have been reduced over South America. In coupled mode, the simulation of Arctic sea ice concentration has improved. AM3 aerosol optical depths, scattering properties, and surface clear-sky downward shortwave radiation are more realistic than in AM2. The simulation of marine stratocumulus decks remains problematic, as in AM2. The most intense 0.2% of precipitation rates occur less frequently in AM3 than observed.
The last two decades of

  17. Integrated cumulus ensemble and turbulence (ICET): An integrated parameterization system for general circulation models (GCMs)

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J.L.; Frank, W.M.; Young, G.S. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.

  18. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    Science.gov (United States)

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.

  19. Parameterized examination in econometrics

    Science.gov (United States)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL: integrating third-party mathematical libraries, developing our own procedures from scratch, and wrapping legacy math codes in order to modernize and reuse them.
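
The parameterized-question idea can be sketched in a few lines: draw random inputs, compute the matching answer, and grade numerically. This is an illustrative stand-in, not DisPeL's actual API; the elasticity template and grading tolerance are assumptions.

```python
import random

def make_elasticity_question(rng):
    """Generate one dynamic question with its computed answer.

    Hypothetical template: arc (midpoint) price elasticity of demand
    from two (price, quantity) points.
    """
    p1 = rng.randint(10, 50)
    p2 = p1 + rng.randint(1, 10)
    q1 = rng.randint(100, 200)
    q2 = q1 - rng.randint(5, 50)
    elasticity = ((q2 - q1) / ((q1 + q2) / 2)) / ((p2 - p1) / ((p1 + p2) / 2))
    text = (f"Price rises from {p1} to {p2} and quantity demanded falls "
            f"from {q1} to {q2}. Compute the arc price elasticity of demand.")
    return {"text": text, "answer": round(elasticity, 3)}

def grade(question, submitted, tol=1e-3):
    """Automatic assessment: accept any answer within a numeric tolerance."""
    return abs(submitted - question["answer"]) <= tol

rng = random.Random(42)   # seeding makes each student's variant reproducible
q = make_elasticity_question(rng)
```

Seeding the generator per student yields different but equivalent variants that can all be graded by the same routine.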

  20. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    Science.gov (United States)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulations covering combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by ensuring consistency with the large-scale forcing over the regional domain. This, in turn, helps the two physical parameterizations produce small-scale features closer to observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.

  1. eWaterCycle: A high resolution global hydrological model

    Science.gov (United States)

    van de Giesen, Nick; Bierkens, Marc; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin

    2014-05-01

    In 2013, the eWaterCycle project was started with the ambitious goal of running a high-resolution global hydrological model. The starting point was the PCR-GLOBWB model built by Utrecht University. The software behind this model will be partially re-engineered to enable it to run in a High Performance Computing (HPC) environment. The aim is to have a spatial resolution of 1 km x 1 km. The idea is also to run the model in real-time and forecasting mode, using data assimilation. An on-demand hydraulic model will be available for detailed flow and flood forecasting in support of navigation and disaster management. The project faces a set of scientific challenges. First, to enable the model to run in an HPC environment, model runs were analyzed to determine which parts of the program consumed the most CPU time. These parts were re-coded in Open MPI to allow for parallel processing. Different parallelization strategies are conceivable; in our case, it was decided to use watershed logic as a first step to distribute the analysis. There is rather limited recent experience with HPC in hydrology, and there is much to be learned and adjusted, both on the hydrological modeling side and the computer science side. For example, an interesting early observation was that hydrological models are, due to their localized parameterization, much more memory intensive than models of sister disciplines such as meteorology and oceanography. Because swapping information between memory and disk would be prohibitively slow, memory management becomes crucial. A standard Ensemble Kalman Filter (EnKF) would, for example, have excessive memory demands. To circumvent these problems, an alternative to the EnKF was developed that produces equivalent results. This presentation shows the most recent results from the model, including a 5 km x 5 km simulation and a proof of concept for the new data assimilation approach. Finally, some early ideas about financial sustainability of an operational global
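
For context, the memory pressure can be seen in a textbook stochastic EnKF analysis step, which must hold the full state ensemble in memory at once. This is a generic sketch of the standard algorithm, not eWaterCycle or PCR-GLOBWB code.

```python
import numpy as np

def enkf_update(X, H, y, obs_err_std, rng):
    """Stochastic EnKF analysis step (the memory-hungry baseline the
    project sought to replace).

    X : (n_state, n_members) forecast ensemble
    H : (n_obs, n_state) linear observation operator
    y : (n_obs,) observation vector
    """
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # observed anomalies
    R = (obs_err_std ** 2) * np.eye(len(y))
    # Kalman gain from sample covariances
    K = (A @ HA.T / (n - 1)) @ np.linalg.inv(HA @ HA.T / (n - 1) + R)
    # perturbed observations, one independent draw per member
    Y = y[:, None] + rng.normal(0.0, obs_err_std, size=(len(y), n))
    return X + K @ (Y - HX)                          # analysis ensemble
```

For a 1 km global state vector, storing `X` for even a modest ensemble dwarfs the memory footprint of the model itself, which motivates the project's equivalent-but-cheaper alternative.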

  2. Parameterization of disorder predictors for large-scale applications requiring high specificity by using an extended benchmark dataset

    Directory of Open Access Journals (Sweden)

    Eisenhaber Frank

    2010-02-01

    Background: Algorithms designed to predict protein disorder play an important role in structural and functional genomics, as disordered regions have been reported to participate in important cellular processes. Consequently, several methods with different underlying principles for disorder prediction have been independently developed by various groups. For assessing their usability in automated workflows, we are interested in identifying parameter settings and threshold selections under which the performance of these predictors becomes directly comparable. Results: First, we derived a new benchmark set that accounts for different flavours of disorder, complemented with a similar amount of order annotation derived for the same protein set. We show that, using the recommended default parameters, the programs tested produce a wide range of predictions at different levels of specificity and sensitivity. We identify settings in which the different predictors have the same false positive rate. We assess conditions when sets of predictors can be run together to derive consensus or complementary predictions. This is useful in the framework of proteome-wide applications where high specificity is required, such as in our in-house sequence analysis pipeline and the ANNIE webserver. Conclusions: This work identifies parameter settings and thresholds for a selection of disorder predictors to produce comparable results at a desired level of specificity over a newly derived benchmark dataset that accounts equally for ordered and disordered regions of different lengths.
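
The calibration step described above, choosing a per-predictor threshold so that all predictors share a target false positive rate, can be sketched as follows. The score distributions here are synthetic and purely illustrative.

```python
import numpy as np

def threshold_for_fpr(scores_on_ordered, target_fpr):
    """Decision threshold at which a disorder predictor, evaluated on
    ordered (negative) residues, attains a target false positive rate."""
    return float(np.quantile(np.asarray(scores_on_ordered), 1.0 - target_fpr))

# Hypothetical score distributions for the two residue classes.
rng = np.random.default_rng(0)
ordered = rng.normal(0.3, 0.1, 10000)      # predictor scores on ordered residues
disordered = rng.normal(0.7, 0.1, 10000)   # scores on disordered residues

t = threshold_for_fpr(ordered, 0.05)       # calibrate to a 5% FPR
fpr = (ordered > t).mean()
sensitivity = (disordered > t).mean()
```

Applying the same procedure to each predictor's own score distribution puts all of them at the same specificity, making their sensitivities directly comparable.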

  3. Parameterization of solar flare dose

    International Nuclear Information System (INIS)

    Lamarche, A.H.; Poston, J.W.

    1996-01-01

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP)
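
A minimal sketch of parameterizing dose as a function of time: fit a saturating-exponential dose curve by scanning a grid of time constants. Both the functional form and the data are assumptions for illustration, not those of the cited work.

```python
import numpy as np

def fit_flare_dose(t, dose, taus):
    """Fit cumulative dose D(t) = D_inf * (1 - exp(-t / tau)) by scanning
    candidate time constants; given tau, D_inf has a closed-form
    least-squares solution."""
    best = None
    for tau in taus:
        f = 1.0 - np.exp(-t / tau)
        d_inf = (f @ dose) / (f @ f)                 # linear LSQ given tau
        resid = float(((dose - d_inf * f) ** 2).sum())
        if best is None or resid < best[2]:
            best = (float(d_inf), float(tau), resid)
    return best[0], best[1]

t_obs = np.linspace(0.1, 10.0, 50)                   # hours, synthetic
dose_obs = 2.0 * (1.0 - np.exp(-t_obs / 3.0))        # synthetic dose record
d_inf, tau = fit_flare_dose(t_obs, dose_obs, np.linspace(0.5, 6.0, 56))
```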

  4. A global high-resolution model experiment on the predictability of the atmosphere

    Science.gov (United States)

    Judt, F.

    2016-12-01

    Forecasting high-impact weather phenomena is one of the most important aspects of numerical weather prediction (NWP). Over the last couple of years, a tremendous increase in computing power has facilitated the advent of global convection-resolving NWP models, which allow for the seamless prediction of weather from local to planetary scales. Unfortunately, the predictability of specific meteorological phenomena in these models is not very well known. This raises questions about which forecast problems are potentially tractable, and what the value of global convection-resolving model predictions is for the end user. To address this issue, we use the Yellowstone supercomputer to conduct a global high-resolution predictability experiment with the recently developed Model for Prediction Across Scales (MPAS). The computing power of Yellowstone enables the model to run at a globally uniform resolution of 4 km with 55 vertical levels (>2 billion grid cells). These simulations, which require 3 million core-hours for the entire experiment, allow for the explicit treatment of organized deep moist convection (i.e., thunderstorm systems). Resolving organized deep moist convection alleviates grave limitations of previous predictability studies, which used either high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. By computing the error growth characteristics in a set of "identical twin" model runs, the experiment will clarify the intrinsic predictability limits of atmospheric phenomena on a wide range of scales, from severe thunderstorms to global-scale wind patterns that affect the distribution of tropical rainfall. Although a major task by itself, this study is intended to be exploratory work for a future predictability experiment going beyond what has so far been feasible. We hope to use CISL's new Cheyenne supercomputer to conduct a similar predictability experiment on a global mesh with 1-2 km resolution. This
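
The "identical twin" diagnostic boils down to tracking the RMSE between two runs that differ only by a tiny initial perturbation. A toy sketch, with the chaotic logistic map standing in for the atmosphere:

```python
import numpy as np

def twin_error_growth(run_a, run_b):
    """Domain-mean RMSE between two 'identical twin' runs at each output
    time; run_a and run_b have shape (n_times, n_points) for one field."""
    return np.sqrt(((run_a - run_b) ** 2).mean(axis=1))

# Twin runs of a chaotic toy system, differing by a round-off-sized
# initial perturbation.
rng = np.random.default_rng(1)
n_times, n_points = 30, 100
x = rng.uniform(0.1, 0.9, n_points)
y = x + 1e-8
a = np.empty((n_times, n_points))
b = np.empty((n_times, n_points))
for step in range(n_times):
    a[step], b[step] = x, y
    x, y = 4.0 * x * (1.0 - x), 4.0 * y * (1.0 - y)  # chaotic logistic map

rmse = twin_error_growth(a, b)   # grows roughly exponentially, then saturates
```

The time at which `rmse` saturates is the toy analogue of the intrinsic predictability limit the experiment measures for each scale of motion.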

  5. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    Science.gov (United States)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the influence of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the Soil and Water Assessment Tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination.
However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management
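
The GLUE-style screening described above can be sketched as follows: compute goodness-of-fit metrics for each realization and keep those that pass the behavioral cutoffs. The cutoff values here are illustrative assumptions, not those used in the study.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency coefficient (1 = perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def percent_bias(obs, sim):
    """Percent bias of simulated vs. observed totals."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim - obs).sum() / obs.sum()

def behavioral(obs, sims, nse_min=0.5, pbias_max=25.0):
    """GLUE-style screening: indices of realizations that reproduce the
    observed streamflow acceptably well under both criteria."""
    return [i for i, sim in enumerate(sims)
            if nash_sutcliffe(obs, sim) >= nse_min
            and abs(percent_bias(obs, sim)) <= pbias_max]
```

Only the behavioral realizations are retained when forming uncertainty bounds on the quantities of interest.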

  6. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed-phase clouds. Mixed-phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSDs in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice
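
As a small illustration of how PSD assumptions feed into effective size: for a gamma PSD n(D) = N0 * D^mu * exp(-lam*D), one common moment-ratio definition of effective diameter has the closed form (mu + 3) / lam. This is a generic sketch (definitions of effective size vary in the literature), not the project's parameterization.

```python
import numpy as np

def effective_diameter_gamma(mu, lam):
    """Effective diameter of a gamma PSD n(D) = N0 * D**mu * exp(-lam * D),
    defined here as the ratio of the third to the second moment."""
    return (mu + 3.0) / lam

def effective_diameter_numeric(mu, lam, n=200000):
    """Trapezoid-rule check of the same moment ratio (N0 cancels)."""
    D = np.linspace(1e-9, 50.0 * (mu + 3.0) / lam, n)
    nD = D ** mu * np.exp(-lam * D)
    w = np.diff(D)
    m3 = (0.5 * (D[1:] ** 3 * nD[1:] + D[:-1] ** 3 * nD[:-1]) * w).sum()
    m2 = (0.5 * (D[1:] ** 2 * nD[1:] + D[:-1] ** 2 * nD[:-1]) * w).sum()
    return m3 / m2
```

An artifact-inflated small mode effectively raises `lam` (more small particles), shrinking the effective diameter, which is the bias the abstract describes.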

  7. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized systems more general and easier, in this paper, a new and intuitive language PSL (Parameterized-system Specification Language) is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.
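
The counting idea behind the symbolic model can be sketched directly: abstract a global state to the tuple of counts of processes in each local state, and explore reachability over count vectors. This is a simplified interleaving sketch (one process moves per step), not the constraint-based PSL tool itself.

```python
from collections import deque

def reachable_counts(n_procs, local_states, initial, transitions):
    """Counter-abstraction reachability: a global state is the tuple of
    counts of identical processes in each local state, so permutations of
    processes collapse to one abstract state.

    transitions maps a local state to the local states one process may
    move to in a single step.
    """
    idx = {s: i for i, s in enumerate(local_states)}
    start = tuple(n_procs if s == initial else 0 for s in local_states)
    seen, frontier = {start}, deque([start])
    while frontier:
        counts = frontier.popleft()
        for src, dsts in transitions.items():
            if counts[idx[src]] == 0:        # no process available to move
                continue
            for dst in dsts:
                nxt = list(counts)
                nxt[idx[src]] -= 1
                nxt[idx[dst]] += 1
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Toy protocol: each process cycles idle -> trying -> critical -> idle.
states = ["idle", "trying", "critical"]
trans = {"idle": ["trying"], "trying": ["critical"], "critical": ["idle"]}
seen = reachable_counts(3, states, "idle", trans)
```

The abstract state space grows polynomially in the number of processes (count vectors) instead of exponentially (process tuples), which is how the counting abstraction sidesteps state explosion.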

  8. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Gustafson Jr., William I.; Berg, Larry K.; Easter, Richard C.; Ghan, Steven J.

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization
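
A minimal sketch of the kind of single-column, mass-flux-driven tracer transport such a parameterization performs, with a non-entraining updraft and compensating subsidence; the formulation here is illustrative, not the actual ECPP scheme.

```python
import numpy as np

def convective_transport_step(q, M, rho_dz, dt):
    """One explicit step of bulk mass-flux tracer transport in a column.

    q      : tracer mixing ratio per layer, index 0 = lowest layer
    M      : updraft mass flux at the n+1 layer interfaces (kg m-2 s-1),
             zero at the bottom and top of the column
    rho_dz : air mass per unit area of each layer (kg m-2)

    A non-entraining updraft lifts layer-0 air upward; compensating
    subsidence carries environmental air downward through each interface.
    Tracer mass is conserved; stable for dt * max(M) / min(rho_dz) <= 1.
    """
    q = np.asarray(q, float)
    n = len(q)
    q_up = q[0]                        # updraft tracer, fed from layer 0
    F = np.zeros(n + 1)                # net upward tracer flux at interfaces
    F[1:n] = M[1:n] * (q_up - q[1:n])  # updraft up minus subsidence down
    return q + dt * (F[:-1] - F[1:]) / rho_dz

# Demo: tracer initially in the lowest layer is detrained where M vanishes.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
M = np.array([0.0, 0.1, 0.1, 0.1, 0.0])
q1 = convective_transport_step(q0, M, np.ones(4), 1.0)
```

In the hybrid approach, the mass fluxes `M` would come from the CRM's horizontal statistics rather than being prescribed.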

  9. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr., William I.; Berg, Larry K.; Easter, Richard C.; Ghan, Steven J. [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  10. Systematic Parameterization of Lignin for the CHARMM Force Field

    Energy Technology Data Exchange (ETDEWEB)

    Vermaas, Joshua; Petridis, Loukas; Beckham, Gregg; Crowley, Michael

    2017-07-06

    Plant cell walls have three primary components: cellulose, hemicellulose, and lignin, the latter of which is a recalcitrant, aromatic heteropolymer that provides structure to plants, water and nutrient transport through plant tissues, and a highly effective defense against pathogens. Overcoming the recalcitrance of lignin is key to effective biomass deconstruction, which would in turn enable the use of biomass as a feedstock for industrial processes. Our understanding of lignin structure in the plant cell wall is hampered by the limitations of the available lignin force fields, which currently account for only a single linkage between lignins and lack explicit parameterization for emerging lignin structures from both natural variants and engineered lignins. Since polymerization of lignin occurs via radical intermediates, multiple C-O and C-C linkages have been isolated, and the current force field represents only a small subset of the diverse lignin structures found in plants. In order to take into account the wide range of lignin polymerization chemistries, monomer and dimer combinations of C-, H-, G-, and S-lignins, as well as those with hydroxycinnamic acid linkages, were subjected to extensive quantum mechanical calculations to establish target data from which to build a complete molecular mechanics force field tuned specifically for diverse lignins. This was carried out in a GPU-accelerated global optimization process, whereby all molecules were parameterized simultaneously using the same internal parameter set. By parameterizing lignin specifically, we are able to represent the interactions and conformations of lignin monomers and dimers more accurately than a general force field does. This new force field will enable computational researchers to study the effects of different linkages on the structure of lignin, as well as construct more accurate plant cell wall models based on observed statistical distributions of lignin that differ between
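
The inner loop of such a workflow, adjusting force-field terms to reproduce QM target data by least squares, can be illustrated with a toy harmonic bond fit; the functional form and numbers are assumptions for illustration, not the CHARMM lignin parameters.

```python
import numpy as np

def fit_harmonic_bond(r, e_qm):
    """Least-squares fit of a harmonic term E = k * (r - r0)**2 to a QM
    energy scan: a toy stand-in for one inner step of the global
    optimization described above."""
    a, b, c = np.polyfit(r, e_qm, 2)   # E = a*r**2 + b*r + c
    return a, -b / (2.0 * a)           # k = a; r0 at the parabola minimum

r = np.linspace(1.30, 1.74, 23)        # hypothetical bond-length scan (Angstrom)
e_qm = 300.0 * (r - 1.52) ** 2         # synthetic QM target energies
k, r0 = fit_harmonic_bond(r, e_qm)
```

The real parameterization optimizes many coupled bonded and nonbonded terms across all monomers and dimers simultaneously, but each term is constrained against QM target data in essentially this way.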

  11. The Global Systems Science High School Curriculum

    Science.gov (United States)

    Gould, A. D.; Sneider, C.; Farmer, E.; Erickson, J.

    2015-12-01

    Global Systems Science (GSS), a high school integrated interdisciplinary science project based at the Lawrence Hall of Science at UC Berkeley, began in the early 1990s as a single book, "Planet at Risk", which addressed only climate change. Federal grants enabled the project to enlist about 150 teachers to field test materials in their classes and then meet in summer institutes to share results and effect changes. The result was a series of smaller modules dealing not only with climate change but also with related topics including energy flow, energy use, ozone, loss of biodiversity, and ecosystem change. Other relevant societal issues have also been incorporated, including economics, psychology, and sociology. The course has many investigations/activities for students to pursue, interviews with scientists working in specific areas of research, and historical contexts. The interconnectedness of a myriad of small and large systems became an overarching theme of the resulting course materials, which are now available to teachers for free online at http://www.globalsystemsscience.org/

  12. Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations

    Science.gov (United States)

    Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide

    2017-01-01

    Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations via parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the ratio of signal to noise (SNR) for average river flow during 1971-2000, simulated in two experiments, with representation of human impacts (VARSOC) and without (NOSOC). It is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for global and varied magnitude for different basins) than those in the NOSOC, which are particularly significant in most areas of Asia and northern areas to the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in each basin. The large additional uncertainties in VARSOC simulations introduced by the inclusion of parameterizations of human impacts raise an urgent need for GHM development based on a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation, and water withdrawals are discussed towards potential directions of improvements for future GHM development.
We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibration of GHMs for not only
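
The SNR metric used above is straightforward to compute from a set of model outputs: the multi-model mean divided by the between-model standard deviation. The numbers below are illustrative only, not values from the study.

```python
import numpy as np

def signal_to_noise(flows):
    """Between-model SNR: multi-model mean divided by the between-model
    standard deviation; flows has shape (n_models, ...)."""
    flows = np.asarray(flows, float)
    return flows.mean(axis=0) / flows.std(axis=0, ddof=1)

# Hypothetical mean annual flow for one basin from four GHMs, without
# (NOSOC) and with (VARSOC) human-impact parameterizations.
nosoc = np.array([10.0, 10.5, 9.5, 10.0])
varsoc = np.array([8.0, 12.0, 9.0, 11.0])
```

A lower SNR in the VARSOC case (models agreeing less closely about the same mean flow) is exactly the additional between-model uncertainty the abstract attributes to the human-impact parameterizations.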

  13. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
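
The sampling machinery such a framework relies on can be illustrated with a minimal random-walk Metropolis sampler; the toy one-parameter posterior below is an assumption for demonstration, not part of BOSS.

```python
import numpy as np

def metropolis(log_post, x0, step, n_iter, rng):
    """Random-walk Metropolis sampler: propose a Gaussian step, accept
    with probability min(1, posterior ratio), otherwise stay put."""
    x = np.asarray(x0, float)
    lp = log_post(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + rng.normal(0.0, step, size=x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior for a single process-rate parameter: N(3, 0.5**2).
log_post = lambda x: -0.5 * ((x[0] - 3.0) / 0.5) ** 2
rng = np.random.default_rng(0)
chain = metropolis(log_post, np.array([0.0]), 0.5, 20000, rng)
posterior_mean = chain[2000:, 0].mean()   # discard burn-in
```

In a scheme like BOSS the log-posterior would compare predicted size-distribution moments against observations, and the chain would explore process-rate parameters (and, with indicator variables, structural choices) jointly.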

  14. A parameterization of cloud droplet nucleation

    International Nuclear Information System (INIS)

    Ghan, S.J.; Chuang, C.; Penner, J.E.

    1993-01-01

    Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parametrization that offers certain advantages over the popular Twomey (1959) parameterization
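
For reference, the Twomey (1959) baseline mentioned above is a power law relating activated droplet number to peak supersaturation; the coefficient values below are illustrative only.

```python
def twomey_activated(c, k, s_max):
    """Twomey (1959) power law: number of activated droplets
    N = c * s_max**k, with s_max the peak supersaturation (%)."""
    return c * s_max ** k

# Illustrative coefficients (roughly maritime-like vs. continental-like
# CCN spectra; not values from the cited work).
n_maritime = twomey_activated(100.0, 0.5, 1.0)      # cm^-3
n_continental = twomey_activated(1000.0, 0.7, 1.0)  # cm^-3
```

A limitation of this form, which parameterizations like the one above address, is that `c` and `k` carry no explicit dependence on aerosol size distribution or updraft velocity.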

  15. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon; Alkhalifah, Tariq Ali

    2016-01-01

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it has the best resolution, obtained from S-wave reflections and converted waves, and little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data at the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  16. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon

    2016-09-15

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it offers the best resolution, obtained from S-wave reflections and converted waves, and little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  17. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. 
This was true for all cases except over deciduous broadleaf
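
The ranking metric used above, the Nash-Sutcliffe Efficiency, can be sketched as follows (toy flux values for illustration, not FLUXNET data):

```python
import numpy as np

def nash_sutcliffe(observed, modeled):
    """Nash-Sutcliffe Efficiency: 1 is a perfect match,
    0 means the model is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return 1.0 - np.sum((observed - modeled) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Toy example: latent heat flux (W m^-2) at a hypothetical flux tower
obs = [120.0, 180.0, 240.0, 200.0, 150.0]
mod = [110.0, 190.0, 230.0, 210.0, 140.0]
print(round(nash_sutcliffe(obs, mod), 3))  # → 0.941
```

Scenarios can then be ranked by their mean efficiency across towers, as done in the study.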

  18. Global Ethics in a High School Curriculum.

    Science.gov (United States)

    Sappir, Susan

    1998-01-01

    Raphi Amram, the late director of Israel's Society for Excellence Through Education, founded the Ethics in Science and Humanities Program operating in Israel and five other countries. Though the ethics program currently operates only in high schools serving high-achieving or gifted students, founders emphasize the universality of its appeal.…

  19. Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation

    Directory of Open Access Journals (Sweden)

    Steve Ghan

    2011-10-01

    Full Text Available One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically-based parameterizations of droplet nucleation are designed to quickly estimate the number nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and compare their performances under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations to cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
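
The parameterizations compared above map the aerosol size distribution, hygroscopicity and cooling rate to a nucleated droplet number. As an illustration only (this is not one of the schemes evaluated in the paper), a minimal kappa-Koehler sketch of the fraction of a lognormal aerosol mode activated at a given peak supersaturation:

```python
from math import erf, log, sqrt

def activated_fraction(s_max, d_median, sigma_g, kappa=0.5, temp=288.0):
    """Fraction of a lognormal aerosol mode activated at peak
    supersaturation s_max (as a fraction, e.g. 0.002 for 0.2%).
    Assumed values: a rough Kelvin coefficient scaled by 1/T;
    real schemes compute it from surface tension and water properties."""
    A = 2.1e-9 * (298.0 / temp)  # Kelvin coefficient (m), rough approximation
    # Critical dry diameter from kappa-Koehler theory
    d_crit = (4.0 * A**3 / (27.0 * kappa * s_max**2)) ** (1.0 / 3.0)
    # Fraction of the lognormal number distribution above d_crit
    u = log(d_crit / d_median) / (sqrt(2.0) * log(sigma_g))
    return 0.5 * (1.0 - erf(u))

# Example: 0.2% peak supersaturation, 100 nm median diameter, sigma_g = 1.8
print(round(activated_fraction(0.002, 100e-9, 1.8), 2))
```

The parameterizations discussed in the article differ mainly in how the peak supersaturation itself is estimated from the cooling rate and the competing aerosol modes.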

  20. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    Science.gov (United States)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is elected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity in the night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as a fixed value, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and 7 and 13% within the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse SDR. Calibration of LDR parameterizations to local conditions
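
The relative MBD and RMSD scores quoted above can be computed as below (a sketch; normalization by the mean observation is assumed here, which may differ from the study's exact definition):

```python
import numpy as np

def relative_mbd(obs, mod):
    """Relative mean bias deviance, as % of the mean observation."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return 100.0 * np.mean(mod - obs) / np.mean(obs)

def relative_rmsd(obs, mod):
    """Relative root mean squared deviance, as % of the mean observation."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return 100.0 * np.sqrt(np.mean((mod - obs) ** 2)) / np.mean(obs)

# Toy hourly clear-sky global SDR (W m^-2), measured vs. parameterized
obs = [400.0, 600.0, 800.0, 700.0, 500.0]
mod = [420.0, 590.0, 830.0, 690.0, 520.0]
print(round(relative_mbd(obs, mod), 2), round(relative_rmsd(obs, mod), 2))
```

A positive MBD indicates the parameterization overestimates the flux on average; RMSD additionally penalizes scatter.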

  1. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2012-06-01

    Full Text Available As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is elected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity in the night.

    We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as a fixed value, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between −2 and 5%, and 7 and 13% within the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse SDR. Calibration of LDR parameterizations

  2. Application of the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1999-01-01

    Different applications of the parameterization of all systems stabilized by a given controller, i.e. the dual Youla parameterization, are considered in this paper. It will be shown how the parameterization can be applied in connection with controller design, adaptive controllers, model validation...

  3. Radiative flux and forcing parameterization error in aerosol-free clear skies.

    Science.gov (United States)

    Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M

    2015-07-16

    Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.

  4. Left Global Hemineglect in High Autism-Spectrum Quotient Individuals

    Directory of Open Access Journals (Sweden)

    Daniel Paul Crewther

    2011-05-01

    Full Text Available Autism remains a significant issue for many individuals due to the social impairment accompanying the disorder. Recent theories present potential relationships between autistic tendency and visual perceptual differences to explore differences in underlying visual pathways. These differences have been explored through the use of global and local stimuli to show differences in perception. This study compared the balance of global versus local perception between sub-groups from the normal population scoring high and low on the Autism Spectrum Quotient (AQ). A diamond illusion task containing rivaling global and local percepts was used to explore the effects of changing the occluder contrast and peripheral viewing upon global/local percept. An increase in global perception relative to increasing eccentricity of the stimulus from a fixation point was also seen in both groups. However, with increasing contrast of the occluding stripes both groups showed an increase in the percentage of global perception. When comparing between groups, the high AQ group showed a significant reduction in global perception compared to the low AQ group when the stimulus was presented in the left hemifield. This difference was not present in the right hemifield. We discuss how global perceptual hemineglect may suggest abnormal parietal function in individuals with high AQ.

  5. Parameterized and resolved Southern Ocean eddy compensation

    Science.gov (United States)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.
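
A "thickness diffusivity formulated in terms of the local stratification" commonly takes an N²-scaled Gent-McWilliams form in CESM-family models; the exact closure used in the study may differ, but a sketch of this kind of formulation is:

```latex
% Sketch of an N^2-dependent Gent--McWilliams thickness diffusivity:
% kappa_ref and the reference stratification N^2_ref are tuning choices,
% and the coefficient is bounded to keep the scheme well behaved.
\kappa_{\mathrm{GM}}(z) \;=\; \kappa_{\mathrm{ref}}\,
\frac{N^{2}(z)}{N^{2}_{\mathrm{ref}}},
\qquad
\kappa_{\min} \;\le\; \kappa_{\mathrm{GM}}(z) \;\le\; \kappa_{\max}
```

Because N² responds to surface buoyancy forcing, a closure of this form lets the parameterized eddy-induced circulation react to wind and sea-ice changes, which is central to the compensation behaviour discussed above.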

  6. Peripheral global neglect in high vs. low autistic tendency

    OpenAIRE

    Crewther, Daniel P.; Crewther, David P.

    2014-01-01

    In addition to its core social deficits, autism is characterized by altered visual perception, with a preference for local percept in those high in autistic tendency. Here, the balance of global vs. local percepts for the perceptually rivalrous diamond illusion was assessed between groups scoring high and low on the Autism Spectrum Quotient (AQ). The global percept of a diamond shape oscillating horizontally behind three occluders can as easily be interpreted as the local percept of four line...

  7. Peripheral global neglect in high versus low autistic tendency

    OpenAIRE

    Daniel Paul Crewther; David Philip Crewther

    2014-01-01

    In addition to its core social deficits, autism is characterised by altered visual perception, with a preference for local percept in those high in autistic tendency. Here, the balance of global versus local percepts for the perceptually rivalrous diamond illusion was assessed between groups scoring high and low on the Autism Spectrum Quotient (AQ). The global percept of a diamond shape oscillating horizontally behind three occluders can as easily be interpreted as the local percept of four l...

  8. Global tracker for the ALICE high level trigger

    International Nuclear Information System (INIS)

    Vik, Thomas

    2006-01-01

    This thesis deals with two main topics. The first is the implementation and testing of a Kalman filter algorithm in the HLT (High Level Trigger) reconstruction code. This will perform the global tracking in the HLT, that is, merging tracklets and hits from the different sub-detectors in the central barrel detector. The second topic is a trigger mode of the HLT which uses the global tracking of particles through the TRD (Transition Radiation Detector), TPC (Time Projection Chamber) and the ITS (Inner Tracking System): The dielectron trigger. Global tracking: The Kalman filter algorithm has been introduced to the HLT tracking scheme. (Author)

  9. Peripheral global neglect in high versus low autistic tendency

    Directory of Open Access Journals (Sweden)

    Daniel Paul Crewther

    2014-04-01

    Full Text Available In addition to its core social deficits, autism is characterised by altered visual perception, with a preference for local percept in those high in autistic tendency. Here, the balance of global versus local percepts for the perceptually rivalrous diamond illusion was assessed between groups scoring high and low on the Autism Spectrum Quotient (AQ). The global percept of a diamond shape oscillating horizontally behind three occluders can as easily be interpreted as the local percept of four line elements, each moving vertically. Increasing the luminance contrast of the occluders with respect to background resulted in an increase of initial global percept in both groups, with no difference in sensitivity between groups. Presenting the target further into the periphery resulted in a marked increase in the percentage of global perception with visual field eccentricity. However, while the performance for centrally presented diamond targets was not different between AQ groups, the peripheral global performance of the High AQ group was significantly reduced compared with the Low AQ group. On the basis of other imaging studies, this peripheral but not foveal global perceptual neglect may indicate an abnormal interaction between striate cortex and the Lateral Occipital Complex, or differences in the deployment of attention between the two groups.

  10. High speed global shutter image sensors for professional applications

    Science.gov (United States)

    Wu, Xu; Meynants, Guy

    2015-04-01

    Global shutter imagers eliminate the motion artifacts of rolling shutter imagers, expanding their use to miscellaneous applications such as machine vision, 3D imaging, medical imaging and space. A low noise global shutter pixel requires more than one non-light-sensitive memory to reduce the read noise, but a larger memory area reduces the fill-factor of the pixels. Modern micro-lens technology can compensate for this fill-factor loss. Backside illumination (BSI) is another popular technique to improve the pixel fill-factor, but some pixel architectures may not reach sufficient shutter efficiency with backside illumination. Non-light-sensitive memory elements make fabrication with BSI possible. Machine vision applications such as fast inspection systems, and medical or scientific applications such as 3D imaging, always ask for high frame rate global shutter image sensors. Thanks to CMOS technology, fast analog-to-digital converters (ADCs) can be integrated on chip. Dual correlated double sampling (CDS) with on-chip ADCs and a high-rate digital interface reduces the read noise and allows more on-chip operation control. As a result, a global shutter imager with a digital interface is a very popular solution for applications with high performance and high frame rate requirements. In this paper we review the global shutter architectures developed at CMOSIS, discuss their optimization process and compare their performances after fabrication.

  11. Neutrosophic Parameterized Soft Relations and Their Applications

    Directory of Open Access Journals (Sweden)

    Irfan Deli

    2014-06-01

    Full Text Available The aim of this paper is to introduce the concept of relation in neutrosophic parameterized soft set (NP-soft set) theory. We have studied some related properties and also put forward some propositions on neutrosophic parameterized soft relations with proofs and examples. The notions of symmetric, transitive, reflexive, and equivalence neutrosophic parameterized soft set relations have been established in our work. Finally, a decision making method on NP-soft sets is presented.

  12. Infrared radiation parameterizations in numerical climate models

    Science.gov (United States)

    Chou, Ming-Dah; Kratz, David P.; Ridgway, William

    1991-01-01

    This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
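
The one-parameter scaling described above replaces a nonhomogeneous absorber path by an equivalent homogeneous one; a hedged sketch of the general form (the exponent m, the temperature factor f, and the reference state are band- and gas-dependent fitting choices, not the paper's exact expressions):

```latex
% One-parameter scaling (sketch): the absorber amount along the
% nonhomogeneous path is rescaled to a reference pressure p_r and
% temperature T_r, and the broadband diffuse transmittance is then
% evaluated for a homogeneous path with the scaled amount.
\tilde{u} \;=\; \int_{\text{path}}
\left(\frac{p}{p_r}\right)^{m} f\!\left(T, T_r\right)\, \mathrm{d}u ,
\qquad
\bar{\mathcal{T}}(u) \;\approx\;
\bar{\mathcal{T}}_{\mathrm{hom}}\!\left(\tilde{u};\, p_r, T_r\right)
```

Two-parameter scaling generalizes this by carrying an effective pressure and an effective temperature along the path, which is why it extends to the middle atmosphere where one-parameter scaling degrades.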

  13. Assessment of two physical parameterization schemes for desert dust emissions in an atmospheric chemistry general circulation model

    Science.gov (United States)

    Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.

    2012-04-01

    Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and highlight the advantages and disadvantages of using either technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Also, sensitivity simulations with and without the nudging option using reanalysis data from ECMWF have shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a quite difficult task to accomplish in a global model, some recommendations are given and ideas for future improvements.

  14. Tuning controllers using the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2000-01-01

    This paper describes the application of the Youla parameterization of all stabilizing controllers and the dual Youla parameterization of all systems stabilized by a given controller in connection with tuning of controllers. In the uncertain case, it is shown that the use of the Youla parameterization...

  15. High Predictive Skill of Global Surface Temperature a Year Ahead

    Science.gov (United States)

    Folland, C. K.; Colman, A.; Kennedy, J. J.; Knight, J.; Parker, D. E.; Stott, P.; Smith, D. M.; Boucher, O.

    2011-12-01

    We discuss the high skill of real-time forecasts of global surface temperature a year ahead issued by the UK Met Office, and their scientific background. Although this is a forecasting and not a formal attribution study, we show that the main instrumental global annual surface temperature data sets since 1891 are structured consistently with a set of five physical forcing factors except during and just after the Second World War. Reconstructions use a multiple application of cross-validated linear regression to minimise artificial skill, allowing time-varying uncertainties in the contribution of each forcing factor to global temperature to be assessed. Mean cross-validated reconstructions for the data sets have total correlations in the range 0.93-0.95, interannual correlations in the range 0.72-0.75 and root mean squared errors near 0.06 °C, consistent with observational uncertainties. Three transient runs of the HadCM3 coupled model for 1888-2002 demonstrate quite similar reconstruction skill from similar forcing factors defined appropriately for the model, showing that skilful use of our technique is not confined to observations. The observed reconstructions show that the Atlantic Multidecadal Oscillation (AMO) likely contributed to the recommencement of global warming between 1976 and 2010 and to the global cooling observed immediately beforehand in 1965-1976. The slowing of global warming in the last decade is likely to be largely due to a phase-delayed response to the downturn in the solar cycle since 2001-2, with no net ENSO contribution. The much reduced trend in 2001-10 is similar in size to other weak decadal temperature trends observed since global warming resumed in the 1970s. The causes of variations in decadal trends can be mostly explained by variations in the strength of the forcing factors. Eleven real-time forecasts of global mean surface temperature for the year ahead for 2000-2010, based on broadly similar methods, provide an independent test of the
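
The cross-validated regression used above to minimise artificial skill can be sketched as follows (a toy illustration with synthetic predictors standing in for the forcing factors, not the Met Office data or exact procedure):

```python
import numpy as np

def loo_cv_reconstruction(X, y):
    """Leave-one-out cross-validated multiple linear regression:
    each year's value is predicted from a fit that excludes that
    year, so the reported skill is not inflated by overfitting."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])   # add intercept column
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(Xb[keep], y[keep], rcond=None)
        preds[i] = Xb[i] @ beta
    return preds

# Toy example: two synthetic "forcing factors" plus small noise
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * rng.normal(size=40)
preds = loo_cv_reconstruction(X, y)
r = np.corrcoef(preds, y)[0, 1]
print(r > 0.9)
```

Because every prediction comes from a model that never saw that data point, the correlation r is an honest estimate of out-of-sample skill.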

  16. Analyses of the stratospheric dynamics simulated by a GCM with a stochastic nonorographic gravity wave parameterization

    Science.gov (United States)

    Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo

    2016-04-01

    The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is the intermittency, linked with the nature of the sources: this property is absent in the majority of the models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results which display lower variability compared to observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and with a fine vertical resolution suitable for capturing the effects of wave-mean flow interaction. We compare the results obtained with two

  17. A new parameterization for waveform inversion in acoustic orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-26

    Orthorhombic anisotropic model inversion is extra challenging because of the multiple parameter nature of the inversion problem. The high number of parameters required to describe the medium introduces considerable trade-offs and additional nonlinearity into a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations starting with the often used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns, but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters is the azimuthal independence of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

  18. Global climate change: Mitigation opportunities high efficiency large chiller technology

    Energy Technology Data Exchange (ETDEWEB)

    Stanga, M.V.

    1997-12-31

    This paper, comprising presentation viewgraphs, examines the impact of high efficiency large chiller technology on world electricity consumption and carbon dioxide emissions. Background data are summarized, and sample calculations are presented. The calculations show that presently available high efficiency chiller technology can substantially reduce energy consumption by large chillers. If this technology is widely implemented on a global basis, it could reduce carbon dioxide emissions by 65 million tons by 2010.
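
A back-of-envelope version of such a sample calculation can be sketched as follows; every number below is an assumed placeholder for illustration, not a figure from the paper:

```python
# CO2 avoided ~= chillers * tons * hours * (kW/ton_old - kW/ton_new) * grid factor
n_chillers = 100_000                 # large chillers worldwide (assumed)
tons_each = 1_000                    # refrigeration tons per chiller (assumed)
hours_year = 2_000                   # equivalent full-load hours per year (assumed)
kw_per_ton_old, kw_per_ton_new = 0.80, 0.55   # assumed chiller efficiencies
grid_kgco2_per_kwh = 0.6             # assumed grid emission factor (kg CO2/kWh)

kwh_saved = n_chillers * tons_each * hours_year * (kw_per_ton_old - kw_per_ton_new)
mt_co2 = kwh_saved * grid_kgco2_per_kwh / 1e9   # megatonnes of CO2 avoided
print(round(mt_co2, 1))
```

With these placeholder inputs the savings land in the tens of megatonnes per year, the same order of magnitude as the paper's 65-million-ton figure; the real calculation depends on actual fleet size, duty cycles and grid mix.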

  19. Parameterization of MARVELS Spectra Using Deep Learning

    Science.gov (United States)

    Gilda, Sankalp; Ge, Jian; MARVELS

    2018-01-01

    Like many large-scale surveys, the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) was designed to operate at a moderate spectral resolution ($\\sim$12,000) for efficiency in observing large samples, which makes the stellar parameterization difficult due to the high degree of blending of spectral features. Two extant solutions to deal with this issue are to utilize spectral synthesis, and to utilize spectral indices [Ghezzi et al. 2014]. While the former is a powerful and tested technique, it can often yield strongly coupled atmospheric parameters, and often requires high spectral resolution (Valenti & Piskunov 1996). The latter, though a promising technique utilizing measurements of equivalent widths of spectral indices, has only been employed for FGK dwarfs and sub-giants and not red-giant branch stars, which constitute ~30% of MARVELS targets. In this work, we tackle this problem using a convolutional neural network (CNN). In particular, we train a one-dimensional CNN on appropriately processed PHOENIX synthetic spectra using supervised training to automatically distinguish the features relevant for the determination of each of the three atmospheric parameters – T_eff, log(g), [Fe/H] – and use the knowledge thus gained by the network to parameterize 849 MARVELS giants. When tested on the synthetic spectra themselves, our estimates of the parameters were consistent to within 11 K, 0.02 dex, and 0.02 dex (in terms of mean absolute errors), respectively. For MARVELS dwarfs, the accuracies are 80 K, 0.16 dex, and 0.10 dex, respectively.
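The pipeline described above (a one-dimensional CNN mapping a spectrum to three atmospheric parameters) can be sketched in miniature with NumPy. This is a minimal illustration of the architecture's forward pass only, with random placeholder filters and weights; the actual MARVELS network, its filter widths, and its training procedure are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_layer(x, kernels):
    """Valid-mode 1D convolution followed by ReLU, one feature map per kernel."""
    maps = np.array([np.convolve(x, k, mode="valid") for k in kernels])
    return np.maximum(maps, 0.0)

def forward(spectrum, kernels, w, b):
    """Conv -> global average pooling -> linear head for the 3 parameters."""
    features = conv1d_layer(spectrum, kernels).mean(axis=1)  # one value per kernel
    return w @ features + b  # (T_eff, log g, [Fe/H]) estimates

# Toy "spectrum": 1000 flux points with a Gaussian absorption-line-like dip.
spectrum = 1.0 - 0.3 * np.exp(-0.5 * ((np.arange(1000) - 300) / 3.0) ** 2)

kernels = rng.normal(size=(8, 15))   # 8 placeholder filters of width 15
w = rng.normal(size=(3, 8)) * 0.1    # placeholder linear-head weights
b = np.array([5700.0, 4.4, 0.0])     # biases near typical solar values

params = forward(spectrum, kernels, w, b)
```

In a trained network the kernels and head weights would be learned from the PHOENIX synthetic spectra by minimizing the parameter-regression error.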

  20. High-resolution global irradiance monitoring from photovoltaic systems

    Science.gov (United States)

    Buchmann, Tina; Pfeilsticker, Klaus; Siegmund, Alexander; Meilinger, Stefanie; Mayer, Bernhard; Pinitz, Sven; Steinbrecht, Wolfgang

    2016-04-01

    Reliable and regionally differentiated power forecasts are required to guarantee an efficient and economic energy transition towards renewable energies. Alongside other renewable energy technologies, e.g. wind turbines, photovoltaic systems are an essential component of this transition, being cost-efficient and simple to install. Reliable power forecasts are, however, required for the grid integration of photovoltaic systems, which among other data requires high-resolution spatio-temporal global irradiance data. Hence the generation of robust, reviewed global irradiance data is an essential contribution to the energy transition. To achieve this goal our studies introduce a novel method which uses photovoltaic power generation to infer global irradiance. The method allows the determination of high-resolution temporal global irradiance data (one data point every 15 minutes at each location) from power data of operated photovoltaic systems. Due to the multitude of installed photovoltaic systems (in Germany), the detailed spatial coverage is much better than, for example, using only global irradiance data from conventional pyranometer networks (e.g. from the German Weather Service). Our method is composed of two components: a forward component, i.e. to conclude from predicted global irradiance to photovoltaic (PV) power, and a backward component, i.e. from PV power with suitable calibration to global irradiance. The forward process is modelled using the radiation transport model libRadtran (B. Mayer and A. Kylling (1)) for clear skies to obtain the characteristics (orientation, size, temperature dependence, …) of individual PV systems. For PV systems in the vicinity of a meteorological station, these data are validated against calibrated pyranometer readings. The forward-modelled global irradiance is used to determine the power efficiency for each photovoltaic system using non-linear optimisation techniques. The backward component uses the power efficiency
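The forward/backward structure can be illustrated with a deliberately simplified linear PV model. Here power is taken as proportional to irradiance through one effective efficiency parameter; the real method uses libRadtran clear-sky modelling and non-linear optimisation, so everything below (the linear model, the closed-form calibration) is an illustrative assumption.

```python
import numpy as np

# Forward component: predicted PV power from modelled irradiance.
def pv_power(irradiance_w_m2, area_m2, efficiency):
    return efficiency * area_m2 * irradiance_w_m2

# Calibration: fit one system's effective efficiency by least squares
# against measured power (closed form for this linear toy model).
def calibrate_efficiency(irradiance, measured_power, area_m2):
    x = area_m2 * irradiance
    return float(np.dot(x, measured_power) / np.dot(x, x))

# Backward component: infer irradiance from measured power.
def irradiance_from_power(power_w, area_m2, efficiency):
    return power_w / (efficiency * area_m2)

# Synthetic clear-sky day sampled every 15 minutes (96 points).
g_true = 800.0 * np.clip(np.sin(np.linspace(0.0, np.pi, 96)), 0.0, None)
p_meas = pv_power(g_true, area_m2=10.0, efficiency=0.18)

eta = calibrate_efficiency(g_true, p_meas, area_m2=10.0)
g_est = irradiance_from_power(p_meas, area_m2=10.0, efficiency=eta)
```

Once the efficiency is calibrated on clear-sky periods, the backward step turns every operating PV system into a 15-minute irradiance sensor.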

  1. Global distribution of urban parameters derived from high-resolution global datasets for weather modelling

    Science.gov (United States)

    Kawano, N.; Varquez, A. C. G.; Dong, Y.; Kanda, M.

    2016-12-01

    Numerical models such as the Weather Research and Forecasting model coupled with a single-layer Urban Canopy Model (WRF-UCM) are powerful tools for investigating the urban heat island. Urban parameters such as average building height (Have), plan area index (λp) and frontal area index (λf) are necessary inputs for the model. In general, these parameters are assumed uniform in WRF-UCM, but this leads to an unrealistic urban representation. Distributed urban parameters can also be incorporated into WRF-UCM to capture detailed urban effects. The problem is that distributed building information is not readily available for most megacities, especially in developing countries. Furthermore, acquiring real building parameters often requires a huge amount of time and money. In this study, we investigated the potential of using globally available satellite-captured datasets for the estimation of the parameters Have, λp, and λf. The global datasets comprise a high spatial resolution population dataset (LandScan by Oak Ridge National Laboratory), nighttime lights (NOAA), and vegetation fraction (NASA). True samples of Have, λp, and λf were acquired from actual building footprints from satellite images and the 3D building databases of Tokyo, New York, Paris, Melbourne, Istanbul, Jakarta and so on. Regression equations were then derived from the block-averaging of spatial pairs of real parameters and global datasets. Results show that two regression curves to estimate Have and λf from the combination of population and nightlight are necessary, depending on the city's level of development. An index which can be used to decide which equation to use for a city is the Gross Domestic Product (GDP). On the other hand, λp has less dependence on GDP but indicates a negative relationship to vegetation fraction. Finally, a simplified but precise approximation of urban parameters through readily-available, high-resolution global datasets and our derived regressions can be utilized to estimate a
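The estimation scheme the abstract describes (a GDP threshold selecting between two regression curves for Have and λf, and a vegetation-fraction regression for λp) can be sketched as below. All coefficients and the GDP threshold are placeholders for illustration, not the paper's fitted values.

```python
import numpy as np

# Hypothetical regression forms: urban parameters from population density
# and nightlight intensity, with the equation chosen by GDP per capita.
def estimate_have(pop, nightlight, gdp_per_capita, threshold=15000.0):
    if gdp_per_capita >= threshold:                    # developed-city curve
        return 2.0 + 0.004 * pop + 0.05 * nightlight
    return 1.0 + 0.002 * pop + 0.02 * nightlight       # developing-city curve

def estimate_lambda_p(vegetation_fraction):
    # Negative relationship to vegetation fraction, clipped to a valid
    # plan-area-index range of [0, 1].
    return float(np.clip(0.6 - 0.5 * vegetation_fraction, 0.0, 1.0))

h_developed = estimate_have(pop=50.0, nightlight=40.0, gdp_per_capita=40000.0)
h_developing = estimate_have(pop=50.0, nightlight=40.0, gdp_per_capita=5000.0)
lp = estimate_lambda_p(vegetation_fraction=0.3)
```

In practice the two Have/λf curves would be fitted by block-averaged regression against the true building samples, and the GDP index only decides which fitted curve applies to a given city.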

  2. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
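A linearized longitudinal model of the kind described reduces to a state-space system xdot = A x + B u built from stability derivatives. The sketch below shows that structure for a generic vehicle; the state ordering and every numerical entry are placeholders, since the actual derivative estimates come from the airship's geometric and aerodynamic data.

```python
import numpy as np

# State x = [u, w, q, theta]: axial velocity, vertical velocity,
# pitch rate, pitch angle. Entries of A and B would be assembled from
# estimated stability derivatives; the values below are placeholders.
A = np.array([
    [-0.02,  0.05,  0.0, -9.81],
    [-0.10, -0.60,  5.0,  0.00],
    [ 0.00, -0.03, -0.4,  0.00],
    [ 0.00,  0.00,  1.0,  0.00],
])
B = np.array([[0.5], [0.0], [0.1], [0.0]])  # e.g. a thrust/elevator input

def step(x, u_in, dt=0.01):
    """One forward-Euler step of xdot = A x + B u_in."""
    return x + dt * (A @ x + B @ u_in)

x = np.zeros(4)
for _ in range(100):                 # 1 s of simulated step response
    x = step(x, np.array([1.0]))
```

Specializing the model to a particular airship and operating point amounts to recomputing A and B from that airship's data, after which standard linear control-design tools apply.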

  3. Global Surgery 2030: a roadmap for high income country actors

    Science.gov (United States)

    Greenberg, Sarah L M; Abdullah, Fizan; Amado, Vanda; Anderson, Geoffrey A; Cossa, Matchecane; Costas-Chavarri, Ainhoa; Davies, Justine; Debas, Haile T; Dyer, George S M; Erdene, Sarnai; Farmer, Paul E; Gaumnitz, Amber; Hagander, Lars; Haider, Adil; Leather, Andrew J M; Lin, Yihan; Marten, Robert; Marvin, Jeffrey T; McClain, Craig D; Meara, John G; Meheš, Mira; Mock, Charles; Mukhopadhyay, Swagoto; Orgoi, Sergelen; Prestero, Timothy; Price, Raymond R; Raykar, Nakul P; Riesel, Johanna N; Riviello, Robert; Rudy, Stephen M; Saluja, Saurabh; Sullivan, Richard; Tarpley, John L; Taylor, Robert H; Telemaque, Louis-Franck; Toma, Gabriel; Varghese, Asha; Walker, Melanie; Yamey, Gavin; Shrime, Mark G

    2016-01-01

    The Millennium Development Goals have ended and the Sustainable Development Goals have begun, marking a shift in the global health landscape. The frame of reference has changed from a focus on 8 development priorities to an expansive set of 17 interrelated goals intended to improve the well-being of all people. In this time of change, several groups, including the Lancet Commission on Global Surgery, have brought a critical problem to the fore: 5 billion people lack access to safe, affordable surgical and anaesthesia care when needed. The magnitude of this problem and the world's new focus on strengthening health systems mandate reimagined roles for and renewed commitments from high income country actors in global surgery. To discuss the way forward, on 6 May 2015, the Commission held its North American launch event in Boston, Massachusetts. Panels of experts outlined the current state of knowledge and agreed on the roles of surgical colleges and academic medical centres; trainees and training programmes; academia; global health funders; the biomedical devices industry, and news media and advocacy organisations in building sustainable, resilient surgical systems. This paper summarises these discussions and serves as a consensus statement providing practical advice to these groups. It traces a common policy agenda between major actors and provides a roadmap for maximising benefit to surgical patients worldwide. To close the access gap by 2030, individuals and organisations must work collectively, interprofessionally and globally. High income country actors must abandon colonial narratives and work alongside low and middle income country partners to build the surgical systems of the future. PMID:28588908

  4. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    Science.gov (United States)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system to enable global high-resolution weather simulation and forecasting at the CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. Full representation of the three-dimensional Coriolis force is considered in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region is set to ensure computation of cross-boundary transport and updating of Dirichlet-type boundary conditions in the solution process of elliptical equations with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, development of the Rossby-Haurwitz wave and a baroclinic wave, are carried out, and excellent computational stability and accuracy of the dynamic core have been confirmed. After implementation of the physical processes of long- and short-wave radiation, cumulus convection, microphysical transformation of water substances and the turbulent processes in the planetary boundary layer, including surface-layer vertical flux parameterization, a long-term run of the model is then carried out under an idealized aqua-planet configuration to test the model physics and model ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like structure of circulation. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical zonal westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The prescribed SST and solar insolation, symmetric about the equator, enhance the ITCZ and tropical precipitation, which is concentrated in the tropical region. 
Additional analysis and

  5. Prediction of heavy rainfall over Chennai Metropolitan City, Tamil Nadu, India: Impact of microphysical parameterization schemes

    Science.gov (United States)

    Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.

    2018-04-01

    This study attempts to investigate the real-time prediction of a heavy rainfall event over the Chennai Metropolitan City, Tamil Nadu, India that occurred on 01 December 2015 using the Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) parameterization schemes of the model on prediction of the heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. Model forecasts were carried out using nested domains, and the impact of model horizontal grid resolution was assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed that strong upper-level divergence and high moisture content at lower levels were favorable for the occurrence of the heavy rainfall event over the northeast coast of Tamil Nadu. The study showed that the forecasted rainfall was more sensitive to the microphysics and PBL schemes than to the LSM schemes. The model provided a better forecast of the heavy rainfall event using the combination of Goddard microphysics, YSU PBL and Noah LSM schemes, and this was mostly attributed to timely initiation and development of the convective system. The forecasts with different horizontal resolutions using cumulus parameterization indicated that the rainfall prediction was not well represented at 9 km and 6 km. The forecast with 3 km horizontal resolution provided better prediction in terms of timely initiation and development of the event. The study highlights that forecasts of heavy rainfall events using a high-resolution mesoscale model with suitable physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.
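A sensitivity study like this one amounts to running the model over a matrix of scheme combinations. The sketch below enumerates such a matrix; the microphysics scheme names come from the abstract, while the specific PBL scheme list and all WRF namelist index values are assumptions that should be checked against the WRF version used.

```python
from itertools import product

# Assumed WRF namelist indices (verify against the WRF User's Guide).
microphysics = {"Lin": 2, "WSM6": 6, "Goddard": 7, "Thompson": 8,
                "Morrison": 10, "WDM6": 16}
pbl = {"YSU": 1, "MYJ": 2, "QNSE": 4, "MYNN2": 5, "ACM2": 7, "BouLac": 8}
lsm = {"Noah": 2, "RUC": 3}

# Full factorial experiment matrix: 6 x 6 x 2 = 72 candidate runs.
experiments = [
    {"mp_physics": mp, "bl_pbl_physics": p, "sf_surface_physics": l}
    for mp, p, l in product(microphysics.values(), pbl.values(), lsm.values())
]
```

Each dictionary would populate the `&physics` section of a `namelist.input`, with the rainfall forecast from each run scored against observations.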

  6. A High-Resolution View of Global Seismicity

    Science.gov (United States)

    Waldhauser, F.; Schaff, D. P.

    2014-12-01

    We present high-precision earthquake relocation results from our global-scale re-analysis of the combined seismic archives of parametric data for the years 1964 to present from the International Seismological Centre (ISC), the USGS's Earthquake Data Report (EDR), and selected waveform data from IRIS. We employed iterative, multistep relocation procedures that initially correct for large location errors present in standard global earthquake catalogs, followed by a simultaneous inversion of delay times formed from regional and teleseismic arrival times of first and later arriving phases. An efficient multi-scale double-difference (DD) algorithm is used to solve for relative event locations to the precision of a few km or less, while incorporating information on absolute hypocenter locations from catalogs such as EHB and GEM. We run the computations on both a 40-core cluster geared towards HTC problems (data processing) and a 500-core HPC cluster for data inversion. Currently, we are incorporating waveform correlation delay time measurements available for events in selected regions, but are continuously building up a comprehensive, global correlation database for densely distributed events recorded at stations with a long history of high-quality waveforms. The current global DD catalog includes nearly one million earthquakes, equivalent to approximately 70% of the number of events in the ISC/EDR catalogs initially selected for relocation. The relocations sharpen the view of seismicity in most active regions around the world, in particular along subduction zones where event density is high, but also along mid-ocean ridges where existing hypocenters are especially poorly located. The new data offers the opportunity to investigate earthquake processes and fault structures along entire plate boundaries at the ~km scale, and provides a common framework that facilitates analysis and comparisons of findings across different plate boundary systems.
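The core of a double-difference relocation is a linearized least-squares solve on differential travel times between event pairs. The toy below shows one Gauss-Newton step for a two-event pair in a uniform-velocity 2D medium; the production algorithm is multi-scale, 3D, and handles millions of events, so this is only a structural sketch with invented geometry.

```python
import numpy as np

v = 5.0  # km/s, uniform velocity in this toy medium
stations = np.array([[0.0, 30.0], [40.0, 0.0], [-25.0, -20.0], [10.0, -35.0]])

def tt(ev, sta):
    """Straight-ray travel time."""
    return np.linalg.norm(ev - sta) / v

def tt_grad(ev, sta):
    """Partial derivatives of travel time w.r.t. event coordinates."""
    d = ev - sta
    return d / (np.linalg.norm(d) * v)

true = np.array([[0.0, 0.0], [1.0, 0.5]])    # "true" event locations (km)
guess = np.array([[0.3, -0.2], [0.6, 0.9]])  # starting locations

# Build the double-difference system G m = d for the event pair:
# residuals of (t_i - t_j), unknowns m = [dx_i, dy_i, dx_j, dy_j].
rows, rhs = [], []
for sta in stations:
    dd_obs = tt(true[0], sta) - tt(true[1], sta)
    dd_cal = tt(guess[0], sta) - tt(guess[1], sta)
    rows.append(np.concatenate([tt_grad(guess[0], sta),
                                -tt_grad(guess[1], sta)]))
    rhs.append(dd_obs - dd_cal)

m, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
updated = guess + m.reshape(2, 2)
```

Because differential times constrain only relative positions, the cluster centroid stays essentially unresolved and `lstsq` returns the minimum-norm update; absolute locations come from anchoring to catalogs such as EHB or GEM, as the abstract notes.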

  7. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    International Nuclear Information System (INIS)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-01-01

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. 
We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional

  8. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). 
We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  9. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    International Nuclear Information System (INIS)

    O'Kane, T J; Frederiksen, J S

    2008-01-01

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  10. Parameterization and measurements of helical magnetic fields

    International Nuclear Information System (INIS)

    Fischer, W.; Okamura, M.

    1997-01-01

    Magnetic fields with helical symmetry can be parameterized using multipole coefficients (a_n, b_n). We present a parameterization that gives the familiar multipole coefficients (a_n, b_n) for straight magnets when the helical wavelength tends to infinity. To measure helical fields, all methods used for straight magnets can be employed. We show how to convert the results of those measurements to obtain the desired helical multipole coefficients (a_n, b_n).
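The straight-magnet limit the abstract refers to is the familiar complex multipole expansion of the transverse field. A minimal evaluation sketch follows; note that indexing and normalization conventions for (a_n, b_n) vary between laboratories, so the convention below (n = 0 as the leading term, reference radius r_ref) is just one common choice.

```python
def transverse_field(z, b, a, r_ref=1.0):
    """Evaluate B_y + i*B_x = sum_n (b_n + i*a_n) * (z / r_ref)**n for a
    straight-magnet multipole expansion, with z = x + i*y the transverse
    position in the magnet aperture."""
    z = complex(z)
    return sum((bn + 1j * an) * (z / r_ref) ** n
               for n, (bn, an) in enumerate(zip(b, a)))

# Pure normal n = 1 term (gradient-like): b_1 = 1, everything else zero,
# so the field grows linearly with transverse offset.
f = transverse_field(0.5 + 0.0j, b=[0.0, 1.0], a=[0.0, 0.0])
print(f)  # (0.5+0j)
```

Converting measured helical-field data into helical (a_n, b_n) then amounts to fitting this same functional form with the helical-symmetry phase folded in.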

  11. Countering SQL Injection Attacks with Parameterized Queries

    Directory of Open Access Journals (Sweden)

    Yulianingsih Yulianingsih

    2016-06-01

    Full Text Available As information services grow, so does the security vulnerability of an information source. This paper presents an experimental study of database attacks via SQL injection. The attack is carried out through the authentication page, as this page is the first gateway for access and should therefore have adequate defenses. Experiments on the Parameterized Query method were then conducted to obtain a solution to this problem.   Keywords: information services, attacks, experiment, SQL Injection, Parameterized Query.
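The defense the paper studies can be shown in a few lines: with a parameterized query, the driver binds user input as data, never as SQL, so an injection payload in the login form is compared as a literal string. This standalone sketch uses Python's built-in sqlite3 module with an in-memory database (the paper's own stack is not specified here).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(name, password):
    # Placeholders (?) make the driver bind the values as data,
    # never splicing them into the SQL text.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (name, password),
    ).fetchone()
    return row is not None

print(login("alice", "s3cret"))        # True
print(login("alice", "' OR '1'='1"))   # False: payload is a literal string
```

The vulnerable alternative, building the query by string concatenation, would turn the same payload into a tautology and grant access, which is exactly what parameterization prevents.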

  12. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex-system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  13. Non-perturbative Aspects of QCD and Parameterized Quark Propagator

    Institute of Scientific and Technical Information of China (English)

    HAN Ding-An; ZHOU Li-Juan; ZENG Ya-Guang; GU Yun-Ting; CAO Hui; MA Wei-Xing; MENG Cheng-Ju; PAN Ji-Huan

    2008-01-01

    Based on the Global Color Symmetry Model, the non-perturbative QCD vacuum is investigated with a parameterized, fully dressed quark propagator. Our theoretical predictions for various quantities characterizing the QCD vacuum are in agreement with those predicted by many other phenomenological QCD-inspired models. The successful predictions clearly indicate the extensive validity of the parameterized quark propagator used here. A detailed discussion of the arbitrariness in determining the integration cut-off parameter in calculating QCD vacuum condensates is given, and a method that avoids the dependence of the calculated results on the cut-off parameter is also recommended to readers.

  14. High Resolution Global Electrical Conductivity Variations in the Earth's Mantle

    Science.gov (United States)

    Kelbert, A.; Sun, J.; Egbert, G. D.

    2013-12-01

    Electrical conductivity of the Earth's mantle is a valuable constraint on the water content and melting processes. In Kelbert et al. (2009), we obtained the first global inverse model of electrical conductivity in the mantle capable of providing constraints on the lateral variations in mantle water content. However, in doing so we had to compromise on the problem complexity by using the historically very primitive ionospheric and magnetospheric source assumptions. In particular, possible model contamination by the auroral current systems had greatly restricted our use of available data. We have now addressed this problem by inverting for the external sources along with the electrical conductivity variations. In this study, we still focus primarily on long period data that are dominated by quasi-zonal source fields. The improved understanding of the ionospheric sources allows us to invert the magnetic fields directly, without a correction for the source and/or the use of transfer functions. It allows us to extend the period range of available data to 1.2 days - 102 days, achieving better sensitivity to the upper mantle and transition zone structures. Finally, once the source effects in the data are accounted for, a much larger subset of observatories may be used in the electrical conductivity inversion. Here, we use full magnetic fields at 207 geomagnetic observatories, which include mid-latitude, equatorial and high latitude data. Observatory hourly means from the years 1958-2010 are employed. The improved quality and spatial distribution of the data set, as well as the high resolution modeling and inversion using degree and order 40 spherical harmonics mapped to a 2x2 degree lateral grid, all contribute to the much improved resolution of our models, representing a conceptual step forward in global electromagnetic sounding. We present a fully three-dimensional, global electrical conductivity model of the Earth's mantle as inferred from ground geomagnetic

  15. Globalization

    Directory of Open Access Journals (Sweden)

    Tulio Rosembuj

    2006-12-01

    Full Text Available There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, that the subjects who perform it differ, and so do its objectives. The global is an invisible invasion of materials and immediate effects.

  17. Global Monte Carlo Simulation with High Order Polynomial Expansions

    International Nuclear Information System (INIS)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-01-01

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in a high-order functional basis, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages over global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
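The coefficient estimation described above is simple to sketch: each Legendre coefficient is just a sample mean of the basis function over source sites, and the zeroth coefficient is the flat mode the abstract mentions. A minimal illustration, using a made-up 1-D source density on [-1, 1] in place of a real fission source:

```python
import numpy as np
from numpy.polynomial.legendre import legval, legvander

rng = np.random.default_rng(0)

# Illustrative source density on [-1, 1]: f(x) = (pi/4) cos(pi x / 2),
# which integrates to 1 (a stand-in for a fission source distribution).
def sample_source(n):
    # Rejection sampling against a uniform envelope.
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0, size=n)
        u = rng.uniform(0.0, 1.0, size=n)
        out.extend(x[u < np.cos(np.pi * x / 2.0)])
    return np.array(out[:n])

def fet_coefficients(samples, order):
    # a_n = (2n+1)/2 * E[P_n(x)]: each coefficient is a sample mean,
    # accumulated during the random walk in a real Monte Carlo code.
    pn_means = legvander(samples, order).mean(axis=0)
    n = np.arange(order + 1)
    return (2 * n + 1) / 2.0 * pn_means

x = sample_source(200_000)
coeffs = fet_coefficients(x, order=6)
f_hat = legval(0.0, coeffs)     # reconstructed density at x = 0
print(coeffs[0], f_hat)         # a_0 = 0.5 is the "flat mode"
```

The reconstruction at x = 0 should approach pi/4 ≈ 0.785 as the sample size and expansion order grow.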

  18. Global risk of pharmaceutical contamination from highly populated developing countries.

    Science.gov (United States)

    Rehman, Muhammad Saif Ur; Rashid, Naim; Ashfaq, Muhammad; Saif, Ameena; Ahmad, Nasir; Han, Jong-In

    2015-11-01

    The global pharmaceutical industry has relocated from the west to Asian countries to ensure competitive advantage. This industrial relocation has posed serious threats to the environment. The present study was carried out to assess the possible pharmaceutical contamination in the environment of emerging pharmaceutical manufacturing countries (Bangladesh, China, India and Pakistan). Although these countries have made tremendous progress in the pharmaceutical sector, most of their industrial units discharge wastewater into the domestic sewage network without any treatment. The application of untreated wastewater (industrial and domestic) and biosolids (sewage sludge and manure) in agriculture causes the contamination of surface water, soil, groundwater, and the entire food web with pharmaceutical compounds (PCs), their metabolites and transformed products (TPs), and multidrug resistant microbes. This pharmaceutical contamination in Asian countries poses global risks via product export and international traveling. Several prospective research hypotheses including the development of new analytical methods to monitor these PCs/TPs and their metabolites, highly resistant microbial strains, and mixture toxicity as a consequence of pharmaceutical contamination in these emerging pharmaceutical exporters have also been proposed based on the available literature. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Firefly Algorithm for Polynomial Bézier Surface Parameterization

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
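The firefly algorithm named above can be sketched in a few lines. This is a generic minimal implementation on a toy objective standing in for the point-to-surface fitting error; the population size, damping schedule, and attractiveness constants are arbitrary choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(f, dim, n_fireflies=20, iters=200,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    intensity = np.array([f(p) for p in x])
    for t in range(iters):
        a = alpha * (0.97 ** t)      # slowly damp the random step
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:   # j is brighter (lower cost)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + a * rng.normal(size=dim)
                    x[i] = np.clip(x[i], lo, hi)
                    intensity[i] = f(x[i])
    best = np.argmin(intensity)
    return x[best], intensity[best]

# Toy objective standing in for the surface-parameterization fitting error.
sphere = lambda p: float(np.sum(p ** 2))
best_x, best_f = firefly_minimize(sphere, dim=4)
print(best_x, best_f)
```

In the parameterization problem, the decision vector would hold the candidate parametric values of the data points and `f` the approximation error of the resulting Bézier surface.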

  20. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
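The SMM's core ingredient, a transition matrix between velocity classes, can be illustrated directly. The persistent log-velocity series below is a synthetic stand-in for Lagrangian data (not the paper's inverse BTC procedure), but the class discretization and transition counting are the standard construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Lagrangian speeds with persistence (stand-in for pore-scale data):
# an AR(1) process in log-speed, so successive speeds are correlated.
n_steps, n_classes = 50_000, 5
v = np.empty(n_steps)
v[0] = 1.0
for k in range(1, n_steps):
    v[k] = np.exp(0.8 * np.log(v[k - 1]) + 0.5 * rng.normal())

# Equiprobable velocity classes from the speed quantiles.
edges = np.quantile(v, np.linspace(0, 1, n_classes + 1))
cls = np.clip(np.searchsorted(edges, v, side="right") - 1, 0, n_classes - 1)

# Transition matrix T[i, j] = P(next class = j | current class = i).
T = np.zeros((n_classes, n_classes))
for a, b in zip(cls[:-1], cls[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

print(np.round(T, 2))   # diagonal dominance reflects velocity correlation
```

An uncorrelated walk would give every row of `T` the uniform value 1/5; values above that on the diagonal are exactly the correlation the SMM encodes.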

  1. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
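The scattering treatment mentioned above, scaling optical properties by the single-scattering albedo and asymmetry factor, can be illustrated with the standard delta-Eddington similarity relations; the memo's exact scaling for thermal radiation may differ in detail, so this is a generic sketch of the idea:

```python
def delta_eddington_scale(tau, omega, g):
    """Delta-Eddington similarity scaling (illustrative; not necessarily
    the memo's exact formulation for thermal scattering).

    tau   : extinction optical thickness
    omega : single-scattering albedo
    g     : asymmetry factor
    """
    f = g * g                                     # forward-peak fraction
    tau_s = (1.0 - omega * f) * tau               # scaled optical thickness
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# A water cloud layer in the IR window: strong forward scattering.
print(delta_eddington_scale(tau=2.0, omega=0.5, g=0.85))
```

The effect is to fold the strongly forward-scattered part of the phase function into unscattered transmission, so that a cheap absorption-style flux calculation remains accurate.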

  2. Towards a Global High Resolution Peatland Map in 2020

    Science.gov (United States)

    Barthelmes, Alexandra; Barthelmes, Karen-Doreen; Joosten, Hans; Dommain, Rene; Margalef, Olga

    2015-04-01

    Some 3% of land area on planet Earth (approx. 4 million km2) is covered by peatlands. About 10% of this (~0.3% of the land area) is drained and is responsible for a disproportionate 5% of global anthropogenic CO2 emissions (Victoria et al., 2012). Additionally, peatland drainage and degradation lead to land subsidence, soil degradation, water pollution, and enhanced susceptibility to fire (Holden et al., 2004; Joosten et al., 2012). The global importance of peatlands for carbon storage and climate change mitigation has now been recognized in international policy - since 2008 organic soils have been a subject of discussion in the UN Framework Convention on Climate Change (UNFCCC) (Joosten, 2011). In May 2013 the European Parliament decided that the global post-2020 climate agreement should include the obligation to report emissions and removals from peatland drainage and rewetting. Implementation of such a program, however, necessitates the rapid availability of reliable, comprehensive, high resolution, spatially explicit data on the extent and status of peatlands. For many reporting countries this requires an innovation in peatland mapping, i.e. the better and integrative use of novel, but already available methods and technologies. We developed an approach that links various science networks, methodologies and data bases, including those of peatland/landscape ecology for understanding where and how peatlands may occur, those of remote sensing for identifying possible locations, and those of pedology (legacy soil maps) and (palaeo-)ecology for ground truthing. Such integration of old field data, specialized knowledge, and modern RS and GIS technologies enables acquiring a rapid, comprehensive, detailed and rather reliable overview, even on a continental scale. We illustrate this approach with a high resolution overview of peatland distribution, area, status and greenhouse gas fluxes e.g. for the East African countries Rwanda, Burundi, Uganda and Zambia. Furthermore, we

  3. Global flow of glasma in high energy nuclear collisions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Guangyao; Fries, Rainer J., E-mail: rjfries@comp.tamu.edu

    2013-06-25

    We discuss the energy flow of the classical gluon fields created in collisions of heavy nuclei at collider energies. We show how the Yang–Mills analog of Faraday's Law and Gauss' Law predicts the initial gluon flux tubes to expand or bend. The resulting transverse and longitudinal structure of the Poynting vector field has a rich phenomenology. Besides the well-known radial and elliptic flow in transverse direction, classical quantum chromodynamics predicts a rapidity-odd transverse flow that tilts the fireball for non-central collisions, and it implies a characteristic flow pattern for collisions of non-symmetric systems A+B. The rapidity-odd transverse flow translates into a directed particle flow v₁ which has been observed at RHIC and LHC. The global flow fields in heavy ion collisions could be a powerful check for the validity of classical Yang–Mills dynamics in high energy collisions.

  4. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  5. Globalization

    OpenAIRE

    Andruşcă Maria Carmen

    2013-01-01

    Globalization has highlighted an interdependence among nations, shaped by their daily interaction and sustained through the promotion of peace and efforts to streamline and improve the effectiveness of the global economy. For globalization to function, the developing countries, which can be helped by the developed ones, must be involved. The international community can contribute to the institution of the development environment of the gl...

  6. Parameterized Shower Simulation in Lelaps: a Comparison with Geant4

    International Nuclear Information System (INIS)

    Langeveld, Willy G.J.

    2003-01-01

    The detector simulation toolkit Lelaps[1] simulates electromagnetic and hadronic showers in calorimetric detector elements of high-energy particle detectors using a parameterization based on the algorithms originally developed by Grindhammer and Peters[2] and Bock et al.[3]. The primary motivations of the present paper are to verify the implementation of the parameterization, to explore regions of energy where the parameterization is valid, and to serve as a basis for further improvement of the algorithm. To this end, we compared the Lelaps simulation to a detailed simulation provided by Geant4[4]. A number of different calorimeters, both electromagnetic and hadronic, were implemented in both programs. Longitudinal and radial shower profiles and their fluctuations were obtained from Geant4 over a wide energy range and compared with those obtained from Lelaps. Generally the longitudinal shower profiles are found to be in good agreement in a large part of the energy range, with poorer results at energies below about 300 MeV. Radial profiles agree well in homogeneous detectors, but are somewhat deficient in segmented ones. These deficiencies are discussed.
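The Grindhammer-Peters parameterization describes the average longitudinal shower profile with a gamma-distribution form in shower depth. A hedged sketch of just that profile (the shape parameters `alpha` and `beta` are illustrative inputs here, not the full energy- and material-dependent parameterization of the papers):

```python
import math

def longitudinal_profile(t, energy, alpha, beta):
    """Average longitudinal shower profile dE/dt (gamma-distribution form
    used in Grindhammer-Peters-style parameterizations); t is the depth
    in radiation lengths, energy the incident energy."""
    return (energy * beta * (beta * t) ** (alpha - 1.0)
            * math.exp(-beta * t) / math.gamma(alpha))

alpha, beta = 4.0, 0.5
t_max = (alpha - 1.0) / beta      # depth of the shower maximum
# Midpoint-rule check: the profile integrates back to the incident energy.
total = sum(longitudinal_profile(0.05 * i + 0.025, 10.0, alpha, beta) * 0.05
            for i in range(800))
print(t_max, total)
```

Because the profile is a normalized gamma density scaled by the incident energy, integrating it over depth recovers that energy, which is a convenient sanity check on any implementation.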

  7. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results show that the estimated transition probabilities agree with simulated values and that the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.

  8. A new, high-resolution global mass coral bleaching database.

    Directory of Open Access Journals (Sweden)

    Simon D Donner

    Episodes of mass coral bleaching have been reported in recent decades and have raised concerns about the future of coral reefs on a warming planet. Despite the efforts to enhance and coordinate coral reef monitoring within and across countries, our knowledge of the geographic extent of mass coral bleaching over the past few decades is incomplete. Existing databases, like ReefBase, are limited by the voluntary nature of contributions, geographical biases in data collection, and the variations in the spatial scale of bleaching reports. In this study, we have developed the first-ever gridded, global-scale historical coral bleaching database. First, we conducted a targeted search for bleaching reports not included in ReefBase by personally contacting scientists and divers conducting monitoring in under-reported locations and by extracting data from the literature. This search increased the number of observed bleaching reports by 79%, from 4146 to 7429. Second, we employed spatial interpolation techniques to develop annual 0.04° × 0.04° latitude-longitude global maps of the probability that bleaching occurred for 1985 through 2010. Initial results indicate that the area of coral reefs with a more likely than not (>50%) or likely (>66%) probability of bleaching was eight times higher in the second half of the assessed time period, after the 1997/1998 El Niño. The results also indicate that annual maximum Degree Heating Weeks, a measure of thermal stress, for coral reefs with a high probability of bleaching increased over time. The database will help the scientific community more accurately assess the change in the frequency of mass coral bleaching events, validate methods of predicting mass coral bleaching, and test whether coral reefs are adjusting to rising ocean temperatures.

  9. A new, high-resolution global mass coral bleaching database.

    Science.gov (United States)

    Donner, Simon D; Rickbeil, Gregory J M; Heron, Scott F

    2017-01-01

    Episodes of mass coral bleaching have been reported in recent decades and have raised concerns about the future of coral reefs on a warming planet. Despite the efforts to enhance and coordinate coral reef monitoring within and across countries, our knowledge of the geographic extent of mass coral bleaching over the past few decades is incomplete. Existing databases, like ReefBase, are limited by the voluntary nature of contributions, geographical biases in data collection, and the variations in the spatial scale of bleaching reports. In this study, we have developed the first-ever gridded, global-scale historical coral bleaching database. First, we conducted a targeted search for bleaching reports not included in ReefBase by personally contacting scientists and divers conducting monitoring in under-reported locations and by extracting data from the literature. This search increased the number of observed bleaching reports by 79%, from 4146 to 7429. Second, we employed spatial interpolation techniques to develop annual 0.04° × 0.04° latitude-longitude global maps of the probability that bleaching occurred for 1985 through 2010. Initial results indicate that the area of coral reefs with a more likely than not (>50%) or likely (>66%) probability of bleaching was eight times higher in the second half of the assessed time period, after the 1997/1998 El Niño. The results also indicate that annual maximum Degree Heating Weeks, a measure of thermal stress, for coral reefs with a high probability of bleaching increased over time. The database will help the scientific community more accurately assess the change in the frequency of mass coral bleaching events, validate methods of predicting mass coral bleaching, and test whether coral reefs are adjusting to rising ocean temperatures.
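The gridded-probability idea can be sketched with simple inverse-distance weighting of point reports onto a 0.04° grid. The reports below and the interpolation choice are illustrative stand-ins, not the database's actual records or scheme:

```python
import numpy as np

# Hypothetical bleaching reports: (lon, lat, bleached: 1 = yes, 0 = no).
reports = np.array([
    [150.1, -20.3, 1.0],
    [150.6, -20.1, 1.0],
    [151.2, -21.0, 0.0],
    [149.8, -20.8, 0.0],
])

# A 0.04-degree grid over a small window, matching the database resolution.
lons = np.arange(149.5, 151.5, 0.04)
lats = np.arange(-21.5, -19.5, 0.04)
grid = np.zeros((lats.size, lons.size))

# Inverse-distance weighting: one simple interpolation choice, not
# necessarily the scheme used to build the database.
for iy, la in enumerate(lats):
    for ix, lo in enumerate(lons):
        d2 = (reports[:, 0] - lo) ** 2 + (reports[:, 1] - la) ** 2 + 1e-9
        w = 1.0 / d2
        grid[iy, ix] = np.sum(w * reports[:, 2]) / np.sum(w)

likely_bleached = (grid > 0.5).mean()  # fraction of cells "more likely than not"
print(likely_bleached)
```

Thresholding the interpolated field at >50% or >66% then yields exactly the "more likely than not" and "likely" bleaching areas the abstract compares across periods.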

  10. Atlantic Multidecadal Oscillation footprint on global high cloud cover

    Science.gov (United States)

    Vaideanu, Petru; Dima, Mihai; Voiculescu, Mirela

    2017-12-01

    Due to the complexity of the physical processes responsible for cloud formation and the relatively short record of continuous satellite data, cloud behavior in a warming climate remains uncertain. Identifying physical links between climate modes and clouds would contribute not only to a better understanding of the physical processes governing their formation and dynamics, but also to an improved representation of the clouds in climate models. Here, we identify the global footprint of the Atlantic Multidecadal Oscillation (AMO) on high cloud cover, with focus on the tropical and North Atlantic, the tropical Pacific and the circum-Antarctic sector. In the tropical band, the sea surface temperature (SST) and high cloud cover (HCC) anomalies are positively correlated, indicating a dominant role played by convection in mediating the influence of the AMO-related SST anomalies on the HCC field. The negative SST-HCC correlation observed in the North Atlantic could be explained by the reduced meridional temperature gradient induced by the positive AMO phase, which would be reflected in fewer storms and negative HCC anomalies. A similar negative SST-HCC correlation around Antarctica could be generated dynamically, as a response to the intensified upward motion in the Ferrel cell. Despite the inherent imperfection of the observed and reanalysis data sets, the AMO footprint on HCC is found to be robust to the choice of dataset, statistical method, and specific time period considered.

  11. Global estimates of high-level brain drain and deficit.

    Science.gov (United States)

    Ioannidis, John P A

    2004-06-01

    Brain drain, the international migration of scientists in search of better opportunities, has been a long-standing concern, but quantitative measurements are uncommon and limited to specific countries or disciplines. We need to understand brain drain at a global level and estimate the extent to which scientists born in countries with low opportunities never realize their potential. Data on 1523 of the most highly cited scientists for 1981-1999 are analyzed. Overall, 31.9% of these scientists did not reside in the country where they were born (range 18.1-54.6% across 21 different scientific fields). There was great variability across developed countries in the proportions of foreign-born resident scientists and emigrating scientists. Countries without a critical mass of native scientists lost most scientists to migration. This loss occurred in both developed and developing countries. Adjusting for population and using the U.S. as reference, the number of highly cited native-born scientists was at least 75% of the expected number in only 8 countries other than the U.S. It is estimated that approximately 94% of the expected top scientists worldwide have been unable to realize their potential due to various adverse conditions. This scientific deficit is likely to help perpetuate those adverse conditions.
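The population adjustment described above is simple arithmetic: take the U.S. per-capita rate of highly cited scientists as the reference, and compare each country's native-born count to the count expected at that rate. A sketch with hypothetical placeholder numbers (not the study's actual counts):

```python
# Population-adjusted comparison of highly cited scientists, using the U.S.
# as the reference rate. All numbers are hypothetical placeholders.
us_scientists, us_population = 815, 280e6
rate = us_scientists / us_population        # top scientists per capita

countries = {
    # name: (population, native-born highly cited scientists) - hypothetical
    "A": (60e6, 140),
    "B": (1000e6, 20),
}
for name, (pop, native) in countries.items():
    expected = rate * pop                   # expected count at the U.S. rate
    print(f"{name}: {native / expected:.3f} of expected")
```

Country "A" here realizes about 80% of its expected top scientists, country "B" well under 1%, which is the kind of ratio underlying the paper's 75% threshold and 94% deficit estimate.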

  12. Gain scheduling using the Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1999-01-01

    Gain scheduling controllers are considered in this paper. The gain scheduling problem where the scheduling parameter vector cannot be measured directly, but needs to be estimated is considered. An estimation of the scheduling vector has been derived by using the Youla parameterization. The use...... in connection with H_inf gain scheduling controllers....

  13. Active Subspaces of Airfoil Shape Parameterizations

    Science.gov (United States)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
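An active subspace is found from the eigendecomposition of the averaged outer product of gradients of the quantity of interest. A minimal sketch on a toy quantity with a known one-dimensional active direction (the function and direction are fabricated for illustration; in the paper the gradients would come from the lift or drag response):

```python
import numpy as np

rng = np.random.default_rng(4)

# Quantity of interest f(x) = (w.x)^2 with a hidden 1-D active direction w.
w = np.array([1.0, 2.0, 0.0, -1.0]) / np.sqrt(6.0)
f_grad = lambda x: 2.0 * (w @ x) * w        # gradient of f

# Active subspace: eigenvectors of C = E[grad f grad f^T] over the design space.
xs = rng.uniform(-1, 1, size=(2000, 4))
grads = np.array([f_grad(x) for x in xs])
C = grads.T @ grads / len(xs)
eigvals, eigvecs = np.linalg.eigh(C)        # ascending eigenvalues

# The dominant eigenvector recovers the active direction (up to sign).
v = eigvecs[:, -1]
print(eigvals[::-1], abs(v @ w))
```

A large gap after the first (or first two) eigenvalues is what licenses the low-dimensional lift and drag approximations described in the abstract.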

  14. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observation, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that are stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf-area-index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential in advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
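Generating spatially correlated trait parameters, the core of such a stochastic parameterization, can be sketched with a Cholesky factor of an assumed covariance. The trait statistics and exponential correlation length below are hypothetical, not TRY values or the study's settings:

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D transect of grid cells with an exponential spatial covariance for a
# plant trait (mean/sd loosely standing in for TRY-style statistics).
n_cells, corr_length = 100, 10.0
x = np.arange(n_cells, dtype=float)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)

# Cholesky factor turns iid normals into spatially correlated fields.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))
trait_mean, trait_sd = 15.0, 3.0
ensemble = trait_mean + trait_sd * (rng.normal(size=(50, n_cells)) @ L.T)

print(ensemble.mean(), ensemble.std())   # marginals recover the trait stats
```

Each of the 50 rows is one correlated trait realization that could parameterize one ensemble member; conditioning on soil moisture or LAI observations would then reweight or update these realizations.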

  15. Parameterization analysis and inversion for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2018-05-01

    Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces considerable trade-offs and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters denoted εd, ηd and δd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters which the data are sensitive to, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of 3 parameters (out of six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1 which provides scattering mainly near

  16. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.; McCabe, Matthew; Evans, J.P.; Wood, E.F.

    2015-01-01

    Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.
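For reference, the Penman-Monteith combination equation that these model variants share can be written in a few lines; the constants below are typical near-surface values for illustration, not any specific model's configuration:

```python
import math

def penman_monteith(rn, g, ta, vpd, ra, rs):
    """Latent heat flux lambda-E (W m-2) from the Penman-Monteith equation.

    rn, g : net radiation and ground heat flux (W m-2)
    ta    : air temperature (deg C)
    vpd   : vapor pressure deficit (kPa)
    ra,rs : aerodynamic and surface resistance (s m-1)
    """
    rho_cp = 1.225 * 1013.0      # air density * specific heat (J m-3 K-1)
    gamma = 0.066                # psychrometric constant (kPa K-1)
    # Tetens saturation vapor pressure (kPa) and slope of its curve (kPa K-1).
    es = 0.6108 * math.exp(17.27 * ta / (ta + 237.3))
    delta = 4098.0 * es / (ta + 237.3) ** 2
    return (delta * (rn - g) + rho_cp * vpd / ra) / (delta + gamma * (1.0 + rs / ra))

# Midday conditions over a moist canopy (illustrative values): ~300 W m-2.
print(penman_monteith(rn=400.0, g=40.0, ta=25.0, vpd=1.5, ra=50.0, rs=70.0))
```

The structural and parameterization choices the abstract discusses enter through `ra` and `rs`, which is why flux estimates are so sensitive to how those resistances are formulated for each biome.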

  17. Global climate feedbacks

    Energy Technology Data Exchange (ETDEWEB)

    Manowitz, B.

    1990-10-01

    The important physical, chemical, and biological events that affect global climate change occur on a mesoscale -- requiring high spatial resolution for their analysis. The Department of Energy has formulated two major initiatives under the US Global Change Program: ARM (Atmospheric Radiation Measurements), and CHAMMP (Computer Hardware Advanced Mathematics and Model Physics). ARM is designed to use ground- and aircraft-based observations to document profiles of atmospheric composition, clouds, and radiative fluxes. With research and models of important physical processes, ARM will delineate the relationships between trace gases, aerosol and cloud structure, and radiative transfer in the atmosphere, and will improve the parameterization of global circulation models. The present GCMs do not model important feedbacks, including those from clouds, oceans, and land processes. The purpose of this workshop is to identify such potential feedbacks, to evaluate the uncertainties in the feedback processes (and, if possible, to parameterize the feedback processes so that they can be treated in a GCM), and to recommend research programs that will reduce the uncertainties in important feedback processes. Individual reports are processed separately for the data bases.

  18. Going Global: Science Issues for the Junior High.

    Science.gov (United States)

    Cronkhite, Louella; And Others

    This book contains a unit on science and global education that is designed to enable students to gain a practical understanding of the world they live in and the confidence to take appropriate action as responsible global citizens. This unit emphasizes cooperative learning that is experiential and participatory. Teachers and students are…

  19. Iridium: Global OTH data communications for high altitude scientific ballooning

    Science.gov (United States)

    Denney, A.

    While the scientific community is no stranger to embracing commercially available technologies, the growth and availability of truly affordable cutting-edge technologies is opening the door to an entirely new means of global communications. For many years high altitude ballooning has provided science an alternative to costly satellite-based experimental platforms. As with any project, evolution becomes an integral part of development. This is especially true in the NSBF ballooning program, where flight durations have evolved from hours in the earlier days to several weeks, and plans are underway to provide missions of up to 100 days. Addressing increased flight durations, the harsh operational environment, and the cumbersome and outdated systems in use, such as the balloon vehicle's Support Instrumentation Package (SIP) and ground-based systems, a new Over-The-Horizon (OTH) communications medium is sought. Current OTH equipment planned to be phased out includes HF commanding systems, ARGOS PTT telemetry downlinks, and INMARSAT data terminals. Other aspects up for review, in addition to the SIP, to utilize this communications medium include pathfinder balloon platforms - thereby adding commanding abilities and increased data rates, plus providing a package for ultra-small experiments to ride aloft. Existing communication systems employed by the National Scientific Balloon Facility ballooning program have not only been limited by increased cost, slow data rates and "special government use only" services such as TDRSS (Tracking and Data Relay Satellite System), but have also had to make special provisions for geographical flight location. Development of the Support Instrumentation Packages, whether for LDB (Long Duration Balloon), ULDB (Ultra Long Duration Balloon) or conventional ballooning, has been plagued by non-standard system configurations requiring additional support equipment for different regions and missions, along with a myriad of backups for redundancy.
Several

  20. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    for a global model. The effects of sea salt, sulfate, and organic carbon particles were also studied with these parameterizations and global model.

  1. High fructose corn syrup and diabetes prevalence: a global perspective.

    Science.gov (United States)

    Goran, Michael I; Ulijaszek, Stanley J; Ventura, Emily E

    2013-01-01

    The overall aim of this study was to evaluate, from a global and ecological perspective, the relationships between availability of high fructose corn syrup (HFCS) and prevalence of type 2 diabetes. Using published resources, country-level estimates (n = 43 countries) were obtained for: total sugar, HFCS and total calorie availability; obesity; two separate prevalence estimates for diabetes; a prevalence estimate for impaired glucose tolerance; and fasting plasma glucose. Pearson's correlations and partial correlations were conducted in order to explore associations between dietary availability and obesity and diabetes prevalence. Diabetes prevalence was 20% higher in countries with higher availability of HFCS compared to countries with low availability, and these differences were retained or strengthened after adjusting for country-level estimates of body mass index (BMI), population and gross domestic product (adjusted diabetes prevalence = 8.0 vs. 6.7%, p = 0.03; fasting plasma glucose = 5.34 vs. 5.22 mmol/L, p = 0.03), despite similarities in obesity and total sugar and calorie availability. These results suggest that countries with higher availability of HFCS have a higher prevalence of type 2 diabetes independent of obesity.
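The partial-correlation adjustment described above (associations between HFCS availability and diabetes prevalence after controlling for country-level covariates) can be illustrated by residualizing both variables on the covariates and correlating the residuals. The data below are synthetic stand-ins, not the study's country-level estimates:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after removing the linear
    influence of the covariates (residual-on-residual correlation)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    # Residualize x and y on the covariates via least squares
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 43  # one row per country, as in the study
bmi = rng.normal(27, 2, n)                     # hypothetical covariate
hfcs = rng.normal(5, 2, n) + 0.5 * bmi         # hypothetical exposure
diabetes = 4 + 0.3 * hfcs + 0.2 * bmi + rng.normal(0, 0.5, n)
r = partial_corr(diabetes, hfcs, bmi.reshape(-1, 1))
```

Here the raw correlation is partly driven by BMI; the partial correlation isolates the HFCS-diabetes association, mirroring the study's adjustment for BMI, population and GDP.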

  2. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW in Interior Alaska: one nearly permafrost-free (LowP sub-basin and one permafrost-dominated (HighP sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC mesoscale hydrological model to simulate runoff, evapotranspiration (ET, and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  3. Parameterizing the interstellar dust temperature

    Science.gov (United States)

    Hocuk, S.; Szűcs, L.; Caselli, P.; Cazaux, S.; Spaans, M.; Esplugues, G. B.

    2017-08-01

    The temperature of interstellar dust particles is of great importance to astronomers. It plays a crucial role in the thermodynamics of interstellar clouds, because of the gas-dust collisional coupling. It is also a key parameter in astrochemical studies that governs the rate at which molecules form on dust. In 3D (magneto)hydrodynamic simulations a simple expression for the dust temperature is often adopted because of computational constraints, while astrochemical modelers tend to keep the dust temperature constant over a large range of parameter space. Our aim is to provide an easy-to-use parametric expression for the dust temperature as a function of visual extinction (AV) and to shed light on the critical dependencies of the dust temperature on the grain composition. We obtain an expression for the dust temperature by semi-analytically solving the dust thermal balance for different types of grains and compare to a collection of recent observational measurements. We also explore the effect of ices on the dust temperature. Our results show that a mixed carbonaceous-silicate type dust with a high carbon volume fraction matches the observations best. We find that ice formation allows the dust to be warmer by up to 15% at high optical depths (AV > 20 mag) in the interstellar medium. Our parametric expression for the dust temperature is presented as Td = [11 + 5.7 × tanh(0.61 − log10(AV))] × χuv^(1/5.9), where χuv is in units of the Draine (1978, ApJS, 36, 595) UV field.
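The parametric expression is easy to evaluate directly; a small sketch (function and variable names are mine):

```python
import math

def dust_temperature(A_V, chi_uv=1.0):
    """Parametric dust temperature [K] as a function of visual
    extinction A_V [mag] and the UV field chi_uv (Draine units):
    Td = [11 + 5.7 * tanh(0.61 - log10(A_V))] * chi_uv**(1/5.9)."""
    return (11.0 + 5.7 * math.tanh(0.61 - math.log10(A_V))) * chi_uv ** (1 / 5.9)

# Dust cools with increasing extinction into the cloud
t_edge = dust_temperature(0.1)    # diffuse medium
t_core = dust_temperature(100.0)  # well-shielded core
```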

  4. A stochastic parameterization for deep convection using cellular automata

    Science.gov (United States)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized that it is important to introduce stochastic elements into the parameterizations (for instance: Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines. Probabilistic evaluation demonstrates an enhanced spread in

  5. Invariant box-parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, Thomas J.; Wagner, DJ

    1998-01-01

    The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements
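For context, the boxes are quartic products of mixing-matrix elements, the coefficients of the oscillating terms in the standard flavor-transition probability; schematically, in standard three-flavor notation (not a quotation from the record above):

```latex
% Boxes: rephasing-invariant quartic products of mixing-matrix elements
B^{\alpha\beta}_{ij} = U_{\alpha i}\, U^{*}_{\beta i}\, U^{*}_{\alpha j}\, U_{\beta j},
\qquad
P(\nu_\alpha \to \nu_\beta) = \delta_{\alpha\beta}
 - 4 \sum_{i>j} \mathrm{Re}\,B^{\alpha\beta}_{ij}\,
   \sin^2\!\frac{\Delta m^2_{ij} L}{4E}
 + 2 \sum_{i>j} \mathrm{Im}\,B^{\alpha\beta}_{ij}\,
   \sin\!\frac{\Delta m^2_{ij} L}{2E}
```

The real parts of the boxes survive when the oscillatory factors average out, while the imaginary parts carry the CP-violating information.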

  6. Parameterized Concurrent Multi-Party Session Types

    Directory of Open Access Journals (Sweden)

    Minas Charalambides

    2012-08-01

    Full Text Available Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.

  7. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    Science.gov (United States)

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
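The iterative conductivity tuning can be sketched as a simple fixed-point loop. Assuming conduction velocity scales roughly as the square root of bulk conductivity (a common monodomain-style approximation, not necessarily the exact update used in the study), each iteration rescales the conductivity by the squared velocity ratio; the toy forward model below stands in for a full bidomain simulation:

```python
import math

def tune_conductivity(v_target, simulate_velocity, sigma0=1.0,
                      tol=1e-3, max_iter=50):
    """Iteratively rescale a bulk conductivity until the simulated
    conduction velocity matches the prescribed one.
    Assumes v ~ sqrt(sigma), so sigma is updated by (v_target / v)**2."""
    sigma = sigma0
    for _ in range(max_iter):
        v = simulate_velocity(sigma)
        if abs(v - v_target) <= tol * v_target:
            break
        sigma *= (v_target / v) ** 2
    return sigma

# Toy "simulator": velocity proportional to sqrt(conductivity)
toy = lambda sigma: 0.06 * math.sqrt(sigma)  # m/s, illustrative constant
sigma = tune_conductivity(v_target=0.6, simulate_velocity=toy)
```

With an exactly square-root forward model the loop converges in one update; a real simulator only follows this scaling approximately, hence the iteration.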

  8. Invariant box parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, T.J.; Wagner, D.

    1998-01-01

    The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements. copyright 1998 American Institute of Physics

  9. Globalization

    DEFF Research Database (Denmark)

    Plum, Maja

    Globalization is often referred to as external to education - a state of affair facing the modern curriculum with numerous challenges. In this paper it is examined as internal to curriculum; analysed as a problematization in a Foucaultian sense. That is, as a complex of attentions, worries, ways...... of reasoning, producing curricular variables. The analysis is made through an example of early childhood curriculum in Danish Pre-school, and the way the curricular variable of the pre-school child comes into being through globalization as a problematization, carried forth by the comparative practices of PISA...

  10. Globalization

    OpenAIRE

    F. Gerard Adams

    2008-01-01

    The rapid globalization of the world economy is causing fundamental changes in patterns of trade and finance. Some economists have argued that globalization has arrived and that the world is "flat". While the geographic scope of markets has increased, the author argues that new patterns of trade and finance are a result of the discrepancies between "old" countries and "new". As the differences are gradually wiped out, particularly if knowledge and technology spread worldwide, the t...

  11. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    OpenAIRE

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or ...

  12. Global High Resolution Sea Surface Flux Parameters From Multiple Satellites

    Science.gov (United States)

    Zhang, H.; Reynolds, R. W.; Shi, L.; Bates, J. J.

    2007-05-01

    Advances in understanding the coupled air-sea system and modeling of the ocean and atmosphere demand increasingly higher resolution data, such as air-sea fluxes of up to 3 hourly and every 50 km. These observational requirements can only be met by utilizing multiple satellite observations. Generation of such high resolution products from multiple-satellite and in-situ observations on an operational basis has been started at the U.S. National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center. Here we describe a few products that are directly related to the computation of turbulent air-sea fluxes. Sea surface wind speed has been observed from in-situ instruments and multiple satellites, with long-term observations ranging from one satellite in mid-1987 to six or more satellites since mid-2002. A blended product with a global 0.25° grid and four snapshots per day has been produced for July 1987 to present, using a near-Gaussian 3-D (x, y, t) interpolation to minimize aliases. Wind direction has been observed from fewer satellites; thus for the blended high resolution vector winds and wind stresses, the directions are taken from the NCEP Reanalysis 2 (run operationally in near real time) for climate consistency. The widely used Reynolds Optimum Interpolation SST analysis has been improved with higher resolutions (daily and 0.25°). The improvements use both infrared and microwave satellite data that are bias-corrected by in-situ observations for the period 1985 to present. The new versions provide very significant improvements in terms of resolving ocean features such as the meandering of the Gulf Stream, the Agulhas Current, the equatorial jets and other fronts. The Ta and Qa retrievals are based on measurements from the AMSU sounder onboard the NOAA satellites. Ta retrieval uses AMSU-A data, while Qa retrieval uses both AMSU-A and AMSU-B observations. The retrieval algorithms are developed using the neural network approach. Training

  13. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  14. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various clouds variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  15. Examining Chaotic Convection with Super-Parameterization Ensembles

    Science.gov (United States)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large-scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, lending to examination of those weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale.

  16. A high-resolution global-scale groundwater model

    Science.gov (United States)

    de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.

    2015-02-01

    Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table in its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.

  17. Net sea–air CO2 flux uncertainties in the Bay of Biscay based on the choice of wind speed products and gas transfer parameterizations

    Directory of Open Access Journals (Sweden)

    P. Otero

    2013-05-01

    Full Text Available The estimation of sea–air CO2 fluxes is largely dependent on wind speed through the gas transfer velocity parameterization. In this paper, we quantify uncertainties in the estimation of the CO2 uptake in the Bay of Biscay resulting from the use of different sources of wind speed such as three different global reanalysis meteorological models (NCEP/NCAR 1, NCEP/DOE 2 and ERA-Interim, one high-resolution regional forecast model (HIRLAM-AEMet, winds derived under the Cross-Calibrated Multi-Platform (CCMP project, and QuikSCAT winds in combination with some of the most widely used gas transfer velocity parameterizations. Results show that net CO2 flux estimations during an entire seasonal cycle (September 2002–September 2003 may vary by a factor of ~ 3 depending on the selected wind speed product and the gas exchange parameterization, with the highest impact due to the last one. The comparison of satellite- and model-derived winds with observations at buoys advises against the systematic overestimation of NCEP-2 and the underestimation of NCEP-1. In the coastal region, the presence of land and the time resolution are the main constraints of QuikSCAT, which turns CCMP and ERA-Interim in the preferred options.

  18. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth based on daily climate data: daily temperature, estimated daylight and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, the models are mostly used one-sidedly, to better understand different tree growth processes, as opposed to statistical methods of analysis (e.g. Generalized Linear Models, Mixed Models, Structural Equations), which can be used for reconstruction and forecasting. Usually the models are used either for testing new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the limiting climatic growth factors. Precise parameterization with VS-Oscilloscope provides valuable information about tree growth processes and the conditions under which they occur (e.g. day of growing season onset, length of the season, minimum/maximum temperature for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219)
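The VS model's non-linear climate response is commonly described as piecewise-linear responses to daily temperature and soil moisture combined by a minimum (most-limiting-factor) principle. A schematic sketch with invented, uncalibrated parameter values (not the VS-Oscilloscope's actual parameters):

```python
def piecewise_response(x, x_min, x_opt1, x_opt2, x_max):
    """Trapezoidal growth response: 0 below x_min and above x_max,
    rising linearly to 1 on the optimal plateau [x_opt1, x_opt2]."""
    if x <= x_min or x >= x_max:
        return 0.0
    if x < x_opt1:
        return (x - x_min) / (x_opt1 - x_min)
    if x > x_opt2:
        return (x_max - x) / (x_max - x_opt2)
    return 1.0

def daily_growth_rate(temperature, soil_moisture, solar=1.0):
    # Liebig-style limitation: growth follows the scarcer resource
    g_t = piecewise_response(temperature, 5.0, 15.0, 22.0, 30.0)   # deg C
    g_w = piecewise_response(soil_moisture, 0.05, 0.2, 0.4, 0.6)   # vol. frac.
    return min(g_t, g_w) * solar
```

Summing such daily rates over a growing season yields a simulated ring-width index that can be compared against observed chronologies during parameterization.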

  19. Global high resolution versus Limited Area Model climate change projections over Europe

    DEFF Research Database (Denmark)

    Déqué, Michel; Jones, R. G.; Wild, M.

    2005-01-01

    the 2071-2100 and the 1961-1990 means is compared with the same diagnostic obtained with nine Regional Climate Models (RCM) all driven by the Hadley Centre atmospheric GCM. The seasonal mean response for 2m temperature and precipitation is investigated. For temperature, GCMs and RCMs behave similarly......, except that GCMs exhibit a larger spread. However, during summer, the spread of the RCMs - in particular in terms of precipitation - is larger than that of the GCMs. This indicates that the European summer climate is strongly controlled by parameterized physics and/or high-resolution processes...... errors are more spread. In addition, GCM precipitation response is slightly but significantly different from that of the RCMs....

  20. Impact of cloud parameterization on the numerical simulation of a super cyclone

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, M.S.; Pattnaik, S.; Salvekar, P.S. [Indian Institute of Tropical Meteorology, Pune (India)

    2012-07-01

    This study examines the role of parameterization of convection and explicit moisture processes on the simulated track, intensity and inner core structure of Orissa super cyclone (1999) in Bay of Bengal (north Indian Ocean). Sensitivity experiments are carried out to examine the impact of cumulus parameterization schemes (CPS) using MM5 model (Version 3.7) in a two-way nested domain (D1 and D2) configuration at horizontal resolutions (45-15 km). Three different cumulus parameterization schemes, namely Grell (Gr), Betts-Miller (BM) and updated Kain Fritsch (KF2), are tested. It is noted that track and intensity both are very sensitive to CPS and comparatively, KF2 predicts them reasonably well. Particularly, the rapid intensification phase of the super cyclone is best simulated by KF2 compared to other CPS. To examine the effect of the cumulus parameterization scheme at high resolution (5 km), the three-domain configuration (45-15-5 km resolution) is utilized. Based on initial results, KF2 scheme is used for both the domains (D1 and D2). Two experiments are conducted: one in which KF2 is used as CPS and another in which no CPS is used in the third domain. The intensity is well predicted when no CPS is used in the innermost domain. The sensitivity experiments are also carried out to examine the impact from microphysics parameterization schemes (MPS). Four cloud microphysics parameterization schemes, namely mixed phase (MP), Goddard microphysics with Graupel (GG), Reisner Graupel (RG) and Schultz (Sc), are tested in these experiments. It is noted that the tropical cyclone tracks and intensity variation have considerable sensitivity to the varying cloud microphysical parameterization schemes. The MPS of MP and Sc could very well capture the rapid intensification phase. The final intensity is well predicted by MP, which is overestimated by Sc. The MPS of GG and RG underestimates the intensity. (orig.)

  1. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II and fair weather cumulus (RICO and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
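A scheme of this kind diagnoses the rain fraction as the probability that the subgrid liquid water content exceeds the collection threshold under the prescribed PDF. A minimal sketch assuming Gaussian subgrid variability (the PDF shape, threshold value and symbols here are illustrative, not the paper's calibrated choices):

```python
import math

def rain_fraction(q_mean, q_sigma, q_crit):
    """Fraction of the grid box where liquid water content q exceeds
    the collection threshold q_crit, for Gaussian subgrid variability
    with mean q_mean and standard deviation q_sigma."""
    if q_sigma <= 0.0:
        return 1.0 if q_mean > q_crit else 0.0
    z = (q_crit - q_mean) / (q_sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)  # P(q > q_crit) for a Gaussian

# When the grid-box mean sits exactly on the threshold,
# half the grid box is diagnosed as precipitating
f = rain_fraction(q_mean=0.5, q_sigma=0.2, q_crit=0.5)  # g/kg, illustrative
```

The same construction with a PDF of relative humidity gives the familiar cloud-fraction diagnosis the abstract mentions; the overlap of the cloud and rain fractions is the additional assumption the scheme has to supply.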

  2. Improving microphysics in a convective parameterization: possibilities and limitations

    Science.gov (United States)

    Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak

    2017-04-01

    The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.

  3. Parameterization of ion-induced nucleation rates based on ambient observations

    Directory of Open Access Journals (Sweden)

    T. Nieminen

    2011-04-01

    Full Text Available Atmospheric ions participate in the formation of new atmospheric aerosol particles, yet their exact role in this process has remained unclear. Here we derive a new simple parameterization for ion-induced nucleation or, more precisely, for the formation rate of charged 2-nm particles. The parameterization is semi-empirical in the sense that it is based on comprehensive results of one-year-long atmospheric cluster and particle measurements in the size range ~1–42 nm within the EUCAARI (European Integrated project on Aerosol Cloud Climate and Air Quality interactions project. Data from 12 field sites across Europe measured with different types of air ion and cluster mobility spectrometers were used in our analysis, with more in-depth analysis made using data from four stations with concomitant sulphuric acid measurements. The parameterization is given in two slightly different forms: a more accurate one that requires information on sulfuric acid and nucleating organic vapor concentrations, and a simpler one in which this information is replaced with the global radiation intensity. These new parameterizations are applicable to all large-scale atmospheric models containing size-resolved aerosol microphysics, and a scheme to calculate concentrations of sulphuric acid, condensing organic vapours and cluster ions.

  4. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    Science.gov (United States)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
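    The combination of a breaking probability with a dissipation rate per unit area can be illustrated with a minimal sketch; the bore-type formula and the constant alpha below are stand-ins, not the Filipot–Ardhuin formulation.

```python
# Minimal sketch: total breaking sink = breaking probability x dissipation
# rate per unit area. The Battjes-Janssen-style bore formula and alpha = 1
# are illustrative assumptions, not the parameterization of the paper.

RHO_W, G = 1025.0, 9.81  # seawater density (kg m^-3), gravity (m s^-2)

def dissipation_per_unit_area(H, f_m, alpha=1.0):
    """Bore-analogy dissipation rate (W m^-2) for breakers of height H (m)
    at mean frequency f_m (Hz)."""
    return (alpha / 4.0) * RHO_W * G * f_m * H ** 2

def breaking_sink(H, f_m, Q_b):
    """Energy sink (W m^-2): a fraction Q_b of the waves are breaking."""
    return Q_b * dissipation_per_unit_area(H, f_m)
```

    Making H, the wavelength, and the depth enter the per-breaker dissipation, as the abstract describes, is what lets one expression serve from deep water to the surf zone.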

  5. Building and calibrating a large-extent and high resolution coupled groundwater-land surface model using globally available data-sets

    Science.gov (United States)

    Sutanudjaja, E. H.; Van Beek, L. P.; de Jong, S. M.; van Geer, F.; Bierkens, M. F.

    2012-12-01

    The current generation of large-scale hydrological models generally lacks a groundwater model component simulating lateral groundwater flow. Large-scale groundwater models are rare due to a lack of hydro-geological data required for their parameterization and a lack of groundwater head data required for their calibration. In this study, we propose an approach to develop a large-extent fully-coupled land surface-groundwater model by using globally available datasets and calibrate it using a combination of discharge observations and remotely-sensed soil moisture data. The underlying objective is to devise a collection of methods that enables one to build and parameterize large-scale groundwater models in data-poor regions. The model used, PCR-GLOBWB-MOD, has a spatial resolution of 1 km x 1 km and operates on a daily basis. It consists of a single-layer MODFLOW groundwater model that is dynamically coupled to the PCR-GLOBWB land surface model. This fully-coupled model accommodates two-way interactions between surface water levels and groundwater head dynamics, as well as between upper soil moisture states and groundwater levels, including a capillary rise mechanism to sustain upper soil storage and thus to fulfill high evaporation demands (during dry conditions). As a test bed, we used the Rhine-Meuse basin, where more than 4000 groundwater head time series have been collected for validation purposes. The model was parameterized using globally available data-sets on surface elevation, drainage direction, land-cover, soil and lithology. Next, the model was calibrated using a brute force approach and massive parallel computing, i.e. by running the coupled groundwater-land surface model for more than 3000 different parameter sets. Here, we varied minimal soil moisture storage and saturated conductivities of the soil layers as well as aquifer transmissivities. Using different regularization strategies and calibration criteria we compared three calibration scenarios
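    The brute-force calibration described above reduces, in essence, to evaluating an objective for every member of a large parameter ensemble and keeping the best set. A toy sketch, in which the parameter names, the stand-in model, and the objective are illustrative assumptions:

```python
import itertools
import math

def toy_model(k_sat, transmissivity):
    """Stand-in for one coupled-model run; returns (discharge, soil moisture).
    A real evaluation would execute PCR-GLOBWB-MOD for this parameter set."""
    return 10.0 * math.sqrt(k_sat) + transmissivity, 0.3 * k_sat

def objective(sim, obs):
    """Sum of squared misfits over (discharge, soil moisture) observations."""
    return sum((s - o) ** 2 for s, o in zip(sim, obs))

obs = (25.0, 0.6)  # pseudo-observations: discharge and soil moisture
param_sets = itertools.product([1.0, 2.0, 4.0],   # saturated conductivity
                               [5.0, 10.0, 20.0]) # aquifer transmissivity
best = min(param_sets, key=lambda p: objective(toy_model(*p), obs))
```

    The individual evaluations are independent, which is why massive parallel computing makes running more than 3000 coupled-model parameter sets feasible.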

  6. Sensitivity of tropical cyclone simulations to microphysics parameterizations in WRF

    International Nuclear Information System (INIS)

    Reshmi Mohan, P.; Srinivas, C.V.; Bhaskaran, R.; Venkatraman, B.; Yesubabu, V.

    2018-01-01

    Tropical cyclones (TCs) cause storm surge along the coastal areas where they cross the coast. As major nuclear facilities are usually installed in coastal regions, surge predictions are highly important for DAE. The critical TC parameters needed to estimate storm surge are intensity (winds, central pressure and radius of maximum winds) and storm track. Numerical model predictions generally represent cloud and precipitation processes using convective and microphysics parameterizations. At high spatial resolutions (1-3 km), the NWP model can act as a cloud-resolving model, explicitly resolving convective precipitation without convection schemes. Recent WRF simulation studies of severe weather phenomena such as thunderstorms and hurricanes indicated large sensitivity of predicted rainfall and hurricane tracks to microphysics, through variations in temperature and pressure gradients which generate the winds that determine the storm track. In the present study, the sensitivity of tropical cyclone tracks and intensity to different microphysics schemes has been examined

  7. High-resolution assessment of global technical and economic hydropower potential

    NARCIS (Netherlands)

    Gernaat, David E.H.J.; Bogaart, Patrick W.; Vuuren, van Detlef P.; Biemans, Hester; Niessink, Robin

    2017-01-01

    Hydropower is the most important renewable energy source to date, providing over 72% of all renewable electricity globally. Yet, only limited information is available on the global potential supply of hydropower and the associated costs. Here we provide a high-resolution assessment of the technical

  8. Climate impacts of parameterized Nordic Sea overflows

    Science.gov (United States)

    Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.

    2010-11-01

    A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North

  9. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

    Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air-sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (Russell ocean model and Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models, such as ventilation, mixing, eddy stirring and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55% (10%) in the Southern Ocean and overestimate it by about 17% (underestimate by 2%) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12-34%, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator, and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing, vertical advection and mesoscale eddy transport. At depth, GISSER, which has a significant warm bias, remineralizes nutrients and carbon faster, thereby producing more nutrients and carbon at depth, which

  10. Strategies for high-precision Global Positioning System orbit determination

    Science.gov (United States)

    Lichten, Stephen M.; Border, James S.

    1987-01-01

    Various strategies for the high-precision orbit determination of the GPS satellites are explored using data from the 1985 GPS field test. Several refinements to the orbit determination strategies were found to be crucial for achieving high levels of repeatability and accuracy. These include the fine tuning of the GPS solar radiation coefficients and the ground station zenith tropospheric delays. Multiday arcs of 3-6 days provided better orbits and baselines than the 8-hr arcs from single-day passes. Highest-quality orbits and baselines were obtained with combined carrier phase and pseudorange solutions.

  11. A test harness for accelerating physics parameterization advancements into operations

    Science.gov (United States)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, the limited availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity, from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a

  12. A simple parameterization of aerosol emissions in RAMS

    Science.gov (United States)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt was made to represent the heterogeneous evolution of aerosol by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was the computational expense. In fact, the computational expense associated with this scheme was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical model.

  13. Systematic high-resolution assessment of global hydropower potential

    NARCIS (Netherlands)

    Hoes, Olivier A C; Meijer, Lourens J J; Van Der Ent, Ruud J.|info:eu-repo/dai/nl/364164794; Van De Giesen, Nick C.

    2017-01-01

    Population growth, increasing energy demand and the depletion of fossil fuel reserves necessitate a search for sustainable alternatives for electricity generation. Hydropower could replace a large part of the contribution of gas and oil to the present energy mix. However, previous high-resolution

  14. Parameterization of Rocket Dust Storms on Mars in the LMD Martian GCM: Modeling Details and Validation

    Science.gov (United States)

    Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas

    2018-04-01

    The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included into the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.

  15. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    Science.gov (United States)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
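    The core of the Up-Scaling Technique, as described, is that each coarse cell retains a stage-volume relationship built from the full-resolution topography. A minimal sketch of that relationship, with cell sizes and elevations chosen purely for illustration:

```python
import numpy as np

# Sketch of the upscaling idea: a coarse cell keeps the full-resolution
# topography only through its stage-volume relationship, so a water surface
# elevation eta maps to a stored volume without re-running at fine
# resolution. The elevations and the 30 m pixel size are illustrative.

def stage_to_volume(eta, fine_elevations, fine_cell_area):
    """Volume (m^3) stored in a coarse cell for water surface elevation eta,
    summed over the fine-resolution DEM pixels inside it."""
    depths = np.maximum(0.0, eta - fine_elevations)
    return float(depths.sum() * fine_cell_area)

z = np.array([1.0, 2.0, 3.0, 4.0])  # four fine DEM pixels in one coarse cell
vol = stage_to_volume(3.5, z, fine_cell_area=900.0)  # 30 m x 30 m pixels
```

    Inter-cell fluxes are built analogously from the full-resolution topography along coarse-cell faces, which is how the coarse grid preserves much of the accuracy of the underlying DEM.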

  16. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    Science.gov (United States)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. 
We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
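    As an example of the kind of empirical relation being tested, one widely cited form (after Price and Rind, 1992) ties total flash rate to maximum updraft speed through a power law; the coefficient and exponent below are commonly quoted continental values, used here as assumptions rather than as the scheme evaluated against the DC3 observations.

```python
# Power-law flash-rate parameterization in maximum updraft speed.
# a and b are commonly quoted continental values (assumed, not fitted here).

def flash_rate_from_wmax(w_max, a=5e-6, b=4.54):
    """Total flash rate (flashes min^-1) from maximum updraft w_max (m s^-1)."""
    return a * w_max ** b
```

    The steep exponent is why dual-Doppler-derived vertical velocities are so valuable for evaluation: a factor-of-two error in w_max changes the predicted flash rate by more than an order of magnitude (2^4.54 ≈ 23).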

  17. Fundamental statistical relationships between monthly and daily meteorological variables: Temporal downscaling of weather based on a global observational dataset

    Science.gov (United States)

    Sommer, Philipp; Kaplan, Jed

    2016-04-01

    Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution is becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
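    A hybrid gamma-generalized Pareto amount model of the kind described can be sketched as a simple mixture: gamma-distributed bulk below a threshold and a GPD tail above it. All parameter values, and the crude mixture construction itself, are illustrative assumptions rather than the fitted WGEN parameterization.

```python
import numpy as np

# Toy hybrid gamma-GPD wet-day precipitation sampler. Shape/scale/threshold
# values and the cap-plus-mixture construction are illustrative assumptions.

rng = np.random.default_rng(42)

def sample_daily_precip(n, shape=0.7, scale=6.0, thresh=25.0, xi=0.2,
                        gpd_scale=8.0, tail_prob=0.05):
    """Draw n wet-day precipitation amounts (mm)."""
    bulk = rng.gamma(shape, scale, size=n)
    bulk = np.minimum(bulk, thresh)        # crude: bulk capped at threshold
    tail = rng.random(n) < tail_prob       # which days fall in the GPD tail
    # GPD sample via inverse CDF: thresh + gpd_scale/xi * ((1-U)^-xi - 1)
    u = rng.random(n)
    gpd = thresh + gpd_scale / xi * ((1.0 - u) ** -xi - 1.0)
    return np.where(tail, gpd, bulk)
```

    The GPD tail is what lets the generator produce the rare, very large daily totals that a pure gamma fit systematically underestimates.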

  18. A high-throughput and sensitive method to measure Global DNA Methylation: Application in Lung Cancer

    Directory of Open Access Journals (Sweden)

    Mamaev Sergey

    2008-08-01

    Full Text Available Abstract Background Genome-wide changes in DNA methylation are an epigenetic phenomenon that can lead to the development of disease. The study of global DNA methylation utilizes technology that requires both expensive equipment and highly specialized skill sets. Methods We have designed and developed an assay, CpGlobal, which is easy-to-use and does not utilize PCR, radioactivity or expensive equipment. CpGlobal utilizes methyl-sensitive restriction enzymes, HRP Neutravidin to detect the biotinylated nucleotides incorporated in an end-fill reaction, and a luminometer to measure the chemiluminescence. The assay shows high accuracy and reproducibility in measuring global DNA methylation. Furthermore, CpGlobal correlates significantly with High Performance Capillary Electrophoresis (HPCE), a gold standard technology. We have applied the technology to understand the role of global DNA methylation in the natural history of lung cancer. Worldwide, it is the leading cause of death attributed to any cancer. The survival rate is 15% over 5 years due to the lack of any clinical symptoms until the disease has progressed to a stage where cure is limited. Results Through the use of cell lines and paired normal/tumor samples from patients with non-small cell lung cancer (NSCLC), we show that global DNA hypomethylation is highly associated with the progression of the tumor. In addition, the results provide the first indication that the normal part of the lung from a cancer patient has already experienced a loss of methylation compared to a normal individual. Conclusion By detecting these changes in global DNA methylation, CpGlobal may have a role as a barometer for the onset and development of lung cancer.

  19. Factors influencing the parameterization of anvil clouds within GCMs

    International Nuclear Information System (INIS)

    Leone, J.M. Jr.; Chin, Hung-Neng.

    1993-03-01

    The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). The authors have concentrated on a small portion of the overall goal, the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved they have been using a multi-scale attack. For the early time generation and development of the cirrus anvil they are using a cloud-scale model with horizontal resolution of 1--2 kilometers; while for the larger scale transport by the larger scale flow they are using a mesoscale model with a horizontal resolution of 20--60 kilometers. The eventual goal is to use the information obtained from these simulations together with available observations to derive improved cloud parameterizations for use in GCMs. This paper presents results from their cloud-scale studies and describes a new tool, a cirrus generator, that they have developed to aid in their mesoscale studies

  20. Ozonolysis of α-pinene: parameterization of secondary organic aerosol mass fraction

    Directory of Open Access Journals (Sweden)

    R. K. Pathak

    2007-07-01

    Full Text Available Existing parameterizations tend to underpredict the α-pinene aerosol mass fraction (AMF) or yield by a factor of 2–5 at low organic aerosol concentrations (<5 µg m⁻³). A wide range of smog chamber results obtained at various conditions (low/high NOx, presence/absence of UV radiation, dry/humid conditions, and temperatures ranging from 15–40°C), collected by various research teams during the last decade, are used to derive new parameterizations of the SOA formation from α-pinene ozonolysis. Parameterizations are developed by fitting experimental data to a basis set of saturation concentrations (from 10⁻² to 10⁴ µg m⁻³) using an absorptive equilibrium partitioning model. Separate parameterizations for α-pinene SOA mass fractions are developed for: (1) low NOx, dark, and dry conditions; (2) low NOx, UV, and dry conditions; (3) low NOx, dark, and high RH conditions; (4) high NOx, dark, and dry conditions; (5) high NOx, UV, and dry conditions. According to the proposed parameterizations, the α-pinene SOA mass fractions in an atmosphere with 5 µg m⁻³ of organic aerosol range from 0.032 to 0.1 for reacted α-pinene concentrations in the 1 ppt to 5 ppb range.
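    The absorptive equilibrium partitioning model behind these fits can be sketched as a fixed-point calculation over the saturation-concentration basis set; the mass-yield values below are made-up placeholders, not the fitted parameterizations.

```python
# Volatility-basis-set partitioning sketch. CSTAR is a saturation-
# concentration basis set (ug m^-3) spanning the range named in the text;
# the yields ALPHA are illustrative placeholders, not fitted values.

CSTAR = [1e-2, 1e-1, 1.0, 1e1, 1e2, 1e3, 1e4]
ALPHA = [0.01, 0.02, 0.04, 0.06, 0.08, 0.10, 0.15]

def aerosol_mass_fraction(reacted, c_oa_init=1.0, n_iter=200):
    """AMF = condensed organic mass / reacted alpha-pinene mass (ug m^-3).
    Iterates C_OA = sum_i alpha_i * reacted / (1 + C*_i / C_OA) to a fixed
    point (absorptive equilibrium partitioning)."""
    c_oa = c_oa_init
    for _ in range(n_iter):
        c_oa = sum(reacted * a / (1.0 + c / c_oa)
                   for a, c in zip(ALPHA, CSTAR))
    return c_oa / reacted
```

    Because lower-volatility bins partition more readily, the AMF rises with the total organic aerosol load, which is the loading dependence the fitted mass fractions encode.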

  1. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

    We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved with either the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be quickly achieved by changing a few numbers without requiring knowledge of the input syntax for the transport code or the tedious and error prone work of recalculating many surface or solid body coefficients. (author)

  2. Phenomenology of convection-parameterization closure

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2013-04-01

    Full Text Available Closure is a problem of defining the convective intensity in a given parameterization. In spite of many years of efforts and progress, it is still considered an overall unresolved problem. The present article reviews this problem from phenomenological perspectives. The physical variables that may contribute in defining the convective intensity are listed, and their statistical significances identified by observational data analyses are reviewed. A possibility is discussed for identifying a correct closure hypothesis by performing a linear stability analysis of tropical convectively coupled waves with various different closure hypotheses. Various individual theoretical issues are considered from various different perspectives. The review also emphasizes that the dominant physical factors controlling convection differ between the tropics and extra-tropics, as well as between oceanic and land areas. Both observational as well as theoretical analyses, often focused on the tropics, do not necessarily lead to conclusions consistent with our operational experiences focused on midlatitudes. Though we emphasize the importance of the interplays between these observational, theoretical and operational perspectives, we also face challenges for establishing a solid research framework that is universally applicable. An energy cycle framework is suggested as such a candidate.

  3. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, the node numbers of the layers being 3821-500-100-50-1. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretic spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metallic abundance ([Fe/H]). The results show that the stacked autoencoder deep neural network achieves a better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 K for Teff, 0.0058 dex for lg (Teff/K), 0.1706 dex for lg (g/(cm·s⁻²)), and 0.1294 dex for [Fe/H]; on the theoretic spectra, the MAEs are 15.34 K for Teff, 0.0011 dex for lg (Teff/K), 0.0214 dex for lg (g/(cm·s⁻²)), and 0.0121 dex for [Fe/H].
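    The quoted architecture can be sketched as a plain feed-forward pass in numpy (forward pass only, with random placeholder weights and an assumed ReLU activation; the stacked-autoencoder pre-training used in the paper is not reproduced here):

```python
import numpy as np

# Forward pass through a 3821-500-100-50-1 multilayer perceptron.
# Weights are random placeholders; ReLU on hidden layers is an assumption.

rng = np.random.default_rng(0)
SIZES = [3821, 500, 100, 50, 1]
params = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
          for m, n in zip(SIZES[:-1], SIZES[1:])]

def predict(spectrum):
    """Map one 3821-pixel flux vector to a single stellar parameter."""
    h = spectrum
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return float(h[0])
```

    The single output node suggests one network per parameter (Teff, lg g, or [Fe/H]), each regressing its target from the same 3821-pixel input.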

  4. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    Science.gov (United States)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.

  5. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than on structure. However, new materials and processing techniques inevitably lead to higher costs, and material substitution and processing-based lightweighting must still be realized through the structural profiles and locations of the body. Conventional lightweight optimization carries a huge workload: model modifications involve heavy manual work and typically lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with a traditional, non-parameterized finite element (FE) structural model. The SFE parameterized structural model is built in accordance with the car structural FE model at the concept development stage and validated against structural performance data. The validated SFE parameterized model can then rapidly and automatically generate FE models and evaluate different groups of design variables in the integrated MDO loop. The lightweighting of the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method combining the implicit parameterized model and MDO, which has clear practical advantages and industrial significance for carbody structural lightweighting design.

  6. Recent developments in high-resolution global altimetric gravity field modeling

    DEFF Research Database (Denmark)

    Andersen, Ole Baltazar; Knudsen, Per; Berry, P. A .M.

    2010-01-01

In recent years, dedicated effort has been made to improve high-resolution global marine gravity fields. One new global field is the Danish National Space Center (DNSC) 1-minute grid called DNSC08GRA, released in 2008. DNSC08GRA was derived from double-retracked satellite altimetry, mainly from the ERS-1 geodetic mission data, augmented with new retracked GEOSAT data which have significantly enhanced the range and hence the gravity field accuracy. DNSC08GRA is the first high-resolution global gravity field to cover the entire Arctic Ocean all the way to the North Pole. Comparisons with other older gravity fields show accuracy improvement of the order of 20-40% due to a combination of retracking, enhanced processing, and the use of the new EGM2008 geoid model. In coastal and polar regions, accuracy improved in many places by 40-50% (or more) compared with older global marine gravity fields.

  7. An energetically consistent vertical mixing parameterization in CCSM4

    DEFF Research Database (Denmark)

    Nielsen, Søren Borg; Jochum, Markus; Eden, Carsten

    2018-01-01

An energetically consistent stratification-dependent vertical mixing parameterization is implemented in the Community Climate System Model 4 and forced with energy conversion from the barotropic tides to internal waves. The structures of the resulting dissipation and diffusivity fields are compared [...]. The resulting ocean state, however, depends greatly on the details of the vertical mixing parameterization, where the new energetically consistent parameterization results in low thermocline diffusivities and a sharper and shallower thermocline. It is also investigated if the ocean state is more sensitive to a change in forcing [...]

  8. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    Science.gov (United States)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    Main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of mesoscale convective system (MCS) occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. Performances of different schemes are evaluated by examining boundary layer, reflectivity, and precipitation features of MCS using ground-based and satellite observations. Among various physical parameterization schemes, Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce deep boundary layer height by simulating warm temperatures necessary for storm initiation; Thompson (THM) microphysics scheme is capable to simulate the reflectivity by reasonable distribution of different hydrometeors during various stages of system; Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation by proper representation of convective instability associated with MCS. Present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme, which accounts strong vertical mixing; THM, a six-class hybrid moment microphysics scheme, which considers number concentration along with mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme, which adjusts thermodynamic profiles based on climatological profiles might have contributed for better performance of respective model simulations. Numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.

  9. evaluation of land surface temperature parameterization ...

    African Journals Online (AJOL)

    Surface temperature (Ts) is vital to the study of land-atmosphere interactions and ... representation of Ts in Global Climate Models using available ..... Obviously, the influence of the ambient .... diurnal cycle over land under clear and cloudy.

  10. Parameterization-based tracking for the P2 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Sorokin, Iurii [Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany); Collaboration: P2-Collaboration

    2016-07-01

The P2 experiment at the new MESA accelerator in Mainz aims to determine the weak mixing angle by measuring the parity-violating asymmetry in elastic electron-proton scattering at low momentum transfer. To achieve the unprecedented target precision, of the order of 10{sup 11} scattered electrons per second have to be acquired. Whereas the tracking system is not required to operate at such high rates, every attempt is made to achieve as high a rate capability as possible. The P2 tracking system will consist of four planes of high-voltage monolithic active pixel sensors (HV-MAPS). With the present preliminary design, one expects about 150 signal electron tracks and 20000 background hits (from bremsstrahlung photons) per plane in every 50 ns readout frame at the full rate. In order to cope with this extreme combinatorial background in on-line mode, parameterization-based tracking is considered as a possible solution. The idea is to transform the hit positions into a set of weakly correlated quantities, and to find simple (e.g. polynomial) functions of these quantities that give the required characteristics of the track (e.g. momentum). The parameters of the functions are determined from a sample of high-quality tracks, taken either from a simulation or reconstructed in a conventional way from a sample of low-rate data.
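The fitting step described above, determining polynomial coefficients from a high-quality track sample and then evaluating the polynomial on-line, can be sketched as follows. The two input quantities `u`, `v` and the functional form of the "true" momentum are invented stand-ins, not the P2 transformation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "high-quality track sample": two weakly correlated hit-derived
# quantities u, v (hypothetical stand-ins for the transformed hit positions)
# and the momentum p they encode; the form below is invented for illustration.
u = rng.uniform(-1, 1, 5000)
v = rng.uniform(-1, 1, 5000)
p_true = 1.0 + 0.8 * u - 0.3 * v + 0.5 * u * v + 0.2 * u**2

def design_matrix(u, v):
    """Second-order polynomial basis in (u, v)."""
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

# Determine the polynomial coefficients from the training sample by least
# squares, mirroring how the scheme fixes its parameters from simulated or
# conventionally reconstructed tracks.
coeffs, *_ = np.linalg.lstsq(design_matrix(u, v), p_true, rcond=None)

# On-line use: evaluating the polynomial is a handful of multiply-adds per track.
p_est = design_matrix(np.array([0.2]), np.array([-0.1])) @ coeffs
```

The appeal for on-line tracking is that the per-track cost collapses to a fixed, small number of arithmetic operations, independent of the combinatorial background.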

  11. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    Science.gov (United States)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

The sensitivity of global horizontal irradiation (GHI) values calculated with the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer, indicating that it is less accurate in timing the formation and dissipation of clouds during summer, when more water vapor is present in the atmosphere than during the austral winter. Second, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, sizes and lifetimes of cloud droplets, whereas the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.

  12. On the importance of the albedo parameterization for the mass balance of the Greenland ice sheet in EC-Earth

    Directory of Open Access Journals (Sweden)

    M. M. Helsen

    2017-08-01

The albedo of the surface of ice sheets changes as a function of time due to the effects of deposition of new snow, ageing of dry snow, bare ice exposure, melting and run-off. Currently, the calculation of the albedo of ice sheets is highly parameterized within the earth system model EC-Earth by taking a constant value for areas with thick perennial snow cover. This is an important reason why the surface mass balance (SMB) of the Greenland ice sheet (GrIS) is poorly resolved in the model. The purpose of this study is to improve the SMB forcing of the GrIS by evaluating different parameter settings within a snow albedo scheme. By allowing ice-sheet albedo to vary as a function of wet and dry conditions, the spatial distribution of albedo and melt rate improves. Nevertheless, the spatial distribution of SMB in EC-Earth is not significantly improved. As a reason for this, we identify omissions in the current snow albedo scheme, such as separate treatment of snow and ice and the effect of refreezing. The resulting SMB is downscaled from the lower-resolution global climate model topography to the higher-resolution ice-sheet topography of the GrIS, such that the influence of these different SMB climatologies on the long-term evolution of the GrIS is tested by ice-sheet model simulations. From these ice-sheet simulations we conclude that an albedo scheme with a short response time of decaying albedo during wet conditions performs best with respect to long-term simulated ice-sheet volume. This results in an optimized albedo parameterization that can be used in future EC-Earth simulations with an interactive ice-sheet component.
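A minimal sketch of the kind of wet/dry-dependent albedo decay discussed above, assuming a simple exponential ageing law; the coefficients, e-folding times, and function name are hypothetical and do not reproduce the actual EC-Earth scheme:

```python
import math

def snow_albedo(age_days, wet, alpha_fresh=0.85, alpha_old_dry=0.70,
                alpha_old_wet=0.60, tau_dry=30.0, tau_wet=5.0):
    """Exponential snow-albedo ageing (hypothetical coefficients): albedo
    decays from its fresh-snow value toward an aged value, with a much
    shorter e-folding time tau (days) under wet (melting) conditions than
    under dry conditions.
    """
    alpha_old, tau = (alpha_old_wet, tau_wet) if wet else (alpha_old_dry, tau_dry)
    return alpha_old + (alpha_fresh - alpha_old) * math.exp(-age_days / tau)
```

With a short wet-condition response time, a melting surface darkens quickly after snowfall, which is the behavior the study finds to perform best for long-term simulated ice-sheet volume.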

  13. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and was valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of mean state and variability of atmospheric circulation, i.e., the new scheme improved positive bias of precipitation in western Pacific region, and improved positive bias of outgoing shortwave radiation over the ocean. The new scheme also simulated better features of convectively coupled equatorial waves and Madden-Julian oscillation. These improvements were found to be derived from the modification of parameterization for the entrainment rate, i.e., the proposed parameterization suppressed excessive increase of entrainment, thus suppressing excessive increase of low-level clouds.

  14. Assessing carbon dioxide removal through global and regional ocean alkalinization under high and low emission pathways

    Science.gov (United States)

    Lenton, Andrew; Matear, Richard J.; Keller, David P.; Scott, Vivian; Vaughan, Naomi E.

    2018-04-01

Atmospheric carbon dioxide (CO2) levels continue to rise, increasing the risk of severe impacts on the Earth system, and on the ecosystem services that it provides. Artificial ocean alkalinization (AOA) is capable of reducing atmospheric CO2 concentrations and surface warming and addressing ocean acidification. Here, we simulate global and regional responses to alkalinity (ALK) addition (0.25 PmolALK yr-1) over the period 2020-2100 using the CSIRO-Mk3L-COAL Earth System Model, under high (Representative Concentration Pathway 8.5; RCP8.5) and low (RCP2.6) emissions. While regionally there are large changes in alkalinity associated with locations of AOA, globally we see only a very weak dependence on where and when AOA is applied. On a global scale, while we see that under RCP2.6 the carbon uptake associated with AOA is only ~60% of the total, under RCP8.5 the relative changes in temperature are larger, as are the changes in pH (140%) and aragonite saturation state (170%). The simulations reveal AOA is more effective under lower emissions, therefore the higher the emissions the more AOA is required to achieve the same reduction in global warming and ocean acidification. Finally, our simulated AOA for 2020-2100 in the RCP2.6 scenario is capable of offsetting warming and ameliorating ocean acidification increases at the global scale, but with highly variable regional responses.

  15. Examining global electricity supply vulnerability to climate change using a high-fidelity hydropower dam model.

    Science.gov (United States)

    Turner, Sean W D; Ng, Jia Yi; Galelli, Stefano

    2017-07-15

An important and plausible impact of a changing global climate is altered power generation from hydroelectric dams. Here we project 21st century global hydropower production by forcing a coupled, global hydrological and dam model with three General Circulation Model (GCM) projections run under two emissions scenarios. Dams are simulated using a detailed model that accounts for plant specifications, storage dynamics, reservoir bathymetry and realistic, optimized operations. We show that the inclusion of these features can have a non-trivial effect on the simulated response of hydropower production to changes in climate. Simulation results highlight substantial uncertainty in the direction of change in globally aggregated hydropower production (~-5 to +5% change in mean global production by the 2080s under a high emissions scenario, depending on GCM). Several clearly impacted hotspots are identified, the most prominent of which encompasses the Mediterranean countries in southern Europe, northern Africa and the Middle East. In this region, hydropower production is projected to be reduced by approximately 40% on average by the end of the century under a high emissions scenario. After accounting for each country's dependence on hydropower for meeting its current electricity demands, the Balkans countries emerge as the most vulnerable (~5-20% loss in total national electricity generation depending on country). On the flipside, a handful of countries in Scandinavia and central Asia are projected to reap a significant increase in total electrical production (~5-15%) without investing in new power generation facilities.

  16. Global output feedback stabilisation of stochastic high-order feedforward nonlinear systems with time-delay

    Science.gov (United States)

    Zhang, Kemei; Zhao, Cong-Ran; Xie, Xue-Jun

    2015-12-01

    This paper considers the problem of output feedback stabilisation for stochastic high-order feedforward nonlinear systems with time-varying delay. By using the homogeneous domination theory and solving several troublesome obstacles in the design and analysis, an output feedback controller is constructed to drive the closed-loop system globally asymptotically stable in probability.

  17. Western US high June 2015 temperatures and their relation to global warming and soil moisture

    NARCIS (Netherlands)

    Philip, Sjoukje Y.; Kew, Sarah F.; Hauser, Mathias; Guillod, Benoit P.; Teuling, Adriaan J.; Whan, Kirien; Uhe, Peter; Oldenborgh, van Geert Jan

    2018-01-01

The Western US states Washington (WA), Oregon (OR) and California (CA) experienced extremely high temperatures in June 2015. The temperature anomalies were so extreme that they cannot be explained by global warming alone. We investigate the hypothesis that soil moisture played an important role.

  18. Evaluating Lightning-generated NOx (LNOx) Parameterization based on Cloud Top Height at Resolutions with Partially-resolved Convection for Upper Tropospheric Chemistry Studies

    Science.gov (United States)

    Wong, J.; Barth, M. C.; Noone, D. C.

    2012-12-01

Lightning-generated nitrogen oxides (LNOx) are an important precursor to tropospheric ozone production. With a meteorological time-scale variability similar to the ozone chemical lifetime, LNOx can nonlinearly perturb tropospheric ozone concentrations. Coupled with upper-air circulation patterns, LNOx can accumulate in significant amounts in the upper troposphere together with other precursors, thus enhancing ozone production. While LNOx emission has been included and tuned extensively in global climate models, its inclusion in regional chemistry models is seldom tested. Here we present a study that evaluates the frequently used Price and Rind parameterization, based on cloud-top height, at resolutions that partially resolve deep convection, using the Weather Research and Forecasting model with Chemistry (WRF-Chem) over the contiguous United States. With minor modifications, the parameterization is shown to generate integrated flash counts close to those observed. However, the modeled frequency distribution of cloud-to-ground flashes does not represent storms with high flash rates well, bringing into question the applicability of the intra-cloud/cloud-to-ground partitioning (IC:CG) formulation of Price and Rind in some studies. Resolution dependency also requires attention when sub-grid cloud tops are used instead of the originally intended grid-averaged cloud top. (Figure: LNOx passive tracers gathered by the monsoonal upper-tropospheric anticyclone.)
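For reference, the Price and Rind cloud-top-height scheme evaluated here is, as commonly cited, a power law in the convective cloud-top height H (in km), with separate continental and marine fits; the function name is ours, and the coefficients below are the widely quoted values rather than the modified ones used in this study:

```python
def pr92_flash_rate(cloud_top_km, continental=True):
    """Price & Rind (1992) total lightning flash rate (flashes per minute)
    as a power law of convective cloud-top height H (km), as commonly cited:
        continental: F = 3.44e-5 * H**4.9
        marine:      F = 6.4e-4  * H**1.73
    """
    if continental:
        return 3.44e-5 * cloud_top_km**4.9
    return 6.4e-4 * cloud_top_km**1.73
```

The steep continental exponent is what makes the scheme sensitive to whether a grid-averaged or sub-grid cloud top is supplied, the resolution dependency noted in the abstract.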

  19. An improved lightning flash rate parameterization developed from Colorado DC3 thunderstorm data for use in cloud-resolving chemical transport models

    Science.gov (United States)

    Basarab, B. M.; Rutledge, S. A.; Fuchs, B. R.

    2015-09-01

    Accurate prediction of total lightning flash rate in thunderstorms is important to improve estimates of nitrogen oxides (NOx) produced by lightning (LNOx) from the storm scale to the global scale. In this study, flash rate parameterization schemes from the literature are evaluated against observed total flash rates for a sample of 11 Colorado thunderstorms, including nine storms from the Deep Convective Clouds and Chemistry (DC3) experiment in May-June 2012. Observed flash rates were determined using an automated algorithm that clusters very high frequency radiation sources emitted by electrical breakdown in clouds and detected by the northern Colorado lightning mapping array. Existing schemes were found to inadequately predict flash rates and were updated based on observed relationships between flash rate and simple storm parameters, yielding significant improvement. The most successful updated scheme predicts flash rate based on the radar-derived mixed-phase 35 dBZ echo volume. Parameterizations based on metrics for updraft intensity were also updated but were found to be less reliable predictors of flash rate for this sample of storms. The 35 dBZ volume scheme was tested on a data set containing radar reflectivity volume information for thousands of isolated convective cells in different regions of the U.S. This scheme predicted flash rates to within 5.8% of observed flash rates on average. These results encourage the application of this scheme to larger radar data sets and its possible implementation into cloud-resolving models.

  20. Robustness and sensitivities of central U.S. summer convection in the super-parameterized CAM: Multi-model intercomparison with a new regional EOF index

    Science.gov (United States)

    Kooperman, Gabriel J.; Pritchard, Michael S.; Somerville, Richard C. J.

    2013-06-01

    Mesoscale convective systems (MCSs) can bring up to 60% of summer rainfall to the central United States but are not simulated by most global climate models. In this study, a new empirical orthogonal function based index is developed to isolate the MCS activity, similar to that developed by Wheeler and Hendon (2004) for the Madden-Julian Oscillation. The index is applied to compactly compare three conventional- and super-parameterized (SP) versions (3.0, 3.5, and 5.0) of the National Center for Atmospheric Research Community Atmosphere Model (CAM). Results show that nocturnal, eastward propagating convection is a robust effect of super-parameterization but is sensitive to its specific implementation. MCS composites based on the index show that in SP-CAM3.5, convective MCS anomalies are unrealistically large scale and concentrated, while surface precipitation is too weak. These aspects of the MCS signal are improved in the latest version (SP-CAM5.0), which uses high-order microphysics.
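A compact sketch of how an EOF-based index of this kind can be constructed, analogous to the Wheeler and Hendon (2004) approach: compute the leading pair of EOFs of an anomaly field via SVD and use the two principal components as index amplitude and phase. The propagating toy field below is synthetic, standing in for filtered precipitation anomalies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy anomaly field: time x space (e.g., precipitation anomalies along a
# longitude strip); an eastward-propagating cosine plus noise.
ntime, nspace = 200, 50
t = np.arange(ntime)
propagating = np.cos(0.3 * t[:, None] - 0.2 * np.arange(nspace)[None, :])
field = propagating + 0.3 * rng.normal(size=(ntime, nspace))

# EOFs via SVD of the demeaned anomaly matrix; the leading pair of principal
# components defines a two-dimensional index (amplitude and phase).
field = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(field, full_matrices=False)
pc1, pc2 = U[:, 0] * s[0], U[:, 1] * s[1]
amplitude = np.hypot(pc1, pc2) / np.std(np.hypot(pc1, pc2))
phase = np.arctan2(pc2, pc1)
```

A propagating signal projects onto a quadrature pair of EOFs, so the phase angle advances steadily in time, which is what lets such an index isolate eastward-propagating MCS activity.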

  1. The package PAKPDF 1.1 of parameterizations of parton distribution functions in the proton

    International Nuclear Information System (INIS)

    Charchula, K.

    1992-01-01

A FORTRAN package containing parameterizations of parton distribution functions (PDFs) in the proton is described. It allows easy access to the PDFs provided by several recent parameterizations and to some parameters characterizing each parameterization. Some comments on the use of the various parameterizations are also included. (orig.)

  2. National Construction of Global Education: A Critical Review of the National Curriculum Standards for South Korean Global High Schools

    Science.gov (United States)

    Sung, Youl-Kwan; Park, Minjeong; Choi, Il-Seon

    2013-01-01

    In this paper, the authors investigate what global visions of education are reflected in the selected national curriculum standards, with special reference to two seemingly contradictory forces: globalization and nationalism. This paper examines the socio-economic and cultural foundations of the curriculum and explains how the national curriculum…

  3. The global lambda visualization facility: An international ultra-high-definition wide-area visualization collaboratory

    Science.gov (United States)

    Leigh, J.; Renambot, L.; Johnson, Aaron H.; Jeong, B.; Jagodic, R.; Schwarz, N.; Svistula, D.; Singh, R.; Aguilera, J.; Wang, X.; Vishwanath, V.; Lopez, B.; Sandin, D.; Peterka, T.; Girado, J.; Kooima, R.; Ge, J.; Long, L.; Verlo, A.; DeFanti, T.A.; Brown, M.; Cox, D.; Patterson, R.; Dorn, P.; Wefel, P.; Levy, S.; Talandis, J.; Reitzer, J.; Prudhomme, T.; Coffin, T.; Davis, B.; Wielinga, P.; Stolk, B.; Bum, Koo G.; Kim, J.; Han, S.; Corrie, B.; Zimmerman, T.; Boulanger, P.; Garcia, M.

    2006-01-01

    The research outlined in this paper marks an initial global cooperative effort between visualization and collaboration researchers to build a persistent virtual visualization facility linked by ultra-high-speed optical networks. The goal is to enable the comprehensive and synergistic research and development of the necessary hardware, software and interaction techniques to realize the next generation of end-user tools for scientists to collaborate on the global Lambda Grid. This paper outlines some of the visualization research projects that were demonstrated at the iGrid 2005 workshop in San Diego, California.

  4. A high resolution global wind atlas - improving estimation of world wind resources

    DEFF Research Database (Denmark)

    Badger, Jake; Ejsing Jørgensen, Hans

    2011-01-01

[...] to population centres, electrical transmission grids, terrain types, and protected land areas are important parts of the resource assessment downstream of the generation of wind climate statistics. Related to these issues of integration are the temporal characteristics and spatial correlation of the wind resources. These aspects will also be addressed by the Global Wind Atlas. The Global Wind Atlas, through a transparent methodology, will provide a unified, high resolution, and public domain dataset of wind energy resources for the whole world. The wind atlas data will be the most appropriate wind resource [...]

  5. Global stability of stochastic high-order neural networks with discrete and distributed delays

    International Nuclear Information System (INIS)

    Wang Zidong; Fang Jianan; Liu Xiaohui

    2008-01-01

High-order neural networks can be considered as an expansion of Hopfield neural networks, and have stronger approximation properties, faster convergence rates, greater storage capacity, and higher fault tolerance than lower-order neural networks. In this paper, the global asymptotic stability analysis problem is considered for a class of stochastic high-order neural networks with discrete and distributed time-delays. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, several sufficient conditions are derived which guarantee the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the stochastic high-order delayed neural networks under consideration are globally asymptotically stable in the mean square if two linear matrix inequalities (LMIs) are feasible, where the feasibility of the LMIs can be readily checked by the Matlab LMI toolbox. It is also shown that the main results in this paper cover some recently published works. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.

  6. Highly resolved global distribution of tropospheric NO2 using GOME narrow swath mode data

    Directory of Open Access Journals (Sweden)

    S. Beirle

    2004-01-01

The Global Ozone Monitoring Experiment (GOME) allows the retrieval of tropospheric vertical column densities (VCDs) of NO2 on a global scale. Regions with enhanced industrial activity can clearly be detected, but the standard spatial resolution of the GOME ground pixels (320x40 km2) is insufficient to resolve regional trace gas distributions or individual cities. Every 10 days within the nominal GOME operation, measurements are executed in the so-called narrow swath mode with a much better spatial resolution (80x40 km2). We use these data (1997-2001) to construct a detailed picture of the mean global tropospheric NO2 distribution. Since, due to the narrow swath, the global coverage of the high resolution observations is rather poor, it has proved essential to deseasonalize the single narrow swath mode observations to retrieve adequate mean maps. This is done by using the GOME backscan information. The retrieved high resolution map illustrates the shortcomings of the standard size GOME pixels and reveals an unprecedented wealth of detail in the global distribution of tropospheric NO2. Localised spots of enhanced NO2 VCD can be directly associated with cities, heavy industry centers and even large power plants. Thus our result helps to check emission inventories. The small spatial extent of NO2 'hot spots' allows us to estimate an upper limit of the mean lifetime of boundary layer NOx of 17 h on a global scale. The long time series of GOME data allows a quantitative comparison of the narrow swath mode data to the nominal resolution. Thus we can analyse the dependency of NO2 VCDs on pixel size. This is important for comparing GOME data to results of new satellite instruments like SCIAMACHY (launched March 2002 on ENVISAT), OMI (launched July 2004 on AURA) or GOME II (to be launched in 2005), which have an improved spatial resolution.

  7. Parameterization of mixing by secondary circulation in estuaries

    Science.gov (United States)

    Basdurak, N. B.; Huguenard, K. D.; Valle-Levinson, A.; Li, M.; Chant, R. J.

    2017-07-01

    Eddy viscosity parameterizations that depend on a gradient Richardson number Ri have been most pertinent to the open ocean. Parameterizations applicable to stratified coastal regions typically require implementation of a numerical model. Two novel parameterizations of the vertical eddy viscosity, based on Ri, are proposed here for coastal waters. One turbulence closure considers temporal changes in stratification and bottom stress and is coined the "regular fit." The alternative approach, named the "lateral fit," incorporates variability of lateral flows that are prevalent in estuaries. The two turbulence parameterization schemes are tested using data from a Self-Contained Autonomous Microstructure Profiler (SCAMP) and an Acoustic Doppler Current Profiler (ADCP) collected in the James River Estuary. The "regular fit" compares favorably to SCAMP-derived vertical eddy viscosity values but only at relatively small values of gradient Ri. On the other hand, the "lateral fit" succeeds at describing the lateral variability of eddy viscosity over a wide range of Ri. The modifications proposed to Ri-dependent eddy viscosity parameterizations allow applicability to stratified coastal regions, particularly in wide estuaries, without requiring implementation of a numerical model.
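As a concrete example of a gradient-Ri-dependent eddy viscosity of the general kind discussed here (not the "regular fit" or "lateral fit" of this study), the widely used Pacanowski and Philander (1981) form can be sketched as follows; the function name is ours, and the default coefficients are the commonly quoted ones:

```python
def pp81_eddy_viscosity(Ri, nu0=1e-2, alpha=5.0, n=2, nu_b=1e-4):
    """Pacanowski & Philander (1981) Richardson-number-dependent eddy
    viscosity (m^2/s):
        nu = nu0 / (1 + alpha * Ri)**n + nu_b
    Stable stratification (large Ri) suppresses mixing toward the small
    background value nu_b; Ri is clipped at zero for unstable profiles.
    """
    Ri = max(Ri, 0.0)
    return nu0 / (1.0 + alpha * Ri)**n + nu_b
```

Forms like this capture the key qualitative behavior, mixing that collapses as stratification strengthens, which is why Ri-dependent fits of the kind proposed in the abstract can be calibrated directly against microstructure-derived viscosities.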

  8. Effects of model resolution and parameterizations on the simulations of clouds, precipitation, and their interactions with aerosols

    Science.gov (United States)

    Lee, Seoung Soo; Li, Zhanqing; Zhang, Yuwei; Yoo, Hyelim; Kim, Seungbum; Kim, Byung-Gon; Choi, Yong-Sang; Mok, Jungbin; Um, Junshik; Ock Choi, Kyoung; Dong, Danhong

    2018-01-01

This study investigates the roles played by model resolution and microphysics parameterizations in the well-known uncertainties or errors in simulations of clouds, precipitation, and their interactions with aerosols by numerical weather prediction (NWP) models. For this investigation, we used cloud-system-resolving model (CSRM) simulations as benchmark simulations that adopt high resolution and full-fledged microphysical processes. These simulations were evaluated against observations, and this evaluation demonstrated that the CSRM simulations can function as benchmark simulations. Comparisons between the CSRM simulations and simulations at the coarse resolutions generally adopted by current NWP models indicate that the use of coarse resolutions as in the NWP models can lower not only updrafts and other cloud variables (e.g., cloud mass, condensation, deposition, and evaporation) but also their sensitivity to increasing aerosol concentration. The parameterization of the saturation process plays an important role in the sensitivity of cloud variables to aerosol concentrations, while the parameterization of the sedimentation process has a substantial impact on how cloud variables are distributed vertically. The variation in cloud variables with resolution is much greater than that with varying microphysics parameterizations, which suggests that the uncertainties in the NWP simulations are associated with resolution much more than with microphysics parameterizations.

  9. Global dose to man from proposed NNTRP high altitude nuclear tests

    International Nuclear Information System (INIS)

    Peterson, K.R.

    1975-05-01

Radionuclide measurements from past high-altitude nuclear testing have enabled development of a model to estimate surface deposition and doses from 400 kt of fission products injected in winter within the Pacific Test Area at altitudes in excess of 50 km. The largest 30-year average dose to man is about 10 millirem and occurs at 30° to 50°N latitude. The principal contributor to this dose is external gamma radiation from gross fission products. Individual doses from ⁹⁰Sr via the forage-cow-milk pathway and ¹³⁷Cs via the pasture-meat pathway are about 1/5 of the gross fission product doses. The global 30-year population dose is 3 × 10⁷ person-rem, which compares with a 30-year natural background population dose of 1 × 10¹⁰ person-rem. Due in large part to the global distribution of population, over 98 percent of the global person-rem from the proposed high-altitude tests is received in the Northern Hemisphere, while about 75 percent of the total population dose occurs within the 30°-50°N latitude belt. Detonations in summer would decrease the global dose by about a factor of three. (U.S.)
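A quick back-of-envelope check of the population-dose figures quoted above (values copied directly from the abstract):

```python
# 30-year population doses in person-rem, as stated in the abstract.
test_dose = 3e7        # global population dose from the proposed tests
background = 1e10      # natural background population dose over 30 years

fraction_of_background = test_dose / background   # 0.003, i.e. 0.3 %

# The abstract states summer detonations would cut the dose ~3x.
summer_dose = test_dose / 3
```

The test-related dose is thus roughly 0.3 % of natural background over the same 30-year period.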

  10. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    Science.gov (United States)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data, a planetary wave breaking criterion (based on the ratio of the eddy to zonal-mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.

  11. Polynomial parameterized representation of macroscopic cross section for PWR reactor

    International Nuclear Information System (INIS)

    Fiel, Joao Claudio B.

    2015-01-01

The purpose of this work is to describe, by means of Tchebychev polynomials, a parameterized representation of the homogenized macroscopic cross sections for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and ²³⁵U enrichment. The analyzed cross sections are: fission, scattering, total, transport, absorption and capture. This parameterization enables a quick and easy determination of the problem-dependent cross sections to be used in few-group calculations. The methodology presented here makes it possible to provide cross-section values for PWR core calculations without the need to generate them through computer code calculations using the standard steps. The results obtained from the parameterized cross-section functions, when compared with the cross sections generated by SCALE code calculations, or with k_inf values generated by MCNPX code calculations, show a difference of less than 0.7 percent. (author)
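A minimal one-variable sketch of this approach, using NumPy's Chebyshev utilities: fit a series to a synthetic absorption cross section as a function of soluble boron concentration, then evaluate it. The paper fits multivariate series over five state variables; the data and degree below are invented for illustration only.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic macroscopic absorption cross section vs boron concentration
# (made-up smooth dependence, stand-in for lattice-code output).
boron_ppm = np.linspace(0.0, 2000.0, 41)
sigma_a = 0.10 + 2.5e-5 * boron_ppm - 3.0e-9 * boron_ppm**2

coeffs = C.chebfit(boron_ppm, sigma_a, deg=4)   # Chebyshev series coefficients
sigma_fit = C.chebval(boron_ppm, coeffs)        # evaluate the fitted series

max_rel_err = np.max(np.abs(sigma_fit - sigma_a) / np.abs(sigma_a))
```

Once the coefficients are stored, evaluating the series replaces a full lattice-code calculation for each new state point, which is the speed-up the abstract describes.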

  12. Development of a parameterization scheme of mesoscale convective systems

    International Nuclear Information System (INIS)

    Cotton, W.R.

    1994-01-01

    The goal of this research is to develop a parameterization scheme of mesoscale convective systems (MCS) including diabatic heating, moisture and momentum transports, cloud formation, and precipitation. The approach is to: Perform explicit cloud-resolving simulation of MCSs; Perform statistical analyses of simulated MCSs to assist in fabricating a parameterization, calibrating coefficients, etc.; Test the parameterization scheme against independent field data measurements and in numerical weather prediction (NWP) models emulating general circulation model (GCM) grid resolution. Thus far we have formulated, calibrated, implemented and tested a deep convective engine against explicit Florida sea breeze convection and in coarse-grid regional simulations of mid-latitude and tropical MCSs. Several explicit simulations of MCSs have been completed, and several other are in progress. Analysis code is being written and run on the explicitly simulated data

  13. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

We introduce the working set model and express the performance of well-known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.

  14. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle, with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one possibility for evaluating LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering the model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining a detailed surface discretization for direct parameter manipulation for LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  15. Global kink and ballooning modes in high-beta systems and stability of toroidal drift modes

    International Nuclear Information System (INIS)

    Galvao, R.M.O.; Goedbloed, J.P.; Rem, J.; Sakanaka, P.H.; Schep, T.J.; Venema, M.

    1983-01-01

    A numerical code (HBT) has been developed which solves for the equilibrium, global stability and high-n stability of plasmas with arbitrary cross-section. Various plasmas are analysed for their stability to these modes in the high-beta limit. Screw-pinch equilibria are stable to high-n ballooning modes up to betas of 18%. The eigenmode equation for drift waves is analysed numerically. The toroidal branch is shown to be destabilized by the non-adiabatic response of trapped and circulating particles. (author)

  16. Parameterization of radiocaesium soil-plant transfer using soil characteristics

    International Nuclear Information System (INIS)

    Konoplev, A. V.; Drissner, J.; Klemt, E.; Konopleva, I. V.; Zibold, G.

    1996-01-01

A model of radionuclide soil-plant transfer is proposed to parameterize the transfer factor through soil and soil solution characteristics. The model is tested with experimental data on the aggregated transfer factor T_ag and soil parameters for 8 forest sites in Baden-Wuerttemberg. It is shown that the integral soil-plant transfer factor can be parameterized through radiocaesium exchangeability, the capacity of selective sorption sites and the ion composition of the soil solution or the water extract. A modified technique of FES measurement for soils with interlayer collapse is proposed. (author)

  17. On global exponential stability of high-order neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Zhang Baoyong; Xu Shengyuan; Li Yongmin; Chu Yuming

    2007-01-01

    This Letter investigates the problem of stability analysis for a class of high-order neural networks with time-varying delays. The delays are bounded but not necessarily differentiable. Based on the Lyapunov stability theory together with the linear matrix inequality (LMI) approach and the use of Halanay inequality, sufficient conditions guaranteeing the global exponential stability of the equilibrium point of the considered neural networks are presented. Two numerical examples are provided to demonstrate the effectiveness of the proposed stability criteria

  19. Enhanced or Weakened Western North Pacific Subtropical High under Global Warming?

    OpenAIRE

    He, Chao; Zhou, Tianjun; Lin, Ailan; Wu, Bo; Gu, Dejun; Li, Chunhui; Zheng, Bin

    2015-01-01

The Western North Pacific Subtropical High (WNPSH) regulates East Asian climate in summer. An anomalous WNPSH causes floods, droughts and heat waves in China, Japan and Korea. The potential change of the WNPSH under global warming is of great concern to people in Asia, but whether the WNPSH will be enhanced or weakened remains inconclusive. Based on the multi-model climate change projection from the 5th phase of the Coupled Model Intercomparison Project (CMIP5), we show evidence that the WNPSH tends to wea...

  20. Global weak solutions for coupled transport processes in concrete walls at high temperatures

    OpenAIRE

    Beneš, Michal; Štefan, Radek

    2012-01-01

We consider an initial-boundary value problem for a fully nonlinear coupled parabolic system with nonlinear boundary conditions modelling the hygro-thermal behavior of concrete at high temperatures. We prove global existence of a weak solution to this system on an arbitrary time interval. The main result is proved by an approximation procedure, which consists in proving the existence of solutions to mollified problems using the Leray-Schauder theorem, for which a priori estimates are obtained. T...

  1. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.
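A minimal POD sketch in the spirit of the global step described above: collect solution snapshots as columns of a matrix, take the SVD, and keep the modes capturing most of the snapshot energy. The snapshots below are synthetic (two exact modes plus small noise); in the paper they would be coarse-grid GMsFEM solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)

# Synthetic snapshot matrix: two true spatial modes plus tiny noise.
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * np.pi * t))
             + 1e-6 * rng.standard_normal((200, 50)))

# POD via the SVD: left singular vectors are the POD modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes kept for 99.9 % energy
reduced_basis = U[:, :r]                       # reduced-order basis
```

The reduced basis is then used to project the full problem onto an r-dimensional subspace, which is where the reduction in problem size comes from.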

  3. Global situational awareness and early warning of high-consequence climate change.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Carr, Martin J.; Boslough, Mark Bruce Elrick

    2009-08-01

    Global monitoring systems that have high spatial and temporal resolution, with long observational baselines, are needed to provide situational awareness of the Earth's climate system. Continuous monitoring is required for early warning of high-consequence climate change and to help anticipate and minimize the threat. Global climate has changed abruptly in the past and will almost certainly do so again, even in the absence of anthropogenic interference. It is possible that the Earth's climate could change dramatically and suddenly within a few years. An unexpected loss of climate stability would be equivalent to the failure of an engineered system on a grand scale, and would affect billions of people by causing agricultural, economic, and environmental collapses that would cascade throughout the world. The probability of such an abrupt change happening in the near future may be small, but it is nonzero. Because the consequences would be catastrophic, we argue that the problem should be treated with science-informed engineering conservatism, which focuses on various ways a system can fail and emphasizes inspection and early detection. Such an approach will require high-fidelity continuous global monitoring, informed by scientific modeling.

  4. Local and global processing of music in high-functioning persons with autism: beyond central coherence?

    Science.gov (United States)

    Mottron, L; Peretz, I; Ménard, E

    2000-11-01

    A multi-modal abnormality in the integration of parts and whole has been proposed to account for a bias toward local stimuli in individuals with autism (Frith, 1989; Mottron & Belleville, 1993). In the current experiment, we examined the utility of hierarchical models in characterising musical information processing in autistic individuals. Participants were 13 high-functioning individuals with autism and 13 individuals of normal intelligence matched on chronological age, nonverbal IQ, and laterality, and without musical experience. The task consisted of same-different judgements of pairs of melodies. Differential local and global processing was assessed by manipulating the level, local or global, at which modifications occurred. No deficit was found in the two measures of global processing. In contrast, the clinical group performed better than the comparison group in the detection of change in nontransposed, contour-preserved melodies that tap local processing. These findings confirm the existence of a "local bias" in music perception in individuals with autism, but challenge the notion that it is accounted for by a deficit in global music processing. The present study suggests that enhanced processing of elementary physical properties of incoming stimuli, as found previously in the visual modality, may also exist in the auditory modality.

  5. Hydrologic Derivatives for Modeling and Analysis—A new global high-resolution database

    Science.gov (United States)

    Verdin, Kristine L.

    2017-07-17

    The U.S. Geological Survey has developed a new global high-resolution hydrologic derivative database. Loosely modeled on the HYDRO1k database, this new database, entitled Hydrologic Derivatives for Modeling and Analysis, provides comprehensive and consistent global coverage of topographically derived raster layers (digital elevation model data, flow direction, flow accumulation, slope, and compound topographic index) and vector layers (streams and catchment boundaries). The coverage of the data is global, and the underlying digital elevation model is a hybrid of three datasets: HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales), GMTED2010 (Global Multi-resolution Terrain Elevation Data 2010), and the SRTM (Shuttle Radar Topography Mission). For most of the globe south of 60°N., the raster resolution of the data is 3 arc-seconds, corresponding to the resolution of the SRTM. For the areas north of 60°N., the resolution is 7.5 arc-seconds (the highest resolution of the GMTED2010 dataset) except for Greenland, where the resolution is 30 arc-seconds. The streams and catchments are attributed with Pfafstetter codes, based on a hierarchical numbering system, that carry important topological information. This database is appropriate for use in continental-scale modeling efforts. The work described in this report was conducted by the U.S. Geological Survey in cooperation with the National Aeronautics and Space Administration Goddard Space Flight Center.
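As a toy illustration of the kind of raster derivatives listed above, the following computes flow accumulation on a tiny synthetic DEM using a simple D8-style rule (each cell drains to its lowest lower neighbor). This is a didactic sketch, not the actual processing used to build the database.

```python
import numpy as np

# Tiny synthetic DEM; the lowest cell is the outlet at (0, 2).
dem = np.array([[3.0, 2.0, 1.0],
                [4.0, 3.0, 2.0],
                [5.0, 4.0, 3.0]])

rows, cols = dem.shape
acc = np.ones_like(dem)                      # each cell contributes itself
order = np.argsort(dem, axis=None)[::-1]     # visit cells from high to low
for idx in order:
    r, c = divmod(int(idx), cols)
    best = None
    for dr in (-1, 0, 1):                    # scan the 8 neighbors
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols \
                    and dem[rr, cc] < dem[r, c]:
                if best is None or dem[rr, cc] < dem[best]:
                    best = (rr, cc)
    if best is not None:                     # pass accumulated area downslope
        acc[best] += acc[r, c]

outlet_accumulation = acc[0, 2]              # the outlet drains the whole grid
```

Production datasets additionally handle flat areas and depressions (sink filling), which this sketch omits.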

  6. Global Anthropogenic Phosphorus Loads to Fresh Water, Grey Water Footprint and Water Pollution Levels: A High-Resolution Global Study

    Science.gov (United States)

    Mekonnen, M. M.; Hoekstra, A. Y. Y.

    2014-12-01

We estimated anthropogenic phosphorus (P) loads to fresh water globally at a spatial resolution of 5 by 5 arc minutes. The global anthropogenic P load to freshwater systems from both diffuse and point sources in the period 2002-2010 was 1.5 million tonnes per year. China contributed about 30% of this global anthropogenic P load. India was the second largest contributor (8%), followed by the USA (7%), with Spain and Brazil each contributing 6% of the total. The domestic sector contributed the largest share (54%) of this total, followed by agriculture (38%) and industry (8%). Among the crops, production of cereals had the largest contribution to the P loads (32%), followed by fruits, vegetables, and oil crops, each contributing about 15% of the total. We also calculated the resultant grey water footprints and related the grey water footprints per river basin to runoff to calculate the P-related water pollution level (WPL) per catchment.
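The water pollution level described above follows from the standard grey-water-footprint definitions. A worked example with invented numbers (none of the values below are from the study):

```python
# Grey water footprint (GWF): the dilution water volume needed to
# assimilate a pollutant load, GWF = L / (c_max - c_nat).
p_load = 2.0e6        # anthropogenic P load reaching fresh water (kg/yr)
c_max = 5.0e-5        # ambient water-quality standard for P (kg/m^3)
c_nat = 0.0           # natural background P concentration (kg/m^3)

grey_wf = p_load / (c_max - c_nat)   # dilution water required (m^3/yr)

# Water pollution level (WPL): grey water footprint over actual runoff;
# WPL > 1 means the basin's assimilation capacity is exceeded.
runoff = 8.0e10       # river basin runoff (m^3/yr)
wpl = grey_wf / runoff
```

With these illustrative numbers the basin sits at WPL = 0.5, i.e. half of its assimilation capacity is consumed by the phosphorus load.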

  7. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, modelling them is a complex process that is not researched enough. Calibration is a procedure for determining model parameters that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little control over the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by an expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure were left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events.
Each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that each observation group contributes equally to the total objective function; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013). In adding regularization, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. The case study, with results of calibration and validation of the model, will be presented.
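Assuming standard PEST command-line conventions, the final steps of the workflow above might look like the following command sketch. The file names are placeholders and the exact utility arguments vary between PEST versions, so consult the PEST documentation before use.

```shell
# Step 5: adjust observation weights so that every observation group
# contributes equally to the total objective function.
pwtadj1 hbv.pst hbv_wt.pst

# Step 6: add Tikhonov regularization (preferred value = initial value
# for every parameter).
addreg1 hbv_wt.pst hbv_reg.pst

# Step 7: run BeoPEST in parallel over TCP -- one master plus a worker
# started on each available machine.
beopest hbv_reg.pst /H :4004 &        # master, listening on port 4004
beopest hbv_reg.pst /H master:4004    # worker (repeat on each machine)
```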

  8. Will high-resolution global ocean models benefit coupled predictions on short-range to climate timescales?

    Science.gov (United States)

    Hewitt, Helene T.; Bell, Michael J.; Chassignet, Eric P.; Czaja, Arnaud; Ferreira, David; Griffies, Stephen M.; Hyder, Pat; McClean, Julie L.; New, Adrian L.; Roberts, Malcolm J.

    2017-12-01

    As the importance of the ocean in the weather and climate system is increasingly recognised, operational systems are now moving towards coupled prediction not only for seasonal to climate timescales but also for short-range forecasts. A three-way tension exists between the allocation of computing resources to refine model resolution, the expansion of model complexity/capability, and the increase of ensemble size. Here we review evidence for the benefits of increased ocean resolution in global coupled models, where the ocean component explicitly represents transient mesoscale eddies and narrow boundary currents. We consider lessons learned from forced ocean/sea-ice simulations; from studies concerning the SST resolution required to impact atmospheric simulations; and from coupled predictions. Impacts of the mesoscale ocean in western boundary current regions on the large-scale atmospheric state have been identified. Understanding of air-sea feedback in western boundary currents is modifying our view of the dynamics in these key regions. It remains unclear whether variability associated with open ocean mesoscale eddies is equally important to the large-scale atmospheric state. We include a discussion of what processes can presently be parameterised in coupled models with coarse resolution non-eddying ocean models, and where parameterizations may fall short. We discuss the benefits of resolution and identify gaps in the current literature that leave important questions unanswered.

  9. Global modeling of secondary organic aerosol formation from aromatic hydrocarbons: high- vs. low-yield pathways

    Directory of Open Access Journals (Sweden)

    D. K. Henze

    2008-05-01

Formation of SOA from the aromatic species toluene, xylene, and, for the first time, benzene, is added to a global chemical transport model. A simple mechanism is presented that accounts for competition between low- and high-yield pathways of SOA formation, wherein secondary gas-phase products react further with either nitric oxide (NO) or hydroperoxy radical (HO2) to yield semi- or non-volatile products, respectively. Aromatic species yield more SOA when they react with OH in regions where [NO]/[HO2] ratios are lower. The SOA yield thus depends upon the distribution of aromatic emissions, with biomass burning emissions being in areas with lower [NO]/[HO2] ratios, and on the reactivity of the aromatic with respect to OH, as a lower initial reactivity allows transport away from industrial source regions, where [NO]/[HO2] ratios are higher, to more remote regions, where this ratio is lower and, hence, the ultimate yield of SOA is higher. As a result, benzene is estimated to be the most important aromatic species with regard to global formation of SOA, with a total production nearly equal to that of toluene and xylene combined. Global production of SOA from aromatic sources via the mechanisms identified here is estimated at 3.5 Tg/yr, resulting in a global burden of 0.08 Tg, twice as large as previous estimates. The contribution of these largely anthropogenic sources to global SOA is still small relative to biogenic sources, which are estimated to comprise 90% of the global SOA burden, about half of which comes from isoprene. Uncertainty in these estimates, owing to factors ranging from the atmospheric relevance of chamber conditions to model deficiencies, results in an estimated range of SOA production from aromatics of 2–12 Tg/yr. Though this uncertainty range affords a significant anthropogenic contribution to global SOA, it is evident from comparisons to recent observations that additional pathways for
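The high- vs low-yield competition described above can be summarized by the fraction of peroxy radicals (RO2) that react with HO2 rather than NO. A sketch with placeholder rate constants and concentrations (`k_no`, `k_ho2` and the regime values below are illustrative, not the model's values):

```python
def ho2_branching_fraction(no, ho2, k_no=8.5e-12, k_ho2=1.5e-11):
    """Fraction of RO2 proceeding through the high-yield HO2 channel.

    Simple two-channel competition: RO2 + NO -> semi-volatile products,
    RO2 + HO2 -> non-volatile products.  Rate constants are placeholders.
    """
    return k_ho2 * ho2 / (k_no * no + k_ho2 * ho2)

# Remote (e.g. biomass-burning outflow) vs polluted (industrial) regimes,
# concentrations in molecules cm^-3: a lower [NO]/[HO2] ratio pushes more
# RO2 through the high-yield channel.
remote = ho2_branching_fraction(no=1.0e8, ho2=4.0e8)
polluted = ho2_branching_fraction(no=5.0e10, ho2=1.0e8)
```

This is why a less OH-reactive aromatic such as benzene, transported away from high-NO source regions before oxidation, ends up with a higher ultimate SOA yield.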

  10. The High Altitude MMIC Sounding Radiometer on the GLOBAL HAWK: From Technology Development to Science Discovery

    Science.gov (United States)

    Brown, Shannon; Denning, Richard; Lambrigtsen, Bjorn; Lim, Boon; Tanabe, Jordan; Tanner, Alan

    2013-01-01

This paper presents results from the High Altitude MMIC Sounding Radiometer (HAMSR) during three recent field campaigns on the Global Hawk Unmanned Aerial Vehicle (UAV), focusing on the enabling technology that led to unprecedented observations of significant weather phenomena, such as the thermodynamic evolution of the tropical cyclone core during rapid intensification and the high-resolution three-dimensional mapping of several atmospheric river events. HAMSR is a 25-channel cross-track scanning microwave sounder with channels near the 60 and 118 GHz oxygen lines and the 183 GHz water vapor line. HAMSR was originally designed and built at the Jet Propulsion Laboratory as a technology demonstrator in 1998. Subsequent to this, HAMSR participated in three NASA hurricane field campaigns: CAMEX-4, TCSP and NAMMA. Beginning in 2008, HAMSR was extensively upgraded to deploy on the NASA Global Hawk (GH) platform and serve as an asset to the NASA sub-orbital program. HAMSR has flown on the Global Hawk during the 2010 Genesis and Rapid Intensification (GRIP) campaign and the 2011 Winter Storms and Atmospheric Rivers (WISPAR) campaign, and is currently participating in the NASA Ventures Hurricane and Severe Storm Sentinel (HS3) campaign (2011-2015).

  11. Impaired global, and compensatory local, biological motion processing in people with high levels of autistic traits.

    Science.gov (United States)

    van Boxtel, Jeroen J A; Lu, Hongjing

    2013-01-01

People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD.

  12. Evaluation of variability in high-resolution protein structures by global distance scoring

    Directory of Open Access Journals (Sweden)

    Risa Anzai

    2018-01-01

Full Text Available Systematic analysis of the statistical and dynamical properties of proteins is critical to understanding cellular events. Extraction of biologically relevant information from a set of high-resolution structures is important because it can provide mechanistic details behind the functional properties of protein families, enabling rational comparison between families. Most of the current structural comparisons are pairwise-based, which hampers the global analysis of the increasing content of the Protein Data Bank. Additionally, pairing of protein structures introduces uncertainty with respect to reproducibility because it frequently accompanies other settings for superimposition. This study introduces intramolecular distance scoring for the global analysis of proteins, for each of which at least several high-resolution structures are available. As a pilot study, we have tested 300 human proteins and showed that the method can be used to comprehensively overview structural variation in each protein and protein family at the atomic level. This method, together with the interpretation of the model calculations, provides new criteria for understanding specific structural variation in a protein, enabling global comparison of the variability in proteins from different species.
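The superposition-free idea behind intramolecular distance scoring can be sketched as follows. This is a minimal illustration of the general approach, not the authors' exact metric: the function names, the synthetic coordinates, and the RMS-of-distance-differences score are all assumptions.

```python
import numpy as np

def distance_matrix(coords):
    """All intramolecular pairwise distances for one structure (n x 3 array)."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def global_distance_score(coords_a, coords_b):
    """Superposition-free variability score: RMS difference between the two
    intramolecular distance matrices (upper triangle only)."""
    da, db = distance_matrix(coords_a), distance_matrix(coords_b)
    iu = np.triu_indices(len(coords_a), k=1)
    return np.sqrt(np.mean((da[iu] - db[iu]) ** 2))

# A rigid rotation plus translation leaves all intramolecular distances
# unchanged, so the score is (numerically) zero with no superposition step.
rng = np.random.default_rng(1)
a = rng.normal(size=(50, 3))                 # stand-in for C-alpha coordinates
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
b = a @ R.T + np.array([1.0, -2.0, 3.0])     # rotated + translated copy
score = global_distance_score(a, b)          # ~0: no structural variation
```

Because the score compares internal distances only, it sidesteps the superposition settings that the abstract identifies as a source of irreproducibility in pairwise comparisons.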

  13. Evaluation of variability in high-resolution protein structures by global distance scoring.

    Science.gov (United States)

    Anzai, Risa; Asami, Yoshiki; Inoue, Waka; Ueno, Hina; Yamada, Koya; Okada, Tetsuji

    2018-01-01

Systematic analysis of the statistical and dynamical properties of proteins is critical to understanding cellular events. Extraction of biologically relevant information from a set of high-resolution structures is important because it can provide mechanistic details behind the functional properties of protein families, enabling rational comparison between families. Most of the current structural comparisons are pairwise-based, which hampers the global analysis of the increasing content of the Protein Data Bank. Additionally, pairing of protein structures introduces uncertainty with respect to reproducibility because it frequently accompanies other settings for superimposition. This study introduces intramolecular distance scoring for the global analysis of proteins, for each of which at least several high-resolution structures are available. As a pilot study, we have tested 300 human proteins and showed that the method can be used to comprehensively overview structural variation in each protein and protein family at the atomic level. This method, together with the interpretation of the model calculations, provides new criteria for understanding specific structural variation in a protein, enabling global comparison of the variability in proteins from different species.

  14. Impaired global, and compensatory local, biological motion processing in people with high levels of autistic traits

    Directory of Open Access Journals (Sweden)

    Jeroen J A Van Boxtel

    2013-04-01

Full Text Available People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD.

  15. On the importance of the albedo parameterization for the mass balance of the Greenland ice sheet in EC-Earth

    NARCIS (Netherlands)

    Helsen, Michiel M.; van de Wal, Roderik S. W.; Reerink, Thomas J.; Bintanja, Richard; Madsen, Marianne S.; Yang, Shuting; Li, Qiang; Zhang, Qiong

    2017-01-01

    The albedo of the surface of ice sheets changes as a function of time due to the effects of deposition of new snow, ageing of dry snow, bare ice exposure, melting and run-off. Currently, the calculation of the albedo of ice sheets is highly parameterized within the earth system model EC-Earth by

  16. On the importance of the albedo parameterization for the mass balance of the Greenland ice sheet in EC-Earth

    NARCIS (Netherlands)

    Helsen, Michiel M.; Van De Wal, Roderik S.W.; Reerink, Thomas J.; Bintanja, Richard; Madsen, Marianne S.; Yang, Shuting; Li, Qiang; Zhang, Qiong

    2017-01-01

The albedo of the surface of ice sheets changes as a function of time due to the effects of deposition of new snow, ageing of dry snow, bare ice exposure, melting and run-off. Currently, the calculation of the albedo of ice sheets is highly parameterized within the earth system model EC-Earth by

  17. Impact of cloud microphysics and cumulus parameterization on ...

    Indian Academy of Sciences (India)

    2007-10-09

    Weather Research and Forecast (WRF–ARW version) modelling system with six different combinations of cloud microphysics and cumulus parameterization schemes was applied to a heavy rainfall event over Bangladesh; the system intensified rapidly into a land depression over the southern part of the region.

  18. Parameterized representation of macroscopic cross section for PWR reactor

    International Nuclear Information System (INIS)

    Fiel, João Cláudio Batista; Carvalho da Silva, Fernando; Senra Martinez, Aquilino; Leal, Luiz C.

    2015-01-01

Highlights: • This work describes a parameterized representation of the homogenized macroscopic cross section for a PWR reactor. • Parameterization enables a quick determination of problem-dependent cross sections to be used in few-group calculations. • This work allows generating group cross-section data to perform PWR core calculations without computer code calculations. - Abstract: The purpose of this work is to describe, by means of Chebyshev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and 235U enrichment. The cross-section data analyzed are fission, scattering, total, transport, absorption and capture. The parameterization enables a quick and easy determination of problem-dependent cross sections to be used in few-group calculations. The methodology presented in this paper will allow generation of group cross-section data from stored polynomials to perform PWR core calculations without the need to generate them based on computer code calculations using standard steps. The results obtained by the proposed methodology, when compared with results from SCALE code calculations, show very good agreement.
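The "stored polynomials" workflow described above can be illustrated with a one-parameter sketch. The tabulated cross-section values below are synthetic and purely illustrative (not real PWR data), and a real application would fit a multivariate Chebyshev expansion over all five state parameters rather than boron concentration alone.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical library data: macroscopic absorption cross section (cm^-1)
# tabulated against soluble boron concentration (ppm).
boron = np.linspace(0.0, 2000.0, 21)                     # ppm
sigma_a = 0.09 + 2.5e-5 * boron - 3.0e-9 * boron ** 2    # synthetic values

# Map the physical parameter range onto [-1, 1], the Chebyshev domain.
x = 2.0 * (boron - boron.min()) / (boron.max() - boron.min()) - 1.0

# These coefficients are what would be stored in place of the full table.
coeffs = C.chebfit(x, sigma_a, deg=4)

def sigma_a_of_boron(b_ppm):
    """Reconstruct the cross section from the stored Chebyshev coefficients."""
    xq = 2.0 * (b_ppm - boron.min()) / (boron.max() - boron.min()) - 1.0
    return C.chebval(xq, coeffs)
```

A core calculation would then evaluate `sigma_a_of_boron` (and its siblings for the other reaction types) at the local state instead of rerunning a lattice code.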

  19. Parameterization of planetary wave breaking in the middle atmosphere

    Science.gov (United States)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.
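The steady-state argument summarized above can be written out schematically. The following is a hedged sketch: the saturation condition and the diffusivity relation are standard simplifications of this class of parameterization, not Garcia's exact formulation.

```latex
% Steady-state balance between the vertical flux of wave activity A and
% linear damping of the primary wave at rate \alpha:
%   \partial_z \left( c_{gz} A \right) = - \alpha A .
% If breaking prevents further amplitude growth, A is pinned at a
% saturation profile A_s(z) set by the background state, so the required
% dissipation rate follows directly from the balance:
\alpha(z) = - \frac{1}{A_s(z)} \, \frac{\partial}{\partial z}
            \bigl( c_{gz}(z)\, A_s(z) \bigr),
% which makes explicit the central role of the group velocity c_{gz}.
% A vertical eddy diffusivity can then be formed in the usual
% (Lindzen-style) way, e.g. K_{zz} \sim \alpha / m^2, with m the
% vertical wavenumber of the breaking wave.
```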

  20. CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW

    International Nuclear Information System (INIS)

    LIU, Y.; DAUM, P.H.; CHAI, S.K.; LIU, F.

    2002-01-01

    This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments

  1. Stable Kernel Representations and the Youla Parameterization for Nonlinear Systems

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    In this paper a general approach is taken to yield a characterization of the class of stable plant controller pairs, which is a generalization of the Youla parameterization for linear systems. This is based on the idea of representing the input-output pairs of the plant and controller as elements of

  2. Global Modeling Study of the Bioavailable Atmospheric Iron Supply to the Global Ocean

    Science.gov (United States)

    Myriokefalitakis, S.; Krol, M. C.; van Noije, T.; Le Sager, P.

    2017-12-01

Atmospheric deposition of trace constituents acts as a nutrient source to the open ocean and affects marine ecosystems. Dust is known as a major source of nutrients to the global ocean, but only a fraction of these nutrients is released in a bioavailable form that can be assimilated by the marine biota. Iron (Fe) is a key micronutrient that significantly modulates gross primary production in the High-Nutrient-Low-Chlorophyll (HNLC) oceans, where macronutrients like nitrate are abundant, but primary production is limited by Fe scarcity. The global atmospheric Fe cycle is here parameterized in the state-of-the-art global Earth System Model EC-Earth. The model takes into account the primary emissions of both insoluble and soluble Fe forms, associated with mineral dust and combustion aerosols. The impact of atmospheric acidity and organic ligands on mineral dissolution processes is parameterized based on updated experimental and theoretical findings. Model results are also evaluated against available observations. Overall, the link between the labile Fe atmospheric deposition and atmospheric composition changes is here demonstrated and quantified. This work has been financed by the Marie-Curie H2020-MSCA-IF-2015 grant (ID 705652) ODEON (Online DEposition over OceaNs; modeling the effect of air pollution on ocean bio-geochemistry in an Earth System Model).

  3. Effects of global MHD instability on operational high beta-regime in LHD

    International Nuclear Information System (INIS)

    Watanabe, K.Y.; Sakakibara, S.; Narushima, Y.; Funaba, H.; Narihara, K.; Tanaka, K.; Toi, K.; Ohdachi, S.; Kaneko, O.; Yamada, H.; Nakajima, N.; Yamada, I.; Kawahata, K.; Tokuzawa, T.; Komori, A.; Yamaguchi, T.; Suzuki, Y.; Cooper, W.A.; Murakami, S.

    2005-01-01

In the Large Helical Device (LHD), the operational highest averaged beta value has been expanded from 3.2% to 4% in the last two years by increasing the heating capability and exploring a new magnetic configuration with a high aspect ratio. Although the MHD stability properties are considered to be unfavourable in the new high aspect configuration, the heating efficiency due to neutral beams and the transport properties are expected to be favourable in a high beta range. In order to make clear the effect of the global ideal MHD unstable mode on the operational regimes in helical systems, especially the beta gradients in the peripheral region and the beta value, the MHD analysis and the transport analysis are done in a high beta range up to 4% in LHD. In a high beta range of more than 3%, the maxima of the observed thermal pressure gradients in the peripheral region are marginally stable to a global ideal MHD instability. Though a gradual degradation of the local transport in the region has been observed as beta increases, a disruptive degradation of the local transport does not appear in the beta range up to 4%. (author)

  4. A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions

    Directory of Open Access Journals (Sweden)

    J. Ray

    2014-09-01

Full Text Available The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find that (i) the spatial variability of fossil-fuel emission can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) that images of lights at night can be used as a proxy for sub-selecting wavelets for such analysis, and (iii) that implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
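The core idea, representing a spiky emission field with a handful of wavelet coefficients, can be sketched with a hand-rolled 1-D Haar transform. The synthetic "emission field" and the choice of 8 retained coefficients are illustrative assumptions; the paper works in 2-D with the Vulcan inventory and a StOMP solver, neither of which is reproduced here.

```python
import numpy as np

def haar_decompose(signal):
    """Full multiresolution Haar analysis of a length-2^k signal.
    Returns the coarsest average followed by details, coarse to fine."""
    s = np.asarray(signal, dtype=float)
    details = []
    while len(s) > 1:
        avg = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        det = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        details.append(det)
        s = avg
    return np.concatenate([s] + details[::-1])

def haar_reconstruct(coeffs):
    """Exact inverse of haar_decompose."""
    c = np.asarray(coeffs, dtype=float)
    s, pos = c[:1], 1
    while pos < len(c):
        det = c[pos:pos + len(s)]
        up = np.empty(2 * len(s))
        up[0::2] = (s + det) / np.sqrt(2.0)
        up[1::2] = (s - det) / np.sqrt(2.0)
        s, pos = up, pos + len(det)
    return s

# Synthetic "emission field": mostly zero with a few strong point sources,
# mimicking the heterogeneous, non-Gaussian ffCO2 pattern in the abstract.
field = np.zeros(64)
field[[5, 6, 40]] = [10.0, 8.0, 3.0]

w = haar_decompose(field)
# Keep only the 8 largest-magnitude coefficients: the low-dimensional proxy.
keep = np.argsort(np.abs(w))[-8:]
w_sparse = np.zeros_like(w)
w_sparse[keep] = w[keep]
approx = haar_reconstruct(w_sparse)   # close to `field` despite 8 of 64 terms
```

In the inversion context, the unknowns become the few retained coefficients rather than every grid cell, which is what makes the sparse-reconstruction step tractable.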

  5. ON THE FLARE INDUCED HIGH-FREQUENCY GLOBAL WAVES IN THE SUN

    International Nuclear Information System (INIS)

Kumar, Brajesh; Venkatakrishnan, P.; Mathur, Savita; García, R. A.

    2010-01-01

Recently, Karoff and Kjeldsen presented evidence of strong correlation between the energy in the high-frequency part (5.3 < ν < 8.3 mHz) of the acoustic spectrum of the Sun and the solar X-ray flux. They have used disk-integrated intensity observations of the Sun obtained from the Variability of solar IRradiance and Gravity Oscillations instrument on board the Solar and Heliospheric Observatory (SOHO) spacecraft. A similar signature of flares in velocity observations has not been confirmed until now. The study of low-degree high-frequency waves in the Sun is important for our understanding of the dynamics of the deeper solar layers. In this Letter, we present the analysis of the velocity observations of the Sun obtained from the Michelson Doppler Imager (MDI) and the Global Oscillations at Low Frequencies (GOLF) instruments on board SOHO for some major flare events of solar cycle 23. Application of wavelet techniques to the time series of disk-integrated velocity signals from the solar surface using the full-disk Dopplergrams obtained from the MDI clearly indicates that there is enhancement of high-frequency global waves in the Sun during the flares. This signature of flares is also visible in the Fourier power spectrum of these velocity oscillations. On the other hand, the analysis of disk-integrated velocity observations obtained from the GOLF shows only marginal evidence of effects of flares on high-frequency oscillations.
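Band-limited power of the kind compared in such studies can be estimated with a plain periodogram. The synthetic 60 s cadence series with an artificial 6 mHz oscillation below is purely illustrative: it shows the band-power bookkeeping for the 5.3-8.3 mHz window, not the wavelet analysis used in the Letter.

```python
import numpy as np

def band_power(signal, dt, f_lo, f_hi):
    """Summed periodogram power in the band [f_lo, f_hi] (Hz),
    from a one-sided FFT of the mean-removed series."""
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    psd = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[sel].sum()

# Synthetic stand-in for a disk-integrated velocity series: 60 s cadence,
# a weak 6 mHz oscillation buried in white noise.
dt = 60.0                                   # seconds; Nyquist = 8.33 mHz
t = np.arange(4096) * dt
rng = np.random.default_rng(2)
series = 0.5 * np.sin(2 * np.pi * 0.006 * t) + 0.1 * rng.normal(size=t.size)

p_high = band_power(series, dt, 5.3e-3, 8.3e-3)   # flare-sensitive band
p_low = band_power(series, dt, 1.0e-3, 3.0e-3)    # comparison band
```

A flare-related enhancement would show up as `p_high` rising during the event relative to quiet intervals, which is the quantity the correlation with X-ray flux is built on.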

  6. Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

    Science.gov (United States)

    Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

    2017-12-01

Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a^-1. The different schemes also impact the

  7. High spatial sampling global mode structure measurements via multichannel reflectometry in NSTX

    Energy Technology Data Exchange (ETDEWEB)

    Crocker, N A; Peebles, W A; Kubota, S; Zhang, J [Department of Physics and Astronomy, University of California-Los Angeles, Los Angeles, CA 90095-7099 (United States); Bell, R E; Fredrickson, E D; Gorelenkov, N N; LeBlanc, B P; Menard, J E; Podesta, M [Princeton Plasma Physics Laboratory, PO Box 451, Princeton, NJ 08543-0451 (United States); Sabbagh, S A [Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027 (United States); Tritz, K [Johns Hopkins University, Baltimore, MD 21218 (United States); Yuh, H [Nova Photonics, Princeton, NJ 08540 (United States)

    2011-10-15

Global modes, including kinks and tearing modes (f ≲ 50 kHz), toroidicity-induced Alfvén eigenmodes (TAE; f ≈ 50-250 kHz) and global and compressional Alfvén eigenmodes (GAE and CAE; f ≳ 400 kHz), play critical roles in many aspects of plasma performance. Their investigation on NSTX is aided by an array of fixed-frequency quadrature reflectometers used to determine their radial density perturbation structure. The array has been recently upgraded to 16 channels spanning 30-75 GHz (n_cutoff = (1.1-6.9) × 10^19 m^-3 in O-mode), improving spatial sampling and access to the core of H-mode plasmas. The upgrade has yielded significant new results that advance the understanding of global modes in NSTX. The GAE and CAE structures have been measured for the first time in the core of an NSTX high-power (6 MW) beam-heated H-mode plasma. The CAE structure is strongly core-localized, which has important implications for electron thermal transport. The TAE structure has been measured with greatly improved spatial sampling, and measurements of the TAE phase, the first in NSTX, show strong radial variation near the midplane, indicating radial propagation caused by non-ideal MHD effects. Finally, the tearing mode structure measurements provide unambiguous evidence of coupling to an external kink.

  8. Southward shift of the global wind energy resource under high carbon dioxide emissions

    Science.gov (United States)

    Karnauskas, Kristopher B.; Lundquist, Julie K.; Zhang, Lei

    2018-01-01

The use of the wind energy resource is an integral part of many nations' strategies towards realizing the carbon emissions reduction targets set forth in the Paris Agreement, and global installed wind power cumulative capacity has grown on average by 22% per year since 2006. However, assessments of the wind energy resource are usually based on today's climate, rather than taking into account that anthropogenic greenhouse gas emissions continue to modify the global atmospheric circulation. Here, we apply an industry wind turbine power curve to simulations of high and low future emissions scenarios in an ensemble of ten fully coupled global climate models to investigate large-scale changes in wind power across the globe. Our calculations reveal decreases in wind power across the Northern Hemisphere mid-latitudes and increases across the tropics and Southern Hemisphere, with substantial regional variations. The changes across the northern mid-latitudes are robust responses over time in both emissions scenarios, whereas the Southern Hemisphere changes appear critically sensitive to each individual emissions scenario. In addition, we find that established features of climate change can explain these patterns: polar amplification is implicated in the northern mid-latitude decrease in wind power, and enhanced land-sea thermal gradients account for the tropical and southern subtropical increases.
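Applying a turbine power curve to simulated wind speeds, the central step in the calculation described above, can be sketched as follows. The cut-in, rated and cut-out values and the cubic ramp are illustrative assumptions, not the industry curve used in the study.

```python
import numpy as np

def turbine_power(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Idealized power curve (MW): zero below cut-in and above cut-out,
    cubic ramp between cut-in and rated speed, flat at rated power.
    All numbers are hypothetical, for illustration only."""
    v = np.asarray(v, dtype=float)
    ramp = rated_p * (v ** 3 - cut_in ** 3) / (rated_v ** 3 - cut_in ** 3)
    p = np.where((v >= cut_in) & (v < rated_v), ramp, 0.0)
    p = np.where((v >= rated_v) & (v < cut_out), rated_p, p)
    return p

# Evaluate the curve on a few hub-height wind speeds (m/s), as one would
# on each model grid cell and time step of a climate simulation.
winds = np.array([2.0, 6.0, 12.0, 30.0])
p = turbine_power(winds)
```

The nonlinearity is the important point: because power goes roughly as the cube of speed below rating and is clipped above it, modest circulation-driven shifts in the wind speed distribution can produce disproportionate changes in generated power.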

  9. Global Sensitivity Analysis of High Speed Shaft Subsystem of a Wind Turbine Drive Train

    Directory of Open Access Journals (Sweden)

    Saeed Asadi

    2018-01-01

Full Text Available Wind turbine dynamics are a complex and critical area of study for the wind industry. Quantification of the factors affecting wind turbine performance is valuable for making improvements to both power performance and turbine health. In this paper, a global sensitivity analysis of a validated mathematical model of a high-speed shaft drive train test rig has been developed in order to evaluate the contribution of the system's input parameters to specified objective functions. The drive train in this study consists of a 3-phase induction motor, flexible shafts, a shaft coupling, a bearing housing, and a disk with an eccentric mass. The governing equations were derived using the Lagrangian formalism and were solved numerically by the Newmark method. Variance-based global sensitivity indices are introduced to evaluate the contribution of the input structural parameters to the objective functions. The current research provides informative data for the design and optimization of a drive train setup and can provide a better understanding of wind turbine drive train system dynamics with respect to different structural parameters, ultimately enabling the design of more efficient drive trains. Finally, the proposed global sensitivity analysis (GSA) methodology demonstrates the detectability of faults in different components.
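The variance-based first-order indices mentioned above can be estimated with a standard Monte Carlo "pick-freeze" scheme. The additive test function and all numbers below are stand-ins with known analytic indices, not the drive-train model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in response (e.g., a vibration amplitude): an additive function
    whose exact first-order Sobol indices are c_i^2 / sum(c_k^2)."""
    return x[:, 0] + 2.0 * x[:, 1] + 0.1 * x[:, 2]

n, d = 200_000, 3
A = rng.uniform(-1.0, 1.0, size=(n, d))     # two independent sample matrices
B = rng.uniform(-1.0, 1.0, size=(n, d))
fA, fB = model(A), model(B)
total_var = np.var(np.concatenate([fA, fB]))

def first_order_index(i):
    """Pick-freeze (Saltelli-type) estimator of the first-order index S_i."""
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # resample only coordinate i
    return np.mean(fB * (model(ABi) - fA)) / total_var

S = [first_order_index(i) for i in range(d)]
```

For the additive function used here the exact indices are 1/5.01, 4/5.01 and 0.01/5.01, so the estimator can be checked directly; the same machinery applies unchanged to an expensive simulation response.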

  10. Global Prevalence of Myopia and High Myopia and Temporal Trends from 2000 through 2050.

    Science.gov (United States)

    Holden, Brien A; Fricke, Timothy R; Wilson, David A; Jong, Monica; Naidoo, Kovin S; Sankaridurg, Padmaja; Wong, Tien Y; Naduvilath, Thomas J; Resnikoff, Serge

    2016-05-01

    Myopia is a common cause of vision loss, with uncorrected myopia the leading cause of distance vision impairment globally. Individual studies show variations in the prevalence of myopia and high myopia between regions and ethnic groups, and there continues to be uncertainty regarding increasing prevalence of myopia. Systematic review and meta-analysis. We performed a systematic review and meta-analysis of the prevalence of myopia and high myopia and estimated temporal trends from 2000 to 2050 using data published since 1995. The primary data were gathered into 5-year age groups from 0 to ≥100, in urban or rural populations in each country, standardized to definitions of myopia of -0.50 diopter (D) or less and of high myopia of -5.00 D or less, projected to the year 2010, then meta-analyzed within Global Burden of Disease (GBD) regions. Any urban or rural age group that lacked data in a GBD region took data from the most similar region. The prevalence data were combined with urbanization data and population data from United Nations Population Department (UNPD) to estimate the prevalence of myopia and high myopia in each country of the world. These estimates were combined with myopia change estimates over time derived from regression analysis of published evidence to project to each decade from 2000 through 2050. We included data from 145 studies covering 2.1 million participants. We estimated 1406 million people with myopia (22.9% of the world population; 95% confidence interval [CI], 932-1932 million [15.2%-31.5%]) and 163 million people with high myopia (2.7% of the world population; 95% CI, 86-387 million [1.4%-6.3%]) in 2000. We predict by 2050 there will be 4758 million people with myopia (49.8% of the world population; 3620-6056 million [95% CI, 43.4%-55.7%]) and 938 million people with high myopia (9.8% of the world population; 479-2104 million [95% CI, 5.7%-19.4%]). Myopia and high myopia estimates from 2000 to 2050 suggest significant increases in

  11. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

Full Text Available The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985–2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km), as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  12. Hitting times of local and global optima in genetic algorithms with very high selection pressure

    Directory of Open Access Journals (Sweden)

    Eremeev Anton V.

    2017-01-01

    Full Text Available The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant which is less than one.
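The regime studied, a non-elitist GA whose selection pressure is pushed to the extreme, can be illustrated with a toy simulation that measures the first hitting time of the optimum on OneMax. All parameters here are illustrative, and the scheme (every offspring descends from the current best, with no elitism) is a simplification for demonstration, not the Canonical Genetic Algorithm analyzed in the paper.

```python
import random

def onemax(bits):
    """Fitness: number of ones; the global optimum is the all-ones string."""
    return sum(bits)

def run_ga(n=20, pop_size=40, mut_rate=None, max_gens=10_000, seed=0):
    """Non-elitist GA with extreme (truncation-to-one) selection pressure:
    every individual in the next generation is a mutated copy of the current
    best, and the best itself is not preserved."""
    rng = random.Random(seed)
    mut = (1.0 / n) if mut_rate is None else mut_rate   # standard 1/n rate
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for gen in range(max_gens):
        best = max(pop, key=onemax)
        if onemax(best) == n:
            return gen                                  # first hitting time
        pop = [[(1 - b) if rng.random() < mut else b for b in best]
               for _ in range(pop_size)]
    return None                                         # optimum not reached

gens = run_ga()
```

Averaging `gens` over many seeds gives an empirical estimate of the expected first hitting time that upper bounds of the kind derived in the paper can be compared against.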

  13. Michelson Interferometer for Global High-Resolution Thermospheric Imaging (MIGHTI): Monolithic Interferometer Design and Test

    Science.gov (United States)

    Harlander, John M.; Englert, Christoph R.; Brown, Charles M.; Marr, Kenneth D.; Miller, Ian J.; Zastera, Vaz; Bach, Bernhard W.; Mende, Stephen B.

    2017-10-01

    The design and laboratory tests of the interferometers for the Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument which measures thermospheric wind and temperature for the NASA-sponsored Ionospheric Connection (ICON) Explorer mission are described. The monolithic interferometers use the Doppler Asymmetric Spatial Heterodyne (DASH) Spectroscopy technique for wind measurements and a multi-element photometer approach to measure thermospheric temperatures. The DASH technique and overall optical design of the MIGHTI instrument are described in an overview followed by details on the design, element fabrication, assembly, laboratory tests and thermal control of the interferometers that are the heart of MIGHTI.

  14. Global Nonexistence of Solutions for Viscoelastic Wave Equations of Kirchhoff Type with High Energy

    Directory of Open Access Journals (Sweden)

    Gang Li

    2012-01-01

Full Text Available We consider viscoelastic wave equations of the Kirchhoff type $u_{tt} - M(\|\nabla u\|_2^2)\Delta u + \int_0^t g(t-s)\Delta u(s)\,\mathrm{d}s + u_t = |u|^{p-1}u$ with Dirichlet boundary conditions, where $\|\cdot\|_p$ denotes the norm in the Lebesgue space $L^p$. Under some suitable assumptions on $g$ and the initial data, we establish a global nonexistence result for certain solutions with arbitrarily high energy, in the sense that $\lim_{t\to T^{*-}}\big(\|u(t)\|_2^2+\int_0^t\|u(s)\|_2^2\,\mathrm{d}s\big)=\infty$ for some $0 < T^* < +\infty$.

  15. Global Warming in Schools: An Inquiry about the Competing Conceptions of High School Social Studies and Science Curricula and Teachers

    Science.gov (United States)

    Meehan, Casey R.

    Despite the scientific consensus supporting the theory of anthropogenic (human-induced) global warming, whether global warming is a serious problem, whether human activity is the primary cause of it, and whether scientific consensus exists at all are controversial questions among the U.S. lay-public. The cultural theory of risk perception (Schwarz and Thompson, 1990) serves as the theoretical framework for this qualitative analysis in which I ask the question how do U.S. secondary school curricula and teachers deal with the disparity between the overwhelming scientific consensus and the lay-public's skepticism regarding global warming? I analyzed nine widely used social studies and science textbooks, eight sets of supplemental materials about global warming produced by a range of not-for-profit and governmental organizations, and interviewed fourteen high school teachers who had experience teaching formal lessons about global warming in their content area. Findings suggest: 1) the range of global warming content within social studies and science textbooks and supplemental curricula reflects the spectrum of conceptualizations found among members of the U.S. public; 2) global warming curricula communicate only a narrow range of strategies for dealing with global warming and its associated threats; and 3) social studies and science teachers report taking a range of stances about global warming in their classroom, but sometimes the stance they put forth to their students does not align with their personal beliefs about global warming. The findings pose a troubling conundrum. Some of the global warming curricula treat the cause of global warming--a question that is not scientifically controversial--as a question with multiple and competing "right" answers. At the same time, much of curricula position how we should address global warming--a question that is legitimately controversial--as a question with one correct answer despite there being many reasonable responses

  16. Infrared radiation parameterizations for the minor CO2 bands and for several CFC bands in the window region

    Science.gov (United States)

    Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.

    1993-01-01

    Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.

  17. Parameterizing Coefficients of a POD-Based Dynamical System

    Science.gov (United States)

    Kalb, Virginia L.

    2010-01-01

A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter

  18. Sources of Global Academic Self-Efficacy in Academically High-Achieving Females before the Onset of Disordered Eating

    Science.gov (United States)

    Krafchek, Jennifer; Kronborg, Leonie

    2015-01-01

    There is limited research applying the four sources of self-efficacy (Bandura, 1997) to global academic self-efficacy. This qualitative study examined the sources of global academic self-efficacy in a sample of academically high-achieving females who developed disordered eating. Semistructured interviews were conducted with 14 participants to gain…

  19. Derivation and analysis of a high-resolution estimate of global permafrost zonation

    Directory of Open Access Journals (Sweden)

    S. Gruber

    2012-02-01

Full Text Available Permafrost underlies much of Earth's surface and interacts with climate, ecosystems and human systems. It is a complex phenomenon controlled by climate and (sub)surface properties and reacts to change with variable delay. Heterogeneity and sparse data challenge the modeling of its spatial distribution. Currently, there is no data set to adequately inform global studies of permafrost. The available data set for the Northern Hemisphere is frequently used for model evaluation, but its quality and consistency are difficult to assess. Here, a global model of permafrost extent and a dataset of permafrost zonation are presented and discussed, extending earlier studies by including the Southern Hemisphere, by consistent data and methods, and by attention to uncertainty and scaling. Established relationships between air temperature and the occurrence of permafrost are re-formulated into a model that is parametrized using published estimates. It is run with high-resolution (<1 km) global elevation data and air temperatures based on the NCAR-NCEP reanalysis and CRU TS 2.0. The resulting data provide more spatial detail and a consistent extrapolation to remote regions, while aggregated values resemble previous studies. The estimated uncertainties affect regional patterns and aggregate numbers, and provide interesting insight. The permafrost area, i.e. the actual surface area underlain by permafrost, north of 60° S is estimated to be 13–18 × 10⁶ km², or 9–14 % of the exposed land surface. The global permafrost area including Antarctic and sub-sea permafrost is estimated to be 16–21 × 10⁶ km². The global permafrost region, i.e. the exposed land surface below which some permafrost can be expected, is estimated to be 22 ± 3 × 10⁶ km². A large proportion of this exhibits considerable topography and spatially-discontinuous permafrost, underscoring the importance of attention to scaling issues.
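The kind of air-temperature-based permafrost model described above can be sketched in miniature: a coarse-cell mean annual air temperature (MAAT) is downscaled to sub-grid elevations with a constant lapse rate, and each point is assigned a permafrost zone by simple MAAT thresholds. The lapse rate and thresholds below are generic placeholder values, not those of the paper.

```python
# Illustrative lapse-rate downscaling plus threshold zonation.
LAPSE_RATE = -0.0065  # K per metre, standard-atmosphere assumption

def downscale_maat(cell_maat, cell_elev, point_elev):
    """Shift MAAT from the coarse-cell mean elevation to a point elevation."""
    return cell_maat + LAPSE_RATE * (point_elev - cell_elev)

def permafrost_zone(maat):
    """Classify by MAAT (placeholder thresholds, deg C)."""
    if maat < -6.0:
        return "continuous"
    if maat < -2.0:
        return "discontinuous"
    if maat < 0.0:
        return "sporadic"
    return "none"

# Example: a cell with MAAT +1 C at 500 m mean elevation contains a 2500 m peak.
peak_maat = downscale_maat(1.0, 500.0, 2500.0)
print(round(peak_maat, 2), permafrost_zone(peak_maat))  # -12.0 continuous
```

This captures why sub-kilometre elevation data change the picture: the coarse cell as a whole would be classified permafrost-free, while its high terrain is not.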

  20. Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations

    Science.gov (United States)

    Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.

    2009-01-01

Steady increases in computing power have allowed for numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud resolving models, these high resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events. Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecast (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate some assumptions in the size distribution of a small number of hydrometeor classes in order to predict their evolution, advection and precipitation within the forecast domain. Although many of these schemes produce similar forecasts of events on the synoptic scale, there are often significant differences in the details regarding precipitation and cloud cover, as well as the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz, Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" of polar orbiting satellites tasked with cataloguing global cloud
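The size-distribution assumption behind single-moment bulk schemes can be made concrete: with an inverse-exponential distribution N(D) = N0·exp(-λD), hydrometeor mass content and the sixth moment (proportional to the Rayleigh radar reflectivity factor) have closed forms, which a numerical quadrature can confirm. The values below are generic Marshall-Palmer-like numbers, not taken from any specific WRF scheme.

```python
# Closed-form moments of an inverse-exponential size distribution,
# checked against numerical quadrature.
import math

N0 = 8.0e6        # intercept parameter, m^-4 (illustrative)
lam = 2.0e3       # slope parameter, m^-1 (illustrative)
rho_w = 1000.0    # water density, kg m^-3

# integral of D^n * exp(-lam*D) dD over (0, inf) = n! / lam^(n+1)
mass = N0 * (math.pi / 6.0) * rho_w * math.factorial(3) / lam ** 4   # kg m^-3
z6 = N0 * math.factorial(6) / lam ** 7                               # m^3

def moment_numeric(n, steps=200000, dmax=0.02):
    """Rectangle-rule quadrature of the n-th moment of N(D)."""
    dD = dmax / steps
    s = 0.0
    for i in range(1, steps):
        D = i * dD
        s += D ** n * math.exp(-lam * D)
    return N0 * s * dD

print(abs(moment_numeric(3) * (math.pi / 6) * rho_w - mass) / mass < 1e-3)
print(abs(moment_numeric(6) - z6) / z6 < 1e-3)
```

Because reflectivity weights the distribution by D⁶ while mass weights it by D³, schemes with the same water mass but different assumed N0 or λ produce different simulated reflectivities, which is exactly what radar comparisons like the one above probe.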

  1. Global carbon monoxide vertical distributions from spaceborne high-resolution FTIR nadir measurements

    Directory of Open Access Journals (Sweden)

    B. Barret

    2005-01-01

Full Text Available This paper presents the first global distributions of CO vertical profiles retrieved from a thermal infrared FTS working in the nadir geometry. It is based on the exploitation of the high resolution and high quality spectra measured by the Interferometric Monitor of Greenhouse gases (IMG), which flew onboard the Japanese ADEOS platform in 1996-1997. The retrievals are performed with an algorithm based on the Optimal Estimation Method (OEM) and are characterized in terms of vertical sensitivity and error budget. It is found that most of the IMG measurements contain between 1.5 and 2.2 independent pieces of information about the vertical distribution of CO from the lower troposphere to the upper troposphere-lower stratosphere (UTLS). The retrievals are validated against coincident NOAA/CMDL in situ surface measurements and NDSC/FTIR total column measurements. The retrieved global distributions of CO are also found to be in good agreement with the distributions modeled by the GEOS-CHEM 3D CTM, highlighting the ability of IMG to capture the horizontal as well as the vertical structure of the CO distributions.

  2. Global scaling analysis for the pebble bed advanced high temperature reactor

    International Nuclear Information System (INIS)

    Blandford, E.D.; Peterson, P.F.

    2009-01-01

    Scaled Integral Effects Test (IET) facilities play a critical role in the design certification process of innovative reactor designs. Best-estimate system analysis codes, which minimize deliberate conservatism, require confirmatory data during the validation process to ensure an acceptable level of accuracy as defined by the regulator. The modular Pebble Bed Advanced High Temperature Reactor (PB-AHTR), with a nominal power output of 900 MWth, is the most recent UC Berkeley design for a liquid fluoride salt cooled, solid fuel reactor. The PB-AHTR takes advantage of technologies developed for gas-cooled high temperature thermal and fast reactors, sodium fast reactors, and molten salt reactors. In this paper, non-dimensional scaling groups and similarity criteria are presented at the global system level for a loss of forced circulation transient, where single-phase natural circulation is the primary mechanism for decay heat removal following a primary pump trip. Due to very large margin to fuel damage temperatures, the peak metal temperature of primary-loop components was identified as the key safety parameter of interest. Fractional Scaling Analysis (FSA) methods were used to quantify the intensity of each transfer process during the transient and subsequently rank them by their relative importance while identifying key sources of distortion between the prototype and model. The results show that the development of a scaling hierarchy at the global system level informs the bottom-up scaling analysis. (author)

  3. submitter Data-driven RBE parameterization for helium ion beams

    CERN Document Server

    Mairani, A; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T

    2016-01-01

Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $(\alpha/\beta)_{\mathrm{ph}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\mathrm{RBE}_{\alpha}=\alpha_{\mathrm{He}}/\alpha_{\mathrm{ph}}$ and $R_{\beta}=\beta_{\mathrm{He}}/\beta_{\mathrm{ph}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\mathrm{RBE}_{10}$) are compared with the experimental ones. Pearson's correlati...
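The LQ-model machinery behind such a parameterization can be sketched: given photon parameters and the ratios RBE_α and R_β, the iso-effective helium dose solves α_He·d + β_He·d² = α_ph·D + β_ph·D², and RBE = D/d. The linear LET dependence used below for RBE_α is a placeholder for illustration only; the published parameterization's analytic expressions differ.

```python
# Minimal LQ-model RBE calculation with placeholder ratio models.
import math

def rbe_at_dose(d_photon, alpha_ph, beta_ph, let_kev_um):
    # Placeholder LET dependences (NOT the published fits):
    rbe_alpha = 1.0 + 0.05 * let_kev_um      # alpha_He / alpha_ph
    r_beta = 1.0                              # beta_He / beta_ph
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph
    # Photon effect (negative log survival) at dose d_photon:
    effect = alpha_ph * d_photon + beta_ph * d_photon ** 2
    # Solve alpha_he*d + beta_he*d^2 = effect for the iso-effective He dose:
    d_he = (-alpha_he + math.sqrt(alpha_he ** 2 + 4 * beta_he * effect)) / (2 * beta_he)
    return d_photon / d_he

# Sanity check: at LET -> 0 the placeholder ratios reduce to 1, so RBE -> 1.
print(round(rbe_at_dose(2.0, 0.1, 0.05, 0.0), 6))
```

The same structure underlies TP use: the analytic ratio fits replace the placeholders, and the quadratic solve converts photon prescriptions into helium doses voxel by voxel.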

  4. Reliable control using the primary and dual Youla parameterizations

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.

    2002-01-01

Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again, mainly from a fault modeling point of view. Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced, which can be used to analyze how large faults

  5. IR OPTICS MEASUREMENT WITH LINEAR COUPLING'S ACTION-ANGLE PARAMETERIZATION

    International Nuclear Information System (INIS)

    LUO, Y.; BAI, M.; PILAT, R.; SATOGATA, T.; TRBOJEVIC, D.

    2005-01-01

A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the Twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. Example TBT BPM data were acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics
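The "fast estimate" idea can be illustrated with the drift-space optics alone: in the field-free IR drift, β(s) = β_w + (s − s_w)²/β_w, so β measured at two BPMs at s = ±L around the nominal IP determines the waist value β_w and its position s_w (up to a known two-root ambiguity; the smaller root is taken here). The numbers below are illustrative, not RHIC optics.

```python
# Waist value and position from beta measured at two BPMs bracketing a drift.
import math

def waist_from_bpms(beta1, beta2, L):
    """beta1 at s = -L, beta2 at s = +L, relative to the nominal IP."""
    A = (beta1 - beta2) / (4.0 * L)          # equals s_w / beta_w
    S = 0.5 * (beta1 + beta2)
    # beta_w solves (1 + A^2)*beta_w^2 - S*beta_w + L^2 = 0
    disc = S * S - 4.0 * (1.0 + A * A) * L * L
    beta_w = (S - math.sqrt(disc)) / (2.0 * (1.0 + A * A))  # smaller root
    return beta_w, A * beta_w                 # (waist beta, waist position)

# Forward model: beta_w = 1.0 m, waist shifted by s_w = 0.2 m, BPMs at +/-10 m.
b1 = 1.0 + (-10.0 - 0.2) ** 2 / 1.0
b2 = 1.0 + (10.0 - 0.2) ** 2 / 1.0
print(waist_from_bpms(b1, b2, 10.0))  # recovers ~(1.0, 0.2)
```

The difference β1 − β2 isolates the waist shift while the sum fixes the waist value, which is why two BPMs straddling the drift suffice for the estimate.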

  6. Elastic FWI for VTI media: A synthetic parameterization study

    KAUST Repository

    Kamath, Nishant

    2016-09-06

    A major challenge for multiparameter full-waveform inversion (FWI) is the inherent trade-offs (or cross-talk) between model parameters. Here, we perform FWI of multicomponent data generated for a synthetic VTI (transversely isotropic with a vertical symmetry axis) model based on a geologic section of the Valhall field. A horizontal displacement source, which excites intensive shear waves in the conventional offset range, helps provide more accurate updates to the SV-wave vertical velocity. We test three model parameterizations, which exhibit different radiation patterns and, therefore, create different parameter trade-offs. The results show that the choice of parameterization for FWI depends on the availability of long-offset data, the quality of the initial model for the anisotropy coefficients, and the parameter that needs to be resolved with the highest accuracy.

  7. Parameterization of phase change of water in a mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Eppel, D; Grassl, H

    1987-01-01

A parameterization scheme for the phase change of water is suggested for use in the 3-D numerical nonhydrostatic model GESIMA. The microphysical formulation follows the so-called bulk technique. With this procedure, the net production rates in the balance equations for water and potential temperature are given for both the liquid and ice phases. Convectively stable as well as convectively unstable mesoscale systems are considered. With 2 figs.

  8. Robust parameterization of elastic and absorptive electron atomic scattering factors

    International Nuclear Information System (INIS)

    Peng, L.M.; Ren, G.; Dudarev, S.L.; Whelan, M.J.

    1996-01-01

A robust algorithm and computer program have been developed for the parameterization of elastic and absorptive electron atomic scattering factors. The algorithm is based on a combined modified simulated-annealing and least-squares method, and the computer program works well for fitting both elastic and absorptive atomic scattering factors with five Gaussians. As an application of this program, the elastic electron atomic scattering factors have been parameterized for all neutral atoms and for s up to 6 Å⁻¹. Error analysis shows that the present results are considerably more accurate than the previous analytical fits in terms of the mean square value of the deviation between the numerical and fitted scattering factors. Parameterization for absorptive atomic scattering factors has been made for 17 important materials with the zinc blende structure over the temperature range 1 to 1000 K, where appropriate, and for temperature ranges for which accurate Debye-Waller factors are available. For other materials, the parameterization of the absorptive electron atomic scattering factors can be made using the program by supplying the atomic number of the element, the Debye-Waller factor and the acceleration voltage. For ions or when more accurate numerical results for neutral atoms are available, the program can read in the numerical values of the elastic scattering factors and return the parameters for both the elastic and absorptive scattering factors. The computer routines developed have been tested both on computer workstations and desktop PC computers, and will be made freely available via electronic mail or on floppy disk upon request. (orig.)
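A toy version of the combined stochastic-search/least-squares idea: fit a sum of Gaussians f(s) = Σᵢ aᵢ·exp(−bᵢs²) to sampled data with a simulated-annealing-style random walk that keeps the best least-squares cost seen. Two Gaussians and generic synthetic data are used for brevity; the actual program fits five Gaussians to tabulated scattering factors.

```python
# Simulated-annealing-style fit of a two-Gaussian model to synthetic data.
import math, random

S = [i * 0.25 for i in range(25)]   # s grid, roughly 0..6 (units arbitrary)
TARGET = [2.0 * math.exp(-1.5 * s * s) + 0.5 * math.exp(-0.2 * s * s) for s in S]

def cost(p):
    """Least-squares misfit of parameters p = [a1, b1, a2, b2]."""
    a1, b1, a2, b2 = p
    return sum((a1 * math.exp(-b1 * s * s) + a2 * math.exp(-b2 * s * s) - t) ** 2
               for s, t in zip(S, TARGET))

def anneal(seed=0, steps=4000):
    rng = random.Random(seed)
    p = [1.0, 1.0, 1.0, 1.0]
    best_p, best_c = list(p), cost(p)
    temp = 1.0
    for _ in range(steps):
        q = [x + rng.gauss(0.0, 0.1 * temp) for x in p]
        if all(x > 0 for x in q):                 # keep parameters positive
            dc = cost(q) - cost(p)
            if dc < 0 or rng.random() < math.exp(-dc / max(temp, 1e-9)):
                p = q
                if cost(p) < best_c:
                    best_p, best_c = list(p), cost(p)
        temp *= 0.999                              # geometric cooling
    return best_p, best_c

initial = cost([1.0, 1.0, 1.0, 1.0])
params, final = anneal()
print(final < initial)
```

In the real program the annealing stage would be followed by a local least-squares refinement of the best parameter set, which is the "combined" aspect the abstract describes.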

  9. New representation of water activity based on a single solute specific constant to parameterize the hygroscopic growth of aerosols in atmospheric models

    Directory of Open Access Journals (Sweden)

    S. Metzger

    2012-06-01

Full Text Available Water activity is a key factor in aerosol thermodynamics and hygroscopic growth. We introduce a new representation of water activity (a_w), which is empirically related to the solute molality (μ_s) through a single solute-specific constant, ν_i. Our approach is widely applicable, considers the Kelvin effect and covers ideal solutions at high relative humidity (RH), including cloud condensation nuclei (CCN) activation. It also encompasses concentrated solutions with high ionic strength at low RH, such as at the relative humidity of deliquescence (RHD). The constant ν_i can thus be used to parameterize the aerosol hygroscopic growth over a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. In contrast to other a_w representations, our ν_i factor corrects the solute molality both linearly and in the exponent, in the form x · a^x. We present four representations of our basic a_w parameterization at different levels of complexity for different a_w ranges, e.g. up to 0.95, 0.98 or 1. ν_i is constant over the selected a_w range, and in its most comprehensive form, the parameterization describes the entire a_w range (0–1). In this work we focus on single-solute solutions. ν_i can be pre-determined with a root-finding method from our water activity representation using an a_w–μ_s data pair, e.g. at solute saturation using RHD and solubility measurements. Our a_w and supersaturation (Köhler theory) results compare well with the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. Envisaged applications include regional and global atmospheric chemistry and
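The pre-determination step can be sketched: given one (a_w, μ_s) data pair, solve for the single solute-specific constant ν_i by root finding. The functional form used below, a_w = exp(−ν_i·M_w·μ_s), is a deliberately simplified stand-in for illustration only; the published parameterization is more general (and covers the Kelvin effect and the full 0-1 a_w range). The calibration numbers are plausible saturation values, not fitted constants from the paper.

```python
# Bisection root finding of a solute-specific constant from one data pair.
import math

MW = 0.018015  # molar mass of water, kg/mol

def aw_model(nu, molality):
    """Simplified illustrative water-activity form (NOT the published one)."""
    return math.exp(-nu * MW * molality)

def fit_nu(aw_obs, molality, lo=0.01, hi=100.0, tol=1e-12):
    """Bisection: aw_model is monotonically decreasing in nu."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if aw_model(mid, molality) > aw_obs:
            lo = mid            # modeled a_w too high -> need larger nu
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Calibrate on a saturation-like point (a_w = RHD = 0.753 at 6.1 mol/kg):
nu = fit_nu(0.753, 6.1)
print(abs(aw_model(nu, 6.1) - 0.753) < 1e-9)
```

Once ν_i is pinned down from a single RHD/solubility pair, the same closed form predicts a_w, and hence hygroscopic growth, at every other molality, which is the economy the abstract emphasizes.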

  10. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

There is an increasing call for high resolution soil information at the global level. A good example for such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts do exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess e.g., global soil carbon stocks. However, they do not allow for simulation runs with e.g., crop growth simulation models as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database.
Step 3: A model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property

  11. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    Science.gov (United States)

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.

  12. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Marjanovic, Nikola [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA; Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Mirocha, Jeffrey D. [Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Kosović, Branko [Research Applications Laboratory, Weather Systems and Assessment Program, University Corporation for Atmospheric Research, PO Box 3000, Boulder, Colorado 80307, USA; Lundquist, Julie K. [Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Campus Box 311, Boulder, Colorado 80309, USA; National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, USA; Chow, Fotini Katopodes [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA

    2017-11-01

A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.
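The core of any actuator line or disk method is the projection step: aerodynamic forces computed at discrete blade points are smeared onto the flow grid with a Gaussian kernel so that the imposed body force integrates to the blade force. The following 1-D sketch with made-up numbers illustrates that conservation property; it is not the WRF implementation.

```python
# Gaussian projection of a point blade force onto a 1-D grid.
import math

EPS = 2.0     # Gaussian smearing width (m), typically a few grid spacings
DX = 1.0      # grid spacing (m)

def project_force(f_point, x_point, grid_x):
    """Distribute a point force over grid nodes with a normalized Gaussian."""
    norm = 1.0 / (EPS * math.sqrt(math.pi))
    return [f_point * norm * math.exp(-((x - x_point) / EPS) ** 2)
            for x in grid_x]

grid = [i * DX for i in range(-50, 51)]
body_force = project_force(1000.0, 3.7, grid)     # 1 kN applied at x = 3.7 m

# The discretized kernel conserves the total force to numerical precision:
print(round(sum(f * DX for f in body_force), 3))
```

The smearing width EPS controls the trade-off the abstract's resolution comparison exposes: small EPS resolves tip and root vortices but demands a fine grid, while a wide kernel (or a disk average) recovers only aggregated wake characteristics.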

  13. Air quality modeling: evaluation of chemical and meteorological parameterizations

    International Nuclear Information System (INIS)

    Kim, Youngseob

    2011-01-01

The influence of chemical mechanisms and meteorological parameterizations on pollutant concentrations calculated with an air quality model is studied. The influence of the differences between two gas-phase chemical mechanisms on the formation of ozone and aerosols in Europe is low on average. For ozone, the large local differences are mainly due to the uncertainty associated with the kinetics of nitrogen monoxide (NO) oxidation reactions on the one hand and the representation of different pathways for the oxidation of aromatic compounds on the other hand. The aerosol concentrations are mainly influenced by the selection of all major precursors of secondary aerosols and the explicit treatment of chemical regimes corresponding to the nitrogen oxides (NOx) levels. The influence of the meteorological parameterizations on the concentrations of aerosols and their vertical distribution is evaluated over the Paris region in France by comparison to lidar data. The influence of the parameterization of the dynamics in the atmospheric boundary layer is important; however, it is the use of an urban canopy model that improves significantly the modeling of the pollutant vertical distribution (author) [fr]

  14. HURRICANE AND SEVERE STORM SENTINEL (HS3) GLOBAL HAWK HIGH ALTITUDE MMIC SOUNDING RADIOMETER (HAMSR) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Hurricane and Severe Storm Sentinel (HS3) Global Hawk High Altitude MMIC Sounding Radiometer (HAMSR) datasets include measurements gathered by the HAMSR...

  15. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    Science.gov (United States)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could therefore result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in WRF uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases when physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients.
    The presentation will highlight the advantages of the 3D PBL parameterization.
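
    As a rough sanity check on the scale of the nested LES quoted in this abstract, the grid dimensions alone fix the cell count and horizontal extent of each domain. The sketch below uses only the numbers stated above (6000 x 3000 x 200 cells at 30 m and 10 m spacing):

```python
# Back-of-the-envelope check of the nested-LES setup described above:
# 6000 x 3000 x 200 grid cells per domain, at 30 m and 10 m spacing.
nx, ny, nz = 6000, 3000, 200
cells = nx * ny * nz
print(f"cells per domain: {cells:,}")  # 3,600,000,000
for dx in (30, 10):
    # Horizontal extent of the domain at this grid spacing, in km.
    print(f"dx = {dx:>2} m -> extent {nx * dx / 1000:.0f} km x {ny * dx / 1000:.0f} km")
```

    The 30 m domain spans roughly 180 km x 90 km, consistent with covering a large fraction of the WFIP2 study area; each domain holds 3.6 billion cells.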

  16. A study of the radiative forcing and global warming potentials of hydrofluorocarbons

    International Nuclear Information System (INIS)

    Zhang Hua; Wu Jinxiu; Lu Peng

    2011-01-01

    We developed a new radiation parameterization of hydrofluorocarbons (HFCs), using the correlated k-distribution method and the high-resolution transmission molecular absorption (HITRAN) 2004 database. We examined the instantaneous and stratospheric-adjusted radiative efficiencies of HFCs for clear-sky and all-sky conditions. We also calculated the radiative forcing of HFCs from preindustrial times to the present and for future scenarios given by the Intergovernmental Panel on Climate Change Special Report on Emission Scenarios (SRES). Global warming potential and global temperature potential were then examined and compared on the basis of the calculated radiative efficiencies. Finally, we discuss surface temperature changes due to various HFC emissions.

  17. Development of Modal Analysis for the Study of Global Modes in High Speed Boundary Layer Flows

    Science.gov (United States)

    Brock, Joseph Michael

    Boundary-layer transition for compressible flows remains a challenging and unsolved problem. In the context of high-speed compressible flow, transitional and turbulent boundary layers produce significantly higher surface heating, caused by an increase in skin friction. The higher heating associated with transitional and turbulent boundary layers drives thermal protection system (TPS) design and mission trajectory bounds. Proper understanding of the mechanisms that drive transition is crucial to the successful design and operation of next-generation spacecraft. Currently, prediction of boundary-layer transition is based on experimental efforts and computational stability analysis. Computational analysis, anchored by experimental correlations, offers an avenue to assess and predict stability at reduced cost. Classical methods of Linearized Stability Theory (LST) and Parabolized Stability Equations (PSE) have proven very useful for simple geometries and base flows. Under certain conditions the assumptions inherent to these classical methods become invalid and the use of LST/PSE is inaccurate. In these situations, a global approach must be considered. A TriGlobal stability analysis code, Global Mode Analysis in US3D (GMAUS3D), has been developed and implemented into the unstructured solver US3D. A discussion of the methodology and implementation will be presented. Two flow configurations are presented in an effort to validate and verify the approach. First, stability analysis for a subsonic cylinder wake is performed and results are compared to the literature. Second, a supersonic blunt cone is considered to directly compare LST/PSE analysis with results generated by GMAUS3D.

  18. Research into the influence of spatial variability and scale on the parameterization of hydrological processes

    Science.gov (United States)

    Wood, Eric F.

    1993-01-01

    The objectives of the research were as follows: (1) Extend the Representative Elementary Area (REA) concept, first proposed and developed in Wood et al. (1988), to the water balance fluxes of the interstorm period (redistribution, evapotranspiration, and baseflow) necessary for the analysis of long-term water balance processes. (2) Derive spatially averaged water balance model equations for spatially variable soil, topography, and vegetation over a range of climates. This is a necessary step in our goal to derive consistent hydrologic results up to GCM grid scales necessary for global climate modeling. (3) Apply the above macroscale water balance equations with remotely sensed data and begin to explore the feasibility of parameterizing the water balance constitutive equations at GCM grid scale.

  19. A parameterization method and application in breast tomosynthesis dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinhua; Zhang, Da; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States)

    2013-09-15

    Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterizations of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: MGD conversion factors are usually listed in lookup tables against factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computation of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
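
    The fitting step described in this abstract can be sketched as an ordinary least-squares polynomial fit, which NumPy's `lstsq` solves via SVD internally. The table values below are hypothetical placeholders, not data from any of the cited references:

```python
import numpy as np

# Hypothetical lookup table: MGD conversion factor vs. breast thickness (cm).
# These numbers are invented for illustration only.
thickness = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
factor = np.array([0.40, 0.34, 0.29, 0.25, 0.22, 0.20, 0.18])

# Design matrix for a cubic polynomial in thickness; lstsq performs an
# SVD-based least-squares solve, mirroring the paper's general approach.
A = np.vander(thickness, 4)
coeffs, *_ = np.linalg.lstsq(A, factor, rcond=None)

# Evaluate the parameterized fit and report the worst-case disagreement
# with the tabulated values, as a percentage.
fit = A @ coeffs
max_err = np.max(np.abs(fit - factor) / factor) * 100
print(f"max relative error: {max_err:.2f}%")
```

    Replacing a multi-dimensional lookup table with a small set of polynomial coefficients like this is what allows MGD factors to be computed analytically instead of interpolated.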

  20. Validation of High Wind Retrievals from the Cyclone Global Navigation Satellite System (CYGNSS) Mission

    Science.gov (United States)

    McKague, D. S.; Ruf, C. S.; Balasubramaniam, R.; Clarizia, M. P.

    2017-12-01

    The Cyclone Global Navigation Satellite System (CYGNSS) mission, launched in December of 2016, provides all-weather observations of sea surface winds. Using GPS-based bistatic reflectometry, the CYGNSS satellites can estimate sea surface winds even through a hurricane eye wall. This, combined with the high temporal resolution of the CYGNSS constellation (median revisit time of 2.8 hours), yields unprecedented ability to estimate hurricane-strength winds. While there are a number of other sources of sea surface wind estimates, such as buoys, dropsondes, passive and active microwave from aircraft and satellite, and models, the combination of all-weather capability, high accuracy, short revisit time, high spatial coverage, and continuous operation of the CYGNSS mission enables significant advances in the understanding, monitoring, and prediction of cyclones. Validating CYGNSS wind retrievals over the bulk of the global wind speed distribution, which peaks at around 7 meters per second, is relatively straightforward, requiring spatial-temporal matching of observations with independent sources (such as those mentioned above). Validating CYGNSS wind retrievals for "high" winds (> 20 meters per second), though, is problematic: such winds occur only in intense storms, so validation opportunities are infrequent and data collection is hampered by the storms' intensity. Nevertheless, such storms are important to study because of their high potential for damage and loss of life. This presentation will describe the efforts of the CYGNSS Calibration/Validation team to gather measurements of high sea surface winds for development and validation of the CYGNSS geophysical model function (GMF), which forms the basis of retrieving winds from CYGNSS observations. The bulk of these observations come from buoy measurements as well as aircraft ("hurricane hunter") measurements from passive microwave instruments and dropsondes. These data are matched in space and time to CYGNSS observations for training of the

  1. Global spectroscopic survey of cloud thermodynamic phase at high spatial resolution, 2005-2015

    Science.gov (United States)

    Thompson, David R.; Kahn, Brian H.; Green, Robert O.; Chien, Steve A.; Middleton, Elizabeth M.; Tran, Daniel Q.

    2018-02-01

    The distribution of ice, liquid, and mixed phase clouds is important for Earth's planetary radiation budget, impacting cloud optical properties, evolution, and solar reflectivity. Most remote orbital thermodynamic phase measurements observe kilometer scales and are insensitive to mixed phases. This under-constrains important processes with outsize radiative forcing impact, such as spatial partitioning in mixed phase clouds. To date, the fine spatial structure of cloud phase has not been measured at global scales. Imaging spectroscopy of reflected solar energy from 1.4 to 1.8 µm can address this gap: it directly measures ice and water absorption, a robust indicator of cloud top thermodynamic phase, with spatial resolution of tens to hundreds of meters. We report the first such global high spatial resolution survey based on data from 2005 to 2015 acquired by the Hyperion imaging spectrometer onboard NASA's Earth Observing-1 (EO-1) spacecraft. Seasonal and latitudinal distributions corroborate observations by the Atmospheric Infrared Sounder (AIRS). For extratropical cloud systems, just 25 % of variance observed at GCM grid scales of 100 km was related to irreducible measurement error, while 75 % was explained by spatial correlations possible at finer resolutions.

  2. The evolution of biomass-burning aerosol size distributions due to coagulation: dependence on fire and meteorological details and parameterization

    Directory of Open Access Journals (Sweden)

    K. M. Sakamoto

    2016-06-01

    Biomass-burning aerosols have a significant effect on global and regional aerosol climate forcings. To model the magnitude of these effects accurately requires knowledge of the size distribution of the emitted and evolving aerosol particles. Current biomass-burning inventories do not include size distributions, and global and regional models generally assume a fixed size distribution from all biomass-burning emissions. However, biomass-burning size distributions evolve in the plume due to coagulation and net organic aerosol (OA) evaporation or formation, and the plume processes occur on spatial scales smaller than global/regional-model grid boxes. The extent of this size-distribution evolution is dependent on a variety of factors relating to the emission source and atmospheric conditions. Therefore, accurately accounting for biomass-burning aerosol size in global models requires an effective aerosol size distribution that accounts for this sub-grid evolution and can be derived from available emission-inventory and meteorological parameters. In this paper, we perform a detailed investigation of the effects of coagulation on the aerosol size distribution in biomass-burning plumes. We compare the effect of coagulation to that of OA evaporation and formation. We develop coagulation-only parameterizations for effective biomass-burning size distributions using the SAM-TOMAS large-eddy simulation plume model. For the most-sophisticated parameterization, we use the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) to build a parameterization of the aged size distribution based on the SAM-TOMAS output and seven inputs: emission median dry diameter, emission distribution modal width, mass emissions flux, fire area, mean boundary-layer wind speed, plume mixing depth, and time/distance since emission. This parameterization was tested against an independent set of SAM-TOMAS simulations and yields R2 values of 0.83 and 0.89 for Dpm and modal width, respectively.

  3. The evolution of biomass-burning aerosol size distributions due to coagulation: dependence on fire and meteorological details and parameterization

    Science.gov (United States)

    Sakamoto, Kimiko M.; Laing, James R.; Stevens, Robin G.; Jaffe, Daniel A.; Pierce, Jeffrey R.

    2016-06-01

    Biomass-burning aerosols have a significant effect on global and regional aerosol climate forcings. To model the magnitude of these effects accurately requires knowledge of the size distribution of the emitted and evolving aerosol particles. Current biomass-burning inventories do not include size distributions, and global and regional models generally assume a fixed size distribution from all biomass-burning emissions. However, biomass-burning size distributions evolve in the plume due to coagulation and net organic aerosol (OA) evaporation or formation, and the plume processes occur on spatial scales smaller than global/regional-model grid boxes. The extent of this size-distribution evolution is dependent on a variety of factors relating to the emission source and atmospheric conditions. Therefore, accurately accounting for biomass-burning aerosol size in global models requires an effective aerosol size distribution that accounts for this sub-grid evolution and can be derived from available emission-inventory and meteorological parameters. In this paper, we perform a detailed investigation of the effects of coagulation on the aerosol size distribution in biomass-burning plumes. We compare the effect of coagulation to that of OA evaporation and formation. We develop coagulation-only parameterizations for effective biomass-burning size distributions using the SAM-TOMAS large-eddy simulation plume model. For the most-sophisticated parameterization, we use the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) to build a parameterization of the aged size distribution based on the SAM-TOMAS output and seven inputs: emission median dry diameter, emission distribution modal width, mass emissions flux, fire area, mean boundary-layer wind speed, plume mixing depth, and time/distance since emission. This parameterization was tested against an independent set of SAM-TOMAS simulations and yields R2 values of 0.83 and 0.89 for Dpm and modal width, respectively.

  4. FEATURES OF THE LOGISTICS MANAGEMENT OF PRODUCTION OF HIGH TECHNOLOGY PRODUCTS IN TURBULENCE CHANGING GLOBAL ECONOMY

    Directory of Open Access Journals (Sweden)

    Oleg M. Tolmachev

    2015-01-01

    Subject/topic: The level of development of any country is currently determined by the share of high technologies in its GDP, and logistics is the basis for the efficient management of modern knowledge-intensive production. Given the adverse conditions in the global economy, the study of logistical aspects of managing high-tech production has become highly relevant. Subject of research: The logistics management of high-technology production in a turbulently changing global economy. The paper applies general scientific methods: dialectics, comparison and analogy, analysis and synthesis, deduction and induction, and abstract-logical and historical-retrospective reasoning. The purpose of this article is to identify the characteristics and problems of logistics management of high-technology production in the countries of the Customs Union and the Eastern Partnership, and to consider the role of clusters in forming innovation infrastructure in the Customs Union countries. Results: The author demonstrates the urgency of applying CALS technologies as a tool for the organization and information support of the creation, production, and operation of products at enterprises of the national economy. Conclusions/significance: The management of enterprises in the real sector of the economy under modern conditions should be based on a synergy of market and state regulation, with increased use of methods oriented toward the long term. Such methods include, in particular, the logistic management of high-technology production. The importance of these technologies has grown steadily and now acquires a new qualitative content, reflecting a phased plan of targeted action to ensure the desired state of the enterprise as a socio-economic system. This in turn points to the need to ensure that new

  5. Methodological Considerations When Quantifying High-Intensity Efforts in Team Sport Using Global Positioning System Technology.

    Science.gov (United States)

    Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J

    2017-09-01

    Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology can affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected, with the size of the difference depending on the minimum effort duration applied. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
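
    The effort-detection logic described in this abstract (a velocity threshold combined with a minimum effort duration) can be sketched as follows. The velocity trace is synthetic and the function is illustrative only, not the processing used by any particular GPS vendor:

```python
import numpy as np

FS = 10  # Hz, sampling rate of the GPS unit (10-Hz GPS, as in the study)
SPRINT_THRESHOLD = 7.00  # m/s, sprint velocity threshold from the abstract

def count_efforts(velocity, threshold, min_duration_s, fs=FS):
    """Count efforts where velocity stays at/above threshold for at
    least min_duration_s seconds."""
    above = velocity >= threshold
    # Pad with False so every above-threshold run has a start and an end edge.
    padded = np.concatenate(([False], above, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    starts, ends = edges[0::2], edges[1::2]
    min_samples = int(round(min_duration_s * fs))
    return int(np.sum((ends - starts) >= min_samples))

# Synthetic trace: a 0.4-s and a 1.0-s sprint separated by jogging.
v = np.concatenate([np.full(20, 3.0), np.full(4, 7.5),
                    np.full(30, 3.0), np.full(10, 7.2), np.full(20, 2.0)])

print(count_efforts(v, SPRINT_THRESHOLD, 0.1))  # 2: both efforts counted
print(count_efforts(v, SPRINT_THRESHOLD, 0.5))  # 1: only the 1.0-s effort
```

    The same trace yields a different effort count depending on the minimum duration setting, which is exactly the consistency problem the study highlights.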

  6. GNES-R: Global nuclear energy simulator for reactors task 1: High-fidelity neutron transport

    International Nuclear Information System (INIS)

    Clarno, K.; De Almeida, V.; D'Azevedo, E.; De Oliveira, C.; Hamilton, S.

    2006-01-01

    A multi-laboratory, multi-university collaboration has formed to advance the state-of-the-art in high-fidelity, coupled-physics simulation of nuclear energy systems. We are embarking on the first phase in the development of a new suite of simulation tools dedicated to the advancement of nuclear science and engineering technologies. We seek to develop and demonstrate a new generation of multi-physics simulation tools that will explore the scientific phenomena of tightly coupled physics parameters within nuclear systems, support the design and licensing of advanced nuclear reactors, and provide benchmark-quality solutions for code validation. In this paper, we present the general scope of the collaborative project and discuss the specific challenges of high-fidelity neutronics for nuclear reactor simulation and the inroads we have made along this path. The high-performance computing neutronics code system utilizes the latest version of SCALE to generate accurate, problem-dependent cross sections, which are used in NEWTRNX - a new 3-D, general-geometry, discrete-ordinates solver based on the Slice-Balance Approach. The Global Nuclear Energy Simulator for Reactors (GNES-R) team is embarking on a long-term simulation development project that encompasses multiple laboratories and universities for the expansion of high-fidelity coupled-physics simulation of nuclear energy systems. (authors)

  7. High resolution analysis of tropical forest fragmentation and its impact on the global carbon cycle

    Science.gov (United States)

    Brinck, Katharina; Fischer, Rico; Groeneveld, Jürgen; Lehmann, Sebastian; Dantas de Paula, Mateus; Pütz, Sandro; Sexton, Joseph O.; Song, Danxia; Huth, Andreas

    2017-03-01

    Deforestation in the tropics is not only responsible for direct carbon emissions but also extends the forest edge wherein trees suffer increased mortality. Here we combine high-resolution (30 m) satellite maps of forest cover with estimates of the edge effect and show that 19% of the remaining area of tropical forests lies within 100 m of a forest edge. The tropics house around 50 million forest fragments and the length of the world's tropical forest edges sums to nearly 50 million km. Edge effects in tropical forests have caused an additional 10.3 Gt (2.1-14.4 Gt) of carbon emissions, which translates into 0.34 Gt per year and represents 31% of the currently estimated annual carbon releases due to tropical deforestation. Fragmentation substantially augments carbon emissions from tropical forests and must be taken into account when analysing the role of vegetation in the global carbon cycle.

  8. Real Time Global Tests of the ALICE High Level Trigger Data Transport Framework

    CERN Document Server

    Becker, B.; Cicalo J.; Cleymans, C.; de Vaux, G.; Fearick, R.W.; Lindenstruth, V.; Richter, M.; Rorich, D.; Staley, F.; Steinbeck, T.M.; Szostak, A.; Tilsner, H.; Weis, R.; Vilakazi, Z.Z.

    2008-01-01

    The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster comprising several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the basis of the publisher-subscriber principle, designed to be fully pipelined with minimal processing overhead and communication latency in the cluster. In this paper, we report the latest measurements where this framework has been operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.

  9. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    Science.gov (United States)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. These combinations pair WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel-benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF-vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have less-obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  10. Global Monitoring of Mountain Glaciers Using High-Resolution Spotlight Imaging from the International Space Station

    Science.gov (United States)

    Donnellan, A.; Green, J. J.; Bills, B. G.; Goguen, J.; Ansar, A.; Knight, R. L.; Hallet, B.; Scambos, T. A.; Thompson, L. G.; Morin, P. J.

    2013-12-01

    Mountain glaciers around the world are retreating rapidly, contributing about 20% to present-day sea level rise. Numerous studies have shown that mountain glaciers are sensitive to global environmental change. Temperate-latitude glaciers and snowpack provide water for over 1 billion people. Glaciers are a resource for irrigation and hydroelectric power, but also pose flood and avalanche hazards. Accurate mass balance assessments have been made for only 280 glaciers, yet there are over 130,000 in the World Glacier Inventory. The rate of glacier retreat or advance can be highly variable, is poorly sampled, and inadequately understood. Liquid water from ice-front lakes, rain, melt, or sea water and debris from rocks, dust, or pollution interact with glacier ice, often leading to an amplification of warming and further melting. Many mountain glaciers undergo rapid and episodic events that greatly change their mass balance or extent but are sparsely documented. Events include calving, outburst floods, opening of crevasses, or iceberg motion. Spaceborne high-resolution spotlight optical imaging provides a means of clarifying the relationship between the health of mountain glaciers and global environmental change. Digital elevation models (DEMs) can be constructed from a series of images from a range of perspectives collected by staring at a target during a satellite overpass. It is possible to collect imagery for 1800 targets per month in the ±56° latitude range, construct high-resolution DEMs, and monitor changes in high detail over time with a high-resolution optical telescope mounted on the International Space Station (ISS). Snow and ice type, age, and maturity can be inferred from different color bands, as well as the distribution of liquid water. Texture, roughness, albedo, and debris distribution can be estimated by measuring bidirectional reflectance distribution functions (BRDF) and reflectance intensity as a function of viewing angle. The non-sun-synchronous orbit

  11. The Global Experience of Deployment of Energy-Efficient Technologies in High-Rise Construction

    Science.gov (United States)

    Potienko, Natalia D.; Kuznetsova, Anna A.; Solyakova, Darya N.; Klyueva, Yulia E.

    2018-03-01

    The objective of this research is to examine issues related to the increasing importance of energy-efficient technologies in high-rise construction. The aim of the paper is to investigate modern approaches to building design that involve implementation of various energy-saving technologies in diverse climates and at different structural levels, including the levels of urban development, functionality, planning, construction and engineering. The research methodology is based on the comprehensive analysis of the advanced global expertise in the design and construction of energy-efficient high-rise buildings, with the examination of their positive and negative features. The research also defines the basic principles of energy-efficient architecture. Besides, it draws parallels between the climate characteristics of countries that lead in the field of energy-efficient high-rise construction, on the one hand, and the climate in Russia, on the other, which makes it possible to use the vast experience of many countries, wholly or partially. The paper also gives an analytical review of the results arrived at by implementing energy efficiency principles into high-rise architecture. The study findings determine the impact of energy-efficient technologies on high-rise architecture and planning solutions. In conclusion, the research states that, apart from aesthetic and compositional interpretation of architectural forms, an architect nowadays has to address the task of finding a synthesis between technological and architectural solutions, which requires knowledge of advanced technologies. The study findings reveal that the implementation of modern energy-efficient technologies into high-rise construction is of immediate interest and is sure to bring long-term benefits.

  12. A global standardization trend for high-speed client and line side transceivers

    Science.gov (United States)

    Isono, Hideki

    2015-03-01

    Given the recent vast increase of data in the information industry, IT society will move into the zettabyte era within a few years. Under these circumstances, high-speed, high-capacity optical communication systems have been deployed in the industry. High-speed optical transceivers in particular are key devices for realizing high-speed systems, and their practical development is accelerating. In order to develop these leading-edge products in a timely manner, global standard criteria are strongly required in the industry. Against this background, forum standardization bodies such as the OIF PLL-WG and IEEE 802.3 are energetically creating de facto standards. With regard to 100G/400G standardization activities, IEEE 802.3 leads the client side and the OIF PLL-WG leads the line side, and both play important roles in the industry. Previous Photonics West conferences reported the activities of these standardization bodies through 2013. In 2014, the discussions of the 400G client-side transceiver projects made progress in IEEE 802.3, whose baseline technologies are about to be fixed. Also, 100G transceiver projects for metro applications on the line side, whose target profile is the CFP2 form factor, have been discussed in the OIF PLL-WG. In this paper, these high-end standardization topics are introduced and the future product direction is discussed from a technical point of view. In order to realize these small-form-factor and cost-effective transceivers, device integration technologies, low-power device and electrical-circuit technologies, and the development of high-speed electrical interfaces such as 25G/50G are key factors.

  13. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    Science.gov (United States)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land-use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso-/global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter-estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter-estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to road, roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter-estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. 
The calibrated parameters from this optimization experiment can be used for further model
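A variance-decomposition screening of the kind described above can be sketched with a Saltelli-style pick-freeze estimator of first-order Sobol' indices. The toy model and its coefficients below are illustrative stand-ins, not the TEB scheme or its actual parameters:

```python
import numpy as np

def model(x):
    # Hypothetical stand-in for a surface scheme: the first input dominates.
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2]

def sobol_first_order(f, dim, n, rng):
    """Saltelli-style pick-freeze estimator of first-order Sobol' indices."""
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    fa, fb = f(a), f(b)
    var = np.var(np.concatenate([fa, fb]))  # total output variance
    s1 = np.empty(dim)
    for i in range(dim):
        ab_i = a.copy()
        ab_i[:, i] = b[:, i]  # re-draw only input i, freeze the rest
        s1[i] = np.mean(fb * (f(ab_i) - fa)) / var
    return s1

rng = np.random.default_rng(0)
s1 = sobol_first_order(model, dim=3, n=50_000, rng=rng)
# For this linear model the analytic shares are 16/17.25, 1/17.25, 0.25/17.25.
```

Inputs with the largest first-order index (here the first one) would be the candidates passed on to a calibration step such as AMALGAM.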

  14. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    Science.gov (United States)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including the probability of detection (POD), frequency bias index (FBI), false alarm ratio (FAR) and critical success index (CSI), during the 1999 York Flood under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
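The categorical indices named here all follow from a standard 2x2 contingency table of forecast vs. observed threshold exceedances. A minimal sketch (the threshold and rain values are invented for illustration):

```python
import numpy as np

def categorical_scores(forecast, observed, threshold):
    """Standard 2x2 contingency-table verification scores."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    fbi = (hits + false_alarms) / (hits + misses)  # frequency bias index
    csi = hits / (hits + false_alarms + misses)    # critical success index
    return pod, far, fbi, csi

# Hypothetical 6-h rain accumulations (mm) at eight grid points.
fcst = [12.0, 3.0, 8.0, 0.5, 15.0, 7.0, 1.0, 9.0]
obs  = [10.0, 6.0, 2.0, 0.0, 14.0, 8.0, 0.2, 11.0]
pod, far, fbi, csi = categorical_scores(fcst, obs, threshold=5.0)
```

FBI = 1 indicates an unbiased event frequency; POD and CSI approach 1 and FAR approaches 0 for a perfect forecast.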

  15. Characteristics of Extreme Extratropical Cyclones in a High-Resolution Global Climate Model

    Science.gov (United States)

    Catalano, A. J.; Broccoli, A. J.; Kapnick, S. B.; Janoski, T. P.

    2017-12-01

    In the northeastern United States, many of the strongest impacts from extratropical cyclones (ETCs) are associated with storms that exhibit slow movement, unusual tracks, or exceptional intensity. Examples of extreme ETCs include the Appalachian storm of November 1950, the Perfect Storm of October 1991, and the Superstorm of March 1993. Owing to the rare nature of these events, it is difficult to quantify the associated risks (e.g. high winds, storm surge) given the limited duration of high-quality observational datasets. Furthermore, storms with even greater impacts than those observed may be possible, particularly in a warming climate. In the context of tropical cyclones, Lin and Emanuel (2016) have used the metaphor "grey swans" to refer to high-impact events that have not been observed but may be physically possible. One method for analyzing "grey swans" is to generate a larger sample of ETCs using a coupled climate model. Therefore, we use long simulations (over 1,000 years with atmospheric constituents fixed at 1990 levels) from a global climate model (GFDL FLOR) with 50km atmospheric resolution. FLOR has been shown to realistically simulate the spatial distribution and climatology of ETCs during the reanalysis era. We will discuss the climatological features of these extreme ETC events.

  16. Underreporting of high-risk water and sanitation practices undermines progress on global targets.

    Science.gov (United States)

    Vedachalam, Sridhar; MacDonald, Luke H; Shiferaw, Solomon; Seme, Assefa; Schwab, Kellogg J

    2017-01-01

Water and sanitation indicators under the Millennium Development Goals failed to capture high-risk practices undertaken on a regular basis. In conjunction with local partners, fourteen rounds of household surveys using mobile phones with a customized open-source application were conducted across nine study geographies in Asia and Africa. In addition to the main water and sanitation facilities, interviewees (n = 245,054) identified all water and sanitation options regularly used for at least one season of the year. Unimproved water consumption and open defecation were targeted as high-risk practices. We defined underreporting as the difference between the regular and main use of high-risk practices. Our estimates of high-risk practices as the main option matched the widely accepted Demographic and Health Surveys (DHS) estimates within the 95% confidence interval. However, estimates of these practices as a regular option were far higher than the DHS estimates. Across the nine geographies, median underreporting of unimproved water use was 5.5%, with a range of 0.5% to 13.9%. Median underreporting of open defecation was much higher at 9.9%, with a range of 2.7% to 11.5%. This resulted in an underreported population of 25 million regularly consuming unimproved water and 50 million regularly practicing open defecation. Further examination of data from Ethiopia suggested that location and socio-economic factors were significant drivers of underreporting. Current global monitoring relies on a framework that considers the availability and use of a single option to meet drinking water and sanitation needs. Our analysis demonstrates the use of multiple options and widespread underreporting of high-risk practices. Policies based on current monitoring data, therefore, fail to consider the range of challenges and solutions to meeting water and sanitation needs, and result in an inflated sense of progress. Mobile surveys offer a cost-effective and innovative platform to rapidly

  17. Size-resolved measurement of the mixing state of soot in the megacity Beijing, China: diurnal cycle, aging and parameterization

    Directory of Open Access Journals (Sweden)

    Y. F. Cheng

    2012-05-01

intensities, actual turnover rates of soot (kex→in) up to 20% h−1 were derived, which showed a pronounced diurnal cycle peaking around noon time. This result confirms that soot particles are undergoing fast aging/coating with the existing high levels of condensable vapors in the megacity Beijing. (5) Diurnal cycles of Fin were different between Aitken and accumulation mode particles, which could be explained by the faster growth of smaller Aitken mode particles into larger size bins.

To improve the Fin prediction in regional/global models, we suggest parameterizing Fin by an air mass aging indicator, i.e., Fin = a + bx, where a and b are empirical coefficients determined from observations, and x is the value of an air mass age indicator. At the Yufa site in the North China Plain, the fitted coefficients (a, b) were determined as (0.57, 0.21), (0.47, 0.21), and (0.52, 0.0088) for x (indicators) taken as [NOz]/[NOy], [E]/[X] ([ethylbenzene]/[m,p-xylene]), and ([IM] + [OM])/[EC] ([inorganic + organic matter]/[elemental carbon]), respectively. Such a parameterization consumes little additional computing time, but yields a more realistic description of Fin compared with the simple treatment of soot mixing state in regional/global models.
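The proposed linear parameterization is trivial to apply in a model. The sketch below uses the (a, b) pair reported above for the [NOz]/[NOy] indicator; the indicator values themselves are invented for illustration:

```python
def f_in(x, a=0.57, b=0.21):
    """Fin = a + b*x, with (a, b) the empirical fit reported above for the
    [NOz]/[NOy] aging indicator; clipped to [0, 1] since Fin is a fraction."""
    return min(max(a + b * x, 0.0), 1.0)

# Hypothetical air masses: well-aged ([NOz]/[NOy] = 0.8) vs. fresh (0.1).
fin_aged = f_in(0.8)   # 0.57 + 0.21 * 0.8 = 0.738
fin_fresh = f_in(0.1)  # 0.57 + 0.21 * 0.1 = 0.591
```

More-aged air masses yield a larger internally mixed fraction, as the diurnal cycle in the observations suggests.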

  18. A global, high-resolution data set of ice sheet topography, cavity geometry, and ocean bathymetry

    Science.gov (United States)

    Schaffer, Janin; Timmermann, Ralph; Arndt, Jan Erik; Savstrup Kristensen, Steen; Mayer, Christoph; Morlighem, Mathieu; Steinhage, Daniel

    2016-10-01

    The ocean plays an important role in modulating the mass balance of the polar ice sheets by interacting with the ice shelves in Antarctica and with the marine-terminating outlet glaciers in Greenland. Given that the flux of warm water onto the continental shelf and into the sub-ice cavities is steered by complex bathymetry, a detailed topography data set is an essential ingredient for models that address ice-ocean interaction. We followed the spirit of the global RTopo-1 data set and compiled consistent maps of global ocean bathymetry, upper and lower ice surface topographies, and global surface height on a spherical grid with now 30 arcsec grid spacing. For this new data set, called RTopo-2, we used the General Bathymetric Chart of the Oceans (GEBCO_2014) as the backbone and added the International Bathymetric Chart of the Arctic Ocean version 3 (IBCAOv3) and the International Bathymetric Chart of the Southern Ocean (IBCSO) version 1. While RTopo-1 primarily aimed at a good and consistent representation of the Antarctic ice sheet, ice shelves, and sub-ice cavities, RTopo-2 now also contains ice topographies of the Greenland ice sheet and outlet glaciers. In particular, we aimed at a good representation of the fjord and shelf bathymetry surrounding the Greenland continent. We modified data from earlier gridded products in the areas of Petermann Glacier, Hagen Bræ, and Sermilik Fjord, assuming that sub-ice and fjord bathymetries roughly follow plausible Last Glacial Maximum ice flow patterns. For the continental shelf off Northeast Greenland and the floating ice tongue of Nioghalvfjerdsfjorden Glacier at about 79° N, we incorporated a high-resolution digital bathymetry model considering original multibeam survey data for the region. Radar data for surface topographies of the floating ice tongues of Nioghalvfjerdsfjorden Glacier and Zachariæ Isstrøm have been obtained from the data centres of Technical University of Denmark (DTU), Operation Icebridge (NASA

  19. Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.

    Science.gov (United States)

    Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation, with Nash-Sutcliffe efficiency (NSE) > 0.72 and low absolute percent bias (|PBIAS|). The RCM poorly simulated sediment loss (|PBIAS| > 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
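The two skill scores used throughout this evaluation are standard and easy to compute; a minimal sketch with synthetic observed/simulated values (the data are invented for illustration):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; < 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive means underestimation under one common sign
    convention (conventions differ between studies)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [10.0, 25.0, 5.0, 40.0, 20.0]
sim = [12.0, 22.0, 6.0, 35.0, 21.0]
score_nse, score_pbias = nse(obs, sim), pbias(obs, sim)
```

For these toy series NSE is about 0.95 and |PBIAS| is 4%, which would count as "acceptable" under the thresholds quoted above.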

  20. Rainforest pharmacopeia in Madagascar provides high value for current local and prospective global uses.

    Directory of Open Access Journals (Sweden)

    Christopher D Golden

Botanical diversity provides value to humans through carbon sequestration, air and water purification, and the provisioning of wild foods and ethnomedicines. Here we calculate the value of botanical ethnomedicines in a rainforest region of Madagascar, the Makira Protected Area, using a substitution method that combines replacement costs and choice modeling. The Makira watershed may comprise approximately 0.8% of global botanical diversity and possesses enormous value both in its ability to provision botanical ethnomedicines to local people and as a source of potentially novel pharmaceutical drugs for society as a whole. Approximately 241 locally-recognized species are used as ethnomedicines, including 113 agricultural or weed species. We equated each ethnomedicinal treatment to the monetary value of a comparable pharmaceutical treatment adjusted by personal preferences in perceived efficacy (rather than from known or assumed medicinal equivalency). The benefit value of these botanical ethnomedicines per individual is $5.40-7.90 per year when using the value of highly subsidized Malagasy pharmaceuticals and $100.60-287.40 when using the value of American pharmaceuticals. Using local pharmaceuticals as substitutes, the value per household is $30.24-44.30 per year, equivalent to 43-63% of median annual household income, demonstrating their local importance. Using the value of American pharmaceuticals, the amount is equivalent to 22-63% of the median annual health care expenditures for American adults under 45 in 2006. The potential for developing novel biomedicines from the Makira watershed's unique flora ranges in untapped benefit value from $0.3-5.7 billion for American pharmaceutical companies, non-inclusive of the importance of providing novel medicines and improved healthcare to society. This study provides evidence of the tremendous current local and prospective global value of botanical ethnomedicines and furthers arguments for the

  1. Dependency of high coastal water level and river discharge at the global scale

    Science.gov (United States)

    Ward, P.; Couasnon, A.; Haigh, I. D.; Muis, S.; Veldkamp, T.; Winsemius, H.; Wahl, T.

    2017-12-01

    It is widely recognized that floods cause huge socioeconomic impacts. From 1980-2013, global flood losses exceeded $1 trillion, with 220,000 fatalities. These impacts are particularly hard felt in low-lying densely populated deltas and estuaries, whose location at the coast-land interface makes them naturally prone to flooding. When river and coastal floods coincide, their impacts in these deltas and estuaries are often worse than when they occur in isolation. Such floods are examples of so-called `compound events'. In this contribution, we present the first global scale analysis of the statistical dependency of high coastal water levels (and the storm surge component alone) and river discharge. We show that there is statistical dependency between these components at more than half of the stations examined. We also show time-lags in the highest correlation between peak discharges and coastal water levels. Finally, we assess the probability of the simultaneous occurrence of design discharge and design coastal water levels, assuming both independence and statistical dependence. For those stations where we identified statistical dependency, the probability is between 1 and 5 times greater, when the dependence structure is accounted for. This information is essential for understanding the likelihood of compound flood events occurring at locations around the world as well as for accurate flood risk assessments and effective flood risk management. The research was carried out by analysing the statistical dependency between observed coastal water levels (and the storm surge component) from GESLA-2 and river discharge using gauged data from GRDC stations all around the world. The dependence structure was examined using copula functions.
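The effect of dependence on joint exceedance probabilities can be illustrated with a closed-form Archimedean copula. This is an illustrative sketch with an assumed Clayton copula and parameter value, not the copula family or dependence strength fitted in the study:

```python
def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_exceedance(u, v, copula=None):
    """P(U > u, V > v); with no copula given, assume independence."""
    if copula is None:
        return (1.0 - u) * (1.0 - v)
    # Survival probability via inclusion-exclusion on the copula CDF.
    return 1.0 - u - v + copula(u, v)

# 100-year design levels for river discharge and coastal water level
# (u = v = 0.99 annual non-exceedance), with an assumed theta = 1.
p_indep = joint_exceedance(0.99, 0.99)
p_dep = joint_exceedance(0.99, 0.99, lambda u, v: clayton_cdf(u, v, 1.0))
ratio = p_dep / p_indep
```

Under these assumed numbers the joint exceedance is roughly twice the independence estimate, i.e. within the 1-5x range reported above.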

  2. A Global Survey of Cloud Thermodynamic Phase using High Spatial Resolution VSWIR Spectroscopy, 2005-2015

    Science.gov (United States)

    Thompson, D. R.; Kahn, B. H.; Green, R. O.; Chien, S.; Middleton, E.; Tran, D. Q.

    2017-12-01

    Clouds' variable ice and liquid content significantly influences their optical properties, evolution, and radiative forcing potential (Tan and Storelvmo, J. Atmos. Sci, 73, 2016). However, most remote measurements of thermodynamic phase have spatial resolutions of 1 km or more and are insensitive to mixed phases. This under-constrains important processes, such as spatial partitioning within mixed phase clouds, that carry outsize radiative forcing impacts. These uncertainties could shift Global Climate Model (GCM) predictions of future warming by over 1 degree Celsius (Tan et al., Science 352:6282, 2016). Imaging spectroscopy of reflected solar energy from the 1.4 - 1.8 μm shortwave infrared (SWIR) spectral range can address this observational gap. These observations can distinguish ice and water absorption, providing a robust and sensitive measurement of cloud top thermodynamic phase including mixed phases. Imaging spectrometers can resolve variations at scales of tens to hundreds of meters (Thompson et al., JGR-Atmospheres 121, 2016). We report the first such global high spatial resolution (30 m) survey, based on data from 2005-2015 acquired by the Hyperion imaging spectrometer onboard NASA's EO-1 spacecraft (Pearlman et al., Proc. SPIE 4135, 2001). Estimated seasonal and latitudinal distributions of cloud thermodynamic phase generally agree with observations made by other satellites such as the Atmospheric Infrared Sounder (AIRS). Variogram analyses reveal variability at different spatial scales. Our results corroborate previously observed zonal distributions, while adding insight into the spatial scales of processes governing cloud top thermodynamic phase. Figure: Thermodynamic phase retrievals. Top: Example of a cloud top thermodynamic phase map from the EO-1/Hyperion. Bottom: Latitudinal distributions of pure and mixed phase clouds, 2005-2015, showing Liquid Thickness Fraction (LTF). LTF=0 corresponds to pure ice absorption, while LTF=1 is pure liquid. The
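The retrieval idea, fitting a measured SWIR spectrum as a mixture of ice and liquid absorption endmembers, can be sketched as a least-squares unmixing. The spectra below are synthetic toys, not Hyperion data or real ice/water absorption coefficients:

```python
import numpy as np

# Toy "absorption spectra" over a handful of SWIR channels (invented shapes).
ice = np.array([0.9, 0.7, 0.4, 0.2, 0.1])
liquid = np.array([0.2, 0.4, 0.6, 0.8, 0.9])

def liquid_thickness_fraction(spectrum, ice, liquid):
    """Fit spectrum ~ a*ice + b*liquid (least squares) and return
    LTF = b / (a + b): 0 for pure ice, 1 for pure liquid."""
    endmembers = np.column_stack([ice, liquid])
    (a, b), *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    a, b = max(a, 0.0), max(b, 0.0)  # physical (non-negative) contributions
    return b / (a + b)

# Synthetic mixed-phase pixel: 30% liquid contribution by construction.
mixed = 0.7 * ice + 0.3 * liquid
ltf = liquid_thickness_fraction(mixed, ice, liquid)
```

Values of LTF strictly between 0 and 1 flag mixed-phase cloud tops, which is exactly the sensitivity coarser sensors lack.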

  3. The global establishment of a highly-fluoroquinolone resistant Salmonella enterica serotype Kentucky ST198 strain

    Directory of Open Access Journals (Sweden)

    Simon eLe Hello

    2013-12-01

While the spread of Salmonella enterica serotype Kentucky resistant to ciprofloxacin across Africa and the Middle East has been described recently, the presence of this strain in humans, food, various animal species (livestock, pets, and wildlife) and in the environment is suspected in other countries of different continents. Here, we report results of an in-depth molecular epidemiological study on a global human and non-human collection of S. Kentucky (n=70). We performed XbaI pulsed-field gel electrophoresis and multilocus sequence typing, assessed mutations in the quinolone resistance-determining regions, detected β-lactam resistance mechanisms, and screened for the presence of the Salmonella genomic island 1 (SGI1). In this study, we highlight the rapid and extensive worldwide dissemination of the ciprofloxacin-resistant S. Kentucky ST198-X1-SGI1 strain since the mid-2000s in an increasingly large number of contaminated sources, including the environment. This strain has accumulated an increasing number of chromosomal and plasmid resistance determinants and has been identified in the Indian subcontinent, Southeast Asia and Europe since 2010. The second substitution at position 87 in GyrA (replacing the amino acid Asp) appeared helpful for epidemiological studies to track the origin of contamination. This global study provides evidence leading to the conclusion that high-level resistance to ciprofloxacin in S. Kentucky is a simple microbiological trait that facilitates the identification of the epidemic clone of interest, ST198-X1-SGI1. Taking this into account is essential in order to detect and monitor it easily and to take rapid measures in livestock to ensure control of this infection.

  4. Parameterization models for solar radiation and solar technology applications

    International Nuclear Information System (INIS)

    Khalil, Samy A.

    2008-01-01

Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. The development of calibration procedures for broadband solar radiation photometric instrumentation and the improvement of broadband solar radiation measurement accuracy have been carried out. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. The uncertainties of various broadband solar radiation measurements for solar energy and atmospheric effects are discussed. The varying responsivities of solar radiation instruments with meteorological, statistical and climatological parameters and possible atmospheric conditions were examined

  5. Parameterization models for solar radiation and solar technology applications

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Samy A. [National Research Institute of Astronomy and Geophysics, Solar and Space Department, Marsed Street, Helwan, 11421 Cairo (Egypt)

    2008-08-15

Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. The development of calibration procedures for broadband solar radiation photometric instrumentation and the improvement of broadband solar radiation measurement accuracy have been carried out. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. The uncertainties of various broadband solar radiation measurements for solar energy and atmospheric effects are discussed. The varying responsivities of solar radiation instruments with meteorological, statistical and climatological parameters and possible atmospheric conditions were examined. (author)

  6. Parameterization of the dielectric function of semiconductor nanocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Petrik, P., E-mail: petrik@mfa.kfki.hu

    2014-11-15

    Optical methods like spectroscopic ellipsometry are sensitive to the structural properties of semiconductor films such as crystallinity or grain size. The imaginary part of the dielectric function is proportional to the joint density of electronic states. Consequently, the analysis of the dielectric function around the critical point energies provides useful information about the electron band structure and all related parameters like the grain structure, band gap, temperature, composition, phase structure, and carrier mobility. In this work an attempt is made to present a selection of the approaches to parameterize and analyze the dielectric function of semiconductors, as well as some applications.
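A common starting point for such parameterizations is a sum of Lorentz oscillators centered on the critical-point energies, so the imaginary part of the dielectric function peaks near each transition. A minimal single-oscillator sketch (all parameter values invented for illustration):

```python
import numpy as np

def lorentz_eps(energy, amplitude, e0, gamma, eps_inf=1.0):
    """Complex dielectric function of a single Lorentz oscillator:
    eps(E) = eps_inf + A * E0^2 / (E0^2 - E^2 - i*Gamma*E)."""
    e = np.asarray(energy, dtype=float)
    return eps_inf + amplitude * e0**2 / (e0**2 - e**2 - 1j * gamma * e)

# Hypothetical critical point at 3.0 eV with a narrow broadening.
energies = np.linspace(1.0, 5.0, 2001)
eps = lorentz_eps(energies, amplitude=2.0, e0=3.0, gamma=0.1)
e_peak = energies[np.argmax(eps.imag)]  # should sit close to E0
```

Fitting E0, Gamma and A of each oscillator to measured ellipsometry data is what links the dielectric function back to band-structure and grain-related parameters.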

  7. IR Optics Measurement with Linear Coupling's Action-Angle Parameterization

    CERN Document Server

    Luo, Yun; Pilat, Fulvia Caterina; Satogata, Todd; Trbojevic, Dejan

    2005-01-01

The interaction region (IR) optics are measured with the two DX/BPMs close to the IPs at the Relativistic Heavy Ion Collider (RHIC). The beta functions at the IP are measured from the phase advances of the two eigenmodes between the two BPMs, and the beta waists are also determined through the beta functions at the two BPMs. The coupling parameters at the IPs are also obtained through the linear coupling's action-angle parameterization. All the experimental data are taken during driven oscillations with the AC dipole. The methods used for these measurements are discussed, along with the measurement results during the beta*

  8. Parameterization of interatomic potential by genetic algorithms: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Partha S., E-mail: psghosh@barc.gov.in; Arya, A.; Dey, G. K. [Materials Science Division, Bhabha Atomic Research Centre, Mumbai-400085 (India); Ranawat, Y. S. [Department of Ceramic Engineering, Indian Institute of Technology (BHU), Varanasi-221005 (India)

    2015-06-24

A framework for a Genetic Algorithm (GA) based methodology is developed to systematically obtain and optimize parameters of interatomic force-field functions for MD simulations by fitting to a reference database. This methodology is applied to the fitting of ThO2 (CaF2 prototype), a representative of ceramic-based potential fuels for nuclear applications. The resulting GA-optimized parameterization of ThO2 is able to capture basic structural, mechanical and thermo-physical properties and also describes defect structures within the permissible range.
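The GA fitting loop can be sketched on a simple pair potential. The example below fits Lennard-Jones parameters to synthetic reference energies; the potential form, search bounds, and GA settings are illustrative stand-ins, not the force field or settings used for ThO2:

```python
import numpy as np

def lj_energy(r, eps, sigma):
    """Lennard-Jones pair energy: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

# Synthetic reference database generated from "true" parameters.
r_ref = np.linspace(1.1, 2.5, 30)
e_ref = lj_energy(r_ref, eps=1.0, sigma=1.2)

def fitness(params):
    eps, sigma = params
    return -np.mean((lj_energy(r_ref, eps, sigma) - e_ref) ** 2)

rng = np.random.default_rng(0)
lo, hi = np.array([0.5, 1.0]), np.array([1.5, 1.4])  # search bounds
pop = lo + (hi - lo) * rng.random((60, 2))
for gen in range(80):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the 10 best
    # Children: mutate randomly chosen elites with shrinking Gaussian noise.
    noise = rng.normal(0.0, 0.05 * 0.97**gen, size=(50, 2))
    children = np.clip(elite[rng.integers(0, 10, 50)] + noise, lo, hi)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
loss = -fitness(best)
```

A production fit would add crossover and a richer objective (lattice constants, elastic constants, defect energies), but the select-mutate-replace structure is the same.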

  9. The causal structure of spacetime is a parameterized Randers geometry

    Energy Technology Data Exchange (ETDEWEB)

    Skakala, Jozef; Visser, Matt, E-mail: jozef.skakala@msor.vuw.ac.nz, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics and Operations Research, Victoria University of Wellington, PO Box 600, Wellington (New Zealand)

    2011-03-21

There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries, these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes: the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.

  10. The causal structure of spacetime is a parameterized Randers geometry

    International Nuclear Information System (INIS)

    Skakala, Jozef; Visser, Matt

    2011-01-01

There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries, these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes: the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.
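The encoding referred to in these two records can be made explicit: writing the metric in the 1+3 split and solving the null condition ds² = 0 for dt yields, at each instant, a Finsler function of Randers form. A sketch of the standard computation (sign conventions may differ from those of the paper):

```latex
% Null condition g_{tt}\,dt^2 + 2 g_{ti}\,dt\,dx^i + g_{ij}\,dx^i dx^j = 0,
% solved for the future-pointing root (g_{tt} < 0):
dt = \frac{\sqrt{\left(g_{ti}g_{tj} - g_{tt}g_{ij}\right) dx^i dx^j}}{-g_{tt}}
     - \frac{g_{ti}\,dx^i}{g_{tt}}
   \equiv \sqrt{a_{ij}\,dx^i dx^j} + b_i\,dx^i,
\qquad
a_{ij} = \frac{g_{ti}g_{tj} - g_{tt}g_{ij}}{g_{tt}^2},
\quad
b_i = -\frac{g_{ti}}{g_{tt}} .
```

The right-hand side is a Randers metric (Riemannian part plus a one-form); making the metric components t-dependent gives the parameterized family of Randers spaces described above, and minimizing the coordinate time dt along spatial paths is the associated Fermat principle.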

  11. Contributing to global computing platform: gliding, tunneling standard services and high energy physics application

    International Nuclear Information System (INIS)

    Lodygensky, O.

    2006-09-01

Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widespread, and trade, industry and the research world have understood the goals involved and are investing massively in these new technologies, known as 'grids'. One of these fields is computing, which is the subject of the work presented here. At Paris Orsay University, a synergy emerged between the Computing Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multi-disciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present the state of the art of large-scale distributed systems, their principles and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies: first, interconnecting grids in order to generalize resource sharing; and second, enabling the use of legacy services on such platforms. We finally explain how a research community, such as the high-energy cosmic-radiation detection community, can gain access to these services, and detail Monte Carlo and data-analysis processes over the grids. (author)

  12. Globally Stable Microresonator Turing Pattern Formation for Coherent High-Power THz Radiation On-Chip

    Science.gov (United States)

    Huang, Shu-Wei; Yang, Jinghui; Yang, Shang-Hua; Yu, Mingbin; Kwong, Dim-Lee; Zelevinsky, T.; Jarrahi, Mona; Wong, Chee Wei

    2017-10-01

    In nonlinear microresonators driven by continuous-wave (cw) lasers, Turing patterns have been studied in the formalism of the Lugiato-Lefever equation with emphasis on their high coherence and exceptional robustness against perturbations. Destabilization of Turing patterns and the transition to spatiotemporal chaos, however, limit the available energy carried in the Turing rolls and prevent further harvest of their high coherence and robustness to noise. Here, we report a novel scheme to circumvent such destabilization, by incorporating the effect of local mode hybridizations, and we attain globally stable Turing pattern formation in chip-scale nonlinear oscillators with significantly enlarged parameter space, achieving a record-high power-conversion efficiency of 45% and an elevated peak-to-valley contrast of 100. The stationary Turing pattern is discretely tunable across 430 GHz on a THz carrier, with a fractional frequency sideband nonuniformity measured at 7.3 ×10-14 . We demonstrate the simultaneous microwave and optical coherence of the Turing rolls at different evolution stages through ultrafast optical correlation techniques. The free-running Turing-roll coherence, 9 kHz in 200 ms and 160 kHz in 20 minutes, is transferred onto a plasmonic photomixer for one of the highest-power THz coherent generations at room temperature, with 1.1% optical-to-THz power conversion. Its long-term stability can be further improved by more than 2 orders of magnitude, reaching an Allan deviation of 6 ×10-10 at 100 s, with a simple computer-aided slow feedback control. The demonstrated on-chip coherent high-power Turing-THz system is promising to find applications in astrophysics, medical imaging, and wireless communications.
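The long-term stability figure quoted here (an Allan deviation of 6x10⁻¹⁰ at 100 s) is computed from averaged fractional-frequency data. A minimal non-overlapping sketch on synthetic data (the noise level and sampling assumptions are invented for illustration):

```python
import numpy as np

def allan_deviation(freq, tau_bins):
    """Non-overlapping Allan deviation of fractional-frequency data
    sampled at 1 Hz, for averaging times tau_bins (in samples)."""
    freq = np.asarray(freq, dtype=float)
    adev = []
    for m in tau_bins:
        n = len(freq) // m
        y = freq[: n * m].reshape(n, m).mean(axis=1)  # tau-averages
        avar = 0.5 * np.mean(np.diff(y) ** 2)         # Allan variance
        adev.append(np.sqrt(avar))
    return np.array(adev)

rng = np.random.default_rng(1)
white = rng.normal(0.0, 1e-9, 100_000)  # white frequency noise
taus = [1, 10, 100]
adev = allan_deviation(white, taus)
# White FM noise averages down roughly as tau^(-1/2).
```

A feedback-stabilized oscillator shows exactly this kind of continued averaging-down at long tau, whereas a free-running one flattens or rises due to drift.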

  13. Functional over-redundancy and high functional vulnerability in global fish faunas on tropical reefs.

    Science.gov (United States)

    Mouillot, David; Villéger, Sébastien; Parravicini, Valeriano; Kulbicki, Michel; Arias-González, Jesus Ernesto; Bender, Mariana; Chabanet, Pascale; Floeter, Sergio R; Friedlander, Alan; Vigliola, Laurent; Bellwood, David R

    2014-09-23

    When tropical systems lose species, they are often assumed to be buffered against declines in functional diversity by the ability of the species-rich biota to display high functional redundancy: i.e., a high number of species performing similar functions. We tested this hypothesis using a ninefold richness gradient in global fish faunas on tropical reefs encompassing 6,316 species distributed among 646 functional entities (FEs): i.e., unique combinations of functional traits. We found that the highest functional redundancy is located in the Central Indo-Pacific with a mean of 7.9 species per FE. However, this overall level of redundancy is disproportionately packed into few FEs, a pattern termed functional over-redundancy (FOR). For instance, the most speciose FE in the Central Indo-Pacific contains 222 species (out of 3,689) whereas 38% of FEs (180 out of 468) have no functional insurance with only one species. Surprisingly, the level of FOR is consistent across the six fish faunas, meaning that, whatever the richness, over a third of the species may still be in overrepresented FEs whereas more than one third of the FEs are left without insurance, these levels all being significantly higher than expected by chance. Thus, our study shows that, even in high-diversity systems, such as tropical reefs, functional diversity remains highly vulnerable to species loss. Although further investigations are needed to specifically address the influence of redundant vs. vulnerable FEs on ecosystem functioning, our results suggest that the promised benefits from tropical biodiversity may not be as strong as previously thought.
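The redundancy statistics described above reduce to counting species per unique trait combination. A minimal sketch of that bookkeeping (toy data, not the study's 6,316-species dataset):

```python
from collections import Counter

def functional_redundancy(species_traits):
    """species_traits maps species name -> tuple of functional traits.
    A functional entity (FE) is a unique trait combination."""
    fe_counts = Counter(species_traits.values())
    n_fe = len(fe_counts)
    mean_redundancy = len(species_traits) / n_fe
    # FEs holding a single species have no functional "insurance"
    frac_no_insurance = sum(1 for c in fe_counts.values() if c == 1) / n_fe
    return mean_redundancy, frac_no_insurance
```

For example, three species spread over two FEs give a mean redundancy of 1.5 with half the FEs uninsured.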

  14. A Solar Radiation Parameterization for Atmospheric Studies. Volume 15

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J. (Editor)

    1999-01-01

The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
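The Delta-Eddington flux computation mentioned at the end rescales cloud optical properties to fold the strong forward-scattering peak into the direct beam. A standard sketch of that scaling, with the forward-peak fraction taken as f = g^2 (illustrative; not the CLIRAD-SW source code):

```python
def delta_eddington(tau, omega, g):
    """Delta-Eddington scaling of optical depth tau, single-scattering
    albedo omega, and asymmetry factor g, with forward fraction f = g**2."""
    f = g ** 2
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = g / (1.0 + g)  # equivalent to (g - f) / (1 - f) when f = g**2
    return tau_s, omega_s, g_s
```

The scaled values are then fed to an ordinary two-stream Eddington solver; for conservative scattering (omega = 1) the scaled albedo stays exactly 1.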

  15. The parameterization of microchannel-plate-based detection systems

    Science.gov (United States)

    Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.

    2016-10-01

    The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
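Components (3) and (4) of the parameterization, charge-cloud spreading onto discrete anodes and capacitive coupling between neighbors, can be illustrated with a toy one-dimensional model (a Gaussian charge cloud and a hypothetical coupling fraction k; this is a sketch, not the MMS calibration code):

```python
import math

def anode_charge_fractions(x0, anode_edges, sigma):
    """Fraction of a Gaussian charge cloud (center x0, width sigma)
    collected by each anode, given the anode boundary positions."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - x0) / (sigma * math.sqrt(2.0))))
    return [cdf(b) - cdf(a) for a, b in zip(anode_edges, anode_edges[1:])]

def couple(fractions, k=0.05):
    """Capacitive coupling: each anode leaks a fraction k of its charge
    to each immediate neighbor; total charge is conserved."""
    out = [0.0] * len(fractions)
    for i, q in enumerate(fractions):
        neighbors = (i > 0) + (i < len(fractions) - 1)
        out[i] += q * (1.0 - k * neighbors)
        if i > 0:
            out[i - 1] += q * k
        if i < len(fractions) - 1:
            out[i + 1] += q * k
    return out
```

Fitting such a model to calibration sweeps is one way the cloud width and coupling fraction could be extracted per azimuthal channel.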

  16. Further study on parameterization of reactor NAA: Pt. 2

    International Nuclear Information System (INIS)

    Tian Weizhi; Zhang Shuxin

    1989-01-01

In the last paper, the Ik0 method was proposed for fission interference corrections. Another important kind of interference in reactor NAA is due to threshold reactions induced by reactor fast neutrons. In view of the increasing importance of this kind of interference, and the difficulties encountered in using the relative comparison method, a parameterized method has been introduced. Typical channels in the heavy water reflector and the No.2 horizontal channel of the Heavy Water Research Reactor at the Institute of Atomic Energy have been shown to have fast neutron energy distributions (E>4 MeV) close to the primary fission neutron spectrum, by using multi-threshold detectors. On this basis, a Ti foil is used as an 'instant fast neutron flux monitor' in parameterized corrections for threshold reaction interferences in the long irradiations. A constant value of φ_f/φ_s = (0.70 ± 0.02)% has been obtained for the No.2 rabbit channel. This value can be directly used for threshold reaction interference correction in the short irradiations.
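With the flux ratio φ_f/φ_s known, the threshold-reaction contribution to a measured activity can be estimated and subtracted. A hedged sketch of that arithmetic (the nuclide numbers and cross section below are placeholder illustration values, not data from the paper):

```python
def threshold_interference_rate(n_atoms, sigma_fast_cm2, phi_thermal,
                                phi_ratio=0.0070):
    """Saturation production rate (reactions/s) of the interfering product
    from a fast-neutron threshold reaction on n_atoms target atoms.
    phi_ratio = phi_f / phi_s, e.g. the 0.70% monitored with the Ti foil."""
    phi_fast = phi_ratio * phi_thermal        # fast flux from the monitor ratio
    return n_atoms * sigma_fast_cm2 * phi_fast

# Corrected analyte production rate = measured rate - interference rate.
```

For 1e20 interfering atoms, a 1 mb effective fast cross section, and a thermal flux of 1e13 n/cm^2/s, the interference rate is 7e3 reactions per second.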

  17. Parameterization of ionization rate by auroral electron precipitation in Jupiter

    Directory of Open Access Journals (Sweden)

    Y. Hiraki

    2008-02-01

We simulate auroral electron precipitation into the Jovian atmosphere in which electron multi-directional scattering and energy degradation processes are treated exactly with a Monte Carlo technique. We make a parameterization of the calculated ionization rate of the neutral gas by electron impact in a similar way as used for the Earth's aurora. Our method allows the altitude distribution of the ionization rate to be obtained as a function of an arbitrary initial energy spectrum in the range of 1–200 keV. It also includes incident angle dependence and an arbitrary density distribution of molecular hydrogen. We show that there is little dependence of the estimated ionospheric conductance on atomic species such as H and He. We compare our results with those of recent studies with different electron transport schemes by adapting our parameterization to their atmospheric conditions. We discuss the intrinsic problem of their simplified assumption. The ionospheric conductance, which is important for Jupiter's magnetosphere-ionosphere coupling system, is estimated to vary by a factor depending on the electron energy spectrum based on recent observation and modeling. We discuss this difference through the relation with field-aligned current and electron spectrum.

  19. Rapid parameterization of small molecules using the Force Field Toolkit.

    Science.gov (United States)

    Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C

    2013-12-15

    The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, General Amber Force Field and CHARMM General Force Field, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, setup multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.

  20. Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs

    Directory of Open Access Journals (Sweden)

    P. S. Bhattacharjee

    2010-02-01

A common deficiency of many cloud-physics parameterizations, including NASA's microphysics of clouds with aerosol-cloud interactions (hereafter called McRAS-AC), is that they simulate a smaller (larger) than observed ice cloud particle number (size). A single column model (SCM) of McRAS-AC physics of the GEOS4 Global Circulation Model (GCM), together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols, was used to systematically examine the influence of introducing ammonium sulfate ((NH4)2SO4) aerosols in McRAS-AC and its influence on the optical properties of both liquid and ice clouds. First an (NH4)2SO4 parameterization was included in the APM to assess its effect on clouds vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted with the SCM over the ARM Southern Great Plains (SGP) site and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that inclusion of (NH4)2SO4 into McRAS-AC of the SCM made a remarkable improvement in the simulated effective radius of ice cloud particulates. However, the corresponding ice-cloud optical thickness increased even more than the observed. This can be caused by the lack of horizontal cloud advection, which is not performed in the SCM. Adjusting other tunable parameters such as precipitation efficiency can mitigate this deficiency. Inclusion of empirically invoked ice cloud particle splintering further reduced simulation biases. Overall, these changes make a substantial improvement in simulated cloud optical properties and cloud distribution, particularly over the Intertropical Convergence Zone (ITCZ), in the GCM.

  1. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of the effects of historic and future LCLUC on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to ~50-m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather

  2. Parameterization of the ACRU model for estimating biophysical and climatological change impacts, Beaver Creek, Alberta

    Science.gov (United States)

    Forbes, K. A.; Kienzle, S. W.; Coburn, C. A.; Byrne, J. M.

    2006-12-01

Multiple threats, including intensification of agricultural production, non-renewable resource extraction, and climate change, are threatening Southern Alberta's water supply. The objective of this research is to calibrate and evaluate the Agricultural Catchments Research Unit (ACRU) agrohydrological model, with the end goal of forecasting the impacts of a changing environment on water quantity. The strength of this model is its intensive multi-layered soil water budgeting routine that integrates water movement between the surface and atmosphere. The ACRU model was parameterized using data from Environment Canada's climate database for a twenty-year period (1984-2004) and was used to simulate streamflow for Beaver Creek. The simulated streamflow was compared to Environment Canada's historical streamflow database to validate the model output. The Beaver Creek watershed, located in the Porcupine Hills of southwestern Alberta, Canada, contains a heterogeneous cover of deciduous and coniferous forest, native prairie grasslands, and forage crops. In a catchment with highly diversified land cover, canopy architecture cannot be overlooked in rainfall interception parameterization. Preliminary testing of ACRU suggests that streamflows were sensitive to varied levels of leaf area index (LAI), a representative fraction of canopy foliage. Further testing using remotely sensed LAIs will provide a more accurate representation of canopy foliage and ultimately best represent this important element of the hydrological cycle and the associated processes which govern the natural hydrology of the Beaver Creek watershed.

  3. Stochastic parameterizing manifolds and non-Markovian reduced equations: stochastic manifolds for nonlinear SPDEs II

    CERN Document Server

    Chekroun, Mickaël D; Wang, Shouhong

    2015-01-01

    In this second volume, a general approach is developed to provide approximate parameterizations of the "small" scales by the "large" ones for a broad class of stochastic partial differential equations (SPDEs). This is accomplished via the concept of parameterizing manifolds (PMs), which are stochastic manifolds that improve, for a given realization of the noise, in mean square error the partial knowledge of the full SPDE solution when compared to its projection onto some resolved modes. Backward-forward systems are designed to give access to such PMs in practice. The key idea consists of representing the modes with high wave numbers as a pullback limit depending on the time-history of the modes with low wave numbers. Non-Markovian stochastic reduced systems are then derived based on such a PM approach. The reduced systems take the form of stochastic differential equations involving random coefficients that convey memory effects. The theory is illustrated on a stochastic Burgers-type equation.

  4. Global system for hydrological monitoring and forecasting in real time at high resolution

    Science.gov (United States)

    Ortiz, Enrique; De Michele, Carlo; Todini, Ezio; Cifres, Enrique

    2016-04-01

This project, presented at EGU 2016, was born of solidarity and the need to dignify the most disadvantaged people living in the poorest countries (in Africa, South America, and Asia), who are continually exposed to changes in the hydrologic cycle, suffering large floods and/or long periods of drought. 2016 is also a special year, the Year of Mercy, in which we must engage with the most disadvantaged of our Planet (Gaia), making available to them what we do professionally and scientifically. The non-profit project, called "Global system for hydrological monitoring and forecasting in real time at high resolution", aims to provide hydrological monitoring and forecasting at global high resolution (1 km2), in real time and continuously, by coupling weather forecasts from Global Circulation Models, such as GFS-0.25° (deterministic and ensemble runs), to force a computationally efficient, physically based distributed hydrological model, the latest extended version of the TOPKAPI model, named TOPKAPI-eXtended. Finally, the MCP approach is used for the proper use of ensembles in Predictive Uncertainty assessment. Essentially based on a multiple regression in the Normal space, it can be easily extended to use ensembles to represent the local (in time) smaller or larger conditional predictive uncertainty as a function of the ensemble spread. In this way, each prediction in time accounts for both the predictive uncertainty of the ensemble mean and that of the ensemble spread. To perform continuous hydrological modeling with the TOPKAPI-X model and to hot-start the hydrological state of watersheds, the system assimilates rainfall and temperature products derived from remote sensing, such as NASA's TRMM 3B42RT product. The system will be integrated into a Decision Support System (DSS) platform based on geographical data. The DSS is a web application (for PC, tablet, or mobile phone): it does not need installation (all you need is a web browser and an internet
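The MCP step described above, a multiple regression in Normal space, can be sketched as a Normal quantile transform (NQT) of observations and forecasts followed by an ordinary regression, whose residual spread gives the predictive uncertainty in the transformed space. This is a minimal single-forecast illustration, not the project's implementation:

```python
import numpy as np
from statistics import NormalDist

def nqt(x):
    """Normal quantile transform via Weibull plotting positions r/(n+1).
    Assumes no ties in x."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort().argsort() + 1          # ranks 1..n
    nd = NormalDist()
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

def mcp_fit(obs, fcst):
    """Fit observation on forecast in Normal space; returns the regression
    slope, intercept, and residual standard deviation (predictive spread)."""
    eta_o, eta_f = nqt(obs), nqt(fcst)
    slope, intercept = np.polyfit(eta_f, eta_o, 1)
    resid = eta_o - (slope * eta_f + intercept)
    return slope, intercept, float(resid.std(ddof=1))
```

A perfect forecast maps to slope 1 with zero residual spread; a noisy forecast yields a wider predictive distribution, which is then mapped back to real space through the inverse NQT.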

  5. A new parameterization for surface ocean light attenuation in Earth System Models: assessing the impact of light absorption by colored detrital material

    OpenAIRE

    G. E. Kim; M.-A. Pradal; A. Gnanadesikan

    2015-01-01

    Light limitation can affect the distribution of biota and nutrients in the ocean. Light absorption by colored detrital material (CDM) was included in a fully coupled Earth System Model using a new parameterization for shortwave attenuation. Two model runs were conducted, with and without light attenuation by CDM. In a global average sense, greater light limitation associated with CDM increased surface chlorophyll, biomass and nutrients together. These changes can be attribut...

  6. The Global Threat Reduction Initiative's Return of Highly Enriched Uranium from Chile

    Energy Technology Data Exchange (ETDEWEB)

    Messick, C.E.; Dickerson, S.L.; Greenberg, R.F. Jr. [U.S. Department of Energy, National Nuclear Security Administration, Washington D.C. (United States); Andes, T.C. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2011-07-01

    In March 2010, the U.S. National Nuclear Security Administration's Office of Global Threat Reduction (GTRI), in collaboration with the Chilean Nuclear Energy Commission (CCHEN), completed a shipment of 18.2 kilograms of non-U.S.-origin highly enriched uranium (HEU) to the United States. The HEU was in the form of 71 aluminium-clad material test reactor (MTR) fuel elements and was the first GTRI Gap Program shipment that included non-U.S. origin irradiated nuclear fuel. Although shipments of research reactor fuels are not unique, this shipment served as a cornerstone to the first Presidential Nuclear Security Summit held in Washington, D.C., in April 2010. Carrying out the shipment became critical when a severe earthquake struck Chile just one day before the shipment was to occur. As the fuel had already been packaged in casks and the ocean vessels were nearing the port, U.S. and Chilean officials decided that it was most imperative that the shipment continue as planned. After careful analysis of the situation, inspection of the transportation packages, roadways, and port services, the shipment team was able to make the shipment occur in a safe and secure manner. This paper describes the loading activities at both the RECH-1 and RECH-2 reactors as well as the transportation of the loaded casks to the port of departure. (author)

  7. Global pictures of the ozone field from high altitudes from DE-I

    Science.gov (United States)

    Keating, G. M.; Frank, L.; Craven, J.; Shapiro, M.; Young, D.; Bhartia, P.

    1982-01-01

    Detailed synoptic views of the column ozone field can be obtained by the Spin-Scan Ozone Imager (SOI) (Keating et al., 1981) aboard the Dynamics Explorer I satellite. The eccentric polar orbit with an apogee altitude of 23,000 km allows high resolution global-scale images to be obtained within 12 minutes, and allows regions to be viewed for long periods of time. At perigee, a pixel size of nadir measurements of 3 km is possible, and measurements are determined using the backscattered ultraviolet technique. A wavelength measurement of 317.5 nm is used as there are limitations in filter locations and it allows comparison with Nimbus 7 SBUV/TOMS data. Consideration of the reflectivities of this data aids in checking the SOI data reduction algorithm. SOI data show short-term (less than one day) variations in the observed ozone field, and a negative correlation (greater than 0.9) between ozone and tropopause heights. It is expected, due to this correlation, that SOI data will aid in understanding the time evolution of dynamics near the tropopause.

  8. Michelson Interferometer for Global High-Resolution Thermospheric Imaging (MIGHTI): Instrument Design and Calibration

    Science.gov (United States)

    Englert, Christoph R.; Harlander, John M.; Brown, Charles M.; Marr, Kenneth D.; Miller, Ian J.; Stump, J. Eloise; Hancock, Jed; Peterson, James Q.; Kumler, Jay; Morrow, William H.; Mooney, Thomas A.; Ellis, Scott; Mende, Stephen B.; Harris, Stewart E.; Stevens, Michael H.; Makela, Jonathan J.; Harding, Brian J.; Immel, Thomas J.

    2017-10-01

    The Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument was built for launch and operation on the NASA Ionospheric Connection Explorer (ICON) mission. The instrument was designed to measure thermospheric horizontal wind velocity profiles and thermospheric temperature in altitude regions between 90 km and 300 km, during day and night. For the wind measurements it uses two perpendicular fields of view pointed at the Earth's limb, observing the Doppler shift of the atomic oxygen red and green lines at 630.0 nm and 557.7 nm wavelength. The wavelength shift is measured using field-widened, temperature compensated Doppler Asymmetric Spatial Heterodyne (DASH) spectrometers, employing low order échelle gratings operating at two different orders for the different atmospheric lines. The temperature measurement is accomplished by a multichannel photometric measurement of the spectral shape of the molecular oxygen A-band around 762 nm wavelength. For each field of view, the signals of the two oxygen lines and the A-band are detected on different regions of a single, cooled, frame transfer charge coupled device (CCD) detector. On-board calibration sources are used to periodically quantify thermal drifts, simultaneously with observing the atmosphere. The MIGHTI requirements, the resulting instrument design and the calibration are described.
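The wind measurement rests on the elementary Doppler relation v = c · Δλ/λ0: for the 630.0 nm red line, a 100 m/s line-of-sight wind shifts the line by only about 2 × 10^-13 m, which is what motivates the interferometric DASH approach. A minimal sketch of the relation:

```python
C = 2.998e8  # speed of light, m/s

def doppler_shift(v_mps, lambda0_m=630.0e-9):
    """Wavelength shift of an emission line for line-of-sight speed v."""
    return lambda0_m * v_mps / C

def los_wind(delta_lambda_m, lambda0_m=630.0e-9):
    """Invert a measured shift back to a line-of-sight wind speed."""
    return C * delta_lambda_m / lambda0_m
```

The same relation applies to the 557.7 nm green line with its own rest wavelength.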

  9. Global phenotypic characterisation of human platelet lysate expanded MSCs by high-throughput flow cytometry.

    Science.gov (United States)

    Reis, Monica; McDonald, David; Nicholson, Lindsay; Godthardt, Kathrin; Knobel, Sebastian; Dickinson, Anne M; Filby, Andrew; Wang, Xiao-Nong

    2018-03-02

Mesenchymal stromal cells (MSCs) are a promising cell source for developing cell therapies for many diseases. Human platelet lysate (PLT) is increasingly used as an alternative to foetal calf serum (FCS) for clinical-scale MSC production. To date, the global surface protein expression of PLT-expanded MSCs (MSC-PLT) is not known. To investigate this, paired MSC-PLT and MSC-FCS were analysed in parallel using high-throughput flow cytometry for the expression of 356 cell surface proteins. MSC-PLT showed differential surface protein expression compared to their MSC-FCS counterpart. A higher percentage of positive cells was observed in MSC-PLT for 48 surface proteins, of which 13 were significantly enriched on MSC-PLT. This finding was validated using multiparameter flow cytometry and further confirmed by quantitative staining intensity analysis. The enriched surface proteins are relevant to increased proliferation and migration capacity, as well as enhanced chondrogenic and osteogenic differentiation properties. In silico network analysis revealed that these enriched surface proteins are involved in three distinct networks that are associated with inflammatory responses, carbohydrate metabolism and cellular motility. This is the first study reporting differential cell surface protein expression between MSC-PLT and MSC-FCS. Further studies are required to uncover the impact of those enriched proteins on biological functions of MSC-PLT.

  10. Evaluation of two MM5-PBL parameterizations for solar radiation and temperature estimation in the South-Eastern area of the Iberian Peninsula

    International Nuclear Information System (INIS)

    Ruiz-Arias, J.A.; Pozo-Vasquez, D.; Sanchez-Sanchez, N.; Hayas-Barru, A.; Tovar-Pescador, J.; Montavez, J.P.

    2008-01-01

We study the relative performance of two different MM5 PBL parameterizations (Blackadar and MRF) in simulating hourly values of solar irradiance and temperature in the south-eastern part of the Iberian Peninsula. The evaluation was carried out throughout the different seasons of the year 2005 and for three different sky conditions: clear-sky, broken-clouds and overcast conditions. Two integrations, one per PBL parameterization, were carried out for every sky condition and season of the year, and the results were compared with observational data. Overall, the MM5 model, whether using the Blackadar or the MRF PBL parameterization, proved to be a valid tool for estimating hourly values of solar radiation and temperature over the study area. The influence of the PBL parameterization on the model estimates was found to be more important for solar radiation than for temperature, and highly dependent on the season and sky conditions. Particularly, a detailed analysis revealed that, during broken-clouds conditions, the ability of the model to reproduce hourly changes in the solar radiation strongly depends upon the selected PBL parameterization. Additionally, it was found that solar radiation RMSE values are about one order of magnitude higher during broken-clouds and overcast conditions compared to clear-sky conditions. For the temperature, the two PBL parameterizations provide very similar estimates. Only under overcast conditions and during autumn does the MRF provide significantly better estimates.

  11. Capturing the Interplay of Dynamics and Networks through Parameterizations of Laplacian Operators

    Science.gov (United States)

    2016-08-24

we describe an umbrella framework that unifies some of the well-known measures, connecting the ideas of centrality, communities and dynamical processes...change of basis. Parameterized centrality also leads to the definition of parameterized volume for subsets of vertices. Parameterized conductance...behind this definition is to establish a direct connection between centrality and community measures, as we will later demonstrate with the notion of
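Parameterized operators of this kind start from the graph Laplacian L = D - A. As one concrete, generic example (not necessarily the report's own definition), the diagonal of the heat kernel exp(-tL) yields a centrality-like score whose parameter t interpolates between local degree information (small t) and global structure (large t):

```python
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A for a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def heat_kernel_centrality(adj, t=1.0):
    """Diagonal of exp(-t L): the heat retained at each node after time t.
    Higher-degree nodes diffuse heat away faster, so they score lower."""
    L = laplacian(adj)
    w, V = np.linalg.eigh(L)                    # L is symmetric
    K = V @ np.diag(np.exp(-t * w)) @ V.T       # matrix exponential exp(-tL)
    return K.diagonal()
```

On a 3-node path graph the center node, having higher degree, retains less heat than the endpoints for any t > 0, while at t = 0 all nodes score 1.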

  12. Global Hybrid Simulations of The Magnetopause Boundary Layers In Low- and High-latitude Magnetic Reconnections

    Science.gov (United States)

    Lin, Y.; Perez, J. D.

A 2-D global hybrid simulation is carried out to study the structure of the dayside magnetopause in the noon-midnight meridian plane associated with magnetic reconnection. In the simulation the bow shock, magnetosheath, and magnetopause are formed self-consistently by supersonic solar wind passing the geomagnetic field. The reconnection events at high- and low-latitudes are simulated for various IMF conditions. The following results will be presented. (1) Large-amplitude rotational discontinuities and Alfvén waves are present in the quasi-steady reconnection layer. (2) The rotational discontinuity possesses an electron sense, or right-hand polarization in the magnetic field, as the discontinuity forms from the X line. Later, however, the rotational discontinuity tends to evolve to a structure with the smallest field rotation angle and thus may reverse its sense of field rotation. The Walén relation is tested for electron and ion flows in the magnetopause rotational discontinuities with left-hand and right-hand polarizations. (3) The structure of the magnetopause discontinuities and that of the accelerated/decelerated flows are modified significantly by the presence of the local magnetosheath flow. (4) Field-aligned currents are generated in the magnetopause rotational discontinuities. Part of the magnetopause currents propagate with Alfvén waves along the field lines into the polar ionosphere, contributing to the field-aligned current system at high latitudes. The generation of the parallel currents under northward and southward IMF conditions is investigated. (5) Finally, typical ion velocity distributions will be shown at various locations across the magnetopause northward and southward of the X lines. The ion distributions associated with single or multiple X lines will be discussed.

  13. Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities

    Science.gov (United States)

    Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.

    2015-12-01

    In this rapidly urbanizing world, the unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. The primary limitations of past attempts have been the availability of input data, useful "energy and behavior focused" models, validation data, and adequate computational capability that allows an adequate understanding of the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap road network and integrated high resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at the global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScanTM population, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. A consistent data and simulation platform allows quick adaptation to various geographic areas, which has been demonstrated for multiple cities across the world. We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation

  14. Ontology-based meta-analysis of global collections of high-throughput public data.

    Directory of Open Access Journals (Sweden)

    Ilya Kupershmidt

    2010-09-01

    Full Text Available The investigation of the interconnections between the molecular and genetic events that govern biological systems is essential if we are to understand the development of disease and design effective novel treatments. Microarray and next-generation sequencing technologies have the potential to provide this information. However, taking full advantage of these approaches requires that biological connections be made across large quantities of highly heterogeneous genomic datasets. Leveraging the increasingly huge quantities of genomic data in the public domain is fast becoming one of the key challenges in the research community today. We have developed a novel data mining framework that enables researchers to use this growing collection of public high-throughput data to investigate any set of genes or proteins. The connectivity between molecular states across thousands of heterogeneous datasets from microarrays and other genomic platforms is determined through a combination of rank-based enrichment statistics, meta-analyses, and biomedical ontologies. We address data quality concerns through dataset replication and meta-analysis and ensure that the majority of the findings are derived using multiple lines of evidence. As an example of our strategy and the utility of this framework, we apply our data mining approach to explore the biology of brown fat within the context of the thousands of publicly available gene expression datasets. Our work presents a practical strategy for organizing, mining, and correlating global collections of large-scale genomic data to explore normal and disease biology. Using a hypothesis-free approach, we demonstrate how a data-driven analysis across very large collections of genomic data can reveal novel discoveries and evidence to support existing hypotheses.

  15. Western US high June 2015 temperatures and their relation to global warming and soil moisture

    Science.gov (United States)

    Philip, Sjoukje Y.; Kew, Sarah F.; Hauser, Mathias; Guillod, Benoit P.; Teuling, Adriaan J.; Whan, Kirien; Uhe, Peter; Oldenborgh, Geert Jan van

    2018-04-01

    The Western US states Washington (WA), Oregon (OR) and California (CA) experienced extremely high temperatures in June 2015. The temperature anomalies were so extreme that they cannot be explained with global warming alone. We investigate the hypothesis that soil moisture played an important role as well. We use a land surface model and a large ensemble from the weather@home modelling effort to investigate the coupling between soil moisture and temperature in a warming world. Both models show that May was anomalously dry, satisfying a prerequisite for the extreme heat wave, and they indicate that WA and OR are in a wet-to-dry transitional soil moisture regime. We use two different land surface-atmosphere coupling metrics to show that there was strong coupling between temperature, latent heat flux and the effect of soil moisture deficits on the energy balance in June 2015 in WA and OR. June temperature anomalies conditioned on wet/dry conditions show that both the mean and extreme temperatures become hotter for dry soils, especially in WA and OR. Fitting a Gaussian model to temperatures using soil moisture as a covariate shows that the June 2015 temperature values fit well in the extrapolated empirical temperature/drought lines. The high temperature anomalies in WA and OR are thus to be expected, given the dry soil moisture conditions and that those regions are in the transition from a wet to a dry regime. CA is already in the dry regime and therefore the necessity of taking soil moisture into account is of lower importance.
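The covariate approach described above can be illustrated with a minimal sketch (not the authors' code): a Gaussian model whose mean shifts linearly with soil moisture, fit by ordinary least squares on synthetic data. All variable names and numbers below are invented for illustration.

```python
import numpy as np

def fit_gaussian_with_covariate(temp, soil_moisture):
    """Fit T ~ N(mu0 + alpha * SM, sigma^2) by ordinary least squares.

    A linear shift of the Gaussian mean with the covariate is the
    simplest version of the model described in the abstract."""
    X = np.column_stack([np.ones_like(soil_moisture), soil_moisture])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    mu0, alpha = coef
    resid = temp - X @ coef
    sigma = resid.std(ddof=2)   # residual spread = Gaussian scale
    return mu0, alpha, sigma

# Synthetic example: drier soils give hotter June temperatures.
rng = np.random.default_rng(0)
sm = rng.uniform(0.1, 0.4, size=200)                 # soil moisture fraction
temp = 30.0 - 20.0 * sm + rng.normal(0.0, 1.0, 200)  # degrees C
mu0, alpha, sigma = fit_gaussian_with_covariate(temp, sm)
```

Extreme June temperatures can then be judged against the mean and spread predicted for the observed soil moisture, which is the essence of the extrapolated temperature/drought lines mentioned above.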

  16. Ontology-based meta-analysis of global collections of high-throughput public data.

    Science.gov (United States)

    Kupershmidt, Ilya; Su, Qiaojuan Jane; Grewal, Anoop; Sundaresh, Suman; Halperin, Inbal; Flynn, James; Shekar, Mamatha; Wang, Helen; Park, Jenny; Cui, Wenwu; Wall, Gregory D; Wisotzkey, Robert; Alag, Satnam; Akhtari, Saeid; Ronaghi, Mostafa

    2010-09-29

    The investigation of the interconnections between the molecular and genetic events that govern biological systems is essential if we are to understand the development of disease and design effective novel treatments. Microarray and next-generation sequencing technologies have the potential to provide this information. However, taking full advantage of these approaches requires that biological connections be made across large quantities of highly heterogeneous genomic datasets. Leveraging the increasingly huge quantities of genomic data in the public domain is fast becoming one of the key challenges in the research community today. We have developed a novel data mining framework that enables researchers to use this growing collection of public high-throughput data to investigate any set of genes or proteins. The connectivity between molecular states across thousands of heterogeneous datasets from microarrays and other genomic platforms is determined through a combination of rank-based enrichment statistics, meta-analyses, and biomedical ontologies. We address data quality concerns through dataset replication and meta-analysis and ensure that the majority of the findings are derived using multiple lines of evidence. As an example of our strategy and the utility of this framework, we apply our data mining approach to explore the biology of brown fat within the context of the thousands of publicly available gene expression datasets. Our work presents a practical strategy for organizing, mining, and correlating global collections of large-scale genomic data to explore normal and disease biology. Using a hypothesis-free approach, we demonstrate how a data-driven analysis across very large collections of genomic data can reveal novel discoveries and evidence to support existing hypotheses.

  17. Power spectral density and scaling exponent of high frequency global solar radiation sequences

    Science.gov (United States)

    Calif, Rudy; Schmitt, François G.; Huang, Yongxiang

    2013-04-01

    The share of solar power production from photovoltaic systems is constantly increasing in electric grids. Solar energy converter devices such as photovoltaic cells are very sensitive to instantaneous solar radiation fluctuations. Rapid variations of solar radiation due to changes in local meteorological conditions can thus induce large-amplitude fluctuations of the produced electrical power and reduce the overall efficiency of the system. When a large amount of photovoltaic electricity is fed into a weak or small electricity network, such as an island network, grid security can be put in jeopardy by these power fluctuations. The integration of this energy into the electrical network remains a major challenge due to the high variability of solar radiation in time and space. To mitigate these difficulties, it is essential to identify the characteristics of these fluctuations in order to anticipate possible power shortages or surges. The objective of this study is to present an approach based on Empirical Mode Decomposition (EMD) and the Hilbert-Huang Transform (HHT) to highlight the scaling properties of global solar irradiance data G(t). Scale invariance is detected in this dataset using Empirical Mode Decomposition in association with arbitrary-order Hilbert spectral analysis, a generalization of the HHT or Hilbert Spectral Analysis (HSA). The first step, EMD, consists of decomposing the normalized global solar radiation data G'(t) into several Intrinsic Mode Functions (IMF) Ci(t) without assuming an a priori basis. Consequently, the normalized original solar radiation sequence G'(t) can be written as a sum of the Ci(t) with a residual rn. From all IMF modes, a joint PDF P(f,A) of the local instantaneous frequency f and amplitude A is estimated. To characterize the scaling behavior in amplitude-frequency space, an arbitrary-order Hilbert marginal spectrum is defined as: Iq(f) = ∫₀^∞ P(f,A) A^q dA (1), with q ≥ 0. In case of scale
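The construction described above — the analytic signal of each mode giving instantaneous frequency and amplitude, a joint PDF P(f,A), then an amplitude-weighted marginal — can be sketched numerically. This is an illustrative implementation, not the authors' code; it assumes the IMF-like modes have already been extracted.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the standard Hilbert-transform construction)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def hilbert_marginal_spectrum(modes, fs, q=2, n_bins=16):
    """Estimate I_q(f) = integral of P(f, A) * A^q dA from IMF-like modes."""
    freqs, amps = [], []
    for c in modes:
        z = analytic_signal(np.asarray(c, dtype=float))
        a = np.abs(z)                                     # instantaneous amplitude
        f = np.diff(np.unwrap(np.angle(z))) * fs / (2.0 * np.pi)  # inst. frequency
        k = max(1, f.size // 20)      # trim edge artifacts of the transform
        freqs.append(f[k:-k])
        amps.append(a[:-1][k:-k])
    f_all, a_all = np.concatenate(freqs), np.concatenate(amps)
    keep = f_all > 0                  # keep physical frequencies
    P, f_edges, a_edges = np.histogram2d(f_all[keep], a_all[keep],
                                         bins=n_bins, density=True)
    a_mid = 0.5 * (a_edges[:-1] + a_edges[1:])
    f_mid = 0.5 * (f_edges[:-1] + f_edges[1:])
    Iq = (P * a_mid ** q * np.diff(a_edges)).sum(axis=1)  # marginal over A
    return f_mid, Iq

# Two synthetic "modes": a 5 Hz and a 0.7 Hz oscillation sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10.0, 1.0 / fs)
modes = [np.sin(2 * np.pi * 5.0 * t), 0.5 * np.sin(2 * np.pi * 0.7 * t)]
f_mid, Iq = hilbert_marginal_spectrum(modes, fs)
```

With q = 2 the marginal spectrum is energy-like, so the higher-amplitude 5 Hz mode dominates the resulting spectrum.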

  18. High-speed Imaging of Global Surface Temperature Distributions on Hypersonic Ballistic-Range Projectiles

    Science.gov (United States)

    Wilder, Michael C.; Reda, Daniel C.

    2004-01-01

    The NASA-Ames ballistic range provides a unique capability for aerothermodynamic testing of configurations in hypersonic, real-gas, free-flight environments. The facility can closely simulate conditions at any point along practically any trajectory of interest experienced by a spacecraft entering an atmosphere. Sub-scale models of blunt atmospheric entry vehicles are accelerated by a two-stage light-gas gun to speeds as high as 20 times the speed of sound to fly ballistic trajectories through a 24 m long vacuum-rated test section. The test-section pressure (effective altitude), the launch velocity of the model (flight Mach number), and the test-section working gas (planetary atmosphere) are independently variable. The model travels at hypersonic speeds through a quiescent test gas, creating a strong bow-shock wave and real-gas effects that closely match conditions achieved during actual atmospheric entry. The challenge with ballistic range experiments is to obtain quantitative surface measurements from a model traveling at hypersonic speeds. The models are relatively small (less than 3.8 cm in diameter), which limits the spatial resolution possible with surface-mounted sensors. Furthermore, since the model is in flight, surface-mounted sensors require some form of on-board telemetry, which must survive the massive acceleration loads experienced during launch (up to 500,000 gravities). Finally, the model and any on-board instrumentation will be destroyed at the terminal wall of the range. For these reasons, optical measurement techniques are the most practical means of acquiring data. High-speed thermal imaging has been employed in the Ames ballistic range to measure global surface temperature distributions and to visualize the onset of transition to turbulent flow on the forward regions of hypersonic blunt bodies. Both visible wavelength and infrared high-speed cameras are in use. The visible wavelength cameras are intensified CCD imagers capable of integration

  19. ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mohammed Adam Taheir Mohammed

    2016-02-01

    Full Text Available In this paper, the parameterization value reduction of soft sets and its algorithm for decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of the parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the least-valued parameters from the soft set. Through the analysis, two techniques have been described. Through this study, it is found that the parameterization reduction of soft sets and its algorithm yields differing and inconsistent suboptimal results.
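As an illustration of what such a reduction does, here is a minimal sketch under simplifying assumptions: the soft set is a binary object-by-parameter table, and a parameter subset is treated as dispensable when removing it preserves the ranking of objects by choice value. This is one simple criterion for illustration, not necessarily the exact one analyzed in the paper.

```python
from itertools import combinations

def choice_values(table, dropped=frozenset()):
    """Choice value of each object: row sum over the parameters kept."""
    return [sum(v for j, v in enumerate(row) if j not in dropped)
            for row in table]

def ranking(cv):
    """Objects ordered by decreasing choice value (ties broken by index)."""
    return sorted(range(len(cv)), key=lambda i: (-cv[i], i))

def parameterization_reduction(table):
    """Return the largest parameter subset whose removal preserves the
    ranking of objects by choice value (a simple dispensability test)."""
    n_params = len(table[0])
    base = ranking(choice_values(table))
    for r in range(n_params - 1, 0, -1):
        for cols in combinations(range(n_params), r):
            if ranking(choice_values(table, set(cols))) == base:
                return set(cols)
    return set()

# Hypothetical soft set: 3 objects (rows) described by 3 parameters (columns).
table = [[1, 1, 0],
         [1, 0, 1],
         [1, 1, 1]]
reducible = parameterization_reduction(table)  # parameter 0 is constant
```

Here parameter 0 holds for every object, so dropping it shifts all choice values equally and leaves the decision unchanged; criteria that preserve only the optimal object (rather than the whole ranking) can give different reductions, which is related to the inconsistencies discussed above.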

  20. Modelling of primary aerosols in the chemical transport model MOCAGE: development and evaluation of aerosol physical parameterizations

    Directory of Open Access Journals (Sweden)

    B. Sič

    2015-02-01

    Full Text Available This paper deals with recent improvements to the global chemical transport model of Météo-France, MOCAGE (Modèle de Chimie Atmosphérique à Grande Echelle), consisting of updates to different aerosol parameterizations. MOCAGE contains only primary aerosol species: desert dust, sea salt, black carbon, organic carbon, and, in the case of large volcanic eruptions, volcanic ash. We introduced important changes to the aerosol parameterization concerning emissions, wet deposition and sedimentation. For the emissions, size distribution and wind calculations are modified for desert dust aerosols, and a sea surface temperature dependent source function is introduced for sea salt aerosols. Wet deposition is modified towards a more physically realistic representation by introducing re-evaporation of falling rain and snowfall scavenging and by changing the in-cloud scavenging scheme along with calculations of precipitation cloud cover and rain properties. The sedimentation scheme update includes changes regarding the stability and viscosity calculations. Independent data from satellites (MODIS, SEVIRI), the ground (AERONET, EMEP), and a model inter-comparison project (AeroCom) are compared with MOCAGE simulations and show that the introduced changes brought a significant improvement in the representation, properties and global distribution of aerosols. The emitted quantities of desert dust and sea salt, as well as their lifetimes, moved closer to the values of the AeroCom estimates and the multi-model average. When comparing the model simulations with MODIS aerosol optical depth (AOD) observations over the oceans, the updated model configuration shows a decrease in the modified normalized mean bias (MNMB; from 0.42 to 0.10) and a better correlation (from 0.06 to 0.32) in terms of the geographical distribution and the temporal variability. The updates corrected a strong positive MNMB in the sea salt representation at high latitudes (from 0.65 to 0.16), and a negative MNMB in

  1. Authorship ethics in global health research partnerships between researchers from low or middle income countries and high income countries.

    Science.gov (United States)

    Smith, Elise; Hunt, Matthew; Master, Zubin

    2014-05-28

    Over the past two decades, the promotion of collaborative partnerships involving researchers from low and middle income countries with those from high income countries has been a major development in global health research. Ideally, these partnerships would lead to more equitable collaboration including the sharing of research responsibilities and rewards. While collaborative partnership initiatives have shown promise and attracted growing interest, there has been little scholarly debate regarding the fair distribution of authorship credit within these partnerships. In this paper, we identify four key authorship issues relevant to global health research and discuss their ethical and practical implications. First, we argue that authorship guidance may not adequately apply to global health research because it requires authors to write or substantially revise the manuscript. Since most journals of international reputation in global health are written in English, this would systematically and unjustly exclude non-English speaking researchers even if they have substantially contributed to the research project. Second, current guidance on authorship order does not address or mitigate unfair practices which can occur in global health research due to power differences between researchers from high and low-middle income countries. It also provides insufficient recognition of "technical tasks" such as local participant recruitment. Third, we consider the potential for real or perceived editorial bias in medical science journals in favour of prominent western researchers, and the risk of promoting misplaced credit and/or prestige authorship. Finally, we explore how diverse cultural practices and expectations regarding authorship may create conflict between researchers from low-middle and high income countries and contribute to unethical authorship practices. To effectively deal with these issues, we suggest: 1) undertaking further empirical and conceptual research regarding

  2. High-resolution global maps of 21st-century forest cover change.

    Science.gov (United States)

    Hansen, M C; Potapov, P V; Moore, R; Hancher, M; Turubanova, S A; Tyukavina, A; Thau, D; Stehman, S V; Goetz, S J; Loveland, T R; Kommareddy, A; Egorov, A; Chini, L; Justice, C O; Townshend, J R G

    2013-11-15

    Quantification of global forest change has been lacking despite the recognized importance of forest ecosystem services. In this study, Earth observation satellite data were used to map global forest loss (2.3 million square kilometers) and gain (0.8 million square kilometers) from 2000 to 2012 at a spatial resolution of 30 meters. The tropics were the only climate domain to exhibit a trend, with forest loss increasing by 2101 square kilometers per year. Brazil's well-documented reduction in deforestation was offset by increasing forest loss in Indonesia, Malaysia, Paraguay, Bolivia, Zambia, Angola, and elsewhere. Intensive forestry practiced within subtropical forests resulted in the highest rates of forest change globally. Boreal forest loss due largely to fire and forestry was second to that in the tropics in absolute and proportional terms. These results depict a globally consistent and locally relevant record of forest change.

  3. An evaluation of applying the 'Critical thinking model' to teaching global warming to junior high school students

    Science.gov (United States)

    Huang, J.; Hong, C.; Hsu, Y.

    2013-12-01

    Climate change is a consequence of interactions among the biosphere, atmosphere, hydrosphere and geosphere. The causes of climate change are extremely complicated for scientists to explain. The fact that the global climate has kept warming in the past few decades is one example: it remains controversial among scientists whether this warming is the result of human activity or of natural causes. This research aims to lead students to discuss the causes of global warming from distinct and controversial viewpoints to help them realize the uncertain and complicated characteristics of the global warming issue. The context of applying the critical thinking model to teaching the scientific concepts of climate change and global warming is designed for use in junior high schools. Videos presenting the two opposing viewpoints, 'An Inconvenient Truth' (a 2006 documentary film directed by Davis Guggenheim) and 'The Great Global Warming Swindle' (a 2007 documentary film made by British television producer/director Martin Durkin), are incorporated into lessons in order to guide students to make their own decisions appropriately when discussing the Earth's climate change crisis. A questionnaire, individual teacher interviews and observations in class were conducted to evaluate the curriculum. The pre-test and post-test questionnaires showed differences in the students' knowledge, attitudes and behavior towards the global warming phenomenon before and after attending the lessons. The results show that those students who attended the whole curriculum had a significant increase in the knowledge and behavior factors of global climate (P value <0.001*). However, there was no significant improvement in their attitudes between the pre-test and post-test questionnaires (P value=0.329). From the individual interviews, the teachers who gave the lessons indicated that this project could increase the interaction with their students during class

  4. Weather Avoidance Guidelines for NASA Global Hawk High-Altitude UAS

    Science.gov (United States)

    Cecil, Daniel J.; Zipser, Edward J.; Velden, Chris; Monette, Sarah; Heymsfield, Gerry; Braun, Scott; Newman, Paul; Black, Pete; Black, Michael; Dunion, Jason

    2014-01-01

    NASA operates two Global Hawk unmanned aircraft systems for Earth Science research projects. In particular, they are used in the Hurricane and Severe Storm Sentinel (HS3) project during 2012, 2013, and 2014 to take measurements from the environment around tropical cyclones, and from directly above tropical cyclones. There is concern that strict adherence to the weather avoidance rules used in 2012 may sacrifice the ability to observe important science targets. We have proposed modifications to these weather avoidance rules that we believe will improve the ability to observe science targets without compromising aircraft safety. The previous guidelines, used in 2012, specified: (1) do not approach thunderstorms within 25 nm during flight at FL500 or below; (2) when flying above FL500, do not approach reported lightning within 25 nm in areas where cloud tops are reported at FL500 or higher, and maintain at least 10,000 ft vertical separation from reported lightning if cloud tops are below FL500; (3) no over-flight of cumulus tops higher than FL500; (4) no flight into forecast or reported icing conditions; and (5) no flight into forecast or reported moderate or severe turbulence. Based on past experience with high-altitude flights over tropical cyclones, we have recommended changing this guidance to: (1) do not approach thunderstorms within 25 nm during flight at FL500 or below; (2) maintain at least 5,000 ft vertical separation from significant convective cloud tops, except (a) when cloud tops are above FL500, in the event of reported significant lightning activity or indicators of significant overshooting tops, do not approach within 10-25 nm, depending on pilot discretion and advice from the Mission Scientist, and (b) when cloud tops are below FL500, maintain 10,000 ft separation from reported significant lightning or indicators of significant overshooting tops; (3) no flight into forecast or reported icing conditions; and (4) no flight into forecast or reported moderate or severe turbulence. The

  5. Parameterization of ion channeling half-angles and minimum yields

    Energy Technology Data Exchange (ETDEWEB)

    Doyle, Barney L.

    2016-03-15

    An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be manually looked up and read from were programmed into Excel in handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles, minimum yields, and the effects of amorphous overlayers on half-angles and minimum yields. The program can calculate these half-angles and minimum yields for 〈u v w〉 axes and [h k l] planes up to (5 5 5). The program is open source and available at (http://www.sandia.gov/pcnsc/departments/iba/ibatable.html).
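The half-angles such a tool parameterizes scale with Lindhard's characteristic axial channeling angle, ψ₁ = sqrt(2 Z₁ Z₂ e² / (E d)). A rough illustration of that scaling relation follows; this is not the Excel program itself, and the example beam and lattice values are only indicative.

```python
import math

E2_EV_ANGSTROM = 14.4   # e^2 in eV*Angstrom (Gaussian units)

def lindhard_psi1_deg(z1, z2, energy_ev, d_angstrom):
    """Lindhard characteristic axial channeling angle
    psi_1 = sqrt(2 * Z1 * Z2 * e^2 / (E * d)), returned in degrees.

    z1, z2: atomic numbers of the ion and the lattice atoms;
    energy_ev: ion kinetic energy in eV;
    d_angstrom: atomic spacing along the axial row in Angstrom."""
    psi1_rad = math.sqrt(2.0 * z1 * z2 * E2_EV_ANGSTROM
                         / (energy_ev * d_angstrom))
    return math.degrees(psi1_rad)

# Example (indicative values): 2 MeV He ions along a Si <110> row,
# atomic spacing ~3.84 A, giving a characteristic angle of roughly 0.6 deg.
angle = lindhard_psi1_deg(2, 14, 2.0e6, 3.84)
```

The handbook half-angles additionally fold in thermal vibrations and the corrections the Excel tool parameterizes, so this characteristic angle is only the leading-order scale.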

  6. A global and high-resolution assessment of the green, blue and grey water footprint of wheat

    NARCIS (Netherlands)

    Mekonnen, Mesfin; Hoekstra, Arjen Ysbert

    2010-01-01

    The aim of this study is to estimate the green, blue and grey water footprint of wheat in a spatially-explicit way, both from a production and consumption perspective. The assessment is global and improves upon earlier research by taking a high-resolution approach, estimating the water footprint of

  7. Estimate of the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model: a sensitivity analysis

    International Nuclear Information System (INIS)

    Guerrieri, A.

    2009-01-01

    In this report the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the spatial resolution and the values of some parameters employed by the model. Chaotic and non-chaotic regimes of circulation have been found.
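The standard numerical recipe for the largest Lyapunov exponent is to average the local logarithmic growth rate along an orbit. The sketch below applies it to the logistic map, whose exponent at r = 4 is known to be ln 2 ≈ 0.693; it is only an illustration of the method, not of the circulation model itself.

```python
import math

def largest_lyapunov_logistic(r=4.0, x0=0.3, n_iter=100_000, n_skip=1_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of ln|f'(x)| = ln|r*(1 - 2x)|."""
    x = x0
    for _ in range(n_skip):              # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        # Guard against a (measure-zero) visit to the critical point x = 0.5.
        acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return acc / n_iter

lam = largest_lyapunov_logistic()        # approaches ln 2 ~ 0.693 (chaotic)
```

A positive estimate indicates a chaotic regime, a non-positive one a regular regime; for a high-dimensional model the same averaging is done on the growth of a tangent-space perturbation rather than on a scalar derivative.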

  8. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    Science.gov (United States)

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which causes unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
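The core idea, storing the parameters of each of the four tasks instead of the rendered result, can be sketched independently of DICOM. The field names below are invented for illustration and are not part of the proposed 3DPR object or of the DICOM standard.

```python
import json

# Hypothetical parameter sets for the four visualization tasks named above.
presentation_state_3d = {
    "pre_processing": {"window_center": 40, "window_width": 400,
                       "smoothing_kernel": "gaussian_3x3x3"},
    "segmentation": {"method": "threshold", "lower_hu": 100,
                     "upper_hu": 3000, "mask_compression": "rle"},
    "post_processing": {"morphology": ["close", "fill_holes"]},
    "rendering": {"technique": "volume_raycasting",
                  "opacity_transfer": [[0, 0.0], [500, 0.8]],
                  "camera": {"azimuth_deg": 30, "elevation_deg": 15,
                             "zoom": 1.2}},
}

# Storing these few hundred bytes replaces storing rendered snapshots:
# the visualization can be regenerated, edited, and transferred cheaply.
serialized = json.dumps(presentation_state_3d, sort_keys=True)
restored = json.loads(serialized)
```

Reapplying the restored parameters to the original DICOM volume regenerates the exact rendering, which is what lets 3DPR preserve user interactions without duplicating image data.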

  9. A globally calibrated scheme for generating daily meteorology from monthly statistics: Global-WGEN (GWGEN) v1.0

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-10-01

    While a wide range of Earth system processes occur at daily and even subdaily timescales, many global vegetation and other terrestrial dynamics models historically used monthly meteorological forcing both to reduce computational demand and because global datasets were lacking. Recently, dynamic land surface modeling has moved towards resolving daily and subdaily processes, and global datasets containing daily and subdaily meteorology have become available. These meteorological datasets, however, cover only the instrumental era of the last approximately 120 years at best, are subject to considerable uncertainty, and represent extremely large data files with associated computational costs of data input/output and file transfer. For periods before the recent past or in the future, global meteorological forcing can be provided by climate model output, but the quality of these data at high temporal resolution is low, particularly for daily precipitation frequency and amount. Here, we present GWGEN, a globally applicable statistical weather generator for the temporal downscaling of monthly climatology to daily meteorology. Our weather generator is parameterized using a global meteorological database and simulates daily values of five common variables: minimum and maximum temperature, precipitation, cloud cover, and wind speed. GWGEN is lightweight, modular, and requires a minimal set of monthly mean variables as input. The weather generator may be used in a range of applications, for example, in global vegetation, crop, soil erosion, or hydrological models. While GWGEN does not currently perform spatially autocorrelated multi-point downscaling of daily weather, this additional functionality could be implemented in future versions.
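Weather generators in the WGEN family typically combine a two-state first-order Markov chain for precipitation occurrence with a distribution (often gamma) for wet-day amounts. A minimal sketch of that idea follows, with invented parameter values; GWGEN itself calibrates such parameters from its global station database.

```python
import numpy as np

def daily_precip(n_days, p_wd, p_ww, shape, scale, seed=0):
    """WGEN-style daily precipitation: a two-state (wet/dry) first-order
    Markov chain for occurrence, gamma-distributed amounts on wet days.

    p_wd = P(wet today | dry yesterday); p_ww = P(wet today | wet yesterday)."""
    rng = np.random.default_rng(seed)
    wet = np.zeros(n_days, dtype=bool)
    for d in range(1, n_days):
        p = p_ww if wet[d - 1] else p_wd
        wet[d] = rng.random() < p
    # Gamma amounts (in mm) on wet days, zero on dry days.
    amounts = np.where(wet, rng.gamma(shape, scale, size=n_days), 0.0)
    return amounts

# Invented monthly statistics; a real generator derives them per station/month.
precip = daily_precip(10_000, p_wd=0.3, p_ww=0.7, shape=0.8, scale=5.0)
```

The stationary wet-day fraction implied by these transition probabilities is p_wd / (1 + p_wd − p_ww) = 0.5, and the mean wet-day amount is shape × scale = 4 mm, so the monthly statistics are recoverable from the simulated series.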

  10. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    Science.gov (United States)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are the representations of the breaker index and of the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but each with its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally-generated wave conditions and overpredicting them in remotely-generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2% in comparison with the three best performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
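In the BJ78 family of parameterizations, the fraction of breaking waves Q_b is obtained from the implicit relation (1 − Q_b)/ln(Q_b) = −(H_rms/H_max)², where H_max is set by the breaker index and the local depth. A small sketch of solving it numerically (illustrative, not the authors' code):

```python
import math

def breaking_fraction(h_rms, h_max):
    """Solve (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)^2 (Battjes & Janssen 1978)
    for the fraction of breaking waves Q_b by bisection on (0, 1)."""
    b2 = (h_rms / h_max) ** 2
    if b2 >= 1.0:                 # saturated surf zone: all waves break
        return 1.0
    if b2 < 0.05:                 # Q_b ~ exp(-1/b2): numerically zero regime
        return math.exp(-1.0 / b2)
    f = lambda q: (1.0 - q) / math.log(q) + b2   # monotone decreasing in q
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):          # bisection: f(lo) > 0 > f(hi)
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Q_b grows steeply as H_rms approaches H_max, which is why the choice of breaker index (the main difference between the six parameterizations assessed above) has such a strong effect on the predicted dissipation.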

  11. A New WRF-Chem Treatment for Studying Regional Scale Impacts of Cloud-Aerosol Interactions in Parameterized Cumuli

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying

    2015-01-01

A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages and the Kain-Fritsch cumulus parameterization, which has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS), as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS.
Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
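The wet removal that dominates the BC changes described above is, at its simplest, a first-order loss process. A toy sketch, where the scavenging coefficient Λ is a free illustrative parameter rather than a WRF-Chem quantity:

```python
import math

def wet_scavenged_fraction(lambda_per_s, duration_s):
    """First-order wet removal of an aerosol tracer, C(t) = C0 * exp(-Λt);
    returns the fraction of the initial burden removed after `duration_s`."""
    return 1.0 - math.exp(-lambda_per_s * duration_s)
```

For Λ = 1e-4 s⁻¹ over one hour, roughly 30% of the tracer burden is removed, which is the right order of magnitude to explain tens-of-percent changes in column-integrated aerosol.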

  12. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    Science.gov (United States)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
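The integration strategy described, averaging a classical-nucleation-theory-style rate over a contact-angle PDF with a quasi Monte Carlo rule, can be sketched as follows. The prefactor-area-timestep product and the energy barrier below are placeholders chosen only to make the example run; they are not the parameterization's fitted values.

```python
import math
from statistics import NormalDist

def geometric_factor(theta):
    """CNT compatibility factor f(θ) = (2 + cosθ)(1 - cosθ)² / 4, which scales
    down the homogeneous nucleation barrier for contact angle θ (radians)."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def van_der_corput(n, base=2):
    """Low-discrepancy sequence in (0, 1) for quasi Monte Carlo sampling."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def frozen_fraction(theta_mean, theta_sd, barrier_over_kt,
                    j0_area_dt=1.0e6, n_samples=2048):
    """Quasi-MC average of the per-particle freezing probability
    P = 1 - exp(-J(θ)·A·Δt) over a Gaussian contact-angle PDF."""
    pdf = NormalDist(theta_mean, theta_sd)
    total = 0.0
    for i in range(1, n_samples + 1):
        theta = pdf.inv_cdf(van_der_corput(i))
        theta = min(max(theta, 1e-3), math.pi - 1e-3)
        rate = j0_area_dt * math.exp(-barrier_over_kt * geometric_factor(theta))
        total += 1.0 - math.exp(-rate)
    return total / n_samples
```

The example reproduces the paper's qualitative point: a population with small contact angles (efficient ice nuclei) freezes readily, while shifting the PDF toward large θ suppresses nucleation, which is why removing the low-θ tail over time matters so much.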

  13. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang

    2018-01-22

A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative for improving the representation of organized convection, as compared to convection-permitting simulations at the kilometer scale or a super-parameterization approach.
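The CAPE closure mentioned above rests on a quantity that is straightforward to compute: convective available potential energy, the vertical integral of positive parcel buoyancy. A sketch using trapezoidal integration (the three-level profile in the usage note is invented for illustration, not observed data):

```python
def cape_j_kg(heights_m, tv_parcel_k, tv_env_k, g=9.81):
    """CAPE = ∫ g · (Tv_parcel - Tv_env) / Tv_env dz over positively buoyant
    layers, by trapezoidal integration on a discrete sounding. KF-type schemes
    close the cumulus mass flux by consuming a large fraction of this energy
    over a convective adjustment timescale."""
    cape = 0.0
    for k in range(len(heights_m) - 1):
        b0 = g * (tv_parcel_k[k] - tv_env_k[k]) / tv_env_k[k]
        b1 = g * (tv_parcel_k[k + 1] - tv_env_k[k + 1]) / tv_env_k[k + 1]
        dz = heights_m[k + 1] - heights_m[k]
        cape += max(0.0, 0.5 * (b0 + b1)) * dz   # count only buoyant layers
    return cape
```

For example, a parcel 2–3 K warmer than its environment through a 2 km layer yields a CAPE of order 150 J/kg.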

  14. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Science.gov (United States)

    Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C

    2017-01-01

Deep brain stimulation (DBS) is an established clinical therapy, and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially, and important technical details are often missing from published reports. The objective of this work is to provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and to predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g., COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time-demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.
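The strength-duration relationships mentioned above are classically summarized by the Lapicque hyperbola, I(PW) = I_rh · (1 + t_ch/PW). A sketch with illustrative rheobase and chronaxie values (not fitted to any patient model):

```python
def lapicque_threshold_ma(pulse_width_us, rheobase_ma=0.5, chronaxie_us=150.0):
    """Lapicque strength-duration curve: the threshold stimulus amplitude
    grows hyperbolically as the pulse width shrinks. At a pulse width equal
    to the chronaxie, the threshold is exactly twice the rheobase."""
    return rheobase_ma * (1.0 + chronaxie_us / pulse_width_us)
```

This captures the non-linear trade-off clinicians exploit when programming DBS: shorter pulses require disproportionately higher amplitudes to recruit the same axons.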

  15. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Directory of Open Access Journals (Sweden)

    Kabilar Gunalan

Full Text Available Deep brain stimulation (DBS) is an established clinical therapy, and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially, and important technical details are often missing from published reports. The objective of this work is to provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and to predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g., COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time-demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.

  16. Global output feedback control for a class of high-order feedforward nonlinear systems with input delay.

    Science.gov (United States)

    Zha, Wenting; Zhai, Junyong; Fei, Shumin

    2013-07-01

This paper investigates the problem of output feedback stabilization for a class of high-order feedforward nonlinear systems with time-varying input delay. First, a scaling gain is introduced into the system under a set of coordinate transformations. Then, the authors construct an observer and controller to make the nominal system globally asymptotically stable. Based on the homogeneous domination approach and a Lyapunov-Krasovskii functional, it is shown that the closed-loop system can be rendered globally asymptotically stable by the scaling gain. Finally, two simulation examples are provided to illustrate the effectiveness of the proposed scheme. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  17. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958–2015

    OpenAIRE

    Abatzoglou, John T.; Dobrowski, Solomon Z.; Parks, Sean A.; Hegewisch, Katherine C.

    2018-01-01

We present TerraClimate, a dataset of high-spatial-resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958–2015. TerraClimate uses climatically aided interpolation, combining high-spatial-resolution climatological normals from the WorldClim dataset with coarser-resolution time-varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and sol...
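The climatically aided interpolation at the heart of such a dataset can be sketched in its simplest additive form: a coarse-resolution monthly anomaly (value minus coarse climatology) is added to each high-resolution climatological normal. The actual product also interpolates anomaly fields spatially and treats some variables (e.g., precipitation) multiplicatively; this single-anomaly version is a simplification.

```python
def climatically_aided_interpolation(hires_normals, coarse_normal, coarse_value):
    """Downscale one month of a coarse-cell value onto the fine grid cells it
    covers by adding its anomaly (coarse value minus coarse climatology) to
    each fine-scale climatological normal."""
    anomaly = coarse_value - coarse_normal
    return [normal + anomaly for normal in hires_normals]
```

The appeal of the method is that fine-scale spatial structure comes entirely from the high-resolution normals, while the coarse time series contributes only the year-to-year variability.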

  18. Comparison of Explicitly Simulated and Downscaled Tropical Cyclone Activity in a High-Resolution Global Climate Model

    Directory of Open Access Journals (Sweden)

    Hirofumi Tomita

    2010-01-01

Full Text Available The response of tropical cyclone activity to climate change is a matter of great inherent interest and practical importance. Most current global climate models are not, however, capable of adequately resolving tropical cyclones; this has led to the development of downscaling techniques designed to infer tropical cyclone activity from the large-scale fields produced by climate models. Here we compare the statistics of tropical cyclones simulated explicitly in a very high resolution (~14 km grid mesh) global climate model to the results of one such downscaling technique driven by the same global model. This is done for a simulation of the current climate and also for a simulation of a climate warmed by the addition of carbon dioxide. The explicitly simulated and downscaled storms are similarly distributed in space, but the intensity distribution of the downscaled events has a somewhat longer high-intensity tail, owing to the higher resolution of the downscaling model. Both explicitly simulated and downscaled events show large increases in the frequency of events at the high-intensity ends of their respective intensity distributions, but the downscaled storms also show increases in low-intensity events, whereas the explicitly simulated weaker events decline in number. On the regional scale, there are large differences in the responses of the explicitly simulated and downscaled events to global warming. In particular, the power dissipation of downscaled events shows a 175% increase in the Atlantic, while the power dissipation of explicitly simulated events declines there.

19. The Fire INventory from NCAR (FINN): a high-resolution global model to estimate the emissions from open burning

    Directory of Open Access Journals (Sweden)

    C. Wiedinmyer

    2011-07-01

Full Text Available The Fire INventory from NCAR version 1.0 (FINNv1) provides daily, 1 km resolution, global estimates of the trace gas and particle emissions from open burning of biomass, which includes wildfire, agricultural fires, and prescribed burning and does not include biofuel use and trash burning. Emission factors used in the calculations have been updated with recent data, particularly for the non-methane organic compounds (NMOC). The resulting global annual NMOC emission estimates are as much as a factor of 5 greater than some prior estimates. Chemical speciation profiles, necessary to allocate the total NMOC emission estimates to lumped species for use by chemical transport models, are provided for three widely used chemical mechanisms: SAPRC99, GEOS-CHEM, and MOZART-4. Using these profiles, FINNv1 also provides global estimates of key organic compounds, including formaldehyde and methanol. Uncertainties in the emission estimates arise from several of the method steps. The use of fire hot spots, assumed area burned, land cover maps, biomass consumption estimates, and emission factors all introduce error into the model estimates. The uncertainty in the FINNv1 emission estimates is about a factor of two, but the global estimates agree reasonably well with other global inventories of biomass burning emissions for CO, CO2, and other species with less variable emission factors. FINNv1 emission estimates have been developed specifically for modeling atmospheric chemistry and air quality in a consistent framework at scales from local to global. The product is unique because of its high temporal and spatial resolution, global coverage, and the number of species estimated. FINNv1 can be used for both hindcast and forecast or near-real-time model applications, and the results are being critically evaluated with models and observations whenever possible.
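Bottom-up fire inventories of this type rest on the classical Seiler-Crutzen relation, E = A × B × CC × EF (area burned × fuel loading × combustion completeness × species emission factor). A sketch with illustrative inputs; the numbers below are not FINNv1's tabulated values:

```python
def fire_emission_kg(area_burned_m2, fuel_load_kg_m2, combustion_completeness,
                     emission_factor_g_kg):
    """Seiler-Crutzen-style emission estimate for one species and one fire:
    dry mass burned (kg) times a species emission factor (g emitted per kg of
    dry fuel burned), converted to kg."""
    dry_mass_burned_kg = area_burned_m2 * fuel_load_kg_m2 * combustion_completeness
    return dry_mass_burned_kg * emission_factor_g_kg / 1000.0
```

Because the estimate is a product of four uncertain terms, errors in any one of them (hot-spot detection, assumed burned area, fuel maps, or emission factors) propagate directly into the final emissions, which is consistent with the factor-of-two uncertainty quoted above.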

  20. An Efficient Method for Mapping High-Resolution Global River Discharge Based on the Algorithms of Drainage Network Extraction

    Directory of Open Access Journals (Sweden)

    Jiaye Li

    2018-04-01

Full Text Available River discharge, which represents the accumulation of surface water flowing into rivers and ultimately into the ocean or other water bodies, may have great impacts on water quality and the living organisms in rivers. However, global knowledge of river discharge is still poor and worth exploring. This study proposes an efficient method for mapping high-resolution global river discharge based on the algorithms of drainage network extraction. Using the existing global runoff map and digital elevation model (DEM) data as inputs, this method consists of three steps. First, the pixels of the runoff map and the DEM data are resampled to the same resolution (i.e., 0.01-degree). Second, the flow direction of each pixel of the DEM data (identified by the optimal flow path method used in drainage network extraction) is determined and then applied to the corresponding pixel of the runoff map. Third, the river discharge of each pixel of the runoff map is calculated by summing the runoffs of all the pixels upstream of this pixel, similar to the upslope area accumulation step in drainage network extraction. Finally, a 0.01-degree global map of the mean annual river discharge is obtained. Moreover, a 0.5-degree global map of the mean annual river discharge is produced to display the results with a more intuitive perception. Compared against existing global river discharge databases, the 0.01-degree map is of generally high accuracy for the selected river basins, especially for the Amazon River basin, with the lowest relative error (RE) of 0.3%, and the Yangtze River basin, within the RE range of ±6.0%. However, the results for the Congo and Zambezi River basins are not satisfactory, with RE values over 90%, and it is inferred that there may be some accuracy problems with the runoff map in these river basins.
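The accumulation step described above is the standard flow-accumulation algorithm from drainage network extraction. A minimal sketch on a flattened grid, where `flow_dir[i]` gives the index of the downstream cell (real implementations derive D8 directions from a DEM and work on 2-D rasters):

```python
def accumulate_discharge(runoff, flow_dir):
    """Discharge at each cell = its own runoff plus the runoff of every cell
    draining through it. `flow_dir[i]` is the downstream cell index, or None
    at an outlet; the drainage network is assumed acyclic. A topological
    sweep visits each cell once, giving O(n) accumulation."""
    n = len(runoff)
    discharge = list(runoff)
    indeg = [0] * n
    for d in flow_dir:
        if d is not None:
            indeg[d] += 1
    # start from headwater cells (no inflow) and sweep downstream
    queue = [i for i in range(n) if indeg[i] == 0]
    while queue:
        i = queue.pop()
        d = flow_dir[i]
        if d is not None:
            discharge[d] += discharge[i]
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return discharge
```

For a chain 0 → 1 → 2 with a tributary 3 → 1, cell 2 (the outlet) accumulates the runoff of all four cells.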

  1. A Novel Structure and Design Optimization of Compact Spline-Parameterized UWB Slot Antenna

    Directory of Open Access Journals (Sweden)

    Koziel Slawomir

    2016-12-01

Full Text Available In this paper, a novel structure of a compact UWB slot antenna and its design optimization procedure are presented. In order to achieve a sufficient number of degrees of freedom necessary to obtain a considerable size reduction rate, the slot is parameterized using spline curves. All antenna dimensions are simultaneously adjusted using numerical optimization procedures. The fundamental bottleneck here is the high cost of the electromagnetic (EM) simulation model of the structure, which includes (for reliability) an SMA connector. Another problem is the large number of geometry parameters (nineteen). For the sake of computational efficiency, the optimization process is therefore performed using variable-fidelity EM simulations and surrogate-assisted algorithms. The optimization process is oriented towards explicit reduction of the antenna size and leads to a compact footprint of 199 mm2 as well as acceptable matching within the entire UWB band. The simulation results are validated using physical measurements of the fabricated antenna prototype.

  2. The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2003-11-21

To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations can then be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.

  3. A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Kain, J.S.; Deng, A. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.

  4. Distance parameterization for efficient seismic history matching with the ensemble Kalman Filter

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Arts, R.

    2012-01-01

    The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel-times is that it requires simulation of

  5. Chemical evidences of the effects of global change in high elevation lakes in Central Himalaya, Nepal

    Science.gov (United States)

    Tartari, Gianni; Lami, Andrea; Rogora, Michela; Salerno, Franco

    2016-04-01

It is well known that lakes integrate the pressures of their surrounding terrestrial environment and climatic variability. Both the water column and sediments are capable of accumulating signals of global change, such as warming of the deep layers, changes in diverse biological records (e.g., fossil diatoms), and variability in the nutrient loads affecting the trophic state. Typically, the biological responses to climate change have been studied in several types of lakes, while documented changes in water chemistry are much rarer. A long-term study of 20 high-altitude lakes located in the central southern Himalaya (Mt Everest), conducted since the 1990s, has highlighted a general change in the chemical composition of the lake water: a substantial rise in the ionic content was observed, particularly pronounced in the case of sulphate. In a couple of these lakes, monitored on an annual basis, the sulphate concentrations increased over 4-fold. A change in the composition of atmospheric wet deposition, as well as a possible influence of a decrease in seasonal snow cover duration, which could have exposed larger basin surfaces to alteration processes, were excluded. The chemical changes proved to be mainly related to sulphide oxidation processes occurring in the bedrock or the hydrographic basins. In particular, the oxidation processes, considered the main factor causing the sulphate increase, occurred in subglacial environments characterized by higher glacier velocities causing higher glacier shrinkage. Associated with this mechanism, the exposure of fresh mineral surfaces to the atmosphere may also have contributed to increases in the alkalinity of the lakes. The weakened monsoon of the past two decades may have partially contributed to the solute enrichment of the lakes through runoff waters. The almost synchronous response of the lakes studied, which differ in terms of the presence of glaciers in their basins, highlights the fact that the increasing ionic content of lake

  6. Parameterizing Urban Canopy Layer transport in a Lagrangian Particle Dispersion Model

    Science.gov (United States)

    Stöckl, Stefan; Rotach, Mathias W.

    2016-04-01

The percentage of people living in urban areas is rising worldwide; it crossed 50% in 2007 and is even higher in developed countries. High population density and numerous sources of air pollution in close proximity can lead to health issues. Therefore it is important to understand the nature of urban pollutant dispersion. In the last decades this field has experienced considerable progress; however, the influence of large roughness elements is complex and has not yet been completely described. Hence, this work studied urban particle dispersion close to the source and the ground. It used an existing, steady-state, three-dimensional Lagrangian particle dispersion model, which includes Roughness Sublayer parameterizations of turbulence and flow. The model is valid for convective and neutral to stable conditions and uses the kernel method for concentration calculation. As in most Lagrangian models, its lower boundary is the zero-plane displacement, which means that roughly the lower two-thirds of the mean building height are not included in the model. This missing layer roughly coincides with the Urban Canopy Layer. An earlier work "traps" particles hitting the lower model boundary for a recirculation period, which is calculated under the assumption of a vortex in skimming flow, before "releasing" them again. The authors hypothesize that improving the lower boundary condition by including Urban Canopy Layer transport could improve model predictions. This was tested herein by not only trapping the particles, but also advecting them with a mean, parameterized flow in the Urban Canopy Layer. Now the model calculates the trapping period based on either recirculation due to vortex motion in skimming flow regimes or vertical velocity if no vortex forms, depending on the incidence angle of the wind on a randomly chosen street canyon. The influence of this modification, as well as the model's sensitivity to parameterization constants, was investigated. To reach this goal, the model was

  7. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterizations. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different, owing to the much higher ice concentrations in convective mixed-phase clouds.
6) Systematic evaluations

  8. On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations

    Science.gov (United States)

    Cesana, Grégory; Suselj, Kay; Brient, Florent

    2017-10-01

We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.

  9. Low Cloud Feedback to Surface Warming in the World's First Global Climate Model with Explicit Embedded Boundary Layer Turbulence

    Science.gov (United States)

    Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Wyant, M. C.; Khairoutdinov, M.; Singh, B.

    2017-12-01

Biases and parameterization formulation uncertainties in the representation of boundary layer clouds remain a leading source of possible systematic error in climate projections. Here we show the first results of cloud feedback to +4K SST warming in a new experimental climate model, the "Ultra-Parameterized" (UP) Community Atmosphere Model, UPCAM. We have developed UPCAM as an unusually high-resolution implementation of cloud superparameterization (SP) in which a global set of cloud-resolving arrays is embedded in a host global climate model. In UP, the cloud-resolving scale includes sufficient internal resolution to explicitly generate the turbulent eddies that form marine stratocumulus and trade cumulus clouds. This is computationally costly but complements other available approaches for studying low clouds and their climate interaction, by avoiding parameterization of the relevant scales. In a recent publication we have shown that UP, while not without its own complexity trade-offs, can produce encouraging improvements in low cloud climatology in multi-month simulations of the present climate and is a promising target for exascale computing (Parishani et al. 2017). Here we show results of its low cloud feedback to warming in multi-year simulations for the first time. References: Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov (2017), Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence, J. Adv. Model. Earth Syst., 9, doi:10.1002/2017MS000968.

  10. The high-resolution global SST forecast set of the CSIR

    CSIR Research Space (South Africa)

    Landman, WA

    2011-09-01

Full Text Available The high-resolution global SST forecast set of the CSIR. Willem A. Landman, Council for Scientific and Industrial Research, P. O. Box 395, Pretoria, 0001, South Africa; David G. DeWitt and Dong-Eun Lee, International Research Institute for Climate and Society, Lamont...

  11. Towards development of a high quality public domain global roads database

    Directory of Open Access Journals (Sweden)

    Andrew Nelson

    2006-12-01

    Full Text Available There is clear demand for a global spatial public domain roads data set with improved geographic and temporal coverage, consistent coding of road types, and clear documentation of sources. The currently best available global public domain product covers only one-quarter to one-third of the existing road networks, and this varies considerably by region. Applications for such a data set span multiple sectors and would be particularly valuable for the international economic development, disaster relief, and biodiversity conservation communities, not to mention national and regional agencies and organizations around the world. The building blocks for such a global product are available for many countries and regions, yet thus far there has been neither strategy nor leadership for developing it. This paper evaluates the best available public domain and commercial data sets, assesses the gaps in global coverage, and proposes a number of strategies for filling them. It also identifies stakeholder organizations with an interest in such a data set that might either provide leadership or funding for its development. It closes with a proposed set of actions to begin the process.

  12. High-quality global hydrogen silsequioxane contact planarization for nanoimprint lithography

    NARCIS (Netherlands)

    Büyükköse, S.; Vratzov, Boris; van der Wiel, Wilfred Gerard

    2011-01-01

    The authors present a novel global contact planarization technique based on the spin-on-glass material hydrogen silsequioxane (HSQ) and demonstrate its excellent performance on patterns of 70 nm up to several microns generated by UV-based nanoimprint lithography. The HSQ layer (∼165 nm) is spin

  13. Further progress on defining highly conserved immunogenic epitopes for a global HIV vaccine

    DEFF Research Database (Denmark)

    De Groot, Anne S; Levitz, Lauren; Ardito, Matthew T

    2012-01-01

    Two major obstacles confronting HIV vaccine design have been the extensive viral diversity of HIV-1 globally and viral evolution driven by escape from CD8(+) cytotoxic T-cell lymphocyte (CTL)-mediated immune pressure. Regions of the viral genome that are not able to escape immune response...

  14. A new method to generate a high-resolution global distribution map of lake chlorophyll

    Science.gov (United States)

    Sayers, Michael J; Grimm, Amanda G.; Shuchman, Robert A.; Deines, Andrew M.; Bunnell, David B.; Raymer, Zachary B; Rogers, Mark W.; Woelmer, Whitney; Bennion, David; Brooks, Colin N.; Whitley, Matthew A.; Warner, David M.; Mychek-Londer, Justin G.

    2015-01-01

    A new method was developed, evaluated, and applied to generate a global dataset of growing-season chlorophyll-a (chl) concentrations in 2011 for freshwater lakes. Chl observations from freshwater lakes are valuable for estimating lake productivity as well as assessing the role that these lakes play in carbon budgets. The standard 4 km NASA OceanColor L3 chlorophyll concentration products generated from MODIS and MERIS sensor data are not sufficiently representative of global chl values because these can only resolve larger lakes, which generally have lower chl concentrations than lakes of smaller surface area. Our new methodology utilizes the 300 m-resolution MERIS full-resolution full-swath (FRS) global dataset as input and does not rely on the land mask used to generate standard NASA products, which masks many lakes that are otherwise resolvable in MERIS imagery. The new method produced chl concentration values for 78,938 and 1,074 lakes in the northern and southern hemispheres, respectively. The mean chl for lakes visible in the MERIS composite was 19.2 ± 19.2, the median was 13.3, and the interquartile range was 3.90–28.6 mg m−3. The accuracy of the MERIS-derived values was assessed by comparison with temporally near-coincident and globally distributed in situ measurements from the literature (n = 185, RMSE = 9.39, R2 = 0.72). This represents the first global-scale dataset of satellite-derived chl estimates for medium to large lakes.

  15. Impact of Parameterized Lee Wave Drag on the Energy Budget of an Eddying Global Ocean Model

    Science.gov (United States)

    2013-08-26

    Comparison between vertical shear mixing and surface wave-induced mixing in the extratropical ocean. J. Geophys. Res.-Oceans 117, C00J16. Rosmond, T.E...cycle for the World Ocean based on the 1/10° STORM/NCEP simulation. J. Phys. Oceanogr. 42, 2185–2205. Wallcraft, A.J., Kara, A.B., Hurlburt, H.E., 2005

  16. High-Precision Global Geodetic Systems: Revolution And Revelation In Fluid And 'Solid' Earth Tracking (Invited)

    Science.gov (United States)

    Minster, J. H.; Altamimi, Z.; Blewitt, G.; Carter, W. E.; Cazenave, A. A.; Davis, J. L.; Dragert, H.; Feary, D. A.; Herring, T.; Larson, K. M.; Ries, J. C.; Sandwell, D. T.; Wahr, J. M.

    2009-12-01

    Over the past half-century, space geodetic technologies have changed profoundly the way we look at the planet, not only in the matter of details and accuracy, but also in the matter of how the entire planet changes with time, even on “human” time scales. The advent of space geodesy has provided exquisite images of the ever-changing land and ocean topography and global gravity field of the planet. We now enjoy an International Terrestrial Reference System with a time-dependent geocenter position accurate to a few millimeters. We can image small and large tectonic deformations of the surface before, during, and after earthquakes and volcanic eruptions. We measure both the past subtle changes as well as the recent dramatic changes in the ice sheets, and track global and regional sea-level change to a precision of a millimeter per year or better. The remarkable achievements of Earth observing missions over the past two decades, and the success of future international missions described in the Decadal Survey depend both implicitly and explicitly on the continued availability and enhancement of a reliable and resilient global infrastructure for precise geodesy, and on ongoing advances in geodetic science that are linked to it. This allows us to deal with global scientific, technological and social issues such as climate change and natural hazards, but the impact of the global precise geodetic infrastructure also permeates our everyday lives. Nowadays drivers, aviators, and sailors can determine their positions inexpensively to meter precision in real time, anywhere on the planet. In the foreseeable future, not only will we be able to know a vehicle’s position to centimeter accuracy in real time, but also to control that position, and thus introduce autonomous navigation systems for many tasks which are beyond the reach of “manual” navigation capabilities. This vision will only be realized with sustained international support of the precise global geodetic

  17. Global crop exposure to critical high temperatures in the reproductive period: historical trends and future projections

    International Nuclear Information System (INIS)

    Gourdji, Sharon M; Sibley, Adam M; Lobell, David B

    2013-01-01

    Long-term warming trends across the globe have shifted the distribution of temperature variability, such that what was once classified as extreme heat relative to local mean conditions has become more common. This is also true for agricultural regions, where exposure to extreme heat, particularly during key growth phases such as the reproductive period, can severely damage crop production in ways that are not captured by most crop models. Here, we analyze exposure of crops to physiologically critical temperatures in the reproductive stage (Tcrit), across the global harvested areas of maize, rice, soybean and wheat. Trends for the 1980–2011 period show a relatively weak correspondence (r = 0.19) between mean growing season temperature and Tcrit exposure trends, emphasizing the importance of separate analyses for Tcrit. Increasing Tcrit exposure in the past few decades is apparent for wheat in Central and South Asia and South America, and for maize in many diverse locations across the globe. Maize had the highest percentage (15%) of global harvested area exposed to at least five reproductive days over Tcrit in the 2000s, although this value is somewhat sensitive to the exact temperature used for the threshold. While there was relatively little sustained exposure to reproductive days over Tcrit for the other crops in the past few decades, all show increases with future warming. Using projections from climate models we estimate that by the 2030s, 31, 16, and 11% respectively of maize, rice, and wheat global harvested area will be exposed to at least five reproductive days over Tcrit in a typical year, with soybean much less affected. Both maize and rice exhibit non-linear increases with time, with total area exposed for rice projected to grow from 8% in the 2000s to 27% by the 2050s, and maize from 15 to 44% over the same period. While faster development should lead to earlier flowering, which would reduce reproductive extreme heat exposure for wheat on a

  18. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    Science.gov (United States)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
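The core idea in this record — calibrating a parameterization by matching low-order statistics between a model and observations — can be sketched with a toy system. Everything below (the AR(1) "climate", the choice of statistics, the brute-force search) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def simulate(a, n=5000, seed=0):
    # Toy AR(1) "climate": x_{t+1} = a*x_t + noise; a is the parameter to learn
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + rng.standard_normal()
    return x

def mismatch(a, target):
    # Distance between the model's low-order statistics (mean, variance)
    # and the "observed" ones
    x = simulate(a)
    return (x.mean() - target[0]) ** 2 + (x.var() - target[1]) ** 2

# "Observations": a run with the (hidden) true parameter a = 0.8
obs = simulate(0.8, seed=1)
target = (obs.mean(), obs.var())

# Calibrate by brute-force search over the parameter
grid = np.linspace(0.5, 0.95, 91)
best = grid[np.argmin([mismatch(a, target) for a in grid])]
print(round(float(best), 2))
```

In a real ESM the search would be replaced by ensemble Kalman or Bayesian methods, but the matching of summary statistics rather than full trajectories is the same.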

  19. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda,m-1), diffuse backscatter b(lambda,m-1), beam attenuation alpha(lambda,m-1), and beam to diffuse conversion c(lambda,m-1 ) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi) sampled on Cape Canaveral Air Force Station and Kennedy Space Center Florida in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high sensitivity narrow bandwidth spectrograph, included above canopy reflectance, internal canopy transmittance and reflectance and bottom reflectance. Leaf samples were returned to laboratory where optical and physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  20. Factors influencing the parameterization of anvil clouds within general circulation models

    International Nuclear Information System (INIS)

    Leone, J.M. Jr.; Chin, H.N.

    1994-01-01

    The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). We have concentrated on a small portion of the overall goal: the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved, we have been using a multi-scale attack. For the early-time generation and development of the cirrus anvil, we are using a cloud-scale model with horizontal resolution of 1 to 2 kilometers; for transport by the larger-scale flow, we are using a mesoscale model with a horizontal resolution of 20 to 60 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to derive improved cloud parameterizations for use in GCMs. This paper presents a new tool, a cirrus generator, which we have developed to aid in our mesoscale studies.

  1. Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3

    Science.gov (United States)

    Belochitski, A.; Donner, L.

    2014-12-01

    A double-moment cloud microphysical scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011) has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. Impact on cloud radiative forcing and climate sensitivity is investigated.

  2. Global-scale high-resolution (˜ 1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (˜ 1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
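Regression models of this kind are typically power laws, i.e. linear in log space. A minimal sketch of the fitting step, with invented covariates, exponents and noise (not the authors' calibration data or model form):

```python
import numpy as np

# Synthetic "catchments": annual flow follows a power law in catchment area
# and precipitation (all ranges, exponents and noise are invented here)
rng = np.random.default_rng(42)
n = 200
area = rng.uniform(1e2, 1e6, n)       # km^2
precip = rng.uniform(300, 3000, n)    # mm/yr
flow = 1e-3 * area**0.95 * precip**1.4 * rng.lognormal(0.0, 0.1, n)

# Ordinary least squares in log space: log Q = b0 + b1*log A + b2*log P
X = np.column_stack([np.ones(n), np.log(area), np.log(precip)])
b, *_ = np.linalg.lstsq(X, np.log(flow), rcond=None)

# The fitted exponents come back close to the "true" 0.95 and 1.4
print(np.round(b[1], 2), np.round(b[2], 2))
```

Once fitted, such a model only needs gridded catchment characteristics to map AF anywhere, which is what makes the ~1 km global maps feasible.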

  3. Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions.

    Directory of Open Access Journals (Sweden)

    Adam M Wilson

    2016-03-01

    Full Text Available Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 y of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitat, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties.

  4. Cross-section parameterization of the pebble bed modular reactor using the dimension-wise expansion model

    International Nuclear Information System (INIS)

    Zivanovic, Rastko; Bokov, Pavel M.

    2010-01-01

    This paper discusses the use of the dimension-wise expansion model for cross-section parameterization. The components of the model were approximated with tensor products of orthogonal polynomials. As we demonstrate, the model for a specific cross-section can be built in a systematic way directly from data without any a priori knowledge of its structure. The methodology is able to construct a finite basis of orthogonal polynomials that is required to approximate a cross-section with pre-specified accuracy. The methodology includes a global sensitivity analysis that indicates irrelevant state parameters which can be excluded from the model without compromising the accuracy of the approximation and without repetition of the fitting process. To fit the dimension-wise expansion model, Randomised Quasi-Monte-Carlo Integration and Sparse Grid Integration methods were used. To test the parameterization methods with different integrations embedded we have used the OECD PBMR 400 MW benchmark problem. It has been shown in this paper that the Sparse Grid Integration achieves pre-specified accuracy with a significantly (up to 1-2 orders of magnitude) smaller number of samples compared to Randomised Quasi-Monte-Carlo Integration.
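A tensor-product orthogonal-polynomial fit of the kind described can be sketched with NumPy's Legendre utilities; the two-parameter "cross-section" below is a made-up toy surface on normalized state parameters, not the PBMR benchmark:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical cross-section over two normalized state parameters on [-1, 1]^2
def xs(t, b):
    return 1.0 + 0.3 * t - 0.2 * b + 0.1 * t * b

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 500)   # sampled state points (here plain Monte Carlo;
b = rng.uniform(-1, 1, 500)   # the paper uses quasi-MC or sparse grids)

# Least-squares fit in a tensor-product Legendre basis, degree 2 per dimension
V = legendre.legvander2d(t, b, [2, 2])
coef, *_ = np.linalg.lstsq(V, xs(t, b), rcond=None)
c = coef.reshape(3, 3)

# Evaluate the fitted expansion at a new state point
approx = float(legendre.legval2d(0.5, -0.5, c))
print(approx, xs(0.5, -0.5))
```

Because the basis is orthogonal, the magnitudes of the coefficients double as sensitivity indicators: dimensions whose coefficients are negligible can be dropped from the model, as the abstract notes.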

  5. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Science.gov (United States)

    Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.

    2013-07-01

    . Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  6. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Directory of Open Access Journals (Sweden)

    C. R. Flechard

    2013-07-01

    -chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  7. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  8. Validation of high-resolution aerosol optical thickness simulated by a global non-hydrostatic model against remote sensing measurements

    Science.gov (United States)

    Goto, Daisuke; Sato, Yousuke; Yashiro, Hisashi; Suzuki, Kentaroh; Nakajima, Teruyuki

    2017-02-01

    A high-performance computing resource allows us to conduct numerical simulations with a horizontal grid spacing that is sufficiently fine to resolve cloud systems. The cutting-edge computational capability provided by the K computer at RIKEN in Japan enabled the authors to perform long-term, global simulations of air pollution and clouds with unprecedentedly high horizontal resolutions. In this study, a next-generation model capable of simulating global air pollution with O(10 km) grid spacing was developed by coupling an atmospheric chemistry model to the Non-hydrostatic Icosahedral Atmospheric Model (NICAM). Using the newly developed model, month-long simulations for July were conducted with 14 km grid spacing on the K computer. Regarding the global distributions of aerosol optical thickness (AOT), it was found that the correlation coefficient (CC) between the simulation and AERONET measurements was approximately 0.7, and the normalized mean bias was -10%. The simulated AOT was also compared with satellite-retrieved values; the CC was approximately 0.6. The radiative effects due to each chemical species (dust, sea salt, organics, and sulfate) were also calculated and compared with multiple measurements. As a result, the simulated fluxes of upward shortwave radiation at the top of the atmosphere and the surface compared well with the observed values, whereas those of downward shortwave radiation at the surface were underestimated, even if all aerosol components were considered. However, the aerosol radiative effects on the downward shortwave flux at the surface were found to be as high as 10 W/m2 on a global scale; thus, simulated aerosol distributions can strongly affect the simulated air temperature and dynamic circulation.
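The two validation metrics quoted in this record are standard and easy to compute; the AOT values below are made up for illustration:

```python
import numpy as np

# Correlation coefficient (CC) and normalized mean bias (NMB) between
# simulated and observed AOT, as used in model-vs-AERONET comparisons
sim = np.array([0.12, 0.30, 0.25, 0.40, 0.18, 0.22])
obs = np.array([0.15, 0.28, 0.30, 0.45, 0.20, 0.27])

cc = np.corrcoef(sim, obs)[0, 1]
nmb = (sim - obs).sum() / obs.sum()   # e.g. -0.10 means a -10% bias
print(round(cc, 2), round(100 * nmb, 1))
```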

  9. Experimental continuously reinforced concrete pavement parameterization using nondestructive methods

    Directory of Open Access Journals (Sweden)

    L. S. Salles

    Full Text Available ABSTRACT Four continuously reinforced concrete pavement (CRCP) sections were built at the University of São Paulo campus in order to analyze the pavement performance in a tropical environment. The sections' short length, coupled with particular project aspects, caused the experimental CRCP cracking to differ from that of traditional CRCP. Three years after construction, a series of nondestructive tests - Falling Weight Deflectometer (FWD) loadings - was performed to verify and parameterize the pavement structural condition based on two main properties: the elasticity modulus of concrete (E) and the modulus of subgrade reaction (k). These properties were estimated through a matching process between real basins and EverFE-simulated basins with the load at the slab center, between two consecutive cracks. The backcalculation results show that the lack of anchorage at the sections' ends decreases the E and k values and that the longitudinal reinforcement percentage provides additional stiffness to the pavement. Additionally, FWD loadings tangential to the cracks allowed determination of the load transfer efficiency (LTE) across cracks. The LTE values were above 90% for all cracks.
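The matching step can be illustrated as a least-squares grid search over (E, k); the forward "basin" model below is a hypothetical placeholder standing in for EverFE, and all numbers are invented:

```python
import numpy as np

def basin(E, k, r):
    # Hypothetical deflection-basin response to slab modulus E and subgrade
    # reaction k at sensor offsets r (m) -- NOT a real pavement model, just
    # a stand-in with the right qualitative shape (stiffer -> smaller basin)
    return 1000.0 / (0.01 * E + k) * np.exp(-r / (1.0 + 1e-5 * E))

r = np.array([0.0, 0.3, 0.6, 0.9, 1.2])   # FWD sensor offsets
measured = basin(30000.0, 80.0, r)        # pretend field measurement

# Backcalculation: grid search for the (E, k) pair minimizing the mismatch
E_grid = np.linspace(10000, 50000, 41)
k_grid = np.linspace(40, 120, 41)
err = [[np.sum((basin(E, k, r) - measured) ** 2) for k in k_grid]
       for E in E_grid]
i, j = divmod(int(np.argmin(err)), len(k_grid))
print(E_grid[i], k_grid[j])
```

Real backcalculation replaces the toy forward model with a finite-element response and usually a smarter optimizer, but the structure — simulate, compare basins, minimize — is the same.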

  10. Parameterization-based tracking for the P2 experiment

    Science.gov (United States)

    Sorokin, Iurii

    2017-08-01

    The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin^2 θW)/sin^2 θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10^11 detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the next detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well-suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase-space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
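A stripped-down sketch of parameterization-based track following: fit a cheap parametric propagator offline from simulated tracks, then use it online to predict a hit position and gate candidates. The toy two-plane geometry and polynomial basis are assumptions for illustration, not the P2 implementation:

```python
import numpy as np

# Offline: "simulate" tracks through a toy two-plane setup and fit a cheap
# parametric propagator x2 = f(x1, slope), replacing stepwise propagation
rng = np.random.default_rng(7)
x1 = rng.uniform(-1.0, 1.0, 1000)          # hit position at plane 1
slope = rng.uniform(-0.2, 0.2, 1000)       # track slope at plane 1
x2 = x1 + 2.0 * slope + 0.05 * x1**2       # toy propagation with a nonlinearity

A = np.column_stack([np.ones_like(x1), x1, slope, x1**2])
coef, *_ = np.linalg.lstsq(A, x2, rcond=None)

def propagate(x, s):
    # Online: evaluate the pre-calculated parameterization (fast, FPGA-friendly)
    return float(coef @ np.array([1.0, x, s, x * x]))

# Gate candidate hits at plane 2 within a window around the prediction,
# discarding combinatorial background early
pred = propagate(0.3, 0.1)
hits = np.array([-0.5, 0.2, pred + 0.01, 0.9])
kept = hits[np.abs(hits - pred) < 0.05]
print(len(kept))
```

The expensive physics lives entirely in the offline fit; the online step is a handful of multiply-adds per candidate, which is what makes an FPGA implementation attractive.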

  11. Influence of Ice Nuclei Parameterization Schemes on the Hail Process

    Directory of Open Access Journals (Sweden)

    Xiaoli Liu

    2018-01-01

    Full Text Available Ice nuclei are very important factors as they significantly affect the development and evolution of convective clouds such as hail clouds. In this study, numerical simulations of hail processes in the Zhejiang Province were conducted using a mesoscale numerical model (WRF v3.4). The effects of six ice nuclei parameterization schemes on the macroscopic and microscopic structures of hail clouds were compared. The effect of the ice nuclei concentration on ground hailfall is stronger than that on ground rainfall. There were significant differences in the spatiotemporal distribution and intensity of hailfall. Changes in the ice nuclei concentration caused different changes in hydrometeors and directly affected the ice crystals, and, hence, the spatiotemporal distribution of other hydrometeors and the thermodynamic structure of clouds. An increased ice nuclei concentration raises the initial concentration and mixing ratio of ice crystals. In the developing and early maturation stages of the hail cloud, a larger number of ice crystals competed for water vapor as the ice nuclei concentration increased. This effect prevents ice crystals from maturing into snow particles and inhibits the formation and growth of hail embryos. During later maturation stages, updraft in the cloud intensified and more supercooled water was transported above the 0°C level, benefitting the production and growth of hail particles. An increased ice nuclei concentration therefore favors the formation of hail.

  12. Frozen soil parameterization in a distributed biosphere hydrological model

    Directory of Open Access Journals (Sweden)

    L. Wang

    2010-03-01

    Full Text Available In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binggou watershed, against the in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). First, by using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, by using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration against the discharges at the basin outlet in July and August, which cover the largest annual flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme performed much better than the WEB-DHM without it in simulating the soil moisture profile in this cold-region catchment and the discharges at the basin outlet over the yearlong simulation.

  13. Test Driven Development of a Parameterized Ice Sheet Component

    Science.gov (United States)

    Clune, T.

    2011-12-01

    Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability for scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need to have simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
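One of the obstacles named here — needing a closed-form reference value plus a realistic error estimate — can be illustrated with a unit-test-style check of a trapezoid integrator (a generic example, not the ice sheet component):

```python
# TDD-style unit test for a numerical routine: compare the composite
# trapezoid rule against a closed-form integral with an explicit error bound
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

exact = 1.0 / 3.0                         # closed form: integral of x^2 on [0,1]
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 100)

# Realistic error estimate: |error| <= (b-a)*h^2*max|f''|/12 = h^2/6 here;
# padded slightly for floating-point rounding
bound = 1.05 * (1.0 / 100) ** 2 / 6.0
assert abs(approx - exact) <= bound
print("pass")
```

The key TDD point is that the tolerance is derived from the method's known convergence order, not tuned after the fact to make the test pass.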

  14. Boundary layer parameterizations and long-range transport

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1992-01-01

A joint work group between the American Meteorological Society (AMS) and the EPA is pursuing the construction of an air quality model that incorporates boundary layer parameterizations of dispersion and transport. This model could replace the currently accepted model, the Industrial Source Complex (ISC) model. The ISC model is a Gaussian-plume multiple point-source model that provides for consideration of fugitive emissions, aerodynamic wake effects, gravitational settling and dry deposition. A work group of several Federal and State agencies is pursuing the construction of an air quality modeling system for use in assessing and tracking visibility impairment resulting from long-range transport of pollutants. The modeling system is designed to use the hourly vertical profiles of wind, temperature and moisture resulting from a mesoscale meteorological processor that employs four-dimensional data assimilation (FDDA). FDDA involves adding forcing functions to the governing model equations to gradually ''nudge'' the model state toward the observations (12-hourly upper air observations of wind, temperature and moisture, and 3-hourly surface observations of wind and moisture). In this way it is possible to generate data sets whose accuracy, in terms of transport, precipitation, and dynamic consistency, is superior both to direct interpolation of synoptic-scale analyses of observations and to purely predictive model results. (AB) (19 refs.)

  15. A new parameterization for integrated population models to document amphibian reintroductions.

    Science.gov (United States)

    Duarte, Adam; Pearl, Christopher A; Adams, Michael J; Peterson, James T

    2017-09-01

    Managers are increasingly implementing reintroduction programs as part of a global effort to alleviate amphibian declines. Given uncertainty in factors affecting populations and a need to make recurring decisions to achieve objectives, adaptive management is a useful component of these efforts. A major impediment to the estimation of demographic rates often used to parameterize and refine decision-support models is that life-stage-specific monitoring data are frequently sparse for amphibians. We developed a new parameterization for integrated population models to match the ecology of amphibians and capitalize on relatively inexpensive monitoring data to document amphibian reintroductions. We evaluate the capability of this model by fitting it to Oregon spotted frog (Rana pretiosa) monitoring data collected from 2007 to 2014 following their reintroduction within the Klamath Basin, Oregon, USA. The number of egg masses encountered and the estimated adult and metamorph abundances generally increased following reintroduction. We found that survival probability from egg to metamorph ranged from 0.01 in 2008 to 0.09 in 2009 and was not related to minimum spring temperatures, metamorph survival probability ranged from 0.13 in 2010-2011 to 0.86 in 2012-2013 and was positively related to mean monthly temperatures (logit-scale slope = 2.37), adult survival probability was lower for founders (0.40) than individuals recruited after reintroduction (0.56), and the mean number of egg masses per adult female was 0.74. Our study is the first to test hypotheses concerning Oregon spotted frog egg-to-metamorph and metamorph-to-adult transition probabilities in the wild and document their response at multiple life stages following reintroduction. Furthermore, we provide an example to illustrate how the structure of our integrated population model serves as a useful foundation for amphibian decision-support models within adaptive management programs. The integration of multiple, but
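The reported "logit-scale slope = 2.37" means metamorph survival is modeled through an inverse-logit link, so warmer mean monthly temperatures sharply increase survival probability. A minimal sketch of that relationship, assuming a standardized temperature covariate and a hypothetical intercept (only the slope and the link come from the abstract):

```python
import math

def inv_logit(x):
    """Map a logit-scale linear predictor to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def metamorph_survival(temp_std, intercept=0.0, slope=2.37):
    """Metamorph survival probability as a function of standardized mean
    monthly temperature. The slope is the reported logit-scale estimate;
    the intercept and covariate scaling are hypothetical."""
    return inv_logit(intercept + slope * temp_std)
```

With this link, a one-standard-deviation swing in temperature moves survival across most of its range, consistent with the wide spread of annual estimates (0.13 to 0.86) reported above.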

  16. Parameterization of Fuel-Optimal Synchronous Approach Trajectories to Tumbling Targets

    Directory of Open Access Journals (Sweden)

    David Charles Sternberg

    2018-04-01

Docking with potentially tumbling Targets is a common element of many mission architectures, including on-orbit servicing and active debris removal. This paper studies synchronized docking trajectories as a way to ensure the Chaser satellite remains on the docking axis of the tumbling Target, thereby reducing collision risks and enabling persistent onboard sensing of the docking location. Chaser satellites have limited computational power available to them, and the time allowed for the determination of a fuel-optimal trajectory may be limited. Consequently, parameterized trajectories that approximate the fuel-optimal trajectory while following synchronous approaches may be used to provide a computationally efficient means of determining near-optimal trajectories to a tumbling Target. This paper presents a method of balancing the computational cost against the added fuel expenditure required for parameterization, including the selection of a parameterization scheme, the number of parameters in the parameterization, and a means of incorporating the dynamics of a tumbling satellite into the parameterization process. Comparisons of the parameterized trajectories are made with the fuel-optimal trajectory, which is computed through the numerical propagation of Euler's equations. Additionally, various tumble types are considered to demonstrate the efficacy of the presented computation scheme. With this parameterized trajectory determination method, Chaser satellites may perform terminal approach and docking maneuvers with both fuel and computational efficiency.
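The benchmark trajectory above is obtained by numerically propagating Euler's rigid-body equations for the tumbling Target. A minimal sketch of such a propagation, assuming a torque-free body with principal-axis inertia and using classical RK4 (function names, inertia values, and step sizes are illustrative, not the paper's):

```python
import numpy as np

def euler_rates(w, inertia):
    """Euler's rotational equations for a torque-free rigid body whose
    inertia tensor is diag(inertia) in the body frame."""
    I1, I2, I3 = inertia
    wx, wy, wz = w
    return np.array([(I2 - I3) * wy * wz / I1,
                     (I3 - I1) * wz * wx / I2,
                     (I1 - I2) * wx * wy / I3])

def propagate_tumble(w0, inertia, dt, n_steps):
    """Propagate the Target's body rates with classical RK4 and return
    the time history of angular velocity (a sketch of the propagation
    the trajectory optimizer would wrap)."""
    w = np.asarray(w0, dtype=float)
    history = [w.copy()]
    for _ in range(n_steps):
        k1 = euler_rates(w, inertia)
        k2 = euler_rates(w + 0.5 * dt * k1, inertia)
        k3 = euler_rates(w + 0.5 * dt * k2, inertia)
        k4 = euler_rates(w + dt * k3, inertia)
        w = w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        history.append(w.copy())
    return np.array(history)
```

A useful sanity check on any such propagator is that rotational kinetic energy and the magnitude of angular momentum stay constant for torque-free motion, even for the unstable intermediate-axis tumbles the paper considers.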

  17. Prototyping global Earth System Models at high resolution: Representation of climate, ecosystems, and acidification in Eastern Boundary Currents

    Science.gov (United States)

    Dunne, J. P.; John, J. G.; Stock, C. A.

    2013-12-01

The world's major Eastern Boundary Currents (EBCs), such as the California Current Large Marine Ecosystem (CCLME), are critically important areas for global fisheries. Computational limitations have divided past EBC modeling into two types: high-resolution regional approaches that resolve the strong meso-scale structures involved, and coarse global approaches that represent the large-scale context for EBCs but only crudely resolve the largest scales of their manifestation. These latter global studies have illustrated the complex mechanisms involved in the climate change and acidification response in these regions, with the CCLME response dominated not by local adjustments but by large-scale reorganization of ocean circulation through remote forcing of water-mass supply pathways. While qualitatively illustrating the limitations of regional high-resolution studies for long-term projection, these studies lack the ability to robustly quantify change because the models cannot represent the baseline meso-scale structures of EBCs. In the present work, we compare current-generation coarse-resolution (one degree) and prototype next-generation high-resolution (1/10 degree) Earth System Models (ESMs) from NOAA's Geophysical Fluid Dynamics Laboratory in representing the four major EBCs. We review the long-known temperature biases that the coarse models suffer in being unable to represent the timing and intensity of upwelling-favorable winds, along with their lack of representation of the observed high chlorophyll and biological productivity resulting from this upwelling. In promising contrast, we show that the high-resolution prototype is capable of representing not only the overall meso-scale structure in physical and biogeochemical fields but also the appropriate offshore extent of temperature anomalies and other EBC characteristics. Results for chlorophyll were mixed; while high-resolution chlorophyll in EBCs was strongly enhanced over the coarse resolution

  18. Intergenic DNA sequences from the human X chromosome reveal high rates of global gene flow

    Directory of Open Access Journals (Sweden)

    Wall Jeffrey D

    2008-11-01

Abstract Background Despite intensive efforts devoted to collecting human polymorphism data, little is known about the role of gene flow in the ancestry of human populations. This is partly because most analyses have applied one of two simple models of population structure, the island model or the splitting model, which make unrealistic biological assumptions. Results Here, we analyze 98 kb of DNA sequence from 20 independently evolving intergenic regions on the X chromosome in a sample of 90 humans from six globally diverse populations. We employ an isolation-with-migration (IM) model, which assumes that populations split and subsequently exchange migrants, to independently estimate effective population sizes and migration rates. While the maximum effective size of modern humans is estimated at ~10,000, individual populations vary substantially in size, with African populations tending to be larger (2,300–9,000) than non-African populations (300–3,300). We estimate mean rates of bidirectional gene flow at 4.8 × 10^-4 per generation. Bidirectional migration rates are ~5-fold higher among non-African populations (1.5 × 10^-3) than among African populations (2.7 × 10^-4). Interestingly, because effective sizes and migration rates are inversely related in African and non-African populations, population migration rates are similar within Africa and Eurasia (e.g., global mean Nm = 2.4). Conclusion We conclude that gene flow has played an important role in structuring global human populations and that migration rates should be incorporated as critical parameters in models of human demography.
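The Nm quantity in the conclusion is the expected number of migrants per generation: the product of effective population size N and per-generation migration rate m. A quick check with values drawn from inside the reported ranges (the specific N/m pairings below are illustrative, not the paper's estimates for any particular population):

```python
def population_migration_rate(effective_size, migration_rate):
    """Nm: expected number of migrants per generation."""
    return effective_size * migration_rate

# Larger African effective sizes pair with lower migration rates, and
# vice versa, so the resulting Nm values are comparable -- consistent
# with the reported global mean Nm of about 2.4.
african_nm = population_migration_rate(9000, 2.7e-4)      # 2.43
non_african_nm = population_migration_rate(1600, 1.5e-3)  # 2.4
```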

  19. Enhancing Global Competitiveness: Benchmarking Airline Operational Performance in Highly Regulated Environments

    Science.gov (United States)

    Bowen, Brent D.; Headley, Dean E.; Kane, Karisa D.

    1998-01-01

Enhancing competitiveness in the global airline industry is at the forefront of attention for airlines, government, and the flying public. The seemingly unchecked growth of major airline alliances is heralded as an enhancement to global competition. However, like many mega-conglomerates, mega-airlines will face complications driven by size, regardless of the many recitations of enhanced efficiency. Outlined herein is a conceptual model to serve as a decision tool for policy-makers, managers, and consumers of airline services. This model is developed using public data for the United States (U.S.) major airline industry available from the U.S. Department of Transportation, the Federal Aviation Administration, the National Aeronautics and Space Administration, the National Transportation Safety Board, and other public and private sector sources. Data points include number of accidents, pilot deviations, operational performance indicators, flight problems, and other factors. Data from these sources provide the opportunity to develop a model based on the dot product of two vectors: a row vector weighted for importance by a key informant panel of government, industry, and consumer experts, and a column vector established with the factor values. The resulting equation, known as the national Airline Quality Rating (AQR), where Q is quality, C is the weight, and V is the value of each variable, is stated Q = Σ C_i × V_i for i = 1–19. Examining historical patterns of AQR results provides the basis for establishing an industry benchmark for the purpose of enhancing airline operational performance; a 7-year average of overall operational performance provides the resulting benchmark indicator. Applications from this example can be extended to the many competitive environments of the global industry and assist policy-makers faced with rapidly changing regulatory challenges.
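The AQR described above is a weighted dot product over 19 factors. A minimal sketch of that computation (the example weights and values below are invented for illustration; the real weights come from the expert panel, with adverse factors typically weighted negatively):

```python
def airline_quality_rating(weights, values):
    """AQR as a dot product: Q = sum of C_i * V_i over the factors.
    `weights` is the panel-assigned importance vector C (negative for
    adverse factors such as accidents); `values` is the factor vector V."""
    if len(weights) != len(values):
        raise ValueError("C and V must have the same length")
    return sum(c * v for c, v in zip(weights, values))

# Hypothetical two-factor example: one positive factor (on-time rate)
# and one adverse factor (complaints per 100k passengers).
q = airline_quality_rating([0.5, -0.3], [10.0, 4.0])  # 5.0 - 1.2 = 3.8
```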

  20. Tool-driven Design and Automated Parameterization for Real-time Generic Drivetrain Models

    Directory of Open Access Journals (Sweden)

    Schwarz Christina

    2015-01-01

Real-time dynamic drivetrain modeling approaches have great potential for development cost reduction in the automotive industry. Even though real-time drivetrain models are available, these solutions are specific to single transmission topologies. In this paper an environment for parameterization of such a solution is proposed, based on a generic method applicable to all types of gear transmission topologies. This enables tool-guided modeling by non-experts in the fields of mechanical engineering and control theory, leading to reduced development and testing effort. The approach is demonstrated for an exemplary automatic transmission using the environment for automated parameterization. Finally, the parameterization is validated against vehicle measurement data.

  1. USING PARAMETERIZATION OF OBJECTS IN AUTODESK INVENTOR IN DESIGNING STRUCTURAL CONNECTORS

    Directory of Open Access Journals (Sweden)

    Gabriel Borowski

    2015-05-01

The article presents the use of object parameterization for designing elements such as structural connectors and for modifying their characteristics. The design process was carried out using Autodesk Inventor 2015. We show the latest software tools that were used for parameterizing and modeling selected types of structural connectors. We also show examples of using the parameterization facilities in the process of constructing details and changing their geometry while preserving the shape of the element. The presented method of using Inventor enabled fast and efficient creation of new objects based on previously created sketches.

  2. GHRSST Level 3P Global Subskin Sea Surface Temperature from the Advanced Very High Resolution Radiometer (AVHRR) on the MetOp-A satellite (GDS version 1)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A global Level 3 Group for High Resolution Sea Surface Temperature (GHRSST) dataset from the Advanced Very High Resolution Radiometer (AVHRR) on the MetOp-A platform...

  3. A New Global Mascon Solution Tuned for High-Latitude Ice Studies

    Science.gov (United States)

    Luthcke, S. B.; Sabaka, T.; Rowlands, D. D.; McCarthy, J. J.; Loomis, B.

    2011-01-01

A new global mascon solution has been developed with 1-arc-degree spatial and 10-day temporal sampling. The global mascons are estimated from the reduction of nearly 8 years of GRACE K-band range-rate data. Temporal and anisotropic spatial constraints have been applied for land, ocean and ice regions. The solution construction and tuning is focused towards the Greenland and Antarctic ice sheets (GIS and AIS) as well as the Gulf of Alaska mountain glaciers (GoA). Details of the solution development will be discussed, including the mascon parameter definitions, constraints, and the tuning of the constraint damping factor. Results will be presented, exploring the spatial and temporal variability of the ice sheets and GoA regions. A detailed error analysis will be discussed, including solution dependence on iteration, damping factor, forward modeling, and multitechnique comparisons. We also investigate the fundamental resolution of the solution and the spatial correlation of ice sheet inter-annual change. Finally, we discuss future improvements, including specific constraint application for the rest of the major land ice regions and improvements in solution regularization.

  4. Bioavailability of radiocaesium in soil: parameterization using soil characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Syssoeva, A.A.; Konopleva, I.V. [Russian Institute of Agricultural Radiology and Agroecology, Obninsk (Russian Federation)

    2004-07-01

It has been shown that radiocaesium availability to plants is strongly influenced by soil properties. For the best evaluation of TFs it is necessary to use mechanistic models that predict radionuclide uptake by plants based on consideration of sorption-desorption and fixation-remobilization of the radionuclide in the soil, as well as root uptake processes controlled by the plant. The aim of the research was to characterise typical Russian soils on the basis of radiocaesium availability. A parameter of radiocaesium availability in soils (A) has been developed, which combines the radiocaesium exchangeability; CF, the concentration factor, which is the ratio of radiocaesium in the plant to that in soil solution; and K{sub Dex}, the exchangeable solid-liquid distribution coefficient of radiocaesium. The approach was tested for a wide range of Russian soils using radiocaesium uptake data from a barley pot trial and parameters of radiocaesium bioavailability. Soils were collected from the arable horizons in different soil-climatic zones of Russia and artificially contaminated with {sup 137}Cs. The classification of soils in terms of radiocaesium availability corresponds quite well to the observed linear relationship between the {sup 137}Cs TF for barley and A. K{sub Dex} is related to the soil radiocaesium interception potential (RIP), which was found to be positively and strongly related to clay and physical clay (<0.01 mm) content. The {sup 137}Cs exchangeability was found to be in close relation to the soil vermiculite content, which was estimated by the method of Cs{sup +} fixation. It is shown that radiocaesium availability to plants in the soils under study can be parameterized through mineralogical soil characteristics: the clay percentage and the soil vermiculite content. (author)

  5. A parameterization of nuclear track profiles in CR-39 detector

    Science.gov (United States)

    Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.

    2012-11-01

    In this work, the empirical parameterization describing the alpha particles’ track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles. Program summary Program title: CR39 Catalog identifier: AENA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Copyright (c) 2011, Aasim Azooz Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met • Redistributions of source code must retain the above copyright, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and

  6. A High Performance PSO-Based Global MPP Tracker for a PV Power Generation System

    Directory of Open Access Journals (Sweden)

    Kuei-Hsiang Chao

    2015-07-01

This paper aims to present an improved version of the typical particle swarm optimization (PSO) algorithm, such that the global maximum power point (MPP) on a P-V characteristic curve with multiple peaks can be located in an efficient and precise manner for a photovoltaic module array. A series of instrumental measurements are conducted on variously configured arrays built with SANYO HIP2717 PV modules, either unshaded, partially shaded, or malfunctioning, as the building blocks. Double, triple and quadruple peaks appear on the corresponding P-V characteristic curves. Subsequently, tracking performance comparisons made in practical experiments indicate the superiority of this improved MPP tracking algorithm over the typical one.
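A PSO-based global MPP tracker of the kind improved upon here searches a multimodal P-V curve for its highest peak. The following is a generic, minimal PSO sketch with conventional hyperparameters, not the paper's improved algorithm (all names and defaults are illustrative):

```python
import random

def pso_global_max(f, lo, hi, n_particles=20, n_iter=60,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer maximizing a 1-D multimodal
    function f on [lo, hi] -- e.g. power as a function of voltage on a
    partially shaded array's P-V curve with several local peaks."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                                           # personal bests
    pbest_f = [f(x) for x in xs]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]                   # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive pull to pbest + social pull to gbest
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            fx = f(xs[i])
            if fx > pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx > gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f
```

Unlike hill-climbing perturb-and-observe trackers, the swarm samples the whole voltage range, which is why PSO variants can escape the local peaks created by partial shading.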

  7. Global thermal niche models of two European grasses show high invasion risks in Antarctica.

    Science.gov (United States)

    Pertierra, Luis R; Aragón, Pedro; Shaw, Justine D; Bergstrom, Dana M; Terauds, Aleks; Olalla-Tárraga, Miguel Ángel

    2017-07-01

The two non-native grasses that have established long-term populations in Antarctica (Poa pratensis and Poa annua) were studied from a global multidimensional thermal niche perspective to address the biological invasion risk to Antarctica. These two species exhibit contrasting introduction histories and reproductive strategies and represent two referential case studies of biological invasion processes. We used a multistep process with a range of species distribution modelling techniques (ecological niche factor analysis, multidimensional envelopes, distance/entropy algorithms), together with a suite of thermoclimatic variables, to characterize the potential ranges of these species. Their native bioclimatic thermal envelopes in Eurasia were then compared with those of the naturalized populations across continents. The potential niche of P. pratensis was wider at the cold extremes; however, the life history attributes of P. annua enable it to be a more successful colonizer. We observe that particularly cold summers are a key aspect of the unique Antarctic environment. In consequence, ruderals such as P. annua can quickly expand under such harsh conditions, whereas the more stress-tolerant P. pratensis endures and persists through steady growth. Compiled data on human pressure at the Antarctic Peninsula allowed us to provide site-specific biosecurity risk indicators. We conclude that several areas across the region are vulnerable to invasions from these and other similar species. This can only be visualized in species distribution models (SDMs) when accounting for founder populations that reveal nonanalogous conditions. The results reinforce the need for strict management practices to minimize introductions. Furthermore, our novel set of temperature-based bioclimatic GIS layers for ice-free terrestrial Antarctica provides a mechanism for regional and global species distribution models to be built for other potentially invasive species. © 2017 John Wiley & Sons Ltd.

  8. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    Science.gov (United States)

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  9. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    Science.gov (United States)

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  10. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guang J. [Univ. of California, San Diego, CA (United States)

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  11. Parameterizing the competition between homogeneous and heterogeneous freezing in cirrus cloud formation – monodisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-01-01

We present a parameterization of cirrus cloud formation that computes the ice crystal number and size distribution under the presence of homogeneous and heterogeneous freezing. The parameterization is very simple to apply and is derived from the analytical solution of the cloud parcel equations, assuming that the ice nuclei population is monodisperse and chemically homogeneous. In addition to the ice distribution, an analytical expression is provided for the limiting ice nuclei number concentration that suppresses ice formation from homogeneous freezing. The parameterization is evaluated against a detailed numerical parcel model, and reproduces numerical simulations over a wide range of conditions with an average error of 6±33%. The parameterization also compares favorably against other formulations that require some form of numerical integration.

  12. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yanlian [Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wu, Xiaocui [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Ju, Weimin [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Jiangsu Center for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing China; Chen, Jing M. [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wang, Shaoqiang [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Wang, Huimin [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Yuan, Wenping [State Key Laboratory of Earth Surface Processes and Resource Ecology, Future Earth Research Institute, Beijing Normal University, Beijing China; Andrew Black, T. [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Jassal, Rachhpal [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Ibrom, Andreas [Department of Environmental Engineering, Technical University of Denmark (DTU), Kgs. Lyngby Denmark; Han, Shijie [Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang China; Yan, Junhua [South China Botanical Garden, Chinese Academy of Sciences, Guangzhou China; Margolis, Hank [Centre for Forest Studies, Faculty of Forestry, Geography and Geomatics, Laval University, Quebec City Quebec Canada; Roupsard, Olivier [CIRAD-Persyst, UMR Ecologie Fonctionnelle and Biogéochimie des Sols et Agroécosystèmes, SupAgro-CIRAD-INRA-IRD, Montpellier France; CATIE (Tropical Agricultural Centre for Research and Higher Education), Turrialba Costa Rica; Li, Yingnian [Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining China; Zhao, Fenghua [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Kiely, Gerard [Environmental Research Institute, Civil and Environmental Engineering Department, University College Cork, Cork Ireland; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa Alabama USA; Pavelka, Marian [Laboratory of Plants Ecological Physiology, Institute of Systems Biology and Ecology AS CR, Prague Czech Republic; Montagnani, Leonardo [Forest Services, Autonomous Province of Bolzano, Bolzano Italy; Faculty of Sciences and Technology, Free University of Bolzano, Bolzano Italy; Wohlfahrt, Georg [Institute for Ecology, University of Innsbruck, Innsbruck Austria; European Academy of Bolzano, Bolzano Italy; D'Odorico, Petra [Grassland Sciences Group, Institute of Agricultural Sciences, ETH Zurich Switzerland; Cook, David [Atmospheric and Climate Research Program, Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Arain, M. Altaf [McMaster Centre for Climate Change and School of Geography and Earth Sciences, McMaster University, Hamilton Ontario Canada; Bonal, Damien [INRA Nancy, UMR EEF, Champenoux France; Beringer, Jason [School of Earth and Environment, The University of Western Australia, Crawley Australia; Blanken, Peter D. [Department of Geography, University of Colorado Boulder, Boulder Colorado USA; Loubet, Benjamin [UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay, Thiverval-Grignon France; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Athens Georgia USA; Matteucci, Giorgio [Via San Camillo de Lellis, University of Tuscia, Viterbo Italy; Nagy, Zoltan [MTA-SZIE Plant Ecology Research Group, Szent Istvan University, Godollo Hungary; Olejnik, Janusz [Meteorology Department, Poznan University of Life Sciences, Poznan Poland; Department of Matter and Energy Fluxes, Global Change Research Center, Brno Czech Republic; Paw U, Kyaw Tha [Department of Land, Air and Water Resources, University of California, Davis California USA; Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, Cambridge USA; Varlagin, Andrej [A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow Russia

    2016-04-01

We present the first extended validation of satellite microwave (MW) liquid water path (LWP) for low nonprecipitating clouds, from four operational sensors, against ship-borne observations from a three-channel MW radiometer collected along ship transects over the northeast Pacific during May–August 2013. Satellite MW retrievals have an overall correlation of 0.84 with ship observations and a bias of 9.3 g/m². The bias for broken-cloud scenes increases linearly with water vapor path and remains below 17.7 g/m². In contrast, satellite MW LWP is unbiased in overcast scenes, with correlations up to 0.91, demonstrating that the retrievals are accurate and reliable under these conditions. Satellite MW retrievals produce a diurnal cycle amplitude consistent with ship-based observations (33 g/m²). Observations taken aboard extended ship cruises to evaluate not only satellite MW LWP but also LWP derived from visible/infrared sensors offer a new way to validate this important property over vast oceanic regions.

  13. FY 2011 Second Quarter: Demonstration of New Aerosol Measurement Verification Testbed for Present-Day Global Aerosol Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Koch, D

    2011-03-20

    The regional-scale Weather Research and Forecasting (WRF) model is being used by a DOE Earth System Modeling (ESM) project titled “Improving the Characterization of Clouds, Aerosols and the Cryosphere in Climate Models” to evaluate the performance of atmospheric process modules that treat aerosols and aerosol radiative forcing in the Arctic. We are using a regional-scale modeling framework for three reasons: (1) It is easier to produce a useful comparison to observations with a high resolution model; (2) We can compare the behavior of the CAM parameterization suite with some of the more complex and computationally expensive parameterizations used in WRF; (3) we can explore the behavior of this parameterization suite at high resolution. Climate models like the Community Atmosphere Model version 5 (CAM5) being used within the Community Earth System Model (CESM) will not likely be run at mesoscale spatial resolutions (10–20 km) until 5–10 years from now. The performance of the current suite of physics modules in CAM5 at such resolutions is not known, and current computing resources do not permit high-resolution global simulations to be performed routinely. We are taking advantage of two tools recently developed under PNNL Laboratory Directed Research and Development (LDRD) projects for this activity. The first is the Aerosol Modeling Testbed (Fast et al., 2011b), a new computational framework designed to streamline the process of testing and evaluating aerosol process modules over a range of spatial and temporal scales. The second is the CAM5 suite of physics parameterizations that have been ported into WRF so that their performance and scale dependency can be quantified at mesoscale spatial resolutions (Gustafson et al., 2010; with more publications in preparation).

  14. Collaborative Project: High-resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Randall, David A. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Atmospheric Science

    2015-11-01

    We proposed to implement, test, and evaluate recently developed turbulence parameterizations, using a wide variety of methods and modeling frameworks together with observations including ARM data. We have successfully tested three different turbulence parameterizations in versions of the Community Atmosphere Model: CLUBB, SHOC, and IPHOC. All three produce significant improvements in the simulated climate. CLUBB will be used in CAM6, and also in ACME. SHOC is being tested in the NCEP forecast model. In addition, we have achieved a better understanding of the strengths and limitations of the PDF-based parameterizations of turbulence and convection.

  15. Parameterization of pion production and reaction cross sections at LAMPF energies

    International Nuclear Information System (INIS)

    Burman, R.L.; Smith, E.S.

    1989-05-01

    A parameterization of pion production and reaction cross sections is developed for eventual use in modeling neutrino production by protons in a beam stop. Emphasis is placed upon smooth parameterizations for proton energies up to 800 MeV, for all pion energies and angles, and for a wide range of materials. The resulting representations of the data are well-behaved and can be used for extrapolation to regions where there are no measurements. 22 refs., 16 figs., 2 tabs

  16. A scheme for parameterizing ice cloud water content in general circulation models

    Science.gov (United States)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  17. A Stochastic Lagrangian Basis for a Probabilistic Parameterization of Moisture Condensation in Eulerian Models

    OpenAIRE

    Tsang, Yue-Kin; Vallis, Geoffrey K.

    2018-01-01

    In this paper we describe the construction of an efficient probabilistic parameterization that could be used in a coarse-resolution numerical model in which the variation of moisture is not properly resolved. An Eulerian model using a coarse-grained field on a grid cannot properly resolve regions of saturation---in which condensation occurs---that are smaller than the grid boxes. Thus, in the absence of a parameterization scheme, either the grid box must become saturated or condensation will ...

  18. A parameterization for the absorption of solar radiation by water vapor in the earth's atmosphere

    Science.gov (United States)

    Wang, W.-C.

    1976-01-01

    A parameterization for the absorption of solar radiation as a function of the amount of water vapor in the earth's atmosphere is obtained. Absorption computations are based on the Goody band model and the near-infrared absorption band data of Ludwig et al. A two-parameter Curtis-Godson approximation is used to treat the inhomogeneous atmosphere. Heating rates based on a frequently used one-parameter pressure-scaling approximation are also discussed and compared with the present parameterization.
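The one-parameter pressure-scaling idea mentioned above, against which the paper's two-parameter Curtis-Godson treatment is compared, can be illustrated with a minimal sketch (not the paper's actual scheme): each layer's absorber amount is weighted by (p/p0)^n before summation. The layer values and the exponent n below are hypothetical stand-ins.

```python
# Illustrative sketch only (not the paper's parameterization): the
# one-parameter pressure-scaling approximation replaces the water-vapor
# path through an inhomogeneous atmosphere by an effective amount in which
# each layer's absorber is weighted by (p/p0)**n.
P0 = 1013.25  # reference pressure, hPa

def effective_path(layers, n=0.9):
    """layers: iterable of (absorber amount, mean layer pressure in hPa)."""
    return sum(w * (p / P0) ** n for w, p in layers)

# Hypothetical three-layer water-vapor profile (amount, pressure):
layers = [(0.5, 1000.0), (0.3, 700.0), (0.1, 400.0)]
print(round(effective_path(layers), 4))
```

Because the weights are below one for layers above the reference pressure, the effective path is smaller than the raw column amount, which is the qualitative behavior the scaling is meant to capture.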

  19. Global agricultural land resources--a high resolution suitability evaluation and its perspectives until 2100 under climate change conditions.

    Directory of Open Access Journals (Sweden)

    Florian Zabel

Changing natural conditions determine the land's suitability for agriculture. The growing demand for food, feed, fiber and bioenergy increases pressure on land and causes trade-offs between different uses of land and ecosystem services. Accordingly, an inventory is required of the changing potentially suitable areas for agriculture under changing climate conditions. We applied a fuzzy logic approach to compute global agricultural suitability to grow the 16 most important food and energy crops according to the climatic, soil and topographic conditions at a spatial resolution of 30 arc seconds. We present our results for current climate conditions (1981-2010), considering today's irrigated areas, and separately investigate the suitability of densely forested as well as protected areas, in order to assess their potential for agriculture. The impact of climate change under SRES A1B conditions, as simulated by the global climate model ECHAM5, on agricultural suitability is shown by comparing the time period 2071-2100 with 1981-2010. Our results show that climate change will expand suitable cropland by an additional 5.6 million km², particularly in the Northern high latitudes (mainly in Canada, China and Russia). The most sensitive regions, with decreasing suitability, are found in the Global South, mainly in tropical regions, where the suitability for multiple cropping also decreases.

  20. Global Anthropogenic Phosphorus Loads to Freshwater and Associated Grey Water Footprints and Water Pollution Levels: A High-Resolution Global Study

    Science.gov (United States)

    Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2018-01-01

We estimate the global anthropogenic phosphorus (P) loads to freshwater and the associated grey water footprints (GWFs) for the period 2002-2010, at a spatial resolution of 5 × 5 arc min, and compare the GWF per river basin to runoff to assess the P-related water pollution level (WPL). The global anthropogenic P load to freshwater systems from both diffuse and point sources is estimated at 1.5 Tg/yr. More than half of this total load was in Asia, followed by Europe (19%) and Latin America and the Caribbean (13%). The domestic sector contributed 54% to the total, agriculture 38%, and industry 8%. In agriculture, cereals production had the largest contribution to the P load (31%), followed by fruits, vegetables, and oil crops, each contributing 15%. The global total GWF related to anthropogenic P loads is estimated to be 147 × 10¹² m³/yr, with China contributing 30%, India 8%, USA 7%, and Spain and Brazil 6% each. The basins with WPL > 1 (where GWF exceeds the basin's assimilation capacity) together cover about 38% of the global land area, carry 37% of the global river discharge, and provide residence to about 90% of the global population.
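The WPL bookkeeping described in this abstract reduces to a one-line ratio, sketched below (not the authors' code): the water pollution level of a basin is its grey water footprint divided by its runoff, so WPL > 1 flags basins whose P-related GWF exceeds the assimilation capacity. The basin names and volumes are hypothetical.

```python
# Minimal sketch of the per-basin WPL calculation; basin data hypothetical.
def water_pollution_level(gwf_m3_per_yr, runoff_m3_per_yr):
    """WPL = GWF / runoff; WPL > 1 means the load exceeds capacity."""
    return gwf_m3_per_yr / runoff_m3_per_yr

basins = {
    "basin_A": (4.0e9, 2.5e9),  # (GWF, runoff) in m^3/yr -- hypothetical
    "basin_B": (1.0e9, 8.0e9),
}
for name, (gwf, runoff) in basins.items():
    wpl = water_pollution_level(gwf, runoff)
    status = "exceeds capacity" if wpl > 1 else "within capacity"
    print(f"{name}: WPL = {wpl:.2f} ({status})")
```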

  1. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  2. Neoliberal Global Assemblages: The Emergence of "Public" International High-School Curriculum Programs in China

    Science.gov (United States)

    Liu, Shuning

    2018-01-01

    Since 2010, the number of urban Chinese high-school students applying to US universities has rapidly grown. Many of these students have chosen emerging international curriculum programs established by elite public high schools in China. These programs prepare wealthy Chinese students for the US college application process by exposing them to an…

  3. A simple parameterization for the rising velocity of bubbles in a liquid pool

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sung Hoon [Dept. of Environmental Engineering, Sunchon National University, Suncheon (Korea, Republic of); Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul [FNC Technology, Co., Ltd., Yongin (Korea, Republic of)

    2017-06-15

The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, which is very similar to Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the E_o–R_e plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth.
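The abstract does not reproduce the article's explicit formula, so the sketch below is an illustrative stand-in rather than the paper's parameterization: it combines two textbook limits, Stokes drag for small bubbles and the Davies-Taylor spherical-cap law for large ones, into a crude single-valued curve of rise velocity versus volume-equivalent diameter. Water properties near 20 °C are assumed.

```python
import math

# Illustrative stand-in only, NOT the article's formula: blend the Stokes
# small-bubble limit with the Davies-Taylor spherical-cap limit.
G = 9.81        # gravitational acceleration, m/s^2
RHO_L = 998.0   # liquid density, kg/m^3
MU_L = 1.0e-3   # liquid dynamic viscosity, Pa*s

def rise_velocity(d_eq):
    """Approximate bubble rise velocity (m/s) vs equivalent diameter d_eq (m)."""
    v_stokes = G * RHO_L * d_eq**2 / (18.0 * MU_L)  # small-bubble limit
    v_cap = 0.711 * math.sqrt(G * d_eq)             # spherical-cap limit
    return min(v_stokes, v_cap)                     # crude regime blend
```

For a 2 cm bubble this yields roughly 0.3 m/s, a plausible spherical-cap value; the single-formula convenience is the same property the abstract emphasizes.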

  4. A review of the theoretical basis for bulk mass flux convective parameterization

    Directory of Open Access Journals (Sweden)

    R. S. Plant

    2010-04-01

Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function, the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use a parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.

  5. A simple parameterization for the rising velocity of bubbles in a liquid pool

    International Nuclear Information System (INIS)

    Park, Sung Hoon; Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul

    2017-01-01

    The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, which is very similar to the Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the E_o–R_e plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth

  6. Parameterized Disturbance Observer Based Controller to Reduce Cyclic Loads of Wind Turbine

    Directory of Open Access Journals (Sweden)

    Raja M. Imran

    2018-05-01

This paper is concerned with bumpless transfer of a parameterized disturbance observer based controller with an individual pitch control strategy to reduce cyclic loads of a wind turbine in full-load operation. Cyclic loads are generated by wind shear and tower shadow effects. Multivariable disturbance observer based linear controllers are designed with the objective of reducing output power fluctuation, tower oscillation and drive-train torsion using optimal control theory. Linear parameterized controllers are designed by using a smooth scheduling mechanism between the controllers. The proposed parameterized controller with individual pitch was tested on the nonlinear Fatigue, Aerodynamics, Structures, and Turbulence (FAST) code model of the National Renewable Energy Laboratory (NREL) 5 MW wind turbine. The closed-loop system performance was assessed by comparing the simulation results of the proposed controller with a fixed-gain controller and a parameterized controller with collective pitch for full-load operation of the wind turbine. Simulations are performed with a step wind to see the behavior of the system with wind shear and tower shadow effects. Then, a turbulent wind is applied to see the smooth transition of the controllers. It can be concluded from the results that the proposed parameterized control shows a smooth transition from one controller to another. Moreover, 3p and 6p harmonics are well mitigated compared to the fixed-gain DOBC and the parameterized DOBC with collective pitch.

  7. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    Science.gov (United States)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage there from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Consequently, the combinations of parameterizations including the Tiedtke cumulus scheme were again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  8. Automatic identification approach for high-performance liquid chromatography-multiple reaction monitoring fatty acid global profiling.

    Science.gov (United States)

    Tie, Cai; Hu, Ting; Jia, Zhi-Xin; Zhang, Jin-Lan

    2015-08-18

Fatty acids (FAs) are a group of lipid molecules that are essential to organisms. As potential biomarkers for different diseases, FAs have attracted increasing attention from both biological researchers and the pharmaceutical industry. A sensitive and accurate method for globally profiling and identifying FAs is required for biomarker discovery. The high selectivity and sensitivity of high-performance liquid chromatography-multiple reaction monitoring (HPLC-MRM) gives it great potential to fulfill the need to identify FAs from complicated matrices. This paper developed a new approach to global FA profiling and identification for HPLC-MRM FA data mining. Mathematical models for identifying FAs were built using the isotope-induced retention time (RT) shift (IRS) and peak area ratios between parallel isotope peaks for a series of FA standards. The FA structures were predicted using another model based on the RT and molecular weight. Fully automated FA identification software was coded using the Qt platform based on these mathematical models. Different samples were used to verify the software. A high identification efficiency (greater than 75%) was observed when 96 FA species were identified in plasma. This FA identification strategy promises to accelerate FA research and applications.

  9. High-resolution minisatellite-based typing as a portable approach to global analysis of Mycobacterium tuberculosis molecular epidemiology

    Science.gov (United States)

    Mazars, Edith; Lesjean, Sarah; Banuls, Anne-Laure; Gilbert, Michèle; Vincent, Véronique; Gicquel, Brigitte; Tibayrenc, Michel; Locht, Camille; Supply, Philip

    2001-01-01

    The worldwide threat of tuberculosis to human health emphasizes the need to develop novel approaches to a global epidemiological surveillance. The current standard for Mycobacterium tuberculosis typing based on IS6110 restriction fragment length polymorphism (RFLP) suffers from the difficulty of comparing data between independent laboratories. Here, we propose a high-resolution typing method based on variable number tandem repeats (VNTRs) of genetic elements named mycobacterial interspersed repetitive units (MIRUs) in 12 human minisatellite-like regions of the M. tuberculosis genome. MIRU-VNTR profiles of 72 different M. tuberculosis isolates were established by PCR analysis of all 12 loci. From 2 to 8 MIRU-VNTR alleles were identified in the 12 regions in these strains, which corresponds to a potential of over 16 million different combinations, yielding a resolution power close to that of IS6110-RFLP. All epidemiologically related isolates tested were perfectly clustered by MIRU-VNTR typing, indicating that the stability of these MIRU-VNTRs is adequate to track outbreak episodes. The correlation between genetic relationships inferred from MIRU-VNTR and IS6110-RFLP typing was highly significant. Compared with IS6110-RFLP, high-resolution MIRU-VNTR typing has the considerable advantages of being fast, appropriate for all M. tuberculosis isolates, including strains that have a few IS6110 copies, and permitting easy and rapid comparison of results from independent laboratories. This typing method opens the way to the construction of digital global databases for molecular epidemiology studies of M. tuberculosis. PMID:11172048
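The resolution arithmetic implied by this abstract is simply combinatorial: the number of distinguishable MIRU-VNTR profiles is the product of the allele counts observed at each of the 12 loci. The sketch below illustrates this with hypothetical per-locus counts within the reported 2-8 range; the actual counts from the 72 isolates are not given in the abstract.

```python
from math import prod

# Number of distinguishable 12-locus MIRU-VNTR profiles = product of the
# allele counts at each locus. Counts below are hypothetical stand-ins
# within the 2-8 range reported in the abstract.
alleles_per_locus = [4, 4, 4, 3, 8, 5, 3, 4, 3, 5, 4, 4]  # hypothetical
profiles = prod(alleles_per_locus)
print(f"{profiles:,} possible 12-locus profiles")
```

With these illustrative counts the product already exceeds 16 million, matching the order of magnitude the abstract reports for the method's resolving power.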

  10. A Global Lake Ecological Observatory Network (GLEON) for synthesising high-frequency sensor data for validation of deterministic ecological models

    Science.gov (United States)

    David, Hamilton P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D

    2014-01-01

A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.

  11. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015

    Science.gov (United States)

    Abatzoglou, John T.; Dobrowski, Solomon Z.; Parks, Sean A.; Hegewisch, Katherine C.

    2018-01-01

We present TerraClimate, a dataset of high-spatial-resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial-resolution climatological normals from the WorldClim dataset with coarser-resolution time-varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant-extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high-spatial-resolution, time-varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. The TerraClimate datasets showed notable improvement in overall mean absolute error and increased spatial realism relative to coarser-resolution gridded datasets.

  12. Modeling the regional impact of ship emissions on NOx and ozone levels over the Eastern Atlantic and Western Europe using ship plume parameterization

    Directory of Open Access Journals (Sweden)

    P. Pisoft

    2010-07-01

In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this can result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies have discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over the remote ocean and 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that the large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed the ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modeled values, and it was found that after activating the parameterization the model accuracy increases.
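The plume treatment described above can be caricatured in a few lines (this is not the authors' parameterization): emitted ship NOx is tagged with an exhaust tracer that dilutes exponentially into background air, and an "effective" in-plume ozone production rate is blended with the background rate by the surviving plume fraction, instead of assuming instantaneous grid-box mixing. The timescale and rates below are hypothetical, in relative units.

```python
import math

# Highly simplified sketch of a ship-plume correction: blend an effective
# in-plume rate with the background rate by the undiluted tracer fraction.
TAU_DILUTION_H = 5.0  # e-folding plume dilution time, hours (hypothetical)
EFF_RATE = 0.4        # effective in-plume O3 production rate (relative)
BG_RATE = 1.0         # background (instant-mixing) O3 production rate

def ozone_production_rate(t_hours):
    """Rate felt by the emitted NOx at plume age t_hours."""
    in_plume = math.exp(-t_hours / TAU_DILUTION_H)  # surviving tracer fraction
    return in_plume * EFF_RATE + (1.0 - in_plume) * BG_RATE
```

The young plume sees the suppressed effective rate, and the rate relaxes to the background value as the plume dilutes, which is qualitatively how the parameterization reduces the ozone overproduction of instantaneous mixing.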

  13. Parameterization of a ruminant model of phosphorus digestion and metabolism.

    Science.gov (United States)

    Feng, X; Knowlton, K F; Hanigan, M D

    2015-10-01

    The objective of the current work was to parameterize the digestive elements of the model of Hill et al. (2008) using data collected from animals that were ruminally, duodenally, and ileally cannulated, thereby providing a better understanding of the digestion and metabolism of P fractions in growing and lactating cattle. The model of Hill et al. (2008) was fitted and evaluated for adequacy using the data from 6 animal studies. We hypothesized that sufficient data would be available to estimate P digestion and metabolism parameters and that these parameters would be sufficient to derive P bioavailabilities of a range of feed ingredients. Inputs to the model were dry matter intake; total feed P concentration (fPtFd); phytate (Pp), organic (Po), and inorganic (Pi) P as fractions of total P (fPpPt, fPoPt, fPiPt); microbial growth; amount of Pi and Pp infused into the omasum or ileum; milk yield; and BW. The available data were sufficient to derive all model parameters of interest. The final model predicted that given 75 g/d of total P input, the total-tract digestibility of P was 40.8%, Pp digestibility in the rumen was 92.4%, and in the total-tract was 94.7%. Blood P recycling to the rumen was a major source of Pi flow into the small intestine, and the primary route of excretion. A large proportion of Pi flowing to the small intestine was absorbed; however, additional Pi was absorbed from the large intestine (3.15%). Absorption of Pi from the small intestine was regulated, and given the large flux of salivary P recycling, the effective fractional small intestine absorption of available P derived from the diet was 41.6% at requirements. Milk synthesis used 16% of total absorbed P, and less than 1% was excreted in urine. The resulting model could be used to derive P bioavailabilities of commonly used feedstuffs in cattle production. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Effects of microphysics parameterization on simulations of summer heavy precipitation in the Yangtze-Huaihe Region, China

    Science.gov (United States)

    Kan, Yu; Chen, Bo; Shen, Tao; Liu, Chaoshun; Qiao, Fengxue

    2017-09-01

It has been a longstanding problem for current weather/climate models to accurately predict summer heavy precipitation over the Yangtze-Huaihe Region (YHR), the key flood-prone area in China, with its dense population and developed economy. Large uncertainty has been identified with model deficiencies in representing precipitation processes such as microphysics and cumulus parameterizations. This study focuses on examining the effects of microphysics parameterization on the simulation of different types of heavy precipitation over the YHR, taking into account two different cumulus schemes. All regional persistent heavy precipitation events over the YHR during 2008-2012 are classified into three types according to their weather patterns: type I, associated with a stationary front; type II, directly associated with a typhoon or with its spiral rain band; and type III, associated with strong convection along the edge of the Subtropical High. Sixteen groups of experiments are conducted for three selected cases of different types and a local short-time rainstorm in Shanghai, using the WRF model with eight microphysics and two cumulus schemes. Results show that microphysics parameterization has large but varied impacts on the location and intensity of regional heavy precipitation centers. The Ferrier (microphysics)–BMJ (cumulus) and Thompson (microphysics)–KF (cumulus) combinations most realistically simulate the rain bands, including center location and intensity, for types I and II, respectively. For type III, the Lin microphysics scheme shows advantages for regional persistent cases over the YHR, while the WSM5 microphysics scheme is better for the local short-term case, both with the BMJ cumulus scheme.

  15. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    Science.gov (United States)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.

  16. Field Investigation of the Turbulent Flux Parameterization and Scalar Turbulence Structure over a Melting Valley Glacier

    Science.gov (United States)

    Guo, X.; Yang, K.; Yang, W.; Li, S.; Long, Z.

    2011-12-01

    We present a field investigation over a melting valley glacier on the Tibetan Plateau. A particular strength is that three melt phases are distinguished during the glacier's ablation season, which enables us to compare results over snow, bare-ice, and hummocky surfaces [with aerodynamic roughness lengths (z0M) varying on the order of 10⁻⁴-10⁻² m]. We address two issues of common concern in the study of glacio-meteorology and micrometeorology. First, we study turbulent energy flux estimation through a critical evaluation of three parameterizations of the scalar roughness lengths (z0T for temperature and z0q for humidity), viz. key factors for the accurate estimation of sensible heat and latent heat fluxes using the bulk aerodynamic method. The first approach (Andreas 1987, Boundary-Layer Meteorol 38:159-184) is based on surface-renewal models and has been very widely applied in glaciated areas; the second (Yang et al. 2002, Q J Roy Meteorol Soc 128:2073-2087) has never received application over an ice/snow surface, despite its validity in arid regions; the third approach (Smeets and van den Broeke 2008, Boundary-Layer Meteorol 128:339-355) is proposed for use specifically over rough ice, defined as z0M > 10⁻³ m or so. This empirical z0M threshold value is deemed of general relevance to glaciated areas (e.g. ice sheets/caps and valley/outlet glaciers), above which the first approach gives underestimated z0T and z0q. The first and the third approaches tend to underestimate and overestimate turbulent heat/moisture exchange, respectively (relative errors often > 30%). Overall, the second approach produces fairly low errors in energy flux estimates; it thus emerges as a practically useful choice to parameterize z0T and z0q over an ice/snow surface. Our evaluation of z0T and z0q parameterizations hopefully serves as a useful source of reference for physically based modeling of land-ice surface energy budget and mass balance. Second, we explore how scalar turbulence
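
    The bulk aerodynamic method evaluated above combines a wind speed, a surface-air temperature difference, and the two roughness lengths into a transfer coefficient. A minimal sketch of the neutral-stability form (all values are illustrative; the study's actual scheme includes stability corrections):

```python
import numpy as np

K = 0.4  # von Karman constant

def bulk_sensible_heat_flux(u, ts, ta, z, z0m, z0t, rho=1.0, cp=1004.0):
    """Neutral-stability bulk estimate of the sensible heat flux (W m^-2).

    u: wind speed at height z (m/s); ts, ta: surface and air temperature (K);
    z0m, z0t: momentum and scalar roughness lengths (m).
    """
    ch = K**2 / (np.log(z / z0m) * np.log(z / z0t))  # bulk transfer coefficient
    return rho * cp * ch * u * (ts - ta)

# On a melting surface ts is pinned at 273.15 K; shrinking z0t (as
# surface-renewal estimates tend to do) shrinks the flux magnitude:
flux_rough = bulk_sensible_heat_flux(5.0, 273.15, 278.15, 2.0, 1e-3, 1e-3)
flux_smooth = bulk_sensible_heat_flux(5.0, 273.15, 278.15, 2.0, 1e-3, 1e-5)
```

    Dropping z0t by two orders of magnitude changes the flux by roughly a third in this sketch, which is why the choice of z0T/z0q parameterization matters so much for melt modeling.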

  17. GOW2.0: A global wave hindcast of high resolution

    Science.gov (United States)

    Menendez, Melisa; Perez, Jorge; Losada, Inigo

    2016-04-01

    The information provided by reconstructions of historical wind-generated waves is of paramount importance for a variety of coastal and offshore purposes (e.g. risk assessment, design of coastal structures and coastal management). Here, a new global wave hindcast (GOW2.0) is presented. This hindcast is an update of GOW1.0 (Reguero et al. 2012) motivated by the emergence of new settings and atmospheric information from reanalysis during recent years. GOW2.0 is based on version 4.18 of the WaveWatch III numerical model (Tolman, 2014). Main features of the model set-up are the analysis and selection of recent source terms concerning wave generation and dissipation (Ardhuin et al. 2010, Zieger et al., 2015) and the implementation of obstruction grids to improve the modeling of wave shadowing effects in line with the approach described in Chawla and Tolman (2007). This has been complemented by a multigrid system and the use of hourly wind and ice coverage from the Climate Forecast System Reanalysis, CFSR (approximately 30 km spatial resolution). The multigrid scheme consists of a series of "two-way" nested domains covering the whole ocean basins at 0.5° spatial resolution and continental shelves worldwide at 0.25° spatial resolution. In addition, a technique to reconstruct 3D wave spectra for any grid point is implemented from spectral partitioning information. A validation analysis of GOW2.0 outcomes has been undertaken considering wave spectral information from surface buoy stations and multi-mission satellite data for a spatial validation. GOW2.0 shows a substantial improvement over its predecessor for all the analyzed variables. In summary, GOW2.0 reconstructs historical wave spectral data and climate information from 1979 to present at hourly resolution, providing higher spatial resolution over regions where locally generated wind seas, bimodal-spectral behaviour and relevant swell transformations across the continental shelf are important.

  18. Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model

    International Nuclear Information System (INIS)

    Reed, K. A.

    2015-01-01

    Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid flow component of any general circulation model (GCM), is often overlooked in the analysis of a model's ability to simulate TCs compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages despite the latter having a slightly more favorable TC environment. Furthermore, these results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key Points: (1) the impact of the GCM dynamical core is often overlooked in TC assessments; (2) the CAM5 dynamical core has a significant impact on TC frequency and intensity; (3) a larger effort is needed to better understand this uncertainty.

  19. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to integrating time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  20. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  1. Spectroscopic measurements of soybeans used to parameterize physiological traits in the AgroIBIS ecosystem model

    Science.gov (United States)

    Singh, A.; Serbin, S.; Kucharik, C. J.; Townsend, P. A.

    2014-12-01

    Ecosystem models such as AgroIBIS require detailed parameterizations of numerous vegetation traits related to leaf structure, biochemistry and photosynthetic capacity to properly assess plant carbon assimilation and yield response to environmental variability. In general, these traits are estimated from a limited number of field measurements or sourced from the literature, but rarely is the full observed range of variability in these traits utilized in modeling activities. In addition, pathogens and pests, such as the exotic soybean aphid (Aphis glycines), which affects photosynthetic pathways in soybean plants by feeding on phloem and sap, can potentially impact plant productivity and yields. Capturing plant responses to pest pressure in conjunction with environmental variability is of considerable interest to managers and the scientific community alike. In this research, we employed full-range (400-2500 nm) field and laboratory spectroscopy to rapidly characterize leaf biochemical and physiological traits, namely foliar nitrogen, specific leaf area (SLA) and the maximum rate of RuBP carboxylation by the enzyme RuBisCO (Vcmax), in soybean plants that experienced a broad range of environmental conditions and soybean aphid pressures. We utilized near-surface spectroscopic remote sensing measurements as a means to capture the spatial and temporal patterns of aphid impacts across broad aphid pressure levels. In addition, we used the spectroscopic data to generate a much larger dataset of key model parameters required by AgroIBIS than would be possible through traditional measurements of biochemistry and leaf-level gas exchange. The use of spectroscopic retrievals of soybean traits allowed us to better characterize the variability of plant responses associated with aphid pressure and to more accurately model the likely impacts of soybean aphid on soybeans. Our next steps include the coupling of the information derived from our spectral measurements with the Agro

  2. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. 
To address these issues, we propose an effective and efficient modeling framework, whose key objectives are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
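
    The linearization in objective (b) turns each time step into a small linear program over the water-energy network. A toy sketch with scipy (the one-reservoir topology and every number below are hypothetical, standing in for whatever network the framework actually builds):

```python
from scipy.optimize import linprog

# Hypothetical one-step network: x0 = water released through the turbine,
# x1 = water spilled. Maximize turbined water (energy proxy), written as
# minimization of its negative.
inflow, storage = 40.0, 60.0      # hm^3 available this step (made-up)
turbine_cap = 70.0                # hm^3 that can pass the turbine
c = [-1.0, 0.0]                   # reward turbined water; spill is neutral
A_ub = [[1.0, 0.0]]               # turbine capacity constraint
b_ub = [turbine_cap]
A_eq = [[1.0, 1.0]]               # mass balance: release + spill = available water
b_eq = [inflow + storage]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)], method="highs")
release, spill = res.x            # -> 70 turbined, 30 spilled
```

    Because each sub-problem is a pure network flow, specialized network-simplex solvers can be substituted for the generic LP call above at much lower cost, which is the point the abstract makes.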

  3. Improvement of Systematic Bias of mean state and the intraseasonal variability of CFSv2 through superparameterization and revised cloud-convection-radiation parameterization

    Science.gov (United States)

    Mukhopadhyay, P.; Phani Murali Krishna, R.; Goswami, Bidyut B.; Abhik, S.; Ganai, Malay; Mahakur, M.; Khairoutdinov, Marat; Dudhia, Jimmy

    2016-05-01

    Despite significant improvements in numerical model physics, resolution and numerics, general circulation models (GCMs) still find it difficult to simulate realistic seasonal and intraseasonal variability over the global tropics, and particularly over the Indian summer monsoon (ISM) region. The bias is mainly attributed to the improper representation of physical processes. Among these, cloud and convective processes appear to play a major role in modulating model bias. In recent times, the NCEP CFSv2 model has been adopted under the Monsoon Mission for dynamical monsoon forecasting over the Indian region. Analyses of climate free runs of CFSv2 at two resolutions (T126 and T382) show largely similar biases in the simulated seasonal rainfall, in capturing intraseasonal variability at different scales over the global tropics, and in capturing tropical waves. The biases of CFSv2 thus indicate a deficiency in the model's parameterization of cloud and convective processes. Against this background, and to improve the model's fidelity, two approaches have been adopted. First, in the superparameterization approach, 32 cloud-resolving models, each with a horizontal resolution of 4 km, are embedded in each GCM (CFSv2) grid column and the conventional sub-grid-scale convective parameterization is deactivated. This is done to demonstrate the role of resolving cloud processes which otherwise remain unresolved. The superparameterized CFSv2 (SP-CFS) is developed on a coarser version (T62). The model is integrated for six and a half years in climate free-run mode, initialized from 16 May 2008. The analyses reveal that SP-CFS simulates a significantly improved mean state compared to the default CFS. The systematic biases of deficient rainfall over the Indian landmass and of a colder troposphere are substantially reduced. Most importantly, the convectively coupled equatorial waves and the eastward-propagating MJO are simulated with greater fidelity in SP-CFS.
The reason of

  4. Mode decomposition methods for flows in high-contrast porous media. A global approach

    KAUST Repository

    Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.

    2014-01-01

    We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly-heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular, the slowly-decaying eigenmodes corresponding to largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of the applicability of these techniques for flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.
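
    The exact-DMD procedure behind such reduced-order models is a short sequence of linear-algebra steps. A minimal numpy sketch on synthetic snapshots generated by a known linear operator (the data and truncation rank are illustrative, not the paper's porous-media fields):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot sequence x_{k+1} = A x_k from a known linear operator.
A = np.diag([0.9, 0.5])
X = np.empty((2, 21))
X[:, 0] = rng.standard_normal(2)
for k in range(20):
    X[:, k + 1] = A @ X[:, k]

X1, X2 = X[:, :-1], X[:, 1:]                  # time-shifted snapshot matrices

# Exact DMD: project the one-step map onto the leading POD modes of X1.
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                          # truncation rank (illustrative)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].T
Atilde = Ur.T @ X2 @ Vr @ np.linalg.inv(Sr)    # reduced operator
eigvals, W = np.linalg.eig(Atilde)             # DMD eigenvalues
modes = X2 @ Vr @ np.linalg.inv(Sr) @ W        # DMD modes

# The recovered eigenvalues match the generating dynamics (0.5 and 0.9),
# i.e. DMD extracts the slowly-decaying modes the abstract refers to.
```

    A POD-based Galerkin model would instead keep only Ur and project the governing equations onto it; DMD additionally recovers the temporal growth/decay rates, which is the source of its better predictive capability noted above.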

  5. Molecular corridors and parameterizations of volatility in the chemical evolution of organic aerosols

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-03-01

    Full Text Available The formation and aging of organic aerosols (OA proceed through multiple steps of chemical reaction and mass transport in the gas and particle phases, which is challenging for the interpretation of field measurements and laboratory experiments as well as accurate representation of OA evolution in atmospheric aerosol models. Based on data from over 30 000 compounds, we show that organic compounds with a wide variety of functional groups fall into molecular corridors, characterized by a tight inverse correlation between molar mass and volatility. We developed parameterizations to predict the saturation mass concentration of organic compounds containing oxygen, nitrogen, and sulfur from the elemental composition that can be measured by soft-ionization high-resolution mass spectrometry. Field measurement data from new particle formation events, biomass burning, cloud/fog processing, and indoor environments were mapped into molecular corridors to characterize the chemical nature of the observed OA components. We found that less-oxidized indoor OA are constrained to a corridor of low molar mass and high volatility, whereas highly oxygenated compounds in atmospheric water extend to high molar mass and low volatility. Among the nitrogen- and sulfur-containing compounds identified in atmospheric aerosols, amines tend to exhibit low molar mass and high volatility, whereas organonitrates and organosulfates follow high O : C corridors extending to high molar mass and low volatility. We suggest that the consideration of molar mass and molecular corridors can help to constrain volatility and particle-phase state in the modeling of OA particularly for nitrogen- and sulfur-containing compounds.
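
    Parameterizations of this kind typically express log10 of the saturation mass concentration as a function of the numbers of carbon, oxygen, nitrogen and sulfur atoms. The sketch below uses that general functional form with placeholder coefficients (n_c0, b_c, b_o, b_co, b_n, b_s are illustrative values, not the fitted constants of this paper):

```python
def log10_c0(n_c, n_o, n_n=0, n_s=0,
             n_c0=25.0, b_c=0.5, b_o=0.2, b_co=0.9, b_n=2.5, b_s=0.5):
    """log10 of saturation mass concentration C0 (ug m^-3) from elemental
    composition. All coefficients here are illustrative placeholders."""
    # Carbon-oxygen interaction term, zero for the degenerate empty molecule.
    carbonyl = 2.0 * n_c * n_o / (n_c + n_o) if (n_c + n_o) else 0.0
    return ((n_c0 - n_c) * b_c - n_o * b_o - carbonyl * b_co
            - n_n * b_n - n_s * b_s)

# Adding carbons, oxygens, or heteroatoms lowers volatility (smaller log10 C0),
# tracing out the molar mass vs. volatility corridor described above:
assert log10_c0(10, 4) > log10_c0(15, 4) > log10_c0(15, 8)
assert log10_c0(10, 4, n_n=1) < log10_c0(10, 4)
```

    With a function of this shape, elemental ratios from soft-ionization high-resolution mass spectrometry map directly onto a volatility estimate, which is how the field datasets were placed into the corridors.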

  6. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available Aiming at parts of remote sensing images with dark brightness and low contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and a parameterized logarithmic image processing model is proposed in this paper to improve the visual effects and interpretability of remote sensing images. Firstly, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which can improve the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. Extensive experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, a method based on the stationary wavelet transform, and a method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effects and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details with better visual effects.
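
    The PLIP framework used for the low-frequency enhancement replaces ordinary arithmetic with operations on gray tones g = μ − f that keep results inside the dynamic range. A sketch of the classical (unparameterized) operations, with μ and the sample pixels chosen arbitrarily; the paper's parameterized variant introduces further tunable constants:

```python
import numpy as np

MU = 256.0  # gray-tone range parameter (illustrative)

def gray_tone(f):
    """Map image intensities f in [0, MU) to gray tones g = MU - f."""
    return MU - np.asarray(f, dtype=float)

def plip_add(g1, g2):
    """PLIP addition: result stays inside [0, MU), unlike ordinary addition."""
    return g1 + g2 - g1 * g2 / MU

def plip_scalar_mul(lam, g):
    """PLIP scalar multiplication of a gray tone by lam >= 0."""
    return MU - MU * (1.0 - g / MU) ** lam

f = np.array([10.0, 100.0, 200.0])   # dark-ish pixels (made-up)
g = gray_tone(f)
g_boost = plip_scalar_mul(0.5, g)    # lam < 1 shrinks the gray tone...
f_boost = MU - g_boost               # ...which brightens the image
```

    Bounded operations like these are why logarithmic models handle dark, low-contrast regions more gracefully than plain linear stretching, which clips at the ends of the range.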

  7. Geoscience Meets Social Science: A Flexible Data Driven Approach for Developing High Resolution Population Datasets at Global Scale

    Science.gov (United States)

    Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.

    2017-12-01

    Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area, requiring that for any given area, a data-driven model could support either a simple top-down approach, a more detailed bottom-up approach, or a hybrid approach.

  8. The NNSA global threat reduction initiative's efforts to minimize the use of highly enriched uranium for medical isotope production

    International Nuclear Information System (INIS)

    Staples, Parrish

    2010-01-01

    The mission of the National Nuclear Security Administration's (NNSA) Office of Global Threat Reduction (GTRI) is to reduce and protect vulnerable nuclear and radiological materials located at civilian sites worldwide. GTRI is a key organization for supporting domestic and global efforts to minimize and, to the extent possible, eliminate the use of highly enriched uranium (HEU) in civilian nuclear applications. GTRI implements the following activities in order to achieve its threat reduction and HEU minimization objectives: Converting domestic and international civilian research reactors and isotope production facilities from the use of HEU to low enriched uranium (LEU); Demonstrating the viability of medical isotope production technologies that do not use HEU; Removing or disposing excess nuclear and radiological materials from civilian sites worldwide; and Protecting high-priority nuclear and radiological materials worldwide from theft and sabotage. This paper provides a brief overview on the recent developments and priorities for GTRI program activities in 2010, with a particular focus on GTRI's efforts to demonstrate the viability of non-HEU based medical isotope production technologies. (author)

  9. Influences of in-cloud aerosol scavenging parameterizations on aerosol concentrations and wet deposition in ECHAM5-HAM

    Directory of Open Access Journals (Sweden)

    B. Croft

    2010-02-01

    Full Text Available A diagnostic cloud nucleation scavenging scheme, which determines stratiform cloud scavenging ratios for both aerosol mass and number distributions, based on cloud droplet and ice crystal number concentrations, is introduced into the ECHAM5-HAM global climate model. This scheme is coupled with a size-dependent in-cloud impaction scavenging parameterization for both cloud droplet-aerosol and ice crystal-aerosol collisions. The aerosol mass scavenged in stratiform clouds is found to be primarily (>90%) scavenged by cloud nucleation processes for all aerosol species, except for dust (50%). The aerosol number scavenged is primarily (>90%) attributed to impaction. 99% of this impaction scavenging occurs in clouds with temperatures less than 273 K. Sensitivity studies are presented, which compare aerosol concentrations, burdens, and deposition for a variety of in-cloud scavenging approaches: prescribed fractions, a more computationally expensive prognostic aerosol cloud processing treatment, and the new diagnostic scheme, also with modified assumptions about in-cloud impaction and nucleation scavenging. Our results show that while uncertainties in the representation of in-cloud scavenging processes can lead to differences in the range of 20-30% for the predicted annual, global mean aerosol mass burdens, and near to 50% for the accumulation mode aerosol number burden, the differences in predicted aerosol mass concentrations can be up to one order of magnitude, particularly for regions of the middle troposphere with temperatures below 273 K where mixed and ice phase clouds exist. Different parameterizations for impaction scavenging changed the predicted global, annual mean number removal attributed to ice clouds by seven-fold, and the global, annual dust mass removal attributed to impaction by two orders of magnitude. Closer agreement with observations of black carbon profiles from aircraft (increases near to one order of magnitude for mixed phase clouds

  10. Global Binary Optimization on Graphs for Classification of High Dimensional Data

    Science.gov (United States)

    2014-09-01


  11. Global distribution of bedrock exposures on Mars using THEMIS high-resolution thermal inertia

    Science.gov (United States)

    Edwards, C.S.; Bandfield, J.L.; Christensen, P.R.; Fergason, R.L.

    2009-01-01

    We investigate high thermal inertia surfaces using the Mars Odyssey Thermal Emission Imaging System (THEMIS) nighttime temperature images (100 m/pixel spatial sampling). For this study, we interpret any pixel in a THEMIS image with a thermal inertia over 1200 J m⁻² K⁻¹ s⁻¹/² as "bedrock", which represents either in situ rock exposures or rock-dominated surfaces. Three distinct morphologies, ranked from most to least common, are associated with these high thermal inertia surfaces: (1) valley and crater walls associated with mass wasting and high surface slope angles; (2) floors of craters with diameters >25 km and containing melt or volcanics associated with larger, high-energy impacts; and (3) intercrater surfaces with compositions significantly more mafic than the surrounding regolith. In general, bedrock instances on Mars occur as small exposures (less than several square kilometers) situated in lower-albedo, higher-inertia (>350 J m⁻² K⁻¹ s⁻¹/²), and relatively dust-free (dust cover index <0.95) regions; however, there are instances that do not follow these generalizations. Most instances are concentrated in the southern highlands, with very few located at high latitudes (poleward of 45°N and 58°S), suggesting enhanced mechanical breakdown probably associated with permafrost. Overall, Mars has very little exposed bedrock, with only 960 instances identified from 75°S to 75°N and likely <3500 km² exposed, representing ≪1% of the total surface area. These data indicate that Mars has likely undergone large-scale surface processing and reworking, both chemically and mechanically, either destroying or masking a majority of the bedrock exposures on the planet. Copyright 2009 by the American Geophysical Union.
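
    The per-pixel classification rule stated above is a simple threshold on thermal inertia; a sketch with a synthetic array (the pixel values are made up, only the 1200 J m⁻² K⁻¹ s⁻¹/² cutoff comes from the study):

```python
import numpy as np

BEDROCK_TI = 1200.0  # J m^-2 K^-1 s^-1/2, threshold from the study

# Synthetic thermal-inertia image (values made up for illustration).
ti = np.array([[300.0,  800.0, 1250.0],
               [1400.0, 150.0,  900.0]])

bedrock_mask = ti > BEDROCK_TI           # per-pixel classification
n_exposures = int(bedrock_mask.sum())    # pixels exceeding the threshold
frac = bedrock_mask.mean()               # fraction of scene mapped as bedrock
```

    Applied globally at 100 m/pixel, aggregating such masks and scaling by pixel area is what yields the exposure counts and total-area estimate reported in the abstract.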

  12. Next generation of global land cover characterization, mapping, and monitoring

    Science.gov (United States)

    Giri, Chandra; Pengra, Bruce; Long, J.; Loveland, Thomas R.

    2013-01-01

    Land cover change is increasingly affecting the biophysics, biogeochemistry, and biogeography of the Earth's surface and the atmosphere, with far-reaching consequences to human well-being. However, our scientific understanding of the distribution and dynamics of land cover and land cover change (LCLCC) is limited. Previous global land cover assessments performed using coarse spatial resolution (300 m–1 km) satellite data did not provide enough thematic detail or change information for global change studies and for resource management. High resolution (∼30 m) land cover characterization and monitoring is needed that permits detection of land change at the scale of most human activity and offers the increased flexibility of environmental model parameterization needed for global change studies. However, there are a number of challenges to overcome before producing such data sets including unavailability of consistent global coverage of satellite data, sheer volume of data, unavailability of timely and accurate training and validation data, difficulties in preparing image mosaics, and high performance computing requirements. Integration of remote sensing and information technology is needed for process automation and high-performance computing needs. Recent developments in these areas have created an opportunity for operational high resolution land cover mapping, and monitoring of the world. Here, we report and discuss these advancements and opportunities in producing the next generations of global land cover characterization, mapping, and monitoring at 30-m spatial resolution primarily in the context of United States, Group on Earth Observations Global 30 m land cover initiative (UGLC).

  13. Global Business Networks and Cooperation within Supply Chain as a Strategy of High-Tech Companies’ Growth

    Directory of Open Access Journals (Sweden)

    Milena Ratajczak-Mrozek

    2012-01-01

    Full Text Available The specificity of the operating profile of high-tech companies, including the necessity of operating at the international scale, may account for the fact that these companies may find in network relationships, business networks and cooperation an essential determinant of growth and competitiveness. Foreign entities should be especially interesting business partners for high-tech companies, as they are often seen as representing more advanced knowledge, resources and experience. The aim of the article is to point to global business networks (i.e. those including both local and foreign entities), and especially to cooperation within the supply chain, as an important basis for a growth strategy of a high-tech company. The article adopts the assumptions of the network approach as a concept of company cooperation. An analysis of the author's own as well as secondary empirical research, with a focus on high-tech companies located in Poland, is presented. In particular, data from the author's own survey of 62 high-tech companies in Poland, conducted in the first half of 2011, are analysed. It shows that high-tech companies which place great importance on cooperation within the supply chain demonstrate higher growth and levels of competitiveness than companies which do not ascribe such importance (bearing in mind that the supply chain forms an important part of a business network.

  14. Limited tolerance by insects to high temperatures across tropical elevational gradients and the implications of global warming for extinction.

    Science.gov (United States)

    García-Robledo, Carlos; Kuprewicz, Erin K; Staines, Charles L; Erwin, Terry L; Kress, W John

    2016-01-19

    The critical thermal maximum (CTmax), the temperature at which motor control is lost in animals, has the potential to determine if species will tolerate global warming. For insects, tolerance to high temperatures decreases with latitude, suggesting that similar patterns may exist along elevational gradients as well. This study explored how CTmax varies among species and populations of a group of diverse tropical insect herbivores, the rolled-leaf beetles, across both broad and narrow elevational gradients. Data from 6,948 field observations and 8,700 museum specimens were used to map the elevational distributions of rolled-leaf beetles on two mountains in Costa Rica. CTmax was determined for 1,252 individual beetles representing all populations across the gradients. Initial morphological identifications suggested a total of 26 species with populations at different elevations displaying contrasting upper thermal limits. However, compared with morphological identifications, DNA barcodes (cytochrome oxidase I) revealed significant cryptic species diversity. DNA barcodes identified 42 species and haplotypes across 11 species complexes. These 42 species displayed much narrower elevational distributions and values of CTmax than the 26 morphologically defined species. In general, species found at middle elevations and on mountaintops are less tolerant to high temperatures than species restricted to lowland habitats. Species with broad elevational distributions display high CTmax throughout their ranges. We found no significant phylogenetic signal in CTmax, geography, or elevational range. The narrow variance in CTmax values for most rolled-leaf beetles, especially high-elevation species, suggests that the risk of extinction of insects may be substantial under some projected rates of global warming.
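A useful way to see why a narrow CTmax matters is the "thermal safety margin": CTmax minus the warmest temperature a species currently experiences. The sketch below is purely illustrative — the species names, CTmax values, and habitat temperatures are invented, not the study's data:

```python
# Illustrative sketch (NOT the study's data): comparing thermal safety
# margins for hypothetical lowland, mid-elevation, and highland species.
# All names and temperature values below are invented for illustration.

def thermal_safety_margin(ctmax_c, max_habitat_temp_c):
    """Degrees of warming a species can absorb before exceeding its CTmax."""
    return ctmax_c - max_habitat_temp_c

# (species, CTmax in deg C, warmest habitat temperature in deg C)
species = [
    ("lowland_sp",  42.0, 34.0),   # high CTmax, hot habitat
    ("midelev_sp",  38.0, 31.0),
    ("highland_sp", 35.0, 29.0),   # low CTmax: smallest margin
]

for name, ctmax, t_max in species:
    margin = thermal_safety_margin(ctmax, t_max)
    print(f"{name}: safety margin = {margin:.1f} degC")
```

With values like these, a projected warming of a few degrees erodes the highland species' margin first, which is the qualitative risk pattern the abstract describes for high-elevation species with low, narrowly varying CTmax.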

  15. Global Managers

    DEFF Research Database (Denmark)

    Barakat, Livia L.; Lorenz, Melanie P.; Ramsey, Jase R.

    2016-01-01

    Purpose: – The purpose of this paper is to examine the effect of cultural intelligence (CQ) on the job performance of global managers. Design/methodology/approach: – In total, 332 global managers were surveyed from multinational companies operating in Brazil. The mediating effect of job satisfaction was tested on the CQ-job performance relationship. Findings: – The findings suggest that job satisfaction transmits the effect of CQ to job performance, such that global managers high in CQ exhibit more job satisfaction in an international setting, and therefore perform better at their jobs. Practical implications: – Results imply that global managers should increase their CQ in order to improve their job satisfaction and ultimately perform better in an international context. Originality/value: – The authors make three primary contributions to the international business literature. First...
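The mediation claim ("job satisfaction transmits the effect of CQ to job performance") can be sketched with a simple two-regression mediation analysis. Everything below is synthetic: the data, coefficients, and variable names are illustrative assumptions, not the authors' dataset or method:

```python
# Sketch of a simple mediation analysis (CQ -> job satisfaction -> job
# performance). All data are synthetic; only the structure of the model
# (indirect effect = a * b) reflects standard mediation logic.
import numpy as np

rng = np.random.default_rng(0)
n = 332                                   # sample size matching the survey
cq = rng.normal(size=n)                   # cultural intelligence (standardized)
satisfaction = 0.5 * cq + rng.normal(scale=0.8, size=n)                       # path a
performance = 0.6 * satisfaction + 0.1 * cq + rng.normal(scale=0.8, size=n)  # paths b, c'

def slopes(y, predictors):
    """OLS coefficients for y ~ predictors (with intercept), via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                       # drop the intercept

a = slopes(satisfaction, [cq])[0]                     # CQ -> satisfaction
b, c_prime = slopes(performance, [satisfaction, cq])  # satisfaction -> performance, controlling CQ
indirect = a * b                                      # the mediated (indirect) effect
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, indirect={indirect:.2f}")
```

A positive indirect effect alongside a small direct effect c' is the signature of the mediation pattern the abstract reports.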

  16. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    CERN Document Server

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll-out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40 year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction of figures from fulltext, and search in figure captions. In order to achieve high-quality metadata, both automatic processing and manual curation are needed. The different tools available in the system use modern web technologies to provide curators with maximum efficiency while dealing with the MARC standard format. The project is under heavy development in order to provide new features including semantic analysis, crowdsourcing of metadata curation, user tagging, recommender systems, integration of OAIS standards a...

  17. GHRSST Level 2P Global Bulk Sea Surface Temperature from the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA-17 satellite produced by NAVO (GDS version 1)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A global Group for High Resolution Sea Surface Temperature (GHRSST) Level 2P dataset based on multi-channel sea surface temperature (SST) retrievals generated in...

  18. GHRSST Level 2P Global Bulk Sea Surface Temperature from the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA-17 satellite (GDS version 1)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A global Level 2P Group for High Resolution Sea Surface Temperature (GHRSST) dataset based on multi-channel sea surface temperature (SST) retrievals from the...

  19. GHRSST Level 2P Global Bulk Sea Surface Temperature from the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA-16 satellite (GDS version 1)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A global Level 2P Group for High Resolution Sea Surface Temperature (GHRSST) dataset based on multi-channel sea surface temperature (SST) retrievals from the...

  20. Further results on global state feedback stabilization of nonlinear high-order feedforward systems.

    Science.gov (United States)

    Xie, Xue-Jun; Zhang, Xing-Hui

    2014-03-01

    In this paper, by introducing a combined method of the sign function, homogeneous domination, and adding a power integrator, and by overcoming several troublesome obstacles in the design and analysis, the problem of state feedback control is solved for a class of nonlinear high-order feedforward systems whose nonlinearity order is relaxed to an interval rather than a fixed point. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
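The class of systems in question can be sketched as follows; the notation (state powers $p_i$, feedforward nonlinearities $f_i$) is assumed here for illustration and is not copied from the paper:

```latex
% Upper-triangular (feedforward) high-order nonlinear system, notation assumed:
% each nonlinearity f_i depends only on "downstream" states and the input,
% and the powers p_i are known only to lie in an interval rather than
% being fixed -- the relaxation the abstract refers to.
\begin{aligned}
  \dot{x}_i &= x_{i+1}^{\,p_i} + f_i\bigl(t,\, x_{i+2}, \dots, x_n,\, u\bigr),
      \qquad i = 1, \dots, n-1, \\
  \dot{x}_n &= u^{\,p_n} + f_n(t,\, u).
\end{aligned}
```

Under this reading, homogeneous domination designs a stabilizer for the nominal power-integrator chain $\dot{x}_i = x_{i+1}^{p_i}$ and then dominates the feedforward terms $f_i$ with a scaling gain, while the sign function lets the controller handle powers that are ratios of odd integers.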

  1. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    OpenAIRE

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll-out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40 year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction ...

  2. Global Environmental Leadership and Sustainability: High School Students Teaching Environmental Science to Policymakers

    Science.gov (United States)

    Wilson, S.; Tamsitt, V. M.

    2016-02-01

    A two week high school course for high-achieving 10th-12th graders was developed through the combined efforts of Scripps Institution of Oceanography (SIO) Graduate Students and UC San Diego Academic Connections. For the high school students involved, one week was spent at SIO learning basic climate science and researching climate-related topics, and one week was spent in Washington D.C. lobbying Congress for an environmental issue of their choosing. The specific learning goals of the course were for students to (1) collect, analyze and interpret scientific data, (2) synthesize scientific research for policy recommendations, (3) craft and deliver a compelling policy message, and (4) understand and experience change. In this first year, 10 students conducted research on two scientific topics: sea level rise, using pier temperature data, and California rainfall statistics, using weather stations. Simultaneous lessons on policy messaging helped students learn how to focus scientific information for non-scientists. In combining the importance of statistics from their Science lessons with effective communication from their Policy lessons, the students developed issue papers which highlighted an environmental problem, the solution, and the reason their solution is most effective. The course culminated in two days of meetings on Capitol Hill, where they presented their solutions to their Congressional and Senate Members, conversed with policymakers, and received constructive feedback. Throughout the process, the students effectively defined arguments for an environmental topic in a program developed by SIO Graduate Students.

  3. Echocardiographic evaluation of global left ventricular function during high thoracic epidural anesthesia.

    Science.gov (United States)

    Niimi, Y; Ichinose, F; Saegusa, H; Nakata, Y; Morita, S

    1997-03-01

    Objective: To assess the effects of high thoracic epidural anesthesia on left ventricular (LV) diastolic filling and systolic function in patients without heart disease. Design: Prospective study. Setting: University hospital. Patients: 24 ASA physical status I and II patients scheduled for elective noncardiac surgery. Interventions: Patients received high thoracic (HTE; n = 12) or low thoracic (LTE; n = 12) epidural anesthesia. Measurements: Left ventricular diastolic filling was noninvasively determined by precordial echocardiography using a pulsed Doppler technique and a newly developed acoustic quantification (AQ) method that automatically detects endocardial borders and measures cavity area. All measurements were performed in awake, premedicated patients. In the HTE group, sensory blockade of at least T1-T5 was induced with 2% lidocaine 5 ml. Results: During HTE, systolic blood pressure (119 +/- 16 vs. 108 +/- 14 mmHg, p LTE group, no significant differences were noted in the systolic and diastolic indices obtained by the pulsed Doppler and AQ methods. Conclusions: High thoracic epidural anesthesia causes a decrease in cardiac output (CO) without changing LV ejection and diastolic filling performance in healthy subjects.

  4. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    Science.gov (United States)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to parameterizing moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution, a behavior also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model's horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (a Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a large-eddy simulation (LES) model and a cloud-tracking algorithm, followed by conditional sampling of clouds at cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a
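A minimal sketch of the compound random process described above, in the spirit of Craig and Cohen (2006): the number of clouds in a grid box is Poisson-distributed, each cloud-base mass flux is drawn from an exponential (Boltzmann) distribution, and the subgrid total is their sum. All parameter values here are illustrative assumptions, not those of the study:

```python
# Compound Poisson sketch of a stochastic shallow-convection subgrid model:
# N ~ Poisson(<N>), per-cloud mass flux ~ Exponential(<m>), total = sum.
# Parameter values are illustrative only.
import math
import random

def sample_poisson(lam, rng):
    """Poisson sample via Knuth's multiplication method (fine for modest lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_subgrid_mass_flux(mean_total_flux, mean_cloud_flux, rng):
    """One realization of the total cloud-base mass flux in a grid box."""
    expected_n = mean_total_flux / mean_cloud_flux   # <N>, mean cloud count
    n_clouds = sample_poisson(expected_n, rng)
    # Each cloud contributes an exponentially distributed mass flux.
    return sum(rng.expovariate(1.0 / mean_cloud_flux) for _ in range(n_clouds))

rng = random.Random(42)
draws = [sample_subgrid_mass_flux(1.0, 0.05, rng) for _ in range(10_000)]
mean = sum(draws) / len(draws)
print(f"ensemble mean mass flux = {mean:.3f} (target 1.0)")
```

Shrinking the grid box (smaller mean_total_flux at fixed mean_cloud_flux) lowers the expected cloud count and widens the relative spread of the total, reproducing the resolution dependence of the sub-grid distribution noted in the abstract.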

  5. Attending to global versus local stimulus features modulates neural processing of low versus high spatial frequencies: An analysis with event-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Anastasia V Flevaris

    2014-04-01

    Full Text Available Spatial frequency (SF) selection has long been recognized to play a role in global and local processing, though the nature of the relationship between SF processing and global/local perception is debated. Previous studies have shown that attention to relatively lower SFs facilitates global perception, and that attention to relatively higher SFs facilitates local perception. Here we recorded event-related brain potentials (ERPs) to