WorldWideScience

Sample records for model physics parameterizations

  1. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    Science.gov (United States)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another, with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative: the order in which processes are called affects the solution. By examining a suite of 24 simulations with the deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a large impact on the predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as large as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
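
    A toy sketch (not E3SM code) of the sequential-splitting point made above: because each parameterization acts on the state already updated by the ones called before it, swapping the call order changes the final state. The two "schemes" below are invented placeholders.

```python
# Toy illustration of why sequentially split physics is noncommutative:
# each process sees the state already updated by the processes called
# before it, so the call order changes the result.
def deep_convection(state, dt):
    # hypothetical scheme: remove a fixed fraction of humidity per step
    state = dict(state)
    state["q"] -= 0.2 * state["q"] * dt
    return state

def cloud_macrophysics(state, dt):
    # hypothetical scheme: condense any humidity above a fixed threshold
    state = dict(state)
    excess = max(state["q"] - 0.01, 0.0)
    state["q"] -= excess
    state["T"] += 2.5e6 / 1004.0 * excess  # latent heating, L/cp
    return state

def run(order, state, dt=1.0):
    for process in order:
        state = process(state, dt)
    return state

initial = {"T": 290.0, "q": 0.012}
a = run([deep_convection, cloud_macrophysics], initial)
b = run([cloud_macrophysics, deep_convection], initial)
print(a)  # the two orderings yield different final temperature and humidity
print(b)
```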

  2. Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems

    DEFF Research Database (Denmark)

    Blanke, Mogens; Knudsen, Morten Haack

    2006-01-01

    Grey-box model identification preserves known physical structures in a model, but with limits to the possible excitation, all parameters are rarely identifiable, and different parametrizations give significantly different model quality. Convenient methods to show which parameterizations are the be...... that need be constrained to achieve satisfactory convergence. Identification of nonlinear models for a ship illustrates the concept....

  3. Assessment of two physical parameterization schemes for desert dust emissions in an atmospheric chemistry general circulation model

    Science.gov (United States)

    Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.

    2012-04-01

    Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and to highlight the advantages and disadvantages of using either technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Also, sensitivity simulations with and without nudging toward ECMWF reanalysis data have shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations and ideas for future improvements are given.

  4. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick

    2016-01-26

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  5. Analysis of different atmospheric physical parameterizations in COAWST modeling system for the Tropical Storm Nock-ten application

    DEFF Research Database (Denmark)

    Ren, Danqin; Du, Jianting; Hua, Feng

    2016-01-01

    A coupled ocean–atmosphere–wave–sediment transport modeling system was applied to study the atmosphere and ocean dynamics during Tropical Storm Nock-ten. Different atmospheric physical parameterizations in the WRF model were investigated through ten groups of numerical experiments. Results of atmosphere, ocean wave and current features were compared with storm observations, ERA-Interim data, NOAA sea surface temperature data, AVISO current data and HYCOM data, respectively. It was found that the storm track and intensity are sensitive to the cumulus and radiation schemes in WRF, especially around the storm center area. As a result, using the Kain–Fritsch cumulus scheme, Goddard shortwave radiation scheme and RRTM longwave radiation scheme in WRF may lead to much larger wind intensity, significant wave height and current intensity, as well as lower SST and sea surface pressure. Thus......
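
    A small sketch of how such a scheme-combination sensitivity study could be organized: enumerate candidate WRF cumulus and radiation schemes and generate one experiment entry per combination. The scheme lists are illustrative and do not reproduce the paper's ten experiments.

```python
# Sketch (hypothetical scheme lists) of enumerating WRF physics combinations
# for a sensitivity study; each entry would map to one WRF namelist setup.
from itertools import product

cumulus_schemes = ["Kain-Fritsch", "Betts-Miller-Janjic", "Grell-Freitas"]
sw_radiation = ["Goddard", "Dudhia"]
lw_radiation = ["RRTM"]

experiments = []
for i, (cu, sw, lw) in enumerate(product(cumulus_schemes, sw_radiation, lw_radiation), 1):
    experiments.append({"id": f"EXP{i:02d}", "cumulus": cu, "shortwave": sw, "longwave": lw})

for exp in experiments:
    print(exp)
```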

  6. Modelling of primary aerosols in the chemical transport model MOCAGE: development and evaluation of aerosol physical parameterizations

    Directory of Open Access Journals (Sweden)

    B. Sič

    2015-02-01

    This paper deals with recent improvements to the global chemical transport model of Météo-France, MOCAGE (Modèle de Chimie Atmosphérique à Grande Echelle), that consist of updates to different aerosol parameterizations. MOCAGE only contains primary aerosol species: desert dust, sea salt, black carbon, organic carbon, and also volcanic ash in the case of large volcanic eruptions. We introduced important changes to the aerosol parameterization concerning emissions, wet deposition and sedimentation. For the emissions, size distribution and wind calculations are modified for desert dust aerosols, and a sea surface temperature dependent source function is introduced for sea salt aerosols. Wet deposition is modified toward a more physically realistic representation by introducing re-evaporation of falling rain and snowfall scavenging and by changing the in-cloud scavenging scheme along with calculations of precipitation cloud cover and rain properties. The sedimentation scheme update includes changes regarding the stability and viscosity calculations. Independent data from satellites (MODIS, SEVIRI), the ground (AERONET, EMEP), and a model inter-comparison project (AeroCom) are compared with MOCAGE simulations and show that the introduced changes brought a significant improvement in aerosol representation, properties and global distribution. Emitted quantities of desert dust and sea salt, as well as their lifetimes, moved closer toward the AeroCom estimates and the multi-model average. When comparing the model simulations with MODIS aerosol optical depth (AOD) observations over the oceans, the updated model configuration shows a decrease in the modified normalized mean bias (MNMB; from 0.42 to 0.10) and a better correlation (from 0.06 to 0.32) in terms of the geographical distribution and the temporal variability. The updates corrected a strong positive MNMB in the sea salt representation at high latitudes (from 0.65 to 0.16), and a negative MNMB in

  7. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
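
    A minimal sketch of the approach described above, assuming the standard small-perturbation longitudinal state-space form with states [u, w, q, theta]; the stability-derivative values are placeholders rather than airship data.

```python
# Assemble a linearized longitudinal model x_dot = A x for states
# [u, w, q, theta] from dimensional stability derivatives.
# All numerical values are placeholders, not data for a specific airship.
import numpy as np

g = 9.81          # gravity, m/s^2
U0 = 10.0         # trim airspeed, m/s (placeholder)

# placeholder dimensional stability derivatives
Xu, Xw = -0.02, 0.01
Zu, Zw, Zq = -0.10, -0.60, -1.0
Mu, Mw, Mq = 0.001, -0.05, -0.8

A = np.array([
    [Xu,  Xw,  0.0,      -g ],   # axial velocity perturbation
    [Zu,  Zw,  U0 + Zq,  0.0],   # vertical velocity perturbation
    [Mu,  Mw,  Mq,       0.0],   # pitch rate
    [0.0, 0.0, 1.0,      0.0],   # pitch angle: theta_dot = q
])

print(np.linalg.eigvals(A))  # longitudinal modes implied by these derivatives
```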

  8. Infrared radiation parameterizations in numerical climate models

    Science.gov (United States)

    Chou, Ming-Dah; Kratz, David P.; Ridgway, William

    1991-01-01

    This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
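
    A schematic of the parameterization strategy described above (one-parameter pressure scaling of the absorber path plus table lookup of the diffuse transmittance); the scaling exponent and the table values are placeholders, not the paper's fitted coefficients.

```python
# Approximate a nonhomogeneous absorber path by a pressure-scaled amount,
# then interpolate the diffuse broadband transmittance from a precomputed
# table. Exponent and table values are placeholders.
import numpy as np

def scaled_absorber_amount(q, p, p_ref=500e2, m=0.8, g=9.81):
    """One-parameter pressure scaling of the absorber path."""
    dp = np.diff(p)                      # layer pressure thickness (Pa)
    q_mid = 0.5 * (q[1:] + q[:-1])       # layer mean mixing ratio (kg/kg)
    p_mid = 0.5 * (p[1:] + p[:-1])
    return np.sum(q_mid * (p_mid / p_ref) ** m * dp / g)

# hypothetical precomputed table: diffuse transmittance vs. scaled amount
table_w = np.array([0.0, 0.1, 0.5, 1.0, 5.0, 10.0])       # kg/m^2
table_tr = np.array([1.0, 0.92, 0.78, 0.65, 0.35, 0.22])

p = np.linspace(100e2, 1000e2, 20)       # pressure levels, Pa
q = np.full_like(p, 3e-3)                # water-vapor mixing ratio, kg/kg
w = scaled_absorber_amount(q, p)
transmittance = np.interp(w, table_w, table_tr)
print(w, transmittance)
```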

  9. CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW

    International Nuclear Information System (INIS)

    LIU, Y.; DAUM, P.H.; CHAI, S.K.; LIU, F.

    2002-01-01

    This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments
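
    A hedged numerical illustration of the role of spectral dispersion: for fixed liquid water content and droplet number, the effective radius changes with the relative dispersion of the size distribution. The beta expression below is one widely cited form (often attributed to Liu and Daum) and is used here only for illustration.

```python
# Effective radius as a function of relative dispersion d for fixed liquid
# water content L and droplet number N (illustrative formula).
import numpy as np

def effective_radius(L, N, d, rho_w=1000.0):
    """L in kg/m^3, N in m^-3, d = dimensionless relative dispersion."""
    r_vol = (3.0 * L / (4.0 * np.pi * rho_w * N)) ** (1.0 / 3.0)   # volume-mean radius
    beta = (1.0 + 2.0 * d**2) ** (2.0 / 3.0) / (1.0 + d**2) ** (1.0 / 3.0)
    return beta * r_vol

L = 0.3e-3      # 0.3 g/m^3 liquid water content
N = 100e6       # 100 cm^-3 droplet number
for d in (0.2, 0.5, 0.8):
    print(d, effective_radius(L, N, d) * 1e6, "micrometers")
```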

  10. Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation

    Directory of Open Access Journals (Sweden)

    Steve Ghan

    2011-10-01

    One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically-based parameterizations of droplet nucleation are designed to quickly estimate the number nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and compare their performances under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations to cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
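
    The parameterizations compared in the paper are more elaborate, but the classical Twomey power law conveys the basic dependence of nucleated droplet number on peak supersaturation (and hence on cooling rate); the constants below are placeholders.

```python
# Classical Twomey power-law activation: the number of droplets nucleated
# grows with the peak supersaturation reached during cloud formation.
def twomey_activated_number(s_max_percent, C=200.0, k=0.5):
    """N = C * s^k, with C in cm^-3 and s in percent supersaturation (placeholder constants)."""
    return C * s_max_percent ** k

for s in (0.1, 0.3, 1.0):   # stronger cooling rates reach higher supersaturation
    print(s, twomey_activated_number(s), "droplets per cm^3")
```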

  11. Parameterized neural networks for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)

    2016-05-15

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)
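
    A schematic of the parameterized-classifier idea (not the authors' trained network): the physics parameter, here a hypothesized mass, is appended to the event features so a single network can be evaluated at any parameter value. The weights below are random and untrained.

```python
# A classifier whose input is the measured event features concatenated with
# a physics parameter theta, so one network can be evaluated at any theta
# instead of training a separate network per mass hypothesis.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # 3 features + 1 parameter -> 8 hidden units
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def classifier(features, theta):
    x = np.concatenate([features, [theta]])     # parameterized input vector
    h = np.tanh(W1 @ x + b1)
    z = (W2 @ h + b2)[0]
    return 1.0 / (1.0 + np.exp(-z))             # "signal probability" (untrained weights)

event = np.array([0.5, -1.2, 0.3])              # placeholder kinematic features
for mass in (500.0, 750.0, 1000.0):             # evaluate the same network at several masses
    print(mass, classifier(event, mass / 1000.0))
```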

  12. Parameterized neural networks for high-energy physics

    International Nuclear Information System (INIS)

    Baldi, Pierre; Sadowski, Peter; Cranmer, Kyle; Faucett, Taylor; Whiteson, Daniel

    2016-01-01

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)

  13. Collaborative Project. 3D Radiative Transfer Parameterization Over Mountains/Snow for High-Resolution Climate Models. Fast physics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liou, Kuo-Nan [Univ. of California, Los Angeles, CA (United States)

    2016-02-09

    Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) Develop an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions. (2) Innovate a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains for understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition and the interannual variability for snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the
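
    A sketch of the regression step described above: deviations of 3D from plane-parallel fluxes are fitted to topographic predictors by multiple linear regression. The data here are random placeholders standing in for the Monte Carlo training output.

```python
# Fit "3D minus plane-parallel" flux deviations to topographic predictors
# with multiple linear regression; the fitted surrogate is cheap enough to
# evaluate inside a model time step. Training data here are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 500
elevation = rng.uniform(0.0, 4000.0, n)       # m
mu_incident = rng.uniform(0.1, 1.0, n)        # cosine of solar incidence angle
sky_view = rng.uniform(0.5, 1.0, n)           # sky view factor
terrain_cfg = rng.uniform(0.0, 0.3, n)        # terrain configuration factor

X = np.column_stack([np.ones(n), elevation, mu_incident, sky_view, terrain_cfg])
flux_deviation = rng.normal(0.0, 5.0, n)      # W/m^2, placeholder target

coeffs, *_ = np.linalg.lstsq(X, flux_deviation, rcond=None)
predicted = X @ coeffs                         # fast surrogate for the 3D correction
print(coeffs)
```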

  14. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    Science.gov (United States)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results: the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
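
    A one-dimensional sketch of spectral nudging, assuming a simple FFT filter: only the lowest wavenumbers of the model field are relaxed toward the driving analysis, leaving model-generated small scales untouched. The cutoff wavenumber and relaxation time are placeholders.

```python
# Relax only the large-scale (low-wavenumber) part of the model field toward
# the driving analysis; small scales generated by the regional model are kept.
import numpy as np

def spectral_nudge(model_field, driving_field, dt, tau=6 * 3600.0, n_keep=3):
    """Relax the n_keep lowest wavenumbers of model_field toward driving_field."""
    diff_hat = np.fft.rfft(driving_field - model_field)
    mask = np.zeros_like(diff_hat)
    mask[:n_keep + 1] = 1.0                      # retain only the large-scale part
    large_scale_diff = np.fft.irfft(mask * diff_hat, n=model_field.size)
    return model_field + (dt / tau) * large_scale_diff

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
analysis = np.sin(x)                              # large-scale driving field
model = np.sin(x) + 0.3 * np.sin(15 * x) + 0.1    # model with drift and small scales
nudged = spectral_nudge(model, analysis, dt=600.0)
print(np.mean(model - analysis), np.mean(nudged - analysis))  # large-scale drift is reduced
```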

  15. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias in outgoing shortwave radiation over the ocean. The new scheme also simulated better features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modification of the entrainment-rate parameterization: the proposed parameterization suppresses an excessive increase in entrainment and thus an excessive increase in low-level clouds.

  16. On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations

    Science.gov (United States)

    Cesana, Grégory; Suselj, Kay; Brient, Florent

    2017-10-01

    We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.

  17. Optimal Physics Parameterization Scheme Combination of the Weather Research and Forecasting Model for Seasonal Precipitation Simulation over Ghana

    Directory of Open Access Journals (Sweden)

    Richard Yao Kuma Agyeman

    2017-01-01

    Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF) model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November) were performed for two different years: a dry year (2001) and a wet year (2008). A double-nested approach was used, with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated) precipitation over the coastal (northern) zones of Ghana for both years but estimated precipitation reasonably well over the forest and transitional zones. On the whole, the combination of the WRF Single-Moment 6-Class Microphysics Scheme, the Grell-Devenyi Ensemble Cumulus Scheme, and the Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and is therefore recommended for Ghana.
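
    A small sketch of the kind of scores used to rank scheme combinations above, relative bias and temporal correlation of simulated versus observed precipitation; the arrays are invented placeholders for monthly zone means.

```python
# Relative bias and temporal correlation of simulated vs. observed
# precipitation for one scheme combination (placeholder monthly values).
import numpy as np

def relative_bias(sim, obs):
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

def temporal_correlation(sim, obs):
    return np.corrcoef(sim, obs)[0, 1]

obs = np.array([20.0, 60.0, 150.0, 210.0, 180.0, 90.0, 120.0, 40.0])   # Apr-Nov, mm
sim = np.array([25.0, 75.0, 160.0, 250.0, 200.0, 110.0, 150.0, 55.0])  # one scheme combination

print(f"relative bias: {relative_bias(sim, obs):+.0%}")
print(f"temporal correlation: {temporal_correlation(sim, obs):.2f}")
```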

  18. A test harness for accelerating physics parameterization advancements into operations

    Science.gov (United States)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a
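
    A minimal sketch of the driver idea, assuming a made-up interface rather than the actual IPD/CCPP API: every scheme exposes the same init/run signature, so the identical physics code can be exercised from a single-column model or a more complex host.

```python
# Hypothetical uniform physics-scheme interface; the same scheme object can be
# driven by a single-column test or, in principle, a global host model.
from abc import ABC, abstractmethod

class PhysicsScheme(ABC):
    @abstractmethod
    def init(self, config: dict) -> None: ...
    @abstractmethod
    def run(self, state: dict, dt: float) -> dict:
        """Return tendencies (e.g. {'dT_dt': ...}) for one column."""

class ToyBoundaryLayer(PhysicsScheme):
    def init(self, config):
        self.diffusivity = config.get("diffusivity", 1.0)
    def run(self, state, dt):
        # placeholder tendency: relax temperature toward a reference value
        return {"dT_dt": -(state["T"] - state["T_ref"]) / 3600.0}

def single_column_step(schemes, state, dt):
    for scheme in schemes:                       # the same loop a global host could use
        for name, tend in scheme.run(state, dt).items():
            var = name.split("_")[0][1:]         # map 'dT_dt' back to 'T'
            state[var] += tend * dt
    return state

scheme = ToyBoundaryLayer()
scheme.init({})
print(single_column_step([scheme], {"T": 300.0, "T_ref": 295.0}, dt=600.0))
```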

  19. The dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component AM3 of the GFDL global coupled model CM3

    Science.gov (United States)

    Donner, L.J.; Wyman, B.L.; Hemler, R.S.; Horowitz, L.W.; Ming, Y.; Zhao, M.; Golaz, J.-C.; Ginoux, P.; Lin, S.-J.; Schwarzkopf, M.D.; Austin, J.; Alaka, G.; Cooke, W.F.; Delworth, T.L.; Freidenreich, S.M.; Gordon, C.T.; Griffies, S.M.; Held, I.M.; Hurlin, W.J.; Klein, S.A.; Knutson, T.R.; Langenhorst, A.R.; Lee, H.-C.; Lin, Y.; Magi, B.I.; Malyshev, S.L.; Milly, P.C.D.; Naik, V.; Nath, M.J.; Pincus, R.; Ploshay, J.J.; Ramaswamy, V.; Seman, C.J.; Shevliakova, E.; Sirutis, J.J.; Stern, W.F.; Stouffer, R.J.; Wilson, R.J.; Winton, M.; Wittenberg, A.T.; Zeng, F.

    2011-01-01

    The Geophysical Fluid Dynamics Laboratory (GFDL) has developed a coupled general circulation model (CM3) for the atmosphere, oceans, land, and sea ice. The goal of CM3 is to address emerging issues in climate change, including aerosol-cloud interactions, chemistry-climate interactions, and coupling between the troposphere and stratosphere. The model is also designed to serve as the physical system component of earth system models and models for decadal prediction in the near-term future, for example through improved simulation of tropical land precipitation relative to earlier-generation GFDL models. This paper describes the dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component (AM3) of this model. Relative to GFDL AM2, AM3 includes new treatments of deep and shallow cumulus convection, cloud droplet activation by aerosols, subgrid variability of stratiform vertical velocities for droplet activation, and atmospheric chemistry driven by emissions with advective, convective, and turbulent transport. AM3 employs a cubed-sphere implementation of a finite-volume dynamical core and is coupled to LM3, a new land model with ecosystem dynamics and hydrology. Its horizontal resolution is approximately 200 km, and its vertical resolution ranges from approximately 70 m near the earth's surface to 1 to 1.5 km near the tropopause and 3 to 4 km in much of the stratosphere. Most basic circulation features in AM3 are simulated as realistically as, or more realistically than, in AM2. In particular, dry biases have been reduced over South America. In coupled mode, the simulation of Arctic sea ice concentration has improved. AM3 aerosol optical depths, scattering properties, and surface clear-sky downward shortwave radiation are more realistic than in AM2. The simulation of marine stratocumulus decks remains problematic, as in AM2. The most intense 0.2% of precipitation rates occur less frequently in AM3 than observed. The last two decades of

  20. Cumulus parameterizations in chemical transport models

    Science.gov (United States)

    Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.

    1995-12-01

    Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parameterization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and to point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasts) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992)). These convective schemes vary in the closure used to derive the mass fluxes, as well as in the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the

  1. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide the needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double diffusive, internal wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  2. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

    We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved with either the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be quickly achieved by changing a few numbers without requiring knowledge of the input syntax for the transport code or the tedious and error prone work of recalculating many surface or solid body coefficients. (author)
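
    A sketch of the parameterized-geometry idea, using an invented dictionary syntax rather than Moritz's actual input format: surface coefficients reference named variables or arithmetic expressions, which can be resolved to numbers when the model is exported to a transport code.

```python
# Named geometry variables defined as constants or arithmetic expressions;
# surface coefficients reference them and are evaluated only when the model
# is written out for the transport code. Syntax here is illustrative only.
variables = {
    "room_w": "6.0",                 # room width, m
    "wall_t": "0.30",                # shielding wall thickness, m
    "outer_w": "room_w + 2*wall_t",  # derived variable
}

def evaluate_all(variables, max_passes=10):
    """Resolve every variable; expressions may reference other variables."""
    values = {}
    for _ in range(max_passes):
        for name, expr in variables.items():
            try:
                values[name] = eval(expr, {"__builtins__": {}}, dict(values))
            except NameError:
                pass                      # dependency not resolved yet; retry next pass
        if len(values) == len(variables):
            return values
    raise ValueError("unresolved or circular variable definitions")

surface_coeffs = {"outer_wall_x": "outer_w / 2", "inner_wall_x": "room_w / 2"}
values = evaluate_all(variables)
print({k: eval(v, {"__builtins__": {}}, values) for k, v in surface_coeffs.items()})
```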

  3. A review of recent research on improvement of physical parameterizations in the GLA GCM

    Science.gov (United States)

    Sud, Y. C.; Walker, G. K.

    1990-01-01

    A systematic assessment of the effects of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with those of the earlier slab soil hydrology GCM (SSH-GCM). In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.

  4. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through the body's structural profiles and locations. In conventional lightweight optimization, model modifications involve a huge amount of manual work and typically lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with the traditional finite element method (FEM) without parameterization. The SFE parameterized structural model is built in accordance with the car structural FE model in the concept development stage, and it is validated against structural performance data. The validated SFE parameterized structural model can then be used to rapidly and automatically generate FE models and evaluate different groups of design variables in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  5. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized systems more general and easier, a new and intuitive language, PSL (Parameterized-system Specification Language), is proposed in this paper to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure has been implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.
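
    A minimal illustration of the counting abstraction mentioned above: a global state of N identical processes is represented only by how many processes occupy each local state, which is what makes the symbolic model independent of process identity.

```python
# Counting abstraction: global states that differ only by a permutation of
# identical processes map to the same symbolic (counter-based) state.
from collections import Counter

def abstract(global_state):
    """Map a tuple of per-process local states to a counter-based symbolic state."""
    return frozenset(Counter(global_state).items())

# two concrete 5-process states that differ only by a permutation of processes
s1 = ("idle", "wait", "idle", "crit", "idle")
s2 = ("crit", "idle", "idle", "idle", "wait")
print(abstract(s1) == abstract(s2))   # True: both map to {idle: 3, wait: 1, crit: 1}
```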

  6. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    International Nuclear Information System (INIS)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-01-01

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional

  7. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well-understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  8. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
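
    A sketch of the forward SMM that such a parameterization would drive (the paper's contribution is estimating the transition probabilities from BTCs, which is not reproduced here): a correlated random walk over velocity classes, with placeholder travel times and transition matrix, yields a downstream breakthrough curve.

```python
# Forward spatial Markov model building block: a correlated random walk over
# velocity classes. The transition matrix and travel times are placeholders;
# in practice they would be parameterized, e.g. from BTC data as above.
import numpy as np

rng = np.random.default_rng(2)
dt_class = np.array([1.0, 4.0, 16.0])        # travel time per step: fast, medium, slow classes
T = np.array([[0.7, 0.2, 0.1],               # row i: probability of moving from class i
              [0.2, 0.6, 0.2],               # to class j over one spatial step
              [0.1, 0.2, 0.7]])

def breakthrough_times(n_particles=10000, n_steps=10):
    classes = rng.integers(0, 3, n_particles)          # initial velocity classes
    arrival = dt_class[classes].copy()
    for _ in range(n_steps - 1):
        # correlated step: next class drawn from the row of the current class
        classes = np.array([rng.choice(3, p=T[c]) for c in classes])
        arrival += dt_class[classes]
    return arrival

times = breakthrough_times()
hist, edges = np.histogram(times, bins=50)              # downstream BTC
print(edges[np.argmax(hist)], times.mean())
```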

  9. Parameterization of phase change of water in a mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Eppel, D; Grassl, H

    1987-01-01

    A parameterization scheme for the phase change of water is suggested for use in the 3-D numerical nonhydrostatic model GESIMA. The microphysical formulation follows the so-called bulk technique. With this procedure, the net production rates in the balance equations for water and potential temperature are given for both the liquid and ice phases. Convectively stable as well as convectively unstable mesoscale systems are considered.

  10. Air quality modeling: evaluation of chemical and meteorological parameterizations

    International Nuclear Information System (INIS)

    Kim, Youngseob

    2011-01-01

    The influence of chemical mechanisms and meteorological parameterizations on pollutant concentrations calculated with an air quality model is studied. The influence of the differences between two gas-phase chemical mechanisms on the formation of ozone and aerosols in Europe is low on average. For ozone, the large local differences are mainly due to the uncertainty associated with the kinetics of nitrogen monoxide (NO) oxidation reactions on the one hand and the representation of different pathways for the oxidation of aromatic compounds on the other hand. The aerosol concentrations are mainly influenced by the selection of all major precursors of secondary aerosols and the explicit treatment of chemical regimes corresponding to the nitrogen oxides (NOx) levels. The influence of the meteorological parameterizations on the concentrations of aerosols and their vertical distribution is evaluated over the Paris region in France by comparison to lidar data. The influence of the parameterization of the dynamics in the atmospheric boundary layer is important; however, it is the use of an urban canopy model that improves significantly the modeling of the pollutant vertical distribution (author) [fr]

  11. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well the model algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and it also sheds light on the species-specific character of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated daylight and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, such models are mostly used one-sidedly, to better understand different tree growth processes, in contrast to statistical methods of analysis (e.g. generalized linear models, mixed models, structural equation models), which can be used for reconstruction and forecasting. Usually the models are used either for testing new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as study tools in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the climate factors limiting growth. Precise parameterization with VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g. day of growing season onset, length of the season, minimum/maximum temperature values for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219)
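
    A hedged sketch of the kind of non-linear daily growth response such a parameterization controls: piecewise-linear partial growth rates for temperature and soil moisture combined through the most limiting factor. The thresholds are placeholders, not calibrated VS-Oscilloscope parameters.

```python
# Piecewise-linear partial growth rates for temperature and soil moisture,
# combined by the most limiting factor (placeholder thresholds).
def ramp(x, x_min, x_opt1, x_opt2, x_max):
    """0 below x_min and above x_max, 1 between the optimum bounds, linear in between."""
    if x <= x_min or x >= x_max:
        return 0.0
    if x < x_opt1:
        return (x - x_min) / (x_opt1 - x_min)
    if x > x_opt2:
        return (x_max - x) / (x_max - x_opt2)
    return 1.0

def daily_growth_rate(temperature, soil_moisture):
    g_t = ramp(temperature, 4.0, 16.0, 22.0, 30.0)      # deg C thresholds (placeholders)
    g_w = ramp(soil_moisture, 0.05, 0.20, 0.35, 0.50)   # volumetric soil moisture
    return min(g_t, g_w)                                # most limiting factor controls growth

print(daily_growth_rate(10.0, 0.25))   # temperature-limited day
print(daily_growth_rate(18.0, 0.08))   # moisture-limited day
```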

  12. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.

  13. Integrated cumulus ensemble and turbulence (ICET): An integrated parameterization system for general circulation models (GCMs)

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J.L.; Frank, W.M.; Young, G.S. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.

  14. Shallow Chamber & Conduit Behavior of Silicic Magma: A Thermo- and Fluid- Dynamic Parameterization Model of Physical Deformation as Constrained by Geodetic Observations: Case Study; Soufriere Hills Volcano, Montserrat

    Science.gov (United States)

    Gunn de Rosas, C. L.

    2013-12-01

    The Soufrière Hills Volcano, Montserrat (SHV) is an active, mainly andesitic and well-studied stratovolcano situated at the northern end of the Lesser Antilles Arc subduction zone in the Caribbean Sea. The goal of our research is to create a high resolution 3D subsurface model of the shallow and deeper aspects of the magma storage and plumbing system at SHV. Our model will integrate inversions using continuous and campaign geodetic observations at SHV from 1995 to the present as well as local seismic records taken at various unrest intervals to construct a best-fit geometry, pressure point source and inflation rate and magnitude. We will also incorporate a heterogeneous media in the crust and use the most contemporary understanding of deep crustal- or even mantle-depth 'hot-zone' genesis and chemical evolution of silicic and intermediate magmas to inform the character of the deep edifice influx. Our heat transfer model will be constructed with a modified 'thin shell' enveloping the magma chamber to simulate the insulating or conducting influence of heat-altered chamber boundary conditions. The final forward model should elucidate observational data preceding and proceeding unrest events, the behavioral suite of magma transport in the subsurface environment and the feedback mechanisms that may contribute to eruption triggering. Preliminary hypotheses suggest wet, low-viscosity residual melts derived from 'hot zones' will ascend rapidly to shallower stall-points and that their products (eventually erupted lavas as well as stalled plutonic masses) will experience and display two discrete periods of shallow evolution; a rapid depressurization crystallization event followed by a slower conduction-controlled heat transfer and cooling crystallization. These events have particular implications for shallow magma behaviors, notably inflation, compressibility and pressure values. Visualization of the model with its inversion constraints will be affected with Com

  15. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  16. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  17. Aircraft Observations for Improved Physical Parameterization for Seasonal Prediction

    Science.gov (United States)

    2013-09-30

    platform is ready for use in air-sea interaction research projects. [The remainder of this record lists related projects (none) and publications by Gerber, Frick, Malinowski and co-authors, including a description of the modified ultrafast thermometer UFT-M and associated temperature measurements.]

  18. The use of the k-ε turbulence model within the Rossby Centre regional ocean climate model: parameterization development and results

    Energy Technology Data Exchange (ETDEWEB)

    Markus Meier, H.E. [Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden). Rossby Centre

    2000-09-01

    As mixing plays a dominant role in the physics of an estuary like the Baltic Sea (seasonal heat storage, mixing in channels, deep water mixing), different mixing parameterizations for use in 3D Baltic Sea models are discussed and compared. For this purpose two different OGCMs of the Baltic Sea are utilized. Within the Swedish regional climate modeling program, SWECLIM, a 3D coupled ice-ocean model for the Baltic Sea has been coupled with an improved version of the two-equation k-ε turbulence model with a corrected dissipation term, flux boundary conditions to include the effect of a turbulence-enhanced layer due to breaking surface gravity waves, and a parameterization for breaking internal waves. Results of multi-year simulations are compared with observations. The seasonal thermocline is simulated satisfactorily and erosion of the halocline is avoided. Unsolved problems are discussed. To replace the controversial equation for dissipation, the performance of a hierarchy of k-models has been tested and compared with the k-ε model. In addition, it is shown that the results of the mixing parameterization depend very much on the choice of the ocean model. Finally, the impact of two mixing parameterizations on Baltic Sea climate is investigated. In this case the sensitivity of mean SST, vertical temperature and salinity profiles, ice season and the seasonal cycle of heat fluxes is quite large.
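
    For reference, the standard two-equation k-ε model that schemes of this kind modify carries prognostic equations for the turbulent kinetic energy k and its dissipation rate ε. A one-dimensional (vertical) form with the usual closure constants is sketched below; this is the textbook formulation, not Meier's modified version with the corrected dissipation term and wave-breaking boundary conditions.

```latex
\frac{\partial k}{\partial t}
  = \frac{\partial}{\partial z}\left(\frac{\nu_t}{\sigma_k}\,\frac{\partial k}{\partial z}\right)
  + P_s + P_b - \varepsilon ,
\qquad
\frac{\partial \varepsilon}{\partial t}
  = \frac{\partial}{\partial z}\left(\frac{\nu_t}{\sigma_\varepsilon}\,\frac{\partial \varepsilon}{\partial z}\right)
  + \frac{\varepsilon}{k}\left(c_{\varepsilon 1} P_s + c_{\varepsilon 3} P_b - c_{\varepsilon 2}\,\varepsilon\right),
\qquad
\nu_t = c_\mu \frac{k^2}{\varepsilon}
```

    with shear production P_s = ν_t[(∂u/∂z)² + (∂v/∂z)²], buoyancy production P_b proportional to −N² (N the buoyancy frequency), and standard constants c_μ ≈ 0.09, σ_k = 1.0, σ_ε = 1.3, c_ε1 = 1.44, c_ε2 = 1.92; c_ε3 depends on the stratification.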

  19. A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Kain, J.S.; Deng, A. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.

  20. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    Science.gov (United States)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drag is parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drag is represented by adding a sink term to the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as feedbacks to the momentum tendencies on the first model level in the planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias is substantially reduced by adopting the SSOP scheme, in addition to a reduction of the wind bias in the lower troposphere. The targeted verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimates over the complex terrain in the southwest of Guangdong.
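
    To make the 'sink term' idea concrete, the sketch below (a hedged illustration, not the GRAPES TMM SSOP formulation) applies a Rayleigh-type drag to the first-level wind, scaled by how much of the layer the tallest unresolved peak occupies; the function name and the bulk coefficient c_d are assumptions.

```python
import numpy as np

def sso_drag_tendency(u, v, h_max, dz, c_d=0.3):
    """Illustrative momentum sink for unresolved orography (not the GRAPES TMM scheme).

    u, v  : first-model-level wind components (m/s)
    h_max : maximum subgrid terrain height within the grid box (m)
    dz    : thickness of the first model layer (m)
    c_d   : tunable bulk drag coefficient (assumed value)
    """
    speed = np.hypot(u, v)
    # Sink rate (1/s) grows with wind speed and with the blocked fraction of the layer
    coeff = c_d * np.minimum(h_max / dz, 1.0) * speed / dz
    return -coeff * u, -coeff * v

# Example: 10 m/s westerly over a grid box whose tallest unresolved peak is 300 m
print(sso_drag_tendency(10.0, 0.0, h_max=300.0, dz=500.0))
```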

  1. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    Science.gov (United States)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

    Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. The WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during York Flood 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
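
    The categorical indices listed above come from a standard 2x2 contingency table of forecast versus observed rainfall exceedances. A minimal sketch of their computation, with hypothetical counts, is given below.

```python
def categorical_scores(hits, misses, false_alarms):
    """Contingency-table verification scores of the kind used in the WRF rainfall evaluation.

    hits         : events both forecast and observed
    misses       : events observed but not forecast
    false_alarms : events forecast but not observed
    """
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    fbi = (hits + false_alarms) / (hits + misses)  # frequency bias index
    csi = hits / (hits + misses + false_alarms)    # critical success index
    return {"POD": pod, "FAR": far, "FBI": fbi, "CSI": csi}

# Hypothetical counts for one rainfall threshold
print(categorical_scores(hits=42, misses=13, false_alarms=9))
```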

  2. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Full Text Available Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitation of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performances. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias of the estimation of precipitation, runoff, evapotranspiration and soil moisture at the different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of well representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of their similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in ungauged basins.
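
    A minimal sketch of the central idea, assuming the point soil storage capacity within a grid cell follows a beta distribution: the runoff-contributing (saturated) fraction is then the beta CDF evaluated at the current fill level. The shape parameters and the helper name are hypothetical; in the paper they would be tied to the cell's topographic-index distribution.

```python
from scipy.stats import beta

def saturated_fraction(fill_level, wm_max, a=2.0, b=3.0):
    """Fraction of the grid cell whose storage capacity is below fill_level (mm).

    wm_max : maximum point storage capacity in the cell (mm)
    a, b   : beta shape parameters (hypothetical values)
    """
    return beta.cdf(fill_level / wm_max, a, b)

# Example: cell filled to 60 mm out of a 120 mm maximum capacity
print(saturated_fraction(60.0, 120.0))
```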

  3. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.; McCabe, Matthew; Evans, J.P.; Wood, E.F.

    2015-01-01

    Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.

  4. Frozen soil parameterization in a distributed biosphere hydrological model

    Directory of Open Access Journals (Sweden)

    L. Wang

    2010-03-01

    Full Text Available In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against the in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). First, by using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, by using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration against the discharges at the basin outlet in July and August, a period that covers the largest annual flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gave much better performance than the WEB-DHM without it, in the simulations of the soil moisture profile in this cold-region catchment and of the discharges at the basin outlet in the yearlong simulation.

  5. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Marjanovic, Nikola [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA; Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Mirocha, Jeffrey D. [Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Kosović, Branko [Research Applications Laboratory, Weather Systems and Assessment Program, University Corporation for Atmospheric Research, PO Box 3000, Boulder, Colorado 80307, USA; Lundquist, Julie K. [Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Campus Box 311, Boulder, Colorado 80309, USA; National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, USA; Chow, Fotini Katopodes [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA

    2017-11-01

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce aggregated wake characteristics similar to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.

  6. Parameterization of a ruminant model of phosphorus digestion and metabolism.

    Science.gov (United States)

    Feng, X; Knowlton, K F; Hanigan, M D

    2015-10-01

    The objective of the current work was to parameterize the digestive elements of the model of Hill et al. (2008) using data collected from animals that were ruminally, duodenally, and ileally cannulated, thereby providing a better understanding of the digestion and metabolism of P fractions in growing and lactating cattle. The model of Hill et al. (2008) was fitted and evaluated for adequacy using the data from 6 animal studies. We hypothesized that sufficient data would be available to estimate P digestion and metabolism parameters and that these parameters would be sufficient to derive P bioavailabilities of a range of feed ingredients. Inputs to the model were dry matter intake; total feed P concentration (fPtFd); phytate (Pp), organic (Po), and inorganic (Pi) P as fractions of total P (fPpPt, fPoPt, fPiPt); microbial growth; amount of Pi and Pp infused into the omasum or ileum; milk yield; and BW. The available data were sufficient to derive all model parameters of interest. The final model predicted that given 75 g/d of total P input, the total-tract digestibility of P was 40.8%, Pp digestibility in the rumen was 92.4%, and in the total-tract was 94.7%. Blood P recycling to the rumen was a major source of Pi flow into the small intestine, and the primary route of excretion. A large proportion of Pi flowing to the small intestine was absorbed; however, additional Pi was absorbed from the large intestine (3.15%). Absorption of Pi from the small intestine was regulated, and given the large flux of salivary P recycling, the effective fractional small intestine absorption of available P derived from the diet was 41.6% at requirements. Milk synthesis used 16% of total absorbed P, and less than 1% was excreted in urine. The resulting model could be used to derive P bioavailabilities of commonly used feedstuffs in cattle production. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    Science.gov (United States)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and of the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably well in parameterizing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) tends to underpredict SWHs in locally generated wave conditions and to overpredict them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization whose breaker index, as in SA15, depends on the normalized water depth in deep waters. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, in comparison with the three best-performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
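
    For reference, the BJ78 ingredients referred to above are the depth-limited maximum wave height H_max = γd and an implicit relation for the fraction of breaking waves Q_b, (1 − Q_b)/ln Q_b = −(H_rms/H_max)². The sketch below solves that relation by bisection; the default γ = 0.73 is a commonly quoted value, not necessarily the one used in this study.

```python
import numpy as np

def breaking_fraction(hrms, depth, gamma=0.73):
    """Fraction of breaking waves Qb from the Battjes & Janssen (1978) implicit relation
    (1 - Qb) / ln(Qb) = -(Hrms / Hmax)**2, with Hmax = gamma * depth."""
    b = hrms / (gamma * depth)          # Hrms / Hmax
    if b >= 1.0:
        return 1.0
    if b <= 0.0:
        return 0.0
    # Root of f(Qb) = 1 - Qb + b**2 * ln(Qb) on (0, 1), found by bisection
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 1.0 - mid + b**2 * np.log(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(breaking_fraction(hrms=0.5, depth=1.0))  # moderately dissipative surf-zone case
```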

  8. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    Science.gov (United States)

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  9. Tool-driven Design and Automated Parameterization for Real-time Generic Drivetrain Models

    Directory of Open Access Journals (Sweden)

    Schwarz Christina

    2015-01-01

    Full Text Available Real-time dynamic drivetrain modeling approaches have a great potential for development cost reduction in the automotive industry. Even though real-time drivetrain models are available, these solutions are specific to single transmission topologies. In this paper an environment for parameterization of a solution is proposed based on a generic method applicable to all types of gear transmission topologies. This enables tool-guided modeling by non-experts in the fields of mechanical engineering and control theory, leading to reduced development and testing efforts. The approach is demonstrated for an exemplary automatic transmission using the environment for automated parameterization. Finally, the parameterization is validated via vehicle measurement data.

  10. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

    eventually resurfaces with the global thermohaline circulation, especially in the Southern Ocean. Because of the reduced primary production and carbon export in GISSEH compared to GISSER, the biological pump efficiency, i.e., the ratio of primary production and carbon export at 75 m, in GISSEH is half that in GISSER. The Southern Ocean emerges as a key region where the CO2 flux is as sensitive to biological parameterizations as it is to physical parameterizations. The fidelity of ocean mixing in the Southern Ocean compared to observations is shown to be a good indicator of the magnitude of the biological pump efficiency regardless of physical model choice.

  11. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different model structures. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top-ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower-ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf
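
    For context, the single-source structure referred to above is the standard Penman-Monteith combination equation; a minimal sketch with nominal constants and hypothetical midday inputs is given below. The aerodynamic and surface resistances, which are the focus of the parameterization comparison, are simply supplied as inputs here.

```python
def penman_monteith(rn, g, delta, vpd, ra, rs, rho_a=1.2, cp=1013.0, gamma=66.0):
    """Single-source Penman-Monteith latent heat flux (W m-2).

    rn, g  : net radiation and ground heat flux (W m-2)
    delta  : slope of the saturation vapour pressure curve (Pa K-1)
    vpd    : vapour pressure deficit e_s - e_a (Pa)
    ra, rs : aerodynamic and bulk surface resistances (s m-1)
    rho_a, cp, gamma : air density, specific heat, psychrometric constant (nominal values)
    """
    return (delta * (rn - g) + rho_a * cp * vpd / ra) / (delta + gamma * (1.0 + rs / ra))

# Hypothetical midday grassland conditions
le = penman_monteith(rn=500.0, g=50.0, delta=145.0, vpd=1200.0, ra=40.0, rs=70.0)
print(round(le, 1), "W m-2")
```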

  12. Identifiability of Model Properties in Over-Parameterized Model Classes

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2013-01-01

    Classical learning theory is based on a tight linkage between hypothesis space (a class of function on a domain X), data space (function-value examples (x, f(x))), and the space of queries for the learned model (predicting function values for new examples x). However, in many learning scenarios t...

  13. A Stochastic Lagrangian Basis for a Probabilistic Parameterization of Moisture Condensation in Eulerian Models

    OpenAIRE

    Tsang, Yue-Kin; Vallis, Geoffrey K.

    2018-01-01

    In this paper we describe the construction of an efficient probabilistic parameterization that could be used in a coarse-resolution numerical model in which the variation of moisture is not properly resolved. An Eulerian model using a coarse-grained field on a grid cannot properly resolve regions of saturation---in which condensation occurs---that are smaller than the grid boxes. Thus, in the absence of a parameterization scheme, either the grid box must become saturated or condensation will ...

  14. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Science.gov (United States)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  15. Parameterization models for solar radiation and solar technology applications

    International Nuclear Information System (INIS)

    Khalil, Samy A.

    2008-01-01

    Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. Calibration procedures for broadband solar radiation photometric instrumentation have been developed and the accuracy of broadband solar radiation measurements improved. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. Uncertainties in various broadband solar radiation quantities due to solar and atmospheric effects are discussed. The varying responsivities of solar radiation with respect to meteorological, statistical and climatological parameters and possible atmospheric conditions were examined.

  16. Parameterization models for solar radiation and solar technology applications

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Samy A. [National Research Institute of Astronomy and Geophysics, Solar and Space Department, Marsed Street, Helwan, 11421 Cairo (Egypt)

    2008-08-15

    Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. Calibration procedures for broadband solar radiation photometric instrumentation have been developed and the accuracy of broadband solar radiation measurements improved. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. Uncertainties in various broadband solar radiation quantities due to solar and atmospheric effects are discussed. The varying responsivities of solar radiation with respect to meteorological, statistical and climatological parameters and possible atmospheric conditions were examined. (author)

  17. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes, since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  18. A scheme for parameterizing ice cloud water content in general circulation models

    Science.gov (United States)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  19. Physically sound parameterization of incomplete ionization in aluminum-doped silicon

    Directory of Open Access Journals (Sweden)

    Heiko Steinkemper

    2016-12-01

    Full Text Available Incomplete ionization is an important issue when modeling silicon devices featuring aluminum-doped p+ (Al-p+) regions. Aluminum has a rather deep state in the band gap compared to boron or phosphorus, causing strong incomplete ionization. In this paper, we considerably improve our recent parameterization [Steinkemper et al., J. Appl. Phys. 117, 074504 (2015)]. On the one hand, we found a fundamental criterion to further reduce the number of free parameters in our fitting procedure, and on the other hand, we address a mistake in the original publication of the incomplete ionization formalism in Altermatt et al., J. Appl. Phys. 100, 113715 (2006).
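
    For background, incomplete ionization of a single acceptor level is commonly written with a Fermi-Dirac-type occupation factor; the sketch below uses the equivalent hole-density form with degeneracy factor g_A = 4. The ionization energy and effective density of states are indicative values only, and the doping-dependent refinements of the Altermatt and Steinkemper parameterizations are not included.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def ionized_acceptor_fraction(p, delta_e_a=0.069, T=300.0, g_a=4.0, n_v=3.1e19):
    """Textbook ionized-acceptor fraction N_A-/N_A = 1 / (1 + g_a * p / p1).

    p         : hole density (cm-3)
    delta_e_a : acceptor ionization energy E_A - E_V (eV); roughly 0.06-0.07 eV is
                typically quoted for Al in Si (assumed here)
    n_v       : effective valence-band density of states at 300 K (cm-3, approximate;
                reported values range from about 1e19 to 3e19)
    """
    p1 = n_v * (T / 300.0) ** 1.5 * np.exp(-delta_e_a / (K_B * T))
    return 1.0 / (1.0 + g_a * p / p1)

for p in (1e16, 1e17, 1e18):
    print(f"p = {p:.0e} cm-3 -> ionized fraction ~ {ionized_acceptor_fraction(p):.2f}")
```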

  20. Parameterization models for pesticide exposure via crop consumption.

    Science.gov (United States)

    Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier

    2012-12-04

    An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties), including their possible correlations, using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely the time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parameterizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations, with total deviations between a factor of 4 (potato) and a factor of 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.
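
    To illustrate the parametric idea (this is a hedged sketch, not one of the published crop-specific models), the residue per kilogram applied can be written as a weighted sum of compartments, each decaying first-order over the interval between application and harvest; the compartment weights and half-lives below are hypothetical.

```python
import numpy as np

def residue_fraction(days_to_harvest, half_lives, weights):
    """Residue per kg applied as a sum of first-order-decaying compartments.

    half_lives : compartment half-lives (days)
    weights    : initial mass fractions reaching each compartment (hypothetical)
    """
    total = 0.0
    for comp, hl in half_lives.items():
        k = np.log(2.0) / hl                     # first-order rate constant (1/day)
        total += weights[comp] * np.exp(-k * days_to_harvest)
    return total

half_lives = {"crop": 5.0, "crop_surface": 2.0, "soil": 30.0}   # assumed values
weights = {"crop": 0.3, "crop_surface": 0.5, "soil": 0.2}
print(residue_fraction(days_to_harvest=14, half_lives=half_lives, weights=weights))
```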

  1. “Using Statistical Comparisons between SPartICus Cirrus Microphysical Measurements, Detailed Cloud Models, and GCM Cloud Parameterizations to Understand Physical Processes Controlling Cirrus Properties and to Improve the Cloud Parameterizations”

    Energy Technology Data Exchange (ETDEWEB)

    Woods, Sarah [SPEC Inc., Boulder, CO (United States)

    2015-12-01

    The dual objectives of this project were to improve our basic understanding of the processes that control cirrus microphysical properties and to improve the representation of these processes in cloud parameterizations. A major effort in the proposed research was to integrate, calibrate, and better understand the uncertainties in all of these measurements.

  2. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Science.gov (United States)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of
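
    The Monte Carlo step can be illustrated with a deliberately schematic resistance-form deposition velocity, propagating imprecision in friction velocity through to Vd. The formula, the bulk collection-efficiency constant, and the 30% lognormal imprecision are assumptions for illustration and do not correspond to any of the five reviewed parameterizations.

```python
import numpy as np

rng = np.random.default_rng(0)

def vd_schematic(ustar, vg=1e-4, z_r=10.0, z0=0.03, kappa=0.4, coll_eff=0.002):
    """Schematic Vd = Vg + 1/(Ra + Rb) with neutral-stability Ra; coll_eff is a
    hypothetical bulk surface collection efficiency."""
    ra = np.log(z_r / z0) / (kappa * ustar)  # aerodynamic resistance (s/m)
    rb = 1.0 / (coll_eff * ustar)            # quasi-laminar/surface resistance (s/m)
    return vg + 1.0 / (ra + rb)

# Propagate a 30% (1-sigma) lognormal imprecision in friction velocity
ustar_samples = 0.35 * np.exp(rng.normal(0.0, 0.3, size=10_000))
vd = vd_schematic(ustar_samples)
print(f"median Vd = {np.median(vd):.2e} m/s, "
      f"normalized spread = {np.std(vd) / np.mean(vd):.1%}")
```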

  3. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Directory of Open Access Journals (Sweden)

    T. R. Khan

    2017-10-01

    Full Text Available Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0

  4. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ, m-1), diffuse backscatter b(λ, m-1), beam attenuation α(λ, m-1), and beam-to-diffuse conversion c(λ, m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf

  5. Evaluating the importance of characterizing soil structure and horizons in parameterizing a hydrologic process model

    Science.gov (United States)

    Mirus, Benjamin B.

    2015-01-01

    Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.

  6. Parameterization of strombolian explosions: constraint from simultaneous physical and geophysical measurements (Invited)

    Science.gov (United States)

    gurioli, L.; Harris, A. J.

    2013-12-01

    Strombolian activity is the most common type of explosive eruption (by frequency) experienced by Earth's volcanoes. It is commonly viewed as consisting of a succession of short, discrete explosions in which fragments of incandescent magma are ejected a few tens to hundreds of meters into the air. This kind of activity is generally restricted to basaltic or basaltic-andesitic magmas because these systems have sufficiently low viscosities to allow gas coalescence and decoupled slug ascent. Mercalli (1907) proposed one of the first formal classifications of explosive activity based on the character of the erupted products and descriptions of case-type eruptions. Later, Walker (1973) devised a classification based on grain size and dispersion, within which strombolian explosions formed the low-to-middle end of the classification. Other classifications have categorized strombolian activity on the basis of erupted magnitude and/or intensity, such as Newhall and Self's (1982) Volcanic Explosivity Index (VEI). Classification can also be made on the basis of explosion mechanism, where strombolian eruptions have become associated with the bursting of large gas bubbles, as opposed to the release of locked-in bubble populations in rapidly ascending magma that feed sustained fountains. Finally, strombolian eruptions can be defined on the basis of geophysical metrics for the explosion source and plume ascent dynamics. Recently, the volcanology community has begun to discuss the difficulty of actually placing strombolian explosions within the compartments defined by each scheme. New sampling strategies in active strombolian volcanic fields have allowed us to parameterize these mildly explosive events both physically and geophysically. Our data show that individual 'normal' and 'major' explosions at Stromboli are extremely small, meaning that the classical deposit-based classification thresholds need to be reduced, or a new category defined, if the 'strombolian' eruption style at

  7. Assessing Impact, DIF, and DFF in Accommodated Item Scores: A Comparison of Multilevel Measurement Model Parameterizations

    Science.gov (United States)

    Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D.

    2012-01-01

    This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…

  8. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant trait database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential to advance our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
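
    One way to generate the spatially correlated parameters described above is to sample a multivariate normal field with a prescribed covariance. The sketch below draws a single spatially correlated realization using an exponential covariance and a Cholesky factorization; the trait mean, standard deviation and correlation length are hypothetical stand-ins for TRY-derived statistics.

```python
import numpy as np

rng = np.random.default_rng(42)

def correlated_trait_field(coords, mean, sd, corr_length):
    """One realization of a spatially correlated trait parameter field.
    Covariance C(d) = sd**2 * exp(-d / corr_length); sampled via Cholesky."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sd**2 * np.exp(-d / corr_length) + 1e-10 * np.eye(len(coords))  # jitter for stability
    return mean + np.linalg.cholesky(cov) @ rng.standard_normal(len(coords))

# 100 grid cells scattered over a 10 km x 10 km domain; hypothetical trait statistics
coords = rng.uniform(0.0, 10.0, size=(100, 2))
field = correlated_trait_field(coords, mean=50.0, sd=8.0, corr_length=3.0)
print(round(field.min(), 1), round(field.mean(), 1), round(field.max(), 1))
```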

  9. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004), in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.
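
    The vertical distribution step can be sketched as follows: a column-integrated form-drag stress is spread over the model levels with an assumed exponential decay in height and converted to wind tendencies. The profile shape, decay height and input numbers are illustrative assumptions, not the Beljaars et al. (2004) or Steeneveld/Tsiringakis formulations.

```python
import numpy as np

def distribute_form_drag(u, z, rho, tau_surface, decay_height=1500.0):
    """Spread a column-integrated drag stress (N m-2) over levels and return du/dt (m s-2).
    The exponential weighting and decay height are illustrative choices."""
    weights = np.exp(-z / decay_height)
    dz = np.gradient(z)                       # approximate layer thicknesses (m)
    weights /= np.sum(weights * dz)           # normalize so the stress integrates to tau_surface
    return -np.sign(u) * tau_surface * weights / rho

z = np.array([25.0, 100.0, 300.0, 700.0, 1500.0, 3000.0])  # level heights (m)
u = np.array([6.0, 8.0, 10.0, 12.0, 13.0, 14.0])           # zonal wind (m/s)
rho = np.array([1.20, 1.19, 1.16, 1.12, 1.03, 0.90])       # air density (kg m-3)
print(distribute_form_drag(u, z, rho, tau_surface=0.3))
```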

  10. Modeling of clouds and radiation for development of parameterizations for general circulation models

    International Nuclear Information System (INIS)

    Westphal, D.; Toon, B.; Jensen, E.; Kinne, S.; Ackerman, A.; Bergstrom, R.; Walker, A.

    1994-01-01

    Atmospheric Radiation Measurement (ARM) Program research at NASA Ames Research Center (ARC) includes radiative transfer modeling, cirrus cloud microphysics, and stratus cloud modeling. These efforts are designed to provide the basis for improving cloud and radiation parameterizations in our main effort: mesoscale cloud modeling. The range of non-convective cloud models used by the ARM modeling community can be crudely categorized based on the number of predicted hydrometeors, such as cloud water, ice water, rain, snow, graupel, etc. The simplest model has no predicted hydrometeors and diagnoses the presence of clouds based on the predicted relative humidity. The vast majority of cloud models have two or more predicted bulk hydrometeors and are termed either bulk water (BW) or size-resolving (SR) schemes. This study compares the various cloud models within the same dynamical framework, and compares results with observations rather than climate statistics.

  11. Parameterization of cirrus microphysical and radiative properties in larger-scale models

    International Nuclear Information System (INIS)

    Heymsfield, A.J.; Coen, J.L.

    1994-01-01

    This study exploits measurements in clouds sampled during several field programs to develop and validate parameterizations that represent the physical and radiative properties of convectively generated cirrus clouds in intermediate and large-scale models. The focus is on cirrus anvils because they occur frequently, cover large areas, and play a large role in the radiation budget. Preliminary work focuses on understanding the microphysical, radiative, and dynamical processes that occur in these clouds. A detailed microphysical package has been constructed that considers the growth of the following hydrometeor types: water drops, needles, plates, dendrites, columns, bullet rosettes, aggregates, graupel, and hail. Particle growth processes include diffusional and accretional growth, aggregation, sedimentation, and melting. This package is being implemented in a simple dynamical model that tracks the evolution and dispersion of hydrometeors in a stratiform anvil cloud. Given the momentum, vapor, and ice fluxes into the stratiform region and the temperature and humidity structure in the anvil's environment, this model will suggest anvil properties and structure.

  12. A new albedo parameterization for use in climate models over the Antarctic ice sheet

    NARCIS (Netherlands)

    Kuipers Munneke, P.; van den Broeke, M.R.; Lenaerts, J.T.M.; Flanner, M.G.; Gardner, A.S.; van de Berg, W.J.

    2011-01-01

    A parameterization for broadband snow surface albedo, based on snow grain size evolution, cloud optical thickness, and solar zenith angle, is implemented into a regional climate model for Antarctica and validated against field observations of albedo for the period 1995–2004. Over the Antarctic

  13. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate...

  14. Regional modelling of tracer transport by tropical convection – Part 1: Sensitivity to convection parameterization

    Directory of Open Access Journals (Sweden)

    J. Arteta

    2009-09-01

    Full Text Available The general objective of this series of papers is to evaluate long-duration limited-area simulations with idealised tracers as a tool to assess tracer transport in chemistry-transport models (CTMs). In this first paper, we analyse the results of six simulations using different convection closures and parameterizations. The simulations use the Grell and Dévényi (2002) mass-flux framework for the convection parameterization with different closures (Grell = GR, Arakawa-Schubert = AS, Kain-Fritsch = KF, Low omega = LO, Moisture convergence = MC) and an ensemble parameterization (EN) based on the other five closures. The simulations are run for one month during the SCOUT-O3 field campaign led from Darwin (Australia). They have a 60 km horizontal resolution and a fine vertical resolution in the upper troposphere/lower stratosphere. Meteorological results are compared with satellite products, radiosoundings and SCOUT-O3 aircraft campaign data. They show that the model is generally in good agreement with the measurements, with less variability in the model. Except for the precipitation field, the differences between the six simulations are small on average with respect to the differences with the meteorological observations. The comparison with TRMM rain rates shows that the six parameterizations or closures have similar behaviour concerning convection triggering times and locations. However, the six simulations provide two different behaviours for rainfall values, with the EN, AS and KF parameterizations (Group 1) modelling better rain fields than LO, MC and GR (Group 2). The vertical distribution of tropospheric tracers is very different for the two groups, showing significantly more transport into the TTL for Group 1, related to the larger average values of the upward velocities. Nevertheless, the low values of the Group 1 fluxes at and above the cold-point level indicate that the model does not simulate significant overshooting. For stratospheric tracers

  15. A generalized and parameterized interference model for cognitive radio networks

    KAUST Repository

    Mahmood, Nurul Huda; Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2011-01-01

    For meaningful co-existence of cognitive radios with a primary system, it is imperative that the cognitive radio system is aware of how much interference it generates at the primary receivers. This can be done through statistical modeling

  16. Parameterization of clouds and radiation in climate models

    Energy Technology Data Exchange (ETDEWEB)

    Roeckner, E. [Max Planck Institute for Meterology, Hamburg (Germany)

    1995-09-01

    Clouds are a very important, yet poorly modeled element in the climate system. There are many potential cloud feedbacks, including those related to cloud cover, height, water content, phase change, and droplet concentration and size distribution. As a prerequisite to studying the cloud feedback issue, this research reports on the simulation and validation of cloud radiative forcing under present climate conditions using the ECHAM general circulation model and ERBE top-of-atmosphere radiative fluxes.

  17. A generalized and parameterized interference model for cognitive radio networks

    KAUST Repository

    Mahmood, Nurul Huda

    2011-06-01

    For meaningful co-existence of cognitive radios with a primary system, it is imperative that the cognitive radio system is aware of how much interference it generates at the primary receivers. This can be done through statistical modeling of the interference as perceived at the primary receivers. In this work, we propose a generalized model for the interference generated by a cognitive radio network, in the presence of small- and large-scale fading, at a primary receiver located at the origin. We then demonstrate how this model can be used to estimate the impact of cognitive radio transmission on the primary receiver in terms of different outage probabilities. Finally, our analytical findings are validated through some selected computer-based simulations. © 2011 IEEE.
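
    The following is a minimal Monte Carlo sketch of the kind of statistical interference modeling described above, not the paper's analytical model: secondary transmitters are scattered at random around a primary receiver, aggregate interference is accumulated under power-law path loss and Rayleigh fading, and an outage probability is estimated against an arbitrary, illustrative threshold. All parameter names and values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def aggregate_interference(n_trials=20_000, density=1e-4, radius=2_000.0,
                               guard=50.0, p_tx=0.1, alpha=3.5):
        """Monte Carlo sketch of aggregate interference (arbitrary units) at a
        primary receiver at the origin, from secondary users scattered in an
        annulus [guard, radius] with power-law path loss and Rayleigh fading."""
        area = np.pi * (radius**2 - guard**2)
        samples = np.empty(n_trials)
        for i in range(n_trials):
            n = rng.poisson(density * area)                   # active secondaries
            r = np.sqrt(rng.uniform(guard**2, radius**2, n))  # uniform in annulus
            h = rng.exponential(1.0, n)                       # Rayleigh fading power
            samples[i] = np.sum(p_tx * h * r**(-alpha))
        return samples

    interference = aggregate_interference()
    threshold = np.percentile(interference, 99)   # hypothetical tolerance level
    print("outage probability:", np.mean(interference > threshold))
    ```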

  18. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    Science.gov (United States)

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
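
    As a rough illustration of the two parameterizations the abstract refers to (orthonormally parameterized input trajectories and output trajectories summarized by Fourier coefficients), the sketch below represents a batch input profile by coefficients of shifted Legendre polynomials and compresses a toy output trajectory into its leading Fourier coefficients. This is not the PDDF model itself; the function names and the toy "process" are assumptions for illustration.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def input_from_coeffs(c, t):
        """Reconstruct an input trajectory u(t), t in [0, 1], from coefficients
        of shifted Legendre polynomials (an orthonormal parameterization)."""
        x = 2.0 * t - 1.0                               # map [0, 1] -> [-1, 1]
        norm = np.sqrt(2.0 * np.arange(len(c)) + 1.0)   # orthonormal scaling
        return legendre.legval(x, c * norm)

    def fourier_coeffs(y, n_coeffs=5):
        """Leading real Fourier coefficients summarizing an output trajectory."""
        spec = np.fft.rfft(y) / len(y)
        return np.concatenate([[spec[0].real],
                               spec[1:n_coeffs].real, spec[1:n_coeffs].imag])

    t = np.linspace(0.0, 1.0, 201)
    u = input_from_coeffs(np.array([1.0, 0.3, -0.1]), t)   # hypothetical feed rate
    y = np.cumsum(u) * (t[1] - t[0])                       # toy "process output"
    print(fourier_coeffs(y))
    ```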

  19. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle with adverse environmental impacts. Low impact development tools (LID) that mimic hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering the model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining both a detailed surface discretization, allowing direct parameter manipulation for LID simulation, and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM) and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  20. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that is not researched enough. Calibration is a procedure of determining the parameters of a model that are not known well enough. Input and output variables and mathematical model expressions are known, while only some parameters are unknown, and these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little opportunity to manage the process, and the results are often not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge and achieves better results than if the procedure had been left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological features such as karstic, alluvial and forest areas. This step requires geological, meteorological, hydraulic and hydrological knowledge on the part of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group

  1. Modeling river dune evolution using a parameterization of flow separation

    NARCIS (Netherlands)

    Paarlberg, Andries J.; Dohmen-Janssen, C. Marjolein; Hulscher, Suzanne J.M.H.; Termes, Paul

    2009-01-01

    This paper presents an idealized morphodynamic model to predict river dune evolution. The flow field is solved in a vertical plane assuming hydrostatic pressure conditions. The sediment transport is computed using a Meyer-Peter–Müller type of equation, including gravitational bed slope effects and a

  2. Evaluation of a stratiform cloud parameterization for general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  3. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  4. Ice shelf fracture parameterization in an ice sheet model

    Directory of Open Access Journals (Sweden)

    S. Sun

    2017-11-01

    Floating ice shelves exert a stabilizing force onto the inland ice sheet. However, this buttressing effect is diminished by the fracture process, which on large scales effectively softens the ice, accelerating its flow, increasing calving, and potentially leading to ice shelf breakup. We add a continuum damage model (CDM) to the BISICLES ice sheet model, which is intended to model the localized opening of crevasses under stress, the transport of those crevasses through the ice sheet, and the coupling between crevasse depth and the ice flow field, and we carry out idealized numerical experiments examining the broad impact on large-scale ice sheet and shelf dynamics. In each case we see a complex pattern of damage evolve over time, with an eventual loss of buttressing approximately equivalent to halving the thickness of the ice shelf. We find that it is possible to achieve a similar ice flow pattern using a simple rule of thumb: introducing an enhancement factor ∼ 10 everywhere in the model domain. However, spatially varying damage (or equivalently, enhancement factor) fields set at the start of prognostic calculations to match velocity observations, as is widely done in ice sheet simulations, ought to evolve in time, or grounding line retreat can be slowed by an order of magnitude.

  5. Ice shelf fracture parameterization in an ice sheet model

    Science.gov (United States)

    Sun, Sainan; Cornford, Stephen L.; Moore, John C.; Gladstone, Rupert; Zhao, Liyun

    2017-11-01

    Floating ice shelves exert a stabilizing force onto the inland ice sheet. However, this buttressing effect is diminished by the fracture process, which on large scales effectively softens the ice, accelerating its flow, increasing calving, and potentially leading to ice shelf breakup. We add a continuum damage model (CDM) to the BISICLES ice sheet model, which is intended to model the localized opening of crevasses under stress, the transport of those crevasses through the ice sheet, and the coupling between crevasse depth and the ice flow field, and we carry out idealized numerical experiments examining the broad impact on large-scale ice sheet and shelf dynamics. In each case we see a complex pattern of damage evolve over time, with an eventual loss of buttressing approximately equivalent to halving the thickness of the ice shelf. We find that it is possible to achieve a similar ice flow pattern using a simple rule of thumb: introducing an enhancement factor ˜ 10 everywhere in the model domain. However, spatially varying damage (or equivalently, enhancement factor) fields set at the start of prognostic calculations to match velocity observations, as is widely done in ice sheet simulations, ought to evolve in time, or grounding line retreat can be slowed by an order of magnitude.
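
    For orientation, a common way that scalar damage D enters an ice rheology (a generic continuum-damage scaling, not necessarily the exact BISICLES CDM formulation) is through an effective stress, which in Glen's flow law acts like an enhancement factor:

    \[
    \tilde{\tau} = \frac{\tau}{1-D},
    \qquad
    \dot{\varepsilon} = A\,\tilde{\tau}^{\,n} = (1-D)^{-n}\,A\,\tau^{\,n},
    \]

    so that with the usual Glen exponent n = 3, an enhancement factor of about 10 corresponds to a uniform damage of D ≈ 0.54. This is only a schematic link between the two quantities compared in the abstract.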

  6. Macromolecular refinement by model morphing using non-atomic parameterizations.

    Science.gov (United States)

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  7. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    Science.gov (United States)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models which might provide insight into numerical implementation, the input and dissipation of kinetic energy between these two turbulent models differ.
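
    As a schematic illustration of the distinction drawn here (not the specific Anstey–Zanna or LANS-α equations), a Newtonian eddy closure writes the subgrid stress as linear in the resolved deformation, whereas a non-Newtonian (Rivlin–Ericksen type) closure admits terms that are nonlinear in it:

    \[
    \boldsymbol{\tau}_{\mathrm{Newtonian}} = -2\,\nu_e\,\mathbf{D},
    \qquad
    \boldsymbol{\tau}_{\mathrm{non\text{-}Newtonian}} = \alpha_1\,\mathbf{D} + \alpha_2\,\mathbf{D}^{2} + \dots,
    \qquad
    \mathbf{D} = \tfrac{1}{2}\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\bigr),
    \]

    where the coefficients may depend on the invariants of D. Which of these higher-order terms appear, and how they input and dissipate kinetic energy, is exactly the point of comparison made in the note above.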

  8. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    Science.gov (United States)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. This includes classical nucleation theory-based variants, empirical models and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving binary and ternary homogeneous nucleation (BHN and THN), including ion effects, these new models should be considered worthwhile replacements for the old models. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation consists of a distinct post-nucleation growth regime which is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds that help overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models and there is a lack of comprehensive parameterizations. This review considers the existing models and recent innovations.

  9. Warm Bias and Parameterization of Boundary Upwelling in Ocean Models

    Energy Technology Data Exchange (ETDEWEB)

    Cessi, Paola; Wolfe, Christopher

    2012-11-06

    It has been demonstrated that Eastern Boundary Currents (EBC) are a baroclinic intensification of the interior circulation of the ocean due to the emergence of mesoscale eddies in response to the sharp buoyancy gradients driven by the wind-stress and the thermal surface forcing. The eddies accomplish the heat and salt transport necessary to ensure that the subsurface flow is adiabatic, compensating for the heat and salt transport effected by the mean currents. The EBC thus generated occurs on a cross-shore scale of order 20-100 km, and this scale therefore needs to be resolved in climate models in order to capture the meridional transport by the EBC. Our results indicate that changes in the nearshore currents on the oceanic eastern boundaries are linked not just to local forcing, such as coastal changes in the winds, but depend on the basin-wide circulation as well.

  10. Amplification of intrinsic emittance due to rough metal cathodes: Formulation of a parameterization model

    Energy Technology Data Exchange (ETDEWEB)

    Charles, T.K. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia); Paganin, D.M. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Dowd, R.T. [Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia)

    2016-08-21

    Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources and, as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model, which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.

  11. Mesoscale model parameterizations for radiation and turbulent fluxes at the lower boundary

    International Nuclear Information System (INIS)

    Somieski, F.

    1988-11-01

    A radiation parameterization scheme for use in mesoscale models with orography and clouds has been developed. Broadband parameterizations are presented for the solar and the terrestrial spectral ranges. They account for clear, turbid or cloudy atmospheres. The scheme is one-dimensional in the atmosphere, but the effects of mountains (inclination, shading, elevated horizon) are taken into account at the surface. In the terrestrial band, grey and black clouds are considered. Furthermore, the calculation of turbulent fluxes of sensible and latent heat and momentum at an inclined lower model boundary is described. Surface-layer similarity and the surface energy budget are used to evaluate the ground surface temperature. The total scheme is part of the mesoscale model MESOSCOP. (orig.) With 3 figs., 25 refs [de

  12. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    Science.gov (United States)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

    During the ESCOMPTE campaign (Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement compared to the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.
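
    The sketch below shows one widely used form of a Zilitinkevich-type relation linking the thermal roughness length to the momentum roughness length through the roughness Reynolds number. It is a generic illustration, assuming a coefficient C_zil ≈ 0.1 and standard air viscosity; the exact constants and formulation used in ARPS for this study may differ.

    ```python
    import numpy as np

    KARMAN = 0.4
    NU_AIR = 1.5e-5          # kinematic viscosity of air [m2 s-1]

    def z0h_zilitinkevich(z0m, u_star, c_zil=0.1):
        """Thermal roughness length from momentum roughness length via a
        Zilitinkevich-type relation: kB^-1 = ln(z0m/z0h) = k * C_zil * sqrt(Re*),
        with Re* = u* z0m / nu and C_zil an empirical constant (~0.01-0.1)."""
        re_star = u_star * z0m / NU_AIR
        kb_inv = KARMAN * c_zil * np.sqrt(re_star)
        return z0m * np.exp(-kb_inv)

    # Example with hypothetical urban-surface values: z0m = 1 m, u* = 0.4 m/s
    print(z0h_zilitinkevich(z0m=1.0, u_star=0.4))
    ```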

  13. Solvation of monovalent anions in formamide and methanol: Parameterization of the IEF-PCM model

    International Nuclear Information System (INIS)

    Böes, Elvis S.; Bernardi, Edson; Stassen, Hubert; Gonçalves, Paulo F.B.

    2008-01-01

    The thermodynamics of solvation for a series of monovalent anions in formamide and methanol has been studied using the polarizable continuum model (PCM). The parameterization of this continuum model was guided by molecular dynamics simulations. The parameterized PCM model predicts the Gibbs free energies of solvation for 13 anions in formamide and 16 anions in methanol in very good agreement with experimental data. Two sets of atomic radii were tested in the definition of the solute cavities in the PCM and their performances are evaluated and discussed. Mean absolute deviations of the calculated free energies of solvation from the experimental values are in the range of 1.3-2.1 kcal/mol

  14. Parameterization of Rocket Dust Storms on Mars in the LMD Martian GCM: Modeling Details and Validation

    Science.gov (United States)

    Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas

    2018-04-01

    The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included into the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.

  15. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only CCN input required by these parameterizations is a few values of the CCN spectrum (as provided by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities characterizing the CCN spectrum, such as C and k in the relation N = C S^k, or its breadth, total number and median radius. The new parameterizations' predictions of initial cloud droplet size distribution for the bin method were verified by using the aforesaid hybrid microphysical model. The newly developed parameterizations will save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
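
    To make the contrast drawn above concrete, the sketch below compares a conventional Twomey-type power-law CCN spectrum, which requires fitted parameters C and k, with direct use of a few measured points of the cumulative spectrum (here simply interpolated). This is an illustration of the input difference only, not the new parameterization itself; all numbers are hypothetical.

    ```python
    import numpy as np

    def n_activated_power_law(s_max, C=200.0, k=0.5):
        """Conventional Twomey-type spectrum: N(S) = C * S^k (S in %, N in cm^-3)."""
        return C * s_max**k

    def n_activated_from_spectrum(s_max, s_points, n_points):
        """Use a few measured CCN-counter points of the cumulative spectrum
        directly, via log-linear interpolation in supersaturation."""
        return np.interp(np.log(s_max), np.log(s_points), n_points)

    s_points = np.array([0.1, 0.2, 0.5, 1.0])     # supersaturations [%], hypothetical
    n_points = np.array([60., 95., 160., 210.])   # activated CCN [cm^-3], hypothetical

    print(n_activated_power_law(0.3),
          n_activated_from_spectrum(0.3, s_points, n_points))
    ```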

  16. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
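
    For reference, Tikhonov regularization in this context augments the measurement objective function with a penalty on departures from preferred parameter conditions, schematically:

    \[
    \Phi_{\mathrm{total}}(\mathbf{p}) = \Phi_{\mathrm{m}}(\mathbf{p}) + \mu^{2}\,\Phi_{\mathrm{r}}(\mathbf{p}),
    \qquad
    \Phi_{\mathrm{m}} = \sum_{i} w_i^{2}\bigl[o_i - y_i(\mathbf{p})\bigr]^{2},
    \qquad
    \Phi_{\mathrm{r}} = \bigl\|\mathbf{R}\,(\mathbf{p}-\mathbf{p}_{0})\bigr\|^{2},
    \]

    where the o_i are observations with weights w_i, the y_i(p) are model outputs, p_0 encodes preferred parameter values (for pilot points, often a smoothness or homogeneity condition expressed through R), and the weight μ is adjusted during the inversion so that Φ_m is reduced only to a user-specified target (PHIMLIM in PEST) rather than to its unconstrained minimum.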

  17. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; insights from long-term ARM records at Manus and Nauru.

  18. Classification of parameter-dependent quantum integrable models, their parameterization, exact solution and other properties

    International Nuclear Information System (INIS)

    Owusu, Haile K; Yuzbashyan, Emil A

    2011-01-01

    We study general quantum integrable Hamiltonians linear in a coupling constant and represented by finite N x N real symmetric matrices. The restriction on the coupling dependence leads to a natural notion of nontrivial integrals of motion and classification of integrable families into types according to the number of such integrals. A type M family in our definition is formed by N-M nontrivial mutually commuting operators linear in the coupling. Working from this definition alone, we parameterize type M operators, i.e. resolve the commutation relations, and obtain an exact solution for their eigenvalues and eigenvectors. We show that our parameterization covers all type 1, 2 and 3 integrable models and discuss the extent to which it is complete for other types. We also present robust numerical observation on the number of energy-level crossings in type M integrable systems and analyze the taxonomy of types in the 1D Hubbard model. (paper)
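
    Written out, the defining property described above is that the Hamiltonian and its integrals of motion are all linear in the coupling u and commute for every value of u:

    \[
    H(u) = T + uV,
    \qquad
    H_i(u) = T_i + uV_i,
    \qquad
    \bigl[H(u),\,H_i(u)\bigr] = \bigl[H_i(u),\,H_j(u)\bigr] = 0 \quad \text{for all } u,
    \]

    with a type M family consisting of N − M such nontrivial, mutually commuting N × N real symmetric operators.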

  19. Framework of cloud parameterization including ice for 3-D mesoscale models

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Jacob, D; Eppel, D; Grassl, H

    1989-01-01

    A parameterization scheme for the simulation of ice in clouds is incorporated into the hydrostatic version of the GKSS three-dimensional mesoscale model. Numerical simulations of precipitation are performed over the North Sea, the Hawaiian trade wind area and in the region of the intertropical convergence zone. Not only some major features of convective structures in all three areas but also cloud-aerosol interactions have successfully been simulated. (orig.) With 19 figs., 2 tabs.

  20. Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN

    Directory of Open Access Journals (Sweden)

    P. Kumar

    2009-04-01

    Dust and black carbon aerosol have long been known to exert potentially important and diverse impacts on cloud droplet formation. Most studies to date focus on the soluble fraction of these particles, and overlook interactions of the insoluble fraction with water vapor (even if known to be hydrophilic). To address this gap, we developed a new parameterization that considers cloud droplet formation within an ascending air parcel containing insoluble (but wettable) particles externally mixed with aerosol containing an appreciable soluble fraction. Activation of particles with a soluble fraction is described through well-established Köhler theory, while the activation of hydrophilic insoluble particles is treated by "adsorption-activation" theory. In the latter, water vapor is adsorbed onto insoluble particles, the activity of which is described by a multilayer Frenkel-Halsey-Hill (FHH) adsorption isotherm modified to account for particle curvature. We further develop FHH activation theory to (i) find combinations of the adsorption parameters A_FHH, B_FHH which yield atmospherically relevant behavior, and (ii) express activation properties (critical supersaturation) that follow a simple power law with respect to dry particle diameter.

    The new parameterization is tested by comparing the parameterized cloud droplet number concentration against predictions with a detailed numerical cloud model, considering a wide range of particle populations, cloud updraft conditions, water vapor condensation coefficient and FHH adsorption isotherm characteristics. The agreement between parameterization and parcel model is excellent, with an average error of 10% and R2~0.98. A preliminary sensitivity study suggests that the sublinear response of droplet number to Köhler particle concentration is not as strong for FHH particles.
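
    For orientation, the curvature-corrected FHH equilibrium curve and the resulting power law have the schematic form below (symbols as in the abstract; the exact coefficients and fits are derived in the paper):

    \[
    \ln S_{\mathrm{eq}} = \frac{4 M_w \sigma_w}{R\,T\,\rho_w\,D_p} \;-\; A_{\mathrm{FHH}}\,\Theta^{-B_{\mathrm{FHH}}},
    \qquad
    s_c \propto D_{\mathrm{dry}}^{-x},
    \]

    where Θ is the number of adsorbed water monolayers, D_p the wet particle diameter, and the exponent x depends on A_FHH and B_FHH (for classical Köhler theory the corresponding exponent is 3/2).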

  1. Morphing methods to parameterize specimen-specific finite element model geometries.

    Science.gov (United States)

    Sigal, Ian A; Yang, Hongli; Roberts, Michael D; Downs, J Crawford

    2010-01-19

    Shape plays an important role in determining the biomechanical response of a structure. Specimen-specific finite element (FE) models have been developed to capture the details of the shape of biological structures and predict their biomechanics. Shape, however, can vary considerably across individuals or change due to aging or disease, and analysis of the sensitivity of specimen-specific models to these variations has proven challenging. An alternative to specimen-specific representation has been to develop generic models with simplified geometries whose shape is relatively easy to parameterize, and can therefore be readily used in sensitivity studies. Despite many successful applications, generic models are limited in that they cannot make predictions for individual specimens. We propose that it is possible to harness the detail available in specimen-specific models while leveraging the power of the parameterization techniques common in generic models. In this work we show that this can be accomplished by using morphing techniques to parameterize the geometry of specimen-specific FE models such that the model shape can be varied in a controlled and systematic way suitable for sensitivity analysis. We demonstrate three morphing techniques by using them on a model of the load-bearing tissues of the posterior pole of the eye. We show that using relatively straightforward procedures these morphing techniques can be combined, which allows the study of factor interactions. Finally, we illustrate that the techniques can be used in other systems by applying them to morph a femur. Morphing techniques provide an exciting new possibility for the analysis of the biomechanical role of shape, independently or in interaction with loading and material properties. Copyright 2009 Elsevier Ltd. All rights reserved.

  2. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    Science.gov (United States)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.

  3. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Directory of Open Access Journals (Sweden)

    G. B. Bonan

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin–Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.

  4. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Science.gov (United States)

    Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.

  5. The urban land use in the COSMO-CLM model: a comparison of three parameterizations for Berlin

    Directory of Open Access Journals (Sweden)

    Kristina Trusilova

    2016-05-01

    The regional non-hydrostatic climate model COSMO-CLM is increasingly being used on fine spatial scales of 1–5 km. Such applications require a detailed differentiation between the parameterization for natural and urban land uses. Since 2010, three parameterizations for urban land use have been incorporated into COSMO-CLM. These parameterizations vary in their complexity, required city parameters and their computational cost. We perform model simulations with the COSMO-CLM coupled to these three parameterizations for urban land in the same model domain of Berlin on a 1-km grid and compare results with available temperature observations. While all models capture the urban heat island, they differ in spatial detail, magnitude and the diurnal variation.

  6. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab; Zedler, Sarah E.; Knio, Omar; Jackson, Charles S.; Hoteit, Ibrahim

    2016-01-01

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference
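
    Schematically, the PC approach builds an inexpensive surrogate of the model response as an expansion in orthogonal polynomials of the uncertain parameters, and that surrogate then stands in for the model inside Bayes' rule. This is a generic sketch of the method, not the specific implementation for the MITgcm/KPP study:

    \[
    G(\boldsymbol{\theta}) \;\approx\; \widehat{G}(\boldsymbol{\theta}) = \sum_{k=0}^{P} c_k\,\Psi_k\bigl(\boldsymbol{\xi}(\boldsymbol{\theta})\bigr),
    \qquad
    p(\boldsymbol{\theta}\mid\mathbf{d}) \;\propto\; p\bigl(\mathbf{d}\mid\widehat{G}(\boldsymbol{\theta})\bigr)\,p(\boldsymbol{\theta}),
    \]

    where the Ψ_k are orthogonal polynomials in standardized variables ξ and the coefficients c_k are determined from a limited ensemble of forward model runs, so that sampling the posterior requires only surrogate evaluations.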

  7. Parameterization of a bucket model for soil-vegetation-atmosphere modeling under seasonal climatic regimes

    Directory of Open Access Journals (Sweden)

    N. Romano

    2011-12-01

    We investigate the potential impact of accounting for seasonal variations in the climatic forcing and using different methods to parameterize the soil water content at field capacity on the water balance components computed by a bucket model (BM). The single-layer BM of Guswa et al. (2002) is employed, whereas the Richards equation (RE) based Soil Water Atmosphere Plant (SWAP) model is used as a benchmark model. The results are analyzed for two differently-textured soils and for some synthetic runs under real-like seasonal weather conditions, using stochastically-generated daily rainfall data for a period of 100 years. Since transient soil-moisture dynamics and climatic seasonality play a key role in certain zones of the world, such as in Mediterranean land areas, a specific feature of this study is to test the prediction capability of the bucket model under a condition where seasonal variations in rainfall are not in phase with the variations in plant transpiration. Reference is made to a hydrologic year in which we have a rainy period (starting 1 November and lasting 151 days) during which vegetation is basically assumed to be in a dormant stage, followed by a drier and rainless period with a vegetation regrowth phase. Better agreement between BM and RE-SWAP intercomparison results is obtained when BM is parameterized by a field capacity value determined through the drainage method proposed by Romano and Santini (2002). Depending on the vegetation regrowth or dormant seasons, rainfall variability within a season results in transpiration regimes and soil moisture fluctuations with distinctive features. During the vegetation regrowth season, transpiration exerts a key control on soil water budget with respect to rainfall. During the dormant season of vegetation, the precipitation regime becomes an important climate forcing. Simulations also highlight the occurrence of bimodality in the probability distribution of soil moisture during the season when plants are
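
    The sketch below is a minimal single-layer bucket water balance of the kind discussed above, stepping daily: storage gains rainfall, sheds drainage above a field-capacity storage (the quantity whose parameterization the study examines), and loses transpiration limited by available water. It is not the Guswa et al. (2002) model; all names and numbers are hypothetical.

    ```python
    import numpy as np

    def bucket_daily(rain, pet, s_fc=120.0, s_wilt=30.0, s0=80.0):
        """Minimal single-layer bucket model (all storages/fluxes in mm/day).
        s_fc: storage at field capacity; s_wilt: wilting-point storage."""
        s, out = s0, []
        for r, e in zip(rain, pet):
            s += r
            drainage = max(s - s_fc, 0.0)          # excess above field capacity drains
            s -= drainage
            avail = max((s - s_wilt) / (s_fc - s_wilt), 0.0)
            et = min(e * avail, s - s_wilt) if s > s_wilt else 0.0  # water-limited ET
            s -= et
            out.append((s, et, drainage))
        return np.array(out)

    rng = np.random.default_rng(1)
    rain = rng.exponential(2.0, 365) * (rng.random(365) < 0.3)   # stochastic daily rain
    pet = 3.0 + 2.0 * np.sin(2 * np.pi * np.arange(365) / 365)   # seasonal PET cycle
    print(bucket_daily(rain, pet)[-1])   # final-day storage, ET, drainage
    ```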

  8. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  9. Modelling and parameterizing the influence of tides on ice-shelf melt rates

    Science.gov (United States)

    Jourdain, N.; Molines, J. M.; Le Sommer, J.; Mathiot, P.; de Lavergne, C.; Gurvan, M.; Durand, G.

    2017-12-01

    Significant Antarctic ice sheet thinning is observed in several sectors of Antarctica, in particular in the Amundsen Sea sector, where warm circumpolar deep waters affect basal melting. The latter has the potential to trigger marine ice sheet instabilities, with an associated potential for rapid sea level rise. It is therefore crucial to simulate and understand the processes associated with ice-shelf melt rates. In particular, the absence of tides representation in ocean models remains a caveat of numerous ocean hindcasts and climate projections. In the Amundsen Sea, tides are relatively weak and the melt-induced circulation is stronger than the tidal circulation. Using a regional 1/12° ocean model of the Amundsen Sea, we nonetheless find that tides can increase melt rates by up to 36% in some ice-shelf cavities. Among the processes that can possibly affect melt rates, the most important is an increased exchange at the ice/ocean interface resulting from the presence of strong tidal currents along the ice drafts. Approximately a third of this effect is compensated by a decrease in thermal forcing along the ice draft, which is related to an enhanced vertical mixing in the ocean interior in presence of tides. Parameterizing the effect of tides is an alternative to the representation of explicit tides in an ocean model, and has the advantage of not requiring any filtering of ocean model outputs. We therefore explore different ways to parameterize the effects of tides on ice shelf melt. First, we compare several methods to impose tidal velocities along the ice draft. We show that getting a realistic spatial distribution of tidal velocities is important, and it can be deduced from the barotropic velocities of a tide model. Then, we explore several aspects of parameterized tidal mixing to reproduce the tide-induced decrease in thermal forcing along the ice drafts.
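
    The two competing effects described here can be read off a standard exchange-velocity melt formulation, written schematically (the constants and the exact form used in the regional model may differ):

    \[
    m \;\propto\; \gamma_T\,u_*\,\bigl(T_w - T_f\bigr),
    \qquad
    u_* = \sqrt{C_d\,\bigl(u_{\mathrm{ml}}^{2} + u_{\mathrm{tide}}^{2}\bigr)},
    \]

    where C_d is a drag coefficient, u_ml the resolved current and u_tide a prescribed tidal velocity along the ice draft, γ_T a thermal exchange coefficient, and T_w − T_f the thermal forcing. Adding u_tide raises u_* (more ice-ocean exchange), while tidally enhanced mixing can lower T_w − T_f, partly compensating, as described above.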

  10. Linear units improve articulation between social and physical constructs: An example from caregiver parameterization for children supported by complex medical technologies

    Science.gov (United States)

    Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.

    2016-11-01

    Despite broad sweeping pronouncements by international research organizations that social sciences are being integrated into global research programs, little attention has been directed toward obstacles blocking productive collaborations. In particular, social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated physical and social sciences, which this report now asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex, medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into measurement properties of a social- humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R2 >.50), which provided further support for convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving integration of social sciences with contemporary scientific research programs by emphasizing construction of common variables with objective, linear units.

  11. The Effect of Subsurface Parameterizations on Modeled Flows in the Catchment Land Surface Model, Fortuna 2.5

    Science.gov (United States)

    Roningen, J. M.; Eylander, J. B.

    2014-12-01

    Groundwater use and management is subject to economic, legal, technical, and informational constraints and incentives at a variety of spatial and temporal scales. Planned and de facto management practices influenced by tax structures, legal frameworks, and agricultural and trade policies that vary at the country scale may have medium- and long-term effects on the ability of a region to support current and projected agricultural and industrial development. USACE is working to explore and develop global-scale, physically-based frameworks to serve as a baseline for hydrologic policy comparisons and consequence assessment, and such frameworks must include a reasonable representation of groundwater systems. To this end, we demonstrate the effects of different subsurface parameterizations, scaling, and meteorological forcings on surface and subsurface components of the Catchment Land Surface Model Fortuna v2.5 (Koster et al. 2000). We use the Land Information System 7 (Kumar et al. 2006) to process model runs using meteorological components of the Air Force Weather Agency's AGRMET forcing data from 2006 through 2011. Seasonal patterns and trends are examined in areas of the Upper Nile basin, northern China, and the Mississippi Valley. We also discuss the relevance of the model's representation of the catchment deficit with respect to local hydrogeologic structures.

  12. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called "Multiscale Modeling Framework." MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of a conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between CRM and CAM5, IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. The goal of this study is to compare the simulation of the climatology from these three

  13. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
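
    The sketch below illustrates the two ingredients described above in the simplest possible form: a process rate written as a power law of two drop size distribution moments with free parameters, and a random-walk Metropolis step that constrains those parameters against observations. Everything here (the rate form, the synthetic observations, the parameter names) is illustrative and is not the actual BOSS formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def process_rate(m0, m3, a, b, c):
        """Generic parameterized process rate: a power law of two DSD moments."""
        return a * m0**b * m3**c

    # Synthetic "observations" generated from hypothetical true parameters
    m0 = rng.uniform(1e6, 1e8, 50)       # number-like moment
    m3 = rng.uniform(1e-6, 1e-4, 50)     # mass-like moment
    obs = process_rate(m0, m3, 1e-3, 0.8, 1.1) * rng.lognormal(0.0, 0.2, 50)

    def log_post(theta):
        a, b, c = theta
        if a <= 0:
            return -np.inf
        resid = np.log(obs) - np.log(process_rate(m0, m3, a, b, c))
        return -0.5 * np.sum(resid**2) / 0.2**2   # Gaussian likelihood in log space

    # Random-walk Metropolis sampler over (a, b, c)
    theta = np.array([1e-3, 1.0, 1.0])
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [1e-4, 0.02, 0.02])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    print(np.mean(chain[5000:], axis=0))   # posterior-mean estimates of (a, b, c)
    ```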

  14. On the sensitivity of mesoscale models to surface-layer parameterization constants

    Science.gov (United States)

    Garratt, J. R.; Pielke, R. A.

    1989-09-01

    The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure and 2D sea-breeze intensity is generally less than that found in published comparisons related to turbulence closure schemes.

  15. Parameterization of the ACRU model for estimating biophysical and climatological change impacts, Beaver Creek, Alberta

    Science.gov (United States)

    Forbes, K. A.; Kienzle, S. W.; Coburn, C. A.; Byrne, J. M.

    2006-12-01

    Multiple threats, including the intensification of agricultural production, non-renewable resource extraction and climate change, endanger southern Alberta's water supply. The objective of this research is to calibrate and evaluate the Agricultural Catchments Research Unit (ACRU) agrohydrological model, with the end goal of forecasting the impacts of a changing environment on water quantity. The strength of this model is its intensive multi-layered soil water budgeting routine that integrates water movement between the surface and the atmosphere. The ACRU model was parameterized using data from Environment Canada's climate database for a twenty-year period (1984-2004) and was used to simulate streamflow for Beaver Creek. The simulated streamflow was compared to Environment Canada's historical streamflow database to validate the model output. The Beaver Creek watershed, located in the Porcupine Hills of southwestern Alberta, Canada, contains a heterogeneous cover of deciduous and coniferous forest, native prairie grassland and forage crops. In a catchment with highly diversified land cover, canopy architecture cannot be overlooked in rainfall interception parameterization. Preliminary testing of ACRU suggests that streamflows were sensitive to varied levels of leaf area index (LAI), a representative fraction of canopy foliage. Further testing using remotely sensed LAIs will provide a more accurate representation of canopy foliage and ultimately best represent this important element of the hydrological cycle and the associated processes which govern the natural hydrology of the Beaver Creek watershed.

  16. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    Science.gov (United States)

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

    In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the FreeSurfer software. The evaluation involves a measure of sulcus dispersion together with angular and areal distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.

  17. Parameterization and evaluation of sulfate adsorption in a dynamic soil chemistry model

    International Nuclear Information System (INIS)

    Martinson, Liisa; Alveteg, Mattias; Warfvinge, Per

    2003-01-01

    Including sulfate adsorption improves the dynamic behavior of the SAFE model. - Sulfate adsorption was implemented in the dynamic, multi-layer soil chemistry model SAFE. The process is modeled by an isotherm in which sulfate adsorption is considered to be fully reversible and dependent on the sulfate concentration as well as the pH of the soil solution. The isotherm was parameterized by a site-specific series of simple batch experiments at different pH (3.8-5.0) and sulfate concentration (10-260 μmol l⁻¹) levels. Application of the model to the Lake Gaardsjoen roof-covered site shows that including sulfate adsorption improves the dynamic behavior of the model, and that sulfate adsorption and desorption delay the acidification and recovery of the soil. The modeled adsorbed pool of sulfate at the site reached a maximum level of 700 mmol m⁻² in the late 1980s, well in line with experimental data.
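    A minimal sketch of a reversible, pH-dependent adsorption isotherm of the kind described above is given below. The Langmuir-type functional form and all constants are hypothetical illustrations, not the isotherm actually implemented in SAFE.

```python
import numpy as np

def sulfate_adsorbed(c_so4, pH, q_max=700.0, k0=0.05, m=1.5):
    """Illustrative pH-dependent Langmuir-type isotherm (hypothetical form,
    not the SAFE formulation): adsorbed sulfate [mmol m-2] as a reversible
    function of solution concentration c_so4 [umol L-1] and pH.

    q_max : maximum adsorbed pool [mmol m-2]
    k0, m : affinity at pH 3.8 and its pH sensitivity (tuning constants)
    """
    k = k0 * 10.0 ** (-m * (pH - 3.8))   # affinity decreases as pH rises
    return q_max * k * c_so4 / (1.0 + k * c_so4)

# Example over the concentration and pH ranges used in the batch experiments
for pH in (3.8, 4.4, 5.0):
    print(pH, sulfate_adsorbed(np.array([10.0, 100.0, 260.0]), pH))
```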

  18. Parameterizing Urban Canopy Layer transport in a Lagrangian Particle Dispersion Model

    Science.gov (United States)

    Stöckl, Stefan; Rotach, Mathias W.

    2016-04-01

    The percentage of people living in urban areas is rising worldwide; it crossed 50% in 2007 and is even higher in developed countries. High population density and numerous sources of air pollution in close proximity can lead to health issues. Therefore it is important to understand the nature of urban pollutant dispersion. In the last decades this field has experienced considerable progress; however, the influence of large roughness elements is complex and has not yet been completely described. Hence, this work studied urban particle dispersion close to the source and the ground. It used an existing, steady-state, three-dimensional Lagrangian particle dispersion model, which includes Roughness Sublayer parameterizations of turbulence and flow. The model is valid for convective and neutral to stable conditions and uses the kernel method for concentration calculation. As in most Lagrangian models, its lower boundary is the zero-plane displacement, which means that roughly the lower two-thirds of the mean building height are not included in the model. This missing layer roughly coincides with the Urban Canopy Layer. An earlier work "traps" particles hitting the lower model boundary for a recirculation period, which is calculated under the assumption of a vortex in skimming flow, before "releasing" them again. The authors hypothesize that improving the lower boundary condition by including Urban Canopy Layer transport could improve model predictions. This was tested herein by not only trapping the particles but also advecting them with a mean, parameterized flow in the Urban Canopy Layer. The model now calculates the trapping period based on either recirculation due to vortex motion in skimming flow regimes or the vertical velocity if no vortex forms, depending on the incidence angle of the wind on a randomly chosen street canyon. The influence of this modification, as well as the model's sensitivity to parameterization constants, was investigated. To reach this goal, the model was

  19. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

    The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of the effects of historic and future LCLUC on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size-dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to a depth of ~50 m. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather

  20. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    Science.gov (United States)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists of reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  1. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    Science.gov (United States)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either by using the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition among shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain in realism in the model simulation of the diurnal cycle of convection over land. Also, a beta PDF is now employed to represent the normalized mass flux profile. This opens up an additional avenue for applying stochasticity in the scheme.
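    As a concrete illustration of the beta-PDF idea, the sketch below constructs a normalized mass flux profile from a beta distribution in normalized height. The shape parameters and layer heights are hypothetical choices, not values from the GEOS or Grell-Freitas implementation.

```python
import numpy as np
from scipy.stats import beta

def normalized_mass_flux(z, z_base, z_top, a=2.0, b=3.5):
    """Normalized convective mass flux profile from a beta PDF.

    z_base, z_top : cloud base and top heights [m]
    a, b          : beta shape parameters (hypothetical; a < b skews the
                    maximum toward the lower part of the cloud layer)
    Returns a profile scaled so that its maximum is 1.
    """
    zn = np.clip((z - z_base) / (z_top - z_base), 0.0, 1.0)  # normalized height
    profile = beta.pdf(zn, a, b)
    return profile / profile.max()

z = np.linspace(0.0, 12000.0, 61)
eta = normalized_mass_flux(z, z_base=1000.0, z_top=11000.0)
# The dimensional mass flux would be the cloud-base mass flux (from the
# closure) multiplied by eta at each level.
```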

  2. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Directory of Open Access Journals (Sweden)

    Courtney L Davis

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.

  3. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Science.gov (United States)

    Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B

    2018-01-01

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
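    A minimal sketch of the Latin hypercube sampling step used for parameter estimation is shown below; the four parameter names and their bounds are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for four immune-model parameters (illustrative only):
# Shigella migration rate, BM-to-ASC differentiation rate,
# antibody production rate, and BM carrying capacity.
l_bounds = [1e-3, 1e-4, 1e2, 1e3]
u_bounds = [1e-1, 1e-2, 1e4, 1e6]

sampler = qmc.LatinHypercube(d=4, seed=1)
unit_sample = sampler.random(n=1000)               # stratified samples in [0, 1)^4
params = qmc.scale(unit_sample, l_bounds, u_bounds)

# Each row of `params` would parameterize one Monte Carlo run of the ODE
# immune model; runs are then ranked by their fit to the trial data.
print(params.shape, params[:2])
```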

  4. The role of aerosols in cloud drop parameterizations and its applications in global climate models

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, C.C.; Penner, J.E. [Lawrence Livermore National Lab., CA (United States)

    1996-04-01

    The characteristics of the cloud drop size distribution near cloud base are initially determined by aerosols that serve as cloud condensation nuclei and the updraft velocity. We have developed parameterizations relating cloud drop number concentration to aerosol number and sulfate mass concentrations and used them in a coupled global aerosol/general circulation model (GCM) to estimate the indirect aerosol forcing. The global aerosol model made use of our detailed emissions inventories for the amount of particulate matter from biomass burning sources and from fossil fuel sources as well as emissions inventories of the gas-phase anthropogenic SO2. This work is aimed at validating the coupled model with the Atmospheric Radiation Measurement (ARM) Program measurements and assessing the possible magnitude of the aerosol-induced cloud effects on climate.
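    As an illustration of how such parameterizations are typically formulated, the sketch below uses a widely cited empirical sulfate-mass relationship (of the Boucher and Lohmann 1995 type). It is shown only as an example of the functional form; it is not necessarily the relationship developed by the authors.

```python
import numpy as np

def cloud_drop_number(m_so4, a=2.21, b=0.41):
    """Empirical sulfate-to-droplet-number relation of the form
    log10(Nd) = a + b * log10(m_so4), with Nd in cm-3 and m_so4 the
    sulfate aerosol mass concentration in ug m-3.

    The coefficients follow the widely cited Boucher and Lohmann (1995)
    relationship and are quoted here for illustration only.
    """
    return 10.0 ** (a + b * np.log10(m_so4))

print(cloud_drop_number(np.array([0.1, 0.5, 2.0])))  # clean to polluted conditions
```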

  5. Towards product design automation based on parameterized standard model with diversiform knowledge

    Science.gov (United States)

    Liu, Wei; Zhang, Xiaobing

    2017-04-01

    Product standardization based on CAD software is an effective way to improve design efficiency. In the past, research and development on standardization mainly focused on the component level, and the standardization of the entire product as a whole was rarely taken into consideration. In this paper, the size and structure of 3D product models are both driven by Excel datasheets, based on which a parameterized model library is established. Diversiform knowledge, including associated parameters and default properties, is embedded into the templates in advance to simplify their reuse. Through simple operations, we can obtain the correct product with finished 3D models, including single parts or complex assemblies. Two examples are illustrated later to validate the idea, which will greatly improve design efficiency.

  6. Effects of model resolution and parameterizations on the simulations of clouds, precipitation, and their interactions with aerosols

    Science.gov (United States)

    Lee, Seoung Soo; Li, Zhanqing; Zhang, Yuwei; Yoo, Hyelim; Kim, Seungbum; Kim, Byung-Gon; Choi, Yong-Sang; Mok, Jungbin; Um, Junshik; Ock Choi, Kyoung; Dong, Danhong

    2018-01-01

    This study investigates the roles played by model resolution and microphysics parameterizations in the well-known uncertainties or errors in simulations of clouds, precipitation, and their interactions with aerosols by numerical weather prediction (NWP) models. For this investigation, we used cloud-system-resolving model (CSRM) simulations as benchmark simulations that adopt high resolution and full-fledged microphysical processes. These simulations were evaluated against observations, and this evaluation demonstrated that the CSRM simulations can function as benchmark simulations. Comparisons between the CSRM simulations and simulations at the coarse resolutions that are generally adopted by current NWP models indicate that the use of coarse resolutions, as in the NWP models, can lower not only updrafts and other cloud variables (e.g., cloud mass, condensation, deposition, and evaporation) but also their sensitivity to increasing aerosol concentration. The parameterization of the saturation process plays an important role in the sensitivity of cloud variables to aerosol concentrations, while the parameterization of the sedimentation process has a substantial impact on how cloud variables are distributed vertically. The variation in cloud variables with resolution is much greater than that with varying microphysics parameterizations, which suggests that the uncertainties in the NWP simulations are associated with resolution much more than with microphysics parameterizations.

   7. Assessing Sexual Dichromatism: The Importance of Proper Parameterization in Tetrachromatic Visual Models.

    Directory of Open Access Journals (Sweden)

    Pierre-Paul Bitton

    Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1
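    For readers unfamiliar with the underlying calculation, the sketch below evaluates the receptor-noise-limited chromatic contrast for a tetrachromat, following the widely used Vorobyev and Osorio (1998) formulation. The quantum catches, relative cone densities, and Weber fraction are hypothetical example values; these are exactly the parameter choices the study shows can strongly affect dichromatism scores.

```python
import numpy as np
from itertools import combinations

def receptor_noise_contrast(q_a, q_b, rel_density, weber=0.1):
    """Chromatic contrast (in just-noticeable differences) between two colors
    for a tetrachromat, after the receptor-noise-limited model.

    q_a, q_b    : quantum catches of the 4 cone classes for patches A and B
    rel_density : relative photoreceptor densities (e.g. SWS1, SWS2, MWS, LWS)
    weber       : Weber fraction assigned to the most abundant cone class
    """
    q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
    eta = np.asarray(rel_density, float)
    e = weber / np.sqrt(eta / eta.max())        # noise per receptor channel
    df = np.log(q_a / q_b)                      # receptor signal differences

    idx = range(4)
    num = sum((e[i] * e[j]) ** 2 * (df[k] - df[l]) ** 2
              for (i, j), (k, l) in zip(combinations(idx, 2),
                                        [(3, 2), (3, 1), (2, 1),
                                         (3, 0), (2, 0), (1, 0)]))
    den = sum((e[i] * e[j] * e[k]) ** 2 for i, j, k in combinations(idx, 3))
    return np.sqrt(num / den)

# Example with hypothetical quantum catches and densities
print(receptor_noise_contrast([0.10, 0.32, 0.51, 0.60],
                              [0.12, 0.30, 0.55, 0.58],
                              rel_density=[1, 2, 2, 4]))
```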

  8. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in the overall cloud feedback in GCMs. Thus, improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different, with much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations

  9. Using polarimetric radar observations and probabilistic inference to develop the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), a novel microphysical parameterization framework

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.

    2016-12-01

    Microphysical parameterization schemes have reached an impressive level of sophistication: numerous prognostic hydrometeor categories, and either size-resolved (bin) particle size distributions or multiple prognostic moments of the size distribution. Yet, uncertainty in the model representation of microphysical processes and the effects of microphysics on numerical simulation of weather has not shown an improvement commensurate with the advanced sophistication of these schemes. We posit that this may be caused by unconstrained assumptions of these schemes, such as ad hoc parameter value choices and structural uncertainties (e.g. the choice of a particular form for the size distribution). We present work on the development and observational constraint of a novel microphysical parameterization approach, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), which seeks to address these sources of uncertainty. Our framework avoids unnecessary a priori assumptions, and instead relies on observations to provide probabilistic constraint of the scheme structure and sensitivities to environmental and microphysical conditions. We harness the rich microphysical information content of polarimetric radar observations to develop and constrain BOSS within a Bayesian inference framework using a Markov Chain Monte Carlo sampler (see Kumjian et al., this meeting, for details on the development of an associated polarimetric forward operator). Our work shows how knowledge of microphysical processes is provided by polarimetric radar observations of diverse weather conditions, and which processes remain highly uncertain, even after considering observations.

  10. Electrochemical-mechanical coupled modeling and parameterization of swelling and ionic transport in lithium-ion batteries

    Science.gov (United States)

    Sauerteig, Daniel; Hanselmann, Nina; Arzberger, Arno; Reinshagen, Holger; Ivanov, Svetlozar; Bund, Andreas

    2018-02-01

    The intercalation- and aging-induced volume changes of lithium-ion battery electrodes lead to significant mechanical pressure or volume changes at the cell and module level. As the correlation between the electrochemical and mechanical performance of lithium-ion batteries at the nano and macro scales requires a comprehensive and multidisciplinary approach, physical modeling accounting for chemical and mechanical phenomena during operation is very useful for battery design. Since the introduced fully coupled physical model requires proper parameterization, this work also focuses on identifying an appropriate mathematical representation of compressibility as well as of the ionic transport in the porous electrodes and the separator. The ionic transport is characterized by electrochemical impedance spectroscopy (EIS) using symmetric pouch cells comprising a LiNi1/3Mn1/3Co1/3O2 (NMC) cathode, a graphite anode and a polyethylene separator. The EIS measurements are carried out at various mechanical loads. The observed decrease of the ionic conductivity reveals a significant transport limitation at high pressures. The experimentally obtained data are applied as input to the electrochemical-mechanical model of a prismatic 10 Ah cell. Our computational approach accounts for intercalation-induced electrode expansion, stress generation caused by mechanical boundaries, compression of the electrodes and the separator, outer expansion of the cell and, finally, the influence of the ionic transport within the electrolyte.

  11. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    Science.gov (United States)

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study proposes an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared with existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows the two sources to share the free parameters of the filter shape and relates them to each other through the photon interactions in the filter. We introduced two other parameters for the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using a pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and the reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
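    The attenuation-based split between primary and secondary sources can be sketched with a simple Beer-Lambert argument, as below. The spectrum, attenuation coefficients, and filter path length are hypothetical placeholders, and the sketch omits the fitted filter-shape and penumbra parameters of the actual model.

```python
import numpy as np

def split_primary_secondary(spectrum, mu, path_length):
    """Split an incident bremsstrahlung spectrum into primary (attenuated)
    and secondary (scatter-feeding) photon weights after a flattening filter:
    the primary source is the transmitted fraction and the secondary source
    is fed by the decrement removed in the filter.

    spectrum    : incident photon fluence per energy bin
    mu          : linear attenuation coefficient per energy bin [1/cm]
    path_length : path length through the filter for this ray [cm]
    """
    transmitted = spectrum * np.exp(-mu * path_length)
    removed = spectrum - transmitted          # photons interacting in the filter
    return transmitted, removed

# Hypothetical 3-bin example (energies increasing left to right)
spectrum = np.array([1.0, 0.6, 0.2])
mu = np.array([0.25, 0.18, 0.12])
primary, secondary_feed = split_primary_secondary(spectrum, mu, path_length=2.0)
print(primary, secondary_feed)
```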

  12. Splitting turbulence algorithm for mixing parameterization embedded in the ocean climate model. Examples of data assimilation and Prandtl number variations.

    Science.gov (United States)

    Moshonkin, Sergey; Gusev, Anatoly; Zalesny, Vladimir; Diansky, Nikolay

    2017-04-01

    A series of experiments was performed with a three-dimensional, free-surface, sigma-coordinate, eddy-permitting ocean circulation model for an Atlantic (from 30°S)-Arctic-Bering Sea domain (0.25 degree resolution, Institute of Numerical Mathematics Ocean Model, INMOM), using vertical grid refinement in the zone of fully developed turbulence (40 sigma levels). The model variables are the horizontal velocity components, potential temperature and salinity, as well as the free surface height. For the parameterization of viscosity and diffusivity, the original splitting turbulence algorithm (STA) is used, in which the full evolution equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF) are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, an analytical solution was obtained for TKE and TDF as functions of the buoyancy and velocity shift frequencies (BF and VSF). The proposed model with STA is similar to contemporary differential turbulence models in its physical formulation, while its algorithm is computationally efficient. To simulate mixing in the zone of turbulence decay, two kinds of numerical experiments were carried out: one assimilating the annual-mean climatic buoyancy frequency and one varying the dependence of the Prandtl number function on BF, VSF, TKE and TDF. The CORE-II data for 1948-2009 were used for the experiments. The quality of the simulated temperature (T) and salinity (S) structure is estimated by comparing monthly model T and S profiles averaged over 1980-2009 with monthly T and S data from the World Ocean Atlas 2013. The form of the coefficients in the TKE and TDF equations at the generation-dissipation stage makes it possible to assimilate the annual-mean climatic buoyancy frequency to a varying degree, which substantially improves the agreement of the model results with climatic data over the entire analyzed model domain. The numerical experiments with modified

  13. On The Importance of Connecting Laboratory Measurements of Ice Crystal Growth with Model Parameterizations: Predicting Ice Particle Properties

    Science.gov (United States)

    Harrington, J. Y.

    2017-12-01

    Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess categories for small ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall

  14. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    Science.gov (United States)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

    Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. Their buckling behavior and structural efficiency are then analyzed under axial compression, pure bending, torsion and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original naturally twisted geometry and orients the cross-sections of beam elements exactly. The approach is parameterized and coded in the Patran Command Language (PCL). The FE modeling demonstrations indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.

  15. Parameterization and Uncertainty Analysis of SWAT model in Hydrological Simulation of Chaohe River Basin

    Science.gov (United States)

    Jie, M.; Zhang, J.; Guo, B. B.

    2017-12-01

    As a typical distributed hydrological model, the SWAT model poses challenges in calibrating parameters and analyzing their uncertainty. This paper takes the Chaohe River Basin, China, as the study area. After establishing the SWAT model and loading the DEM data of the Chaohe River Basin, the watershed is automatically divided into several sub-basins. Land use, soil and slope are analyzed on the basis of the sub-basins, the hydrological response units (HRUs) of the study area are calculated, and the runoff simulation values in the watershed are obtained after running the SWAT model. On this basis, weather data and the known daily runoff of three hydrological stations, combined with the SWAT-CUP automatic program and manual adjustment, are used for multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. Through the sensitivity analysis, calibration and uncertainty study of SWAT, the results indicate that the parameterization of the hydrological characteristics of the Chaohe River is successful and feasible and can be used to simulate the Chaohe River Basin.
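    A minimal sketch of the GLUE step is given below, with a toy linear-reservoir runoff model standing in for SWAT and the Nash-Sutcliffe efficiency as the informal likelihood measure; all parameter names, bounds, and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood in GLUE."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def run_model(params, rain):
    """Stand-in for a SWAT run: a toy linear-reservoir runoff model."""
    k, c = params
    flow, store = [], 0.0
    for p in rain:
        store += c * p                 # effective rainfall enters storage
        q = store / k                  # linear-reservoir outflow
        store -= q
        flow.append(q)
    return np.array(flow)

# Synthetic forcing and "observed" runoff (placeholders for station data)
rain = rng.gamma(2.0, 2.0, size=365)
obs = run_model((8.0, 0.6), rain) + rng.normal(0.0, 0.05, rain.size)

# GLUE: Monte Carlo sampling of parameters, retain "behavioral" sets
samples = np.column_stack([rng.uniform(2.0, 20.0, 5000),    # reservoir constant k
                           rng.uniform(0.1, 1.0, 5000)])    # runoff coefficient c
eff = np.array([nash_sutcliffe(run_model(s, rain), obs) for s in samples])
behavioral = samples[eff > 0.5]
# Full GLUE weights the behavioral sets by likelihood; simple percentiles of
# the behavioral ensemble are shown here for brevity.
print(len(behavioral), np.percentile(behavioral, [5, 95], axis=0))
```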

  16. A new parameterization for integrated population models to document amphibian reintroductions.

    Science.gov (United States)

    Duarte, Adam; Pearl, Christopher A; Adams, Michael J; Peterson, James T

    2017-09-01

    Managers are increasingly implementing reintroduction programs as part of a global effort to alleviate amphibian declines. Given uncertainty in factors affecting populations and a need to make recurring decisions to achieve objectives, adaptive management is a useful component of these efforts. A major impediment to the estimation of demographic rates often used to parameterize and refine decision-support models is that life-stage-specific monitoring data are frequently sparse for amphibians. We developed a new parameterization for integrated population models to match the ecology of amphibians and capitalize on relatively inexpensive monitoring data to document amphibian reintroductions. We evaluate the capability of this model by fitting it to Oregon spotted frog (Rana pretiosa) monitoring data collected from 2007 to 2014 following their reintroduction within the Klamath Basin, Oregon, USA. The number of egg masses encountered and the estimated adult and metamorph abundances generally increased following reintroduction. We found that survival probability from egg to metamorph ranged from 0.01 in 2008 to 0.09 in 2009 and was not related to minimum spring temperatures, metamorph survival probability ranged from 0.13 in 2010-2011 to 0.86 in 2012-2013 and was positively related to mean monthly temperatures (logit-scale slope = 2.37), adult survival probability was lower for founders (0.40) than individuals recruited after reintroduction (0.56), and the mean number of egg masses per adult female was 0.74. Our study is the first to test hypotheses concerning Oregon spotted frog egg-to-metamorph and metamorph-to-adult transition probabilities in the wild and document their response at multiple life stages following reintroduction. Furthermore, we provide an example to illustrate how the structure of our integrated population model serves as a useful foundation for amphibian decision-support models within adaptive management programs. The integration of multiple, but

  17. Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models: COST Action ES0905 Final Report

    Directory of Open Access Journals (Sweden)

    Jun–Ichi Yano

    2014-12-01

    The research network “Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models” was organized with European funding (COST Action ES0905) for the period 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. Our main cautions concern the current emphasis on massive observational data analyses and process studies. The closure and the entrainment–detrainment problems are identified as the two highest priorities for convection parameterization under the mass–flux formulation. The need for a drastic change in the current European research culture, as concerns policies and funding, is emphasized, in order not to further deplete the pool of European researchers focusing on those basic issues.

  18. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are the water vapor scale height (Hc), the dew point temperature (Td) and the water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and the model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. The parameterization of the moisture parameters was studied in depth (at 2 h to monthly time scales) using GPSPWV, Td and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C⁻¹ to 0.106 °C⁻¹ (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
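    The linear relationships quoted above can be reproduced with an ordinary least-squares fit; the sketch below uses hypothetical Td and PWV values purely to illustrate the regression step.

```python
import numpy as np

# Hypothetical daily-mean dew point temperature [deg C] and GPS-derived PWV [cm]
td = np.array([-20.0, -15.0, -10.0, -5.0, 0.0, 5.0])
pwv = np.array([0.12, 0.21, 0.35, 0.55, 0.82, 1.15])

# Least-squares fit of the linear relation PWV = slope * Td + intercept,
# analogous to the Td-based empirical relationships evaluated in the study
slope, intercept = np.polyfit(td, pwv, 1)
r = np.corrcoef(td, pwv)[0, 1]
print(f"slope = {slope:.3f} cm/degC, intercept = {intercept:.3f} cm, R = {r:.2f}")
```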

  19. Improved parameterization of managed grassland in a global process-based vegetation model using Bayesian statistics

    Science.gov (United States)

    Rolinski, S.; Müller, C.; Lotze-Campen, H.; Bondeau, A.

    2010-12-01

    More than a quarter of the Earth’s land surface is covered by grassland, which also makes up the major part (~70%) of the agricultural area. Most of this area is used for livestock production at different degrees of intensity. The dynamic global vegetation model LPJmL (Sitch et al., Global Change Biology, 2003; Bondeau et al., Global Change Biology, 2007) is one of the few process-based models that simulate biomass production on managed grasslands at the global scale. The implementation of managed grasslands and its evaluation have received little attention so far, as reference data on grassland productivity are scarce and the definitions of grassland extent and usage are highly uncertain. However, grassland productivity relates to large areas and strongly influences global estimates of carbon and water budgets, and its representation should thus be improved. Plants are implemented in LPJmL in an aggregated form as plant functional types, assuming that processes concerning carbon and water fluxes are quite similar between species of the same type. Therefore, the parameterization of a functional type is possible with parameters in a physiologically meaningful range of values. The actual choice of the parameter values from the possible and reasonable phase space should satisfy the condition of the best fit between model results and measured data. In order to improve the parameterization of managed grassland, we follow a combined procedure using model output and measured data of carbon and water fluxes. By comparing carbon and water fluxes simultaneously, we expect well-balanced refinements and avoid over-tuning of the model in only one direction. The comparison of annual grassland biomass to per-country data from the Food and Agriculture Organization of the United Nations (FAO) provides an overview of the order of magnitude and identifies deviations. The comparison of daily net primary productivity, soil respiration and water fluxes at specific sites (FluxNet data) provides

  20. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    OpenAIRE

    Astitha, M.; Lelieveld, J.; Kader, M. Abdel; Pozzer, A.; de Meij, A.

    2012-01-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a global...

  1. Assessment of a surface-layer parameterization scheme in an atmospheric model for varying meteorological conditions

    Directory of Open Access Journals (Sweden)

    T. J. Anurose

    2014-06-01

    The performance of a surface-layer parameterization scheme in a high-resolution regional model (HRM) is assessed by comparing the model-simulated sensible heat flux (H) with concurrent in situ measurements recorded at Thiruvananthapuram (8.5° N, 76.9° E), a coastal station in India. With a view to examining the role of atmospheric stability, in conjunction with the roughness lengths, in the determination of the heat exchange coefficient (CH) and H for varying meteorological conditions, the model simulations are repeated by assigning different values to the ratio of the momentum and thermal roughness lengths (i.e. z0m/z0h) in three distinct configurations of the surface-layer scheme designed for the present study. These three configurations exhibited different behaviour under varying meteorological conditions, which is attributed to the sensitivity of CH to the bulk Richardson number (RiB) under extremely unstable, near-neutral and stable stratification of the atmosphere.
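    A sketch of how CH can depend jointly on the roughness-length ratio and the bulk Richardson number is given below. It uses a generic log-profile neutral coefficient with a Louis (1979)-type stability correction; the functional form and constants are illustrative assumptions, not those of the HRM surface-layer scheme.

```python
import numpy as np

def heat_exchange_coefficient(z, z0m, z0h_ratio, ri_b, karman=0.4):
    """Illustrative bulk heat exchange coefficient CH for a surface layer:
    a neutral coefficient from the log profiles, scaled by a simple
    Louis (1979)-type correction in the bulk Richardson number.

    z         : height of the lowest model level [m]
    z0m       : momentum roughness length [m]
    z0h_ratio : ratio z0m / z0h (the quantity varied in the study)
    ri_b      : bulk Richardson number
    """
    z0h = z0m / z0h_ratio
    ch_neutral = karman**2 / (np.log(z / z0m) * np.log(z / z0h))
    if ri_b < 0.0:                       # unstable: enhance exchange
        f = 1.0 - 10.0 * ri_b / (1.0 + 75.0 * ch_neutral *
                                 np.sqrt(-ri_b * z / z0m))
    else:                                # stable: suppress exchange
        f = 1.0 / (1.0 + 10.0 * ri_b) ** 2
    return ch_neutral * f

for ratio in (1.0, 10.0, 100.0):
    print(ratio, heat_exchange_coefficient(z=10.0, z0m=0.1,
                                           z0h_ratio=ratio, ri_b=-0.5))
```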

  2. Using Remote Sensing Data to Parameterize Ice Jam Modeling for a Northern Inland Delta

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2017-04-01

    The Slave River is a northern river in Canada, with ice being an important component of its flow regime for at least half of the year. During the spring breakup period, ice jams and ice-jam flooding can occur in the Slave River Delta, which benefits the replenishment of moisture and sediment required to maintain the ecological integrity of the delta. To better understand the ice jam processes that lead to flooding, as well as the replenishment of the delta, the one-dimensional hydraulic river ice model RIVICE was implemented to simulate and explore ice jam formation in the Slave River Delta. Incoming ice volume, a crucial input parameter for RIVICE, was determined by the novel approach of using MODIS space-borne remote sensing imagery. Space-borne and air-borne remote sensing data were used to parameterize the upstream ice volume available for ice jamming. Gauged data were used to complement model calibration and validation. HEC-RAS, another one-dimensional hydrodynamic model, was used to determine the ice volumes required for equilibrium jams and the upper limit of ice volume that a jam can sustain, which served as thresholds for the volumes estimated by the dynamic ice jam simulations using RIVICE. Parameter sensitivity analysis shows that morphological and hydraulic properties have great impacts on the ice jam length and water depth in the Slave River Delta.

  3. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  4. Factors influencing the parameterization of anvil clouds within general circulation models

    International Nuclear Information System (INIS)

    Leone, J.M. Jr.; Chin, H.N.

    1994-01-01

    The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). We have concentrated on a small portion of the overall goal, the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved, we have been using a multi-scale approach. For the early-time generation and development of the cirrus anvil, we are using a cloud-scale model with a horizontal resolution of 1 to 2 kilometers; for transport by the larger-scale flow, we are using a mesoscale model with a horizontal resolution of 20 to 60 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to derive improved cloud parameterizations for use in GCMs. This paper presents a new tool, a cirrus generator, that we have developed to aid in our mesoscale studies.

  5. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    Science.gov (United States)

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  6. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guang J. [Univ. of California, San Diego, CA (United States)

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  7. Sensitivity of simulated convection-driven stratosphere-troposphere exchange in WRF-Chem to the choice of physical and chemical parameterization

    Science.gov (United States)

    Phoenix, Daniel B.; Homeyer, Cameron R.; Barth, Mary C.

    2017-08-01

    Tropopause-penetrating convection is capable of rapidly transporting air from the lower troposphere to the upper troposphere and lower stratosphere (UTLS), where it can have important impacts on chemistry, the radiative budget, and climate. However, obtaining in situ measurements of convection and convective transport is difficult and such observations are historically rare. Modeling studies, on the other hand, offer the advantage of providing output related to the physical, dynamical, and chemical characteristics of storms and their environments at fine spatial and temporal scales. Since these characteristics of simulated convection depend on the chosen model design, we examine the sensitivity of simulated convective transport to the choice of physical (bulk microphysics or BMP and planetary boundary layer or PBL) and chemical parameterizations in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem). In particular, we simulate multiple cases where in situ observations are available from the recent (2012) Deep Convective Clouds and Chemistry (DC3) experiment. Model output is evaluated using ground-based radar observations of each storm and in situ trace gas observations from two aircraft operated during the DC3 experiment. Model results show measurable sensitivity of the physical characteristics of a storm and the transport of water vapor and additional trace gases into the UTLS to the choice of BMP. The physical characteristics of the storm and transport of insoluble trace gases are largely insensitive to the choice of PBL scheme and chemical mechanism, though several soluble trace gases (e.g., SO2, CH2O, and HNO3) exhibit some measurable sensitivity.

  8. Spectroscopic measurements of soybeans used to parameterize physiological traits in the AgroIBIS ecosystem model

    Science.gov (United States)

    Singh, A.; Serbin, S.; Kucharik, C. J.; Townsend, P. A.

    2014-12-01

    Ecosystem models such as AgroIBIS require detailed parameterizations of numerous vegetation traits related to leaf structure, biochemistry and photosynthetic capacity to properly assess plant carbon assimilation and the yield response to environmental variability. In general, these traits are estimated from a limited number of field measurements or sourced from the literature, but rarely is the full observed range of variability in these traits utilized in modeling activities. In addition, pathogens and pests, such as the exotic soybean aphid (Aphis glycines), which affects photosynthetic pathways in soybean plants by feeding on phloem and sap, can potentially impact plant productivity and yields. Capturing plant responses to pest pressure in conjunction with environmental variability is of considerable interest to managers and the scientific community alike. In this research, we employed full-range (400-2500 nm) field and laboratory spectroscopy to rapidly characterize leaf biochemical and physiological traits, namely foliar nitrogen, specific leaf area (SLA) and the maximum rate of RuBP carboxylation by the enzyme RuBisCo (Vcmax), in soybean plants that experienced a broad range of environmental conditions and soybean aphid pressures. We utilized near-surface spectroscopic remote sensing measurements as a means to capture the spatial and temporal patterns of aphid impacts across broad aphid pressure levels. In addition, we used the spectroscopic data to generate a much larger dataset of key model parameters required by AgroIBIS than would be possible through traditional measurements of biochemistry and leaf-level gas exchange. The use of spectroscopic retrievals of soybean traits allowed us to better characterize the variability of plant responses associated with aphid pressure and thus to more accurately model the likely impacts of soybean aphid on soybeans. Our next steps include the coupling of the information derived from our spectral measurements with the Agro

  9. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Jr, William I Gustafson; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization

  10. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I., Jr.; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.
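
    The core idea above, driving single-column tracer transport with cloud statistics diagnosed from an embedded CRM rather than with a conventional convection scheme, can be illustrated with a minimal bulk (top-hat) mass-flux sketch. The function below is a generic illustration, not the ECPP code; the prescribed mass flux and the entrainment/detrainment rates are assumed to be mutually consistent, and all variable names are hypothetical.

```python
import numpy as np

def updraft_tracer_transport(qbar, rho, z_int, m_up, eps, delta):
    """
    Bulk (top-hat) mass-flux transport of a tracer by a single updraft, the
    kind of single-column treatment that can be driven by CRM-derived cloud
    statistics instead of a conventional convection scheme.

    qbar  : environment tracer mixing ratio on nlev layer midpoints [kg/kg]
    rho   : air density on layer midpoints [kg/m3]
    z_int : heights of the nlev+1 layer interfaces [m], bottom to top
    m_up  : updraft mass flux on layer interfaces [kg/m2/s]
    eps   : fractional entrainment rate per layer [1/m]
    delta : fractional detrainment rate per layer [1/m]
    Returns d(qbar)/dt [kg/kg/s] on layer midpoints.
    """
    nlev = qbar.size
    dz = np.diff(z_int)
    # Updraft tracer built upward from the lowest interface:
    # d(M q_u)/dz = eps*M*qbar - delta*M*q_u  (detrainment treated implicitly)
    q_up = np.empty(nlev + 1)
    q_up[0] = qbar[0]
    for k in range(nlev):
        flux_in = m_up[k] * q_up[k] + eps[k] * m_up[k] * qbar[k] * dz[k]
        m_out = m_up[k + 1] + delta[k] * m_up[k] * dz[k]
        q_up[k + 1] = flux_in / m_out if m_out > 0.0 else qbar[k]
    # Environment tendency: -(1/rho) d/dz [ M (q_u - qbar) ]
    qbar_int = np.empty(nlev + 1)
    qbar_int[1:-1] = 0.5 * (qbar[:-1] + qbar[1:])
    qbar_int[0], qbar_int[-1] = qbar[0], qbar[-1]
    eddy_flux = m_up * (q_up - qbar_int)
    return -np.diff(eddy_flux) / (rho * dz)
```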

  11. Evaluating model parameterizations of submicron aerosol scattering and absorption with in situ data from ARCTAS 2008

    Directory of Open Access Journals (Sweden)

    M. J. Alvarado

    2016-07-01

    Full Text Available Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC) v3.1 package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP) v2.1 to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10–23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass
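
    As a point of reference for the kind of externally mixed, fixed-size-distribution optics parameterization being evaluated, the sketch below converts speciated dry mass into scattering and absorption coefficients using prescribed mass efficiencies and a bulk hygroscopic growth factor. The efficiency values and species list are placeholders, not the values used in GMI, GEOS-Chem, GC-RT or OPAC.

```python
# Hypothetical mass scattering/absorption efficiencies [m2/g] at 550 nm;
# placeholder numbers only, not those of any model cited above.
MSE = {"sulfate": 3.0, "organic": 2.5, "black_carbon": 0.5, "sea_salt": 1.3}
MAE = {"sulfate": 0.0, "organic": 0.1, "black_carbon": 7.5, "sea_salt": 0.0}

def external_mixture_optics(mass_ug_m3, rh_growth=1.0):
    """
    mass_ug_m3 : dict of species -> dry mass concentration [ug/m3]
    rh_growth  : scattering enhancement factor f(RH) for hygroscopic species
    Returns (scattering, absorption) coefficients in [1/Mm],
    since m2/g * ug/m3 = 1e-6 1/m = 1/Mm.
    """
    hygroscopic = {"sulfate", "organic", "sea_salt"}
    b_sca = b_abs = 0.0
    for species, mass in mass_ug_m3.items():
        f = rh_growth if species in hygroscopic else 1.0
        b_sca += MSE[species] * mass * f
        b_abs += MAE[species] * mass
    return b_sca, b_abs
```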

  12. IMPROVED PARAMETERIZATION OF WATER CLOUD MODEL FOR HYBRID-POLARIZED BACKSCATTER SIMULATION USING INTERACTION FACTOR

    Directory of Open Access Journals (Sweden)

    S. Chauhan

    2017-07-01

    Full Text Available The prime aim of this study was to assess the potential of the semi-empirical water cloud model (WCM) in simulating hybrid-polarized SAR backscatter signatures (RH and RV) retrieved from RISAT-1 data, and to integrate the results into a graphical user interface (GUI) to facilitate easy comprehension and interpretation. A predominantly agricultural wheat-growing area was selected in the Mathura and Bharatpur districts, located in the Indian states of Uttar Pradesh and Rajasthan respectively, to carry out the study. Datasets were acquired on three dates covering the crucial growth stages of the wheat crop, and fieldwork was organized in synchrony to measure crop/soil parameters. The RH and RV backscattering coefficient images were extracted from the SAR data for all three dates. The effect of four combinations of vegetation descriptors (V1 and V2), viz. LAI-LAI, LAI-plant water content (PWC), leaf water area index (LWAI)-LWAI, and LAI-interaction factor (IF), on the total RH and RV backscatter was analyzed. The results revealed that the WCM calibrated with LAI and IF as the two vegetation descriptors simulated the total RH and RV backscatter values with the highest R2 of 0.90 and 0.85, while the RMSE was the lowest among the tested models (1.18 and 1.25 dB, respectively). The theoretical considerations and interpretations are discussed and examined in the paper. The novelty of this work emanates from the fact that it is a first step towards the modeling of hybrid-polarized backscatter data using an accurately parameterized semi-empirical approach.
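
    For readers unfamiliar with the water cloud model, the sketch below implements the standard Attema-Ulaby form for a single polarization, with a generic bare-soil term that is linear in soil moisture (in dB). The coefficients A, B, C, D would be fitted separately for RH and RV; the exact soil term and vegetation descriptors used in the study above may differ.

```python
import numpy as np

def wcm_backscatter(V1, V2, mv, theta_deg, A, B, C, D):
    """
    Semi-empirical water cloud model (Attema-Ulaby form), one polarization.
    V1, V2    : vegetation descriptors (e.g., LAI and an interaction factor)
    mv        : volumetric soil moisture [m3/m3]
    theta_deg : incidence angle [deg]
    A, B, C, D: fitted coefficients for this polarization
    Returns total backscatter [dB].
    """
    mu = np.cos(np.radians(theta_deg))
    tau2 = np.exp(-2.0 * B * V2 / mu)           # two-way canopy transmissivity
    sigma_veg = A * V1 * mu * (1.0 - tau2)      # canopy volume term (linear units)
    sigma_soil_db = C + D * mv                  # bare-soil term, linear in mv (dB)
    sigma_soil = 10.0 ** (sigma_soil_db / 10.0)
    total = sigma_veg + tau2 * sigma_soil
    return 10.0 * np.log10(total)
```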

  13. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Science.gov (United States)

    Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.

    2013-07-01

    Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  14. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Directory of Open Access Journals (Sweden)

    C. R. Flechard

    2013-07-01

    -chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  15. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    Science.gov (United States)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multidimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for
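
    A simple way to approximate the scheme-group variance partition described above is to compute, for each physics family, the between-group sum of squares of an ensemble metric relative to its total sum of squares. The sketch below is only a first-order stand-in for the full multiple-way analysis of variance, and the column names are hypothetical.

```python
import pandas as pd

def variance_fraction_by_factor(df, factors, metric):
    """
    Fraction of ensemble variance in `metric` explained by each physics
    "factor" (e.g., microphysics, convection, PBL, land surface), using the
    between-group sum of squares of the factor means.
    """
    total_ss = ((df[metric] - df[metric].mean()) ** 2).sum()
    out = {}
    for f in factors:
        group_means = df.groupby(f)[metric].transform("mean")
        out[f] = ((group_means - df[metric].mean()) ** 2).sum() / total_ss
    return out

# usage sketch (hypothetical column names):
# fractions = variance_fraction_by_factor(ens, ["mp", "cu", "pbl", "lsm"],
#                                         "latent_heat_flux")
```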

  16. Potential decadal predictability and its sensitivity to sea ice albedo parameterization in a global coupled model

    Energy Technology Data Exchange (ETDEWEB)

    Koenigk, Torben; Caian, Mihaela; Doescher, Ralf; Wyser, Klaus [Swedish Meteorological and Hydrological Institute, Rossby Centre, Norrkoeping (Sweden); Koenig Beatty, Christof [Universite Catholique de Louvain, Louvain-la-Neuve (Belgium)

    2012-06-15

    Decadal prediction is one focus of the upcoming 5th IPCC Assessment report. To be able to interpret the results and to further improve the decadal predictions it is important to investigate the potential predictability in the participating climate models. This study analyzes the upper limit of climate predictability on decadal time scales and its dependency on sea ice albedo parameterization by performing two perfect ensemble experiments with the global coupled climate model EC-Earth. In the first experiment, the standard albedo formulation of EC-Earth is used, in the second experiment sea ice albedo is reduced. The potential prognostic predictability is analyzed for a set of oceanic and atmospheric parameters. The decadal predictability of the atmospheric circulation is small. The highest potential predictability was found in air temperature at 2 m height over the northern North Atlantic and the southern South Atlantic. Over land, only a few areas are significantly predictable. The predictability for continental size averages of air temperature is relatively good in all northern hemisphere regions. Sea ice thickness is highly predictable along the ice edges in the North Atlantic Arctic Sector. The meridional overturning circulation is highly predictable in both experiments and governs most of the decadal climate predictability in the northern hemisphere. The experiments using reduced sea ice albedo show some important differences like a generally higher predictability of atmospheric variables in the Arctic or higher predictability of air temperature in Europe. Furthermore, decadal variations are substantially smaller in the simulations with reduced ice albedo, which can be explained by reduced sea ice thickness in these simulations. (orig.)
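
    One common way to quantify the "potential" (perfect-model) predictability analyzed in such experiments is the fraction of climatological variance not explained by the ensemble spread. A minimal sketch, assuming a long control run is available to estimate the climatological variance; this is a generic definition, not necessarily the exact diagnostic used in the study.

```python
import numpy as np

def potential_predictability(ensemble, climatology):
    """
    Perfect-model (prognostic) potential predictability:
    1 - (ensemble spread variance) / (climatological variance).
    ensemble   : (n_members,) decadal-mean quantity from the perfect ensemble
    climatology: (n_years,) same quantity sampled from a long control run
    Values near 1 indicate a predictable signal; near 0, unpredictable noise.
    """
    return 1.0 - np.var(ensemble, ddof=1) / np.var(climatology, ddof=1)
```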

  17. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Science.gov (United States)

    Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C

    2017-01-01

    Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.

  18. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Directory of Open Access Journals (Sweden)

    Kabilar Gunalan

    Full Text Available Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.
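
    The pathway recruitment curves and strength-duration relationships mentioned in both records can be illustrated with a minimal sketch: a Lapicque-type strength-duration curve for a single fiber and a population recruitment fraction given per-axon thresholds. The parameter values and the threshold distribution are assumptions for illustration, not outputs of the published PAM workflow.

```python
import numpy as np

def threshold_current(pulse_width_us, rheobase_ma, chronaxie_us):
    """Lapicque strength-duration curve: threshold current vs. pulse width."""
    return rheobase_ma * (1.0 + chronaxie_us / pulse_width_us)

def recruitment_fraction(amplitude_ma, thresholds_ma):
    """Fraction of modeled axons whose individual thresholds are exceeded."""
    thresholds_ma = np.asarray(thresholds_ma)
    return float(np.mean(amplitude_ma >= thresholds_ma))

# usage sketch: sweep amplitude at a fixed 60 us pulse width for a population
# of per-axon rheobase/chronaxie values (hypothetical numbers)
rng = np.random.default_rng(0)
thresholds = threshold_current(60.0,
                               rheobase_ma=rng.normal(1.5, 0.4, 500),
                               chronaxie_us=rng.normal(150.0, 30.0, 500))
curve = [recruitment_fraction(a, thresholds) for a in np.linspace(0.5, 5.0, 20)]
```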

  19. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    Science.gov (United States)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
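
    A minimal sketch of the central construction, integrating a classical-nucleation-theory rate over a contact-angle PDF to obtain a frozen fraction, is given below. It uses plain Monte Carlo rather than the quasi Monte Carlo integration of the paper, and the homogeneous energy barrier and kinetic prefactor are treated as given inputs rather than computed from thermodynamic relations.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def form_factor(theta_rad):
    """Geometric compatibility factor for a spherical-cap ice embryo."""
    c = np.cos(theta_rad)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def frozen_fraction(theta_pdf, dG_hom, prefactor, area, temp, dt,
                    n_samples=10000):
    """
    Frozen fraction of an INP population whose contact angles follow
    `theta_pdf` (a callable returning samples in radians), by Monte Carlo.
    dG_hom    : homogeneous nucleation energy barrier [J] (assumed given)
    prefactor : kinetic prefactor [m-2 s-1] (assumed given)
    area      : ice-nucleation-active surface area per particle [m2]
    temp, dt  : temperature [K] and time step [s]
    """
    theta = theta_pdf(n_samples)
    j_het = prefactor * np.exp(-dG_hom * form_factor(theta) / (K_B * temp))
    return float(np.mean(1.0 - np.exp(-j_het * area * dt)))
```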

  20. Impact of a Stochastic Parameterization Scheme on El Nino-Southern Oscillation in the Community Climate System Model

    Science.gov (United States)

    Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.

    2017-12-01

    Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO-variability.
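
    The essence of SPPT is a multiplicative perturbation of the net parameterized tendency by a temporally (and, in the full scheme, spatially) correlated random pattern. The sketch below shows only the AR(1) time dimension and a clipped multiplicative perturbation; the decorrelation time, standard deviation and clipping limit are illustrative, not the settings used in CAM4.

```python
import numpy as np

def sppt_ar1_pattern(n_steps, dt, tau, sigma, seed=None):
    """
    AR(1) time series used to perturb parameterized tendencies (SPPT-style).
    tau: decorrelation time [s]; sigma: target standard deviation.
    """
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    noise_std = sigma * np.sqrt(1.0 - phi ** 2)
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        r[t] = phi * r[t - 1] + noise_std * rng.standard_normal()
    return r

def perturb_tendency(total_param_tendency, r, clip=0.9):
    """Multiplicative SPPT perturbation, with the pattern clipped for stability."""
    return (1.0 + np.clip(r, -clip, clip)) * total_param_tendency
```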

  1. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang

    2018-01-22

    A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) Model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.

  2. Improving the temperature predictions of subsurface thermal models by using high-quality input data. Part 1: Uncertainty analysis of the thermal-conductivity parameterization

    DEFF Research Database (Denmark)

    Fuchs, Sven; Balling, Niels

    2016-01-01

    The subsurface temperature field and the geothermal conditions in sedimentary basins are frequently examined by using numerical thermal models. For those models, detailed knowledge of rock thermal properties are paramount for a reliable parameterization of layer properties and boundary conditions...

  3. A polar stratospheric cloud parameterization for the global modeling initiative three-dimensional model and its response to stratospheric aircraft

    International Nuclear Information System (INIS)

    Considine, D. B.; Douglass, A. R.; Connell, P. S.; Kinnison, D. E.; Rotman, D. A.

    2000-01-01

    We describe a new parameterization of polar stratospheric clouds (PSCs) which was written for and incorporated into the three-dimensional (3-D) chemistry and transport model (CTM) developed for NASA's Atmospheric Effects of Aviation Project (AEAP) by the Global Modeling Initiative (GMI). The parameterization was designed to respond to changes in NOy and H2O produced by high-speed civilian transport (HSCT) emissions. The parameterization predicts surface area densities (SADs) of both Type 1 and Type 2 PSCs for use in heterogeneous chemistry calculations. Type 1 PSCs are assumed to have a supercooled ternary sulfate (STS) composition, and Type 2 PSCs are treated as water ice with a coexisting nitric acid trihydrate (NAT) phase. Sedimentation is treated by assuming that the PSC particles obey lognormal size distributions, resulting in a realistic mass flux of condensed phase H2O and HNO3. We examine a simulation of the Southern Hemisphere high-latitude lower stratosphere winter and spring seasons driven by temperature and wind fields from a modified version of the National Center for Atmospheric Research (NCAR) Middle Atmosphere Community Climate Model Version 2 (MACCM2). Predicted PSC SADs and median radii for both Type 1 and Type 2 PSCs are consistent with observations. Gas phase HNO3 and H2O concentrations in the high-latitude lower stratosphere qualitatively agree with Cryogenic Limb Array Etalon Spectrometer (CLAES) HNO3 and Microwave Limb Sounder (MLS) H2O observations. The residual denitrification and dehydration of the model polar vortex after polar winter compares well with Atmospheric Trace Molecule Spectroscopy (ATMOS) observations taken during November 1994. When the NOx and H2O emissions of a standard 500-aircraft HSCT fleet with a NOx emission index of 5 are added, NOx and H2O concentrations in the Southern Hemisphere polar vortex before winter increase by up to 3%. This results in earlier onset of PSC formation, denitrification, and

  4. Aerosol-Cloud-Precipitation Interactions in WRF Model:Sensitivity to Autoconversion Parameterization

    Institute of Scientific and Technical Information of China (English)

    解小宁; 刘晓东

    2015-01-01

    Cloud-to-rain autoconversion is an important player in aerosol loading, cloud morphology, and precipitation variations because it can modulate cloud microphysical characteristics depending on the participation of aerosols, and affects the spatio-temporal distribution and total amount of precipitation. By applying the Kessler, the Khairoutdinov-Kogan (KK), and the Dispersion autoconversion parameterization schemes in a set of sensitivity experiments, the indirect effects of aerosols on clouds and precipitation are investigated for a deep convective cloud system in Beijing under various aerosol concentration backgrounds from 50 to 10000 cm−3. Numerical experiments show that aerosol-induced precipitation change is strongly dependent on autoconversion parameterization schemes. For the Kessler scheme, the average cumulative precipitation is enhanced slightly with increasing aerosols, whereas surface precipitation is reduced significantly with increasing aerosols for the KK scheme. Moreover, precipitation varies non-monotonically for the Dispersion scheme, increasing with aerosols at lower concentrations and decreasing at higher concentrations. These different trends of aerosol-induced precipitation change are mainly ascribed to differences in rain water content under these three autoconversion parameterization schemes. Therefore, this study suggests that accurate parameterization of cloud microphysical processes, particularly the cloud-to-rain autoconversion process, is needed for improving the scientific understanding of aerosol-cloud-precipitation interactions.
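
    The contrast between the schemes discussed above comes down to whether the autoconversion rate depends on droplet number, and hence on aerosol. Below is a minimal sketch of a Kessler-type threshold formula and the Khairoutdinov-Kogan rate; the Kessler constants are typical textbook values, not necessarily the exact defaults used in WRF.

```python
import numpy as np

def kessler_autoconversion(qc, k=1e-3, qc_crit=5e-4):
    """
    Kessler (1969)-type autoconversion: linear above a cloud-water threshold,
    independent of droplet number (and therefore of aerosol).
    qc in kg/kg; returns d(qr)/dt in kg/kg/s.
    """
    return k * np.maximum(qc - qc_crit, 0.0)

def kk_autoconversion(qc, nc_cm3):
    """
    Khairoutdinov & Kogan (2000) autoconversion: depends on droplet number
    concentration nc_cm3 [cm^-3], so aerosol changes feed back on rain formation.
    qc in kg/kg; returns d(qr)/dt in kg/kg/s.
    """
    return 1350.0 * qc ** 2.47 * nc_cm3 ** (-1.79)
```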

  5. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    Science.gov (United States)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.

  6. The Empirical Canadian High Arctic Ionospheric Model (E-CHAIM): Bottomside Parameterization

    Science.gov (United States)

    Themens, D. R.; Jayachandran, P. T.

    2017-12-01

    It is well known that the International Reference Ionosphere (IRI) suffers reduced accuracy in its representation of monthly median ionospheric electron density at high latitudes. These inaccuracies are believed to stem, at least in part, from a historical lack of data from these regions. Now, roughly thirty and forty years after the development of the original URSI and CCIR foF2 maps, respectively, there exists a much larger dataset of high latitude observations of ionospheric electron density. These new measurements come in the form of new ionosonde deployments, such as those of the Canadian High Arctic Ionospheric Network, the CHAMP, GRACE, and COSMIC radio occultation missions, and the construction of the Poker Flat, Resolute, and EISCAT Incoherent Scatter Radar systems. These new datasets afford an opportunity to revise the IRI's representation of the high latitude ionosphere. Using a spherical cap harmonic expansion to represent horizontal and diurnal variability and a Fourier expansion in day of year to represent seasonal variations, we have developed a new model of the bottomside ionosphere's electron density for the high latitude ionosphere, above 50N geomagnetic latitude. For the peak heights of the E and F1 layers (hmE and hmF1, respectively), current standards use a constant value for hmE and either use a single-parameter model for hmF1 (IRI) or scale hmF1 with the F peak (NeQuick). For E-CHAIM, we have diverged from this convention to account for the greater variability seen in these characteristics at high latitudes, opting to use a full spherical harmonic model description for each of these characteristics. For the description of the bottomside vertical electron density profile, we present a single-layer model with altitude-varying scale height. The scale height function is taken as the sum of three scale height layer functions anchored to the F2 peak, hmF1, and hmE. This parameterization successfully reproduces the structure of the various bottomside

  7. A comparison study of convective and microphysical parameterization schemes associated with lightning occurrence in southeastern Brazil using the WRF model

    Science.gov (United States)

    Zepka, G. D.; Pinto, O.

    2010-12-01

    The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best corresponds to lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the times of lightning occurrence, a comparison was made between the WRF grid point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built up to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as the choice of convective scheme. The Betts-Miller-Janjic parameterization generally has the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This fact can be attributed to the similar main assumptions used by these schemes, which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Differently from traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the more reliable convective scheme was Kain-Fritsch, since it provided a more consistent distribution of the CAPE fields with respect to satellite images and lightning data.

  8. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    Science.gov (United States)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.

  9. Sensitivity of aerosol indirect forcing and autoconversion to cloud droplet parameterization: an assessment with the NASA Global Modeling Initiative.

    Science.gov (United States)

    Sotiropoulou, R. P.; Meshkhidze, N.; Nenes, A.

    2006-12-01

    The aerosol indirect forcing is one of the largest sources of uncertainty in assessments of anthropogenic climate change [IPCC, 2001]. Much of this uncertainty arises from the approach used for linking cloud droplet number concentration (CDNC) to precursor aerosol. Global Climate Models (GCM) use a wide range of cloud droplet activation mechanisms ranging from empirical [Boucher and Lohmann, 1995] to detailed physically- based formulations [e.g., Abdul-Razzak and Ghan, 2000; Fountoukis and Nenes, 2005]. The objective of this study is to assess the uncertainties in indirect forcing and autoconversion of cloud water to rain caused by the application of different cloud droplet parameterization mechanisms; this is an important step towards constraining the aerosol indirect effects (AIE). Here we estimate the uncertainty in indirect forcing and autoconversion rate using the NASA Global Model Initiative (GMI). The GMI allows easy interchange of meteorological fields, chemical mechanisms and the aerosol microphysical packages. Therefore, it is an ideal tool for assessing the effect of different parameters on aerosol indirect forcing. The aerosol module includes primary emissions, chemical production of sulfate in clear air and in-cloud aqueous phase, gravitational sedimentation, dry deposition, wet scavenging in and below clouds, and hygroscopic growth. Model inputs include SO2 (fossil fuel and natural), black carbon (BC), organic carbon (OC), mineral dust and sea salt. The meteorological data used in this work were taken from the NASA Data Assimilation Office (DAO) and two different GCMs: the NASA GEOS4 finite volume GCM (FVGCM) and the Goddard Institute for Space Studies version II' (GISS II') GCM. Simulations were carried out for "present day" and "preindustrial" emissions using different meteorological fields (i.e. DAO, FVGCM, GISS II'); cloud droplet number concentration is computed from the correlations of Boucher and Lohmann [1995], Abdul-Razzak and Ghan [2000
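
    At the empirical end of the range of activation treatments mentioned above, sulfate mass is mapped directly to droplet number. Below is a minimal sketch of a Boucher and Lohmann (1995)-type power law; the coefficients are the commonly quoted ones, but the exact fit used in any particular model configuration should be checked against its documentation.

```python
import numpy as np

def cdnc_empirical_bl95(so4_ug_m3):
    """
    Empirical sulfate-to-droplet-number relation of the Boucher & Lohmann
    (1995) type, often quoted as N_d = 10**(2.21 + 0.41*log10(m_SO4)),
    with m_SO4 in ug/m3 and N_d in cm^-3. The small floor avoids log10(0).
    """
    return 10.0 ** (2.21 + 0.41 * np.log10(np.maximum(so4_ug_m3, 1e-3)))
```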

  10. Sensitivity of the weather research and forecasting model to parameterization schemes for regional climate of Nile River Basin

    Science.gov (United States)

    Tariku, Tebikachew Betru; Gan, Thian Yew

    2018-06-01

    Regional climate models (RCMs) have been used to simulate rainfall at relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the RCM, weather research and forecasting (WRF), in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of different physical parameterization schemes which include cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) schemes. Using the European Centre for Medium-Range Weather Forecast (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations using WRF were compared with satellite data combined with ground observation and the NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated more accurate T2 and LWRAD (with correlation coefficients >0.8 and low root-mean-square error) than SWRAD and rainfall for the NRB. Further, the simulation of rainfall is more sensitive to PBL, Cu and MP schemes than other schemes of WRF. For example, WRF simulated less biased rainfall with Kain-Fritsch combined with MYJ than with YSU as the PBL scheme. The simulation of T2 is more sensitive to LSM and Ra than to Cu, PBL and MP schemes selected, SWRAD is more sensitive to MP and Ra than to Cu, LSM and PBL schemes, and LWRAD is more sensitive to LSM, Ra and PBL than Cu, and MP schemes. In summary, the following combination of schemes simulated the most representative regional climate of NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave radiation and Dudhia shortwave radiation schemes, and Noah LSM. The above configuration of WRF coupled to the Noah LSM has also been shown to simulate representative regional

  11. Sensitivity of the weather research and forecasting model to parameterization schemes for regional climate of Nile River Basin

    Science.gov (United States)

    Tariku, Tebikachew Betru; Gan, Thian Yew

    2017-08-01

    Regional climate models (RCMs) have been used to simulate rainfall at relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the RCM, weather research and forecasting (WRF), in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of different physical parameterization schemes which include cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) schemes. Using the European Centre for Medium-Range Weather Forecast (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations using WRF were compared with satellite data combined with ground observation and the NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated more accurate T2 and LWRAD (with correlation coefficients >0.8 and low root-mean-square error) than SWRAD and rainfall for the NRB. Further, the simulation of rainfall is more sensitive to PBL, Cu and MP schemes than other schemes of WRF. For example, WRF simulated less biased rainfall with Kain-Fritsch combined with MYJ than with YSU as the PBL scheme. The simulation of T2 is more sensitive to LSM and Ra than to Cu, PBL and MP schemes selected, SWRAD is more sensitive to MP and Ra than to Cu, LSM and PBL schemes, and LWRAD is more sensitive to LSM, Ra and PBL than Cu, and MP schemes. In summary, the following combination of schemes simulated the most representative regional climate of NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave radiation and Dudhia shortwave radiation schemes, and Noah LSM. The above configuration of WRF coupled to the Noah LSM has also been shown to simulate representative regional

  12. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    DEFF Research Database (Denmark)

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel

    2011-01-01

    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem. Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number...... of such a feature is the generic implementation of Laplace approximation of high-dimensional integrals for use in latent variable models. We also review the literature in which ADMB has been used, and discuss future development of ADMB as an open source project. Overall, the main advantages of ADMB are flexibility...

  13. New parameterization of external and induced fields in geomagnetic field modeling, and a candidate model for IGRF 2005

    DEFF Research Database (Denmark)

    Olsen, Nils; Sabaka, T.J.; Lowes, F.

    2005-01-01

    When deriving spherical harmonic models of the Earth's magnetic field, low-degree external field contributions are traditionally considered by assuming that their expansion coefficient q_1^0 varies linearly with the Dst index, while induced contributions are considered assuming a constant ratio Q_1 of induced to external coefficients. A value of Q_1 = 0.27 was found from Magsat data and has been used by several authors when deriving recent field models from Orsted and CHAMP data. We describe a new approach that considers external and induced fields based on a separation of Dst = Est + Ist into external (Est) and induced (Ist) parts using a 1D model of mantle conductivity. The temporal behavior of q_1^0 and of the corresponding induced coefficient are parameterized by Est and Ist, respectively. In addition, we account for baseline instabilities of Dst by estimating a value of q_1

  14. Cross-section parameterization of the pebble bed modular reactor using the dimension-wise expansion model

    International Nuclear Information System (INIS)

    Zivanovic, Rastko; Bokov, Pavel M.

    2010-01-01

    This paper discusses the use of the dimension-wise expansion model for cross-section parameterization. The components of the model were approximated with tensor products of orthogonal polynomials. As we demonstrate, the model for a specific cross-section can be built in a systematic way directly from data without any a priori knowledge of its structure. The methodology is able to construct a finite basis of orthogonal polynomials that is required to approximate a cross-section with pre-specified accuracy. The methodology includes a global sensitivity analysis that indicates irrelevant state parameters which can be excluded from the model without compromising the accuracy of the approximation and without repetition of the fitting process. To fit the dimension-wise expansion model, Randomised Quasi-Monte-Carlo Integration and Sparse Grid Integration methods were used. To test the parameterization methods with different integrations embedded we have used the OECD PBMR 400 MW benchmark problem. It has been shown in this paper that the Sparse Grid Integration achieves pre-specified accuracy with a significantly (up to 1-2 orders of magnitude) smaller number of samples compared to Randomised Quasi-Monte-Carlo Integration.
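
    The flavor of a dimension-wise expansion fitted directly from sampled data can be sketched with a constant term, univariate Legendre terms, and bivariate tensor products, estimated here by ordinary least squares. This is a generic illustration (plain least squares rather than the randomised quasi-Monte-Carlo or sparse-grid integration of the paper), and the truncation at second-order interactions is an assumption.

```python
import numpy as np
from numpy.polynomial.legendre import legvander
from itertools import combinations

def fit_dimensionwise_legendre(X, y, order=3):
    """
    Least-squares fit of y(x1..xd) with a dimension-wise basis:
    constant + univariate Legendre terms + bivariate tensor products.
    X is (n_samples, d) with each column already scaled to [-1, 1].
    Returns the fitted coefficient vector.
    """
    n, d = X.shape
    cols = [np.ones((n, 1))]
    # univariate Legendre terms per state parameter (drop the constant P0)
    V = [legvander(X[:, j], order)[:, 1:] for j in range(d)]
    cols += V
    # bivariate tensor-product terms for every pair of state parameters
    for i, j in combinations(range(d), 2):
        cols.append(np.einsum("ni,nj->nij", V[i], V[j]).reshape(n, -1))
    A = np.hstack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```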

  15. Physical modeling of rock

    International Nuclear Information System (INIS)

    Cheney, J.A.

    1981-01-01

    The problem of satisfying similarity between a physical model and the prototype in rock, wherein fissures and cracks play a role in physical behavior, is explored. The need for models of large physical dimensions is explained, but testing of models of the same prototype over a wide range of scales is also needed to ascertain the influence of any lack of similitude in particular parameters between prototype and model. A large capacity centrifuge would be useful in that respect.

  16. Application of the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1999-01-01

    Different applications of the parameterization of all systems stabilized by a given controller, i.e. the dual Youla parameterization, are considered in this paper. It will be shown how the parameterization can be applied in connection with controller design, adaptive controllers, model validation...

  17. Human impact parameterizations in global hydrological models improve estimates of monthly discharges and hydrological extremes: a multi-model validation study

    NARCIS (Netherlands)

    Veldkamp, T I E; Zhao, F; Ward, P J; Moel, H de; Aerts, J C J H; Schmied, H Müller; Portmann, F T; Masaki, Y; Pokhrel, Y; Liu, X; Satoh, Yusuke; Gerten, Dieter; Gosling, S N; Zaherpour, J; Wada, Yoshihide

    2018-01-01

    Human activity has a profound influence on river discharges, hydrological extremes and water-related hazards. In this study, we compare the results of five state-of-the-art global hydrological models (GHMs) with observations to examine the role of human impact parameterizations (HIP) in the

  18. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    Science.gov (United States)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.
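
    The quantity such a longwave scheme ultimately hands back to the host model is the heating (cooling) rate implied by the vertical divergence of the net flux, dT/dt = (g/cp) dF_net/dp for net upward flux on layer interfaces. Below is a minimal sketch of that final step only (not of the transmission calculations themselves); the constants and array conventions are generic assumptions.

```python
import numpy as np

CP = 1004.0  # specific heat of dry air at constant pressure [J/kg/K]
G = 9.81     # gravitational acceleration [m/s2]

def lw_heating_rate(net_up_flux_w_m2, pressure_pa):
    """
    Longwave heating rate from the divergence of the net upward flux across
    model layers: dT/dt = (g/cp) * dF_net/dp [K/s]; negative values = cooling.
    Both inputs are on the nlev+1 layer interfaces, ordered top to bottom,
    so pressure increases along the array.
    """
    dF = np.diff(net_up_flux_w_m2)
    dp = np.diff(pressure_pa)
    return (G / CP) * dF / dp
```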

  19. Application of mathematical models to metronomic chemotherapy: What can be inferred from minimal parameterized models?

    Science.gov (United States)

    Ledzewicz, Urszula; Schättler, Heinz

    2017-08-10

    Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy which aims at an eradication of all malignant cells, in a metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly more valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally-driven patient specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they infer about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.
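
    As a concrete example of the minimally parameterized models reviewed in this literature, a Hahnfeldt-type tumor-volume/carrying-capacity system with a constant low-dose cytotoxic input can be integrated directly. The sketch below uses illustrative placeholder parameter values and a generic kill term, not the calibrated values or the specific optimal-control formulations of the cited work.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tumor_vasculature(t, y, u, xi=0.1, b=5.0, d=0.01, mu=0.02, phi=0.15):
    """
    Hahnfeldt-type dynamics: tumor volume p grows Gompertzian toward a
    variable carrying capacity q set by the vasculature; a constant
    metronomic dose u adds a simple log-kill term. Parameters are
    placeholders for illustration only.
    """
    p, q = y
    dp = -xi * p * np.log(p / q) - phi * u * p           # growth + drug kill
    dq = b * p - mu * q - d * p ** (2.0 / 3.0) * q       # stimulation/inhibition
    return [dp, dq]

# e.g., 120 days under a constant metronomic dose u = 1 (arbitrary units)
sol = solve_ivp(tumor_vasculature, (0.0, 120.0), [12000.0, 15000.0],
                args=(1.0,), max_step=0.5)
```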

  20. Impact of a simple parameterization of convective gravity-wave drag in a stratosphere-troposphere general circulation model and its sensitivity to vertical resolution

    Directory of Open Access Journals (Sweden)

    C. Bossuet

    Full Text Available Systematic westerly biases in the southern hemisphere wintertime flow and easterly equatorial biases are experienced in the Météo-France climate model. These biases are found to be much reduced when a simple parameterization is introduced to take into account the vertical momentum transfer through the gravity waves excited by deep convection. These waves are quasi-stationary in the frame of reference moving with convection and they propagate vertically to higher levels in the atmosphere, where they may exert a significant deceleration of the mean flow at levels where dissipation occurs. Sixty-day experiments have been performed from a multiyear simulation with the standard 31 levels for a summer and a winter month, and with a T42 horizontal resolution. The impact of this parameterization on the integration of the model is found to be generally positive, with a significant deceleration in the westerly stratospheric jet and with a reduction of the easterly equatorial bias. The sensitivity of the Météo-France climate model to vertical resolution is also investigated by increasing the number of vertical levels, without moving the top of the model. The vertical resolution is increased up to 41 levels, using two kinds of level distribution. For the first, the increase in vertical resolution concerns especially the troposphere (with 22 levels in the troposphere, and the second treats the whole atmosphere in a homogeneous way (with 15 levels in the troposphere; the standard version of 31 levels has 10 levels in the troposphere. A comparison is made between the dynamical aspects of the simulations. The zonal wind and precipitation are presented and compared for each resolution. A positive impact is found with the finer tropospheric resolution on the precipitation in the mid-latitudes and on the westerly stratospheric jet, but the general impact on the model climate is weak, the physical parameterizations used appear to be mostly independent to the

  1. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available Aiming at parts of remote sensing images with dark brightness and low contrast, a remote sensing image enhancement method based on non-subsampled Shearlet transform and parameterized logarithmic image processing model is proposed in this paper to improve the visual effects and interpretability of remote sensing images. Firstly, a remote sensing image is decomposed into a low-frequency component and high frequency components by non-subsampled Shearlet transform.Then the low frequency component is enhanced according to PLIP (parameterized logarithmic image processing model, which can improve the contrast of image, while the improved fuzzy enhancement method is used to enhance the high frequency components in order to highlight the information of edges and details. A large number of experimental results show that, compared with five kinds of image enhancement methods such as bidirectional histogram equalization method, the method based on stationary wavelet transform and the method based on non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effects and objective quantitative evaluation indexes such as contrast and definition, which can more effectively improve the contrast of remote sensing image and enhance edges and texture details with better visual effects.

  2. Direct spondylolisthesis identification and measurement in MR/CT using detectors trained by articulated parameterized spine model

    Science.gov (United States)

    Cai, Yunliang; Leung, Stephanie; Warrington, James; Pandey, Sachin; Shmuilovich, Olga; Li, Shuo

    2017-02-01

    The identification of spondylolysis and spondylolisthesis is important in spinal diagnosis, rehabilitation, and surgery planning. Accurate and automatic detection of the spinal portion with a spondylolisthesis problem will significantly reduce the manual work of the physician and provide a more robust evaluation of the spine condition. Most existing automatic identification methods adopted an indirect approach which used vertebrae locations to measure the spondylolisthesis. However, these methods relied heavily on automatic vertebra detection, which often suffered from poor spatial accuracy and the lack of validated pathological training samples. In this study, we present a novel spondylolisthesis detection method which can directly locate the irregular spine portion and output the corresponding grading. The detection is done by a set of learning-based detectors which are discriminatively trained on synthesized spondylolisthesis image samples. To provide sufficient pathological training samples, we used a parameterized spine model to synthesize different types of spondylolysis images from real MR/CT scans. The parameterized model can automatically locate the vertebrae in spine images and estimate their pose orientations, and can inversely alter the vertebrae locations and poses by changing the corresponding parameters. Various training samples can then be generated from only a few spine MR/CT images. The preliminary results suggest great potential for fast and efficient spondylolisthesis identification and measurement in both MR and CT spine images.
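
    The measurement side of the problem is conventionally expressed as a slip percentage and a Meyerding grade, computed from the forward displacement of the slipped vertebra relative to the anterior-posterior width of the vertebra below. A minimal sketch of that grading step follows; the detection and landmark extraction, which are the hard parts addressed by the paper, are not shown.

```python
def meyerding_grade(slip_mm, inferior_vertebra_width_mm):
    """
    Spondylolisthesis severity from the forward displacement of the superior
    vertebra relative to the AP width of the vertebra below (Meyerding grading).
    Returns (slip percentage, grade string).
    """
    pct = 100.0 * slip_mm / inferior_vertebra_width_mm
    if pct < 25.0:
        grade = "I"
    elif pct < 50.0:
        grade = "II"
    elif pct < 75.0:
        grade = "III"
    elif pct <= 100.0:
        grade = "IV"
    else:
        grade = "V (spondyloptosis)"
    return pct, grade
```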

  3. Identification of physical models

    DEFF Research Database (Denmark)

    Melgaard, Henrik

    1994-01-01

    of the model with the available prior knowledge. The methods for identification of physical models have been applied in two different case studies. One case is the identification of thermal dynamics of building components. The work is related to a CEC research project called PASSYS (Passive Solar Components......The problem of identification of physical models is considered within the frame of stochastic differential equations. Methods for estimation of parameters of these continuous time models based on discrete time measurements are discussed. The important algorithms of a computer program for ML or MAP... design of experiments, which is for instance the design of an input signal that is optimal according to a criterion based on the information provided by the experiment. Also model validation is discussed. An important verification of a physical model is to compare the physical characteristics

  4. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in

  5. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

    Science.gov (United States)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2012-01-01

    A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is entirely explained in this paper, from the initialization of the unknowns to the retrievals. A sensitivity analysis is also performed in which the initial values in the inversion process are varied randomly. The results show that the inversion process is not very sensitive to the initial values, and a major part of the retrievals has a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, a roughness equal to 3 cm and a biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
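
    As a rough illustration of the inversion step described above, the sketch below inverts three backscattering coefficients for (soil moisture, RMS height, biomass) with SciPy's Levenberg-Marquardt solver. The forward model here is an invented placeholder, not the paper's simplified distorted Born model, and all coefficients are illustrative assumptions.

```python
# Illustrative sketch only: a placeholder three-channel forward model standing in
# for the simplified distorted Born model, inverted with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def forward_model(x):
    """Placeholder forward model (NOT the published equations).
    x = (mv, s, B): soil moisture (m3/m3), RMS height (cm), biomass (Mg/ha).
    Returns (sigma_HH, sigma_HV, sigma_VV) in dB."""
    mv, s, B = x
    sigma_vv = -18.0 + 22.0 * mv + 1.5 * np.log1p(s)                      # surface-dominated
    sigma_hh = -20.0 + 18.0 * mv + 1.0 * np.log1p(s) + 2.0 * np.log1p(B)  # surface + double bounce
    sigma_hv = -27.0 + 4.0 * np.log1p(B)                                  # volume-dominated
    return np.array([sigma_hh, sigma_hv, sigma_vv])

x_true = np.array([0.40, 3.0, 161.0])       # the reference case quoted in the abstract
observed = forward_model(x_true)

def residuals(x):
    return forward_model(x) - observed

# Mimic the sensitivity test: restart the inversion from random initial guesses.
rng = np.random.default_rng(0)
for _ in range(5):
    x0 = np.array([rng.uniform(0.05, 0.5), rng.uniform(0.5, 5.0), rng.uniform(10.0, 400.0)])
    sol = least_squares(residuals, x0, method="lm")
    print(x0.round(2), "->", sol.x.round(2))
```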

  6. RACORO Continental Boundary Layer Cloud Investigations: 3. Separation of Parameterization Biases in Single-Column Model CAM5 Simulations of Shallow Cumulus

    Science.gov (United States)

    Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; Fridlind, Ann; Endo, Satoshi; Song, Hua; Feng, Sha; Toto, Tami; Li, Zhijin; Zhang, Minghua

    2015-01-01

    Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) Project has constructed case studies from the Atmospheric Radiation Measurement Climate Research Facility's Southern Great Plain site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using the single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.

  7. Improvement and implementation of a parameterization for shallow cumulus in the global climate model ECHAM5-HAM

    Science.gov (United States)

    Isotta, Francesco; Spichtinger, Peter; Lohmann, Ulrike; von Salzen, Knut

    2010-05-01

    Convection is a crucial component of weather and climate. Its parameterization in General Circulation Models (GCMs) is one of the largest sources of uncertainty. Convection redistributes moisture and heat, affects the radiation budget and transports tracers from the PBL to higher levels. Shallow convection is very common over the globe, in particular over the oceans in the trade wind regions. A recently developed shallow convection scheme by von Salzen and McFarlane (2002) is implemented in the ECHAM5-HAM GCM in place of the standard convection scheme by Tiedtke (1989). The scheme of von Salzen and McFarlane (2002) is a bulk parameterization for an ensemble of transient shallow cumuli. A life cycle is considered, as well as inhomogeneities in the horizontal distribution of in-cloud properties due to mixing. The shallow convection scheme is further developed to take into account the ice phase and precipitation in the form of rain and snow. The implemented double-moment microphysics scheme for cloud droplets and ice crystals is consistent with the stratiform scheme and with the other types of convective clouds. The ice phase makes it possible to alter the criterion that distinguishes shallow convection from the other two types of convection, namely deep and mid-level, which are still calculated by the Tiedtke (1989) scheme. The launching layer of the test parcel in the shallow convection scheme is chosen as the one with maximum moist static energy in the three lowest levels. The latter is modified to the ``frozen moist static energy'' to account for the ice phase. Moreover, tracers (e.g. aerosols) are transported in the updraft and scavenged in and below clouds. As a first test of the performance of the new scheme and its interaction with the rest of the model, the Barbados Oceanographic and Meteorological EXperiment (BOMEX) and the Rain In Cumulus over the Ocean experiment (RICO) case are simulated with the single column model (SCM) and the results are compared with large eddy
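
    The launching-layer choice described above lends itself to a very small sketch: pick the level of maximum frozen moist static energy among the three lowest model levels. The definition of the frozen moist static energy used here (h_f = cp*T + g*z + Lv*qv - Lf*qi) and the profile values are assumptions for illustration, not the ECHAM5-HAM implementation.

```python
# Minimal sketch (not the ECHAM5-HAM code): pick the launching layer of the test
# parcel as the lowest-three-levels maximum of the frozen moist static energy.
import numpy as np

CP, G, LV, LF = 1004.0, 9.81, 2.5e6, 3.34e5  # J/kg/K, m/s2, J/kg, J/kg (assumed constants)

def frozen_moist_static_energy(T, z, qv, qi):
    """Assumed definition: h_f = cp*T + g*z + Lv*qv - Lf*qi."""
    return CP * T + G * z + LV * qv - LF * qi

# Illustrative near-surface profile, ordered from the lowest model level upward.
T  = np.array([298.0, 296.5, 295.0])   # K
z  = np.array([50.0, 250.0, 500.0])    # m
qv = np.array([0.016, 0.015, 0.013])   # kg/kg
qi = np.array([0.0, 0.0, 0.0])         # kg/kg

h_f = frozen_moist_static_energy(T, z, qv, qi)
launch_level = int(np.argmax(h_f))     # index of the launching layer
print(h_f, "-> launch from level", launch_level)
```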

  8. Invariant box parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, T.J.; Wagner, D.

    1998-01-01

    The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements. copyright 1998 American Institute of Physics

  9. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab

    2016-08-26

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference of the uncertain parameters is based on a Markov chain Monte Carlo (MCMC) scheme that utilizes a newly formulated test statistic taking into account the different components representing the structures of turbulent mixing on both daily and seasonal time scales in addition to the data quality, and filters for the effects of parameter perturbations over those as a result of changes in the wind. To avoid the prohibitive computational cost of integrating the MITgcm model at each MCMC iteration, a surrogate model for the test statistic using the PC method is built. Because of the noise in the model predictions, a basis-pursuit-denoising (BPDN) compressed sensing approach is employed to determine the PC coefficients of a representative surrogate model. The PC surrogate is then used to evaluate the test statistic in the MCMC step for sampling the posterior of the uncertain parameters. Results of the posteriors indicate good agreement with the default values for two parameters of the KPP model, namely the critical bulk and gradient Richardson numbers; while the posteriors of the remaining parameters were barely informative. © 2016 American Meteorological Society.
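
    A minimal, one-parameter illustration of the surrogate-accelerated MCMC idea is sketched below: an "expensive" misfit statistic is replaced by a Legendre polynomial chaos surrogate fitted on a small design, and a plain Metropolis sampler then runs on the surrogate. The ordinary least-squares fit stands in for the basis-pursuit-denoising step of the paper, and the statistic itself is a made-up stand-in for an MITgcm run.

```python
# Compact illustration of the PC-surrogate-plus-MCMC idea (1 uncertain parameter,
# Legendre basis, least-squares fit instead of BPDN, simple Metropolis sampler).
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def expensive_statistic(theta):
    """Stand-in for the misfit statistic that would require a full model run."""
    return (theta - 0.3) ** 2 / 0.05 + 0.1 * np.sin(8 * theta)

# 1) Build the surrogate from a small design of "model runs" on theta in [-1, 1].
design = np.linspace(-1.0, 1.0, 15)
coeffs = legendre.legfit(design, expensive_statistic(design), deg=6)
surrogate = lambda theta: legendre.legval(theta, coeffs)

# 2) Metropolis sampling on the surrogate: posterior ~ exp(-statistic) * uniform prior.
def log_post(theta):
    return -surrogate(theta) if -1.0 <= theta <= 1.0 else -np.inf

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

print("posterior mean:", np.mean(chain[1000:]))
```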

  10. Models in physics teaching

    DEFF Research Database (Denmark)

    Kneubil, Fabiana Botelho

    2016-01-01

    In this work we show an approach based on models, for a usual subject in an introductory physics course, in order to foster discussions on the nature of physical knowledge. The introduction of elements of the nature of knowledge in physics lessons has been emphasised by many educators, and one uses...... the case of metals to show the theoretical and phenomenological dimensions of physics. The discussion is made by means of four questions whose answers cannot be reached by either theoretical elements or experimental measurements alone. Between these two dimensions it is necessary to carry out a series...... of reasoning steps to deepen the comprehension of microscopic concepts, such as electrical resistivity, drift velocity and free electrons. When this approach is highlighted, beyond the physical content, aspects of its nature become explicit and may improve the structuring of knowledge for learners...

  11. Alternative Parameterization of the 3-PG Model for Loblolly Pine: A Regional Validation and Climate Change Assessment on Stand Productivity

    Science.gov (United States)

    Yang, J.; Gonzalez-Benecke, C. A.; Teskey, R. O.; Martin, T.; Jokela, E. J.

    2015-12-01

    Loblolly pine (Pinus taeda L.) is one of the fastest growing pine species. It has been planted on more than 10 million ha in the southeastern U.S., and has also been introduced into many countries. Using data from the literature and long-term productivity studies, we re-parameterized the 3-PG model for loblolly pine stands. We developed new functions for estimating NPP allocation dynamics, canopy cover and needlefall dynamics, effects of frost on production, density-independent and density-dependent tree mortality, biomass pools at variable starting ages, and the fertility rating. New functions to estimate merchantable volume partitioning were also included, allowing for economic analyses. The fertility rating was determined as a function of site index (mean height of dominant trees at age 25 years). We used the largest and most geographically extensive validation dataset for this species ever used (91 plots in 12 states in the U.S. and 10 plots in Uruguay). Comparison of modeled to measured data showed robust agreement across the natural range in the U.S., as well as in Uruguay, where the species is grown as an exotic. Using the new set of functions and parameters with downscaled projections from twenty different climate models, the model was applied to assess the impact of future climate change scenarios on stand productivity in the southeastern U.S.

  12. Parameterizing road construction in route-based road weather models: can ground-penetrating radar provide any answers?

    International Nuclear Information System (INIS)

    Hammond, D S; Chapman, L; Thornes, J E

    2011-01-01

    A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry. (technical design note)

  13. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yanlian [Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wu, Xiaocui [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Ju, Weimin [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Jiangsu Center for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing China; Chen, Jing M. [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wang, Shaoqiang [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Wang, Huimin [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Yuan, Wenping [State Key Laboratory of Earth Surface Processes and Resource Ecology, Future Earth Research Institute, Beijing Normal University, Beijing China; Andrew Black, T. [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Jassal, Rachhpal [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Ibrom, Andreas [Department of Environmental Engineering, Technical University of Denmark (DTU), Kgs. Lyngby Denmark; Han, Shijie [Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang China; Yan, Junhua [South China Botanical Garden, Chinese Academy of Sciences, Guangzhou China; Margolis, Hank [Centre for Forest Studies, Faculty of Forestry, Geography and Geomatics, Laval University, Quebec City Quebec Canada; Roupsard, Olivier [CIRAD-Persyst, UMR Ecologie Fonctionnelle and Biogéochimie des Sols et Agroécosystèmes, SupAgro-CIRAD-INRA-IRD, Montpellier France; CATIE (Tropical Agricultural Centre for Research and Higher Education), Turrialba Costa Rica; Li, Yingnian [Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining China; Zhao, Fenghua [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Kiely, Gerard [Environmental Research Institute, Civil and Environmental Engineering Department, University College Cork, Cork Ireland; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa Alabama USA; Pavelka, Marian [Laboratory of Plants Ecological Physiology, Institute of Systems Biology and Ecology AS CR, Prague Czech Republic; Montagnani, Leonardo [Forest Services, Autonomous Province of Bolzano, Bolzano Italy; Faculty of Sciences and Technology, Free University of Bolzano, Bolzano Italy; Wohlfahrt, Georg [Institute for Ecology, University of Innsbruck, Innsbruck Austria; European Academy of Bolzano, Bolzano Italy; D' Odorico, Petra [Grassland Sciences Group, Institute of Agricultural Sciences, ETH Zurich Switzerland; Cook, David [Atmospheric and Climate Research Program, Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Arain, M. 
Altaf [McMaster Centre for Climate Change and School of Geography and Earth Sciences, McMaster University, Hamilton Ontario Canada; Bonal, Damien [INRA Nancy, UMR EEF, Champenoux France; Beringer, Jason [School of Earth and Environment, The University of Western Australia, Crawley Australia; Blanken, Peter D. [Department of Geography, University of Colorado Boulder, Boulder Colorado USA; Loubet, Benjamin [UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay, Thiverval-Grignon France; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Athens Georgia USA; Matteucci, Giorgio [Viea San Camillo Ed LellisViterbo, University of Tuscia, Viterbo Italy; Nagy, Zoltan [MTA-SZIE Plant Ecology Research Group, Szent Istvan University, Godollo Hungary; Olejnik, Janusz [Meteorology Department, Poznan University of Life Sciences, Poznan Poland; Department of Matter and Energy Fluxes, Global Change Research Center, Brno Czech Republic; Paw U, Kyaw Tha [Department of Land, Air and Water Resources, University of California, Davis California USA; Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, Cambridge USA; Varlagin, Andrej [A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow Russia

    2016-04-06

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at 6 FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems and it is more robust with regard to usual biases in input data than existing approaches which neglect the bi-modal within-canopy distribution of PAR.

  14. CHEM2D-OPP: A new linearized gas-phase ozone photochemistry parameterization for high-altitude NWP and climate models

    Directory of Open Access Journals (Sweden)

    J. P. McCormack

    2006-01-01

    The new CHEM2D-Ozone Photochemistry Parameterization (CHEM2D-OPP) for high-altitude numerical weather prediction (NWP) systems and climate models specifies the net ozone photochemical tendency and its sensitivity to changes in ozone mixing ratio, temperature and overhead ozone column based on calculations from the CHEM2D interactive middle atmospheric photochemical transport model. We evaluate CHEM2D-OPP performance using both short-term (6-day) and long-term (1-year) stratospheric ozone simulations with the prototype high-altitude NOGAPS-ALPHA forecast model. An inter-comparison of NOGAPS-ALPHA 6-day ozone hindcasts for 7 February 2005 with ozone photochemistry parameterizations currently used in operational NWP systems shows that CHEM2D-OPP yields the best overall agreement with both individual Aura Microwave Limb Sounder ozone profile measurements and independent hemispheric (10°–90° N) ozone analysis fields. A 1-year free-running NOGAPS-ALPHA simulation using CHEM2D-OPP produces a realistic seasonal cycle in zonal mean ozone throughout the stratosphere. We find that the combination of a model cold temperature bias at high latitudes in winter and a warm bias in the CHEM2D-OPP temperature climatology can degrade the performance of the linearized ozone photochemistry parameterization over seasonal time scales, despite the fact that the parameterized temperature dependence is weak in these regions.
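
    The linearization underlying schemes of this type can be written in a few lines. In the sketch below, the net tendency is expanded to first order in ozone mixing ratio, temperature and overhead column; the reference state and the partial-derivative coefficients are placeholders, whereas in CHEM2D-OPP they come from tables computed with the CHEM2D model.

```python
# Sketch of a linearized ozone photochemistry tendency of the CHEM2D-OPP type:
# d(O3)/dt ~ (P-L)_0 + d(P-L)/dO3 * (O3 - O3_0) + d(P-L)/dT * (T - T_0)
#            + d(P-L)/dcol * (col - col_0)
# All reference values and coefficients below are illustrative placeholders.
def ozone_tendency(o3, temp, column,
                   o3_ref=6.0e-6, t_ref=220.0, col_ref=280.0,   # mol/mol, K, DU
                   pl0=0.0, d_do3=-1.0e-7, d_dt=2.0e-9, d_dcol=-5.0e-9):
    """Return the net photochemical tendency (mixing ratio per second),
    linearized about a reference state."""
    return (pl0
            + d_do3 * (o3 - o3_ref)
            + d_dt * (temp - t_ref)
            + d_dcol * (column - col_ref))

# Example: ozone 10% above the reference value relaxes back toward it.
print(ozone_tendency(o3=6.6e-6, temp=220.0, column=280.0))
```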

  15. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    Science.gov (United States)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  16. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Tongwen [China Meteorological Administration (CMA), National Climate Center (Beijing Climate Center), Beijing (China)

    2012-02-15

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convective cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is determined separately according to the increase/decrease of updraft parcel mass with altitude, and the mass change of the adiabatically ascending cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed to be saturated and to originate from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14)), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed by using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program's (ARM) summer 1995 and 1997 Intensive Observing Period (IOP) observations, and field observations from the Global Atmospheric Research
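
    The trigger logic in property (1) can be illustrated with a short sketch: launch at the level of maximum moist static energy above the boundary-layer top, and trigger deep convection only if CAPE is positive and the relative humidity at that level exceeds 75%. The profile values, the CAPE value and the helper names below are illustrative assumptions, not the scheme's code.

```python
# Sketch of the trigger logic described in the abstract (not the scheme's code):
# launch at the level of maximum moist static energy above the PBL top, and
# trigger deep convection only if CAPE > 0 and RH at that level exceeds 75%.
import numpy as np

CP, G, LV = 1004.0, 9.81, 2.5e6

def moist_static_energy(T, z, qv):
    return CP * T + G * z + LV * qv

def trigger_deep_convection(T, z, qv, rh, cape, pbl_top_index, rh_crit=0.75):
    """T, z, qv, rh: 1-D profiles ordered from the surface upward.
    cape: convective available potential energy (J/kg) for the candidate parcel.
    Returns (triggered, launch_index)."""
    mse = moist_static_energy(T, z, qv)
    above_pbl = np.arange(len(z)) > pbl_top_index
    launch = int(np.argmax(np.where(above_pbl, mse, -np.inf)))
    triggered = (cape > 0.0) and (rh[launch] > rh_crit)
    return triggered, launch

# Illustrative call with made-up profile values.
T  = np.array([300., 297., 293., 288., 283.])      # K
z  = np.array([50., 500., 1500., 2500., 3500.])    # m
qv = np.array([0.018, 0.016, 0.012, 0.009, 0.006]) # kg/kg
rh = np.array([0.70, 0.80, 0.85, 0.78, 0.60])
print(trigger_deep_convection(T, z, qv, rh, cape=350.0, pbl_top_index=1))
```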

  17. Parameterizing sub-surface drainage with geology to improve modeling streamflow responses to climate in data limited environments

    Directory of Open Access Journals (Sweden)

    C. L. Tague

    2013-01-01

    Hydrologic models are one of the core tools used to project how water resources may change under a warming climate. These models are typically applied over a range of scales, from headwater streams to higher order rivers, and for a variety of purposes, such as evaluating changes to aquatic habitat or reservoir operation. Most hydrologic models require streamflow data to calibrate subsurface drainage parameters. In many cases, long-term gage records may not be available for calibration, particularly when assessments are focused on low-order stream reaches. Consequently, hydrologic modeling of climate change impacts is often performed in the absence of sufficient data to fully parameterize these hydrologic models. In this paper, we assess a geologic-based strategy for assigning drainage parameters. We examine the performance of this modeling strategy for the McKenzie River watershed in the US Oregon Cascades, a region where previous work has demonstrated sharp contrasts in hydrology based primarily on geological differences between the High and Western Cascades. Based on calibration and verification using existing streamflow data, we demonstrate that: (1) a set of streams ranging from 1st to 3rd order within the Western Cascade geologic region can share the same drainage parameter set, while (2) streams from the High Cascade geologic region require a different parameter set. Further, we show that a watershed comprised of a mixture of High and Western Cascade geologies can be modeled without additional calibration by transferring parameters from these distinctive High and Western Cascade end-member parameter sets. More generally, we show that by defining a set of end-member parameters that reflect different geologic classes, we can more efficiently apply a hydrologic model over a geologically complex landscape and resolve geo-climatic differences in how different watersheds are likely to respond to simple warming scenarios.

  18. Parameterizing Bayesian network Representations of Social-Behavioral Models by Expert Elicitation

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Stephen J.; Dalton, Angela C.; Whitney, Paul D.; White, Amanda M.

    2010-05-23

    Bayesian networks provide a general framework with which to model many natural phenomena. The mathematical nature of Bayesian networks enables a plethora of model validation and calibration techniques: e.g., parameter estimation, goodness-of-fit tests, and diagnostic checking of the model assumptions. However, they are not free of shortcomings. Parameter estimation from relevant extant data is a common approach to calibrating the model parameters. In practice it is not uncommon to find oneself lacking adequate data to reliably estimate all model parameters. In this paper we present the early development of a novel application of conjoint analysis as a method for eliciting and modeling expert opinions and using the results in a methodology for calibrating the parameters of a Bayesian network.

  19. Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction

    NARCIS (Netherlands)

    Calamé, Jens R.; Ioustinova, Natalia; Romijn, J.M.T.; Smith, G.; van de Pol, Jan Cornelis

    2007-01-01

    Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to

  20. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
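
    The idea of a parametric likelihood approximation generated directly from stochastic simulations can be illustrated with a toy example: at each Metropolis step the simulator is run repeatedly, a normal distribution is fitted to the simulated summary statistic, and the observed summary is scored under it. The trivial simulator below stands in for FORMIND; it is not the authors' likelihood formulation.

```python
# Toy illustration of a parametric (synthetic) likelihood approximation inside a
# Metropolis sampler, with a trivial stochastic simulator standing in for the
# forest model.
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=20):
    """Stochastic 'model': summary statistic = mean of noisy outputs."""
    return (theta + rng.standard_normal(n)).mean()

def log_synthetic_likelihood(theta, observed, n_rep=30):
    """Fit a normal to repeated simulated summaries and score the observation."""
    sims = np.array([simulator(theta) for _ in range(n_rep)])
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-9
    return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd)

observed_summary = 1.3          # pretend this came from the (virtual) inventory data
theta, chain = 0.0, []
for _ in range(2000):
    prop = theta + 0.3 * rng.standard_normal()
    dlogp = log_synthetic_likelihood(prop, observed_summary) \
            - log_synthetic_likelihood(theta, observed_summary)
    if np.log(rng.uniform()) < dlogp:   # flat prior assumed
        theta = prop
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[500:]))
```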

  1. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to approximate Bayesian computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can

  2. Development of a CFD Model Including Tree's Drag Parameterizations: Application to Pedestrian's Wind Comfort in an Urban Area

    Science.gov (United States)

    Kang, G.; Kim, J.

    2017-12-01

    This study investigated the effect of trees on wind comfort at pedestrian height in an urban area using a computational fluid dynamics (CFD) model. We implemented a tree drag parameterization scheme in the CFD model and validated the simulated results against wind-tunnel measurement data as well as LES data via several statistical methods. The CFD model underestimated (overestimated) the concentrations on the leeward (windward) walls inside the street canyon in the presence of trees, because the CFD model cannot resolve the latticed cage and therefore cannot reflect the concentration increases and decreases caused by the latticed cage in the simulations. However, the scalar pollutant dispersion simulated by the CFD model was quite similar to that in the wind-tunnel measurement in pattern and magnitude, on the whole. The CFD model overall satisfied the statistical validation indices (root normalized mean square error, geometric mean variance, correlation coefficient, and FAC2) but failed to satisfy the fractional bias and geometric mean bias due to the underestimation on the leeward wall and overestimation on the windward wall, showing that its performance was comparable to that of the LES. We applied the CFD model to the evaluation of the effect of trees on pedestrians' wind comfort in an urban area. To investigate sensory levels for human activities, wind-comfort criteria based on Beaufort wind-force scales (BWSs) were used. In the tree-free scenario, BWS 4 and 5 (unpleasant conditions for sitting long and sitting short, respectively) appeared in the narrow spaces between buildings, on the upwind side of buildings, and in the unobstructed areas. In the tree scenario, BWSs decreased by 1-3 grades inside the campus of Pukyong National University located in the target area, which indicates that the trees planted on the campus effectively improve pedestrians' wind comfort.
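
    A small sketch of the comfort classification used above: map a pedestrian-height mean wind speed to a Beaufort wind-force scale (BWS) grade with the conventional upper bounds, then attach the sitting-comfort reading from the abstract (BWS 4 unpleasant for sitting long, BWS 5 for sitting short). The thresholds are the standard Beaufort boundaries in m/s; the comfort labels outside BWS 4-5 are assumptions.

```python
# Sketch: map a pedestrian-height mean wind speed to a Beaufort wind-force scale
# (BWS) grade using the conventional upper bounds (m/s), then attach the comfort
# reading used in the abstract for BWS 4 and 5.
import bisect

BWS_UPPER = [0.2, 1.5, 3.3, 5.4, 7.9, 10.7, 13.8, 17.1]  # grades 0..7; above 17.1 lumped as 8

def beaufort_grade(speed_ms):
    return bisect.bisect_left(BWS_UPPER, speed_ms) if speed_ms <= BWS_UPPER[-1] else 8

def sitting_comfort(grade):
    if grade <= 3:
        return "acceptable for sitting"       # assumed label
    if grade == 4:
        return "unpleasant for sitting long"
    if grade == 5:
        return "unpleasant for sitting short"
    return "unsuitable for sitting"           # assumed label

for u in (1.0, 4.0, 6.5, 9.0):
    g = beaufort_grade(u)
    print(f"{u:4.1f} m/s -> BWS {g}: {sitting_comfort(g)}")
```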

  3. Atmospheric water vapor transport: Estimation of continental precipitation recycling and parameterization of a simple climate model. M.S. Thesis

    Science.gov (United States)

    Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    The advective transport of atmospheric water vapor and its role in global hydrology and the water balance of continental regions are discussed and explored. The data set consists of ten years of global wind and humidity observations interpolated onto a regular grid by objective analysis. Atmospheric water vapor fluxes across the boundaries of selected continental regions are displayed graphically. The water vapor flux data are used to investigate the sources of continental precipitation. The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from surrounding areas external to the region; and (2) evaporation and transpiration from the land surface, i.e., recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. In a separate, but related, study estimates of ocean-to-land water vapor transport are used to parameterize an existing simple climate model, containing both land and ocean surfaces, that is intended to mimic the dynamics of continental climates.

  4. Lessons Learned From the Development and Parameterization of a Computer Simulation Model to Evaluate Task Modification for Health Care Providers.

    Science.gov (United States)

    Kasaie, Parastu; David Kelton, W; Ancona, Rachel M; Ward, Michael J; Froehle, Craig M; Lyons, Michael S

    2018-02-01

    Computer simulation is a highly advantageous method for understanding and improving health care operations with a wide variety of possible applications. Most computer simulation studies in emergency medicine have sought to improve allocation of resources to meet demand or to assess the impact of hospital and other system policies on emergency department (ED) throughput. These models have enabled essential discoveries that can be used to improve the general structure and functioning of EDs. Theoretically, computer simulation could also be used to examine the impact of adding or modifying specific provider tasks. Doing so involves a number of unique considerations, particularly in the complex environment of acute care settings. In this paper, we describe conceptual advances and lessons learned during the design, parameterization, and validation of a computer simulation model constructed to evaluate changes in ED provider activity. We illustrate these concepts using examples from a study focused on the operational effects of HIV screening implementation in the ED. Presentation of our experience should emphasize the potential for application of computer simulation to study changes in health care provider activity and facilitate the progress of future investigators in this field. © 2017 by the Society for Academic Emergency Medicine.

  5. Parameterized examination in econometrics

    Science.gov (United States)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.
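
    A hedged sketch of the parameterized-question idea (not DisPeL's implementation): randomize the coefficients of a linear demand function, ask for the point price elasticity at a random price, and compute the reference answer automatically so the question can be auto-assessed.

```python
# Sketch (not DisPeL code) of a parameterized question with a dynamically computed
# answer: randomize a linear demand function Q = a - b*P and ask for the point
# price elasticity at a randomly chosen price.
import random

def make_elasticity_question(seed=None):
    rng = random.Random(seed)
    a = rng.randint(80, 200)           # demand intercept
    b = rng.randint(2, 10)             # demand slope
    p = rng.randint(2, a // (2 * b))   # price at which elasticity is requested (keeps Q > 0)
    q = a - b * p
    elasticity = -b * p / q            # dQ/dP * P/Q for Q = a - b*P
    question = (f"Given the demand function Q = {a} - {b}P, compute the point "
                f"price elasticity of demand at P = {p}.")
    return question, round(elasticity, 3)

q, answer = make_elasticity_question(seed=42)
print(q)
print("reference answer:", answer)
```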

  6. Intercomparison and validation of snow albedo parameterization schemes in climate models

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Christina A.; Winther, Jan-Gunnar [Norwegian Polar Institute, Tromsoe (Norway)

    2005-09-01

    Snow albedo is known to be crucial for heat exchange at high latitudes and high altitudes, and is also an important parameter in General Circulation Models (GCMs) because of its strong positive feedback properties. In this study, seven GCM snow albedo schemes and a multiple linear regression model were intercompared and validated against 59 years of in situ data from Svalbard, the French Alps and six stations in the former Soviet Union. For each site, the significant meteorological parameters for modeling the snow albedo were identified by constructing 95% confidence intervals. The significant parameters were found to be temperature, snow depth, positive degree-days and a snow-depth dummy variable, and the multiple linear regression model was constructed to include these. Overall, the intercomparison showed that the modeled snow albedo varied more than the observed albedo for all models, and that the albedo was often underestimated. In addition, for several of the models, the snow albedo decreased at a faster rate or by a greater magnitude during the winter snow metamorphosis than the observed albedo. Both the temperature-dependent schemes and the prognostic schemes showed shortcomings. (orig.)
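
    The regression benchmark can be sketched in a few lines of ordinary least squares on the four predictors named above. The data generated below are synthetic placeholders, not the Svalbard, Alps or former-Soviet-Union records, and the coefficients have no physical standing.

```python
# Sketch of the multiple-linear-regression benchmark: albedo regressed on air
# temperature, snow depth, positive degree-day sum and a snow-depth dummy.
# The data below are synthetic placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 200
temp = rng.uniform(-25.0, 5.0, n)          # deg C
depth = rng.uniform(0.0, 1.5, n)           # m
pdd = np.maximum(temp, 0.0).cumsum() / 50  # crude positive-degree-day proxy
dummy = (depth > 0.05).astype(float)       # snow-depth dummy variable

albedo = (0.55 + 0.25 * dummy - 0.004 * temp - 0.01 * pdd
          + 0.05 * depth + 0.02 * rng.standard_normal(n)).clip(0.2, 0.95)

X = np.column_stack([np.ones(n), temp, depth, pdd, dummy])
coef, *_ = np.linalg.lstsq(X, albedo, rcond=None)
print("intercept, temp, depth, pdd, dummy coefficients:", coef.round(4))
```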

  7. Parameterization of Macropore Flow Using Dye-Tracer Infiltration Patterns in the SWAP Model

    NARCIS (Netherlands)

    Schaik, N.L.M.B.; Hendriks, R.F.A.; Dam, van J.C.

    2010-01-01

    Preferential flow is known to influence infiltration, soil moisture content distribution, groundwater response, and runoff generation. Various model concepts are used to simulate preferential flow. Preferential flow parameters are often determined by indirect optimization using outflow or discharge

  8. Setup of a Parameterized FE Model for the Die Roll Prediction in Fine Blanking using Artificial Neural Networks

    Science.gov (United States)

    Stanke, J.; Trauth, D.; Feuerhack, A.; Klocke, F.

    2017-09-01

    Die roll is a morphological feature of fine blanked sheared edges. The die roll reduces the functional part of the sheared edge. To compensate for the die roll, thicker sheet metal strips and secondary machining must be used. However, in order to avoid this, the influence of various fine blanking process parameters on the die roll has been studied experimentally and numerically, but there is still a lack of knowledge on the effects of some factors, and especially factor interactions, on the die roll. Recent advances in the field of artificial intelligence motivate the hybrid use of the finite element method and artificial neural networks to account for these non-considered parameters. Therefore, a set of simulations using a validated finite element model of fine blanking is first used to train an artificial neural network. Then the artificial neural network is trained with thousands of experimental trials. Thus, the objective of this contribution is to develop an artificial neural network that reliably predicts the die roll. Therefore, in this contribution, the setup of a fully parameterized 2D FE model is presented that will be used for batch training of an artificial neural network. The FE model enables an automatic variation of the edge radii of the blank punch and die plate, the counter and blank holder forces, the sheet metal thickness and part diameter, the V-ring height and position, the cutting velocity as well as material parameters covered by the Hensel-Spittel model for 16MnCr5 (1.7131, AISI/SAE 5115). The FE model is validated using experimental trials. The result of this contribution is an FE model suitable for performing 9623 simulations and for passing the simulated die roll width and height automatically to an artificial neural network.

  9. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    Science.gov (United States)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), that inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in a low-albedo medium. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied in the image reconstruction of a Laminar Optical Tomography system.

  10. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    Directory of Open Access Journals (Sweden)

    Christian Nowke

    2018-06-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final-state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces, foster a better understanding of neural network models, and help grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed.

  11. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network, and the data from the other ten stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at unknown stations in parameterizing solar radiation. The results indicate a good agreement between the parameterized solar radiation values and the actual measured values.
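
    A minimal sketch of this kind of parameterization, using scikit-learn rather than the authors' network, is shown below with the five predictors named in the abstract; the station data are replaced by synthetic values, so the fitted relationship is purely illustrative.

```python
# Sketch of the parameterization idea with scikit-learn (not the authors' network):
# predict global solar radiation from latitude, longitude, altitude, sunshine
# duration and period number. Station data are replaced with synthetic values.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([
    rng.uniform(4.0, 14.0, n),      # latitude (deg)
    rng.uniform(3.0, 15.0, n),      # longitude (deg)
    rng.uniform(0.0, 1500.0, n),    # altitude (m)
    rng.uniform(3.0, 10.0, n),      # sunshine duration (h/day)
    rng.integers(1, 13, n),         # period number (month)
])
# Synthetic "measured" radiation (MJ/m2/day) loosely tied to sunshine and latitude.
y = 10.0 + 1.2 * X[:, 3] - 0.15 * X[:, 0] + 0.5 * rng.standard_normal(n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0))
model.fit(X[:400], y[:400])                                      # "training stations"
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))  # "validation stations"
```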

  12. Proposing a Compartmental Model for Leprosy and Parameterizing Using Regional Incidence in Brazil.

    Science.gov (United States)

    Smith, Rebecca Lee

    2016-08-01

    Hansen's disease (HD), or leprosy, is still considered a public health risk in much of Brazil. Understanding the dynamics of the infection at a regional level can aid in identification of targets to improve control. A compartmental continuous-time model for leprosy dynamics was designed based on understanding of the biology of the infection. The transmission coefficients for the model and the rate of detection were fit for each region using Approximate Bayesian Computation applied to paucibacillary and multibacillary incidence data over the period of 2000 to 2010, and model fit was validated on incidence data from 2011 to 2012. Regional variation was noted in detection rate, with cases in the Midwest estimated to be infectious for 10 years prior to detection compared to 5 years for most other regions. Posterior predictions for the model estimated that elimination of leprosy as a public health risk would require, on average, 44-45 years in the three regions with the highest prevalence. The model is easily adaptable to other settings, and can be studied to determine the efficacy of improved case finding on leprosy control.

  13. Beyond Standard Model Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bellantoni, L.

    2009-11-01

    There are many recent results from searches for fundamental new physics using the TeVatron, the SLAC b-factory and HERA. This talk quickly reviewed searches for pair-produced stop, for gauge-mediated SUSY breaking, for Higgs bosons in the MSSM and NMSSM models, for leptoquarks, and v-hadrons. There is a SUSY model which accommodates the recent astrophysical experimental results that suggest that dark matter annihilation is occurring in the center of our galaxy, and a relevant experimental result. Finally, model-independent searches at D0, CDF, and H1 are discussed.

  14. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    Science.gov (United States)

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  15. On parameterization of heat conduction in coupled soil water and heat flow modelling

    Czech Academy of Sciences Publication Activity Database

    Votrubová, J.; Dohnal, M.; Vogel, T.; Tesař, Miroslav

    2012-01-01

    Vol. 7, No. 4 (2012), pp. 125-137 ISSN 1801-5395 R&D Projects: GA ČR GA205/08/1174 Institutional research plan: CEZ:AV0Z20600510 Keywords: advective heat flux * dual-permeability model * soil heat transport * soil thermal conductivity * surface energy balance Subject RIV: DA - Hydrology; Limnology Impact factor: 0.333, year: 2012

  16. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    Science.gov (United States)

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  17. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    Science.gov (United States)

    Chen, Y.

    2017-12-01

    Urbanization has been the global development trend of the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but it also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but studying this effect quantitatively still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in urbanized or urbanizing watersheds. The Pearl River Delta has seen some of the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the hydrological processes of highly urbanized watershed floods in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and guidance for parameter optimization. Based on the above analysis, the model is set up in the Songmushan watershed, where hydrological observations are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with the PSO algorithm and updates them based on the changed LUC types. The model parameters in the Songmushan watershed are regionalized to the other Pearl River Delta watersheds based on their LUC types. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta area were studied for the flood response changes due to
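
    A compact sketch of PSO-based calibration is given below: two hypothetical parameters of a toy linear-reservoir runoff model are tuned against synthetic "observed" flows by minimizing 1 - NSE. This is an illustration of the optimization pattern only, not the Liuxihe model or its parameter set.

```python
# Minimal PSO calibration sketch (not the Liuxihe model): two hypothetical
# parameters of a toy linear-reservoir response are optimized against synthetic
# "observed" flows by maximizing the Nash-Sutcliffe efficiency.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.gamma(0.5, 4.0, 120)                     # synthetic rainfall series

def simulate(k, c):
    """Toy runoff model: linear reservoir with recession k and runoff coefficient c."""
    q = np.zeros_like(rain)
    store = 0.0
    for t, p in enumerate(rain):
        store = k * store + c * p
        q[t] = (1.0 - k) * store
    return q

obs = simulate(0.8, 0.45) + 0.05 * rng.standard_normal(rain.size)

def neg_nse(params):
    sim = simulate(*params)
    return np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)  # equals 1 - NSE

# Plain PSO: 20 particles, bounds k in (0.01, 0.99), c in (0.01, 1.0).
lo, hi = np.array([0.01, 0.01]), np.array([0.99, 1.0])
pos = rng.uniform(lo, hi, (20, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([neg_nse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, 20, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([neg_nse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("calibrated (k, c):", gbest.round(3), " 1-NSE:", round(pbest_val.min(), 4))
```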

  18. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    Directory of Open Access Journals (Sweden)

    Rachel R. Sleeter

    2015-06-01

    Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
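
    A small example of a distance-decay spatial multiplier is sketched below: cells are down-weighted exponentially with distance from existing developed cells before urbanization transitions are placed. The exponential form, the 2 km e-folding length and the grid are assumptions for illustration, not the LUCAS parameter values.

```python
# Sketch of a distance-decay spatial multiplier: transition probabilities are
# scaled down with distance from existing developed cells. The exponential form
# and the 2 km e-folding length are illustrative assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

cell_size_m = 250.0
developed = np.zeros((40, 40), dtype=bool)
developed[18:22, 18:22] = True                      # toy "urban core"

# Distance (m) of every cell to the nearest developed cell.
dist_m = distance_transform_edt(~developed) * cell_size_m

# Exponential distance-decay multiplier in [0, 1] applied to urbanization transitions.
multiplier = np.exp(-dist_m / 2000.0)
print("multiplier just outside the core vs. about 4.5 km away:",
      round(multiplier[22, 20], 3), round(multiplier[20, 39], 3))
```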

  19. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    Science.gov (United States)

    Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.

    2015-01-01

    Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
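
    Both records state that the spatial multipliers constraining future LULC transitions are based on distance-decay theory, but the functional form is not given here. The following minimal sketch assumes a simple exponential decay of transition likelihood with distance to existing development; the decay length, raster inputs, and function name are hypothetical.

```python
import numpy as np
from scipy import ndimage

def distance_decay_multiplier(developed_mask, cell_size_m, decay_length_m=2000.0):
    """Spatial multiplier in [0, 1]: cells close to existing development
    are favoured for new urbanization-type transitions.

    developed_mask : 2-D boolean array, True where land is already developed
    """
    # Euclidean distance (in metres) from every cell to the nearest developed cell
    dist = ndimage.distance_transform_edt(~developed_mask) * cell_size_m
    return np.exp(-dist / decay_length_m)

# toy example: a single developed cell in the centre of a 5 x 5 grid of 1 km cells
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(distance_decay_multiplier(mask, cell_size_m=1000.0).round(2))
```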

  20. Parameterization of a complex landscape for a sediment routing model of the Le Sueur River, southern Minnesota

    Science.gov (United States)

    Belmont, P.; Viparelli, E.; Parker, G.; Lauer, W.; Jennings, C.; Gran, K.; Wilcock, P.; Melesse, A.

    2008-12-01

    Modeling sediment fluxes and pathways in complex landscapes is limited by our inability to accurately measure and integrate heterogeneous, spatially distributed sources into a single coherent, predictive geomorphic transport law. In this study, we partition the complex landscape of the Le Sueur River watershed into five distributed primary source types: bluffs (including strath terrace caps), ravines, streambanks, tributaries, and flat, agriculture-dominated uplands. The sediment contribution of each source is quantified independently and parameterized for use in a sand and mud routing model. Rigorous modeling of the evolution of this landscape and sediment flux from each source type requires consideration of substrate characteristics, heterogeneity, and spatial connectivity. The subsurface architecture of the Le Sueur drainage basin is defined by a layer cake sequence of fine-grained tills, interbedded with fluvioglacial sands. Nearly instantaneous baselevel fall of 65 m occurred at 11.5 ka, as a result of the catastrophic draining of glacial Lake Agassiz through the Minnesota River, to which the Le Sueur is a tributary. The major knickpoint that was generated from that event has propagated 40 km into the Le Sueur network, initiating an incised river valley with tall, retreating bluffs and actively incising ravines. Loading estimates constrained by river gaging records that bound the knick zone indicate that bluffs connected to the river are retreating at an average rate of less than 2 cm per year and ravines are incising at an average rate of less than 0.8 mm per year, consistent with the Holocene average incision rate on the main stem of the river of less than 0.6 mm per year. Ongoing work with cosmogenic nuclide sediment tracers, ground-based LiDAR, historic aerial photos, and field mapping will be combined to represent the diversity of erosional environments and processes in a single coherent routing model.

  1. Investigating Marine Boundary Layer Parameterizations by Combining Observations with Models via State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Delle Monahce, Luca [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Clifton, Andrew [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hacker, Joshua [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Kosovic, Branko [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Lee, Jared [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Vanderberghe, Francois [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Wu, Yonghui [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Hawkins, Sam [Vattenfall, Solna Municipality (Sweden); Nissen, Jesper [Vattenfall, Solna Municipality (Sweden)

    2015-06-30

    In this project we have improved numerical weather prediction analyses and forecasts of low-level winds in the marine boundary layer (MBL). This has been accomplished with the following tools: the National Center for Atmospheric Research (NCAR) Weather Research and Forecasting model (WRF), in both its single-column (SCM) and three-dimensional (3D) versions; the National Oceanic and Atmospheric Administration (NOAA) Wave Watch III model (WWIII); state estimation (SE) algorithms from the Data Assimilation Research Testbed (DART, Anderson et al. 2009); and observations of key quantities of the lower MBL, including temperature and winds at multiple levels above the sea surface. The experiments with the WRF SCM / DART system have led to large improvements with respect to a standard WRF configuration that is currently in common use by the wind energy industry. The single-column model appears to be a tool particularly suitable for offshore wind energy applications given its accuracy, its ability to quantify uncertainty, and its minimal computational resource requirements. In situations where the impact of an upwind wind park may be of interest at a downwind location, a 3D approach may be more suitable. We have demonstrated that with the WRF 3D / DART system the accuracy of wind predictions (and other meteorological parameters) can be improved over a 3D computational domain, and not only at specific locations. All the scripting systems developed in this project (i.e., to run WRF SCM / DART, WRF 3D / DART, and the coupling between WRF and WWIII) and the several modifications and upgrades made to the WRF SCM will be shared with the broader community.

  2. Parameterizations of Chromospheric Condensations in dG and dMe Model Flare Atmospheres

    Science.gov (United States)

    Kowalski, Adam F.; Allred, Joel C.

    2018-01-01

    The origin of the near-ultraviolet and optical continuum radiation in flares is critical for understanding particle acceleration and impulsive heating in stellar atmospheres. Radiative-hydrodynamic (RHD) simulations in 1D have shown that high energy deposition rates from electron beams produce two flaring layers at T ∼ 10^4 K that develop in the chromosphere: a cooling condensation (downflowing compression) and heated non-moving (stationary) flare layers just below the condensation. These atmospheres reproduce several observed phenomena in flare spectra, such as the red-wing asymmetry of the emission lines in solar flares and a small Balmer jump ratio in M dwarf flares. The high beam flux simulations are computationally expensive in 1D, and the (human) timescales for completing NLTE models with adaptive grids in 3D will likely be unwieldy for some time to come. We have developed a prescription for predicting the approximate evolved states, continuum optical depth, and emergent continuum flux spectra of RHD model flare atmospheres. These approximate prescriptions are based on an important atmospheric parameter: the column mass (m_ref) at which hydrogen becomes nearly completely ionized at the depths that are approximately in steady state with the electron beam heating. Using this new modeling approach, we find that high energy flux density (>F11) electron beams are needed to reproduce the brightest observed continuum intensity in IRIS data of the 2014 March 29 X1 solar flare, and that variation in m_ref from 0.001 to 0.02 g cm^-2 reproduces most of the observed range of the optical continuum flux ratios at the peak of M dwarf flares.

  3. Modeling and Parameterization of Fuel Economy in Heavy Duty Vehicles (HDVs

    Directory of Open Access Journals (Sweden)

    Yunjung Oh

    2014-08-01

    Full Text Available The present paper suggests fuel consumption modeling for HDVs based on the code from the Japanese Ministry of the Environment. Two interpolation models (inverse distance weighted (IDW) and Hermite) and three types of fuel efficiency maps (coarse, medium, and dense) were adopted to determine the most appropriate combination for further studies. Finally, sensitivity analysis studies were conducted to determine which parameters most strongly impact the fuel efficiency prediction results for HDVs. While varying each parameter by specific percentages (±1%, ±3%, ±5%, ±10%), the change in the fuel efficiency results was analyzed, and the main factors affecting fuel efficiency were summarized. As a result, the Japanese transformation algorithm program showed good agreement with the fuel efficiency test results, with slightly higher prediction accuracy when applying the Hermite interpolation method rather than IDW interpolation. The prediction accuracy of fuel efficiency remained unchanged regardless of the chosen fuel efficiency map data density. According to the sensitivity analysis study, three parameters (fuel consumption map data, driving force, and gross vehicle weight) have the greatest impact on fuel efficiency (±5% to ±10% changes).
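
    As a rough illustration of two ingredients named in the abstract, the sketch below interpolates a fuel consumption value from a hypothetical (rpm, torque) engine map with inverse distance weighting and then applies a one-at-a-time ±5 % perturbation of the map values, in the spirit of the sensitivity analysis; the map nodes, power exponent, and operating point are all made up.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation at a single query point.
    points : (N, 2) array of (rpm, torque) map nodes
    values : (N,) fuel consumption at those nodes (e.g. g/s)
    """
    d = np.linalg.norm(points - query, axis=1)
    w = 1.0 / (d + eps) ** power
    return np.sum(w * values) / np.sum(w)

# hypothetical coarse engine map
nodes = np.array([[1000, 50], [1000, 150], [2000, 50], [2000, 150]], float)
fuel = np.array([0.8, 1.9, 1.5, 3.2])            # g/s at each node

base = idw(nodes, fuel, np.array([1500.0, 100.0]))

# one-at-a-time sensitivity: perturb the map values by +/- 5 %
for p in (-0.05, 0.05):
    perturbed = idw(nodes, fuel * (1.0 + p), np.array([1500.0, 100.0]))
    print(f"map {p:+.0%}: fuel rate changes by {100 * (perturbed / base - 1):+.1f} %")
```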

  4. Third nearest neighbor parameterized tight binding model for graphene nano-ribbons

    Directory of Open Access Journals (Sweden)

    Van-Truong Tran

    2017-07-01

    Full Text Available The existing tight binding models can very well reproduce the ab initio band structure of a 2D graphene sheet. For graphene nano-ribbons (GNRs), the current sets of tight binding parameters can successfully describe the semi-conducting behavior of all armchair GNRs. However, they still fail to accurately reproduce the slope of the bands, which is directly associated with the group velocity and the effective mass of electrons. In this work, both density functional theory and tight binding calculations were performed and a new set of tight binding parameters up to the third nearest neighbors including overlap terms is introduced. The results obtained with this model offer excellent agreement with the predictions of the density functional theory in most cases of ribbon structures, even in the high-energy region. Moreover, this set can induce electron-hole asymmetry as manifested in results from density functional theory. Relevant outcomes are also achieved for armchair ribbons of various widths as well as for zigzag structures, thus opening a route for multi-scale atomistic simulation of large systems that cannot be considered using density functional theory.
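
    The third-nearest-neighbor parameters and overlap terms introduced in the paper are not reproduced in this record. As a baseline, the first-nearest-neighbor pi-band dispersion of a 2D graphene sheet can be written as E(k) = ±t|f(k)|, with f(k) the sum of phase factors over the three nearest-neighbor vectors; the sketch below evaluates this with a typical literature hopping value and checks that the energy vanishes at the Dirac point. The 3NN model refines exactly this structure with additional hopping and overlap integrals.

```python
import numpy as np

a = 1.42e-10          # carbon-carbon bond length (m)
t = -2.7              # nearest-neighbour hopping (eV), a typical literature value

# nearest-neighbour vectors of the honeycomb lattice
delta = a * np.array([[1.0, 0.0],
                      [-0.5,  np.sqrt(3) / 2],
                      [-0.5, -np.sqrt(3) / 2]])

def bands(kx, ky):
    """pi-band energies of 2-D graphene in the first-neighbour approximation."""
    k = np.stack([kx, ky], axis=-1)
    f = np.exp(1j * k @ delta.T).sum(axis=-1)    # structure factor f(k)
    return np.abs(t) * np.abs(f), -np.abs(t) * np.abs(f)

# energy at the K point (Dirac point): should be ~0 eV for both bands
K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
print(bands(np.array([K[0]]), np.array([K[1]])))
```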

  5. A new parameterization for surface ocean light attenuation in Earth System Models: assessing the impact of light absorption by colored detrital material

    OpenAIRE

    G. E. Kim; M.-A. Pradal; A. Gnanadesikan

    2015-01-01

    Light limitation can affect the distribution of biota and nutrients in the ocean. Light absorption by colored detrital material (CDM) was included in a fully coupled Earth System Model using a new parameterization for shortwave attenuation. Two model runs were conducted, with and without light attenuation by CDM. In a global average sense, greater light limitation associated with CDM increased surface chlorophyll, biomass and nutrients together. These changes can be attribut...
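
    The truncated abstract does not give the attenuation formula. A minimal sketch, assuming a Beer-Lambert-type profile in which the diffuse attenuation coefficient is the sum of contributions from water, chlorophyll, and CDM, is shown below; all coefficient values are illustrative placeholders rather than the ones used in the model.

```python
import numpy as np

def par_profile(surface_par, z, chl, a_cdm_450,
                k_w=0.04, k_chl=0.03, k_cdm=0.5):
    """Photosynthetically available radiation vs. depth under a simple
    Beer-Lambert attenuation: k_total = k_w + k_chl*Chl + k_cdm*a_CDM(450).

    surface_par : W m-2 just below the surface
    z           : depths (m, positive downward)
    chl         : chlorophyll concentration (mg m-3)
    a_cdm_450   : CDM absorption at 450 nm (m-1)
    All coefficients here are illustrative, not the values used in the paper.
    """
    k_total = k_w + k_chl * chl + k_cdm * a_cdm_450
    return surface_par * np.exp(-k_total * np.asarray(z))

z = np.arange(0, 101, 10)
print(par_profile(150.0, z, chl=0.3, a_cdm_450=0.02).round(1))
```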

  6. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    Energy Technology Data Exchange (ETDEWEB)

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds that is present across a wide range of scales, from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  7. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    Science.gov (United States)

    Astitha, M.; Lelieveld, J.; Abdel Kader, M.; Pozzer, A.; de Meij, A.

    2012-11-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70-75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.

  8. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    Directory of Open Access Journals (Sweden)

    M. Astitha

    2012-11-01

    Full Text Available Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70–75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.
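
    Neither record reproduces the emission equations of the two EMAC scheme versions. For orientation, a generic saltation-based emission sketch in the spirit of widely used schemes (a White-1979-type horizontal flux above a threshold friction velocity, scaled by a sandblasting efficiency) is given below; the constants and the sandblasting efficiency are placeholders, not the EMAC values.

```python
def saltation_flux(u_star, u_star_t, rho_air=1.23, g=9.81, c=2.61):
    """Horizontal saltation flux (kg m-1 s-1), White (1979)-type formulation:
    G = c * rho/g * u*^3 * (1 + u*t/u*) * (1 - u*t^2/u*^2) for u* > u*t, else 0.
    """
    if u_star <= u_star_t:
        return 0.0
    r = u_star_t / u_star
    return c * rho_air / g * u_star**3 * (1.0 + r) * (1.0 - r**2)

def vertical_dust_flux(u_star, u_star_t, sandblasting_eff=1e-5):
    """Vertical dust emission (kg m-2 s-1) as a fraction of the saltation flux;
    the sandblasting efficiency depends on soil texture and is a placeholder here."""
    return sandblasting_eff * saltation_flux(u_star, u_star_t)

# example: friction velocity 0.5 m/s, threshold 0.25 m/s
print(vertical_dust_flux(u_star=0.5, u_star_t=0.25))
```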

  9. Parameterization of Nitrogen Limitation for a Dynamic Ecohydrological Model: a Case Study from the Luquillo Critical Zone Observatory

    Science.gov (United States)

    Bastola, S.; Bras, R. L.

    2017-12-01

    Feedbacks between vegetation and the soil nutrient cycle are important in ecosystems where nitrogen limits plant growth and consequently influences the carbon balance in the plant-soil system. However, many biosphere models do not include such feedbacks because interactions between the carbon and nitrogen cycles can be complex and remain poorly understood. In this study we coupled a nitrogen cycle model with an eco-hydrological model by using the concept of carbon cost economics. This concept accounts for different "costs" to the plant of acquiring nitrogen via different pathways. This study builds on tRIBS-VEGGIE, a spatially explicit hydrological model coupled with a model of photosynthesis, stomatal resistance, and energy balance, by combining it with a model of nitrogen recycling. Driven by climate and spatially explicit data of soils, vegetation and topography, the model (referred to as tRIBS-VEGGIE-CN) simulates the dynamics of carbon and nitrogen in the soil-plant system; the dynamics of vegetation; and different components of the hydrological cycle. The tRIBS-VEGGIE-CN is applied in a humid tropical watershed at the Luquillo Critical Zone Observatory (LCZO). The region is characterized by high availability and cycling of nitrogen, high soil respiration rates, and large carbon stocks. We drive the model under contemporary CO2 and hydro-climatic forcing and compare the results to a simulation under doubled CO2 and a range of future climate scenarios. The results with the parameterization of nitrogen limitation based on carbon cost economics show that the carbon cost of nitrogen acquisition is 14% of the net primary productivity (NPP), and that the N uptake costs of the different pathways vary over a large range depending on leaf nitrogen content, turnover rates of carbon in soil, and nitrogen cycling processes. Moreover, the N fertilization simulation experiment shows that the application of N fertilizer does not significantly change the simulated NPP. Furthermore, an

  10. Least square regression based integrated multi-parameteric demand modeling for short term load forecasting

    International Nuclear Information System (INIS)

    Halepoto, I.A.; Uqaili, M.A.

    2014-01-01

    Nowadays, due to the power crisis, electricity demand forecasting is deemed an important area for socioeconomic development, and accurate load forecasting is considered an essential step towards efficient power system operation, scheduling and planning. In this paper, we present STLF (Short Term Load Forecasting) using multiple regression techniques (i.e. linear, multiple linear, quadratic and exponential) by considering an hour-by-hour load model based on a specific targeted day approach with temperature as a variant parameter. The proposed work forecasts the future load demand and its correlation with linear and non-linear parameters (i.e. temperature in our case) through different regression approaches. The overall load forecasting error is 2.98%, which is very much acceptable. Among the proposed regression techniques, the quadratic regression technique performs better than the other techniques because it can optimally fit a broad range of functions and data sets. The work proposed in this paper will pave the way to effectively forecast the load of a specific day with multiple variance factors while maintaining optimal accuracy. (author)
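
    As a compact illustration of the quadratic, temperature-dependent hour-by-hour regression described above, the sketch below fits load = b0 + b1·T + b2·T² by ordinary least squares on synthetic data and reports an in-sample MAPE; the numbers are invented, and the 2.98 % figure in the abstract is not reproduced here.

```python
import numpy as np

# synthetic history for one hour-of-day: temperature (deg C) and load (MW)
temp = np.array([22, 25, 28, 30, 33, 35, 38, 40], dtype=float)
load = np.array([810, 845, 900, 955, 1040, 1110, 1230, 1330], dtype=float)

# quadratic model: load = b0 + b1*T + b2*T^2, solved by least squares
X = np.column_stack([np.ones_like(temp), temp, temp**2])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)

def forecast(t):
    return beta[0] + beta[1] * t + beta[2] * t**2

pred = forecast(temp)
mape = np.mean(np.abs((load - pred) / load)) * 100   # mean absolute percentage error
print(f"coefficients: {beta.round(3)}, in-sample MAPE = {mape:.2f} %")
print(f"forecast for 36 deg C: {forecast(36.0):.0f} MW")
```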

  11. Sensitivity analysis of a parameterization of the stomatal component of the DO3SE model for Quercus ilex to estimate ozone fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Rocio [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: rocio.alonso@ciemat.es; Elvira, Susana [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: susana.elvira@ciemat.es; Sanz, Maria J. [Fundacion CEAM, Charles Darwin 14, 46980 Paterna, Valencia (Spain)], E-mail: mjose@ceam.es; Gerosa, Giacomo [Department of Mathematics and Physics, Universita Cattolica del Sacro Cuore, via Musei 41, 25121 Brescia (Italy)], E-mail: giacomo.gerosa@unicatt.it; Emberson, Lisa D. [Stockholm Environment Institute, University of York, York YO 10 5DD (United Kingdom)], E-mail: lde1@york.ac.uk; Bermejo, Victoria [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: victoria.bermejo@ciemat.es; Gimeno, Benjamin S. [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: benjamin.gimeno@ciemat.es

    2008-10-15

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (g_s) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured g_s in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both f_phen and f_SWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak.

  12. Sensitivity analysis of a parameterization of the stomatal component of the DO3SE model for Quercus ilex to estimate ozone fluxes

    International Nuclear Information System (INIS)

    Alonso, Rocio; Elvira, Susana; Sanz, Maria J.; Gerosa, Giacomo; Emberson, Lisa D.; Bermejo, Victoria; Gimeno, Benjamin S.

    2008-01-01

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (g_s) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured g_s in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both f_phen and f_SWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak.
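
    The DO3SE stomatal module is commonly written as a multiplicative model, g_s = g_max · f_phen · f_light · max(f_min, f_temp · f_VPD · f_SWP), which is the structure referred to in both records. The sketch below shows that structure with placeholder parameter values; the Quercus ilex parameterizations discussed in the papers are not reproduced.

```python
import numpy as np

def f_temp(T, T_min=1.0, T_opt=23.0, T_max=39.0):
    """Temperature response, 0..1 (placeholder parameter values)."""
    if T <= T_min or T >= T_max:
        return 0.0
    b = (T_max - T_opt) / (T_opt - T_min)
    return ((T - T_min) / (T_opt - T_min)) * ((T_max - T) / (T_max - T_opt)) ** b

def f_vpd(vpd, vpd_min=3.2, vpd_max=1.0):
    """Linear decline of g_s between VPD_max and VPD_min (kPa), placeholder values."""
    return float(np.clip((vpd_min - vpd) / (vpd_min - vpd_max), 0.0, 1.0))

def g_sto(f_phen, f_light, f_T, f_VPD, f_SWP, g_max=200.0, f_min=0.02):
    """Multiplicative DO3SE-style stomatal conductance (arbitrary g_max units)."""
    return g_max * f_phen * f_light * max(f_min, f_T * f_VPD * f_SWP)

print(g_sto(f_phen=1.0, f_light=0.8, f_T=f_temp(25.0),
            f_VPD=f_vpd(1.5), f_SWP=0.6))
```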

  13. Modeling the regional impact of ship emissions on NOx and ozone levels over the Eastern Atlantic and Western Europe using ship plume parameterization

    Directory of Open Access Journals (Sweden)

    P. Pisoft

    2010-07-01

    Full Text Available In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this could result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over the remote ocean and 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, making a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that the large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed the ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modeled values, and it was found that activating the parameterization increases the model accuracy.

  14. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications, land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the recharge of groundwater from losing streams. Through the analysis of observed soil moisture data from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that the regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  15. Development, Parameterization, and Validation of a Visco-Plastic Material Model for Sand with DifferentLevels of Water Saturation

    Science.gov (United States)

    2009-01-01

    …in essential physics of the tire–sand interactions. Towards that end, a simpler ribbed-tread tire model (described below) of the type often used for… i.e. the deflection and the contact area) on a rigid surface. The tire was modelled in the present work using a ribbed-tread tire model similar to… with material properties representing the composite behaviour through the carcass thickness. The tread-cap is constructed using linear, hybrid

  16. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    Science.gov (United States)

    Chen, Ying; Wolke, Ralf; Ran, Liang; Birmili, Wolfram; Spindler, Gerald; Schröder, Wolfram; Su, Hang; Cheng, Yafang; Tegen, Ina; Wiedensohler, Alfred

    2018-01-01

    The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10-25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3-]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3-] was significantly overestimated for this period by a factor of 5-19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. The NewN2O5 significantly reduces the overestimation of [NO3-] by ~35 %. Particularly, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17-18 and 25 September 2013) when [NO3-] was dominated by local chemical formations. In our case, the suppression of organic coating was negligible over western and central Europe
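
    The first-order loss rate underlying such parameterizations is usually written as k = (1/4) · γ · c_bar · S, where γ is the uptake coefficient, c_bar the mean molecular speed of N2O5, and S the aerosol surface area concentration; the T-, RH- and composition-dependence of γ is precisely what NewN2O5 parameterizes and is replaced by a fixed placeholder value in the sketch below.

```python
import numpy as np

R = 8.314          # J mol-1 K-1
M_N2O5 = 0.108     # kg mol-1

def mean_molecular_speed(T):
    """Mean thermal speed of N2O5 molecules (m s-1): sqrt(8RT / (pi M))."""
    return np.sqrt(8.0 * R * T / (np.pi * M_N2O5))

def k_het(gamma, T, S):
    """First-order heterogeneous loss rate (s-1): k = gamma * c_mean * S / 4.
    gamma : uptake coefficient (dimensionless); in the paper it is a function
            of T, RH, and particle composition -- here a fixed placeholder.
    S     : aerosol surface area concentration (m2 per m3 of air)
    """
    return 0.25 * gamma * mean_molecular_speed(T) * S

# example: gamma = 0.02, T = 280 K, S = 200 um2 cm-3 = 2e-4 m2 m-3
print(f"k = {k_het(0.02, 280.0, 2e-4):.2e} s-1")
```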

  17. A linear CO chemistry parameterization in a chemistry-transport model: evaluation and application to data assimilation

    Directory of Open Access Journals (Sweden)

    M. Claeyman

    2010-07-01

    Full Text Available This paper presents an evaluation of a new linear parameterization valid for the troposphere and the stratosphere, based on a first order approximation of the carbon monoxide (CO) continuity equation. This linear scheme (hereinafter noted LINCO) has been implemented in the 3-D Chemical Transport Model (CTM) MOCAGE (MOdèle de Chimie Atmospherique Grande Echelle). First, one and a half years of LINCO simulation have been compared to output obtained from a detailed chemical scheme. The mean differences between both schemes are about ±25 ppbv (part per billion by volume) or 15% in the troposphere and ±10 ppbv or 100% in the stratosphere. Second, LINCO has been compared to diverse observations from satellite instruments covering the troposphere (Measurements Of Pollution In The Troposphere: MOPITT) and the stratosphere (Microwave Limb Sounder: MLS) and also from aircraft (Measurements of ozone and water vapour by Airbus in-service aircraft: MOZAIC programme) mostly flying in the upper troposphere and lower stratosphere (UTLS). In the troposphere, the LINCO seasonal variations as well as the vertical and horizontal distributions are quite close to MOPITT CO observations. However, a bias of ~−40 ppbv is observed at 700 hPa between LINCO and MOPITT. In the stratosphere, MLS and LINCO present similar large-scale patterns, except over the poles where the CO concentration is underestimated by the model. In the UTLS, LINCO presents small biases less than 2% compared to independent MOZAIC profiles. Third, we assimilated MOPITT CO using a variational 3D-FGAT (First Guess at Appropriate Time) method in conjunction with MOCAGE for a long run of one and a half years. The data assimilation greatly improves the vertical CO distribution in the troposphere from 700 to 350 hPa compared to independent MOZAIC profiles. At 146 hPa, the assimilated CO distribution is also improved compared to MLS observations by reducing the bias up to a factor of 2 in the tropics
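
    The LINCO coefficients themselves are derived from the detailed chemistry scheme and are not given in this record. The sketch below only shows the generic structure of a first-order (Cariolle-type) linearization of the CO continuity equation about a reference state, integrated for a single air parcel; the coefficient values are arbitrary placeholders.

```python
import numpy as np

def linco_tendency(co, temp, coeffs):
    """Linearized CO chemical tendency (ppbv s-1) about a reference state:

        dC/dt = A1 + A2 * (C - C_ref) + A3 * (T - T_ref)

    A1 is the net production at the reference state, A2 an effective first-order
    loss frequency, A3 a temperature sensitivity. In a real linear scheme these
    coefficients come from a detailed chemistry model and vary with latitude,
    pressure, and month; the values used below are arbitrary placeholders.
    """
    A1, A2, A3, co_ref, t_ref = coeffs
    return A1 + A2 * (co - co_ref) + A3 * (temp - t_ref)

# forward-Euler integration of a single air parcel over 10 days
coeffs = (1.0e-6, -1.0 / (45 * 86400.0), 2.0e-8, 90.0, 250.0)  # placeholders
co, dt = 120.0, 3600.0
for _ in range(int(10 * 86400 / dt)):
    co += linco_tendency(co, 255.0, coeffs) * dt
print(f"CO after 10 days: {co:.1f} ppbv")
```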

  18. A review of the theoretical basis for bulk mass flux convective parameterization

    Directory of Open Access Journals (Sweden)

    R. S. Plant

    2010-04-01

    Full Text Available Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud-work function, the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use a parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
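
    The bulk equations under review describe a single representative plume whose mass flux and in-plume properties evolve through entrainment and detrainment. A minimal numerical sketch of the steady-state forms, dM/dz = (ε − δ)M and d(Mφ)/dz = εMφ_env − δMφ, is given below with illustrative profiles and rates.

```python
import numpy as np

def bulk_plume(z, phi_env, eps, delta, m0, phi0):
    """Integrate the steady bulk entraining-detraining plume equations upward:

        dM/dz       = (eps - delta) * M
        d(M*phi)/dz = eps * M * phi_env - delta * M * phi

    z          : heights (m), increasing
    phi_env    : environmental value of a conserved variable on z
    eps, delta : fractional entrainment / detrainment rates (m-1)
    """
    m = np.empty_like(z, dtype=float)
    phi = np.empty_like(z, dtype=float)
    m[0], phi[0] = m0, phi0
    for k in range(len(z) - 1):
        dz = z[k + 1] - z[k]
        m[k + 1] = m[k] + (eps - delta) * m[k] * dz
        mphi = m[k] * phi[k] + (eps * m[k] * phi_env[k] - delta * m[k] * phi[k]) * dz
        phi[k + 1] = mphi / m[k + 1]
    return m, phi

z = np.arange(0.0, 3000.0, 100.0)
phi_env = 300.0 + 0.004 * z          # illustrative environmental profile
m, phi = bulk_plume(z, phi_env, eps=1.5e-3, delta=1.0e-3, m0=0.05, phi0=302.0)
print(m[-1].round(3), phi[-1].round(2))
```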

  19. Parameterized isoprene and monoterpene emissions from the boreal forest floor: Implementation into a 1D chemistry-transport model and investigation of the influence on atmospheric chemistry

    Science.gov (United States)

    Mogensen, Ditte; Aaltonen, Hermanni; Aalto, Juho; Bäck, Jaana; Kieloaho, Antti-Jussi; Gierens, Rosa; Smolander, Sampo; Kulmala, Markku; Boy, Michael

    2015-04-01

    Volatile organic compounds (VOCs) are emitted from the biosphere and can work as precursor gases for aerosol particles that can affect the climate (e.g. Makkonen et al., ACP, 2012). VOC emissions from needles and leaves have gained the most attention; however, other parts of the ecosystem can also emit substantial amounts of VOCs. This often neglected source can be important, e.g., during periods when leaves are absent. Knowledge of both the sources and the drivers of forest floor VOC emissions is currently limited. It is thought that the sources are mainly due to degradation of organic matter (Isidorov and Jdanova, Chemosphere, 2002), living roots (Asensio et al., Soil Biol. Biochem., 2008) and ground vegetation. The drivers are biotic (e.g. microbes) and abiotic (e.g. temperature and moisture). However, the relative importance of the individual sources and drivers is currently poorly understood. Further, the relative importance of these factors is highly dependent on the tree species occupying the area of interest. The emissions of isoprene and monoterpenes were measured from the boreal forest floor at the SMEAR II station in Southern Finland (Hari and Kulmala, Boreal Env. Res., 2005) during the snow-free period in 2010-2012. We used a dynamic method with 3 automated chambers analyzed by a Proton Transfer Reaction Mass Spectrometer (Aaltonen et al., Plant Soil, 2013). Using these data, we have developed empirical parameterizations for the emission of isoprene and monoterpenes from the forest floor. These parameterizations depend on abiotic factors; however, since they are based on field measurements, biotic features are implicitly captured. Further, we have used the 1D chemistry-transport model SOSAA (Boy et al., ACP, 2011) to test the seasonal importance of including these forest floor parameterizations, relative to the canopy crown emissions, for the atmospheric reactivity throughout the canopy.
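
    The fitted parameterizations themselves are not given in this record. A common empirical form for temperature-driven forest-floor monoterpene emission is an exponential, E = E0·exp(β(T − T0)), optionally scaled by a soil-moisture modifier; the sketch below uses that form with placeholder coefficients, not the SMEAR II fits.

```python
import numpy as np

def forest_floor_emission(T, swc, E0=2.0, beta=0.09, T0=283.15, swc_half=0.15):
    """Illustrative forest-floor monoterpene emission (ug m-2 h-1):

        E = E0 * exp(beta * (T - T0)) * f(soil water content)

    The exponential temperature dependence is a common empirical choice;
    E0, beta and the moisture modifier are placeholders, not fitted values.
    """
    f_moisture = swc / (swc + swc_half)          # simple saturating modifier
    return E0 * np.exp(beta * (T - T0)) * f_moisture

T = np.array([278.15, 288.15, 298.15])           # 5, 15, 25 deg C
print(forest_floor_emission(T, swc=0.25).round(2))
```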

  20. Model description and evaluation of the mark-recapture survival model used to parameterize the 2012 status and threats analysis for the Florida manatee (Trichechus manatus latirostris)

    Science.gov (United States)

    Langtimm, Catherine A.; Kendall, William L.; Beck, Cathy A.; Kochman, Howard I.; Teague, Amy L.; Meigs-Friend, Gaia; Peñaloza, Claudia L.

    2016-11-30

    This report provides supporting details and evidence for the rationale, validity and efficacy of a new mark-recapture model, the Barker Robust Design, to estimate regional manatee survival rates used to parameterize several components of the 2012 version of the Manatee Core Biological Model (CBM) and Threats Analysis (TA).  The CBM and TA provide scientific analyses on population viability of the Florida manatee subspecies (Trichechus manatus latirostris) for U.S. Fish and Wildlife Service’s 5-year reviews of the status of the species as listed under the Endangered Species Act.  The model evaluation is presented in a standardized reporting framework, modified from the TRACE (TRAnsparent and Comprehensive model Evaluation) protocol first introduced for environmental threat analyses.  We identify this new protocol as TRACE-MANATEE SURVIVAL and this model evaluation specifically as TRACE-MANATEE SURVIVAL, Barker RD version 1. The longer-term objectives of the manatee standard reporting format are to (1) communicate to resource managers consistent evaluation information over sequential modeling efforts; (2) build understanding and expertise on the structure and function of the models; (3) document changes in model structures and applications in response to evolving management objectives, new biological and ecological knowledge, and new statistical advances; and (4) provide greater transparency for management and research review.

  1. Terrain Classification on Venus from Maximum-Likelihood Inversion of Parameterized Models of Topography, Gravity, and their Relation

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.

    2013-12-01

    Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matern model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the JOINT spectral variance of
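
    The authors fit the three Matern parameters in the spectral domain; as a space-domain analogue, the sketch below evaluates an isotropic Matern covariance (variance, range, smoothness) and maximizes a Gaussian log-likelihood over synthetic point data. It illustrates the three-parameter maximum-likelihood idea only and makes no claim about the authors' exact estimator.

```python
import numpy as np
from scipy.special import kv, gammaln
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def matern_cov(d, sigma2, rho, nu):
    """Isotropic Matern covariance for distances d (same units as rho)."""
    c = np.full_like(d, sigma2, dtype=float)
    pos = d > 0
    x = np.sqrt(2.0 * nu) * d[pos] / rho
    c[pos] = sigma2 * np.exp((1.0 - nu) * np.log(2.0) - gammaln(nu)
                             + nu * np.log(x)) * kv(nu, x)
    return c

def neg_log_likelihood(log_params, coords, values):
    sigma2, rho, nu = np.exp(log_params)          # enforce positivity
    K = matern_cov(cdist(coords, coords), sigma2, rho, nu)
    K[np.diag_indices_from(K)] += 1e-8            # numerical jitter
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, values))
    return 0.5 * values @ alpha + np.log(np.diag(L)).sum()

# synthetic "topographic heights" at random locations (placeholder data)
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(60, 2))
values = rng.standard_normal(60)

res = minimize(neg_log_likelihood, x0=np.log([1.0, 10.0, 1.0]),
               args=(coords, values), method="Nelder-Mead")
print("sigma^2, rho, nu =", np.exp(res.x).round(3))
```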

  2. Parameterized post-Newtonian cosmology

    International Nuclear Information System (INIS)

    Sanghai, Viraj A A; Clifton, Timothy

    2017-01-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)

  3. Parameterized post-Newtonian cosmology

    Science.gov (United States)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).

  4. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    Science.gov (United States)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a Subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to do a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes stabilized. Furthermore, to obtain the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space is performed. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes was again the best in categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  5. Best convective parameterization scheme within RegCM4 to downscale CMIP5 multi-model data for the CORDEX-MENA/Arab domain

    Science.gov (United States)

    Almazroui, Mansour; Islam, Md. Nazrul; Al-Khalaf, A. K.; Saeed, Fahad

    2016-05-01

    A suitable convective parameterization scheme within Regional Climate Model version 4.3.4 (RegCM4) developed by the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, is investigated through 12 sensitivity runs for the period 2000-2010. RegCM4 is driven with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim 6-hourly boundary condition fields for the CORDEX-MENA/Arab domain. Besides ERA-Interim lateral boundary conditions data, the Climatic Research Unit (CRU) data is also used to assess the performance of RegCM4. Different statistical measures are taken into consideration in assessing model performance for 11 sub-domains throughout the analysis domain, out of which 7 (4) sub-domains give drier (wetter) conditions for the area of interest. There is no common best option for the simulation of both rainfall and temperature (with lowest bias); however, one option each for temperature and rainfall has been found to be superior among the 12 options investigated in this study. These best options for the two variables vary from region to region as well. Overall, RegCM4 simulates large pressure and water vapor values along with lower wind speeds compared to the driving fields, which are the key sources of bias in simulating rainfall and temperature. Based on the climatic characteristics of most of the Arab countries located within the study domain, the drier sub-domains are given priority in the selection of a suitable convective scheme, albeit with a compromise for both rainfall and temperature simulations. The most suitable option Grell over Land and Emanuel over Ocean in wet (GLEO wet) delivers a rainfall wet bias of 2.96 % and a temperature cold bias of 0.26 °C, compared to CRU data. An ensemble derived from all 12 runs provides unsatisfactory results for rainfall (28.92 %) and temperature (-0.54 °C) bias in the drier region because some options highly overestimate rainfall (reaching up to 200 %) and underestimate

  6. The sensitivity of Alpine summer convection to surrogate climate change: an intercomparison between convection-parameterizing and convection-resolving models

    Directory of Open Access Journals (Sweden)

    M. Keller

    2018-04-01

    Full Text Available Climate models project an increase in heavy precipitation events in response to greenhouse gas forcing. Important elements of such events are rain showers and thunderstorms, which are poorly represented in models with parameterized convection. In this study, simulations with 12 km horizontal grid spacing (convection-parameterizing model, CPM) and 2 km grid spacing (convection-resolving model, CRM) are employed to investigate the change in the diurnal cycle of convection with warmer climate. For this purpose, simulations of 11 days in June 2007 with a pronounced diurnal cycle of convection are compared with surrogate simulations from the same period. The surrogate climate simulations mimic a future climate with increased temperatures but unchanged relative humidity and similar synoptic-scale circulation. Two temperature scenarios are compared: one with homogeneous warming (HW) using a vertically uniform warming and the other with vertically dependent warming (VW) that enables changes in lapse rate. The two sets of simulations with parameterized and explicit convection exhibit substantial differences, some of which are well known from the literature. These include differences in the timing and amplitude of the diurnal cycle of convection, and the frequency of precipitation with low intensities. The response to climate change is much less studied. We can show that stratification changes have a strong influence on the changes in convection. Precipitation is strongly increasing for HW but decreasing for the VW simulations. For cloud type frequencies, virtually no changes are found for HW, but a substantial reduction in high clouds is found for VW. Further, we can show that the climate change signal strongly depends upon the horizontal resolution. In particular, significant differences between CPM and CRM are found in terms of the radiative feedbacks, with CRM exhibiting a stronger negative feedback in the top-of-the-atmosphere energy budget.

  7. The sensitivity of Alpine summer convection to surrogate climate change: an intercomparison between convection-parameterizing and convection-resolving models

    Science.gov (United States)

    Keller, Michael; Kröner, Nico; Fuhrer, Oliver; Lüthi, Daniel; Schmidli, Juerg; Stengel, Martin; Stöckli, Reto; Schär, Christoph

    2018-04-01

    Climate models project an increase in heavy precipitation events in response to greenhouse gas forcing. Important elements of such events are rain showers and thunderstorms, which are poorly represented in models with parameterized convection. In this study, simulations with 12 km horizontal grid spacing (convection-parameterizing model, CPM) and 2 km grid spacing (convection-resolving model, CRM) are employed to investigate the change in the diurnal cycle of convection with warmer climate. For this purpose, simulations of 11 days in June 2007 with a pronounced diurnal cycle of convection are compared with surrogate simulations from the same period. The surrogate climate simulations mimic a future climate with increased temperatures but unchanged relative humidity and similar synoptic-scale circulation. Two temperature scenarios are compared: one with homogeneous warming (HW) using a vertically uniform warming and the other with vertically dependent warming (VW) that enables changes in lapse rate. The two sets of simulations with parameterized and explicit convection exhibit substantial differences, some of which are well known from the literature. These include differences in the timing and amplitude of the diurnal cycle of convection, and the frequency of precipitation with low intensities. The response to climate change is much less studied. We can show that stratification changes have a strong influence on the changes in convection. Precipitation is strongly increasing for HW but decreasing for the VW simulations. For cloud type frequencies, virtually no changes are found for HW, but a substantial reduction in high clouds is found for VW. Further, we can show that the climate change signal strongly depends upon the horizontal resolution. In particular, significant differences between CPM and CRM are found in terms of the radiative feedbacks, with CRM exhibiting a stronger negative feedback in the top-of-the-atmosphere energy budget.
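
    The surrogate approach described in both records modifies the driving data by adding a warming increment while keeping relative humidity unchanged. A minimal sketch, using the Magnus formula for saturation vapour pressure to rescale specific humidity, is shown below; the profiles and the 3 K increment are illustrative, not the values used in the study.

```python
import numpy as np

def e_sat(T_c):
    """Saturation vapour pressure (hPa) over water, Magnus formula; T_c in deg C."""
    return 6.112 * np.exp(17.62 * T_c / (243.12 + T_c))

def surrogate_warming(T_c, q, p_hpa, dT):
    """Apply a warming increment dT (K) while preserving relative humidity.

    T_c : temperature profile (deg C), q : specific humidity (kg/kg),
    p_hpa : pressure (hPa), dT : scalar (HW-like) or per-level array (VW-like).
    """
    e = q * p_hpa / (0.622 + 0.378 * q)          # vapour pressure from q
    rh = e / e_sat(T_c)                          # relative humidity (0..1)
    T_new = T_c + dT
    e_new = rh * e_sat(T_new)                    # same RH, warmer air
    q_new = 0.622 * e_new / (p_hpa - 0.378 * e_new)
    return T_new, q_new

T = np.array([25.0, 10.0, -5.0])                 # toy profile (deg C)
q = np.array([0.012, 0.006, 0.003])
p = np.array([1000.0, 850.0, 700.0])
print(surrogate_warming(T, q, p, dT=3.0))        # homogeneous-warming-like case
```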

  8. NATO Advanced Study Institute on Advanced Physical Oceanographic Numerical Modelling

    CERN Document Server

    1986-01-01

    This book is a direct result of the NATO Advanced Study Institute held in Banyuls-sur-mer, France, June 1985. The Institute had the same title as this book. It was held at Laboratoire Arago. Eighty lecturers and students from almost all NATO countries attended. The purpose was to review the state of the art of physical oceanographic numerical modelling including the parameterization of physical processes. This book represents a cross-section of the lectures presented at the ASI. It covers elementary mathematical aspects through large scale practical aspects of ocean circulation calculations. It does not encompass every facet of the science of oceanographic modelling. We have, however, captured most of the essence of mesoscale and large-scale ocean modelling for blue water and shallow seas. There have been considerable advances in modelling coastal circulation which are not included. The methods section does not include important material on phase and group velocity errors, selection of grid structures, advanc...

  9. Physics-based distributed snow models in the operational arena: Current and future challenges

    Science.gov (United States)

    Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.

    2017-12-01

    The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include those of the WSL-SLF across Switzerland, the ASO in California, and the USDA-ARS in Idaho. While the physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times, which often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amid the assumption that these models, being founded on physical principles, require little to no calibration. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes for which validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snowcover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, highlight the need for advanced and spatially flexible methods and parameterizations, and prompt the community toward open dialogue and future collaborations to further modeling capabilities.
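
    One example of the kind of spatially invariant construct the abstract questions is an aging-based snow albedo decay. The sketch below shows such a parameterization with illustrative constants; it is not drawn from any of the operational systems mentioned.

```python
import numpy as np

def snow_albedo(age_days, albedo_fresh=0.85, albedo_old=0.55, tau_days=10.0):
    """Exponential decay of snow albedo with surface age since last snowfall.

    A common parameterization form; the constants here are illustrative and
    in practice often need to vary with climate, elevation, and impurity load.
    """
    return albedo_old + (albedo_fresh - albedo_old) * np.exp(-age_days / tau_days)

def absorbed_shortwave(sw_down, age_days):
    """Net shortwave absorbed by the snowpack (W m-2)."""
    return sw_down * (1.0 - snow_albedo(age_days))

print(absorbed_shortwave(sw_down=600.0, age_days=np.array([0.0, 5.0, 20.0])).round(1))
```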

  10. Parameterization of an empirical model for the prediction of n-octanol, alkane and cyclohexane/water as well as brain/blood partition coefficients.

    Science.gov (United States)

    Zerara, Mohamed; Brickmann, Jürgen; Kretschmer, Robert; Exner, Thomas E

    2009-02-01

    Quantitative information on solvation and transfer free energies is often needed to understand many physicochemical processes, e.g. molecular recognition phenomena, transport and diffusion through biological membranes, and the tertiary structure of proteins. Recently, a concept for the localization and quantification of hydrophobicity has been introduced (Jäger et al. J Chem Inf Comput Sci 43:237-247, 2003). This model is based on the assumption that the overall hydrophobicity can be obtained as a superposition of fragment contributions. To date, all predictive models for logP have been parameterized for the n-octanol/water system (logP(oct)), while very few models, with poor predictive ability, are available for other solvents. In this work, we propose a parameterization of an empirical model for the n-octanol/water, alkane/water (logP(alk)) and cyclohexane/water (logP(cyc)) systems. Comparison of both logP(alk) and logP(cyc) with the logarithms of brain/blood ratios (logBB) for a set of structurally diverse compounds revealed a high correlation, showing their superiority over the logP(oct) measure in this context.
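
    The fragment-superposition assumption behind this family of models can be written as logP = Σ n_i · f_i, where f_i is a solvent-pair-specific fragment increment and n_i the number of occurrences of fragment i. The sketch below illustrates only that bookkeeping; the fragment names and increment values are invented placeholders, not the parameters fitted in the paper.

```python
# Hypothetical fragment increments for two solvent systems (illustrative numbers only).
FRAGMENT_INCREMENTS = {
    "octanol/water": {"CH3": 0.55, "CH2": 0.50, "OH": -1.10, "C6H5": 1.90},
    "alkane/water":  {"CH3": 0.65, "CH2": 0.60, "OH": -2.50, "C6H5": 2.10},
}

def log_p(fragment_counts, system="octanol/water"):
    """Additive fragment model: logP is the count-weighted sum of fragment increments."""
    increments = FRAGMENT_INCREMENTS[system]
    return sum(n * increments[frag] for frag, n in fragment_counts.items())

# A 1-phenylethanol-like composition, purely for illustration
mol = {"C6H5": 1, "CH2": 1, "CH3": 1, "OH": 1}
print(log_p(mol, "octanol/water"), log_p(mol, "alkane/water"))
```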

  11. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    Science.gov (United States)

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  12. Inheritance versus parameterization

    DEFF Research Database (Denmark)

    Ernst, Erik

    2013-01-01

    This position paper argues that inheritance and parameterization differ in their fundamental structure, even though they may emulate each other in many ways. Based on this, we claim that certain mechanisms, e.g., final classes, are in conflict with the nature of inheritance, and hence causes...

  13. A Parameterization for Land-Atmosphere-Cloud Exchange (PLACE): Documentation and Testing of a Detailed Process Model of the Partly Cloudy Boundary Layer over Heterogeneous Land.

    Science.gov (United States)

    Wetzel, Peter J.; Boone, Aaron

    1995-07-01

    This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock. Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these cases while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases. Sensitivity tests were also run using anemometer-level forcing provided by the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to subgrid variability of certain parameters is demonstrated, particularly to parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity of certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models. Were

  14. Robustness and sensitivities of central U.S. summer convection in the super-parameterized CAM: Multi-model intercomparison with a new regional EOF index

    Science.gov (United States)

    Kooperman, Gabriel J.; Pritchard, Michael S.; Somerville, Richard C. J.

    2013-06-01

    Mesoscale convective systems (MCSs) can bring up to 60% of summer rainfall to the central United States but are not simulated by most global climate models. In this study, a new empirical orthogonal function based index is developed to isolate the MCS activity, similar to that developed by Wheeler and Hendon (2004) for the Madden-Julian Oscillation. The index is applied to compactly compare three conventional- and super-parameterized (SP) versions (3.0, 3.5, and 5.0) of the National Center for Atmospheric Research Community Atmosphere Model (CAM). Results show that nocturnal, eastward propagating convection is a robust effect of super-parameterization but is sensitive to its specific implementation. MCS composites based on the index show that in SP-CAM3.5, convective MCS anomalies are unrealistically large scale and concentrated, while surface precipitation is too weak. These aspects of the MCS signal are improved in the latest version (SP-CAM5.0), which uses high-order microphysics.

  15. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    Directory of Open Access Journals (Sweden)

    Y. Chen

    2018-01-01

    The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO–MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10–25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3−]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3−] was significantly overestimated for this period by a factor of 5–19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. The NewN2O5 significantly reduces the overestimation of [NO3−] by ∼ 35 %. Particularly, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17–18 and 25 September 2013) when [NO3−] was dominated by local chemical formations. In our case, the suppression of organic coating
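
    Mass-based schemes of this kind typically express the pseudo-first-order loss of N2O5 as k = 0.25 · γ · v_mean · S, where γ is the uptake coefficient (the quantity that NewN2O5 makes dependent on T, RH and particle composition) and v_mean the mean molecular speed. The sketch below shows that standard kinetic expression with a placeholder γ; it is not the NewN2O5 fit itself.

```python
import numpy as np

R = 8.314          # J mol-1 K-1
M_N2O5 = 0.108     # kg mol-1

def mean_speed(T):
    """Mean molecular speed of N2O5 (m s-1) from kinetic gas theory."""
    return np.sqrt(8.0 * R * T / (np.pi * M_N2O5))

def k_het(gamma, T, surface_area):
    """Pseudo-first-order heterogeneous loss rate of N2O5 (s-1).

    gamma        : uptake coefficient; in reality a function of T, RH and particle
                   composition (the part parameterized by NewN2O5), here a placeholder
    surface_area : aerosol surface area concentration S (m2 m-3)
    """
    return 0.25 * gamma * mean_speed(T) * surface_area

# Illustrative values: gamma = 0.02, T = 280 K, S = 2e-4 m2 m-3 (200 um2 cm-3)
print(k_het(0.02, 280.0, 2.0e-4))   # ~2e-4 s-1, i.e. an N2O5 lifetime of roughly an hour
```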

  16. An improved lightning flash rate parameterization developed from Colorado DC3 thunderstorm data for use in cloud-resolving chemical transport models

    Science.gov (United States)

    Basarab, B. M.; Rutledge, S. A.; Fuchs, B. R.

    2015-09-01

    Accurate prediction of total lightning flash rate in thunderstorms is important to improve estimates of nitrogen oxides (NOx) produced by lightning (LNOx) from the storm scale to the global scale. In this study, flash rate parameterization schemes from the literature are evaluated against observed total flash rates for a sample of 11 Colorado thunderstorms, including nine storms from the Deep Convective Clouds and Chemistry (DC3) experiment in May-June 2012. Observed flash rates were determined using an automated algorithm that clusters very high frequency radiation sources emitted by electrical breakdown in clouds and detected by the northern Colorado lightning mapping array. Existing schemes were found to inadequately predict flash rates and were updated based on observed relationships between flash rate and simple storm parameters, yielding significant improvement. The most successful updated scheme predicts flash rate based on the radar-derived mixed-phase 35 dBZ echo volume. Parameterizations based on metrics for updraft intensity were also updated but were found to be less reliable predictors of flash rate for this sample of storms. The 35 dBZ volume scheme was tested on a data set containing radar reflectivity volume information for thousands of isolated convective cells in different regions of the U.S. This scheme predicted flash rates to within 5.8% of observed flash rates on average. These results encourage the application of this scheme to larger radar data sets and its possible implementation into cloud-resolving models.
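
    Schemes of this type reduce to a regression of total flash rate on a storm metric. A generic sketch using the mixed-phase 35 dBZ echo volume is shown below; the slope and intercept are placeholders for illustration, not the coefficients fitted to the DC3 storms.

```python
def flash_rate_from_echo_volume(volume_km3, slope=0.03, intercept=0.0):
    """Total flash rate (flashes min-1) as a linear function of the radar-derived
    mixed-phase 35 dBZ echo volume (km3). Slope and intercept are illustrative only;
    the published scheme uses values fitted to the Colorado DC3 storm sample."""
    return max(0.0, slope * volume_km3 + intercept)

for v in (100.0, 500.0, 2000.0):   # small, moderate and intense convective cells
    print(v, flash_rate_from_echo_volume(v))
```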

  17. A method for sensible heat flux model parameterization based on radiometric surface temperature and environmental factors without involving the parameter KB-1

    Science.gov (United States)

    Zhuang, Qifeng; Wu, Bingfang; Yan, Nana; Zhu, Weiwei; Xing, Qiang

    2016-05-01

    Sensible heat flux is a key component of land-atmosphere interaction. In most parameterizations it is calculated from the surface-air temperature difference and the total aerodynamic resistance to heat transfer (Rae), which is related to the KB-1 parameter. Suitable values are hard to obtain since KB-1 depends both on canopy characteristics and on environmental conditions. In this paper, a parameterization method for sensible heat flux over vegetated surfaces (a maize field and grassland in the Heihe river basin of northwest China) is proposed based on the radiometric surface temperature, surface resistance (Rs) and vapor pressures (saturated and actual) at the surface and in the atmosphere above the canopy. A biophysics-based surface resistance model was revised to compute surface resistance from several environmental factors. The total aerodynamic resistance to heat transfer is calculated directly by combining the biophysics-based surface resistance and the vapor pressures. One merit of this method is that the calculation of KB-1 can be avoided. The method provides a new way to estimate sensible heat flux over vegetated surfaces, and its performance compares well with LAS-measured sensible heat flux and with other empirical or semi-empirical KB-1-based estimates.
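
    The quantity being parameterized is the bulk sensible heat flux H = ρ · cp · (Trad − Ta) / Rae; the contribution of the paper lies in how Rae is obtained from surface resistance and vapor pressures without KB-1. The sketch below shows only the bulk formula, with Rae passed in directly, since the paper's resistance model is not reproduced here.

```python
RHO_AIR = 1.2     # kg m-3, near-surface air density (illustrative)
CP_AIR = 1004.0   # J kg-1 K-1

def sensible_heat_flux(T_rad, T_air, r_ae):
    """Bulk sensible heat flux (W m-2).

    T_rad : radiometric surface temperature (K)
    T_air : air temperature above the canopy (K)
    r_ae  : total aerodynamic resistance to heat transfer (s m-1); in the paper this is
            derived from a surface-resistance model and vapor pressures rather than KB-1.
    """
    return RHO_AIR * CP_AIR * (T_rad - T_air) / r_ae

print(sensible_heat_flux(303.0, 298.0, 40.0))   # ~150 W m-2 for a 5 K surface-air difference
```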

  18. New insight of Arctic cloud parameterization from regional climate model simulations, satellite-based, and drifting station data

    Science.gov (United States)

    Klaus, D.; Dethloff, K.; Dorn, W.; Rinke, A.; Wu, D. L.

    2016-05-01

    Cloud observations from the CloudSat and CALIPSO satellites helped to explain the reduced total cloud cover (Ctot) in the atmospheric regional climate model HIRHAM5 with modified cloud physics. Arctic climate conditions are found to be better reproduced with (1) a more efficient Bergeron-Findeisen process and (2) a more generalized subgrid-scale variability of total water content. As a result, the annual cycle of Ctot is improved over sea ice, associated with an almost 14% smaller area average than in the control simulation. The modified cloud scheme reduces the Ctot bias with respect to the satellite observations. Except for autumn, the cloud reduction over sea ice improves low-level temperature profiles compared to drifting station data. The HIRHAM5 sensitivity study highlights the need for improving accuracy of low-level (<700 m) cloud observations, as these clouds exert a strong impact on the near-surface climate.

  19. On the use of wave parameterizations and a storm impact scaling model in National Weather Service Coastal Flood and decision support operations

    Science.gov (United States)

    Mignone, Anthony; Stockdon, H.; Willis, M.; Cannon, J.W.; Thompson, R.

    2012-01-01

    National Weather Service (NWS) Weather Forecast Offices (WFO) are responsible for issuing coastal flood watches, warnings, advisories, and local statements to alert decision makers and the general public when rising water levels may lead to coastal impacts such as inundation, erosion, and wave battery. Both extratropical and tropical cyclones can generate the prerequisite rise in water level to set the stage for a coastal impact event. Forecasters use a variety of tools including computer model guidance and local studies to help predict the potential severity of coastal flooding. However, a key missing component has been the incorporation of the effects of waves in the prediction of total water level and the associated coastal impacts. Several recent studies have demonstrated the importance of incorporating wave action into the NWS coastal flood program. To follow up on these studies, this paper looks at the potential of applying recently developed empirical parameterizations of wave setup, swash, and runup to the NWS forecast process. Additionally, the wave parameterizations are incorporated into a storm impact scaling model that compares extreme water levels to beach elevation data to determine the mode of coastal change at predetermined “hotspots” of interest. Specifically, the storm impact model compares the approximate storm-induced still water level, which includes contributions from tides, storm surge, and wave setup, to dune crest elevation to determine inundation potential. The model also compares the combined effects of tides, storm surge, and the 2 % exceedance level for vertical wave runup (including both wave setup and swash) to dune toe and crest elevations to determine if erosion and/or ocean overwash may occur. The wave parameterizations and storm impact model are applied to two cases in 2009 that led to significant coastal impacts and unique forecast challenges in North Carolina: the extratropical “Nor'Ida” event during 11-14 November and
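
    One widely used empirical runup parameterization of the kind referred to here is that of Stockdon et al. (2006). The sketch below combines that published formula with a simple comparison against dune toe and crest elevations, as a plausible reading of the storm impact scaling described above rather than the operational NWS implementation; the beach and wave numbers are illustrative.

```python
import math

def runup_2percent(H0, L0, beach_slope):
    """Stockdon et al. (2006) 2% exceedance runup (m) from deep-water wave height H0 (m),
    deep-water wavelength L0 (m) and foreshore beach slope."""
    setup = 0.35 * beach_slope * math.sqrt(H0 * L0)
    swash = math.sqrt(H0 * L0 * (0.563 * beach_slope**2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)

def impact_regime(tide_plus_surge, H0, L0, slope, dune_toe, dune_crest):
    """Crude storm-impact classification comparing total water level to dune elevations."""
    total_water = tide_plus_surge + runup_2percent(H0, L0, slope)
    if total_water < dune_toe:
        return "swash"
    if total_water < dune_crest:
        return "collision (dune erosion)"
    return "overwash/inundation"

# Illustrative numbers: 3 m, 12 s offshore waves, 1 m of tide plus surge,
# dune toe at 2.5 m and dune crest at 4.5 m above the datum
L0 = 9.81 * 12.0**2 / (2.0 * math.pi)
print(impact_regime(1.0, 3.0, L0, 0.08, 2.5, 4.5))
```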

  20. Physical Modeling Modular Boxes: PHOXES

    DEFF Research Database (Denmark)

    Gelineck, Steven; Serafin, Stefania

    2010-01-01

    This paper presents the development of a set of musical instruments, which are based on known physical modeling sound synthesis techniques. The instruments are modular, meaning that they can be combined in various ways. This makes it possible to experiment with physical interaction and sonic...

  1. Standard Model physics

    CERN Multimedia

    Altarelli, Guido

    1999-01-01

    Introduction: structure of gauge theories. The QED and QCD examples. Chiral theories. The electroweak theory. Spontaneous symmetry breaking. The Higgs mechanism. Gauge boson and fermion masses. Yukawa couplings. Charged current couplings. The Cabibbo-Kobayashi-Maskawa matrix and CP violation. Neutral current couplings. The Glashow-Iliopoulos-Maiani mechanism. Gauge boson and Higgs couplings. Radiative corrections and loops. Cancellation of the chiral anomaly. Limits on the Higgs mass. Problems of the Standard Model. Outlook.

  2. New representation of water activity based on a single solute specific constant to parameterize the hygroscopic growth of aerosols in atmospheric models

    Directory of Open Access Journals (Sweden)

    S. Metzger

    2012-06-01

    Water activity is a key factor in aerosol thermodynamics and hygroscopic growth. We introduce a new representation of water activity (aw), which is empirically related to the solute molality (μs) through a single solute specific constant, νi. Our approach is widely applicable, considers the Kelvin effect and covers ideal solutions at high relative humidity (RH), including cloud condensation nuclei (CCN) activation. It also encompasses concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). The constant νi can thus be used to parameterize the aerosol hygroscopic growth over a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. In contrast to other aw-representations, our νi factor corrects the solute molality both linearly and in exponent form x · a^x. We present four representations of our basic aw-parameterization at different levels of complexity for different aw-ranges, e.g. up to 0.95, 0.98 or 1. νi is constant over the selected aw-range, and in its most comprehensive form, the parameterization describes the entire aw range (0–1). In this work we focus on single solute solutions. νi can be pre-determined with a root-finding method from our water activity representation using an aw−μs data pair, e.g. at solute saturation using RHD and solubility measurements. Our aw and supersaturation (Köhler-theory) results compare well with the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4, relevant for CCN modeling and calibration studies. Envisaged applications include regional and global atmospheric chemistry and
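
    In Köhler-theory applications such as those mentioned above, the equilibrium saturation ratio over a solution droplet is the product of the water activity and the Kelvin (curvature) factor, S = aw · exp(4σMw/(RTρwD)). The sketch below evaluates that standard relation with aw supplied as an input; the single-constant νi form of the paper is not reproduced here.

```python
import math

def kohler_saturation_ratio(a_w, diameter_m, T=298.15,
                            sigma=0.072, M_w=0.018, rho_w=1000.0, R=8.314):
    """Equilibrium saturation ratio over a solution droplet: Raoult term (water activity)
    times the Kelvin curvature term. a_w would come from a water-activity parameterization."""
    kelvin = math.exp(4.0 * sigma * M_w / (R * T * rho_w * diameter_m))
    return a_w * kelvin

# A 100 nm droplet with a_w = 0.999 gives S ~ 1.02, i.e. about 2 % supersaturation
# (purely illustrative numbers, not a calibration result)
print(kohler_saturation_ratio(0.999, 100e-9))
```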

  3. A Deep Learning Algorithm of Neural Network for the Parameterization of Typhoon-Ocean Feedback in Typhoon Forecast Models

    Science.gov (United States)

    Jiang, Guo-Qing; Xu, Jing; Wei, Jun

    2018-04-01

    Two algorithms based on machine learning neural networks are proposed—the shallow learning (S-L) and deep learning (D-L) algorithms—that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge of existing SSTC algorithms in forecast models is how to accurately predict SSTC induced by an upcoming typhoon, which requires information not only from historical data but more importantly also from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to represent correctly the physical typhoon-ocean interaction. It tends to produce an unstable SSTC distribution, for which any perturbations may lead to changes in both SSTC pattern and strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors being separated in different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. Therefore, it produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithms reduce maximum wind intensity errors by 60-70% for four case study simulations, compared to their atmosphere-only model runs.
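
    A minimal sketch of the architectural contrast described above is given below: a single mixed layer acting on concatenated predictors versus a network in which atmospheric and oceanic factors pass through separate layers before being combined. The layer sizes, activation and random weights are placeholders, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
tanh = np.tanh

def shallow_sstc(atmos, ocean):
    """S-L style: one layer acting on mixed atmospheric + oceanic inputs."""
    x = np.concatenate([atmos, ocean])
    W, b = rng.normal(size=(1, x.size)), rng.normal(size=1)
    return (W @ tanh(x) + b).item()

def deep_sstc(atmos, ocean, hidden=5):
    """D-L style: atmospheric and oceanic factors pass through separate layers before
    being combined, so their roles in shaping the SSTC can differ."""
    Wa = rng.normal(size=(hidden, atmos.size))
    Wo = rng.normal(size=(hidden, ocean.size))
    Wout = rng.normal(size=(1, 2 * hidden))
    h = np.concatenate([tanh(Wa @ atmos), tanh(Wo @ ocean)])
    return (Wout @ h).item()

atmos = np.array([25.0, -5.0, 3.0])   # e.g. wind speed, translation speed, shear (illustrative)
ocean = np.array([28.5, 50.0])        # e.g. SST, mixed-layer depth (illustrative)
print(shallow_sstc(atmos, ocean), deep_sstc(atmos, ocean))
```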

  4. Quasi standard model physics

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1986-01-01

    Possible small extensions of the standard model are considered, which are motivated by the strong CP problem and by the baryon asymmetry of the Universe. Phenomenological arguments are given which suggest that imposing a PQ symmetry to solve the strong CP problem is only tenable if the scale of the PQ breakdown is much above M_W. Furthermore, an attempt is made to connect the scale of the PQ breakdown to that of the breakdown of lepton number. It is argued that in these theories the same intermediate scale may be responsible for the baryon number of the Universe, provided the Kuzmin-Rubakov-Shaposhnikov (B+L)-erasing mechanism is operative. (orig.)

  5. The influence of Cloud Longwave Scattering together with a state-of-the-art Ice Longwave Optical Parameterization in Climate Model Simulations

    Science.gov (United States)

    Chen, Y. H.; Kuo, C. P.; Huang, X.; Yang, P.

    2017-12-01

    Clouds play an important role in the Earth's radiation budget, and thus realistic and comprehensive treatments of cloud optical properties and cloudy-sky radiative transfer are crucial for simulating weather and climate. However, most GCMs neglect LW scattering effects by clouds and tend to use inconsistent cloud SW and LW optical parameterizations. Recently, co-authors of this study have developed a new LW optical properties parameterization for ice clouds, which is based on ice cloud particle statistics from MODIS measurements and state-of-the-art scattering calculations. A two-stream multiple-scattering scheme has also been implemented in the RRTMG_LW, a longwave radiation scheme widely used by climate modeling centers. This study integrates both the new LW cloud-radiation scheme for ice clouds and the modified RRTMG_LW with scattering capability into the NCAR CESM to improve the cloud longwave radiation treatment. A number of single-column model (SCM) simulations using observations from the ARM SGP site from July 18 to August 4, 1995, are carried out to assess the impact of the new LW optical properties of clouds and the scattering-enabled radiation scheme on the simulated radiation budget and cloud radiative effect (CRE). The SCM simulation allows interaction between the cloud and radiation schemes and other parameterizations, but the large-scale forcing is prescribed or nudged. Compared to the results from the SCM of the standard CESM, the new ice cloud optical properties alone lead to an increase of the LW CRE by 26.85 W m-2 on average, as well as an increase of the downward LW flux at the surface by 6.48 W m-2. Enabling LW cloud scattering further increases the LW CRE by another 3.57 W m-2 and the downward LW flux at the surface by 0.2 W m-2. The change of LW CRE is mainly due to an increase of cloud top height, which enhances the LW CRE. A long-term simulation of CESM will be carried out to further understand the impact of such changes on the simulated climate.

  6. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    Science.gov (United States)

    Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper focuses on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector

  7. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    Directory of Open Access Journals (Sweden)

    E. Pakyuz-Charrier

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper focuses on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than

  8. The impact of changes in parameterizations of surface drag and vertical diffusion on the large-scale circulation in the Community Atmosphere Model (CAM5)

    Science.gov (United States)

    Lindvall, Jenny; Svensson, Gunilla; Caballero, Rodrigo

    2017-06-01

    Simulations with the Community Atmosphere Model version 5 (CAM5) are used to analyze the sensitivity of the large-scale circulation to changes in parameterizations of orographic surface drag and vertical diffusion. Many GCMs and NWP models use enhanced turbulent mixing in stable conditions to improve simulations, while CAM5 cuts off all turbulence at high stabilities and instead employs a strong orographic surface stress parameterization, known as turbulent mountain stress (TMS). TMS completely dominates the surface stress over land and reduces the near-surface wind speeds compared to simulations without TMS. It is found that TMS is generally beneficial for the large-scale circulation as it improves zonal wind speeds, Arctic sea level pressure and zonal anomalies of the 500-hPa stream function, compared to ERA-Interim. It also alleviates atmospheric blocking frequency biases in the Northern Hemisphere. Using a scheme that instead allows for a modest increase of turbulent diffusion at higher stabilities only in the planetary boundary layer (PBL) appears, in some respects, to have a similar, although much smaller, beneficial effect to that of TMS. Enhanced mixing throughout the atmospheric column, however, degrades the CAM5 simulation. Evaluating the simulations in comparison with detailed measurements at two locations reveals that TMS is detrimental for the PBL at the flat grassland ARM Southern Great Plains site, giving too strong wind turning and too deep PBLs. At the Sodankylä forest site, the effect of TMS is smaller due to the larger local vegetation roughness. At both sites, all simulations substantially overestimate the boundary layer ageostrophic flow.

  9. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying...

  10. Improvement of a snow albedo parameterization in the Snow-Atmosphere-Soil Transfer model: evaluation of impacts of aerosol on seasonal snow cover

    Science.gov (United States)

    Zhong, Efang; Li, Qian; Sun, Shufen; Chen, Wen; Chen, Shangfeng; Nath, Debashis

    2017-11-01

    The presence of light-absorbing aerosols (LAA) in snow profoundly influences the surface energy balance and water budget. However, most snow-process schemes in land-surface and climate models currently do not take this into consideration. To better represent the snow process and to evaluate the impacts of LAA on snow, this study presents an improved snow albedo parameterization in the Snow-Atmosphere-Soil Transfer (SAST) model, which includes the impacts of LAA on snow. Specifically, the Snow, Ice and Aerosol Radiation (SNICAR) model is incorporated into the SAST model with an LAA mass stratigraphy scheme. The new coupled model is validated against in-situ measurements at the Swamp Angel Study Plot (SASP), Colorado, USA. Results show that the snow albedo and snow depth are better reproduced than those in the original SAST, particularly during the period of snow ablation. Furthermore, the impacts of LAA on snow are estimated in the coupled model through case comparisons of the snowpack, with or without LAA. The LAA particles directly absorb extra solar radiation, which accelerates the growth rate of the snow grain size. Meanwhile, these larger snow particles favor more radiative absorption. The average total radiative forcing of the LAA at the SASP is 47.5 W m-2. This extra radiative absorption enhances the snowmelt rate. As a result, the peak runoff time and "snow all gone" day are shifted 18 and 19.5 days earlier, respectively, which could further impose substantial impacts on the hydrologic cycle and atmospheric processes.

  11. Physical model of Nernst element

    International Nuclear Information System (INIS)

    Nakamura, Hiroaki; Ikeda, Kazuaki; Yamaguchi, Satarou

    1998-08-01

    Generation of electric power by the Nernst effect is a new application of semiconductors. A key point of this proposal is to find materials with a high thermomagnetic figure-of-merit, which are called Nernst elements. In order to find candidates for the Nernst element, a physical model to describe its transport phenomena is needed. As a first model, we began with a parabolic two-band model in classical statistics. According to this model, we selected InSb as a candidate Nernst element and measured its transport coefficients in magnetic fields up to 4 Tesla within a temperature region from 270 K to 330 K. In this region, we calculated the transport coefficients numerically with our physical model. For InSb, the experimental data agree with the theoretical values in strong magnetic fields. (author)

  12. Instream Physical Habitat Modelling Types

    DEFF Research Database (Denmark)

    Conallin, John; Boegh, Eva; Krogsgaard, Jørgen

    2010-01-01

    The introduction of the EU Water Framework Directive (WFD) is providing member state water resource managers with significant challenges in relation to meeting the deadline for 'Good Ecological Status' by 2015. Overall, instream physical habitat modelling approaches have advantages and disadvanta... suit their situations. This paper analyses the potential of different methods available for water managers to assess hydrological and geomorphological impacts on the habitats of stream biota, as requested by the WFD. The review considers both conventional and new advanced research-based instream... physical habitat models. In parametric and non-parametric regression models, model assumptions are often not satisfied and the models are difficult to transfer to other regions. Research-based methods such as the artificial neural networks and individual-based modelling have promising potential as water...

  13. Accelerator physics and modeling: Proceedings

    International Nuclear Information System (INIS)

    Parsa, Z.

    1991-01-01

    This report contains papers on the following topics: Physics of high brightness beams; radio frequency beam conditioner for fast-wave free-electron generators of coherent radiation; wake-field and space-charge effects on high brightness beams. Calculations and measured results for BNL-ATF; non-linear orbit theory and accelerator design; general problems of modeling for accelerators; development and application of dispersive soft ferrite models for time-domain simulation; and bunch lengthening in the SLC damping rings

  14. Wave Generation in Physical Models

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    The present book describes the most important aspects of wave generation techniques in physical models. Moreover, the book serves as technical documentation for the wave generation software AwaSys 6, cf. Aalborg University (2012). In addition to the two main authors also Tue Hald and Michael...

  15. Development of the physical model

    International Nuclear Information System (INIS)

    Liu Zunqi; Morsy, Samir

    2001-01-01

    Full text: The Physical Model was developed during Program 93+2 as a technical tool to aid enhanced information analysis and now is an integrated part of the Department's on-going State evaluation process. This paper will describe the concept of the Physical Model, including its objectives, overall structure and the development of indicators with designated strengths, followed by a brief description of using the Physical Model in implementing the enhanced information analysis. The work plan for expansion and update of the Physical Model is also presented at the end of the paper. The development of the Physical Model is an attempt to identify, describe and characterize every known process for carrying out each step necessary for the acquisition of weapons-usable material, i.e., all plausible acquisition paths for highly enriched uranium (HEU) and separated plutonium (Pu). The overall structure of the Physical Model has a multilevel arrangement. It includes at the top level all the main steps (technologies) that may be involved in the nuclear fuel cycle from the source material production up to the acquisition of weapons-usable material, and then beyond the civilian fuel cycle to the development of nuclear explosive devices (weaponization). Each step is logically interconnected with the preceding and/or succeeding steps by nuclear material flows. It contains at its lower levels every known process that is associated with the fuel cycle activities presented at the top level. For example, uranium enrichment is broken down into three branches at the second level, i.e., enrichment of UF6, UCl4 and U-metal respectively; and then further broken down at the third level into nine processes: gaseous diffusion, gas centrifuge, aerodynamic, electromagnetic, molecular laser (MLIS), atomic vapor laser (AVLIS), chemical exchange, ion exchange and plasma. Narratives are presented at each level, beginning with a general process description then proceeding with detailed

  16. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    Science.gov (United States)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
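
    The two building blocks named above, a breaking probability and a dissipation rate per unit area, can be illustrated with a Battjes-Janssen-type bore dissipation for depth-limited breaking. The sketch below uses that classical form as a simplified stand-in for the actual Filipot-Ardhuin formulation, with illustrative parameter values.

```python
import math

G, RHO = 9.81, 1025.0

def breaking_fraction(Hrms, Hmax):
    """Fraction of breaking waves Qb from the implicit Battjes-Janssen relation
    (1 - Qb)/(-ln Qb) = (Hrms/Hmax)^2, solved by fixed-point iteration."""
    b = min(Hrms / Hmax, 1.0)
    Qb = 0.5
    for _ in range(50):
        Qb = math.exp((Qb - 1.0) / b**2) if b > 0 else 0.0
    return Qb

def bore_dissipation(Hrms, fmean, depth, gamma=0.73, alpha=1.0):
    """Mean dissipation rate per unit area (W m-2) for depth-limited breaking waves."""
    Hmax = gamma * depth
    Qb = breaking_fraction(Hrms, Hmax)
    return 0.25 * alpha * Qb * fmean * RHO * G * Hmax**2

# Surf-zone example: 1.5 m rms wave height, 10 s mean period, 3 m depth
print(bore_dissipation(Hrms=1.5, fmean=0.1, depth=3.0))   # on the order of 200 W m-2
```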

  17. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    Science.gov (United States)

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844

  18. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    International Nuclear Information System (INIS)

    Huang, Dong; Liu, Yangang

    2014-01-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models. (letter)

  19. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    Science.gov (United States)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.

  20. Active Subspaces of Airfoil Shape Parameterizations

    Science.gov (United States)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
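
    An active subspace is obtained from the eigendecomposition of the gradient outer-product matrix C = E[∇f ∇f^T]; eigenvectors with large eigenvalues mark the directions along which the quantity of interest varies most. The sketch below estimates C by Monte Carlo for a toy quadratic function standing in for a lift or drag coefficient; it is not tied to the PARSEC or CST parameterizations.

```python
import numpy as np

def active_subspace(grad_f, sample_inputs, k=2):
    """Estimate a k-dimensional active subspace from gradient samples.

    grad_f        : callable returning the gradient of the quantity of interest
    sample_inputs : (N, m) array of design points drawn from the input density
    Returns eigenvalues (descending) and the first k eigenvectors."""
    grads = np.array([grad_f(x) for x in sample_inputs])    # (N, m)
    C = grads.T @ grads / len(sample_inputs)                # Monte Carlo estimate of E[grad grad^T]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Toy stand-in for a design objective with a strong one-dimensional ridge structure
A = np.array([1.0, 0.5, 0.1, 0.05, 0.01])
f_grad = lambda x: 2.0 * (A @ x) * A            # gradient of f(x) = (A . x)^2

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(500, 5))
lam, W = active_subspace(f_grad, X, k=1)
print(lam)          # one dominant eigenvalue -> a 1-D active subspace
print(W.ravel())    # its direction is parallel to A (up to sign and normalization)
```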

  1. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    Science.gov (United States)

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm conceptualized as a synthetic rainfall hyetograph with a Gaussian shape and the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis for the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
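
    The rainstorm conceptualization described above can be made concrete as follows: given depth, duration and peak intensity, a Gaussian-shaped intensity profile is constructed and truncated to the event duration so that its integral matches the depth. The sketch below is one plausible construction under those assumptions, not the authors' exact scheme.

```python
import numpy as np

def gaussian_hyetograph(depth_mm, duration_min, peak_mm_per_min, n=200):
    """Synthetic Gaussian-shaped hyetograph with given depth, duration and peak intensity.

    The spread is set so the untruncated Gaussian integrates to the depth; the profile
    is then truncated to the event duration and rescaled to conserve the depth exactly."""
    t = np.linspace(0.0, duration_min, n)
    sigma = depth_mm / (peak_mm_per_min * np.sqrt(2.0 * np.pi))
    intensity = peak_mm_per_min * np.exp(-0.5 * ((t - duration_min / 2.0) / sigma) ** 2)
    dt = t[1] - t[0]
    intensity *= depth_mm / (intensity.sum() * dt)   # enforce the target depth after truncation
    return t, intensity

t, i = gaussian_hyetograph(depth_mm=20.0, duration_min=60.0, peak_mm_per_min=1.2)
print(round(float(i.sum() * (t[1] - t[0])), 2))      # 20.0 mm recovered
```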

  2. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    OpenAIRE

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or ...

  3. Physical models of cell motility

    CERN Document Server

    2016-01-01

    This book surveys the most recent advances in physics-inspired cell movement models. This synergetic, cross-disciplinary effort to increase the fidelity of computational algorithms will lead to a better understanding of the complex biomechanics of cell movement, and stimulate progress in research on related active matter systems, from suspensions of bacteria and synthetic swimmers to cell tissues and cytoskeleton.Cell motility and collective motion are among the most important themes in biology and statistical physics of out-of-equilibrium systems, and crucial for morphogenesis, wound healing, and immune response in eukaryotic organisms. It is also relevant for the development of effective treatment strategies for diseases such as cancer, and for the design of bioactive surfaces for cell sorting and manipulation. Substrate-based cell motility is, however, a very complex process as regulatory pathways and physical force generation mechanisms are intertwined. To understand the interplay between adhesion, force ...

  4. Development and Testing of a Life Cycle Model and a Parameterization of Thin Mid-level Stratiform Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, Steven K.

    2008-03-03

    We used a cloud-resolving model (a detailed computer model of cloud systems) to evaluate and improve the representation of clouds in global atmospheric models used for numerical weather prediction and climate modeling. We also used observations of the atmospheric state, including clouds, made at DOE's Atmospheric Radiation Measurement (ARM) Program's Climate Research Facility located in the Southern Great Plains (Kansas and Oklahoma) during Intensive Observation Periods to evaluate our detailed computer model as well as a single-column version of a global atmospheric model used for numerical weather prediction (the Global Forecast System of the NOAA National Centers for Environmental Prediction). This so-called Single-Column Modeling approach has proved to be a very effective method for testing the representation of clouds in global atmospheric models. The method relies on detailed observations of the atmospheric state, including clouds, in an atmospheric column comparable in size to a grid column used in a global atmospheric model. The required observations are made by a combination of in situ and remote sensing instruments. One of the greatest problems facing mankind at the present is climate change. Part of the problem is our limited ability to predict the regional patterns of climate change. In order to increase this ability, uncertainties in climate models must be reduced. One of the greatest of these uncertainties is the representation of clouds and cloud processes. This project, and ARM taken as a whole, has helped to improve the representation of clouds in global atmospheric models.

  5. The impact on UT/LS cirrus clouds in the CAM/CARMA model using a new interactive aerosol parameterization.

    Science.gov (United States)

    Maloney, C.; Toon, B.; Bardeen, C.

    2017-12-01

    Recent studies indicate that heterogeneous nucleation may play a large role in cirrus cloud formation in the UT/LS, a region previously thought to be primarily dominated by homogeneous nucleation. As a result, it is beneficial to ensure that general circulation models properly represent heterogeneous nucleation in ice cloud simulations. Our work strives towards addressing this issue in the NSF/DOE Community Earth System Model's atmospheric model, CAM. More specifically we are addressing the role of heterogeneous nucleation in the coupled sectional microphysics cloud model, CARMA. Currently, our CAM/CARMA cirrus model only performs homogenous ice nucleation while ignoring heterogeneous nucleation. In our work, we couple the CAM/CARMA cirrus model with the Modal Aerosol Model (MAM). By combining the aerosol model with CAM/CARMA we can both account for heterogeneous nucleation, as well as directly link the sulfates used for homogeneous nucleation to computed fields instead of the current static field being utilized. Here we present our initial results and compare our findings to observations from the long running CALIPSO and MODIS satellite missions.

  6. Strategies for control of sudden oak death in Humboldt County-informed guidance based on a parameterized epidemiological model

    Science.gov (United States)

    João A. N. Filipe; Richard C. Cobb; David M. Rizzo; Ross K. Meentemeyer; Christopher A.. Gilligan

    2010-01-01

    Landscape- to regional-scale models of plant epidemics are direly needed to predict large-scale impacts of disease and assess practicable options for control. While landscape heterogeneity is recognized as a major driver of disease dynamics, epidemiological models are rarely applied to realistic landscape conditions due to computational and data limitations. Here we...

  7. Coupled carbon-water exchange of the Amazon rain forest. I. Model description, parameterization and sensitivity analysis

    NARCIS (Netherlands)

    Simon, E.; Meixner, F.X.; Ganzeveld, L.N.; Kesselmeier, J.

    2005-01-01

    Detailed one-dimensional multilayer biosphere-atmosphere models, also referred to as CANVEG models, have been used for more than a decade to describe coupled water-carbon exchange between the terrestrial vegetation and the lower atmosphere. Within the present study, a modified CANVEG scheme is described.

  8. Physical model of reactor pulse

    International Nuclear Information System (INIS)

    Petrovic, A.; Ravnik, M.

    2004-01-01

    Pulse experiments have been performed at J. Stefan Institute TRIGA reactor since 1991. In total, more than 130 pulses have been performed. Extensive experimental information on the pulse physical characteristics has been accumulated. Fuchs-Hansen adiabatic model has been used for predicting and analysing the pulse parameters. The model is based on point kinetics equation, neglecting the delayed neutrons and assuming constant inserted reactivity in form of step function. Deficiencies of the Fuchs-Hansen model and systematic experimental errors have been observed and analysed. Recently, the pulse model was improved by including the delayed neutrons and time dependence of inserted reactivity. The results explain the observed non-linearity of the pulse energy for high pulses due to finite time of pulse rod withdrawal and the contribution of the delayed neutrons after the prompt part of the pulse. The results of the improved model are in good agreement with experimental results. (author)

  9. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
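
    At the core of the GML algorithm is the damped normal-equations upgrade delta_p = (J^T Q J + λ I)^{-1} J^T Q r, optionally augmented with Tikhonov regularization terms. The sketch below shows a single iteration of that update on a generic weighted least-squares problem; it is schematic and not the PEST++ implementation.

```python
import numpy as np

def gml_step(jacobian, residuals, weights, lam):
    """One Gauss-Marquardt-Levenberg parameter upgrade.

    jacobian  : (n_obs, n_par) sensitivity matrix J
    residuals : observed minus simulated values r
    weights   : observation weights (diagonal of Q)
    lam       : Marquardt lambda controlling the damping
    Returns the parameter upgrade vector delta_p."""
    J, r = np.asarray(jacobian), np.asarray(residuals)
    Q = np.diag(weights)
    n_par = J.shape[1]
    lhs = J.T @ Q @ J + lam * np.eye(n_par)
    rhs = J.T @ Q @ r
    return np.linalg.solve(lhs, rhs)

# Tiny illustrative problem: two parameters, three weighted observations
J = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0]])
r = np.array([0.3, -0.1, 0.2])
print(gml_step(J, r, weights=[1.0, 1.0, 0.5], lam=0.1))
```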

  10. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    OpenAIRE

    Angelen, J. H.; Lenaerts, J. T. M.; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.; Broeke, M. R.; Meijgaard, E.; Smeets, C. J. P. P.

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6%) at the K-transect (west Greenland) for the period 2004–2009 is...

  11. Parameterization analysis and inversion for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2018-05-01

    Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces considerable trade-offs and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters denoted εd, ηd and δd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters which the data are sensitive to, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of three parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1 which provides scattering mainly near

  12. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    Science.gov (United States)

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
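
    As a hedged sketch of the kind of processing TSPROC performs (TSPROC itself uses its own scripting language, not Python), seasonal statistics, annual flow volumes, and a basic transformation can be reduced from a hypothetical daily discharge series as follows; the file name and column names are assumptions.

        import numpy as np
        import pandas as pd

        # Hypothetical daily streamflow series indexed by date (m3/s)
        flow = pd.read_csv("daily_flow.csv", index_col="date", parse_dates=True)["flow"]

        seasons = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
                   6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

        seasonal_mean = flow.groupby(flow.index.month.map(seasons)).mean()  # seasonal statistics
        annual_volume = flow.resample("YS").sum() * 86400                   # annual flow volume, m3
        log_flow = np.log10(flow + 1e-6)                                    # simple transformation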

  13. High-Latitude Stratospheric Sensitivity to QBO Width in a Chemistry-Climate Model with Parameterized Ozone Chemistry

    Science.gov (United States)

    Hurwitz, M. M.; Braesicke, P.; Pyle, J. A.

    2010-01-01

    In a pair of idealized simulations with a simplified chemistry-climate model, the sensitivity of the wintertime Arctic stratosphere to variability in the width of the quasi-biennial oscillation (QBO) is assessed. The width of the QBO appears to have as much influence on the Arctic stratosphere as does the phase (i.e. the Holton-Tan mechanism). In the model, a wider QBO acts like a preferential shift toward the easterly phase of the QBO, in which zonal winds at 60° N tend to be relatively weaker, while 50 hPa geopotential heights and polar ozone values tend to be higher.

  14. Modelling winter organic aerosol at the European scale with CAMx: evaluation and source apportionment with a VBS parameterization based on novel wood burning smog chamber experiments

    Science.gov (United States)

    Ciarelli, Giancarlo; Aksoyoglu, Sebnem; El Haddad, Imad; Bruns, Emily A.; Crippa, Monica; Poulain, Laurent; Äijälä, Mikko; Carbone, Samara; Freney, Evelyn; O'Dowd, Colin; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    We evaluated a modified VBS (volatility basis set) scheme to treat biomass-burning-like organic aerosol (BBOA) implemented in CAMx (Comprehensive Air Quality Model with extensions). The updated scheme was parameterized with novel wood combustion smog chamber experiments using a hybrid VBS framework which accounts for a mixture of wood burning organic aerosol precursors and their further functionalization and fragmentation in the atmosphere. The new scheme was evaluated for one of the winter EMEP intensive campaigns (February-March 2009) against aerosol mass spectrometer (AMS) measurements performed at 11 sites in Europe. We found a considerable improvement for the modelled organic aerosol (OA) mass compared to our previous model application with the mean fractional bias (MFB) reduced from -61 to -29 %. We performed model-based source apportionment studies and compared results against positive matrix factorization (PMF) analysis performed on OA AMS data. Both model and observations suggest that OA was mainly of secondary origin at almost all sites. Modelled secondary organic aerosol (SOA) contributions to total OA varied from 32 to 88 % (with an average contribution of 62 %) and absolute concentrations were generally under-predicted. Modelled primary hydrocarbon-like organic aerosol (HOA) and primary biomass-burning-like aerosol (BBPOA) fractions contributed to a lesser extent (HOA from 3 to 30 %, and BBPOA from 1 to 39 %) with average contributions of 13 and 25 %, respectively. Modelled BBPOA fractions were found to represent 12 to 64 % of the total residential-heating-related OA, with increasing contributions at stations located in the northern part of the domain. Source apportionment studies were performed to assess the contribution of residential and non-residential combustion precursors to the total SOA. Non-residential combustion and road transportation sector contributed about 30-40 % to SOA formation (with increasing contributions at urban and near
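
    For reference, the mean fractional bias quoted above is conventionally computed as in the hedged sketch below (model values against observations); the function and variable names are illustrative.

        import numpy as np

        def mean_fractional_bias(model, obs):
            """MFB = (2/N) * sum((M_i - O_i) / (M_i + O_i)), reported in percent."""
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            return 100.0 * 2.0 * np.mean((model - obs) / (model + obs))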

  15. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), is typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
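
    A hedged sketch of the core computation such ILMs require: the probability that a susceptible individual becomes infected, driven by a spatial distance kernel summed over infectious individuals, with optional subsampling of the infectious set to approximate the likelihood term. The kernel form, parameters, and function names are illustrative assumptions, not the paper's exact model.

        import numpy as np

        rng = np.random.default_rng(0)

        def infection_prob(i, infectious, coords, alpha=1.0, beta=2.0, sample_frac=None):
            """P(i infected) = 1 - exp(-alpha * sum_j d_ij**(-beta)) over infectious j.

            If sample_frac is set, the infectious pressure is estimated from a random
            subsample and rescaled, approximating the full likelihood contribution.
            """
            infectious = np.asarray(infectious)
            if sample_frac is not None and len(infectious) > 0:
                k = max(1, int(sample_frac * len(infectious)))
                idx = rng.choice(infectious, size=k, replace=False)
                scale = len(infectious) / k
            else:
                idx, scale = infectious, 1.0
            d = np.linalg.norm(coords[idx] - coords[i], axis=1)
            d = np.maximum(d, 1e-9)                      # avoid division by zero
            pressure = alpha * np.sum(d ** (-beta)) * scale
            return 1.0 - np.exp(-pressure)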

  16. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang; Castro, Christopher; Nguyen, Truong; Cassell, William; Chang, Hsin-I

    2018-01-01

    A commonly noted problem in the simulation of warm-season convection in the North American monsoon region has been the inability of atmospheric models at meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally

  17. On the Relationship between Observed NLDN Lightning Strikes and Modeled Convective Precipitation Rates: Parameterization of Lightning NOx Production in CMAQ

    Science.gov (United States)

    Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past dec...

  18. Meta-analysis of field-saturated hydraulic conductivity recovery following wildland fire: Applications for hydrologic model parameterization and resilience assessment

    Science.gov (United States)

    Ebel, Brian A.; Martin, Deborah

    2017-01-01

    Hydrologic recovery after wildfire is critical for restoring the ecosystem services of protecting human lives and infrastructure from hazards and delivering a water supply of sufficient quality and quantity. Recovery of soil-hydraulic properties, such as field-saturated hydraulic conductivity (Kfs), is a key factor for assessing the duration of watershed-scale flash flood and debris flow risks after wildfire. Despite the crucial role of Kfs in parameterizing numerical hydrologic models to predict the magnitude of postwildfire run-off and erosion, quantitative relations to predict Kfs recovery with time since wildfire are lacking. Here, we conduct meta-analyses of 5 datasets from the literature that measure or estimate Kfs with time since wildfire for durations longer than 3 years. The meta-analyses focus on fitting 2 quantitative relations (linear and non-linear logistic) to explain trends in Kfs temporal recovery. The 2 relations adequately described temporal recovery except at 1 site where macropore flow dominated infiltration and Kfs recovery. This work also suggests that Kfs can have low hydrologic resistance (large postfire changes), and moderate to high hydrologic stability (recovery time relative to disturbance recurrence interval) and resilience (recovery of hydrologic function and provision of ecosystem services). Future Kfs relations could more explicitly incorporate processes such as soil-water repellency, ground cover and soil structure regeneration, macropore recovery, and vegetation regrowth.
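
    As a hedged illustration of the non-linear logistic recovery relation described (the exact functional form and coefficients used in the paper may differ), Kfs can be fitted as a function of time since fire with scipy; the observation values below are made up for demonstration.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic_recovery(t, k_final, rate, t_mid):
            """Kfs(t) rising toward k_final along a logistic curve (illustrative form)."""
            return k_final / (1.0 + np.exp(-rate * (t - t_mid)))

        # Hypothetical observations: years since wildfire and measured Kfs (mm/h)
        t_obs = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
        k_obs = np.array([2.0, 3.5, 8.0, 20.0, 35.0, 42.0, 45.0])

        popt, _ = curve_fit(logistic_recovery, t_obs, k_obs, p0=[45.0, 1.0, 2.0])
        print("fitted (k_final, rate, t_mid):", popt)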

  19. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values.

    Science.gov (United States)

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-10-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. © 2014 The Authors
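
    The wildlife Eco-SSL construct described above can be sketched, in hedged form, as the soil concentration at which a simplified dietary-exposure model equals the TRV. The parameter names and the simplified dose equation below are generic assumptions, not USEPA's exact formulation.

        def eco_ssl(trv, food_ir, soil_ir, baf, bioavail_food=1.0, bioavail_soil=1.0):
            """Soil concentration (mg/kg) at which modeled dose equals the TRV (mg/kg-bw/day).

            Dose = C_soil * (soil_ir * bioavail_soil + food_ir * baf * bioavail_food),
            with intake rates already normalized by body weight and the food
            concentration approximated as C_soil * baf (soil-to-biota accumulation).
            Setting Dose = TRV and solving for C_soil gives the screening level.
            """
            return trv / (soil_ir * bioavail_soil + food_ir * baf * bioavail_food)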

  20. Parameterization of the driving time in the evacuation or fast relocation model of an accident consequence code

    International Nuclear Information System (INIS)

    Pfeffer, W.; Hofer, E.; Nowak, E.; Schnadt, H.

    1988-01-01

    The model of protective measures in the accident consequence code system UFOMOD of the German Risk Study, Phase B, requires the driving times of the population to be evacuated for the evaluation of the dose received during the evacuation. The parameter values are derived from evacuation simulations carried out with the code EVAS for 36 sectors from various sites. The simulations indicated that the driving time strongly depends on the population density, whereas other influences are less important. It was decided to use different driving times in the consequence code for each of four population density classes as well as for each of three or four fractions of the population in a sector. The variability between sectors of a class was estimated from the 36 sectors, in order to derive subjective probability distributions that model the uncertainty in the parameter value to be used for any of the fractions in a particular sector for which an EVAS simulation has not yet been performed. To this end, the impact of the uncertainties in the parameters and modelling assumptions of EVAS on the simulated times was also quantified using expert judgement. The distributions permit the derivation of a set of driving times to be used as so-called "best estimate" or reference values in the accident consequence code. Additionally, they are directly applicable in an uncertainty and sensitivity analysis.

  1. Sensitivity of a two-dimensional chemistry-transport model to changes in parameterizations of radiative processes

    International Nuclear Information System (INIS)

    Grant, K.E.; Ellingson, R.G.; Wuebbles, D.J.

    1988-08-01

    Radiative processes strongly affect equilibrium trace gas concentrations both directly, through photolysis reactions, and indirectly through temperature and transport processes. As part of our continuing radiative submodel development and validation, we have used the LLNL 2-D chemical-radiative-transport (CRT) model to investigate the net sensitivity of equilibrium ozone concentrations to several changes in radiative forcing. Doubling CO2 from 300 ppmv to 600 ppmv resulted in a temperature decrease of 5 K to 8 K in the middle stratosphere along with an 8% to 16% increase in ozone in the same region. Replacing our usual shortwave scattering algorithms with a simplified Rayleigh algorithm led to a 1% to 2% increase in ozone in the lower stratosphere. Finally, modifying our normal CO2 cooling rates by corrections derived from line-by-line calculations resulted in several regions of heating and cooling. We observed temperature changes on the order of 1 K to 1.5 K with corresponding changes of 0.5% to 1.5% in O3. Our results for doubled CO2 compare favorably with those of other authors. Results for our two perturbation scenarios stress the need for accurately modeling radiative processes while confirming the general validity of current 2-D CRT models. 15 refs., 5 figs

  2. Comparison of mean properties of simulated convection in a cloud-resolving model with those produced by cumulus parameterization

    Energy Technology Data Exchange (ETDEWEB)

    Dudhia, J.; Parsons, D.B. [National Center for Atmospheric Research, Boulder, CO (United States)

    1996-04-01

    An Intensive Observation Period (IOP) of the Atmospheric Radiation Measurement (ARM) Program took place at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site from June 16-26, 1993. The National Center for Atmospheric Research (NCAR)/Penn State Mesoscale Model (MM5) has been used to simulate this period on a 60-km domain with 20- and 6.67-km nests centered on Lamont, Oklahoma. Simulations are run with data assimilation by the nudging technique to incorporate upper-air and surface data from a variety of platforms. The model maintains dynamical consistency between the fields, while the data correct for model biases that may occur during long-term simulations and provide boundary conditions. For the work reported here, 3-hourly analyses from the Mesoscale Atmospheric Prediction System (MAPS) of the National Oceanic and Atmospheric Administration (NOAA) were used to drive the 60-km domain, while the inner domains were unforced. A continuous 10-day period was simulated.

  3. Using Leaf Chlorophyll to Parameterize Light-Use-Efficiency Within a Thermal-Based Carbon, Water and Energy Exchange Model

    Science.gov (United States)

    Houlborg, Rasmus; Anderson, Martha C.; Daughtry, C. S. T.; Kustas, W. P.; Rodell, Matthew

    2010-01-01

    Chlorophylls absorb photosynthetically active radiation and thus function as vital pigments for photosynthesis, which makes leaf chlorophyll content (Cab) useful for monitoring vegetation productivity and an important indicator of the overall plant physiological condition. This study investigates the utility of integrating remotely sensed estimates of Cab into a thermal-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. The LUE model component computes canopy-scale carbon assimilation and transpiration fluxes and incorporates LUE modifications from a nominal (species-dependent) value (LUEn) in response to short-term variations in environmental conditions. However, LUEn may need adjustment on a daily timescale to accommodate changes in plant phenology, physiological condition, and nutrient status. Day-to-day variations in LUEn were assessed for a heterogeneous corn crop field in Maryland, U.S.A. through model calibration with eddy covariance CO2 flux tower observations. The optimized daily LUEn values were then compared to estimates of Cab integrated from gridded maps of chlorophyll content weighted over the tower flux source area. The time-continuous maps of daily Cab over the study field were generated by combining in-situ measurements with retrievals generated with an integrated radiative transfer modeling tool (accurate to within +/-10%) using at-sensor radiances in green, red and near-infrared wavelengths acquired with an aircraft imaging system. The resultant daily changes in Cab within the tower flux source area generally correlated well with corresponding changes in daily calibrated LUEn derived from the tower flux data, and hourly water, energy and carbon flux estimation accuracies from TSEB were significantly improved when using Cab for delineating spatio

  4. Prognostic parameterization of cloud ice with a single category in the aerosol-climate model ECHAM(v6.3.0)-HAM(v2.3)

    Science.gov (United States)

    Dietlicher, Remo; Neubauer, David; Lohmann, Ulrike

    2018-04-01

    A new scheme for stratiform cloud microphysics has been implemented in the ECHAM6-HAM2 general circulation model. It features a widely used description of cloud water with two categories for cloud droplets and raindrops. The unique aspect of the new scheme is the break with the traditional approach to describe cloud ice analogously. Here we parameterize cloud ice by a single category that predicts bulk particle properties (P3). This method has already been applied in a regional model and most recently also in the Community Atmosphere Model 5 (CAM5). A single cloud ice category does not rely on heuristic conversion rates from one category to another. Therefore, it is conceptually easier and closer to first principles. This work shows that a single category is a viable approach to describe cloud ice in climate models. Prognostic representation of sedimentation is achieved by a nested approach for sub-stepping the cloud microphysics scheme. This yields good results in terms of accuracy and performance as compared to simulations with high temporal resolution. Furthermore, the new scheme allows for a competition between various cloud processes and is thus able to unbiasedly represent the ice formation pathway from nucleation to growth by vapor deposition and collisions to sedimentation. Specific aspects of the P3 method are evaluated. We could not produce a purely stratiform cloud where rime growth dominates growth by vapor deposition and conclude that the lack of appropriate conditions renders the prognostic parameters associated with the rime properties unnecessary. Limitations inherent in a single category are examined.

  5. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, whose numbers of nodes are 3821, 500, 100, 50, and 1, respectively. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), the surface gravitational acceleration (lg g), and the metallicity ([Fe/H]). The results show that the stacked autoencoder deep neural network achieves better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s-2)), and 0.0121 dex for [Fe/H].
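
    A hedged PyTorch sketch of a fully connected network with the layer sizes quoted above (3821-500-100-50-1); the paper's stacked-autoencoder pretraining, activation choices, and training details are not reproduced here, so everything beyond the layer sizes is an assumption.

        import torch
        import torch.nn as nn

        # One network per stellar parameter (Teff, lg g, or [Fe/H]); 3821 flux inputs.
        model = nn.Sequential(
            nn.Linear(3821, 500), nn.Sigmoid(),
            nn.Linear(500, 100), nn.Sigmoid(),
            nn.Linear(100, 50), nn.Sigmoid(),
            nn.Linear(50, 1),
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.L1Loss()  # mean absolute error, matching the MAE metric reported

        def train_step(spectra, labels):
            optimizer.zero_grad()
            loss = loss_fn(model(spectra), labels)
            loss.backward()
            optimizer.step()
            return loss.item()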

  6. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    Science.gov (United States)

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation, and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
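
    A hedged sketch of the iterative idea: since conduction velocity in cardiac tissue models scales roughly with the square root of the bulk conductivity, one can repeatedly simulate a test propagation and rescale the conductivity until a prescribed velocity is met. The `simulate_cv` function is a placeholder for the user's solver, and this is not the authors' algorithm verbatim.

        def tune_conductivity(simulate_cv, sigma0, v_target, tol=0.01, max_iter=20):
            """Iteratively adjust bulk conductivity so the simulated conduction
            velocity matches v_target, using CV ~ sqrt(sigma) as the update rule."""
            sigma = sigma0
            for _ in range(max_iter):
                v = simulate_cv(sigma)            # run a small strand/slab simulation
                if abs(v - v_target) / v_target < tol:
                    break
                sigma *= (v_target / v) ** 2      # rescale using the CV-sqrt(sigma) scaling
            return sigma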

  7. Cabin Environment Physics Risk Model

    Science.gov (United States)

    Mattenberger, Christopher J.; Mathias, Donovan Leigh

    2014-01-01

    This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of crew-critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.

  8. A new parameterization for surface ocean light attenuation in Earth System Models: assessing the impact of light absorption by colored detrital material

    Science.gov (United States)

    Kim, G. E.; Pradal, M.-A.; Gnanadesikan, A.

    2015-03-01

    Light limitation can affect the distribution of biota and nutrients in the ocean. Light absorption by colored detrital material (CDM) was included in a fully coupled Earth System Model using a new parameterization for shortwave attenuation. Two model runs were conducted, with and without light attenuation by CDM. In a global average sense, greater light limitation associated with CDM increased surface chlorophyll, biomass and nutrients together. These changes can be attributed to the movement of biological productivity higher up the water column, which increased surface chlorophyll and biomass while simultaneously decreasing total biomass. Meanwhile, the reduction in biomass resulted in greater nutrient availability throughout the water column. Similar results were found on a regional scale in an analysis of the oceans by biome. In coastal regions, surface chlorophyll increased by 35% while total integrated phytoplankton biomass diminished by 18%. The largest relative increases in modeled surface chlorophyll and biomass in the open ocean were found in the equatorial biomes, while largest decreases in depth-integrated biomass and chlorophyll were found in the subpolar and polar biomes. This mismatch of surface and subsurface trends and their regional dependence was analyzed by comparing the competing factors of diminished light availability and increased nutrient availability on phytoplankton growth in the upper 200 m. Overall, increases in surface biomass were expected to accompany greater nutrient uptake and therefore diminish surface nutrients, but changes in light limitation decoupled trends between these two variables. Understanding changes in biological productivity requires both surface and depth-resolved information. Surface trends may be minimal or of the opposite sign to depth-integrated amounts, depending on the vertical structure of phytoplankton abundance.
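
    A hedged sketch of the general form such a shortwave attenuation parameterization takes; the coefficients, chlorophyll dependence, and single-band treatment below are illustrative assumptions, not the paper's fitted scheme.

        import numpy as np

        def shortwave_at_depth(surface_sw, depth_m, chl_mg_m3, a_cdm_m1=0.0):
            """Exponential attenuation with water, chlorophyll, and CDM contributions.

            k_total = k_water + k_chl(chlorophyll) + k_cdm; all values illustrative.
            """
            k_water = 0.04                          # clear-water attenuation (1/m)
            k_chl = 0.03 * chl_mg_m3 ** 0.65        # chlorophyll-dependent term
            k_total = k_water + k_chl + a_cdm_m1    # add colored detrital material
            return surface_sw * np.exp(-k_total * depth_m)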

  9. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    Science.gov (United States)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring networks, and particularly the rapid geodynamics, which clearly expresses some seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults, and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented in a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent, fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps

  10. Improving representation of convective transport for scale-aware parameterization: 2. Analysis of cloud-resolving model simulations

    Science.gov (United States)

    Liu, Yi-Chin; Fan, Jiwen; Zhang, Guang J.; Xu, Kuan-Man; Ghan, Steven J.

    2015-04-01

    Following Part I, in which 3-D cloud-resolving model (CRM) simulations of a squall line and mesoscale convective complex in the midlatitude continental and the tropical regions are conducted and evaluated, we examine the scale dependence of eddy transport of water vapor, evaluate different eddy transport formulations, and improve the representation of convective transport across all scales by proposing a new formulation that more accurately represents the CRM-calculated eddy flux. CRM results show that there are strong grid-spacing dependencies of updraft and downdraft fractions regardless of altitudes, cloud life stage, and geographical location. As for the eddy transport of water vapor, updraft eddy flux is a major contributor to total eddy flux in the lower and middle troposphere. However, downdraft eddy transport can be as large as updraft eddy transport in the lower atmosphere especially at the mature stage of midlatitude continental convection. We show that the single-updraft approach significantly underestimates updraft eddy transport of water vapor because it fails to account for the large internal variability of updrafts, while a single downdraft represents the downdraft eddy transport of water vapor well. We find that using as few as three updrafts can account for the internal variability of updrafts well. Based on the evaluation with the CRM simulated data, we recommend a simplified eddy transport formulation that considers three updrafts and one downdraft. Such formulation is similar to the conventional one but much more accurately represents CRM-simulated eddy flux across all grid scales.
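
    A hedged sketch of the kind of top-hat eddy-flux formulation being evaluated: the subgrid vertical flux of water vapor is approximated by summing contributions from a few draft categories (e.g. three updrafts and one downdraft), each weighted by its area fraction. The names and the simplified decomposition (environment residual omitted) are illustrative assumptions.

        import numpy as np

        def eddy_flux(sigmas, w_drafts, q_drafts, w_mean, q_mean):
            """Approximate w'q' as sum_i sigma_i * (w_i - w_mean) * (q_i - q_mean),
            e.g. over three updraft categories and one downdraft."""
            sigmas = np.asarray(sigmas)
            w_drafts = np.asarray(w_drafts)
            q_drafts = np.asarray(q_drafts)
            return np.sum(sigmas * (w_drafts - w_mean) * (q_drafts - q_mean))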

  11. New and extended parameterization of the thermodynamic model AIOMFAC: calculation of activity coefficients for organic-inorganic mixtures containing carboxyl, hydroxyl, carbonyl, ether, ester, alkenyl, alkyl, and aromatic functional groups

    Directory of Open Access Journals (Sweden)

    A. Zuend

    2011-09-01

    We present a new and considerably extended parameterization of the thermodynamic activity coefficient model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) at room temperature. AIOMFAC combines a Pitzer-like electrolyte solution model with a UNIFAC-based group-contribution approach and explicitly accounts for interactions between organic functional groups and inorganic ions. Such interactions constitute the salt effect, may cause liquid-liquid phase separation, and affect the gas-particle partitioning of aerosols. The previous AIOMFAC version was parameterized for alkyl and hydroxyl functional groups of alcohols and polyols. With the goal of describing a wide variety of organic compounds found in atmospheric aerosols, we extend here the parameterization of AIOMFAC to include the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkenyl, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon. Thermodynamic equilibrium data of organic-inorganic systems from the literature are critically assessed and complemented with new measurements to establish a comprehensive database. The database is used to determine simultaneously the AIOMFAC parameters describing interactions of organic functional groups with the ions H+, Li+, Na+, K+, NH4+, Mg2+, Ca2+, Cl−, Br−, NO3−, HSO4−, and SO42−. Detailed descriptions of different types of thermodynamic data, such as vapor-liquid, solid-liquid, and liquid-liquid equilibria, and their use for the model parameterization are provided. Issues regarding deficiencies of the database, types and uncertainties of experimental data, and limitations of the model are discussed. The challenging parameter optimization problem is solved with a novel combination of powerful global minimization

  12. Excellence in Physics Education Award: Modeling Theory for Physics Instruction

    Science.gov (United States)

    Hestenes, David

    2014-03-01

    All humans create mental models to plan and guide their interactions with the physical world. Science has greatly refined and extended this ability by creating and validating formal scientific models of physical things and processes. Research in physics education has found that mental models created from everyday experience are largely incompatible with scientific models. This suggests that the fundamental problem in learning and understanding science is coordinating mental models with scientific models. Modeling Theory has drawn on resources of cognitive science to work out extensive implications of this suggestion and guide development of an approach to science pedagogy and curriculum design called Modeling Instruction. Modeling Instruction has been widely applied to high school physics and, more recently, to chemistry and biology, with noteworthy results.

  13. A Multivariate Model of Physics Problem Solving

    Science.gov (United States)

    Taasoobshirazi, Gita; Farley, John

    2013-01-01

    A model of expertise in physics problem solving was tested on undergraduate science, physics, and engineering majors enrolled in an introductory-level physics course. Structural equation modeling was used to test hypothesized relationships among variables linked to expertise in physics problem solving including motivation, metacognitive planning,…

  14. Models and structures: mathematical physics

    International Nuclear Information System (INIS)

    2003-01-01

    This document gathers research activities along 5 main directions. 1) Quantum chaos and dynamical systems. Recent results concern the extension of the exact WKB method, which has led to a host of new results on the spectrum and wave functions. Progress has also been made in the description of the wave functions of chaotic quantum systems. Renormalization has been applied to the analysis of dynamical systems. 2) Combinatorial statistical physics. We see the emergence of new techniques applied to various combinatorial problems, from random walks to random lattices. 3) Integrability: from structures to applications. Techniques of conformal field theory and integrable model systems have been developed. Progress is still being made, in particular for open systems with boundary conditions, in connection with string and brane physics. Noticeable links of integrability and exact WKB quantization to 2-dimensional disordered systems have been highlighted. New correlations of eigenvalues and better connections to integrability have been formulated for random matrices. 4) Gravities and string theories. We have developed aspects of 2-dimensional string theory with a particular emphasis on its connection to matrix models as well as non-perturbative properties of M-theory. We have also followed an alternative path known as loop quantum gravity. 5) Quantum field theory. The results obtained lately concern its foundations, in flat or curved spaces, but also applications to second-order phase transitions in statistical systems

  15. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    Science.gov (United States)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    Main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of mesoscale convective system (MCS) occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. Performances of different schemes are evaluated by examining boundary layer, reflectivity, and precipitation features of MCS using ground-based and satellite observations. Among various physical parameterization schemes, Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce deep boundary layer height by simulating warm temperatures necessary for storm initiation; Thompson (THM) microphysics scheme is capable to simulate the reflectivity by reasonable distribution of different hydrometeors during various stages of system; Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation by proper representation of convective instability associated with MCS. Present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme, which accounts strong vertical mixing; THM, a six-class hybrid moment microphysics scheme, which considers number concentration along with mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme, which adjusts thermodynamic profiles based on climatological profiles might have contributed for better performance of respective model simulations. Numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.

  16. Examining Chaotic Convection with Super-Parameterization Ensembles

    Science.gov (United States)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large-scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, lending to examination of those weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale.

  17. Ensemble assimilation of JASON/ENVISAT and JASON/AltiKA altimetric observations with stochastic parameterization of the model dynamical uncertainties

    Science.gov (United States)

    Brasseur, Pierre; Candille, Guillem; Bouttier, Pierre-Antoine; Brankart, Jean-Michel; Verron, Jacques

    2015-04-01

    The objective of this study is to explicitly simulate and quantify the uncertainty related to sea-level anomalies diagnosed from eddy-resolving ocean circulation models, in order to develop advanced methods suitable for addressing along-track altimetric data assimilation into such models. This work is carried out jointly with the MyOcean and SANGOMA (Stochastic Assimilation for the Next Generation Ocean Model Applications) consortium, funded by the EU under the GMES umbrella over the 2012-2015 period. In this framework, a realistic circulation model of the North Atlantic ocean at 1/4° resolution (NATL025 configuration) has been adapted to include effects of unresolved scales on the dynamics. This is achieved by introducing stochastic perturbations of the equation of state to represent the associated model uncertainty. Assimilation experiments are designed using altimetric data from past and on-going missions (Jason-2 and Saral/AltiKA experiments, and Cryosat-2 for fully independent altimetric validation) to better control the Gulf Stream circulation, especially the frontal regions, which are predominantly affected by the non-resolved dynamical scales. An ensemble based on such stochastic perturbations is then produced and evaluated, through probabilistic criteria (reliability and resolution), using the model equivalent of along-track altimetric observations. These three elements (stochastic parameterization, ensemble simulation and 4D observation operator) are used together to perform optimal 4D analysis of along-track altimetry over 10-day assimilation windows. In this presentation, the results show that the free ensemble, before the assimilation process is started, reproduces the climatological variability over the Gulf Stream area well: the system is then reliable but not informative (null probabilistic resolution). Updating the free ensemble with altimetric data leads to better reliability and to an improvement of the information (resolution

  18. An Evaluation of Marine Boundary Layer Cloud Property Simulations in the Community Atmosphere Model Using Satellite Observations: Conventional Subgrid Parameterization versus CLUBB

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hua [Joint Center for Earth Systems Technology, University of Maryland, Baltimore County, Baltimore, Maryland; Zhang, Zhibo [Joint Center for Earth Systems Technology, and Physics Department, University of Maryland, Baltimore County, Baltimore, Maryland; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Wang, Minghuai [Institute for Climate and Global Change Research, and School of Atmospheric Sciences, Nanjing University, Nanjing, China

    2018-03-01

    This paper presents a two-step evaluation of the marine boundary layer (MBL) cloud properties from two Community Atmosphere Model (version 5.3, CAM5) simulations, one based on the CAM5 standard parameterization schemes (CAM5-Base), and the other on the Cloud Layers Unified By Binormals (CLUBB) scheme (CAM5-CLUBB). In the first step, we compare the cloud properties directly from model outputs between the two simulations. We find that the CAM5-CLUBB run produces more MBL clouds in the tropical and subtropical large-scale descending regions. Moreover, the stratocumulus (Sc) to cumulus (Cu) cloud regime transition is much smoother in CAM5-CLUBB than in CAM5-Base. In addition, in CAM5-Base we find some grid cells with very small low cloud fraction (<20%) to have very high in-cloud water content (mixing ratio up to 400 mg/kg). We find no such grid cells in the CAM5-CLUBB run. However, we also note that both simulations, especially CAM5-CLUBB, produce a significant number of “empty” low cloud cells with significant cloud fraction (up to 70%) and near-zero in-cloud water content. In the second step, we use satellite observations from CERES, MODIS and CloudSat to evaluate the simulated MBL cloud properties by employing the COSP satellite simulators. We note that a feature of the COSP-MODIS simulator that mimics the minimum detection threshold of MODIS cloud masking removes many more low clouds from CAM5-CLUBB than it does from CAM5-Base. This leads to a surprising result: in the large-scale descending regions CAM5-CLUBB has a smaller COSP-MODIS cloud fraction and weaker shortwave cloud radiative forcing than CAM5-Base. A sensitivity study suggests that this is because CAM5-CLUBB suffers more from the above-mentioned “empty” clouds issue than CAM5-Base. The COSP-MODIS cloud droplet effective radius in CAM5-CLUBB shows a spatial increase from coastal Sc toward Cu, which is in qualitative agreement with MODIS observations. In contrast, COSP-MODIS cloud droplet

  19. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    Science.gov (United States)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  20. An energetically consistent vertical mixing parameterization in CCSM4

    DEFF Research Database (Denmark)

    Nielsen, Søren Borg; Jochum, Markus; Eden, Carsten

    2018-01-01

    An energetically consistent stratification-dependent vertical mixing parameterization is implemented in the Community Climate System Model 4 and forced with energy conversion from the barotropic tides to internal waves. The structures of the resulting dissipation and diffusivity fields are compared......, however, depends greatly on the details of the vertical mixing parameterizations, where the new energetically consistent parameterization results in low thermocline diffusivities and a sharper and shallower thermocline. It is also investigated if the ocean state is more sensitive to a change in forcing...

  1. Invariant box-parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, Thomas J.; Wagner, DJ

    1998-01-01

    The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements
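
    For orientation, the "boxes" are the quartic products of mixing-matrix elements that appear in the standard oscillation probability. A hedged reminder of that standard form (not necessarily the paper's notation) is:

        P(\nu_\alpha \to \nu_\beta) = \delta_{\alpha\beta}
            - 4 \sum_{i>j} \mathrm{Re}\, B^{\alpha\beta}_{ij} \, \sin^2\!\left( \frac{\Delta m^2_{ij} L}{4E} \right)
            + 2 \sum_{i>j} \mathrm{Im}\, B^{\alpha\beta}_{ij} \, \sin\!\left( \frac{\Delta m^2_{ij} L}{2E} \right),
        \qquad
        B^{\alpha\beta}_{ij} \equiv U_{\alpha i} U^{*}_{\beta i} U^{*}_{\alpha j} U_{\beta j}

    The boxes B are invariant under rephasing of the mixing matrix U, which is why they can be treated as observables independent of the chosen parameterization.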

  2. Capturing the complex behavior of hydraulic fracture stimulation through multi-physics modeling, field-based constraints, and model reduction

    Science.gov (United States)

    Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.

    2016-12-01

    Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.

  3. Development and Testing of Embedded Gridding Within the Regional Ocean Modeling System: Interaction Between Near-Shore Off-Shore Currents and Material Properties

    National Research Council Canada - National Science Library

    McWilliams, James

    2003-01-01

    ...) continuing the development of the Regional Ocean Modeling System (ROMS) with respect to its hydrodynamic algorithms, physical transport parameterizations, and range of biogeochemical processes; (2...

  4. Parameterized and resolved Southern Ocean eddy compensation

    Science.gov (United States)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.

  5. A physically based model of global freshwater surface temperature

    Science.gov (United States)

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
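
    A hedged, zero-dimensional sketch of the kind of surface water energy balance being solved (the full model also handles advection along the drainage network and ice formation; the flux inputs and the explicit Euler step here are placeholders, not the PCR-GLOBWB implementation).

        RHO_W, CP_W = 1000.0, 4181.0   # water density (kg/m3) and heat capacity (J/kg/K)

        def step_water_temp(T, depth, sw_net, lw_net, sensible, latent, adv, dt=86400.0):
            """Advance surface water temperature by one time step (explicit Euler).

            All fluxes in W/m2, positive into the water column; adv is the net
            advective heat flux from upstream inflow, runoff, and precipitation.
            """
            heat_capacity = RHO_W * CP_W * depth          # J/m2/K for the water column
            net_flux = sw_net + lw_net + sensible + latent + adv
            return T + dt * net_flux / heat_capacity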

  6. Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

    Science.gov (United States)

    Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

    2017-12-01

    Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. The different schemes also impact the

  7. Dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2003-01-01

    A different aspect of using the parameterisation of all systems stabilised by a given controller, i.e. the dual Youla parameterisation, is considered. The relation between system change and the dual Youla parameter is derived in explicit form. A number of standard uncertain model descriptions...... are considered and the relation with the dual Youla parameter given. Some applications of the dual Youla parameterisation are considered in connection with the design of controllers and model/performance validation....
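
    For reference, one standard form of the dual Youla parameterization (in notation that may differ from the paper's) is the following. Given a plant G_0 = N M^{-1} and a stabilizing controller K = U V^{-1}, with the coprime factors satisfying the double Bezout identity, every system stabilized by K can be written as

        \[
          G(S) \;=\; \left(N + V S\right)\left(M + U S\right)^{-1},
        \]

    where the stable transfer function S is the dual Youla parameter; S = 0 recovers the nominal plant G_0, so S gives an explicit measure of the system change relative to G_0.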

  8. Literature Review of Dredging Physical Models

    Science.gov (United States)

    This U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, special report presents a review of dredging physical ...model studies with the goal of understanding the most current state of dredging physical modeling, understanding conditions of similitude used in past...studies, and determining whether the flow field around a dredging operation has been quantified. Historical physical modeling efforts have focused on

  9. Evaluating a Model of Youth Physical Activity

    Science.gov (United States)

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  10. Parameterizing convective organization

    Directory of Open Access Journals (Sweden)

    Brian Earle Mapes

    2011-06-01

    Full Text Available Lateral mixing parameters in buoyancy-driven deep convection schemes are among the most sensitive and important unknowns in atmosphere models. Unfortunately, there is not a true optimum value for plume mixing rate, but rather a dilemma or tradeoff: Excessive dilution of updrafts leads to unstable stratification bias in the mean state, while inadequate dilution allows deep convection to occur too easily, causing poor space and time distributions and variability. In this too-small parameter space, compromises are made based on competing metrics of model performance. We attempt to escape this “entrainment dilemma” by making bulk plume parameters (chiefly entrainment rate) depend on a new prognostic variable (“organization,” org) meant to reflect the rectified effects of subgrid-scale structure in meteorological fields. We test an org scheme in the Community Atmosphere Model (CAM5) with a new unified shallow-deep convection scheme (UW-ens), a 2-plume version of the University of Washington scheme. Since buoyant ascent involves natural selection, subgrid structure makes convection systematically deeper and stronger than the pure unorganized case: plumes of average (or randomly sampled) air rising in the average environment. To reflect this, org is nonnegative, but we leave it dimensionless. A time scale characterizes its behavior (here ∼3 h for a 2° model). Currently its source is rain evaporation, but other sources can be added easily. We also let org be horizontally transported by advection, as a mass-weighted mean over the convecting layer. Linear coefficients link org to a plume ensemble, which it assists via: (1) plume base warmth above the mean temperature, (2) plume radius enhancement (reduced mixing), and (3) increased probability of overlap in a multi-plume scheme, where interactions benefit later generations (this part has only been implemented in an offline toy column model). Since rain evaporation is a source for org, it functions as a time
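
    Schematically, the prognostic structure described above can be summarized as follows; this is a sketch consistent with the abstract, and the coefficients c_E and a, as well as the exact functional form linking org to entrainment, are illustrative rather than taken from the paper:

        \[
          \frac{\partial\,\mathrm{org}}{\partial t}
            = -\,\nabla\cdot\left(\mathbf{v}\,\mathrm{org}\right)
              + c_E\,E_{\mathrm{rain}}
              - \frac{\mathrm{org}}{\tau},
          \qquad
          \varepsilon(\mathrm{org}) = \frac{\varepsilon_0}{1 + a\,\mathrm{org}},
        \]

    where E_rain is the rain evaporation source, tau is the decay time scale (of order 3 h), the advecting velocity v is the mass-weighted mean flow over the convecting layer, and epsilon is the bulk plume entrainment rate, reduced as org grows.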

  11. Simplified models for new physics in vector boson scattering. Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Kilian, Wolfgang; Sekulla, Marco

    2013-07-01

    In this contribution to the Snowmass process 2013 we give a brief review of how new physics could enter in the electroweak (EW) sector of the Standard Model (SM). This new physics, if it is directly accessible at low energies, can be parameterized by explicit resonances having certain quantum numbers. The extreme case is the decoupling limit where those resonances are very heavy and leave only traces in the form of deviations in the SM couplings. Translations are given into higher-dimensional operators leading to such deviations. As long as such resonances are introduced without a UV-complete theory behind it, these models suffer from unitarity violation of perturbative scattering amplitudes. We show explicitly how theoretically sane descriptions could be achieved by using a unitarization prescription that allows a correct description of such a resonance without specifying a UV-complete model.

  12. Simplified models for new physics in vector boson scattering. Input for Snowmass 2013

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, Juergen [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kilian, Wolfgang; Sekulla, Marco [Siegen Univ. (Germany). Theoretische Physik I

    2013-07-15

    In this contribution to the Snowmass process 2013 we give a brief review of how new physics could enter in the electroweak (EW) sector of the Standard Model (SM). This new physics, if it is directly accessible at low energies, can be parameterized by explicit resonances having certain quantum numbers. The extreme case is the decoupling limit where those resonances are very heavy and leave only traces in the form of deviations in the SM couplings. Translations are given into higher-dimensional operators leading to such deviations. As long as such resonances are introduced without a UV-complete theory behind it, these models suffer from unitarity violation of perturbative scattering amplitudes. We show explicitly how theoretically sane descriptions could be achieved by using a unitarization prescription that allows a correct description of such a resonance without specifying a UV-complete model.

  13. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon; Alkhalifah, Tariq Ali

    2016-01-01

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as the data quality and the sensitivity of the data to the elastic parameters because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it offers the best resolution, obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuth variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  14. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon

    2016-09-15

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as the data quality and the sensitivity of the data to the elastic parameters because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it offers the best resolution, obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuth variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  15. A validated physical model of greenhouse climate.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the

  16. Numerical modelling in material physics

    International Nuclear Information System (INIS)

    Proville, L.

    2004-12-01

    The author first briefly presents his past research activities: investigation of dislocation sliding in solid solution by molecular dynamics, modelling of metal film growth by phase field and Monte Carlo kinetics, a phase field model for surface self-organisation, a phase field model for the Al3Zr alloy, calculation of anharmonic phonons, and the mobility of bipolarons in superconductors. Then, he reports in more detail on mesoscopic phase-field modelling and on some atomistic modelling (dislocation sliding, Monte Carlo simulation of metal surface growth, anharmonic network optical spectrum modelling)

  17. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). As initial and boundary conditions ERA-40 Reanalysis data are used. Two domains were used, a coarse one with dimensions of 55 by 60 grid points with spacing of 30 km and a nested domain of 48 by 72 grid points grid spaced 10 km. The coarse domain fully covers the Iberian Peninsula and Andalusia fits loosely in the finer one. In addition to parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were performed applying the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially conflictive subregions where precipitation is scarcely captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding parameterization schemes performance, every set provides very

  18. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO), and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
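
    A minimal sketch of this type of diagnosis is shown below: given a grid-mean cloud water content and an assumed subgrid PDF (a lognormal here, purely for illustration; the paper prescribes its own PDF and threshold), the cloud and rain fractions follow from exceedance probabilities.

        # Illustrative subgrid cloud/rain fraction diagnosis from an assumed lognormal PDF.
        import math

        def fractions_from_pdf(qc_mean, qc_std, qc_autoconv=5e-4, qc_cloud=1e-6):
            """Return (cloud_fraction, rain_fraction) for grid-mean cloud water qc_mean (kg/kg).

            cloud_fraction: probability that subgrid cloud water exceeds a small value,
            rain_fraction : probability that it exceeds the collection threshold qc_autoconv.
            """
            if qc_mean <= 0.0:
                return 0.0, 0.0
            sigma2 = math.log(1.0 + (qc_std / qc_mean) ** 2)   # moment-matched lognormal
            mu = math.log(qc_mean) - 0.5 * sigma2

            def exceed(threshold):
                z = (math.log(threshold) - mu) / math.sqrt(2.0 * sigma2)
                return 0.5 * math.erfc(z)

            return exceed(qc_cloud), exceed(qc_autoconv)

        print(fractions_from_pdf(qc_mean=3e-4, qc_std=3e-4))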

  19. Problems in physical modeling of magnetic materials

    International Nuclear Information System (INIS)

    Della Torre, E.

    2004-01-01

    Physical modeling of magnetic materials should give insights into the basic processes involved and should be able to extrapolate results to new situations that the models were not necessarily intended to solve. Thus, for example, if a model is designed to describe a static magnetization curve, it should also be able to describe aspects of magnetization dynamics. Both micromagnetic modeling and Preisach modeling, the two most popular magnetic models, fulfill this requirement, but in the process of fulfilling this requirement, they both had to be modified in some ways. Hence, we should view physical modeling as an iterative process whereby we start with some simple assumptions and refine them as reality requires. In the process of refining these assumptions, we should try to appeal to physical arguments for the modifications, if we are to come up with good models. If we consider phenomenological models, on the other hand, that is as axiomatic models requiring no physical justification, we can follow them logically to see the end and examine the consequences of their assumptions. In this way, we can learn the properties, limitations and achievements of the particular model. Physical and phenomenological models complement each other in furthering our understanding of the behavior of magnetic materials

  20. High precision Standard Model Physics

    International Nuclear Information System (INIS)

    Magnin, J.

    2009-01-01

    The main goal of the LHCb experiment, one of the four large experiments of the Large Hadron Collider, is to try to answer the question of why Nature prefers matter over antimatter. This will be done by studying the decays of b quarks and their antimatter partners, b-bar, which will be produced by the billions in 14 TeV p-p collisions at the LHC. In addition, as 'beauty' particles mainly decay into charm particles, an interesting program of charm physics will be carried out, allowing quantities such as the D0-D0bar mixing to be measured with incredible precision.

  1. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345. Final report (09/13/2016) by Parviz Moin, Leland Stanford Junior University, CA, on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011... [The remainder of this record is report-form (SF 298) boilerplate.]

  2. Slush Fund: Modeling the Multiphase Physics of Oceanic Ices

    Science.gov (United States)

    Buffo, J.; Schmidt, B. E.

    2016-12-01

    The prevalence of ice interacting with an ocean, both on Earth and throughout the solar system, and its crucial role as the mediator of exchange between the hydrosphere below and atmosphere above, have made quantifying the thermodynamic, chemical, and physical properties of the ice highly desirable. While direct observations of these quantities exist, their scarcity increases with the difficulty of obtainment; the basal surfaces of terrestrial ice shelves remain largely unexplored and the icy interiors of moons like Europa and Enceladus have never been directly observed. Our understanding of these entities thus relies on numerical simulation, and the efficacy of their incorporation into larger systems models is dependent on the accuracy of these initial simulations. One characteristic of seawater, likely shared by the oceans of icy moons, is that it is a solution. As such, when it is frozen a majority of the solute is rejected from the forming ice, concentrating in interstitial pockets and channels, producing a two-component reactive porous media known as a mushy layer. The multiphase nature of this layer affects the evolution and dynamics of the overlying ice mass. Additionally ice can form in the water column and accrete onto the basal surface of these ice masses via buoyancy driven sedimentation as frazil or platelet ice. Numerical models hoping to accurately represent ice-ocean interactions should include the multiphase behavior of these two phenomena. While models of sea ice have begun to incorporate multiphase physics into their capabilities, no models of ice shelves/shells explicitly account for the two-phase behavior of the ice-ocean interface. Here we present a 1D multiphase model of floating oceanic ice that includes parameterizations of both density driven advection within the `mushy layer' and buoyancy driven sedimentation. The model is validated against contemporary sea ice models and observational data. Environmental stresses such as supercooling and
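
    A very small piece of such a multiphase description can be sketched as follows: with a linear liquidus and the common assumption that salt resides only in the brine, the interstitial liquid fraction of the mush follows from bulk salinity and temperature. The liquidus slope and the numbers are nominal, not values from the model presented here.

        # Illustrative mushy-layer diagnostic: brine (liquid) fraction from a linear liquidus.
        def brine_salinity(temp_c, liquidus_slope=-0.054):
            """Salinity (psu) of brine in equilibrium with ice at temp_c (degC)."""
            return temp_c / liquidus_slope

        def liquid_fraction(bulk_salinity, temp_c):
            """Approximate fraction of interstitial brine, assuming salt-free ice."""
            s_br = brine_salinity(temp_c)
            return min(1.0, bulk_salinity / s_br) if s_br > 0.0 else 1.0

        # A parcel with bulk salinity 10 psu cooled to -5 degC retains roughly 11% brine:
        print(liquid_fraction(10.0, -5.0))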

  3. The Physical Internet and Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Diane Poulin

    2012-06-01

    Full Text Available Building on the analogy of data packets within the Digital Internet, the Physical Internet is a concept that dramatically transforms how physical objects are designed, manufactured, and distributed. This approach is open, efficient, and sustainable beyond traditional proprietary logistical solutions, which are often plagued by inefficiencies. The Physical Internet redefines supply chain configurations, business models, and value-creation patterns. Firms are bound to be less dependent on operational scale and scope trade-offs because they will be in a position to offer novel hybrid products and services that would otherwise destroy value. Finally, logistical chains become flexible and reconfigurable in real time, thus becoming better in tune with firm strategic choices. This article focuses on the potential impact of the Physical Internet on business model innovation, both from the perspectives of Physical-Internet enabled and enabling business models.

  4. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    Science.gov (United States)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in the Weather Research and Forecasting (WRF) model uses a pure algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015-2017. We focus on selected cases when physical phenomena of significance for wind energy applications such as mountain waves, topographic wakes, and gap flows were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in zonal, meridional, and vertical direction, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. The presentation will highlight the advantages of the 3
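
    The key structural difference between the 1D and 3D closures can be sketched numerically: a 1D PBL scheme retains only the vertical divergence of the turbulent momentum flux, while a 3D scheme adds the horizontal stress gradients. The snippet below is an illustrative finite-difference comparison, not the Mellor-Yamada level-2 implementation described above.

        # 1D vs 3D turbulent momentum flux divergence on a (nz, ny, nx) grid.
        import numpy as np

        def tendency_1d(uw, dz):
            """-d<u'w'>/dz, the only term a one-dimensional PBL scheme retains."""
            return -np.gradient(uw, dz, axis=0)

        def tendency_3d(uu, uv, uw, dx, dy, dz):
            """-(d<u'u'>/dx + d<u'v'>/dy + d<u'w'>/dz), as in a three-dimensional closure."""
            return -(np.gradient(uu, dx, axis=2)
                     + np.gradient(uv, dy, axis=1)
                     + np.gradient(uw, dz, axis=0))

        # Synthetic stresses, purely to exercise the stencils on a small grid:
        nz, ny, nx = 40, 20, 20
        rng = np.random.default_rng(0)
        uu, uv, uw = (rng.random((nz, ny, nx)) * 0.1 for _ in range(3))
        extra = tendency_3d(uu, uv, uw, 900.0, 900.0, 50.0) - tendency_1d(uw, 50.0)
        print(float(np.abs(extra).max()))   # magnitude of the terms a 1D scheme neglects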

  5. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  6. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo; Serpetzoglou, Efthymios; Anagnostou, Emmanouil N.; Nikolopoulos, Efthymios I.; Papadopoulos, Anastasios

    2012-01-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions

  7. Quark models in hadron physics

    International Nuclear Information System (INIS)

    Phatak, Shashikant C.

    2007-01-01

    In this talk, we review the role played by quark models in the study of the interaction of strong, weak and electromagnetic probes with hadrons at intermediate and high momentum transfers. By hadrons, we mean individual nucleons as well as nuclei. We argue that at these momentum transfers, the structure of hadrons plays an important role. This structure arises from the underlying quark structure of hadrons, and therefore quark models play an important role in determining it. Further, the properties of hadrons are likely to change when they are placed in a nuclear medium, and this change should also arise from the underlying quark structure. We shall consider some quark models to look into these aspects. (author)

  8. Physics of the Quark Model

    Science.gov (United States)

    Young, Robert D.

    1973-01-01

    Discusses the charge independence, wavefunctions, magnetic moments, and high-energy scattering of hadrons on the basis of group theory and nonrelativistic quark model with mass spectrum calculated by first-order perturbation theory. The presentation is explainable to advanced undergraduate students. (CC)

  9. A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes

    International Nuclear Information System (INIS)

    J. O. Pinto; A.H. Lynch

    2004-01-01

    The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles

  10. Simplified Models for LHC New Physics Searches

    CERN Document Server

    Alves, Daniele; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela; Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip; Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim; Thomas, Brooks; Thomas, Scott; Toro, Natalia; Volansky, Tomer; Wacker, Jay; Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a...

  11. Parameterization of mixing by secondary circulation in estuaries

    Science.gov (United States)

    Basdurak, N. B.; Huguenard, K. D.; Valle-Levinson, A.; Li, M.; Chant, R. J.

    2017-07-01

    Eddy viscosity parameterizations that depend on a gradient Richardson number Ri have been most pertinent to the open ocean. Parameterizations applicable to stratified coastal regions typically require implementation of a numerical model. Two novel parameterizations of the vertical eddy viscosity, based on Ri, are proposed here for coastal waters. One turbulence closure considers temporal changes in stratification and bottom stress and is coined the "regular fit." The alternative approach, named the "lateral fit," incorporates variability of lateral flows that are prevalent in estuaries. The two turbulence parameterization schemes are tested using data from a Self-Contained Autonomous Microstructure Profiler (SCAMP) and an Acoustic Doppler Current Profiler (ADCP) collected in the James River Estuary. The "regular fit" compares favorably to SCAMP-derived vertical eddy viscosity values but only at relatively small values of gradient Ri. On the other hand, the "lateral fit" succeeds at describing the lateral variability of eddy viscosity over a wide range of Ri. The modifications proposed to Ri-dependent eddy viscosity parameterizations allow applicability to stratified coastal regions, particularly in wide estuaries, without requiring implementation of a numerical model.
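
    For orientation, a generic Ri-dependent closure of the Munk-Anderson type is sketched below; the paper's "regular fit" and "lateral fit" use their own functional forms and coefficients derived from the James River data, so the constants here are purely illustrative.

        # Generic Richardson-number-dependent vertical eddy viscosity (Munk-Anderson form).
        def eddy_viscosity(ri, nu0=1e-3, alpha=10.0, n=0.5, nu_background=1e-5):
            """Vertical eddy viscosity (m2/s) damped as stratification (ri) increases."""
            ri = max(ri, 0.0)   # no special treatment of unstable stratification here
            return nu_background + nu0 * (1.0 + alpha * ri) ** (-n)

        for ri in (0.0, 0.25, 1.0, 5.0):
            print(ri, eddy_viscosity(ri))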

  12. Parameterization of radiocaesium soil-plant transfer using soil characteristics

    International Nuclear Information System (INIS)

    Konoplev, A. V.; Drissner, J.; Klemt, E.; Konopleva, I. V.; Zibold, G.

    1996-01-01

    A model of radionuclide soil-plant transfer is proposed to parameterize the transfer factor by soil and soil solution characteristics. The model is tested with experimental data on the aggregated transfer factor T ag and soil parameters for 8 forest sites in Baden-Wuerttemberg. It is shown that the integral soil-plant transfer factor can be parameterized through radiocaesium exchangeability, capacity of selective sorption sites and ion composition of the soil solution or the water extract. A modified technique of (FES) measurement for soils with interlayer collapse is proposed. (author)

  13. Modeling Cyber Physical War Gaming

    Science.gov (United States)

    2017-08-07

    games share similar constructs. We also provide a game-theoretic approach to mathematically analyze attacker and defender strategies in cyber war... [The record also mixes in fragments of the report's table of contents: Military Practice of Course-of-Action Analysis; Game-Theoretic Method; Mathematical Model; Strategy Selection.] ...officers, hundreds of combat and support vehicles, helicopters, sophisticated intelligence and communication equipment and specialists, artillery and

  14. A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model

    Directory of Open Access Journals (Sweden)

    Jerry Davis

    2015-06-01

    Full Text Available The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model “Stability INdex MAPping” (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including 3 inventories of 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone produces an Area Under the receiver operating characteristic Curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUC values of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion

  15. Physics beyond the Standard Model

    Science.gov (United States)

    Lach, Theodore

    2011-04-01

    Recent discoveries of the excited states of the Bs** meson along with the discovery of the omega-b-minus have brought into popular acceptance the concept of the orbiting quarks predicted by the Checker Board Model (CBM) 14 years ago. Back then the concept of orbiting quarks was not fashionable. Recent estimates of velocities of these quarks inside the proton and neutron are in excess of 90% the speed of light also in agreement with the CBM model. Still a 2D structure of the nucleus has not been accepted nor has it been proven wrong. The CBM predicts masses of the up and dn quarks are 237.31 MeV and 42.392 MeV respectively and suggests that a lighter generation of quarks u and d make up a different generation of quarks that make up light mesons. The CBM also predicts that the T' and B' quarks do exist and are not as massive as might be expected. (this would make it a 5G world in conflict with the SM) The details of the CB model and prediction of quark masses can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, @http://xxx.lanl.gov/.

  16. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    Science.gov (United States)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  17. Development of a parameterization scheme of mesoscale convective systems

    International Nuclear Information System (INIS)

    Cotton, W.R.

    1994-01-01

    The goal of this research is to develop a parameterization scheme of mesoscale convective systems (MCS) including diabatic heating, moisture and momentum transports, cloud formation, and precipitation. The approach is to: Perform explicit cloud-resolving simulations of MCSs; Perform statistical analyses of simulated MCSs to assist in fabricating a parameterization, calibrating coefficients, etc.; Test the parameterization scheme against independent field data measurements and in numerical weather prediction (NWP) models emulating general circulation model (GCM) grid resolution. Thus far we have formulated, calibrated, implemented and tested a deep convective engine against explicit Florida sea breeze convection and in coarse-grid regional simulations of mid-latitude and tropical MCSs. Several explicit simulations of MCSs have been completed, and several others are in progress. Analysis code is being written and run on the explicitly simulated data

  18. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

    that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results....... set model and express the performance of well known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly...... express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition...

  19. Ladder physics in the spin fermion model

    Science.gov (United States)

    Tsvelik, A. M.

    2017-05-01

    A link is established between the spin fermion (SF) model of the cuprates and the approach based on the analogy between the physics of doped Mott insulators in two dimensions and the physics of fermionic ladders. This enables one to use nonperturbative results derived for fermionic ladders to move beyond the large-N approximation in the SF model. It is shown that the paramagnon exchange postulated in the SF model has exactly the right form to facilitate the emergence of the fully gapped d -Mott state in the region of the Brillouin zone at the hot spots of the Fermi surface. Hence, the SF model provides an adequate description of the pseudogap.

  20. A parameterization of cloud droplet nucleation

    International Nuclear Information System (INIS)

    Ghan, S.J.; Chuang, C.; Penner, J.E.

    1993-01-01

    Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parametrization that offers certain advantages over the popular Twomey (1959) parameterization
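
    The classical scheme mentioned above, Twomey's power law, is simple enough to state directly; the coefficients below are placeholders, since C and k depend on the aerosol population.

        # Twomey (1959) power-law activation: N = C * s**k, with s the peak supersaturation (%).
        def activated_droplets(s_percent, c_coeff=1000.0, k=0.5):
            """Number of activated cloud droplets (cm-3) at supersaturation s (%)."""
            return c_coeff * max(s_percent, 0.0) ** k

        for s in (0.1, 0.3, 1.0):
            print(s, activated_droplets(s))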

  1. Ontology modeling in physical asset integrity management

    CERN Document Server

    Yacout, Soumaya

    2015-01-01

    This book presents cutting-edge applications of, and up-to-date research on, ontology engineering techniques in the physical asset integrity domain. Through a survey of state-of-the-art theory and methods on ontology engineering, the authors emphasize essential topics including data integration modeling, knowledge representation, and semantic interpretation. The book also reflects novel topics dealing with the advanced problems of physical asset integrity applications such as heterogeneity, data inconsistency, and interoperability existing in design and utilization. With a distinctive focus on applications relevant in heavy industry, Ontology Modeling in Physical Asset Integrity Management is ideal for practicing industrial and mechanical engineers working in the field, as well as researchers and graduate students concerned with ontology engineering in physical systems life cycles. This book also: Introduces practicing engineers, research scientists, and graduate students to ontology engineering as a modeling techniqu...

  2. A new parameterization for waveform inversion in acoustic orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-26

    Orthorhombic anisotropic model inversion is extra challenging because of the multiple parameter nature of the inversion problem. The high number of parameters required to describe the medium exerts considerable trade-off and additional nonlinearity to a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations starting with the often used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns, but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters, is the azimuthal independency of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

  3. Modelling Mathematical Reasoning in Physics Education

    Science.gov (United States)

    Uhden, Olaf; Karam, Ricardo; Pietrocola, Maurício; Pospiech, Gesche

    2012-04-01

    Many findings from research as well as reports from teachers describe students' problem solving strategies as manipulation of formulas by rote. The resulting dissatisfaction with quantitative physical textbook problems seems to influence the attitude towards the role of mathematics in physics education in general. Mathematics is often seen as a tool for calculation which hinders a conceptual understanding of physical principles. However, the role of mathematics cannot be reduced to this technical aspect. Hence, instead of putting mathematics away we delve into the nature of physical science to reveal the strong conceptual relationship between mathematics and physics. Moreover, we suggest that, for both prospective teaching and further research, a focus on deeply exploring such interdependency can significantly improve the understanding of physics. To provide a suitable basis, we develop a new model which can be used for analysing different levels of mathematical reasoning within physics. It is also a guideline for shifting the attention from technical to structural mathematical skills while teaching physics. We demonstrate its applicability for analysing physical-mathematical reasoning processes with an example.

  4. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in a study on a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The PHYSIC calculation procedure consists of three steps: preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  5. Waste Feed Evaporation Physical Properties Modeling

    International Nuclear Information System (INIS)

    Daniel, W.E.

    2003-01-01

    This document describes the waste feed evaporator modeling work done in the Waste Feed Evaporation and Physical Properties Modeling test specification and in support of the Hanford River Protection Project (RPP) Waste Treatment Plant (WTP) project. A private database (ZEOLITE) was developed and used in this work in order to include the behavior of aluminosilicates such as NAS-gel in the OLI/ESP simulations, in addition to the development of the mathematical models. Mathematical models were developed that describe certain physical properties in the Hanford RPP-WTP waste feed evaporator process (FEP). In particular, models were developed for the feed stream to the first ultra-filtration step characterizing its heat capacity, thermal conductivity, and viscosity, as well as the density of the evaporator contents. The scope of the task was expanded to include the volume reduction factor across the waste feed evaporator (total evaporator feed volume/evaporator bottoms volume). All the physical properties were modeled as functions of the waste feed composition, temperature, and the high level waste recycle volumetric flow rate relative to that of the waste feed. The goal for the mathematical models was to reproduce the physical property values predicted by the simulation. The simulation model approximating the FEP process used to develop the correlations was relatively complex, and not possible to duplicate within the scope of the bench scale evaporation experiments. Therefore, simulants were made of 13 design points (a subset of the points used in the model fits) using the compositions of the ultra-filtration feed streams as predicted by the simulation model. The chemistry and physical properties of the supernate (the modeled stream) as predicted by the simulation were compared with the analytical results of the experimental simulant work as a method of validating the simulation software

  6. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are directly measured, as in a field survey, and furthermore the structure and physical properties of the subsurface are known. The measured datasets from physical modeling are therefore very desirable as input data to test the efficiency of various inversion algorithms. An underground structure formed by intrusion, which can often be seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. Therefore, these physical modeling data are valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and estimating the capability of software such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  7. Fast Physics Testbed for the FASTER Project

    Energy Technology Data Exchange (ETDEWEB)

    Lin, W.; Liu, Y.; Hogan, R.; Neggers, R.; Jensen, M.; Fridlind, A.; Lin, Y.; Wolf, A.

    2010-03-15

    This poster describes the Fast Physics Testbed for the new FAst-physics System Testbed and Research (FASTER) project. The overall objective is to provide a convenient and comprehensive platform for fast turn-around model evaluation against ARM observations and to facilitate development of parameterizations for cloud-related fast processes represented in global climate models. The testbed features three major components: a single column model (SCM) testbed, an NWP-Testbed, and high-resolution modeling (HRM). The web-based SCM-Testbed features multiple SCMs from major climate modeling centers and aims to maximize the potential of the SCM approach to enhance and accelerate the evaluation and improvement of fast physics parameterizations through continuous evaluation of existing and evolving models against historical as well as new/improved ARM and other complementary measurements. The NWP-Testbed aims to capitalize on the large pool of operational numerical weather prediction products. Continuous evaluations of NWP forecasts against observations at ARM sites are carried out to systematically identify the biases and skills of physical parameterizations under all weather conditions. The high-resolution modeling (HRM) activities aim to simulate the fast processes at high resolution to aid in the understanding of the fast processes and their parameterizations. A four-tier HRM framework is established to augment the SCM- and NWP-Testbeds towards eventual improvement of the parameterizations.

  8. Prediction of heavy rainfall over Chennai Metropolitan City, Tamil Nadu, India: Impact of microphysical parameterization schemes

    Science.gov (United States)

    Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.

    2018-04-01

    This study attempts to investigate the real-time prediction of a heavy rainfall event over the Chennai Metropolitan City, Tamil Nadu, India that occurred on 01 December 2015 using Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) parameterization schemes of the model on prediction of heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. Model forecast was carried out using nested domain and the impact of model horizontal grid resolutions were assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed strong upper-level divergence and high moisture content at lower level were favorable for the occurrence of heavy rainfall event over the northeast coast of Tamil Nadu. The study signified that forecasted rainfall was more sensitive to the microphysics and PBL schemes compared to the LSM schemes. The model provided better forecast of the heavy rainfall event using the logical combination of Goddard microphysics, YSU PBL and Noah LSM schemes, and it was mostly attributed to timely initiation and development of the convective system. The forecast with different horizontal resolutions using cumulus parameterization indicated that the rainfall prediction was not well represented at 9 km and 6 km. The forecast with 3 km horizontal resolution provided better prediction in terms of timely initiation and development of the event. The study highlights that forecast of heavy rainfall events using a high-resolution mesoscale model with suitable representations of physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.
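
    One way to organize such a sensitivity study is to enumerate the scheme combinations explicitly, as sketched below. The microphysics names follow the abstract; the particular PBL and LSM options listed, and the choice to hold schemes fixed while varying one family at a time, are assumptions for illustration rather than the authors' exact experiment matrix.

        # Enumerate candidate WRF physics-sensitivity experiments (illustrative design).
        from itertools import product

        microphysics = ["Lin", "WSM6", "Goddard", "Thompson", "Morrison", "WDM6"]
        pbl_schemes = ["YSU", "MYJ", "MYNN2", "ACM2", "QNSE", "BouLac"]   # assumed set of six
        lsm_schemes = ["Noah", "RUC"]                                     # assumed set of two

        # Vary microphysics with PBL and LSM held fixed:
        mp_runs = [{"mp": mp, "pbl": "YSU", "lsm": "Noah"} for mp in microphysics]

        # Full PBL x LSM matrix with the preferred microphysics:
        pbl_lsm_runs = [{"mp": "Goddard", "pbl": p, "lsm": l}
                        for p, l in product(pbl_schemes, lsm_schemes)]

        print(len(mp_runs), len(pbl_lsm_runs))   # 6 and 12 candidate runs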

  9. Neutrosophic Parameterized Soft Relations and Their Applications

    Directory of Open Access Journals (Sweden)

    Irfan Deli

    2014-06-01

    Full Text Available The aim of this paper is to introduce the concept of relations on neutrosophic parameterized soft sets (NP-soft sets). We have studied some related properties and also put forward some propositions on neutrosophic parameterized soft relations with proofs and examples. The notions of symmetric, transitive, reflexive, and equivalence neutrosophic parameterized soft set relations have also been established in this work. Finally, a decision making method on NP-soft sets is presented.

  10. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  11. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching, which requires a lot of scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visual physical phenomena relating to the natural elements and the ship behaviour are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...

  12. Tuning controllers using the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2000-01-01

    This paper describes the application of the Youla parameterization of all stabilizing controllers and the dual Youla parameterization of all systems stabilized by a given controller in connection with tuning of controllers. In the uncertain case, it is shown that the use of the Youla parameteriza...

  13. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    Full Text Available The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  14. REDUCED DATA FOR CURVE MODELING – APPLICATIONS IN GRAPHICS, COMPUTER VISION AND PHYSICS

    Directory of Open Access Journals (Sweden)

    Małgorzata Janik

    2013-06-01

    Full Text Available In this paper we consider the problem of modeling curves in Rn via interpolation without a priori specified interpolation knots. We discuss two approaches to estimate the missing knots for non-parametric data (i.e. a collection of points). The first approach (uniform evaluation) is based on a blind guess in which the knots are chosen uniformly. The second approach (cumulative chord parameterization) incorporates the geometry of the distribution of the data points. More precisely, the difference between consecutive knots is set equal to the Euclidean distance between the data points qi+1 and qi. The second method partially compensates for the loss of the information carried by the reduced data. We also present applications of the above schemes to fitting non-parametric data in computer graphics (light-source motion rendering), in computer vision (image segmentation) and in physics (high-velocity particle trajectory modeling). Though the experiments are conducted for points in R2 and R3, the entire method is equally applicable in Rn.
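
    Stated explicitly (a standard formulation, restated here for completeness), the two knot-selection rules compared above are

        uniform:            t_i = i/n,                                     i = 0, 1, ..., n
        cumulative chord:   t_0 = 0,   t_{i+1} = t_i + \|q_{i+1} - q_i\|,   i = 0, 1, ..., n-1

    with the cumulative-chord knots optionally rescaled to [0, 1] by dividing through by t_n.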

  15. Simplified Models for LHC New Physics Searches

    International Nuclear Information System (INIS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼ 50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  16. Simplified Models for LHC New Physics Searches

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Daniele; /SLAC; Arkani-Hamed, Nima; /Princeton, Inst. Advanced Study; Arora, Sanjay; /Rutgers U., Piscataway; Bai, Yang; /SLAC; Baumgart, Matthew; /Johns Hopkins U.; Berger, Joshua; /Cornell U., Phys. Dept.; Buckley, Matthew; /Fermilab; Butler, Bart; /SLAC; Chang, Spencer; /Oregon U. /UC, Davis; Cheng, Hsin-Chia; /UC, Davis; Cheung, Clifford; /UC, Berkeley; Chivukula, R.Sekhar; /Michigan State U.; Cho, Won Sang; /Tokyo U.; Cotta, Randy; /SLAC; D' Alfonso, Mariarosaria; /UC, Santa Barbara; El Hedri, Sonia; /SLAC; Essig, Rouven, (ed.); /SLAC; Evans, Jared A.; /UC, Davis; Fitzpatrick, Liam; /Boston U.; Fox, Patrick; /Fermilab; Franceschini, Roberto; /LPHE, Lausanne /Pittsburgh U. /Argonne /Northwestern U. /Rutgers U., Piscataway /Rutgers U., Piscataway /Carleton U. /CERN /UC, Davis /Wisconsin U., Madison /SLAC /SLAC /SLAC /Rutgers U., Piscataway /Syracuse U. /SLAC /SLAC /Boston U. /Rutgers U., Piscataway /Seoul Natl. U. /Tohoku U. /UC, Santa Barbara /Korea Inst. Advanced Study, Seoul /Harvard U., Phys. Dept. /Michigan U. /Wisconsin U., Madison /Princeton U. /UC, Santa Barbara /Wisconsin U., Madison /Michigan U. /UC, Davis /SUNY, Stony Brook /TRIUMF; /more authors..

    2012-06-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼ 50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  17. PHYSICAL EDUCATION - PHYSICAL CULTURE. TWO MODELS, TWO DIDACTIC

    Directory of Open Access Journals (Sweden)

    Manuel Vizuete Carrizosa

    2014-11-01

    The survival of these conflicting positions, with their different interests and views on education, over a long period of time yielded two teaching approaches and two different educational models, in which the objectives and content of education differ, and with them the forms and methods of teaching. The need to define the cultural and educational approach, in every time and place, is now pressing and challenges the processes of teacher training, which are responsible for shaping an advanced physical education adjusted to its time and place, to the interests and needs of citizens and to the democratic values of modern society.

  18. Composing Models of Geographic Physical Processes

    Science.gov (United States)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central for geographic information science; yet geographic information systems (GIS) lack capabilities to represent process related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed by basic models of geographic physical processes, which is shown by means of an example.

  19. Parameterization of solar flare dose

    International Nuclear Information System (INIS)

    Lamarche, A.H.; Poston, J.W.

    1996-01-01

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP)
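
    As a rough sketch of what parameterizing dose as a function of time can look like, the example below fits a simple saturating dose-buildup curve to a hypothetical record of accumulated dose during a single flare event. The functional form, the variable names and the numbers are illustrative assumptions only, not the parameterization developed in the study.

        # Minimal sketch: fit D(t) = D_inf * (1 - exp(-t/tau)) to a hypothetical
        # accumulated-dose time series for one solar flare event. The model form
        # and the data below are placeholders, not values from the study.
        import numpy as np
        from scipy.optimize import curve_fit

        def dose_buildup(t, d_inf, tau):
            """Accumulated dose after time t for an exponential-saturation model."""
            return d_inf * (1.0 - np.exp(-t / tau))

        t_hours = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # time since flare onset (h)
        dose = np.array([0.4, 0.8, 1.5, 2.6, 3.9, 4.8, 5.1])        # hypothetical accumulated dose (cGy)

        (d_inf, tau), _ = curve_fit(dose_buildup, t_hours, dose, p0=(5.0, 5.0))
        print(f"fitted asymptotic dose = {d_inf:.2f} cGy, time constant = {tau:.2f} h")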

  20. PHYSICAL EDUCATION - PHYSICAL CULTURE. TWO MODELS, TWO DIDACTIC

    Directory of Open Access Journals (Sweden)

    Manuel Vizuete Carrizosa

    2014-10-01

    Full Text Available Physical Education is currently facing a number of problems that are rooted in the identity crisis prompted by the spread of the professional group, the confrontation of ideas from the scientific community and the competing interests of different political and social areas, to which physical education has failed, or been unable, to react in time. The political and ideological confrontation that characterized the twentieth century gave us two forms, each with a consistent ideological position, in which the body as a subject of education was understood from two different positions: one set from the left and communism, and another from Western democratic societies. The survival of these conflicting positions, with their different interests and views on education, over a long period of time yielded two teaching approaches and two different educational models, in which the objectives and content of education differ, and with them the forms and methods of teaching. The need to define the cultural and educational approach, in every time and place, is now pressing and challenges the processes of teacher training, which are responsible for shaping an advanced physical education adjusted to its time and place, to the interests and needs of citizens and to the democratic values of modern society.

  1. Impact of cloud microphysics and cumulus parameterization on ...

    Indian Academy of Sciences (India)

    2007-10-09

    Oct 9, 2007 ... Bangladesh. Weather Research and Forecast (WRF-ARW version) modelling system with six dif- ... tem intensified rapidly into a land depression over southern part of ... Impact of cloud microphysics and cumulus parameterization on heavy rainfall ... tent and temperature and is represented as a sum.

  2. Physical models for high burnup fuel

    International Nuclear Information System (INIS)

    Kanyukova, V.; Khoruzhii, O.; Likhanskii, V.; Solodovnikov, G.; Sorokin, A.

    2003-01-01

    In this paper some models of processes in high burnup fuel developed at the SRC of Russia Troitsk Institute for Innovation and Fusion Research (TRINITI) are presented. The emphasis is on the description of the degradation of the fuel heat conductivity, the radial profiles of burnup and plutonium accumulation, the restructuring of the pellet rim, and mechanical pellet-cladding interaction. The results demonstrate that the behaviour of high burnup fuel can be described rather accurately on the basis of simplified models within a fuel performance code, provided the models are physically grounded. The development of such models requires detailed physical analysis to serve as a test for the correct choice of allowable simplifications. This approach was applied at the SRC of Russia TRINITI to develop a set of models for WWER fuel, resulting in highly reliable predictions in simulations of high burnup fuel.

  3. The optical model in atomic physics

    International Nuclear Information System (INIS)

    McCarthy, I.E.

    1978-01-01

    The optical model for electron scattering on atoms has quite a short history in comparison with nuclear physics. The main reason for this is that there were insufficient data. Angular distributions for elastic and some inelastic scattering have now been measured for the atoms which exist in gaseous form at reasonable temperatures, with inert gases, hydrogen, the alkalis and mercury being the main ones. The author shows that the optical model makes sense in atomic physics by considering its theory and recent history. (orig./AH) [de

  4. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    Science.gov (United States)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (IP, IS and ρ″), velocity-impedance-I (α′, β′ and IP′), and velocity-impedance-II (α″, β″ and IS′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density
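
    For orientation, in the isotropic-elastic case the parameter classes listed above are linked by standard conversions (quoted here for reference; the primed symbols in the abstract simply distinguish the density and impedance entries of the different parameterizations):

        I_P = \rho \alpha,   I_S = \rho \beta,   \mu = \rho \beta^2,   \lambda = \rho (\alpha^2 - 2\beta^2),   \kappa = \rho (\alpha^2 - \tfrac{4}{3}\beta^2)

    Because the same subsurface perturbation maps into different mixtures of these parameters, each choice produces different scattering radiation patterns and hence different interparameter trade-offs.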

  5. Distance parameterization for efficient seismic history matching with the ensemble Kalman Filter

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Arts, R.

    2012-01-01

    The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel-times is that it requires simulation of

  6. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  7. A validated physical model of greenhouse climate

    International Nuclear Information System (INIS)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the greenhouse and of the control system. The greenhouse model is based on the energy, water vapour and CO2 balances of the crop-greenhouse system. While the emphasis is on the dynamic behaviour of the greenhouse for implementation in continuous optimization, the state variables temperature, water vapour pressure and carbon dioxide concentration in the relevant greenhouse parts (crop, air, soil and cover) are calculated from the balances over these parts. To do this in a proper way, the physical exchange processes between the system parts have to be quantified first. Therefore the greenhouse model is constructed from submodels describing these processes: a. Radiation transmission model for the modification of the outside to the inside global radiation. b. Ventilation model to describe the ventilation exchange between greenhouse and outside air. c. The description of the exchange of energy and mass between the crop and the greenhouse air. d. Calculation of the thermal radiation exchange between the various greenhouse parts. e. Quantification of the convective exchange processes between the greenhouse air and respectively the cover, the heating pipes and the soil surface and between the cover and the outside air. f. Determination of the heat conduction in the soil. The various submodels are validated first and then the complete greenhouse model is verified

  8. Topos models for physics and topos theory

    International Nuclear Information System (INIS)

    Wolters, Sander

    2014-01-01

    What is the role of topos theory in the topos models for quantum theory as used by Isham, Butterfield, Döring, Heunen, Landsman, Spitters, and others? In other words, what is the interplay between physical motivation for the models and the mathematical framework used in these models? Concretely, we show that the presheaf topos model of Butterfield, Isham, and Döring resembles classical physics when viewed from the internal language of the presheaf topos, similar to the copresheaf topos model of Heunen, Landsman, and Spitters. Both the presheaf and copresheaf models provide a “quantum logic” in the form of a complete Heyting algebra. Although these algebras are natural from a topos theoretic stance, we seek a physical interpretation for the logical operations. Finally, we investigate dynamics. In particular, we describe how an automorphism on the operator algebra induces a homeomorphism (or isomorphism of locales) on the associated state spaces of the topos models, and how elementary propositions and truth values transform under the action of this homeomorphism. Also with dynamics the focus is on the internal perspective of the topos

  9. Ladder physics in the spin fermion model

    International Nuclear Information System (INIS)

    Tsvelik, A. M.

    2017-01-01

    A link is established between the spin fermion (SF) model of the cuprates and the approach based on the analogy between the physics of doped Mott insulators in two dimensions and the physics of fermionic ladders. This enables one to use nonperturbative results derived for fermionic ladders to move beyond the large-N approximation in the SF model. Here, it is shown that the paramagnon exchange postulated in the SF model has exactly the right form to facilitate the emergence of the fully gapped d-Mott state in the region of the Brillouin zone at the hot spots of the Fermi surface. Hence, the SF model provides an adequate description of the pseudogap.

  10. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (no Danish abstract available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying...

  11. Mathematical and physical models and radiobiology

    International Nuclear Information System (INIS)

    Lokajicek, M.

    1980-01-01

    The hit theory of the mechanism of biological radiation effects in the cell is discussed with respect to radiotherapy. The mechanisms of biological effects and of intracellular recovery, the cumulative radiation effect and the cumulative biological effect in fractionated irradiation are described. The benefit is shown of consistent application of mathematical and physical models in radiobiology and radiotherapy. (J.P.)

  12. Protein Folding: Search for Basic Physical Models

    Directory of Open Access Journals (Sweden)

    Ivan Y. Torshin

    2003-01-01

    Full Text Available How a unique three-dimensional structure is rapidly formed from the linear sequence of a polypeptide is one of the important questions in contemporary science. Apart from the biological context of in vivo protein folding (which has been studied only for a few proteins), the roles of the fundamental physical forces in in vitro folding remain largely unstudied. Despite a degree of success in using descriptions based on statistical and/or thermodynamic approaches, few of the current models explicitly include more basic physical forces (such as electrostatics and van der Waals forces). Moreover, the present-day models rarely take into account that protein folding is, essentially, a rapid process that produces a highly specific architecture. This review considers several physical models that may provide more direct links between sequence and tertiary structure in terms of the physical forces. In particular, elaboration of such simple models is likely to produce extremely effective computational techniques with value for modern genomics.

  13. Dilution physics modeling: Dissolution/precipitation chemistry

    International Nuclear Information System (INIS)

    Onishi, Y.; Reid, H.C.; Trent, D.S.

    1995-09-01

    This report documents progress made to date on integrating dilution/precipitation chemistry and new physical models into the TEMPEST thermal-hydraulics computer code. Implementation of dissolution/precipitation chemistry models is necessary for predicting nonhomogeneous, time-dependent, physical/chemical behavior of tank wastes with and without a variety of possible engineered remediation and mitigation activities. Such behavior includes chemical reactions, gas retention, solids resuspension, solids dissolution and generation, solids settling/rising, and convective motion of physical and chemical species. Thus this model development is important from the standpoint of predicting the consequences of various engineered activities, such as mitigation by dilution, retrieval, or pretreatment, that can affect safe operations. The integration of a dissolution/precipitation chemistry module allows the various phase species concentrations to enter into the physical calculations that affect the TEMPEST hydrodynamic flow calculations. The yield strength model of non-Newtonian sludge correlates yield to a power function of solids concentration. Likewise, shear stress is concentration-dependent, and the dissolution/precipitation chemistry calculations develop the species concentration evolution that produces fluid flow resistance changes. Dilution of waste with pure water, molar concentrations of sodium hydroxide, and other chemical streams can be analyzed for the reactive species changes and hydrodynamic flow characteristics
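
    The yield-strength correlation referred to above is a power law in the solids concentration; schematically, with C_s the solids concentration and a, b empirical constants fit to waste-specific data (their values are not given here),

        \tau_y = a \, C_s^{\,b}

    so dissolution, precipitation and dilution enter the hydrodynamic calculation by changing C_s and hence the yield stress and shear resistance of the sludge.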

  14. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  15. Phenomenology of convection-parameterization closure

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2013-04-01

    Full Text Available Closure is a problem of defining the convective intensity in a given parameterization. In spite of many years of efforts and progress, it is still considered an overall unresolved problem. The present article reviews this problem from phenomenological perspectives. The physical variables that may contribute in defining the convective intensity are listed, and their statistical significances identified by observational data analyses are reviewed. A possibility is discussed for identifying a correct closure hypothesis by performing a linear stability analysis of tropical convectively coupled waves with various different closure hypotheses. Various individual theoretical issues are considered from various different perspectives. The review also emphasizes that the dominant physical factors controlling convection differ between the tropics and extra-tropics, as well as between oceanic and land areas. Both observational as well as theoretical analyses, often focused on the tropics, do not necessarily lead to conclusions consistent with our operational experiences focused on midlatitudes. Though we emphasize the importance of the interplays between these observational, theoretical and operational perspectives, we also face challenges for establishing a solid research framework that is universally applicable. An energy cycle framework is suggested as such a candidate.
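
    As one concrete example of the closure hypotheses surveyed above, many mass-flux schemes set the convective intensity by requiring that convective available potential energy (CAPE) be consumed over a prescribed adjustment time scale; schematically, the cloud-base mass flux is chosen so that

        d(CAPE)/dt |_conv = - CAPE / \tau_adj

    with \tau_adj the adjustment time. This is only one member of the family of closures discussed in the review; others tie the intensity to moisture convergence, boundary-layer forcing or energy-cycle arguments.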

  16. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer and land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. How the multi-satellite simulator can be used to improve the representation of precipitation processes will also be discussed.

  17. Physical models for classroom teaching in hydrology

    Directory of Open Access Journals (Sweden)

    A. Rodhe

    2012-09-01

    Full Text Available Hydrology teaching benefits from the fact that many important processes can be illustrated and explained with simple physical models. A set of mobile physical models has been developed and used during many years of lecturing at basic university level teaching in hydrology. One model, with which many phenomena can be demonstrated, consists of a 1.0-m-long plexiglass container containing an about 0.25-m-deep open sand aquifer through which water is circulated. The model can be used for showing the groundwater table and its influence on the water content in the unsaturated zone and for quantitative determination of hydraulic properties such as the storage coefficient and the saturated hydraulic conductivity. It is also well suited for discussions on the runoff process and the significance of recharge and discharge areas for groundwater. The flow paths of water and contaminant dispersion can be illustrated in tracer experiments using fluorescent or colour dye. This and a few other physical models, with suggested demonstrations and experiments, are described in this article. The finding from using models in classroom teaching is that it creates curiosity among the students, promotes discussions and most likely deepens the understanding of the basic processes.
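
    The determination of the saturated hydraulic conductivity mentioned above follows from Darcy's law, stated here in its generic laboratory form (the article itself does not spell out the working formulas): with Q the measured discharge through the sand, A the flow cross-section, and \Delta h the head difference over a flow length L,

        Q = K A \frac{\Delta h}{L}   \Longrightarrow   K = \frac{Q L}{A \Delta h}

    while the storage coefficient is similarly obtained as the volume of water released per unit surface area per unit decline of the water table.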

  18. Reliable control using the primary and dual Youla parameterizations

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.

    2002-01-01

    Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again, mainly from a fault modeling point of view. Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced which can be used to analyze how large faults...

  19. Climate impacts of parameterized Nordic Sea overflows

    Science.gov (United States)

    Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.

    2010-11-01

    A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North
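
    The source/entrainment/product bookkeeping used above reduces, schematically, to a simple mixing balance (a simplified restatement for orientation, not the full Price and Yang (1998) scheme): with M_s, M_e and M_p the source, entrainment and product volume transports and T, S the corresponding temperature and salinity,

        M_p = M_s + M_e,   T_p = (M_s T_s + M_e T_e) / M_p,   S_p = (M_s S_s + M_e S_e) / M_p

    so an underestimated entrainment transport M_e leaves the product transport too small and the product properties too close to the source values, which is the bias described above.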

  20. Service Learning In Physics: The Consultant Model

    Science.gov (United States)

    Guerra, David

    2005-04-01

    Each year thousands of students across the country and across the academic disciplines participate in service learning. Unfortunately, with no clear model for integrating community service into the physics curriculum, there are very few physics students engaged in service learning. To overcome this shortfall, a consultant-based service-learning program has been developed and successfully implemented at Saint Anselm College (SAC). As consultants, students in upper-level physics courses apply their problem-solving skills in the service of others. Most recently, SAC students provided technical and managerial support to a group from Girls Inc., a national empowerment program for girls in high-risk, underserved areas, who were participating in the national FIRST Lego League Robotics competition. In their role as consultants the SAC students provided technical information through brainstorming sessions and helped the girls stay on task with project management techniques, like milestone charting. This consultant model of service-learning provides technical support to groups that may not have a great deal of resources and gives physics students a way to improve their interpersonal skills, test their technical expertise, and better define the marketable skill set they are developing through the physics curriculum.

  1. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 1: A cloud ensemble/radiative parameterization for sensor response (report version)

    Science.gov (United States)

    Olson, William S.; Raymond, William H.

    1990-01-01

    The physical retrieval of geophysical parameters based upon remotely sensed data requires a sensor response model which relates the upwelling radiances that the sensor observes to the parameters to be retrieved. In the retrieval of precipitation water contents from satellite passive microwave observations, the sensor response model has two basic components. First, a description of the radiative transfer of microwaves through a precipitating atmosphere must be considered, because it is necessary to establish the physical relationship between precipitation water content and upwelling microwave brightness temperature. Also the spatial response of the satellite microwave sensor (or antenna pattern) must be included in the description of sensor response, since precipitation and the associated brightness temperature field can vary over a typical microwave sensor resolution footprint. A 'population' of convective cells, as well as stratiform clouds, are simulated using a computationally-efficient multi-cylinder cloud model. Ensembles of clouds selected at random from the population, distributed over a 25 km x 25 km model domain, serve as the basis for radiative transfer calculations of upwelling brightness temperatures at the SSM/I frequencies. Sensor spatial response is treated explicitly by convolving the upwelling brightness temperature by the domain-integrated SSM/I antenna patterns. The sensor response model is utilized in precipitation water content retrievals.
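
    The sensor-response step described above, convolving the simulated brightness-temperature field with the SSM/I antenna patterns, amounts to an antenna-gain-weighted average over the footprint (written here in generic form):

        T_A(\nu) = \frac{\int G_\nu(\Omega) \, T_B(\nu, \Omega) \, d\Omega}{\int G_\nu(\Omega) \, d\Omega}

    where G_\nu is the antenna gain pattern at frequency \nu and T_B the upwelling brightness temperature across the model domain.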

  2. Parameterization of planetary wave breaking in the middle atmosphere

    Science.gov (United States)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.
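
    The closure described above can be summarized compactly (a schematic restatement of the stated assumptions). If A is the wave-activity density and F its flux, then for a saturated, steadily breaking wave whose nonlinear dissipation is represented as linear damping,

        \nabla \cdot \mathbf{F} = -\frac{A}{\tau}   \Longrightarrow   \frac{1}{\tau} = -\frac{\nabla \cdot \mathbf{F}}{A}

    so the damping rate 1/\tau, and the diffusivity coefficients derived from it, are fixed by the convergence of the wave-activity flux, which depends directly on the wave group velocity.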

  3. Nuclear physics for applications. A model approach

    International Nuclear Information System (INIS)

    Prussin, S.G.

    2007-01-01

    Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)

  4. The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2003-11-21

    To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights into how these schemes might be improved, and modified parameterizations can then be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.

  5. Prototyping of cerebral vasculature physical models.

    Science.gov (United States)

    Khan, Imad S; Kelly, Patrick D; Singer, Robert J

    2014-01-01

    Prototyping of cerebral vasculature models through stereolithographic methods has the ability to depict the 3D structures of complicated aneurysms with high accuracy. We describe the method to manufacture such a model and review some of its uses in the context of treatment planning, research, and surgical training. We prospectively used the data from the rotational angiography of a 40-year-old female who presented with an unruptured right paraclinoid aneurysm. The 3D virtual model was then converted to a physical life-sized model. The model constructed was shown to be a very accurate depiction of the aneurysm and its associated vasculature. It was found to be useful, among other things, for surgical training and as a patient education tool. With improving and more widespread printing options, these models have the potential to become an important part of research and training modalities.

  6. Assessing physical models used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Computer codes used to predict the behaviour of aerosols in water-cooled reactor containment buildings after severe accidents contain a variety of physical models. Special models are in place for describing agglomeration processes where small aerosol particles combine to form larger ones. Other models are used to calculate the rates at which aerosol particles are deposited on building structures. Condensation of steam on aerosol particles is currently a very active area in aerosol modelling. In this paper, the physical models incorporated in the currently available international codes for all of these processes are reviewed and documented. There is considerable variation in the models used in different codes, and some uncertainties exist as to which models are superior. 28 refs

  7. Elastic FWI for VTI media: A synthetic parameterization study

    KAUST Repository

    Kamath, Nishant

    2016-09-06

    A major challenge for multiparameter full-waveform inversion (FWI) is the inherent trade-offs (or cross-talk) between model parameters. Here, we perform FWI of multicomponent data generated for a synthetic VTI (transversely isotropic with a vertical symmetry axis) model based on a geologic section of the Valhall field. A horizontal displacement source, which excites intensive shear waves in the conventional offset range, helps provide more accurate updates to the SV-wave vertical velocity. We test three model parameterizations, which exhibit different radiation patterns and, therefore, create different parameter trade-offs. The results show that the choice of parameterization for FWI depends on the availability of long-offset data, the quality of the initial model for the anisotropy coefficients, and the parameter that needs to be resolved with the highest accuracy.

  8. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  9. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  10. B physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Hewett, J.A.L.

    1997-12-01

    The ability of present and future experiments to test the Standard Model in the B meson sector is described. The authors examine the loop effects of new interactions in flavor changing neutral current B decays and in Z → b anti b, concentrating on supersymmetry and the left-right symmetric model as specific examples of new physics scenarios. The procedure for performing a global fit to the Wilson coefficients which describe b → s transitions is outlined, and the results of such a fit from Monte Carlo generated data are compared to the predictions of the two sample new physics scenarios. A fit to the Zb anti b couplings from present data is also given

  11. A minimal physical model for crawling cells

    Science.gov (United States)

    Tiribocchi, Adriano; Tjhung, Elsen; Marenduzzo, Davide; Cates, Michael E.

    Cell motility in higher organisms (eukaryotes) is fundamental to biological functions such as wound healing or immune response, and is also implicated in diseases such as cancer. For cells crawling on solid surfaces, considerable insights into motility have been gained from experiments replicating such motion in vitro. Such experiments show that crawling uses a combination of actin treadmilling (polymerization), which pushes the front of a cell forward, and myosin-induced stress (contractility), which retracts the rear. We present a simplified physical model of a crawling cell, consisting of a droplet of active polar fluid with contractility throughout, but treadmilling confined to a thin layer near the supporting wall. The model shows a variety of shapes and/or motility regimes, some closely resembling cases seen experimentally. Our work supports the view that cellular motility exploits autonomous physical mechanisms whose operation does not need continuous regulatory effort.

  12. LHC Higgs physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Spannowsky, M.

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that, for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is motivated only by the experimental agreement of results from flavor physics with Standard Model predictions, and not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  13. LHC Higgs physics beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Spannowsky, M.

    2007-09-22

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that, for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is motivated only by the experimental agreement of results from flavor physics with Standard Model predictions, and not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  14. Looking for physics beyond the standard model

    International Nuclear Information System (INIS)

    Binetruy, P.

    2002-01-01

    Motivations for new physics beyond the Standard Model are presented. The most successful and best motivated option, supersymmetry, is described in some detail, and the associated searches performed at LEP are reviewed. These include searches for additional Higgs bosons and for supersymmetric partners of the standard particles. These searches constrain the mass of the lightest supersymmetric particle which could be responsible for the dark matter of the universe. (authors)

  15. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

    Full Text Available Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
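
    As a concrete illustration of the approximate inference methods discussed above, the sketch below computes naive mean-field estimates of the couplings and fields of a pairwise (Ising-type) model from binned binary data. This is a generic textbook construction given as an example, not the authors' code; the data and bin size are placeholders.

        # Minimal sketch: naive mean-field (nMF) inversion for a pairwise model of
        # binary units s_i = +/-1. Couplings come from the inverse connected
        # correlation matrix; fields from the mean-field self-consistency relation.
        import numpy as np

        rng = np.random.default_rng(0)
        T, N = 20000, 10                                 # placeholder: T time bins, N units
        data = np.where(rng.random((T, N)) < 0.3, 1.0, -1.0)

        m = data.mean(axis=0)                            # magnetizations <s_i>
        C = np.cov(data, rowvar=False)                   # connected correlations C_ij
        C_inv = np.linalg.inv(C)

        J = -C_inv                                       # nMF couplings: J_ij = -(C^-1)_ij for i != j
        np.fill_diagonal(J, 0.0)                         # no self-couplings

        h = np.arctanh(m) - J @ m                        # from m_i = tanh(h_i + sum_j J_ij m_j)

        print("largest |J_ij|:", np.abs(J).max())
        print("fields:", np.round(h, 3))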

  16. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    Science.gov (United States)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in a second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year while a parallel-benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations, but less realistic spatiotemporal variability. The largest favorable impact on WRF-vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritch convection. Other parameterizations have less-obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  17. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al.; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al.; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al.; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on an (n+1)-dimensional space-time lattice where fundamental entities interact among themselves (1 to 2n) in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  18. Hydrological model parameterization using NDVI values to account for the effects of land-cover change on the rainfall-runoff response

    Science.gov (United States)

    Classic rainfall-runoff models usually use historical data to estimate model parameters, and mean parameter values are then used for predictions. However, due to climate change and human effects, the parameters of the model change over time. To overcome this problem, Normalized Difference Vegetati...

  19. Generomak: Fusion physics, engineering and costing model

    International Nuclear Information System (INIS)

    Delene, J.G.; Krakowski, R.A.; Sheffield, J.; Dory, R.A.

    1988-06-01

    A generic fusion physics, engineering and economics model (Generomak) was developed as a means of performing consistent analysis of the economic viability of alternative magnetic fusion reactors. The original Generomak model developed at Oak Ridge by Sheffield was expanded for the analyses of the Senior Committee on Environmental Safety and Economics of Magnetic Fusion Energy (ESECOM). This report describes the Generomak code as used by ESECOM. The input data used for each of the ten ESECOM fusion plants and the Generomak code output for each case are given. 14 refs., 3 figs., 17 tabs

  20. Gyrofluid Modeling of Turbulent, Kinetic Physics

    Science.gov (United States)

    Despain, Kate Marie

    2011-12-01

    Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetics models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase mixing phenomenon, part of the E x B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously by gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and to explore novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.

  1. Agent-Based Models in Social Physics

    Science.gov (United States)

    Quang, Le Anh; Jung, Nam; Cho, Eun Sung; Choi, Jae Han; Lee, Jae Woo

    2018-06-01

    We review agent-based models (ABM) in social physics, including econophysics. An ABM consists of agents, a system space, and an external environment. Each agent is autonomous and decides its behavior by interacting with its neighbors or with the external environment according to rules of behavior. Agents are not perfectly rational, since they have only limited information when making decisions; they adapt by learning from past memories. Agents have various attributes and are heterogeneous. An ABM is a non-equilibrium complex system that exhibits various emergent phenomena. Social-complexity ABMs describe human behavioral characteristics. Among ABMs of econophysics, we introduce the Sugarscape model and artificial market models. We review minority games and majority games in ABMs of game theory. Social-flow ABMs address crowding, evacuation, traffic congestion, and pedestrian dynamics. We also review ABMs for opinion dynamics and the voter model. We discuss the features, advantages, and disadvantages of Netlogo, Repast, Swarm, and Mason, which are representative platforms for implementing ABMs.
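
    A minimal sketch of the ABM ingredients listed above (autonomous agents, neighborhood interaction, simple behavioral rules), here as a voter model on a periodic square lattice; the lattice size and update count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 50                                            # lattice side; one agent per site
opinion = rng.integers(0, 2, size=(L, L))         # binary opinions
neighbors = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])

def step(opinion):
    """Asynchronous update: a random agent copies the opinion of a random neighbor."""
    i, j = rng.integers(0, L, size=2)
    di, dj = neighbors[rng.integers(4)]
    opinion[i, j] = opinion[(i + di) % L, (j + dj) % L]

for _ in range(200_000):
    step(opinion)

print("fraction holding opinion 1:", opinion.mean())
```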

  2. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    Science.gov (United States)

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees have historically been managed for crop pollination; however, recent population declines draw attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not provide the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; the InVEST model, however, provides an efficient tool to estimate bee abundance beyond the field perimeter.
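
    A minimal sketch of the uninformed simulated-annealing parameter search mentioned above, applied to a hypothetical toy objective rather than the InVEST model itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observations of bee abundance and a toy model with two free parameters.
observed = rng.gamma(2.0, 1.0, size=30)
covariate = rng.random(30)

def model(params, x):
    a, b = params
    return a * np.exp(b * x)            # stand-in for model-predicted abundance

def cost(params):
    return np.mean((model(params, covariate) - observed) ** 2)

# Simulated annealing: random perturbations, accept worse moves with Boltzmann probability.
params = np.array([1.0, 0.0])
best, best_cost = params.copy(), cost(params)
T = 1.0
for step in range(5000):
    candidate = params + rng.normal(scale=0.1, size=2)
    dc = cost(candidate) - cost(params)
    if dc < 0 or rng.random() < np.exp(-dc / T):
        params = candidate
        if cost(params) < best_cost:
            best, best_cost = params.copy(), cost(params)
    T *= 0.999                           # geometric cooling schedule

print("best parameters:", best, "cost:", best_cost)
```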

  3. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.; Jonsson, Sigurjon; Sudhaus, H.; Baumann, C.

    2012-01-01

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due

  4. Simulation of heavy precipitation episode over eastern Peninsular Malaysia using MM5: sensitivity to cumulus parameterization schemes

    Science.gov (United States)

    Salimun, Ester; Tangang, Fredolin; Juneng, Liew

    2010-06-01

    A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth Generation Mesoscale Model (MM5). The event was a westward-propagating tropical depression, a weather system that commonly occurs during boreal winter and that resulted from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in simulation performance can be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better at resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated in the BM simulation was of the convective type. The failure of the other schemes (AK, GR and KF) to simulate the event may be attributed to their trigger functions, closure assumptions, and precipitation schemes. On the other hand, the appropriateness of the BM scheme for this episode may not generalize to other episodes or convective environments.

  5. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments

    Directory of Open Access Journals (Sweden)

    H. Roux

    2011-09-01

    Full Text Available A spatially distributed hydrological model, dedicated to flood simulation, is developed on the basis of physical process representation (infiltration, overland flow, channel routing). Estimation of model parameters requires data concerning topography, soil properties, vegetation and land use. Four parameters are calibrated for the entire catchment using one flood event. Model sensitivity to individual parameters is assessed using Monte-Carlo simulations. Results of this sensitivity analysis, with a criterion based on the Nash efficiency coefficient and the errors in peak time and runoff, are used to calibrate the model. This procedure is tested on the Gardon d'Anduze catchment, located in the Mediterranean zone of southern France. A first validation is conducted using three flood events with different hydrometeorological characteristics. This sensitivity analysis, along with the validation tests, illustrates the predictive capability of the model and points out possible improvements to the model's structure and parameterization for flash flood forecasting, especially in ungauged basins. Concerning the model structure, results show that water transfer through the subsurface zone also contributes to the hydrograph response to an extreme event, especially during the recession period. Maps of soil saturation emphasize the impact of rainfall and soil property variability on these dynamics. Adding a subsurface flow component to the simulation also greatly impacts the spatial distribution of soil saturation and shows the importance of the drainage network. Measurements of such distributed variables would help discriminate between different possible model structures.
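
    A minimal sketch of the calibration criterion described above, Nash efficiency plus peak error evaluated over Monte-Carlo parameter samples, using a hypothetical one-parameter toy model rather than the distributed model of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def nash_sutcliffe(sim, obs):
    """Nash efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed hydrograph and a toy exponential-recession model with one parameter k.
t = np.arange(100, dtype=float)
obs = 50.0 * np.exp(-t / 20.0) + rng.normal(scale=1.0, size=t.size)

def toy_model(k):
    return 50.0 * np.exp(-t / k)

# Monte-Carlo sampling of the parameter, recording Nash efficiency and peak-flow error.
samples = rng.uniform(5.0, 60.0, size=2000)
scores = [(k, nash_sutcliffe(toy_model(k), obs), abs(toy_model(k).max() - obs.max()))
          for k in samples]
best = max(scores, key=lambda s: s[1])
print("best k: %.1f, Nash: %.3f, peak error: %.2f" % best)
```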

  6. Parameterization and measurements of helical magnetic fields

    International Nuclear Information System (INIS)

    Fischer, W.; Okamura, M.

    1997-01-01

    Magnetic fields with helical symmetry can be parameterized using multipole coefficients (a_n, b_n). We present a parameterization that gives the familiar multipole coefficients (a_n, b_n) for straight magnets when the helical wavelength tends to infinity. To measure helical fields, all methods used for straight magnets can be employed. We show how to convert the results of those measurements to obtain the desired helical multipole coefficients (a_n, b_n).
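
    For reference, and as an assumption about conventions rather than a quotation from the paper, the straight-magnet limit that such a parameterization must reproduce is the familiar transverse multipole expansion $B_y + iB_x = B_{\mathrm{ref}}\sum_{n=1}^{\infty}(b_n + ia_n)\left((x+iy)/R_{\mathrm{ref}}\right)^{n-1}$, so a helical-field parameterization with wavelength $\lambda$ should recover these $(a_n, b_n)$ as $\lambda \to \infty$.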

  7. Menangkal Serangan SQL Injection Dengan Parameterized Query

    Directory of Open Access Journals (Sweden)

    Yulianingsih Yulianingsih

    2016-06-01

    Full Text Available The greater the growth of information services, the higher the level of security vulnerability of an information source. This paper presents an experimental study of database attacks carried out by SQL injection. The attack is performed through the authentication page, since this page is the first door of access and should therefore have adequate defenses. Experiments were then carried out on the Parameterized Query method to obtain a solution to this problem. Keywords: information services, attack, experiment, SQL Injection, Parameterized Query.
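
    As an illustration of the defense studied above, a minimal sketch, assuming a toy SQLite login table (table, credentials, and injection payload are hypothetical), contrasting string-built SQL with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

username = "alice' OR '1'='1"          # classic injection payload entered on a login form
password = "anything"

# Vulnerable: user input concatenated into the SQL string.
unsafe = "SELECT * FROM users WHERE username = '%s' AND password = '%s'" % (username, password)
print("injected query matches:", len(conn.execute(unsafe).fetchall()))   # returns a row -> login bypassed

# Parameterized query: input is bound as data, never parsed as SQL.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print("parameterized query matches:", len(conn.execute(safe, (username, password)).fetchall()))  # 0 rows
```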

  8. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    DEFF Research Database (Denmark)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin

    2015-01-01

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six flux sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. The optimized maximum light use efficiency of shaded leaves (epsilon(msh)) was 2.63 to 4.59 times that of sunlit leaves...
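
    In schematic form, and as an assumption about the model family (MOD17-style environmental scalars) rather than a quotation from the paper, a two-leaf LUE model computes $\mathrm{GPP} = \big(\varepsilon_{\mathrm{msu}}\,\mathrm{APAR}_{\mathrm{sun}} + \varepsilon_{\mathrm{msh}}\,\mathrm{APAR}_{\mathrm{shade}}\big)\, f(T_{\min})\, f(\mathrm{VPD})$, so the optimized $\varepsilon_{\mathrm{msh}}$ and $\varepsilon_{\mathrm{msu}}$ scale the radiation absorbed by shaded and sunlit leaves separately, whereas a big-leaf model such as MOD17 applies a single maximum light use efficiency to the total APAR.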

  9. Parameterization of the 3-PG model for Pinus elliottii stands using alternative methods to estimate fertility rating, biomass partitioning and canopy closure

    Science.gov (United States)

    Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc

    2014-01-01

    The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...

  10. A new 2D climate model with chemistry and self consistent eddy-parameterization. The impact of airplane NOx on the chemistry of the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Gepraegs, R; Schmitz, G; Peters, D [Institut fuer Atmosphaerenphysik, Kuehlungsborn (Germany)

    1998-12-31

    A 2D version of the ECHAM T21 climate model has been developed. The new model includes an efficient spectral transport scheme with implicit diffusion. Furthermore, the photodissociation and chemistry of the NCAR 2D model have been incorporated. A self-consistent parametrization scheme is used for the eddy heat and momentum fluxes in the troposphere. It is based on the heat flux parametrization of Branscome and a mixing-length formulation for quasi-geostrophic vorticity. Above 150 hPa the mixing coefficient K_yy is prescribed. Some of the model results are discussed, concerning especially the impact of aircraft NOx emissions on the model chemistry. (author) 6 refs.

  12. Modellus: Learning Physics with Mathematical Modelling

    Science.gov (United States)

    Teodoro, Vitor

    Computers are now a major tool in research and development in almost all scientific and technological fields. Despite recent developments, this is far from true for learning environments in schools and most undergraduate studies. This thesis proposes a framework for designing curricula where computers, and computer modelling in particular, are a major tool for learning. The framework, based on research on learning science and mathematics and on computer user interfaces, assumes that: 1) learning is an active process of creating meaning from representations; 2) learning takes place in a community of practice where students learn both from their own effort and from external guidance; 3) learning is a process of becoming familiar with concepts, with links between concepts, and with representations; 4) direct manipulation user interfaces allow students to explore concrete-abstract objects such as those of physics and can be used by students with minimal computer knowledge. Physics is the science of constructing models and explanations about the physical world, and mathematical models are an important type of model that is difficult for many students. These difficulties can be rooted in the fact that most students do not have an environment where they can explore functions, differential equations and iterations as primary objects that model physical phenomena--as objects-to-think-with, reifying the formal objects of physics. The framework proposes that students should be introduced to modelling at a very early stage of learning physics and mathematics, two scientific areas that must be taught in a very closely related way, as they were developed from Galileo and Newton until the beginning of our century, before the rise of overspecialisation in science. At an early stage, functions are the main type of object used to model real phenomena, such as motions. At a later stage, rates of change and equations with rates of change play an important role. This type of equations

  13. Physics Beyond the Standard Model: Supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Nojiri, M.M.; /KEK, Tsukuba /Tsukuba, Graduate U. Adv. Studies /Tokyo U.; Plehn, T.; /Edinburgh U.; Polesello, G.; /INFN, Pavia; Alexander, John M.; /Edinburgh U.; Allanach, B.C.; /Cambridge U.; Barr, Alan J.; /Oxford U.; Benakli, K.; /Paris U., VI-VII; Boudjema, F.; /Annecy, LAPTH; Freitas, A.; /Zurich U.; Gwenlan, C.; /University Coll. London; Jager, S.; /CERN /LPSC, Grenoble

    2008-02-01

    This collection of studies on new physics at the LHC constitutes the report of the supersymmetry working group at the Workshop 'Physics at TeV Colliders', Les Houches, France, 2007. They cover the wide spectrum of phenomenology in the LHC era, from alternative models and signatures to the extraction of relevant observables, the study of the MSSM parameter space and finally to the interplay of LHC observations with additional data expected on a similar time scale. The special feature of this collection is that while not each of the studies is explicitly performed together by theoretical and experimental LHC physicists, all of them were inspired by and discussed in this particular environment.

  14. Parameterizing Coefficients of a POD-Based Dynamical System

    Science.gov (United States)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter
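
    A minimal sketch of the general idea, coefficients of a low-dimensional Galerkin system stored at a few Reynolds numbers and interpolated before time integration; the coefficient tables and cubic saturation term below are hypothetical, not the article's parameterization:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear Galerkin coefficient matrices identified at three Reynolds numbers.
Re_table = np.array([100.0, 150.0, 200.0])
L_table = np.array([[[-0.10, -1.0], [1.0, -0.10]],
                    [[-0.02, -1.1], [1.1, -0.02]],
                    [[ 0.05, -1.2], [1.2,  0.05]]])   # growth rate crosses zero: Hopf-like behaviour

def L_of_Re(Re):
    """Interpolate each coefficient of the reduced dynamical system in Reynolds number."""
    return np.array([[np.interp(Re, Re_table, L_table[:, i, j]) for j in range(2)]
                     for i in range(2)])

def rhs(t, a, Re):
    # Linear interpolated dynamics plus a cubic term that keeps modal amplitudes bounded.
    return L_of_Re(Re) @ a - a * (a @ a)

sol = solve_ivp(rhs, (0.0, 200.0), [1e-3, 0.0], args=(180.0,), max_step=0.1)
print("limit-cycle amplitude at Re=180 ~", np.abs(sol.y[0, -1000:]).max())
```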

  15. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translate to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed phase clouds. Mixed phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice

  16. Models in Physics, Models for Physics Learning, and Why the Distinction May Matter in the Case of Electric Circuits

    Science.gov (United States)

    Hart, Christina

    2008-01-01

    Models are important both in the development of physics itself and in teaching physics. Historically, the consensus models of physics have come to embody particular ontological assumptions and epistemological commitments. Educators have generally assumed that the consensus models of physics, which have stood the test of time, will also work well…

  17. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    Czech Academy of Sciences Publication Activity Database

    Zhou, Y.; Wu, X.; Weiming, J.; Chen, J.; Wang, S.; Wang, H.; Wenping, Y.; Black, T. A.; Jassal, R.; Ibrom, A.; Han, S.; Yan, J.; Margolis, H.; Roupsard, O.; Li, Y.; Zhao, F.; Kiely, G.; Starr, G.; Pavelka, Marian; Montagnani, L.; Wohlfahrt, G.; D'Odorico, P.; Cook, D.; Altaf Arain, M.; Bonal, D.; Beringer, J.; Blanken, P. D.; Loubet, B.; Leclerc, M. Y.; Matteucci, G.; Nagy, Z.; Olejnik, Janusz; U., K. T. P.; Varlagin, A.

    2016-01-01

    Vol. 36, No. 7 (2016), pp. 2743-2760, ISSN 2169-8953. Institutional support: RVO:67179843. Keywords: global parametrization * predicting model * FLUXNET. Subject RIV: EH - Ecology, Behaviour. Impact factor: 3.395, year: 2016

  18. Physics and Dynamics Coupling Across Scales in the Next Generation CESM. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacmeister, Julio T. [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States)

    2015-06-12

    This project examines physics/dynamics coupling, that is, exchange of meteorological profiles and tendencies between an atmospheric model’s dynamical core and its various physics parameterizations. Most model physics parameterizations seek to represent processes that occur on scales smaller than the smallest scale resolved by the dynamical core. As a consequence a key conceptual aspect of parameterizations is an assumption about the subgrid variability of quantities such as temperature, humidity or vertical wind. Most existing parameterizations of processes such as turbulence, convection, cloud, and gravity wave drag make relatively ad hoc assumptions about this variability and are forced to introduce empirical parameters, i.e., “tuning knobs” to obtain realistic simulations. These knobs make systematic dependences on model grid size difficult to quantify.

  19. Physical model for membrane protrusions during spreading

    International Nuclear Information System (INIS)

    Chamaraux, F; Ali, O; Fourcade, B; Keller, S; Bruckert, F

    2008-01-01

    During cell spreading onto a substrate, the kinetics of the contact area is an observable quantity. This paper is concerned with a physical approach to modeling this process in the case of ameboid motility, where the membrane detaches itself from the underlying cytoskeleton at the leading edge. The physical model we propose is based on previous reports which highlight that membrane tension regulates cell spreading. Using a phenomenological feedback loop to mimic stress-dependent biochemistry, we show that the actin polymerization rate can be coupled to the stress which builds up at the margin of the contact area between the cell and the substrate. In the limit of small variation of membrane tension, we show that the actin polymerization rate can be written in a closed form. Our analysis defines characteristic lengths which depend on elastic properties of the membrane–cytoskeleton complex, such as the membrane–cytoskeleton interaction, and on molecular parameters such as the rate of actin polymerization. We discuss our model in the case of axi-symmetric and non-axi-symmetric spreading and we compute the characteristic time scales as a function of fundamental elastic constants such as the strength of membrane–cytoskeleton adherence

  20. submitter Data-driven RBE parameterization for helium ion beams

    CERN Document Server

    Mairani, A; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T

    2016-01-01

    Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $(\alpha/\beta)_{\text{ph}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\text{RBE}_{\alpha}=\alpha_{\text{He}}/\alpha_{\text{ph}}$ and $R_{\beta}=\beta_{\text{He}}/\beta_{\text{ph}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\text{RBE}_{10}$) are compared with the experimental ones. Pearson's correlati...

  1. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    International Nuclear Information System (INIS)

    O'Kane, T J; Frederiksen, J S

    2008-01-01

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  2. Beyond the standard model with B and K physics

    International Nuclear Information System (INIS)

    Grossman, Y

    2003-01-01

    In the first part of the talk, the flavor physics input to models beyond the standard model is described. One specific example of such a new physics model is given: a model with bulk fermions in one non-factorizable extra dimension. In the second part of the talk, we discuss several observables that are sensitive to new physics. We explain what type of new physics can produce deviations from the standard model predictions in each of these observables.

  3. Gravitational wave tests of general relativity with the parameterized post-Einsteinian framework

    International Nuclear Information System (INIS)

    Cornish, Neil; Sampson, Laura; Yunes, Nicolas; Pretorius, Frans

    2011-01-01

    Gravitational wave astronomy has tremendous potential for studying extreme astrophysical phenomena and exploring fundamental physics. The waves produced by binary black hole mergers will provide a pristine environment in which to study strong-field dynamical gravity. Extracting detailed information about these systems requires accurate theoretical models of the gravitational wave signals. If gravity is not described by general relativity, analyses that are based on waveforms derived from Einstein's field equations could result in parameter biases and a loss of detection efficiency. A new class of "parameterized post-Einsteinian" waveforms has been proposed to cover this eventuality. Here, we apply the parameterized post-Einsteinian approach to simulated data from a network of advanced ground-based interferometers and from a future space-based interferometer. Bayesian inference and model selection are used to investigate parameter biases, and to determine the level at which departures from general relativity can be detected. We find that in some cases the parameter biases from assuming the wrong theory can be severe. We also find that gravitational wave observations will beat the existing bounds on deviations from general relativity derived from the orbital decay of binary pulsars by a large margin across a wide swath of parameter space.
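
    For reference, and only as a schematic form since conventions differ between papers, the parameterized post-Einsteinian family modifies a frequency-domain general-relativity waveform as $\tilde{h}(f) = \tilde{h}_{\mathrm{GR}}(f)\,(1 + \alpha u^{a})\,e^{i\beta u^{b}}$ with $u = (\pi\mathcal{M}f)^{1/3}$, where $(\alpha, a, \beta, b)$ parameterize amplitude and phase departures from general relativity and GR is recovered for $\alpha = \beta = 0$.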

  4. Lightning NOx Production in CMAQ: Part II - Parameterization Based on Relationship between Observed NLDN Lightning Strikes and Modeled Convective Precipitation Rates

    Science.gov (United States)

    Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past dec...

  5. Parameterization of neutron production double-differential cross section above several tens-MeV by the use of moving source model

    International Nuclear Information System (INIS)

    Kitsuki, Hirohiko; Shigyo, Nobuhiro; Ishibashi, Kenji

    2000-01-01

    The moving source model, based on a Maxwell-like energy distribution with Gaussian shape terms, is employed for analyzing neutron emission spectra from proton-induced spallation reactions. The parameterization of the double-differential cross section is made for experimental and calculated neutron data in the energy region from several tens of MeV to 3 GeV. (author)

  6. Parameterization of a numerical 2-D debris flow model with entrainment: a case study of the Faucon catchment, Southern French Alps

    Directory of Open Access Journals (Sweden)

    H. Y. Hussin

    2012-10-01

    Full Text Available The occurrence of debris flows has been recorded for more than a century in the European Alps, posing a risk to settlements and other human infrastructure that has led to deaths, building damage and traffic disruptions. One of the difficulties in the quantitative hazard assessment of debris flows is estimating the run-out behavior, which includes the run-out distance and the related hazard intensities such as the height and velocity of a debris flow. In addition, as observed in the French Alps, the process of entrainment of material during the run-out can increase the volume to 10–50 times that of the initially mobilized mass triggered at the source area. The entrainment process is evidently an important factor that can further determine the magnitude and intensity of debris flows. Research on numerical modeling of debris flow entrainment is still ongoing and involves some difficulties. This is partly due to our lack of knowledge of the actual process of the uptake and incorporation of material and to the effect of entrainment on the final behavior of a debris flow. Therefore, it is important to model the effects of this key erosional process on the formation of run-outs and related intensities. In this study we analyzed a debris flow with high entrainment rates that occurred in 2003 in the Faucon catchment in the Barcelonnette Basin (Southern French Alps). The historic event was back-analyzed using the Voellmy rheology and an entrainment model embedded in the RAMMS 2-D numerical modeling software. A sensitivity analysis of the rheological and entrainment parameters was carried out and the effects of modeling with entrainment on the debris flow run-out, height and velocity were assessed.
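
    For context, and as standard RAMMS usage rather than a quotation from this study, the Voellmy rheology writes the basal flow resistance as $S = \mu N + \rho g u^{2}/\xi$, where $N$ is the normal stress, $u$ the flow velocity, $\mu$ the dry-Coulomb friction coefficient and $\xi$ the turbulent friction coefficient; these two parameters, together with the entrainment rate, are the quantities varied in the sensitivity analysis described above.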

  7. Empirical Storm-Time Correction to the International Reference Ionosphere Model E-Region Electron and Ion Density Parameterizations Using Observations from TIMED/SABER

    Science.gov (United States)

    Mertens, Christoper J.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.; Bilitza, Dieter; Xu, Xiaojing

    2007-01-01

    The response of the ionospheric E-region to solar-geomagnetic storms can be characterized using observations of infrared 4.3 micrometer emission. In particular, we utilize nighttime TIMED/SABER measurements of broadband 4.3 micrometer limb emission and derive a new data product, the NO+(v) volume emission rate, which is our primary observation-based quantity for developing an empirical storm-time correction to the IRI E-region electron density. In this paper we describe our E-region proxy and outline our strategy for developing the empirical storm model. In our initial studies, we analyzed a six-day storm period during the Halloween 2003 event. The results of this analysis are promising and suggest that the ap index is a viable candidate to use as a magnetic driver for our model.

  8. Pre-Service Physics Teachers' Argumentation in a Model Rocketry Physics Experience

    Science.gov (United States)

    Gürel, Cem; Süzük, Erol

    2017-01-01

    This study investigates the quality of argumentation developed by a group of pre-service physics teachers' (PSPT) as an indicator of subject matter knowledge on model rocketry physics. The structure of arguments and scientific credibility model was used as a design framework in the study. The inquiry of model rocketry physics was employed in…

  9. Physical and Chemical Environmental Abstraction Model

    International Nuclear Information System (INIS)

    Nowak, E.

    2000-01-01

    As directed by a written development plan (CRWMS M and O 1999a), Task 1, an overall conceptualization of the physical and chemical environment (P/CE) in the emplacement drift is documented in this Analysis/Model Report (AMR). Included are the physical components of the engineered barrier system (EBS). The intended use of this descriptive conceptualization is to assist the Performance Assessment Department (PAD) in modeling the physical and chemical environment within a repository drift. It is also intended to assist PAD in providing a more integrated and complete in-drift geochemical model abstraction and to answer the key technical issues raised in the U.S. Nuclear Regulatory Commission (NRC) Issue Resolution Status Report (IRSR) for the Evolution of the Near-Field Environment (NFE) Revision 2 (NRC 1999). EBS-related features, events, and processes (FEPs) have been assembled and discussed in ''EBS FEPs/Degradation Modes Abstraction'' (CRWMS M and O 2000a). Reference AMRs listed in Section 6 address FEPs that have not been screened out. This conceptualization does not directly address those FEPs. Additional tasks described in the written development plan are recommended for future work in Section 7.3. To achieve the stated purpose, the scope of this document includes: (1) the role of in-drift physical and chemical environments in the Total System Performance Assessment (TSPA) (Section 6.1); (2) the configuration of engineered components (features) and critical locations in drifts (Sections 6.2.1 and 6.3, portions taken from EBS Radionuclide Transport Abstraction (CRWMS M and O 2000b)); (3) overview and critical locations of processes that can affect P/CE (Section 6.3); (4) couplings and relationships among features and processes in the drifts (Section 6.4); and (5) identities and uses of parameters transmitted to TSPA by some of the reference AMRs (Section 6.5). This AMR originally considered a design with backfill, and is now being updated (REV 00 ICN1) to address

  10. Relativistic nuclear physics with the spectator model

    International Nuclear Information System (INIS)

    Gross, F.

    1988-01-01

    The spectator model, a general approach to the relativistic treatment of nuclear physics problems in which spectators to nuclear interactions are put on their mass-shell, will be defined and described. The approach grows out of the relativistic treatment of two- and three-body systems in which one particle is off-shell, and recent numerical results for the NN interaction will be presented. Two meson-exchange models, one with only 4 mesons (π, σ, ρ, ω) but with a 25% admixture of γ5 coupling for the pion, and a second with 6 mesons (π, σ, ρ, ω, δ, and η) but a pure γ5γ^μ pion coupling, are shown to give very good quantitative fits to NN scattering phase shifts below 400 MeV, and also a good description of the p-⁴⁰Ca elastic scattering observables. 19 refs., 6 figs., 1 tab

  11. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  12. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory]; Pierrard, Viviane [BELGIUM]; Goldstein, Jerry [SWRI]; André, Nicolas [ESTEC/ESA]; Kotova, Galina A [SRI, RUSSIA]; Lemaire, Joseph F [BELGIUM]; Liemohn, Mike W [U OF MICHIGAN]; Matsui, H [UNIV OF NEW HAMPSHIRE]

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  13. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a space faring race, future spaceflight systems will require a new theory of propulsion; specifically, one that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004, so named because the chameleon field is hidden within known physics: it represents a scalar field within and about an object, even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, so that thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass. The interactions between these systems cause a differential coupling to the local gravity environment of the Earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust nearly equivalent to that of the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements. Embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  14. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier--Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to their standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An
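
    A minimal sketch of the blade-element force evaluation at a single actuator-line point, in the spirit of the ALM described above; the inflow, blade velocity, chord, twist, and lift/drag polar below are all hypothetical stand-ins rather than values or code from the authors' library:

```python
import numpy as np

# Blade-element force at one actuator-line point (all inputs hypothetical).
rho = 1000.0                    # water density, kg/m^3
U_in = np.array([1.0, 0.0])     # local inflow velocity in the blade section plane, m/s
U_blade = np.array([0.0, 2.0])  # velocity of the blade element itself, m/s
chord, dr = 0.14, 0.05          # chord length and spanwise segment length, m
twist = np.deg2rad(5.0)         # local twist/pitch angle

U_rel = U_in - U_blade                   # relative velocity seen by the section
phi = np.arctan2(U_rel[0], -U_rel[1])    # inflow angle w.r.t. a chord-line reference (sketch convention)
alpha = phi - twist                      # angle of attack

# Hypothetical thin-airfoil-like polar (a real ALM would interpolate tabulated Cl/Cd data).
Cl = 2.0 * np.pi * alpha
Cd = 0.02 + 0.1 * alpha**2

q = 0.5 * rho * np.dot(U_rel, U_rel) * chord * dr   # dynamic pressure times element area
lift, drag = q * Cl, q * Cd

# Resolve (lift, drag) from the flow-aligned frame into the section plane to get the body force.
e_drag = U_rel / np.linalg.norm(U_rel)
e_lift = np.array([-e_drag[1], e_drag[0]])
force = lift * e_lift + drag * e_drag
print("element force vector (N):", force)
```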

  15. Working group report: Flavor physics and model building

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. Vol. ... This is the report of the flavor physics and model building working group at ...; those in model building have been primarily devoted to neutrino physics.

  16. USING PARAMETERIZATION OF OBJECTS IN AUTODESK INVENTOR IN DESIGNING STRUCTURAL CONNECTORS

    Directory of Open Access Journals (Sweden)

    Gabriel Borowski

    2015-05-01

    Full Text Available The article presents the parameterization of objects used for designing elements such as structural connectors and for modifying their characteristics. The design process was carried out using Autodesk Inventor 2015. We show the latest software tools that were used for parameterizing and modeling selected types of structural connectors. We also show examples of the use of parameterization facilities in constructing details and making changes to geometry while preserving the shape of the element. The presented use of Inventor enabled fast and efficient creation of new objects based on the sketches created.

  17. Influence of Superparameterization and a Higher-Order Turbulence Closure on Rainfall Bias Over Amazonia in Community Atmosphere Model Version 5: How Parameterization Changes Rainfall

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Fu, Rong [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles CA USA; Shaikh, Muhammad J. [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Ghan, Steven [Pacific Northwest National Laboratory, Richland WA USA; Wang, Minghuai [Institute for Climate and Global Change Research and School of Atmospheric Sciences, Nanjing University, Nanjing China; Collaborative Innovation Center of Climate Change, Nanjing China; Leung, L. Ruby [Pacific Northwest National Laboratory, Richland WA USA; Dickinson, Robert E. [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Marengo, Jose [Centro Nacional de Monitoramento e Alertas aos Desastres Naturais, São Jose dos Campos Brazil

    2017-09-21

    We evaluate the Community Atmosphere Model Version 5 (CAM5) with a higher-order turbulence closure scheme, named Cloud Layers Unified By Binormals (CLUBB), and a Multiscale Modeling Framework (MMF) with two different microphysics configurations to investigate their influences on rainfall simulations over Southern Amazonia. The two different microphysics configurations in MMF are the one-moment cloud microphysics without aerosol treatment (SAM1MOM) and the two-moment cloud microphysics coupled with aerosol treatment (SAM2MOM). Results show that both MMF-SAM2MOM and CLUBB effectively reduce the low biases of rainfall, mainly during the wet season. CLUBB reduces low biases of humidity in the lower troposphere, with further reduced shallow clouds. The latter enables more surface solar flux, leading to stronger convection and more rainfall. MMF, especially MMF-SAM2MOM, destabilizes the atmosphere with more moisture and higher atmospheric temperatures in the atmospheric boundary layer, allowing the growth of more extreme convection and further generating more deep convection. MMF-SAM2MOM significantly increases rainfall in the afternoon, but it does not reduce the early bias of the diurnal rainfall peak; CLUBB, on the other hand, delays the afternoon peak time and produces more precipitation in the early morning, due to a more realistic gradual transition between shallow and deep convection. MMF appears to be able to realistically capture the observed increase of relative humidity prior to deep convection, especially with its two-moment configuration. In contrast, in CAM5 and CAM5 with CLUBB, the occurrence of deep convection appears to be a result of stronger heating rather than higher relative humidity.

  18. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Willems, Patrick

    2007-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the First Order Reliability Method (FORM). To apply this method, a long rainfall time series was divided into rain storms (rain events), and each rain...
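
    A minimal sketch of the first-order reliability idea behind FORM, assuming a linear limit state with independent normal capacity and load variables (all values hypothetical, unrelated to the urban-drainage application):

```python
from math import sqrt
from scipy.stats import norm

# Limit state g = R - S (capacity minus load); failure occurs when g < 0.
mu_R, sigma_R = 12.0, 2.0     # hypothetical capacity
mu_S, sigma_S = 8.0, 2.5      # hypothetical storm-induced load

# For independent normal R and S the limit state is itself normal, so FORM is exact here.
beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # Hasofer-Lind reliability index
p_fail = norm.cdf(-beta)

print("reliability index beta = %.2f" % beta)
print("failure probability   = %.3e  (return period ~ %.0f events)" % (p_fail, 1.0 / p_fail))
```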

  19. Architectural and growth traits differ in effects on performance of clonal plants: an analysis using a field-parameterized simulation model

    Czech Academy of Sciences Publication Activity Database

    Wildová, Radka; Gough, L.; Herben, Tomáš; Hershock, Ch.; Goldberg, D. E.

    2007-01-01

    Vol. 116, No. 5 (2007), pp. 836-852, ISSN 0030-1299. R&D Projects: GA ČR(CZ) GA206/02/0953; GA ČR(CZ) GA206/02/0578. Grants - others: NSF(US) DEB99-74296; NSF(US) DEB99-74284. Institutional research plan: CEZ:AV0Z60050516. Keywords: individual-based model * performance * plant architecture * competitive response * resource allocation. Subject RIV: EF - Botanics. Impact factor: 3.136, year: 2007

  20. Sensitivity of quantitative precipitation forecasts to boundary layer parameterization: a flash flood case study in the Western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. Zampieri

    2005-01-01

    Full Text Available The 'Montserrat-2000' severe flash flood event which occurred over Catalonia on 9 and 10 June 2000 is analyzed. Strong precipitation was generated by a mesoscale convective system associated with the development of a cyclone. The location of heavy precipitation depends on the position of the cyclone, which, in turn, is found to be very sensitive to various model characteristics and initial conditions. Numerical simulations of this case study using the hydrostatic BOLAM and the non-hydrostatic MOLOCH models are performed in order to test the effects of different formulations of the boundary layer parameterization: a modified version of the Louis (order 1) model and a custom version of the E-ℓ (order 1.5) model. Both of them require a diagnostic formulation of the mixing length, but the use of the turbulent kinetic energy equation in the E-ℓ model makes it possible to represent turbulence history and non-locality effects and to formulate a more physically based mixing length. The impact of the two schemes is different in the two models. The hydrostatic model, run at 1/5 degree resolution, is less sensitive, but the quantitative precipitation forecast is in any case unsatisfactory in terms of localization and amount. Conversely, the non-hydrostatic model, run at 1/50 degree resolution, is capable of realistically simulating the timing, position and amount of precipitation, with apparently superior results obtained with the E-ℓ parameterization.

  1. Fuzzy modelling of Atlantic salmon physical habitat

    Science.gov (United States)

    St-Hilaire, André; Mocq, Julien; Cunjak, Richard

    2015-04-01

    Fish habitat models typically attempt to quantify the amount of available river habitat for a given fish species under various flow and hydraulic conditions. To achieve this, information on the preferred range of values of key physical habitat variables (e.g. water level, velocity, substrate diameter) for the targeted fish species needs to be modelled. In this context, we developed several habitat suitability index sets for three Atlantic salmon life stages (young-of-the-year (YOY), parr, spawning adults) with the help of fuzzy logic modeling. Using the knowledge of twenty-seven experts from both sides of the Atlantic Ocean, we defined fuzzy sets of four variables (depth, substrate size, velocity and Habitat Suitability Index, or HSI) and associated fuzzy rules. When applied to the Romaine River (Canada), median curves of standardized Weighted Usable Area (WUA) were calculated and a confidence interval was obtained by bootstrap resampling. Despite the large range of WUA covered by the expert WUA curves, confidence intervals were relatively narrow: an average width of 0.095 (on a scale of 0 to 1) for spawning habitat, 0.155 for parr rearing habitat and 0.160 for YOY rearing habitat. When considering an environmental flow value corresponding to 90% of the maximum reached by the WUA curve, results seem acceptable for the Romaine River. Generally, the proposed fuzzy logic method seems suitable for modelling habitat availability for the three life stages, while also providing an estimate of uncertainty in salmon preferences.
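
    A minimal sketch of the kind of fuzzy-rule evaluation involved, assuming hypothetical triangular membership functions and a single min-operator rule rather than the expert-derived fuzzy sets of the study:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Hypothetical fuzzy sets for one life stage (units: m for depth, m/s for velocity).
depth, velocity = 0.45, 0.35
mu_depth = tri(depth, 0.1, 0.4, 0.9)
mu_velocity = tri(velocity, 0.05, 0.3, 0.8)

# Single illustrative rule: suitability is limited by the least suitable variable (min operator).
hsi = min(mu_depth, mu_velocity)
print("habitat suitability index:", round(hsi, 2))
```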

  2. Normalization of the parameterized Courant-Snyder matrix for symplectic factorization of a parameterized Taylor map

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1991-01-01

    The transverse motion of charged particles in a circular accelerator can be well represented by a one-turn high-order Taylor map. For particles without energy deviation, the one-turn Taylor map is a 4-dimensional polynomial map of four variables: the transverse canonical coordinates and their conjugate momenta. To include energy-deviation (off-momentum) effects, the map has to be parameterized with a smallness factor representing the off-momentum, so that the Taylor map becomes a 4-dimensional polynomial map of five variables. It is for this type of parameterized Taylor map that a method is presented for converting it into a parameterized Dragt-Finn factorization map. A parameterized nonlinear normal form and a parameterized kick factorization can thus be obtained with suitable modification of the existing technique.
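
    A minimal sketch of how such a parameterized (five-variable) Taylor map can be stored and evaluated is given below; the monomial storage format and the example coefficients are assumptions chosen for illustration and do not implement the Dragt-Finn factorization or normal-form algorithms themselves.

        import numpy as np

        # Each output coordinate of the one-turn map is stored as a list of monomials
        # (coefficient, exponents) in the five variables (x, px, y, py, delta).
        def apply_taylor_map(terms, z, delta):
            """Evaluate a truncated, parameterized Taylor map at point z with off-momentum delta."""
            v = np.append(np.asarray(z, dtype=float), delta)  # the five variables
            out = np.zeros(4)
            for i, monomials in terms.items():
                for coeff, expo in monomials:
                    out[i] += coeff * np.prod(v ** np.asarray(expo, dtype=float))
            return out

        # Illustrative example: a linear rotation in (x, px) plus one chromatic
        # (delta-dependent) term; coefficients are arbitrary and purely for demonstration.
        mu = 0.3
        example_map = {
            0: [(np.cos(mu), (1, 0, 0, 0, 0)), (np.sin(mu), (0, 1, 0, 0, 0)),
                (-0.05, (1, 0, 0, 0, 1))],
            1: [(-np.sin(mu), (1, 0, 0, 0, 0)), (np.cos(mu), (0, 1, 0, 0, 0))],
            2: [(1.0, (0, 0, 1, 0, 0))],
            3: [(1.0, (0, 0, 0, 1, 0))],
        }
        print(apply_taylor_map(example_map, z=[1e-3, 0.0, 0.0, 0.0], delta=1e-3))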

  3. Testing cloud microphysics parameterizations in NCAR CAM5 with ISDAC and M-PACE observations

    Science.gov (United States)

    Liu, Xiaohong; Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Shi, Xiangjun; Wang, Zhien; Lin, Wuyin; Ghan, Steven J.; Earle, Michael; Liu, Peter S. K.; Zelenyuk, Alla

    2011-01-01

    Arctic clouds simulated by the National Center for Atmospheric Research (NCAR) Community Atmospheric Model version 5 (CAM5) are evaluated with observations from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) and Mixed-Phase Arctic Cloud Experiment (M-PACE), which were conducted at its North Slope of Alaska site in April 2008 and October 2004, respectively. Model forecasts for the Arctic spring and fall seasons performed under the Cloud-Associated Parameterizations Testbed framework generally reproduce the spatial distributions of cloud fraction for single-layer boundary-layer mixed-phase stratocumulus and multilayer or deep frontal clouds. However, for low-level stratocumulus, the model significantly underestimates the observed cloud liquid water content in both seasons. As a result, CAM5 significantly underestimates the surface downward longwave radiative fluxes by 20-40 W m-2. Introducing a new ice nucleation parameterization slightly improves the model performance for low-level mixed-phase clouds by increasing cloud liquid water content through the reduction of the conversion rate from cloud liquid to ice by the Wegener-Bergeron-Findeisen process. The CAM5 single-column model testing shows that changing the instantaneous freezing temperature of rain to form snow from -5°C to -40°C causes a large increase in modeled cloud liquid water content through the slowing down of cloud liquid and rain-related processes (e.g., autoconversion of cloud liquid to rain). The underestimation of aerosol concentrations in CAM5 in the Arctic also plays an important role in the low bias of cloud liquid water in the single-layer mixed-phase clouds. In addition, numerical issues related to the coupling of model physics and time stepping in CAM5 are responsible for the model biases and will be explored in future studies.
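
    Schematically, the instantaneous freezing of rain to snow discussed above amounts to a simple temperature threshold in the microphysics tendencies. The toy sketch below, with assumed variable names, only illustrates the role of that threshold; it is not the actual CAM5 microphysics code.

        # Toy illustration of an instantaneous rain-to-snow freezing threshold (not CAM5 code).
        # Changing this from -5.0 to -40.0 (the sensitivity tested in the study) lets rain,
        # and upstream cloud liquid, persist at mixed-phase temperatures.
        T_FREEZE_C = -40.0

        def freeze_rain(temperature_c, q_rain, q_snow):
            """Instantaneously convert all rain mass to snow below the threshold temperature."""
            if temperature_c < T_FREEZE_C:
                q_snow += q_rain
                q_rain = 0.0
            return q_rain, q_snow

        # At -15 C a -5 C threshold would have removed the rain; with -40 C it is retained.
        print(freeze_rain(-15.0, q_rain=1e-4, q_snow=0.0))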

  4. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 1: Model components for sources parameterization

    Directory of Open Access Journals (Sweden)

    R. Azzaro

    2017-11-01

    Full Text Available The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring and, in particular, the rapid geodynamics that clearly demonstrate some seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency–magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude–size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool – FiSH (Pace et al., 2016) – that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be
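
    Two standard ingredients behind the fault-based recurrence estimates described above are the Gutenberg-Richter frequency-magnitude relation and, as the time-independent baseline that the authors propose to relax, the Poisson probability of exceedance. The generic forms are recalled below; they are not the specific relationships calibrated for Mt. Etna.

        % Gutenberg-Richter relation: annual rate of events with magnitude >= M
        \log_{10} N(\geq M) = a - b\,M
        % Poisson (time-independent) probability of at least one event in an exposure time t,
        % given a mean recurrence time T_r; time-dependent models replace this with a renewal process
        P(n \geq 1 \mid t) = 1 - e^{-t/T_r}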

  5. A Holoinformational Model of the Physical Observer

    Science.gov (United States)

    di Biase, Francisco

    2013-09-01

    The author proposes a holoinformational view of the observer based on the holonomic theory of brain/mind function and quantum brain dynamics developed by Karl Pribram, Sir John Eccles, R.L. Amoroso, Hameroff, Jibu and Yasue, and on the quantum-holographic and holomovement theory of David Bohm. This conceptual framework is integrated with the nonlocal information properties of the Quantum Field Theory of Umesawa, with the concepts of negentropy, order and organization developed by Shannon, Wiener, Szilard and Brillouin, and with the theories of self-organization and complexity of Prigogine, Atlan, Jantsch and Kauffman. Wheeler's "it from bit" concept of a participatory universe, and the developments of the physics of information made by Zureck and others with the concepts of statistical entropy and algorithmic entropy, related to the number of bits being processed in the mind of the observer, are also considered. This new synthesis gives a self-organizing quantum nonlocal informational basis for a new model of awareness in a participatory universe. In this synthesis, awareness is conceived as meaningful quantum nonlocal information interconnecting the brain and the cosmos through a holoinformational unified field (integrating the nonlocal holistic (quantum) and the local (Newtonian)). We propose that the cosmology of the physical observer is this unified nonlocal quantum-holographic cosmos manifesting itself through awareness, interconnecting the human mind-brain, in a participatory, holistic and indivisible way, to all levels of the self-organizing holographic anthropic multiverse.

  6. Parameterization of interatomic potential by genetic algorithms: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Partha S., E-mail: psghosh@barc.gov.in; Arya, A.; Dey, G. K. [Materials Science Division, Bhabha Atomic Research Centre, Mumbai-400085 (India); Ranawat, Y. S. [Department of Ceramic Engineering, Indian Institute of Technology