WorldWideScience

Sample records for model physics parameterizations

  1. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    Science.gov (United States)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another, with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called affects the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a large impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as large as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
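
    The noncommutativity described above can be illustrated with a minimal sketch (hypothetical toy tendencies, not E3SM code): under sequential splitting each parameterization acts on the state already updated by its predecessors, so reordering two state-dependent processes changes the result.

        def apply_sequential(state, processes, dt):
            # Each process sees the state left behind by the processes called before it.
            for tendency in processes:
                state = state + dt * tendency(state)
            return state

        convection = lambda T: -0.5 * T        # toy tendency: relaxes T toward zero
        radiation = lambda T: 1.0 - 0.1 * T    # toy tendency: weakly state-dependent heating

        T0, dt = 10.0, 1.0
        print(apply_sequential(T0, [convection, radiation], dt))  # 5.5 (convection first)
        print(apply_sequential(T0, [radiation, convection], dt))  # 5.0 (radiation first; order changes the answer)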

  2. Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems

    DEFF Research Database (Denmark)

    Blanke, Mogens; Knudsen, Morten Haack

    2006-01-01

    Grey box model identification preserves known physical structures in a model, but with limits to the possible excitation, all parameters are rarely identifiable, and different parameterizations give significantly different model quality. Convenient methods to show which parameterizations are the be...

  3. Assessment of Noah model physics and various runoff parameterizations over a Tibetan River

    Science.gov (United States)

    Zheng, Donghai; van der Velde, Rogier; Su, Zhongbo; Wen, Jun; Wang, Xin

    2017-04-01

    Noah model physics options validated for the source region of the Yellow River (SRYR) are applied to investigate their ability to reproduce runoff at the catchment scale. Three sets of augmentations are implemented affecting the descriptions of i) turbulent and soil heat transport (Noah-H), ii) soil water flow (Noah-W) and iii) frozen ground processes (Noah-F). Five numerical experiments are designed with the three augmented versions, a control run with default model physics and a run with all augmentations (Noah-A). Further, runoff parameterizations currently adopted by the i) Noah-MP model, ii) Community Land Model (CLM), and iii) CLM with variable infiltration capacity hydrology (CLM-VIC) are incorporated into the structure of Noah-A, and four additional numerical experiments are designed with the three aforementioned and the default Noah runoff parameterizations within Noah-A. Each experiment is forced with 0.1° atmospheric forcing data from the Institute of Tibetan Plateau Research, with vegetation and soil parameters adopted from the Weather Research and Forecasting dataset and the China Soil Database. In addition, the Community Earth System Model database provides the maximum surface saturated area parameter for the Noah-MP and CLM parameterizations. Each model run is initialized using a single-year recurrent spin-up to achieve equilibrium model states. The results highlight that i) a complete description of vertical heat and water exchanges is necessary to correctly simulate runoff at the catchment scale, and ii) the soil water storage-based parameterizations (Noah-A and CLM-VIC) outperform the groundwater table-based parameterizations (Noah-MP and CLM) in the seasonally frozen and high-altitude SRYR.

  4. Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: An ensemble approach applied to ozone modeling

    OpenAIRE

    Mallet , Vivien; Sportisse , Bruno

    2006-01-01

    This paper estimates the uncertainty in the outputs of a chemistry-transport model due to physical parameterizations and numerical approximations. An ensemble of 20 simulations is generated from a reference simulation in which one key parameterization (chemical mechanism, dry deposition parameterization, turbulent closure, etc.) or one numerical approximation (grid size, splitting method, etc.) is changed at a time. Intercomparisons of the simulations and comparisons w...

  5. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common model is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow (horizontal-axis) devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
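
    For context, the actuator disk model mentioned above represents the rotor as a uniform momentum sink; in its classical one-dimensional form (standard momentum theory, not specific to this study) the thrust and power coefficients depend only on the axial induction factor a:

        C_T = 4a(1 - a),    C_P = 4a(1 - a)^2

    with the Betz maximum C_P = 16/27 at a = 1/3. Nothing in this formulation carries information about the asymmetric, vertically coherent wake of a cross-flow turbine, which is why the study argues for a dedicated parameterization.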

  6. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick

    2016-01-26

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  7. Parameterization guidelines and considerations for hydrologic models

    Science.gov (United States)

    R.W. Malone; G. Yagow; C. Baffaut; M.W. Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green

    2015-01-01

     Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...

  8. Cloud parameterization for climate modeling - Status and prospects

    Science.gov (United States)

    Randall, David A.

    1989-01-01

    The current status of cloud parameterization research is reviewed. It is emphasized that the upper tropospheric stratiform clouds associated with deep convection are both physically important and poorly parameterized in current models. Emerging parameterizations are described in general terms, with emphasis on prognostic cloud water and fractional cloudiness, and how these relate to the problem just mentioned.

  9. A Physically Based Horizontal Subgrid-scale Turbulent Mixing Parameterization for the Convective Boundary Layer in Mesoscale Models

    Science.gov (United States)

    Zhou, Bowen; Xue, Ming; Zhu, Kefeng

    2017-04-01

    Compared to the representation of vertical turbulent mixing through various PBL schemes, the treatment of horizontal turbulent mixing in the boundary layer within mesoscale models, with O(10) km horizontal grid spacing, has received much less attention. In mesoscale models, subgrid-scale horizontal fluxes most often adopt the gradient-diffusion assumption. The horizontal mixing coefficients are usually set to a constant, computed through the 2D Smagorinsky formulation, or in some cases based on the 1.5-order turbulence kinetic energy (TKE) closure. In this work, horizontal turbulent mixing parameterizations using physically based characteristic velocity and length scales are proposed for the convective boundary layer, based on analysis of a well-resolved, wide-domain large-eddy simulation (LES). The proposed schemes involve different levels of sophistication. The first two schemes can be used together with first-order PBL schemes, while the third uses TKE to define its characteristic velocity scale and can be used together with TKE-based higher-order PBL schemes. The current horizontal mixing formulations are also assessed a priori through the filtered LES results to illustrate their limitations. The proposed parameterizations are tested a posteriori in idealized simulations of turbulent dispersion of a passive scalar. Comparisons show improved horizontal dispersion by the proposed schemes and further demonstrate the weaknesses of the current schemes.
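
    For reference, the existing options listed above typically take forms like the 2D Smagorinsky closure and the 1.5-order TKE closure (generic textbook forms, not the exact expressions of any particular model):

        K_h = (c_s \Delta)^2 |S|,   |S| = \sqrt{ (\partial u/\partial x - \partial v/\partial y)^2 + (\partial v/\partial x + \partial u/\partial y)^2 },
        K_h = c_k \, \ell \, \sqrt{e}

    where Delta is the horizontal grid spacing, |S| the horizontal deformation, e the subgrid TKE and l a length scale; the proposed schemes instead use characteristic velocity and length scales diagnosed from the convective boundary layer.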

  10. Modelling of primary aerosols in the chemical transport model MOCAGE: development and evaluation of aerosol physical parameterizations

    Directory of Open Access Journals (Sweden)

    B. Sič

    2015-02-01

    This paper deals with recent improvements to the global chemical transport model of Météo-France, MOCAGE (Modèle de Chimie Atmosphérique à Grande Echelle), that consist of updates to different aerosol parameterizations. MOCAGE only contains primary aerosol species: desert dust, sea salt, black carbon, organic carbon, and also volcanic ash in the case of large volcanic eruptions. We introduced important changes to the aerosol parameterization concerning emissions, wet deposition and sedimentation. For the emissions, size distribution and wind calculations are modified for desert dust aerosols, and a sea surface temperature-dependent source function is introduced for sea salt aerosols. Wet deposition is modified toward a more physically realistic representation by introducing re-evaporation of falling rain and snowfall scavenging, and by changing the in-cloud scavenging scheme along with calculations of precipitation cloud cover and rain properties. The sedimentation scheme update includes changes regarding the stability and viscosity calculations. Independent data from satellites (MODIS, SEVIRI), the ground (AERONET, EMEP), and a model inter-comparison project (AeroCom) are compared with MOCAGE simulations and show that the introduced changes brought a significant improvement in aerosol representation, properties and global distribution. Emitted quantities of desert dust and sea salt, as well as their lifetimes, moved closer towards values of AeroCom estimates and the multi-model average. When comparing the model simulations with MODIS aerosol optical depth (AOD) observations over the oceans, the updated model configuration shows a decrease in the modified normalized mean bias (MNMB; from 0.42 to 0.10) and a better correlation (from 0.06 to 0.32) in terms of the geographical distribution and the temporal variability. The updates corrected a strong positive MNMB in the sea salt representation at high latitudes (from 0.65 to 0.16), and a negative MNMB in
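
    The modified normalized mean bias quoted above is, in its usual definition (assumed here),

        MNMB = (2 / N) \sum_{i=1}^{N} (f_i - o_i) / (f_i + o_i)

    where f_i are modelled and o_i observed values; it is bounded between -2 and 2, so the reported drop from 0.42 to 0.10 over the oceans is a substantial reduction in relative bias.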

  11. Prototype MCS Parameterization for Global Climate Models

    Science.gov (United States)

    Moncrieff, M. W.

    2017-12-01

    Excellent progress has been made with observational, numerical and theoretical studies of MCS processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably deterred attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effect of vertical shear on MCS dynamics, implemented as second-baroclinic convective heating and convective momentum transport, is based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Atmosphere Model, version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center of action for weather and climate variability around the globe.

  12. Building a Structural Model: Parameterization and Structurality

    Directory of Open Access Journals (Sweden)

    Michel Mouchart

    2016-04-01

    A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to also be causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls arising when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.

  13. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
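
    A generic form of such a linearized longitudinal model (illustrative only; the specific structure used in this work is not given in the summary) is the state-space system

        \dot{x} = A x + B \delta,    x = [u, w, q, \theta]^T

    where u and w are axial and vertical velocity perturbations, q the pitch rate, theta the pitch angle and delta the control inputs, and the entries of A and B are stability and control derivatives estimated from the airship's geometric and aerodynamic data rather than identified from flight tests.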

  14. CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW.

    Energy Technology Data Exchange (ETDEWEB)

    LIU, Y.; DAUM, P.H.; CHAI, S.K.; LIU, F.

    2002-02-12

    This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments.
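
    A widely used expression of the dependence highlighted here (an assumed form of the effective radius-dispersion relationship, not necessarily the exact one used by the authors) is

        r_e = \beta \left( 3 L / (4 \pi \rho_w N) \right)^{1/3},    \beta = (1 + 2\varepsilon^2)^{2/3} / (1 + \varepsilon^2)^{1/3}

    where L is the liquid water content, N the droplet number concentration, rho_w the density of water and epsilon the relative dispersion of the droplet size distribution; setting beta = 1 (ignoring dispersion) biases the parameterized effective radius and hence the computed cloud radiative properties.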

  15. CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW

    International Nuclear Information System (INIS)

    LIU, Y.; DAUM, P.H.; CHAI, S.K.; LIU, F.

    2002-01-01

    This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments

  16. Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation

    Directory of Open Access Journals (Sweden)

    Steve Ghan

    2011-10-01

    One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically-based parameterizations of droplet nucleation are designed to quickly estimate the number nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and compare their performances under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations to cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
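
    For orientation, the controlling parameters listed above enter most such parameterizations through a Köhler-theory critical supersaturation; in the single-parameter hygroscopicity (kappa) form it reads approximately

        S_c \simeq \exp\!\left( \sqrt{ 4 A^3 / (27 \kappa D_d^3) } \right) - 1,    A = 4 \sigma_w M_w / (R T \rho_w)

    where D_d is the dry particle diameter, kappa the hygroscopicity, and sigma_w, M_w and rho_w the surface tension, molar mass and density of water; a particle activates when the ambient supersaturation, set by the cooling rate and the competing CCN population, exceeds its S_c.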

  17. Collaborative Project. 3D Radiative Transfer Parameterization Over Mountains/Snow for High-Resolution Climate Models. Fast physics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liou, Kuo-Nan [Univ. of California, Los Angeles, CA (United States)

    2016-02-09

    Under the support of the aforementioned DOE grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) we developed an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions, and (2) we devised a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during the seasonal transition and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the

  18. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation; i.e., the new scheme reduced the positive precipitation bias in the western Pacific region and the positive bias in outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modification of the parameterization of the entrainment rate; i.e., the proposed parameterization suppressed an excessive increase of entrainment, thus suppressing an excessive increase of low-level clouds.

  19. Sparse canopy parameterizations for meteorological models

    NARCIS (Netherlands)

    Hurk, van den B.J.J.M.

    1996-01-01

    Meteorological models for numerical weather prediction or climate simulation require a description of land surface exchange processes. The degree of complexity of these land-surface parameterization schemes - or SVAT's - that is necessary for accurate model predictions, is yet unclear. Also, the

  20. Optimal Physics Parameterization Scheme Combination of the Weather Research and Forecasting Model for Seasonal Precipitation Simulation over Ghana

    Directory of Open Access Journals (Sweden)

    Richard Yao Kuma Agyeman

    2017-01-01

    Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF) model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November) were performed for two different years: a dry year (2001) and a wet year (2008). A double nested approach was used, with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated) precipitation over the coastal (northern) zones of Ghana for both years but estimated precipitation reasonably well over the forest and transitional zones. On the whole, the combination of the WRF Single-Moment 6-Class Microphysics Scheme, the Grell-Devenyi Ensemble Cumulus Scheme, and the Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and is therefore recommended for Ghana.

  1. A test harness for accelerating physics parameterization advancements into operations

    Science.gov (United States)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a

  2. European upper mantle tomography: adaptively parameterized models

    Science.gov (United States)

    Schäfer, J.; Boschi, L.

    2009-04-01

    We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e. the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage of using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, the parameterization is everywhere optimal, both in terms of its computational cost and of model resolution. This is especially important for data sets with inhomogeneous data coverage, as is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35 s to 150 s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of higher resolution than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625°, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810,000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. We validate our implementation of adaptive
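
    The coefficient count follows directly from the degree: the number of real spherical-harmonic coefficients up to degree L is

        \sum_{\ell=0}^{L} (2\ell + 1) = (L + 1)^2,    (899 + 1)^2 = 810{,}000

    which is why defining the roughness operator to degree 899 before projecting it onto the adaptive pixel grid carries the computational cost noted above.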

  3. Develop and Test Coupled Physical Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM

    Science.gov (United States)

    2013-09-30

    disk the following wave input fields: Stokes drift current (SDC), wave-to-ocean momentum flux (WOMF), bottom orbital wave current (OWC). (b) Add SDC ...Earth System Modeling Framework) layer in HYCOM to import SDC, WOMF and OWC fields and export SSC (surface current) and SSH (surface height) fields

  4. Lightning parameterization in a storm electrification model

    Science.gov (United States)

    Helsdon, John H., Jr.; Farley, Richard D.; Wu, Gang

    1988-01-01

    The parameterization of an intracloud lightning discharge has been implemented in our Storm Electrification Model. The initiation, propagation direction, termination and charge redistribution of the discharge are approximated assuming overall charge neutrality. Various simulations involving differing amounts of charge transferred have been done. The effects of the lightning-produced ions on the hydrometeor charges, electric field components and electrical energy depend strongly on the charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge show favorable agreement.

  5. The dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component AM3 of the GFDL global coupled model CM3

    Science.gov (United States)

    Donner, L.J.; Wyman, B.L.; Hemler, R.S.; Horowitz, L.W.; Ming, Y.; Zhao, M.; Golaz, J.-C.; Ginoux, P.; Lin, S.-J.; Schwarzkopf, M.D.; Austin, J.; Alaka, G.; Cooke, W.F.; Delworth, T.L.; Freidenreich, S.M.; Gordon, C.T.; Griffies, S.M.; Held, I.M.; Hurlin, W.J.; Klein, S.A.; Knutson, T.R.; Langenhorst, A.R.; Lee, H.-C.; Lin, Y.; Magi, B.I.; Malyshev, S.L.; Milly, P.C.D.; Naik, V.; Nath, M.J.; Pincus, R.; Ploshay, J.J.; Ramaswamy, V.; Seman, C.J.; Shevliakova, E.; Sirutis, J.J.; Stern, W.F.; Stouffer, R.J.; Wilson, R.J.; Winton, M.; Wittenberg, A.T.; Zeng, F.

    2011-01-01

    The Geophysical Fluid Dynamics Laboratory (GFDL) has developed a coupled general circulation model (CM3) for the atmosphere, oceans, land, and sea ice. The goal of CM3 is to address emerging issues in climate change, including aerosol-cloud interactions, chemistry-climate interactions, and coupling between the troposphere and stratosphere. The model is also designed to serve as the physical system component of earth system models and models for decadal prediction in the near-term future, for example through improved simulations of tropical land precipitation relative to earlier-generation GFDL models. This paper describes the dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component (AM3) of this model. Relative to GFDL AM2, AM3 includes new treatments of deep and shallow cumulus convection, cloud droplet activation by aerosols, subgrid variability of stratiform vertical velocities for droplet activation, and atmospheric chemistry driven by emissions with advective, convective, and turbulent transport. AM3 employs a cubed-sphere implementation of a finite-volume dynamical core and is coupled to LM3, a new land model with ecosystem dynamics and hydrology. Its horizontal resolution is approximately 200 km, and its vertical resolution ranges from approximately 70 m near the earth's surface to 1 to 1.5 km near the tropopause and 3 to 4 km in much of the stratosphere. Most basic circulation features in AM3 are simulated as realistically as, or more realistically than, in AM2. In particular, dry biases have been reduced over South America. In coupled mode, the simulation of Arctic sea ice concentration has improved. AM3 aerosol optical depths, scattering properties, and surface clear-sky downward shortwave radiation are more realistic than in AM2. The simulation of marine stratocumulus decks remains problematic, as in AM2. The most intense 0.2% of precipitation rates occur less frequently in AM3 than observed. The last two decades of

  6. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions are stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics, to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  7. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

    We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved with either the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be quickly achieved by changing a few numbers without requiring knowledge of the input syntax for the transport code or the tedious and error prone work of recalculating many surface or solid body coefficients. (author)

  8. Impact of parameterization of physical processes on simulation of track and intensity of tropical cyclone Nargis (2008) with WRF-NMM model.

    Science.gov (United States)

    Pattanayak, Sujata; Mohanty, U C; Osuri, Krishna K

    2012-01-01

    The present study is carried out to investigate the performance of different cumulus convection, planetary boundary layer, land surface process, and microphysics parameterization schemes in the simulation of a very severe cyclonic storm (VSCS), Nargis (2008), which developed over the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of the weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation are verified against observations provided by the India Meteorological Department (IMD) and the Tropical Rainfall Measuring Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to reach a firmer conclusion on the performance of WRF-NMM. A few more diagnostic fields, such as vertical velocity, vorticity, and heat fluxes, are also evaluated. The results indicate that cumulus convection plays an important role in the movement of the cyclone, and the PBL has a crucial role in the intensification of the storm. The combination of the Simplified Arakawa-Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM gives better track and intensity forecasts with minimum vector displacement error.

  9. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through the body's structural profiles and locations. In the huge conventional workload of lightweight optimization, model modifications involve heavy manual work and lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with the traditional finite element method (FEM) without parameterization. The SFE parameterized structural model is built in accordance with the car structural FE model at the concept development stage, and it is validated against structural performance data. The validated SFE structural parameterized model can be used to rapidly and automatically generate FE models and evaluate different design variable groups in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes the integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  10. Parameterization effects in nonlinear models to describe growth curves

    Directory of Open Access Journals (Sweden)

    Tales Jesus Fernandes

    2015-10-01

    Various parameterizations of nonlinear models are common in the literature. In addition to complicating the understanding of these models, these parameterizations affect the nonlinearity measures and subsequently the inferences about the parameters. Bates and Watts (1980) quantified model nonlinearity using the geometric concept of curvature. Here we aimed to evaluate the three most common parameterizations of the logistic and Gompertz nonlinear models with a focus on their nonlinearity and how this might affect inferences, and to establish relations between the parameters under the various expressions of the models. All parameterizations were fitted to growth data from pequi fruit. The intrinsic and parametric curvatures described by Bates and Watts were calculated for each parameter. The choice of parameterization affects the nonlinearity measures, thus influencing the reliability of inferences about the estimated parameters. The most widely used parameterizations presented the greatest departures from linearity, showing the importance of analyzing these measures in any growth curve study. We propose that the parameterization in which the estimate of B is the abscissa of the inflection point should be used, because of its lower deviations from linearity and the direct biological interpretation of all its parameters.
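
    Two equivalent logistic parameterizations of the kind compared in this study (standard textbook forms, assumed here for illustration) make the effect concrete:

        y(t) = A / (1 + e^{-k (t - B)})    and    y(t) = A / (1 + C e^{-k t}),    C = e^{k B}

    In the first form B is directly the abscissa of the inflection point (and A the asymptote), which is the parameterization recommended above; in the second, the same quantity must be recovered as ln(C)/k, and the curvature measures of Bates and Watts generally differ between the two.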

  11. Parameterization of Cloud Droplet Formation in Global Climate Models

    Science.gov (United States)

    Nenes, A.; Seinfeld, J.H.

    2003-01-01

    An aerosol activation parameterization has been developed based on a generalized representation of aerosol size and composition within the framework of an ascending adiabatic parcel; this allows for parameterizing the activation of chemically complex aerosol with an arbitrary size distribution and mixing state. The new parameterization introduces the concept of "population splitting", in which the cloud condensation nuclei (CCN) that form droplets are treated as two separate populations: those that have a size close to their critical diameter and those that do not. Explicit consideration of kinetic limitations of droplet growth is introduced. Our treatment of the activation process unravels much of its complexity. As a result, a substantial number of droplet formation conditions can be treated completely free of empirical information or correlations; there are, however, some conditions of droplet activation for which an empirically derived correlation is utilized. Predictions of the parameterization are compared against extensive cloud parcel model simulations for a variety of aerosol activation conditions that cover a wide range of chemical variability and CCN concentrations. The parameterization tracks the parcel model simulations closely and robustly. The parameterization presented here is intended to allow for a comprehensive assessment of the aerosol indirect effect in general circulation models.

  12. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    International Nuclear Information System (INIS)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-01-01

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional

  13. Cloud-radiation interactions and their parameterization in climate models

    Science.gov (United States)

    1994-01-01

    This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, which met on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.

  14. Developing a parameterization approach of soil erodibility for the Rangeland Hydrology and Erosion Model (RHEM)

    Science.gov (United States)

    Soil erodibility is a key factor for estimating soil erosion using physically based models. In this study, a new parameterization approach for estimating erodibility was developed for the Rangeland Hydrology and Erosion Model (RHEM). The approach uses empirical equations that were developed by apply...

  15. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  16. Accuracy of parameterized proton range models; A comparison

    Science.gov (United States)

    Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

    2018-03-01

    An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
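
    A minimal sketch of the two kinds of approach being compared (approximate Bragg-Kleeman constants for water and a stand-in range-energy table, not the study's data):

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Bragg-Kleeman fit R = alpha * E**p (range in cm of water, E in MeV);
        # alpha and p are commonly quoted approximate values, not the paper's numbers.
        alpha, p = 0.0022, 1.77
        def bragg_kleeman(E):
            return alpha * E**p

        # Hypothetical range-energy table; in practice this would come from tabulated data.
        E_tab = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0])
        R_tab = bragg_kleeman(E_tab)          # stand-in for tabulated ranges
        spline = CubicSpline(E_tab, R_tab)    # spline interpolation of the table

        E = 180.0
        print(bragg_kleeman(E), float(spline(E)))  # analytical vs interpolated range in cm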

  17. An efficient physically based parameterization to derive surface solar irradiance based on satellite atmospheric products

    Science.gov (United States)

    Qin, Jun; Tang, Wenjun; Yang, Kun; Lu, Ning; Niu, Xiaolei; Liang, Shunlin

    2015-05-01

    Surface solar irradiance (SSI) is required in a wide range of scientific research and practical applications. Many parameterization schemes have been developed to estimate it from routinely measured meteorological variables, since SSI is directly measured at only a very limited number of stations. Even so, meteorological stations are still sparse, especially in remote areas. Remote sensing can be used to map spatiotemporally continuous SSI. Considering the huge amount of satellite data, coarse-resolution SSI has been estimated to reduce the computational burden when the estimation is based on a complex radiative transfer model. On the other hand, many empirical relationships are used to enhance retrieval efficiency, but their accuracy cannot be guaranteed outside the regions where they are locally calibrated. In this study, an efficient physically based parameterization is proposed to balance computational efficiency and retrieval accuracy for SSI estimation. In this parameterization, the transmittances for gases, aerosols, and clouds are all handled in full-band form and the multiple reflections between the atmosphere and surface are explicitly taken into account. The newly proposed parameterization is applied to estimate SSI with both Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric and land products as inputs. These retrievals are validated against in situ measurements at the Surface Radiation Budget Network and in the North China Plain on an instantaneous basis; moreover, they are validated and compared with Global Energy and Water Exchanges-Surface Radiation Budget and International Satellite Cloud Climatology Project-flux data SSI estimates at radiation stations of the China Meteorological Administration on a daily mean basis. The estimation results indicate that the newly proposed SSI estimation scheme can effectively retrieve SSI based on MODIS products, with mean root-mean-square errors of about 100 W m-2 and 35 W m-2 on an instantaneous and daily
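
    The multiple reflections mentioned above are commonly handled with a geometric-series factor (an assumed generic form, not necessarily the exact expression used in this scheme):

        F_sfc = F_0 T_atm / (1 - \rho_s \rho_a)

    where F_0 T_atm is the irradiance transmitted on the first pass through gases, aerosols and clouds, rho_s is the surface albedo and rho_a the atmospheric albedo for upwelling radiation; summing the bounces as a geometric series keeps the scheme fast while remaining physically based.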

  18. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
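
    To make the forward side of this idea concrete, the sketch below runs a spatial Markov walk in which each particle carries one of two velocity classes, successive classes are drawn from a transition matrix, and arrival times are accumulated into a breakthrough curve at a downstream plane. The velocity classes, transition probabilities and domain size are invented for illustration; this is not the authors' inversion procedure.

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed two velocity classes (slow, fast) and their spatial transition matrix.
        velocities = np.array([0.1, 1.0])            # m/day
        transition = np.array([[0.8, 0.2],            # P(next class | current class)
                               [0.2, 0.8]])
        dx, n_steps, n_particles = 1.0, 50, 10000     # 1 m steps over a 50 m domain

        arrival_times = np.zeros(n_particles)
        for p in range(n_particles):
            state = rng.integers(2)                   # random initial velocity class
            t = 0.0
            for _ in range(n_steps):
                t += dx / velocities[state]           # travel time for this step
                state = rng.choice(2, p=transition[state])
            arrival_times[p] = t

        # Breakthrough curve at x = 50 m as a normalized histogram of arrival times.
        btc, edges = np.histogram(arrival_times, bins=100, density=True)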

  19. An intracloud lightning parameterization scheme for a storm electrification model

    Science.gov (United States)

    Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.

    1992-01-01

    The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.

  20. Air quality modeling: evaluation of chemical and meteorological parameterizations

    International Nuclear Information System (INIS)

    Kim, Youngseob

    2011-01-01

    The influence of chemical mechanisms and meteorological parameterizations on pollutant concentrations calculated with an air quality model is studied. The influence of the differences between two gas-phase chemical mechanisms on the formation of ozone and aerosols in Europe is low on average. For ozone, the large local differences are mainly due to the uncertainty associated with the kinetics of nitrogen monoxide (NO) oxidation reactions on the one hand and the representation of different pathways for the oxidation of aromatic compounds on the other hand. The aerosol concentrations are mainly influenced by the selection of all major precursors of secondary aerosols and the explicit treatment of chemical regimes corresponding to the nitrogen oxide (NOx) levels. The influence of the meteorological parameterizations on the concentrations of aerosols and their vertical distribution is evaluated over the Paris region in France by comparison to lidar data. The influence of the parameterization of the dynamics in the atmospheric boundary layer is important; however, it is the use of an urban canopy model that significantly improves the modeling of the pollutant vertical distribution.

  1. A Formal Approach to Verify Parameterized Protocols in Mobile Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Long Zhang

    2017-01-01

    Mobile cyber-physical systems (CPSs) are very hard to verify because of asynchronous communication and the arbitrary number of components. Verification via model checking typically becomes impracticable due to the state space explosion caused by the system parameters and concurrency. In this paper, we propose a formal approach to verify the safety properties of parameterized protocols in mobile CPS. By using counter abstraction, the protocol is modeled as a Petri net. Then, a novel algorithm, which uses IC3 (the state-of-the-art model checking algorithm) as the back-end engine, is presented to verify the Petri net model. The experimental results show that our new approach greatly scales verification capabilities and compares favorably against several recently published approaches. In addition to solving the instances fast, our method is significant for its lower memory consumption.

  2. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    The usefulness of process-based models in ecological studies is not in question; the limitations lie in how well a model's algorithm is developed and how it is applied in research. Simulating tree-ring growth from climate provides valuable information on the growth response to different environmental conditions and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated day length and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. Such models, however, are mostly used one-sidedly, to better understand tree growth processes, in contrast to statistical methods of analysis (e.g. generalized linear models, mixed models, structural equations), which can be used for reconstruction and forecasting. Usually the process-based models are used either to test new hypotheses or to assess physiological tree growth data quantitatively in order to reveal growth mechanisms, while statistical methods are used for data mining and as study tools in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the limiting climatic growth factors. Precise parameterization with the VS-Oscilloscope provides valuable information about tree growth processes and the conditions under which they occur (e.g. day of growing-season onset, season length, minimum/maximum temperature for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).

  3. Integrated cumulus ensemble and turbulence (ICET): An integrated parameterization system for general circulation models (GCMs)

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J.L.; Frank, W.M.; Young, G.S. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.

  4. Parameterizing sequence alignment with an explicit evolutionary model.

    Science.gov (United States)

    Rivas, Elena; Eddy, Sean R

    2015-12-10

    Inference of sequence homology is inherently an evolutionary question, dependent upon evolutionary divergence. However, the insertion and deletion penalties in the most widely used methods for inferring homology by sequence alignment, including BLAST and profile hidden Markov models (profile HMMs), are not based on any explicitly time-dependent evolutionary model. Using one fixed score system (BLOSUM62 with some gap open/extend costs, for example) corresponds to making an unrealistic assumption that all sequence relationships have diverged by the same time. Adoption of explicit time-dependent evolutionary models for scoring insertions and deletions in sequence alignments has been hindered by algorithmic complexity and technical difficulty. We identify and implement several probabilistic evolutionary models compatible with the affine-cost insertion/deletion model used in standard pairwise sequence alignment. Assuming an affine gap cost imposes important restrictions on the realism of the evolutionary models compatible with it, as single insertion events with geometrically distributed lengths do not result in geometrically distributed insert lengths at finite times. Nevertheless, we identify one evolutionary model compatible with symmetric pair HMMs that are the basis for Smith-Waterman pairwise alignment, and two evolutionary models compatible with standard profile-based alignment. We test different aspects of the performance of these "optimized branch length" models, including alignment accuracy and homology coverage (discrimination of residues in a homologous region from nonhomologous flanking residues). We test on benchmarks of both global homologies (full length sequence homologs) and local homologies (homologous subsequences embedded in nonhomologous sequence). Contrary to our expectations, we find that for global homologies a single long branch parameterization suffices both for distant and close homologous relationships. In contrast, we do see an advantage in
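
    As a worked illustration of the correspondence the abstract relies on (standard pair-HMM theory rather than text from the paper), an affine gap penalty maps onto a geometric gap-length distribution:

        S(L) = -d - (L - 1)\,e ,
        \qquad
        P(L) = \delta\,(1 - \epsilon)\,\epsilon^{\,L-1},
        \qquad
        d \propto -\log\delta, \quad e \propto -\log\epsilon ,

    so a single fixed pair of gap-open and gap-extend costs (d, e) corresponds to one fixed divergence time applied to every sequence pair, which is exactly the assumption the explicitly time-dependent models relax.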

  5. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    Science.gov (United States)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISCTRN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.

  6. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    Energy Technology Data Exchange (ETDEWEB)

    Buettner, Florian; Gulliford, Sarah L; Webb, Steve; Partridge, Mike, E-mail: florian.buttner@icr.ac.uk [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton, Surrey SM2 5PT (United Kingdom)

    2011-04-07

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISCTRN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.

  7. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  8. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  9. The use of the k-ε turbulence model within the Rossby Centre regional ocean climate model: parameterization development and results

    Energy Technology Data Exchange (ETDEWEB)

    Markus Meier, H.E. [Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden). Rossby Centre

    2000-09-01

    As mixing plays a dominant role in the physics of an estuary like the Baltic Sea (seasonal heat storage, mixing in channels, deep water mixing), different mixing parameterizations for use in 3D Baltic Sea models are discussed and compared. For this purpose two different OGCMs of the Baltic Sea are utilized. Within the Swedish regional climate modeling program, SWECLIM, a 3D coupled ice-ocean model for the Baltic Sea has been coupled with an improved version of the two-equation k-ε turbulence model with a corrected dissipation term, flux boundary conditions to include the effect of a turbulence-enhanced layer due to breaking surface gravity waves, and a parameterization for breaking internal waves. Results of multi-year simulations are compared with observations. The seasonal thermocline is simulated satisfactorily and erosion of the halocline is avoided. Unsolved problems are discussed. To replace the controversial equation for dissipation, the performance of a hierarchy of k-models has been tested and compared with the k-ε model. In addition, it is shown that the results of the mixing parameterization depend very much on the choice of the ocean model. Finally, the impact of two mixing parameterizations on Baltic Sea climate is investigated. In this case the sensitivity of mean SST, vertical temperature and salinity profiles, ice season and seasonal cycle of heat fluxes is quite large.
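
    For reference, the generic one-dimensional form of the two-equation k-ε closure discussed above is sketched below in standard textbook notation; the Rossby Centre implementation adds the corrected dissipation term and the wave-breaking surface boundary conditions described in the abstract.

        \frac{\partial k}{\partial t}
          = \frac{\partial}{\partial z}\left(\frac{\nu_t}{\sigma_k}\frac{\partial k}{\partial z}\right) + P + B - \varepsilon ,
        \qquad
        \frac{\partial \varepsilon}{\partial t}
          = \frac{\partial}{\partial z}\left(\frac{\nu_t}{\sigma_\varepsilon}\frac{\partial \varepsilon}{\partial z}\right)
            + \frac{\varepsilon}{k}\left(c_{1}P + c_{3}B - c_{2}\varepsilon\right) ,
        \qquad
        \nu_t = c_\mu \frac{k^2}{\varepsilon} ,

    where P and B are shear and buoyancy production and the c and sigma coefficients are empirical constants.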

  10. Parameterization of Fire Injection Height in Large Scale Transport Model

    Science.gov (United States)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al. (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. In our approach the Freitas model is modified; in particular we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. The FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact) and (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e. to minimize the effect of the time period needed for the plume to

  11. Accuracy of cuticular resistance parameterizations in ammonia dry deposition models

    Science.gov (United States)

    Schrader, Frederik; Brümmer, Christian; Richter, Undine; Fléchard, Chris; Wichink Kruit, Roy; Erisman, Jan Willem

    2016-04-01

    Accurate representation of total reactive nitrogen (Nr) exchange between ecosystems and the atmosphere is a crucial part of modern air quality models. However, bi-directional exchange of ammonia (NH3), the dominant Nr species in agricultural landscapes, still poses a major source of uncertainty in these models, where especially the treatment of non-stomatal pathways (e.g. exchange with wet leaf surfaces or the ground layer) can be challenging. While complex dynamic leaf surface chemistry models have been shown to successfully reproduce measured ammonia fluxes on the field scale, computational constraints and the lack of necessary input data have so far limited their application in larger scale simulations. A variety of different approaches to modelling dry deposition to leaf surfaces with simplified steady-state parameterizations have therefore arisen in the recent literature. We present a performance assessment of selected cuticular resistance parameterizations by comparing them with ammonia deposition measurements by means of eddy covariance (EC) and the aerodynamic gradient method (AGM) at a number of semi-natural and grassland sites in Europe. First results indicate that using a state-of-the-art uni-directional approach tends to overestimate and using a bi-directional cuticular compensation point approach tends to underestimate cuticular resistance in some cases, consequently leading to systematic errors in the resulting flux estimates. Using the uni-directional model, situations where low ratios of total atmospheric acids to NH3 concentration occur lead to fairly high minimum cuticular resistances, limiting predicted downward fluxes in conditions usually favouring deposition. On the other hand, the bi-directional model used here features a seasonal cycle of external leaf surface emission potentials that can lead to comparably low effective resistance estimates under warm and wet conditions, when in practice an expected increase in the compensation point due to
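
    A minimal sketch of the resistance framework these parameterizations plug into (generic canopy compensation-point form; the coefficient values, resistances and emission potential below are illustrative assumptions, not any specific scheme evaluated in the study):

        import math

        def nh3_flux(chi_air, temp_k, gamma_w, r_a, r_b, r_w):
            """Bi-directional NH3 exchange with the leaf cuticle (flux < 0 = deposition).

            chi_air : NH3 air concentration (ug m-3)
            gamma_w : cuticular emission potential [NH4+]/[H+] (dimensionless)
            r_a, r_b, r_w : aerodynamic, quasi-laminar and cuticular resistances (s m-1)
            """
            # Commonly used thermodynamic form for the surface compensation point (ug m-3).
            chi_w = (161500.0 / temp_k) * math.exp(-10378.0 / temp_k) * gamma_w
            return -(chi_air - chi_w) / (r_a + r_b + r_w)

        # Uni-directional limit: the cuticle acts as a perfect sink (gamma_w = 0).
        print(nh3_flux(chi_air=5.0, temp_k=288.0, gamma_w=0.0, r_a=30.0, r_b=20.0, r_w=100.0))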

  12. A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Kain, J.S.; Deng, A. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.

  13. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitation of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performances. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias of the estimation of precipitation, runoff, evapotranspiration and soil moisture at the different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of well representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of the similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in the ungauged basin.
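
    For context, the watershed-scale storage capacity curve that this subgrid scheme corrects is the classic Xinanjiang distribution, reproduced below in its standard form; the exponent b (and the beta-distribution link to the topographic index used in the paper) is basin-specific.

        \frac{f}{F} = 1 - \left(1 - \frac{W'}{W'_{mm}}\right)^{b} ,

    where f/F is the fraction of the grid or basin area whose point storage capacity does not exceed W', W'_{mm} is the maximum point storage capacity, and b controls the spatial variability of storage capacity.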

  14. Parameterization and validation of an ungulate-pasture model.

    Science.gov (United States)

    Pekkarinen, Antti-Juhani; Kumpula, Jouko; Tahvonen, Olli

    2017-10-01

    Ungulate grazing and trampling strongly affect pastures and ecosystems throughout the world. Ecological population models are used for studying these systems and determining the guidelines for sustainable and economically viable management. However, the effect of trampling and other resource wastage is either not taken into account or not quantified with data in earlier models. Also, the ability of models to describe the herbivore impact on pastures is usually not validated. We used a detailed model and data to study the level of winter- and summertime lichen wastage by reindeer and the effects of wastage on population sizes and management. We also validated the model with respect to its ability to predict changes in lichen biomass and compared the actual management in herding districts with model results. The modeling efficiency value (0.75) and visual comparison between the model predictions and data showed that the model was able to describe the changes in lichen pastures caused by reindeer grazing and trampling. At the current lichen biomass levels in northernmost Finland, the lichen wastage varied from 0 to 1 times the lichen intake during winter and from 6 to 10 times the intake during summer. With a higher value for wastage, reindeer numbers and net revenues were lower in the economically optimal solutions. Higher wastage also favored the use of supplementary feeding in the optimal steady state. Actual reindeer numbers in the districts were higher than in the optimal steady-state solutions for the model in 18 herding districts out of 20. Synthesis and applications. We show that a complex model can be used for analyzing ungulate-pasture dynamics and sustainable management if the model is parameterized and validated for the system. Wastage levels caused by trampling and other causes should be quantified with data as they strongly affect the results and management recommendations. Summertime lichen wastage caused by reindeer is higher than expected, which

  15. Frozen soil parameterization in a distributed biosphere hydrological model

    Directory of Open Access Journals (Sweden)

    L. Wang

    2010-03-01

    In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against the in-situ observations from the WATER (Watershed Allied Telemetry Experimental Research). First, by using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, by using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by the calibration of the discharges at the basin outlet in July and August that covers the annual largest flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gave much better performance than the WEB-DHM without the frozen scheme in the simulations of the soil moisture profile at this cold-region catchment and of the discharges at the basin outlet in the yearlong simulation.

  16. Implementation of a Generalized Actuator Line Model for Wind Turbine Parameterization in the Weather Research and Forecasting Model

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, Julie [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Marjanovic, Nikola [University of California, Berkeley; Lawrence Livermore National Laboratory; Mirocha, Jeffrey D. [Lawrence Livermore National Laboratory; Kosovic, Branko [University Corporation for Atmospheric Research; Chow, Fotini Katopodes [University of California, Berkeley

    2017-12-22

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.

  17. Parameterizing the Morse potential for coarse-grained modeling of blood plasma

    International Nuclear Information System (INIS)

    Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan

    2014-01-01

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such coarse grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum based models fail to handle adequately
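
    For reference, the (unmodified) Morse potential being adapted has the standard form below; the well depth, width and equilibrium distance are generic symbols, not the fitted coarse-grained values of the paper.

        U(r) = D_e\left[e^{-2a(r - r_0)} - 2\,e^{-a(r - r_0)}\right]
             = D_e\left[\left(1 - e^{-a(r - r_0)}\right)^2 - 1\right],

    with well depth D_e, equilibrium separation r_0 and width parameter a; the coarse-grained model rescales these parameters (together with effective masses) so that the particle fluid reproduces plasma density, viscosity and compressibility.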

  18. Parameterization of a ruminant model of phosphorus digestion and metabolism.

    Science.gov (United States)

    Feng, X; Knowlton, K F; Hanigan, M D

    2015-10-01

    The objective of the current work was to parameterize the digestive elements of the model of Hill et al. (2008) using data collected from animals that were ruminally, duodenally, and ileally cannulated, thereby providing a better understanding of the digestion and metabolism of P fractions in growing and lactating cattle. The model of Hill et al. (2008) was fitted and evaluated for adequacy using the data from 6 animal studies. We hypothesized that sufficient data would be available to estimate P digestion and metabolism parameters and that these parameters would be sufficient to derive P bioavailabilities of a range of feed ingredients. Inputs to the model were dry matter intake; total feed P concentration (fPtFd); phytate (Pp), organic (Po), and inorganic (Pi) P as fractions of total P (fPpPt, fPoPt, fPiPt); microbial growth; amount of Pi and Pp infused into the omasum or ileum; milk yield; and BW. The available data were sufficient to derive all model parameters of interest. The final model predicted that given 75 g/d of total P input, the total-tract digestibility of P was 40.8%, Pp digestibility in the rumen was 92.4%, and in the total-tract was 94.7%. Blood P recycling to the rumen was a major source of Pi flow into the small intestine, and the primary route of excretion. A large proportion of Pi flowing to the small intestine was absorbed; however, additional Pi was absorbed from the large intestine (3.15%). Absorption of Pi from the small intestine was regulated, and given the large flux of salivary P recycling, the effective fractional small intestine absorption of available P derived from the diet was 41.6% at requirements. Milk synthesis used 16% of total absorbed P, and less than 1% was excreted in urine. The resulting model could be used to derive P bioavailabilities of commonly used feedstuffs in cattle production.

  19. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    Science.gov (United States)

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  20. Tool-driven Design and Automated Parameterization for Real-time Generic Drivetrain Models

    Directory of Open Access Journals (Sweden)

    Schwarz Christina

    2015-01-01

    Real-time dynamic drivetrain modeling approaches have great potential for development cost reduction in the automotive industry. Even though real-time drivetrain models are available, these solutions are specific to single transmission topologies. In this paper an environment for parameterization of a solution is proposed based on a generic method applicable to all types of gear transmission topologies. This enables tool-guided modeling by non-experts in the fields of mechanical engineering and control theory, leading to reduced development and testing efforts. The approach is demonstrated for an exemplary automatic transmission using the environment for automated parameterization. Finally, the parameterization is validated via vehicle measurement data.

  1. Identifiability of Model Properties in Over-Parameterized Model Classes

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2013-01-01

    Classical learning theory is based on a tight linkage between hypothesis space (a class of functions on a domain X), data space (function-value examples (x, f(x))), and the space of queries for the learned model (predicting function values for new examples x). However, in many learning scenarios......: the identification of temporal logic properties of probabilistic automata learned from sequence data, the identification of causal dependencies in probabilistic graphical models, and the transfer of probabilistic relational models to new domains....

  2. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

    eventually resurfaces with the global thermohaline circulation, especially in the Southern Ocean. Because of the reduced primary production and carbon export in GISSEH compared to GISSER, the biological pump efficiency, i.e., the ratio of primary production and carbon export at 75 m, in GISSEH is half of that in GISSER. The Southern Ocean emerges as a key region where the CO2 flux is as sensitive to biological parameterizations as it is to physical parameterizations. The fidelity of ocean mixing in the Southern Ocean compared to observations is shown to be a good indicator of the magnitude of the biological pump efficiency regardless of physical model choice.

  3. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf
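
    For reference, the single-source Penman-Monteith form that anchors these comparisons is reproduced below in standard notation; the two-layer and three-source variants partition the same terms between canopy and soil.

        \lambda E = \frac{\Delta\,(R_n - G) + \rho_a c_p\,(e_s - e_a)/r_a}{\Delta + \gamma\left(1 + r_s/r_a\right)} ,

    where R_n - G is the available energy, e_s - e_a the vapour pressure deficit, r_a and r_s the aerodynamic and surface resistances, Δ the slope of the saturation vapour pressure curve and γ the psychrometric constant.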

  4. Exploring morphological indicators for improved model parameterization in transport modeling

    Science.gov (United States)

    Kumahor, Samuel K.; Vogel, Hans-Jörg

    2017-04-01

    Two phenomena that control transport of colloidal materials, including nanoparticles, are interactions at the air-water and solid-water interfaces under unsaturated flow. Current approaches for multiphase inverse modeling to quantify the associated processes utilize empirical parameters and/or assumptions to characterise these interactions. This introduces uncertainty in model outcomes. Two classical examples are: (i) application of the Young-Laplace Equation, assuming spherical air-water interfaces, to quantify interactions at the air-water interface and (ii) the choice of parameters that define the nature and shape of retention profiles for modeling straining at the solid-water interface. In this contribution, an alternate approach using morphological indicators derived from X-ray micro-computed tomography (µ-CT) to quantify interaction at both the air-water interface and solid-water interface is presented. These indicators, related to air-water and solid-water interface densities, are thought to alleviate the deficiencies associated with modeling interaction at both the solid-water and air-water interfaces.

  5. Potential Vorticity based parameterization for specification of Upper troposphere/lower stratosphere ozone in atmospheric models

    Data.gov (United States)

    U.S. Environmental Protection Agency — Potential Vorticity based parameterization for specification of Upper troposphere/lower stratosphere ozone in atmospheric models - the data set consists of 3D O3...

  6. A Stochastic Lagrangian Basis for a Probabilistic Parameterization of Moisture Condensation in Eulerian Models

    OpenAIRE

    Tsang, Yue-Kin; Vallis, Geoffrey K.

    2018-01-01

    In this paper we describe the construction of an efficient probabilistic parameterization that could be used in a coarse-resolution numerical model in which the variation of moisture is not properly resolved. An Eulerian model using a coarse-grained field on a grid cannot properly resolve regions of saturation, in which condensation occurs, that are smaller than the grid boxes. Thus, in the absence of a parameterization scheme, either the grid box must become saturated or condensation will ...

  7. Parameterization models for solar radiation and solar technology applications

    International Nuclear Information System (INIS)

    Khalil, Samy A.

    2008-01-01

    Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. Calibration procedures for broadband solar radiation photometric instrumentation have been developed and the accuracy of broadband solar radiation measurements improved. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and for solar radiation technology are briefly reviewed. The uncertainties in various broadband solar radiation quantities used in solar energy applications, and the associated atmospheric effects, are discussed. The varying responsivities of solar radiation with meteorological, statistical and climatological parameters and possible atmospheric conditions were examined.

  8. Inverse modeling of pumping tests to parameterize three-dimensional hydrofacies models

    Science.gov (United States)

    Medina-Ortega, P.; Morales-Casique, E.; Escolero-Fuentes, O.; Hernandez Espriu, A.

    2013-05-01

    We model the spatial distribution of hydrofacies in the aquifer of Mexico City and present a procedure for parameterizing those hydrofacies by inverse modeling of pumping tests. The aquifer is composed of a highly heterogeneous mixture of alluvial deposits and volcanic rocks. Lithological records from 111 production water wells are analyzed using indicator geostatistics. The different lithological categories are grouped into four hydrofacies, where a hydrofacies is a set of lithological categories which have similar hydraulic properties. An exponential variogram model was fitted to each hydrofacies by minimizing cross validation errors. The data set is then kriged to obtain the three-dimensional distribution of each hydrofacies within the alluvial aquifer of Mexico City. We present a procedure to parameterize the four hydrofacies by inverse modeling of two pumping tests and test the predictive capabilities of the inversion results by forward modeling of two more pumping tests.

  9. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  10. A comparison of sea salt emission parameterizations in northwestern Europe using a chemistry transport model setup

    Directory of Open Access Journals (Sweden)

    D. Neumann

    2016-08-01

    Atmospheric sea salt particles affect chemical and physical processes in the atmosphere. These particles provide surface area for condensation and reaction of nitrogen, sulfur, and organic species and are a vehicle for the transport of these species. Additionally, HCl is released from sea salt. Hence, sea salt has a relevant impact on air quality, particularly in coastal regions with high anthropogenic emissions, such as the North Sea region. Therefore, the integration of sea salt emissions in modeling studies in these regions is necessary. However, it was found that sea salt concentrations are not represented with the necessary accuracy in some situations. In this study, three sea salt emission parameterizations depending on different combinations of wind speed, salinity, sea surface temperature, and wave data were implemented and compared: GO03 (Gong, 2003), SP13 (Spada et al., 2013), and OV14 (Ovadnevaite et al., 2014). The aim was to identify the parameterization that most accurately predicts the sea salt mass concentrations at different distances to the source regions. For this purpose, modeled particle sodium concentrations, sodium wet deposition, and aerosol optical depth were evaluated against measurements of these parameters. A 2-month period in each of winter and summer 2008 was considered for this purpose. The shortness of these periods limits the generalizability of the conclusions to other years. While the GO03 emissions yielded overestimations in the PM10 concentrations at coastal stations and underestimations of those at inland stations, OV14 emissions conversely led to underestimations at coastal stations and overestimations at inland stations. Because of the differently shaped particle size distributions of the GO03 and OV14 emission cases, the deposition velocity of the coarse particles differed between the two cases, which yielded this distinct behavior at inland and coastal stations. The PM10 concentrations produced by the SP13 emissions

  11. Parameterization models for pesticide exposure via crop consumption.

    Science.gov (United States)

    Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier

    2012-12-04

    An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations with total deviations between a factor 4 (potato) and a factor 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.

  12. Physically sound parameterization of incomplete ionization in aluminum-doped silicon

    Directory of Open Access Journals (Sweden)

    Heiko Steinkemper

    2016-12-01

    Incomplete ionization is an important issue when modeling silicon devices featuring aluminum-doped p+ (Al-p+) regions. Aluminum has a rather deep state in the band gap compared to boron or phosphorus, causing strong incomplete ionization. In this paper, we considerably improve our recent parameterization [Steinkemper et al., J. Appl. Phys. 117, 074504 (2015)]. On the one hand, we found a fundamental criterion to further reduce the number of free parameters in our fitting procedure; on the other hand, we address a mistake in the original publication of the incomplete ionization formalism in Altermatt et al., J. Appl. Phys. 100, 113715 (2006).

  13. “Using Statistical Comparisons between SPartICus Cirrus Microphysical Measurements, Detailed Cloud Models, and GCM Cloud Parameterizations to Understand Physical Processes Controlling Cirrus Properties and to Improve the Cloud Parameterizations”

    Energy Technology Data Exchange (ETDEWEB)

    Woods, Sarah [SPEC Inc., Boulder, CO (United States)

    2015-12-01

    The dual objectives of this project were to improve our basic understanding of the processes that control cirrus microphysical properties and to improve the representation of these processes in parameterizations. A major effort in the proposed research was to integrate, calibrate, and better understand the uncertainties in all of these measurements.

  14. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ, m-1), diffuse backscatter b(λ, m-1), beam attenuation α(λ, m-1), and beam-to-diffuse conversion c(λ, m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  15. Attributing the behavior of low-level clouds in large-scale models to subgrid-scale parameterizations

    Science.gov (United States)

    Neggers, R. A. J.

    2015-12-01

    This study explores ways of establishing the characteristic behavior of boundary layer schemes in representing subtropical marine low-level clouds in climate models. To this purpose, parameterization schemes are studied in both isolated and interactive mode with the larger-scale circulation. Results of the EUCLIPSE/GASS intercomparison study for Single-Column Models (SCM) on low-level cloud transitions are compared to General Circulation Model (GCM) results from the CFMIP-2 project at selected grid points in the subtropical eastern Pacific. Low cloud characteristics are plotted as a function of key state variables for which Large-Eddy Simulation results suggest a distinct and reasonably tight relation. These include the Cloud Top Entrainment Instability (CTEI) parameter and the total cloud cover. SCM and GCM results are thus compared and their resemblance is quantified using simple metrics. Good agreement is reported, to such a degree that SCM results are found to be uniquely representative of their GCM, and vice versa. This suggests that the system of parameterized fast boundary layer physics dominates the model state at any given time, even when interactive with the larger-scale flow. This behavior can loosely be interpreted as a unique "fingerprint" of a boundary layer scheme, recognizable in both SCM and GCM simulations. The result justifies and advocates the use of SCM simulation for improving weather and climate models, including the attribution of typical responses of low clouds to climate change in a GCM to specific parameterizations.

  16. Evaluating the importance of characterizing soil structure and horizons in parameterizing a hydrologic process model

    Science.gov (United States)

    Mirus, Benjamin B.

    2015-01-01

    Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.
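    For the first method above, combining layered soils into a single homogeneous, anisotropic unit, the effective conductivities are typically taken as thickness-weighted means, for example (a generic sketch, not necessarily the exact averaging used in the study):

```latex
K_{h} \;=\; \frac{\sum_i b_i K_i}{\sum_i b_i},
\qquad
K_{v} \;=\; \frac{\sum_i b_i}{\sum_i b_i/K_i},
```

    where b_i and K_i are the thickness and saturated hydraulic conductivity of layer i; the arithmetic mean applies to flow parallel to the layering and the harmonic mean to flow across it, which is what produces the bulk anisotropy.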

  17. Efficient parameterization of cardiac action potential models using a genetic algorithm.

    Science.gov (United States)

    Cairns, Darby I; Fenton, Flavio H; Cherry, E M

    2017-09-01

    Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.
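    A minimal sketch of a genetic-algorithm fitting loop in the spirit of the "model recovery" experiments described above. The exponential "action potential" surrogate, parameter names, bounds, and GA settings are hypothetical stand-ins, not the phenomenological cardiac models or the implementation used in the paper.

```python
# Toy genetic-algorithm parameter recovery (illustrative only; not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 300.0, 301)                      # time in ms

def toy_ap(params, t):
    """Hypothetical action-potential-like waveform: upstroke times repolarization."""
    amp, tau_up, tau_down = params
    return amp * (1.0 - np.exp(-t / tau_up)) * np.exp(-t / tau_down)

true_params = np.array([100.0, 2.0, 80.0])            # "truth" to be recovered
target = toy_ap(true_params, t)
bounds = np.array([[10.0, 200.0], [0.5, 10.0], [20.0, 200.0]])

def fitness(params):
    return -np.mean((toy_ap(params, t) - target) ** 2)  # negative mean-squared error

def run_ga(pop_size=60, n_gen=200, mut_sigma=0.05, n_elite=5):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        order = np.argsort([fitness(p) for p in pop])[::-1]   # best first
        pop = pop[order]
        parents, children = pop[: pop_size // 2], []
        while len(children) < pop_size - n_elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=len(bounds))
            child = w * a + (1.0 - w) * b                      # blend crossover
            child += mut_sigma * (bounds[:, 1] - bounds[:, 0]) * rng.normal(size=len(bounds))
            children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([pop[:n_elite], children])             # elitism + offspring
    return max(pop, key=fitness)

print("recovered:", run_ga().round(2), "truth:", true_params)
```

    In the actual application the chromosome carries up to 27 model parameters and the fitness compares simulated action potentials at several cycle lengths against the target data.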

  19. Assessing Impact, DIF, and DFF in Accommodated Item Scores: A Comparison of Multilevel Measurement Model Parameterizations

    Science.gov (United States)

    Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D.

    2012-01-01

    This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…

  20. Parameterization of Norfolk sandy loam properties for stochastic modeling of light in-wheel motor UGV

    Science.gov (United States)

    To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...

  1. Structure and parameterization of MF-Swift, a magic formula-based rigid ring tire model

    NARCIS (Netherlands)

    Schmeitz, A.J.C.; Versteden, W.D.

    2009-01-01

    Vehicle dynamic simulations require accurate, fast, reliable, and easy-to-parameterize tire models. For this purpose, TNO developed MF-Swift in close cooperation with the technical universities of Delft and Eindhoven. MF-Swift is based on the well-known magic formula model of Pacejka but extending
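    For reference, the magic formula referred to above is typically written in its basic form as follows (horizontal and vertical shifts and combined-slip extensions omitted):

```latex
Y(x) \;=\; D\,\sin\!\Big(C\,\arctan\!\big[\,B x - E\,\big(B x - \arctan(Bx)\big)\,\big]\Big),
```

    where Y is a tire force or moment, x the slip quantity, and B, C, D, and E the stiffness, shape, peak, and curvature factors; MF-Swift couples this quasi-static description to a rigid-ring dynamic tire model.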

  2. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant trait database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that varies in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to perform ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential for advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
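    A minimal sketch of the kind of trait sampling such a parameterization involves: drawing mutually correlated trait values for an ensemble from TRY-like summary statistics. The trait names, values, the log-normal treatment, and the omission of explicit spatial and temporal correlation are all illustrative assumptions, not the authors' implementation.

```python
# Draw correlated plant-trait samples for an ensemble (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(42)

traits = ["SLA", "leaf_N", "rooting_depth"]         # hypothetical trait set
median = np.array([8.0, 1.5, 2.0])                  # hypothetical medians (units omitted)
sigma_log = np.array([0.30, 0.25, 0.40])            # hypothetical spread of log-traits
corr = np.array([[1.0, 0.6, -0.2],
                 [0.6, 1.0, -0.1],
                 [-0.2, -0.1, 1.0]])                # hypothetical trait correlations
cov = np.outer(sigma_log, sigma_log) * corr         # covariance of log-traits

n_members = 100                                     # ensemble size
samples = np.exp(rng.multivariate_normal(np.log(median), cov, size=n_members))

print({name: round(val, 2) for name, val in zip(traits, samples.mean(axis=0))})
```

    In the study itself the draws are additionally correlated in space and time and later conditioned on soil moisture and leaf-area-index observations.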

  3. Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM

    Science.gov (United States)

    Yao, Mao-Sung; Cheng, Ye

    2013-01-01

    The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.

  4. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004), in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.

  5. The development and evaluation of new runoff parameterization representations coupled with Noah Land Surface Model

    Science.gov (United States)

    Zheng, Z.; Zhang, W.; Xu, J.

    2011-12-01

    As a key component of the global water cycle, runoff plays an important role in the Earth's climate system by affecting the land surface water and energy balance. A realistic runoff parameterization within a land surface model (LSM) is important for accurate land surface modeling and numerical weather and climate prediction. Hence, optimization and refinement of the runoff formulation in an LSM can further improve the model's predictive capability for surface-to-atmosphere fluxes, which influence the complex interactions between the land surface and the atmosphere. Moreover, the performance of runoff simulation in an LSM is essential for drought and flood prediction and warning. In this study, a new runoff parameterization named XXT (Xin'anjiang x TOPMODEL) was developed by introducing the water table depth into the soil moisture storage capacity distribution curve (SMSCC) of the Xin'anjiang model to improve the surface runoff calculation, and then integrating it with a TOPMODEL-based groundwater scheme. Several studies have already found a strong correlation between the water table depth and land surface processes. In this runoff parameterization, the surface and subsurface runoff calculations are connected in a systematic way through the change of water table depth. The XXT runoff parameterization was calibrated and validated with datasets both from observations and from Weather Research and Forecasting (WRF) model outputs; the resulting high Nash efficiency coefficients indicate that it has a reliable capability for runoff simulation in different climate regions. After these model tests, the XXT runoff parameterization was coupled with the unified Noah LSM 3.2 in place of the simple water balance model (SWB) in order to alleviate runoff simulation biases which may lead to poor energy partitioning and evaporation. The impact of XXT is investigated through a whole-year (1998) simulation at the surface flux site of Champaign, Illinois (40.01°N, 88.37°W). The results show that Noah
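    For reference, the soil moisture storage capacity distribution curve of the Xin'anjiang model, into which XXT introduces the water table depth, is conventionally written as follows (a standard form; the XXT modification itself is not shown):

```latex
\frac{f}{F} \;=\; 1 - \left(1 - \frac{W'}{W'_{mm}}\right)^{B},
```

    where f/F is the fraction of the catchment whose point storage capacity does not exceed W', W'_{mm} is the maximum point storage capacity, and B is a shape parameter; saturation-excess surface runoff is obtained by integrating the rainfall in excess of the remaining storage over this curve.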

  6. Parameterized post-Newtonian approximation in a teleparallel model of dark energy with a boundary term

    Energy Technology Data Exchange (ETDEWEB)

    Sadjadi, H.M. [University of Tehran, Department of Physics, Tehran (Iran, Islamic Republic of)

    2017-03-15

    We study the parameterized post-Newtonian approximation in a teleparallel model of gravity with a scalar field. The scalar field is non-minimally coupled to the scalar torsion as well as to the boundary term introduced in Bahamonde and Wright (Phys Rev D 92:084034, arXiv:1508.06580v4 [gr-qc], 2015). We show that, in contrast to the case where the scalar field is coupled only to the scalar torsion, the presence of the new coupling affects the parameterized post-Newtonian parameters. These parameters are obtained and discussed for different situations. (orig.)
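    For context, in the standard PPN expansion the metric is written in terms of the Newtonian potential U and the PPN parameters, the most prominent of which are γ and β; a schematic of the leading terms (the full formalism contains further potentials and parameters):

```latex
g_{00} \;=\; -1 + 2U - 2\beta U^{2} + \ldots,
\qquad
g_{ij} \;=\; \left(1 + 2\gamma U\right)\delta_{ij} + \ldots,
```

    with γ = β = 1 in general relativity; the boundary-term coupling studied above is shown to shift these values away from their general-relativistic limits.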

  7. Design and implementation of parameterized adaptive cruise control: An explicit model predictive control approach

    NARCIS (Netherlands)

    Naus, G.J.L.; Ploeg, J.; Molengraft, M.J.G. van de; Heemels, W.P.M.H.; Steinbuch, M.

    2010-01-01

    The combination of different characteristics and situation-dependent behavior causes the design of adaptive cruise control (ACC) systems to be time consuming. This paper presents a systematic approach for the design of a parameterized ACC, based on explicit model predictive control. A unique feature

  8. A new albedo parameterization for use in climate models over the Antarctic ice sheet

    NARCIS (Netherlands)

    Kuipers Munneke, P.; van den Broeke, M.R.; Lenaerts, J.T.M.; Flanner, M.G.; Gardner, A.S.; van de Berg, W.J.

    2011-01-01

    A parameterization for broadband snow surface albedo, based on snow grain size evolution, cloud optical thickness, and solar zenith angle, is implemented into a regional climate model for Antarctica and validated against field observations of albedo for the period 1995–2004. Over the Antarctic

  9. A Boundary Layer Parameterization for a General Model.

    Science.gov (United States)

    1984-03-01

    The model development in this report is original in nature. The software logistics of the combination of these models is described in the User's Manual.

  10. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modelling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications of these models on the third-moment structure of the marginal distribution as well as conditions under which the unconditional distribution exhibits skewness and nonzero third-order autocovariance structure. In this respect, an asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible...

  11. A Parameterized Variable Dark Energy Model: Structure Formation and Observational Constraints

    OpenAIRE

    Bidgoli, Sepehr Arbabi; Movahed, M. Sadegh; Rahvar, Sohrab

    2005-01-01

    In this paper we investigate a simple parameterization scheme of the quintessence model given by Wetterich (2004). The crucial parameter of this model is the bending parameter $b$, which is related to the amount of dark energy in the early universe. Using the linear perturbation and the spherical infall approximations, we investigate the evolution of matter density perturbations in the variable dark energy model, and obtain an analytical expression for the growth index $f$. We show that incre...

  12. Parameterization of a Cookoff Model for LX-07

    Energy Technology Data Exchange (ETDEWEB)

    Aviles-Ramos, Cuauhtemoc [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Engineering Analysis Group (W-13)

    2016-11-22

    A thermal decomposition model for PBX 9501 (95% HMX, 2.5% Estane® binder, 2.5% BDNPA/F nitro-plasticizer) was implemented by Dickson et al. The objective of this study is to estimate the parameters of this kinetics model so it can be applied to thermal ignition predictions for LX-07 (90% HMX, 10% Viton binder). LX-07 thermal ignition experiments have been carried out using the Sandia Instrumented Thermal Ignition Apparatus (SITI). The SITI design consists of solid cylinders (1" diameter × 1" height) of high explosive (HE) confined by a cylindrical aluminum case. An electric heater wrapped around the outer surface of the case produces a temperature ramp on that surface. Internal thermocouples measure the HE temperature rise from the center to locations close to the HE-aluminum interface. The energetic material is heated until thermal ignition occurs. A two-dimensional axisymmetric heat conduction finite element model is used to simulate these experiments. The HE thermal decomposition kinetics is coupled to the heat conduction model through the definition of an energy source term. The parameters defining the HE thermal decomposition model are optimized to obtain good agreement with the experimental times to thermal ignition and temperatures. In addition, the heat capacity and thermal conductivity of the LX-07 mixture were estimated using temperatures measured at the center of the HE before the solid-to-solid HMX phase transition occurred.
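    A generic single-step version of such a coupled heat-conduction/decomposition model looks as follows; this is only a sketch, since the Dickson PBX 9501 scheme uses a multi-step HMX mechanism with its own parameter set:

```latex
\rho c_p \frac{\partial T}{\partial t} \;=\; \nabla\cdot\!\left(k\,\nabla T\right) + \rho\,Q\,Z\,(1-\lambda)^{n}\exp\!\left(-\frac{E_a}{RT}\right),
\qquad
\frac{d\lambda}{dt} \;=\; Z\,(1-\lambda)^{n}\exp\!\left(-\frac{E_a}{RT}\right),
```

    where λ is the extent of reaction, Q the heat of reaction, and Z, E_a, and n the kinetic parameters that an optimization of the kind described above would adjust against the measured ignition times and temperature histories.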

  13. Impact of wetlands mapping on parameterization of hydrologic simulation models

    Science.gov (United States)

    Viger, R.

    2015-12-01

    Wetlands and other surface depressions can impact hydrologic response within the landscape in a number of ways, such as intercepting runoff and near-surface flows or changing the potential for evaporation and seepage into the soil. The role of these features is increasingly being integrated into hydrological simulation models, such as the USGS Precipitation-Runoff Modeling System (PRMS) and the Soil Water Assessment Tool (SWAT), and applied to landscapes where wetlands are dominating features. Because the extent of these features varies widely through time, many modeling applications rely on delineations of the maximum possible extent to define total capacity of a model's spatial response unit. This poster presents an evaluation of several wetland map delineations for the Pipestem River basin in the North Dakota Prairie-pothole region. The featured data sets include the US Fish and Wildlife Service National Wetlands Inventory (NWI), surface water bodies extracted from the US Geological Survey National Hydrography Dataset (NHD), and elevation depressions extracted from 1 meter LiDAR data for the area. In addition to characterizing differences in the quality of these datasets, the poster will assess the impact of these differences when parameters are derived from them for the spatial response units of the PRMS model.

  14. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    Science.gov (United States)

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle with adverse environmental impacts. Low impact development (LID) tools that mimic hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one possibility to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering the model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining a detailed surface discretization for direct parameter manipulation for LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  16. A generalized and parameterized interference model for cognitive radio networks

    KAUST Repository

    Mahmood, Nurul Huda

    2011-06-01

    For meaningful co-existence of cognitive radios with a primary system, it is imperative that the cognitive radio system is aware of how much interference it generates at the primary receivers. This can be done through statistical modeling of the interference as perceived at the primary receivers. In this work, we propose a generalized model for the interference generated by a cognitive radio network, in the presence of small- and large-scale fading, at a primary receiver located at the origin. We then demonstrate how this model can be used to estimate the impact of cognitive radio transmission on the primary receiver in terms of different outage probabilities. Finally, our analytical findings are validated through selected computer-based simulations. © 2011 IEEE.

  17. Parameterization Models for Pesticide Exposure via Crop Consumption

    DEFF Research Database (Denmark)

    Fantke, Peter; Wieland, Peter; Juraske, Ronnie

    2012-01-01

    An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties), including their possible correlations, using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop...

  18. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since every river basin has its own natural characteristics, and every hydrological event within it is unique, calibration is a complex process that is not yet researched enough. Calibration is the procedure of determining those parameters of a model that are not known well enough: input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms, but these give the modeller little possibility to manage the process, and the results are often not the best. We develop a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert, and in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than leaving the procedure entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological features such as karstic, alluvial, and forest areas; this step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group

  19. Evaluation of a stratiform cloud parameterization for general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  20. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  1. Ice shelf fracture parameterization in an ice sheet model

    Directory of Open Access Journals (Sweden)

    S. Sun

    2017-11-01

    Floating ice shelves exert a stabilizing force onto the inland ice sheet. However, this buttressing effect is diminished by the fracture process, which on large scales effectively softens the ice, accelerating its flow, increasing calving, and potentially leading to ice shelf breakup. We add a continuum damage model (CDM) to the BISICLES ice sheet model, which is intended to model the localized opening of crevasses under stress, the transport of those crevasses through the ice sheet, and the coupling between crevasse depth and the ice flow field, and we carry out idealized numerical experiments examining the broad impact on large-scale ice sheet and shelf dynamics. In each case we see a complex pattern of damage evolve over time, with an eventual loss of buttressing approximately equivalent to halving the thickness of the ice shelf. We find that it is possible to achieve a similar ice flow pattern using a simple rule of thumb: introducing an enhancement factor ∼ 10 everywhere in the model domain. However, spatially varying damage (or, equivalently, enhancement factor) fields set at the start of prognostic calculations to match velocity observations, as is widely done in ice sheet simulations, ought to evolve in time, or grounding line retreat can be slowed by an order of magnitude.

  2. Deposition parameterizations for the Industrial Source Complex (ISC3) model

    Energy Technology Data Exchange (ETDEWEB)

    Wesely, Marvin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Doskey, Paul V. [Argonne National Lab. (ANL), Argonne, IL (United States); Shannon, J. D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2002-06-01

    Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (downward fluxes divided by concentrations at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed, providing a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
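    In such a resistance scheme, the dry deposition velocity is written as the inverse of resistances acting in series (a generic sketch of the conventional formulation referred to above):

```latex
v_d \;=\; \frac{1}{r_a + r_b + r_c},
```

    where r_a is the aerodynamic resistance obtained from the micrometeorological formulas, r_b the quasi-laminar sublayer resistance, and r_c the bulk surface resistance, the last combining stomatal, cuticular, and ground uptake pathways in parallel.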

  3. Macromolecular refinement by model morphing using non-atomic parameterizations.

    Science.gov (United States)

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  4. Demonstration of Effects on Tropical Cyclone Forecasts with a High Resolution Global Model from Variation in Cumulus Convection Parameterization

    Science.gov (United States)

    Miller, Timothy L.; Robertson, Franklin R.; Cohen, Charles; Mackaro, Jessica

    2009-01-01

    The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models developed at Goddard Space Flight Center to support NASA's earth science research in data analysis, observing system modeling and design, climate and weather prediction, and basic research. The work presented used GEOS-5 with 0.25° horizontal resolution and 72 vertical levels (up to 0.01 hPa), resolving both the troposphere and stratosphere, with closer packing of the levels near the surface. The model includes explicit (grid-scale) moist physics, as well as convective parameterization schemes. Results will be presented that demonstrate a strong dependence of the simulation of a strong hurricane on the type of convective parameterization scheme used. The previous standard (default) option in the model was the Relaxed Arakawa-Schubert (RAS) scheme, which uses a quasi-equilibrium closure. In the cases shown, this scheme does not permit the efficient development of a strong storm in comparison with observations. When this scheme is replaced by a modified version of the Kain-Fritsch scheme, which was originally developed for use on grids with intervals of order 25 km such as the present one, the storm is able to develop to a much greater extent, closer to that of reality. Details of the two cases will be shown in order to elucidate the differences in the two modeled storms.

  5. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    Science.gov (United States)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. These include classical nucleation theory-based variants, empirical models, and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving binary (BHN) and ternary (THN) homogeneous nucleation, including ion effects, these new models should be considered worthwhile replacements for the old models. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation includes a distinct post-nucleation growth regime which is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds which help to overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models and there is a lack of comprehensive parameterizations. This review considers the existing models and recent innovations.
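    The Kelvin curvature effect mentioned above can be summarized by the equilibrium saturation ratio required over a droplet of diameter d_p (the standard Kelvin equation, neglecting solute effects):

```latex
S_K \;=\; \exp\!\left(\frac{4\,\sigma\,M}{\rho\,R\,T\,d_p}\right),
```

    where σ is the surface tension, M the molar mass, and ρ the density of the condensing species; the smaller the freshly nucleated particle, the lower the volatility the condensing organics (or sulfuric acid) must have for growth to proceed.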

  6. AUTOMATED FORCE FIELD PARAMETERIZATION FOR NON-POLARIZABLE AND POLARIZABLE ATOMIC MODELS BASED ON AB INITIO TARGET DATA.

    Science.gov (United States)

    Huang, Lei; Roux, Benoît

    2013-08-13

    Classical molecular dynamics (MD) simulations based on atomistic models are increasingly used to study a wide range of biological systems. A prerequisite for meaningful results from such simulations is an accurate molecular mechanical force field. Most biomolecular simulations are currently based on the widely used AMBER and CHARMM force fields, which were parameterized and optimized to cover a small set of basic compounds corresponding to the natural amino acids and nucleic acid bases. Atomic models of additional compounds are commonly generated by analogy to the parameter set of a given force field. While this procedure yields models that are internally consistent, the accuracy of the resulting models can be limited. In this work, we propose a method, General Automated Atomic Model Parameterization (GAAMP), for automatically generating the parameters of atomic models of small molecules using the results from ab initio quantum mechanical (QM) calculations as target data. Force fields that were previously developed for a wide range of model compounds serve as an initial guess, although any of the final parameters can be optimized. The electrostatic parameters (partial charges, polarizabilities and shielding) are optimized on the basis of the QM electrostatic potential (ESP) and, if applicable, the interaction energies between the compound and water molecules. The soft dihedrals are automatically identified and parameterized by targeting QM dihedral scans as well as the energies of stable conformers. To validate the approach, the solvation free energy is calculated for more than 200 small molecules, and MD simulations of 3 different proteins are carried out.

  7. Parameterized Radiation Transport Model for Neutron Detection in Complex Scenes

    Science.gov (United States)

    Lavelle, C. M.; Bisson, D.; Gilligan, J.; Fisher, B. M.; Mayo, R. M.

    2013-04-01

    There is interest in developing the ability to rapidly compute the energy-dependent neutron flux within a complex geometry for a variety of applications. Coupled with sensor response function information, this capability would allow direct estimation of sensor behavior in a multitude of operational scenarios. In situations where detailed simulation is not warranted or affordable, it is desirable to possess reliable estimates of the neutron field in practical scenarios that do not require intense computation. A tool set of this kind would provide quantitative means to address the development of operational concepts, inform asset allocation decisions, and support exercise planning. Monte Carlo and/or deterministic methods provide a high degree of precision and fidelity, consistent with the accuracy with which the scene is rendered. However, these methods are often too computationally expensive to support the real-time evolution of a virtual operational scenario. High-fidelity neutron transport simulations are also time consuming from the standpoint of user setup and post-simulation analysis. As an alternative to full Monte Carlo modeling, we pre-compute adjoint solutions using MCNP to generate a coarse spatial and energy grid of the neutron flux over various surfaces, attempting to capture the characteristics of the neutron transport solution. We report on the results of brief verification and validation measurements which test the predictive capability of this approach over soil and asphalt concrete surfaces. We highlight the sensitivity of the simulated and experimental results to the material composition of the environment.

  8. Mesoscale model parameterizations for radiation and turbulent fluxes at the lower boundary

    International Nuclear Information System (INIS)

    Somieski, F.

    1988-11-01

    A radiation parameterization scheme for use in mesoscale models with orography and clouds has been developed. Broadband parameterizations are presented for the solar and the terrestrial spectral ranges. They account for clear, turbid or cloudy atmospheres. The scheme is one-dimensional in the atmosphere, but the effects of mountains (inclination, shading, elevated horizon) are taken into account at the surface. In the terrestrial band, grey and black clouds are considered. Furthermore, the calculation of turbulent fluxes of sensible and latent heat and momentum at an inclined lower model boundary is described. Surface-layer similarity and the surface energy budget are used to evaluate the ground surface temperature. The total scheme is part of the mesoscale model MESOSCOP. (orig.)

  9. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments using the cloud microphysical parcel model. The only CCN input these parameterizations require is a few values of the CCN spectrum (as provided by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities describing the CCN spectrum such as C and k in the relation N = C S^k, or its breadth, total number, and median radius. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforementioned hybrid microphysical model. The newly developed parameterizations save computing time and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. The parameterizations are useful not only for the bin method in a regional cloud-resolving model but also for a two-moment bulk microphysical model and

  10. A simple parameterization of sub-grid scale open water for climate models

    Science.gov (United States)

    Pitman, Aj

    1991-09-01

    The effects of small fractions of open water covering a grid element are currently neglected even in atmospheric general circulation models (AGCMs) which incorporate complex land surface parameterization schemes. Here, a method for simulating sub-grid scale open water is proposed which permits any existing land surface model to be modified to account for open water. This new parameterization is tested as an addition to an advanced land surface scheme and, as expected, is shown to produce general increases in the surface latent heat flux at the expense of the surface sensible heat flux. Small changes in temperature are associated with this change in the partitioning of available energy, which is driven by an increase in the wetness of the grid element. The sensitivity of the land surface to increasing amounts of open water is dependent upon the type of vegetation represented. Dense vegetation (with a high leaf area index) is shown to complicate the apparently simple model sensitivity and indicates that more advanced methods of incorporating open water into AGCMs need to be considered and compared against the parameterization suggested here. However, the sensitivity of one land surface model to incorporating open water is large enough to warrant consideration of its incorporation into climate models.

  11. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  12. Development and Implementation of Universal Cloud/Radiation Parameterizations in Navy Operational Forecast Models

    Science.gov (United States)

    2013-09-30

    www.eas.purdue.edu/research/clew/index.html LONG-TERM GOALS: Improve the simulation of atmospheric radiation energy fields in Navy operational forecast models. Cloud parameters are passed to RRTMG, and the resulting shortwave and longwave radiation fluxes and cooling rates are examined.

  13. The effects of microphysical parameterization on model predictions of sulfate production in clouds

    OpenAIRE

    HEGG, DEAN A.; LARSON, TIMOTHY V.

    2011-01-01

    Model predictions of sulfate production by an explicit cloud chemistry parameterization are compared with corresponding predictions by a bulk chemistry model. Under conditions of high SO2 and H2O2, the various model predictions are in reasonable agreement. For conditions of low H2O2, the explicit microphysical model predicts sulfate production as much as 30 times higher than the bulk model, though more commonly the difference is of the order of a factor of 3. The differences arise because of ...

  14. Comparison of 2-3D convection models with parameterized thermal evolution models: Application to Mars

    Science.gov (United States)

    Thiriet, M.; Plesa, A. C.; Breuer, D.; Michaut, C.

    2017-12-01

    To model the thermal evolution of terrestrial planets, 1D parameterized models are often used, as 2D or 3D mantle convection codes are very time-consuming. In these parameterized models, scaling laws that describe the convective heat transfer rate as a function of the convective parameters are derived from 2D-3D steady-state convection models. However, so far there has been no comprehensive comparison of whether they can be applied to model the thermal evolution of a cooling planet. Here we compare 2D and 3D thermal evolution models in the stagnant lid regime with 1D parameterized models and use parameters representing the cooling of the Martian mantle. For the 1D parameterized models, we use the approach of Grasset and Parmentier (1998) and treat the stagnant lid and the convecting layer separately. In the convecting layer, the scaling law for a fluid with constant viscosity is valid, with Nu ∝ (Ra/Ra_c)^β, where Ra_c is the critical Rayleigh number at which the thermal boundary layers (TBLs) - top or bottom - destabilize. The exponent β varies between 1/3 and 1/4 depending on the heating mode, and previous studies have proposed intermediate values of β ≈ 0.28-0.32 according to their model set-up. The base of the stagnant lid is defined by the temperature at which the mantle viscosity has increased by a factor of 10; it thus depends on the rate of viscosity change with temperature multiplied by a factor a_rh, whose value appears to vary depending on the geometry and convection conditions. In applying Monte Carlo simulations, we search for the best fit to temperature profiles and heat flux using three free parameters, i.e. β of the upper TBL, a_rh, and Ra_c of the lower TBL. We find that, depending on the definition of the stagnant lid thickness in the 2D-3D models, several combinations of β and a_rh for the upper TBL can retrieve suitable fits, e.g. combinations of β = 0.329 and a_rh = 2.19 but also β = 0.295 and a_rh = 2.97 are possible; Ra_c of the lower TBL is 10 for all best fits. The results show that
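    For reference, the stagnant-lid scaling relations discussed above are commonly written as follows; this is a schematic using one standard notation (a_rh is sometimes called the rheological or viscous temperature-scale factor), and prefactors and exact definitions vary between studies:

```latex
\mathrm{Nu} \;\propto\; \left(\frac{Ra}{Ra_c}\right)^{\beta},
\qquad
T_{\mathrm{lid}} \;=\; T_m \;-\; a_{rh}\left(-\frac{d\ln\eta}{dT}\Big|_{T_m}\right)^{-1},
```

    where Nu is the Nusselt number of the convecting layer, Ra its Rayleigh number, T_m the mantle temperature, η(T) the temperature-dependent viscosity, and T_lid the temperature at the base of the stagnant lid.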

  15. Morphing methods to parameterize specimen-specific finite element model geometries.

    Science.gov (United States)

    Sigal, Ian A; Yang, Hongli; Roberts, Michael D; Downs, J Crawford

    2010-01-19

    Shape plays an important role in determining the biomechanical response of a structure. Specimen-specific finite element (FE) models have been developed to capture the details of the shape of biological structures and predict their biomechanics. Shape, however, can vary considerably across individuals or change due to aging or disease, and analysis of the sensitivity of specimen-specific models to these variations has proven challenging. An alternative to specimen-specific representation has been to develop generic models with simplified geometries whose shape is relatively easy to parameterize, and can therefore be readily used in sensitivity studies. Despite many successful applications, generic models are limited in that they cannot make predictions for individual specimens. We propose that it is possible to harness the detail available in specimen-specific models while leveraging the power of the parameterization techniques common in generic models. In this work we show that this can be accomplished by using morphing techniques to parameterize the geometry of specimen-specific FE models such that the model shape can be varied in a controlled and systematic way suitable for sensitivity analysis. We demonstrate three morphing techniques by using them on a model of the load-bearing tissues of the posterior pole of the eye. We show that using relatively straightforward procedures these morphing techniques can be combined, which allows the study of factor interactions. Finally, we illustrate that the techniques can be used in other systems by applying them to morph a femur. Morphing techniques provide an exciting new possibility for the analysis of the biomechanical role of shape, independently or in interaction with loading and material properties. Copyright 2009 Elsevier Ltd. All rights reserved.

  16. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Science.gov (United States)

    Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.

  17. Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model

    Science.gov (United States)

    Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.

    2014-01-01

    The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS-5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycle) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows a maximum lower-level (850-950 hPa) wind speed greater than 60 m s⁻¹ and a minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane - a category never achieved by the control configuration. The radius of maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying in the mid- to upper troposphere, along with positive latent heat flux and moistening in the lower troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.

  18. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. All too often, boundary-layer parameterizations are still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  19. Parameterization of a bucket model for soil-vegetation-atmosphere modeling under seasonal climatic regimes

    Directory of Open Access Journals (Sweden)

    N. Romano

    2011-12-01

    Full Text Available We investigate the potential impact of accounting for seasonal variations in the climatic forcing and of using different methods to parameterize the soil water content at field capacity on the water balance components computed by a bucket model (BM). The single-layer BM of Guswa et al. (2002) is employed, whereas the Richards equation (RE) based Soil Water Atmosphere Plant (SWAP) model is used as a benchmark model. The results are analyzed for two differently textured soils and for a set of synthetic runs under realistic seasonal weather conditions, using stochastically generated daily rainfall data for a period of 100 years. Since transient soil-moisture dynamics and climatic seasonality play a key role in certain zones of the world, such as Mediterranean land areas, a specific feature of this study is to test the prediction capability of the bucket model under a condition where seasonal variations in rainfall are not in phase with the variations in plant transpiration. Reference is made to a hydrologic year comprising a rainy period (starting 1 November and lasting 151 days) during which vegetation is basically assumed to be in a dormant stage, followed by a drier and rainless period with a vegetation regrowth phase. Better agreement between the BM and RE-SWAP intercomparison results is obtained when BM is parameterized by a field capacity value determined through the drainage method proposed by Romano and Santini (2002). Depending on the vegetation regrowth or dormant seasons, rainfall variability within a season results in transpiration regimes and soil moisture fluctuations with distinctive features. During the vegetation regrowth season, transpiration exerts a key control on the soil water budget with respect to rainfall. During the dormant season of vegetation, the precipitation regime becomes an important climate forcing. Simulations also highlight the occurrence of bimodality in the probability distribution of soil moisture during the season when plants are
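
    As a point of reference for the comparison above, a single-layer bucket model reduces the soil water balance to one storage variable updated by rainfall, drainage above field capacity, and moisture-limited transpiration. The sketch below is a minimal illustration of that bookkeeping, not the Guswa et al. (2002) formulation itself; the linear transpiration reduction, the same-day drainage assumption and all variable names are simplifying assumptions.

```python
# Minimal daily bucket water-balance sketch (illustrative assumptions, not the
# Guswa et al. (2002) model). All quantities in mm of water.
def bucket_step(storage, rain, pet, s_fc, s_wilt):
    """Advance soil water storage by one day; return (storage, aet, drainage).

    s_fc   storage at field capacity (the parameter whose estimation method
           the study tests); s_wilt is the storage at the wilting point.
    """
    storage += rain
    drainage = max(0.0, storage - s_fc)        # assume excess drains the same day
    storage -= drainage
    # transpiration reduced linearly between field capacity and wilting point
    stress = max(0.0, min(1.0, (storage - s_wilt) / (s_fc - s_wilt)))
    aet = pet * stress
    storage -= aet
    return storage, aet, drainage
```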

  20. Models parameterization for SWE retrievals from passive microwave over Canadian boreal forest

    Science.gov (United States)

    Roy, A.; Royer, A.; Langlois, A.; Montpetit, B.

    2012-12-01

    Boreal forest is the world's largest northern land biome and has important impacts and feedbacks on climate. Snow in this ecosystem drastically changes the surface energy balance (albedo, turbulent fluxes). Furthermore, snow is a freshwater reservoir influencing the hydrological regime and is an important source of energy through hydroelectricity. Passive microwave remote sensing is an appealing approach for characterizing the properties of snow at the synoptic scale; images are available at least twice a day for northern regions where meteorological stations and networks are generally sparse. However, major challenges such as the forest canopy contribution and the snow grain size within the snowpack, which both have a huge impact on the passive microwave signature from spaceborne sensors, must be well parameterized to retrieve variables of interest such as snow water equivalent (SWE). In this presentation, we show advances made in the parameterization of the boreal forest τ-ω (forest transmissivity and scattering) and QH (soil reflectivity) models, as well as developments in the treatment of snow grains in microwave snow emission modeling. In the perspective of AMSR-E brightness temperature (Tb) assimilation in the Canadian Land Surface Scheme (CLASS), we used a new version of a multi-layer snow emission model, DMRT-ML. First, based on two distinct Tb datasets (winter airborne and summer spaceborne), the τ-ω and QH models are parameterized at four frequencies (6.9, 10.7, 18.7 and 36.5 GHz) for dense boreal forest sites. The forest transmissivity is then spatialized by establishing a relationship with forest structure parameters (LAI and stem volume). Secondly, snow specific surface area (SSA) was parameterized in DMRT-ML based on SWIR reflectance measurements for SSA calculation, as well as snow characteristics (temperature, density, height) and radiometric (19 & 37 GHz) measurements conducted on 20 snowpits in different open environments (grass, tundra, dry fen). Analysis shows that a correction factor must be
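
    For readers unfamiliar with the τ-ω formalism mentioned above, the zeroth-order vegetation radiative transfer model expresses the top-of-canopy brightness temperature as the sum of attenuated surface emission, direct canopy emission, and canopy emission reflected by the surface. The sketch below is a generic illustration of that forward model; the function name, the inputs and the single-angle treatment are assumptions for illustration, not the authors' DMRT-ML/CLASS implementation.

```python
import numpy as np

# Generic zeroth-order tau-omega forward model for top-of-canopy brightness
# temperature over a soil/snow surface (one frequency and polarization).
def tb_tau_omega(t_soil, t_canopy, r_soil, tau, omega, theta_deg):
    """t_* in K, r_soil is the soil/snow reflectivity (the 'QH' quantity),
    tau the canopy optical depth, omega the single-scattering albedo."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))    # canopy transmissivity
    e_soil = 1.0 - r_soil                                   # surface emissivity
    tb_surface = e_soil * t_soil * gamma                    # attenuated surface emission
    tb_canopy_up = (1.0 - omega) * (1.0 - gamma) * t_canopy # direct canopy emission
    tb_canopy_refl = tb_canopy_up * r_soil * gamma          # canopy emission reflected upward
    return tb_surface + tb_canopy_up + tb_canopy_refl

# e.g., a dense canopy over dry snow at 36.5 GHz (illustrative numbers only)
print(tb_tau_omega(t_soil=260.0, t_canopy=265.0, r_soil=0.08, tau=0.6,
                   omega=0.07, theta_deg=55.0))
```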

  1. Sensitivity of Coupled Tropical Pacific Model Biases to Convective Parameterization in CESM1

    Science.gov (United States)

    Woelfle, M. D.; Yu, S.; Bretherton, C. S.; Pritchard, M. S.

    2018-01-01

    Six-month coupled hindcasts show the development of the central equatorial Pacific cold tongue bias in a GCM to be sensitive to the atmospheric convective parameterization employed. Simulations using the standard configuration of the Community Earth System Model version 1 (CESM1) develop a cold bias in equatorial Pacific sea surface temperatures (SSTs) within the first two months of integration due to anomalous ocean advection driven by overly strong easterly surface wind stress along the equator. Disabling the deep convection parameterization enhances the zonal pressure gradient, leading to stronger zonal wind stress and a stronger equatorial SST bias, highlighting the role of pressure gradients in determining the strength of the cold bias. Superparameterized hindcasts show reduced SST bias in the cold tongue region due to a reduction in surface easterlies, despite simulating an excessively strong low-level jet at 1-1.5 km elevation. This reflects inadequate vertical mixing of zonal momentum from the absence of convective momentum transport in the superparameterized model. Standard CESM1 simulations modified to omit shallow convective momentum transport reproduce the superparameterized low-level wind bias and associated equatorial SST pattern. Further superparameterized simulations using a three-dimensional cloud-resolving model capable of producing realistic momentum transport simulate a cold tongue similar to the default CESM1. These findings imply convective momentum fluxes may be an underappreciated mechanism for controlling the strength of the equatorial cold tongue. Despite the sensitivity of equatorial SST to these changes in convective parameterization, the east Pacific double-Intertropical Convergence Zone rainfall bias persists in all simulations presented in this study.

  2. Basic concepts for convection parameterization in weather forecast and climate models: COST Action ES0905 final report

    OpenAIRE

    Yano, J.-I.; Geleyn, J.-F.; Koller, M.; Mironov, D.; Quaas, J.; Soares, P. M. M.; Phillips, V. J. T. P.; Plant, R. S.; Deluca, A.; Marquet, P.; Stulic, L.; Fuchs, Z.

    2015-01-01

    The research network “Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models” was organized with European funding (COST Action ES0905) for the period of 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. Our main cautions concern the current emphasis on massive observational data analyses and ...

  3. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    Science.gov (United States)

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

    In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the FreeSurfer software. The evaluation involves a measure of dispersion of sulci as well as angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.

  4. Sensitivity Study of Cloud Cover and Ozone Modeling to Microphysics Parameterization

    Science.gov (United States)

    Wałaszek, Kinga; Kryza, Maciej; Szymanowski, Mariusz; Werner, Małgorzata; Ojrzyńska, Hanna

    2017-02-01

    Cloud cover is a significant meteorological parameter influencing the amount of solar radiation reaching the ground surface, and therefore affecting the formation of photochemical pollutants, most of all tropospheric ozone (O3). Because cloud amount and type in meteorological models are resolved by microphysics schemes, adjusting this parameterization is a major factor determining the accuracy of the results. However, verification of cloud cover simulations based on surface data is difficult and yields significant errors. Current meteorological satellite programs provide many high-resolution cloud products, which can be used to verify numerical models. In this study, the Weather Research and Forecasting (WRF) model has been applied over the area of Poland for an episode of 17 June-4 July 2008, when high ground-level ozone concentrations were observed. Four simulations were performed, each with a different microphysics parameterization: Purdue Lin, Eta Ferrier, WRF Single-Moment 6-class, and the Morrison Double-Moment scheme. The results were then evaluated against cloud mask satellite images derived from SEVIRI data. Meteorological variables and O3 concentrations were also evaluated. The results show that the simulation using Morrison Double-Moment microphysics provides the most, and Purdue Lin the least, accurate information on cloud cover and surface meteorological variables for the selected high-ozone episode. Those two configurations were used for WRF-Chem runs, which showed significantly higher O3 concentrations and better model-measurement agreement for the latter.
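
    A model-versus-satellite cloud-mask comparison of the kind described above usually reduces to categorical scores computed from a contingency table on a common grid. The sketch below shows one way such scores could be computed; the score selection and the conversion of both products into boolean masks are illustrative assumptions, not the exact verification procedure used in the study.

```python
import numpy as np

# Categorical comparison between a modeled cloud mask and a satellite-derived
# (e.g., SEVIRI-based) cloud mask regridded to the model grid. True = cloudy.
def cloud_mask_scores(model_mask, sat_mask):
    hits = np.sum(model_mask & sat_mask)
    misses = np.sum(~model_mask & sat_mask)
    false_alarms = np.sum(model_mask & ~sat_mask)
    correct_neg = np.sum(~model_mask & ~sat_mask)
    return {
        "POD": hits / (hits + misses),                    # probability of detection
        "FAR": false_alarms / (hits + false_alarms),      # false alarm ratio
        "ACC": (hits + correct_neg) / model_mask.size,    # fraction correct
        "BIAS": (hits + false_alarms) / (hits + misses),  # frequency bias
    }

# e.g., two random masks just to exercise the function
rng = np.random.default_rng(0)
print(cloud_mask_scores(rng.random((50, 50)) > 0.5, rng.random((50, 50)) > 0.5))
```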

  5. Parameterization and evaluation of sulfate adsorption in a dynamic soil chemistry model

    International Nuclear Information System (INIS)

    Martinson, Liisa; Alveteg, Mattias; Warfvinge, Per

    2003-01-01

    Including sulfate adsorption improves the dynamic behavior of the SAFE model. - Sulfate adsorption was implemented in the dynamic, multi-layer soil chemistry model SAFE. The process is modeled by an isotherm in which sulfate adsorption is considered to be fully reversible and dependent on sulfate concentration as well as pH in soil solution. The isotherm was parameterized by a site-specific series of simple batch experiments at different pH (3.8-5.0) and sulfate concentration (10-260 μmol l⁻¹) levels. Application of the model to the Lake Gaardsjoen roof-covered site shows that including sulfate adsorption improves the dynamic behavior of the model, and sulfate adsorption and desorption delay acidification and recovery of the soil. The modeled adsorbed pool of sulfate at the site reached a maximum level of 700 mmol m⁻² in the late 1980s, well in line with experimental data

  6. Parameterizing Urban Canopy Layer transport in a Lagrangian Particle Dispersion Model

    Science.gov (United States)

    Stöckl, Stefan; Rotach, Mathias W.

    2016-04-01

    The percentage of people living in urban areas is rising worldwide; it crossed 50% in 2007 and is even higher in developed countries. High population density and numerous sources of air pollution in close proximity can lead to health issues. Therefore, it is important to understand the nature of urban pollutant dispersion. In the last decades this field has experienced considerable progress, but the influence of large roughness elements is complex and has not yet been completely described. Hence, this work studied urban particle dispersion close to the source and the ground. It used an existing, steady-state, three-dimensional Lagrangian particle dispersion model, which includes Roughness Sublayer parameterizations of turbulence and flow. The model is valid for convective and neutral to stable conditions and uses the kernel method for concentration calculation. As in most Lagrangian models, its lower boundary is the zero-plane displacement, which means that roughly the lower two-thirds of the mean building height are not included in the model. This missing layer roughly coincides with the Urban Canopy Layer. An earlier work "traps" particles hitting the lower model boundary for a recirculation period, which is calculated under the assumption of a vortex in skimming flow, before "releasing" them again. The authors hypothesize that improving the lower boundary condition by including Urban Canopy Layer transport could improve model predictions. This was tested herein by not only trapping the particles, but also advecting them with a mean, parameterized flow in the Urban Canopy Layer. The model now calculates the trapping period based on either recirculation due to vortex motion in skimming flow regimes or the vertical velocity if no vortex forms, depending on the incidence angle of the wind on a randomly chosen street canyon. The influence of this modification, as well as the model's sensitivity to parameterization constants, was investigated. To reach this goal, the model was

  7. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    Science.gov (United States)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits the application scope of these packages. The purpose of this paper is to develop a module to model parameterized geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that in the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
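
    The core of particle navigation in a parameterized geometry like the one described above is the distance from the particle position to the next bounding surface, which for quadratic (quadric) surfaces reduces to solving a quadratic equation along the flight direction. The sketch below illustrates that step on the CPU; the names and structure are illustrative and unrelated to the authors' GPU implementation.

```python
import numpy as np

# Distance along a particle's direction to a quadric surface defined by
# x^T Q x + P.x + R = 0 (illustrative CPU sketch of the navigation step).
def distance_to_quadric(pos, direc, Q, P, R):
    """Return the smallest positive distance to the surface, or None."""
    a = direc @ Q @ direc
    b = 2.0 * (pos @ Q @ direc) + P @ direc
    c = pos @ Q @ pos + P @ pos + R
    if abs(a) < 1e-12:                       # ray is effectively linear in the surface
        return -c / b if abs(b) > 1e-12 and -c / b > 0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                          # no intersection along this line
    sqrt_disc = np.sqrt(disc)
    roots = [(-b - sqrt_disc) / (2 * a), (-b + sqrt_disc) / (2 * a)]
    positive = [t for t in roots if t > 1e-9]
    return min(positive) if positive else None

# e.g., distance from the origin along +x to a sphere of radius 2 centered at (5, 0, 0)
Q = np.eye(3)
P = np.array([-10.0, 0.0, 0.0])
R = 25.0 - 4.0
print(distance_to_quadric(np.zeros(3), np.array([1.0, 0.0, 0.0]), Q, P, R))  # ~3.0
```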

  8. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

    Full Text Available The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of historic and future LCLUC effects on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size-dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to ~50-m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather

  9. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Directory of Open Access Journals (Sweden)

    Courtney L Davis

    Full Text Available We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that, at a group level, a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate at which Shigella migrates into the lamina propria or epithelium, 2) the rate at which memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.

  10. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design

    Science.gov (United States)

    Wahid, Rezwanul; Toapanta, Franklin R.; Simon, Jakub K.; Sztein, Marcelo B.

    2018-01-01

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella′s lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella′s LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine. PMID:29304144
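
    The Latin hypercube sampling step used in both versions of this study can be illustrated compactly: draw space-filling samples of the parameter vector, run the model for each set, and retain the best-fitting parameterizations. The sketch below uses SciPy's quasi-Monte Carlo sampler with a stand-in model; the toy model, bounds and goodness-of-fit measure are placeholders, not the published immune model.

```python
import numpy as np
from scipy.stats import qmc

def toy_model(params, t):
    """Stand-in for an immune-response model: returns a predicted antibody curve."""
    growth, decay = params
    return growth * t * np.exp(-decay * t)

def lhs_fit(obs_t, obs_y, bounds, n_samples=1000, keep=20, seed=0):
    """Latin hypercube search for the best-fitting parameter sets."""
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    unit = sampler.random(n=n_samples)                     # samples in [0, 1)^d
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    params = qmc.scale(unit, lo, hi)                       # rescale to parameter bounds
    sse = np.array([np.sum((toy_model(p, obs_t) - obs_y) ** 2) for p in params])
    best = np.argsort(sse)[:keep]                          # retain best-fit parameterizations
    return params[best], sse[best]

t = np.linspace(0.0, 30.0, 31)
obs = toy_model([2.0, 0.2], t) + np.random.default_rng(1).normal(0.0, 0.5, t.size)
print(lhs_fit(t, obs, bounds=[(0.1, 5.0), (0.01, 1.0)])[0][0])
```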

  11. The role of aerosols in cloud drop parameterizations and its applications in global climate models

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, C.C.; Penner, J.E. [Lawrence Livermore National Lab., CA (United States)

    1996-04-01

    The characteristics of the cloud drop size distribution near cloud base are initially determined by aerosols that serve as cloud condensation nuclei and the updraft velocity. We have developed parameterizations relating cloud drop number concentration to aerosol number and sulfate mass concentrations and used them in a coupled global aerosol/general circulation model (GCM) to estimate the indirect aerosol forcing. The global aerosol model made use of our detailed emissions inventories for the amount of particulate matter from biomass burning sources and from fossil fuel sources as well as emissions inventories of the gas-phase anthropogenic SO2. This work is aimed at validating the coupled model with the Atmospheric Radiation Measurement (ARM) Program measurements and assessing the possible magnitude of the aerosol-induced cloud effects on climate.

  12. Towards product design automation based on parameterized standard model with diversiform knowledge

    Science.gov (United States)

    Liu, Wei; Zhang, Xiaobing

    2017-04-01

    Product standardization based on CAD software is an effective way to improve design efficiency. In the past, research and development on standardization mainly focused on the component level, and the standardization of the entire product as a whole was rarely taken into consideration. In this paper, the size and structure of 3D product models are both driven by Excel datasheets, based on which a parameterized model library is established. Diversiform knowledge, including associated parameters and default properties, is embedded into the templates in advance to simplify their reuse. Through simple operations, we can obtain the correct product with finished 3D models, including single parts or complex assemblies. Two examples are illustrated later to validate the idea, which will greatly improve design efficiency.

  13. Assessing Sexual Dichromatism: The Importance of Proper Parameterization in Tetrachromatic Visual Models.

    Directory of Open Access Journals (Sweden)

    Pierre-Paul Bitton

    Full Text Available Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1
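
    The receptor-noise-limited contrast at the heart of the sensitivity analysis above has a standard closed form for a tetrachromat (Vorobyev and Osorio 1998): log quantum-catch ratios weighted by receptor-specific noise. The sketch below is a minimal implementation of that formula; the example noise model (a Weber fraction assigned to the most abundant cone class and scaled by relative densities) and the input values are assumptions, and real quantum catches would be computed from reflectance spectra, the illuminant, ocular media and oil-droplet transmission.

```python
import numpy as np
from itertools import combinations

def delta_s(qa, qb, rel_density, weber_lws=0.1):
    """Chromatic distance (in JNDs) between colors A and B for a tetrachromat.

    qa, qb       quantum catches of the 4 cone classes for colors A and B
    rel_density  relative cone densities (assumed ordered VS/S/M/L, L last)
    weber_lws    Weber fraction assigned to the most abundant (L) cone class
    """
    qa, qb, dens = map(np.asarray, (qa, qb, rel_density))
    df = np.log(qa / qb)                           # receptor signals (Fechner's law)
    e = weber_lws * np.sqrt(dens[-1] / dens)       # noise per cone class
    # numerator: for each pair of cone classes, noise weight times the signal
    # difference of the complementary pair of classes
    complements = [(3, 2), (3, 1), (2, 1), (3, 0), (2, 0), (1, 0)]
    num = sum((e[i] * e[j]) ** 2 * (df[k] - df[l]) ** 2
              for (i, j), (k, l) in zip(combinations(range(4), 2), complements))
    den = sum((e[i] * e[j] * e[k]) ** 2 for i, j, k in combinations(range(4), 3))
    return np.sqrt(num / den)

# e.g., catches for two plumage patches and densities VS:S:M:L = 1:2:2:4
print(delta_s([0.10, 0.20, 0.35, 0.60], [0.12, 0.19, 0.30, 0.55], [1, 2, 2, 4]))
```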

  14. A simplified PDF parameterization of subgrid-scale clouds and turbulence for cloud-resolving models

    Science.gov (United States)

    Bogenschutz, Peter A.; Krueger, Steven K.

    2013-06-01

    Over the past decade a new type of global climate model (GCM) has emerged, known as the multiscale modeling framework (MMF). Colorado State University's MMF represents a coupling between the Community Atmosphere Model and the System for Atmospheric Modeling (SAM), which serves as the cloud-resolving model (CRM) that replaces traditionally parameterized convection in GCMs. However, due to the high computational expense of the MMF, the grid size of the embedded CRM is typically limited to 4 km for long-term climate simulations. With grid sizes this coarse, shallow convective processes and turbulence cannot be resolved and must still be parameterized within the context of the embedded CRM. This paper describes a computationally efficient closure that aims to better represent turbulence and shallow convective processes in coarse-grid CRMs. The closure is based on the assumed probability density function (PDF) technique, serving as the subgrid-scale (SGS) condensation scheme and turbulence closure, and employs a diagnostic method to determine the needed input moments. This paper describes the scheme, as well as the formulation of the eddy length, which is empirically determined from large eddy simulation (LES) data. CRM tests utilizing the closure yield good results when compared to LESs for two trade-wind cumulus cases, a transition from stratocumulus to cumulus, and continental cumulus. This new closure improves the representation of clouds through the use of the SGS condensation scheme, and of turbulence through a better representation of the buoyancy flux and dissipation rates. In addition, the scheme reduces the sensitivity of CRM simulations to horizontal grid spacing. The improvement when compared to the standard low-order closure configuration of the SAM is especially striking.
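
    The assumed-PDF idea underlying the closure described above can be illustrated with its simplest member, a single Gaussian for the subgrid saturation excess, for which the cloud fraction and mean condensate are analytic. The sketch below shows that limiting case only; the actual scheme uses an assumed double-Gaussian with diagnosed input moments, so the function here is a didactic stand-in.

```python
from math import erf, exp, sqrt, pi

# Didactic single-Gaussian assumed-PDF condensation: cloud fraction and mean
# condensate from the grid-mean saturation excess and its subgrid standard deviation.
def sgs_condensation(qt_mean, qsat, sigma_s):
    """All inputs in kg/kg; returns (cloud_fraction, mean_condensate)."""
    s_mean = qt_mean - qsat                      # mean saturation excess
    if sigma_s <= 0.0:                           # degenerate all-or-nothing limit
        return (1.0, s_mean) if s_mean > 0.0 else (0.0, 0.0)
    z = s_mean / (sqrt(2.0) * sigma_s)
    cloud_fraction = 0.5 * (1.0 + erf(z))
    mean_condensate = s_mean * cloud_fraction + sigma_s * exp(-z * z) / sqrt(2.0 * pi)
    return cloud_fraction, mean_condensate

# e.g., a grid box that is on average just subsaturated but partly cloudy
print(sgs_condensation(qt_mean=9.8e-3, qsat=10.0e-3, sigma_s=0.5e-3))
```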

  15. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in the overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the information necessary to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different, owing to much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations

  16. Electrochemical-mechanical coupled modeling and parameterization of swelling and ionic transport in lithium-ion batteries

    Science.gov (United States)

    Sauerteig, Daniel; Hanselmann, Nina; Arzberger, Arno; Reinshagen, Holger; Ivanov, Svetlozar; Bund, Andreas

    2018-02-01

    The intercalation- and aging-induced volume changes of lithium-ion battery electrodes lead to significant mechanical pressure or volume changes at the cell and module level. As the correlation between the electrochemical and mechanical performance of lithium-ion batteries at the nano and macro scales requires a comprehensive and multidisciplinary approach, physical modeling accounting for chemical and mechanical phenomena during operation is very useful for battery design. Since the introduced fully coupled physical model requires proper parameterization, this work also focuses on identifying an appropriate mathematical representation of compressibility as well as of the ionic transport in the porous electrodes and the separator. The ionic transport is characterized by electrochemical impedance spectroscopy (EIS) using symmetric pouch cells comprising a LiNi1/3Mn1/3Co1/3O2 (NMC) cathode, a graphite anode and a polyethylene separator. The EIS measurements are carried out at various mechanical loads. The observed decrease of the ionic conductivity reveals a significant transport limitation at high pressures. The experimentally obtained data are applied as input to the electrochemical-mechanical model of a prismatic 10 Ah cell. Our computational approach accounts for intercalation-induced electrode expansion, stress generation caused by mechanical boundaries, compression of the electrodes and the separator, outer expansion of the cell and, finally, the influence of the ionic transport within the electrolyte.

  17. Using polarimetric radar observations and probabilistic inference to develop the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), a novel microphysical parameterization framework

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.

    2016-12-01

    Microphysical parameterization schemes have reached an impressive level of sophistication: numerous prognostic hydrometeor categories, and either size-resolved (bin) particle size distributions or multiple prognostic moments of the size distribution. Yet, uncertainty in the model representation of microphysical processes and the effects of microphysics on numerical simulation of weather has not shown an improvement commensurate with the advanced sophistication of these schemes. We posit that this may be caused by unconstrained assumptions of these schemes, such as ad hoc parameter value choices and structural uncertainties (e.g., the choice of a particular form for the size distribution). We present work on the development and observational constraint of a novel microphysical parameterization approach, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), which seeks to address these sources of uncertainty. Our framework avoids unnecessary a priori assumptions and instead relies on observations to provide probabilistic constraint of the scheme structure and sensitivities to environmental and microphysical conditions. We harness the rich microphysical information content of polarimetric radar observations to develop and constrain BOSS within a Bayesian inference framework using a Markov chain Monte Carlo sampler (see Kumjian et al., this meeting, for details on development of an associated polarimetric forward operator). Our work shows how knowledge of microphysical processes is provided by polarimetric radar observations of diverse weather conditions, and which processes remain highly uncertain, even after considering observations.
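
    The Bayesian machinery referred to above can be reduced, for illustration, to a random-walk Metropolis sampler that draws process-rate parameters consistent with observations through a forward operator. The sketch below is a generic sampler with a placeholder Gaussian likelihood and a user-supplied forward model; it is not the BOSS code or its polarimetric operator.

```python
import numpy as np

def log_likelihood(params, obs, forward_model, obs_err):
    """Gaussian misfit between simulated and observed quantities (illustrative)."""
    sim = forward_model(params)
    return -0.5 * np.sum(((sim - obs) / obs_err) ** 2)

def metropolis(obs, forward_model, p0, step, n_iter=10000, obs_err=1.0, seed=0):
    """Random-walk Metropolis sampling of the parameter posterior."""
    rng = np.random.default_rng(seed)
    chain = [np.asarray(p0, dtype=float)]
    logp = log_likelihood(chain[-1], obs, forward_model, obs_err)
    for _ in range(n_iter):
        proposal = chain[-1] + rng.normal(scale=step, size=len(p0))
        logp_new = log_likelihood(proposal, obs, forward_model, obs_err)
        if np.log(rng.random()) < logp_new - logp:   # accept with Metropolis ratio
            chain.append(proposal)
            logp = logp_new
        else:
            chain.append(chain[-1])
    return np.array(chain)                           # posterior samples of the parameters
```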

  18. Applying the Triangle Method for the parameterization of irrigated areas as input for spatially distributed hydrological modeling - Assessing future drought risk in the Gaza Strip (Palestine).

    Science.gov (United States)

    Gampe, David; Ludwig, Ralf; Qahman, Khalid; Afifi, Samir

    2016-02-01

    In the Mediterranean region, particularly in the Gaza Strip, an increased risk of drought is among the major concerns related to climate change. The impacts of climate change on water availability, drought risk and food security can be assessed by means of hydro-climatological modeling. However, the region is prone to severe observation data scarcity, which limits the potential for robust model parameterization, calibration and validation. In this study, the physically based, spatially distributed hydrological model WaSiM is parameterized and evaluated using satellite imagery to assess hydrological quantities. The Triangle Method estimates actual evapotranspiration (ETR) through the Normalized Difference Vegetation Index (NDVI) and land surface temperature (LST) provided by Landsat TM imagery. The spatially distributed evapotranspiration derived in this way is then used in two ways: first, a subset of the imagery is used to parameterize the irrigation module of WaSiM, and second, withheld scenes are applied to evaluate the performance of the hydrological model in the data-scarce study area. The results show acceptable overall correlation with the validation scenes (r=0.53) and an improvement over the usual irrigation parameterization scheme, which uses land use information exclusively. This model setup is then applied for future drought risk assessment in the Gaza Strip using a small ensemble of four regional climate projections for the period 2041-2070. Hydrological modeling reveals an increased risk of drought, assessed with an evapotranspiration index, compared to the reference period 1971-2000. Current irrigation procedures cannot maintain agricultural productivity under future conditions without adaptation. Copyright © 2015 Elsevier B.V. All rights reserved.
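
    The Triangle Method used above rests on the NDVI-LST scatter: for each vegetation class, pixels near the cold (wet) edge evaporate at close to the potential rate and pixels near the warm (dry) edge hardly at all. The sketch below interpolates an evaporative fraction between those edges; the binning, percentile-based edge definition and linear interpolation are simplifying assumptions rather than the exact procedure applied to the Landsat scenes.

```python
import numpy as np

# NDVI-LST triangle sketch: per NDVI class, locate the warm (dry) and cold (wet)
# edges and interpolate an evaporative fraction (0..1) for every pixel.
def triangle_evaporative_fraction(ndvi, lst, n_bins=20):
    ndvi = np.asarray(ndvi, float)
    lst = np.asarray(lst, float)
    ef = np.full_like(lst, np.nan)                    # pixels in sparse bins stay NaN
    edges = np.linspace(np.nanmin(ndvi), np.nanmax(ndvi), n_bins + 1)
    for i in range(n_bins):
        in_bin = (ndvi >= edges[i]) & (ndvi < edges[i + 1])
        if np.sum(in_bin) < 10:
            continue
        t_dry = np.nanpercentile(lst[in_bin], 98)     # warm (dry) edge of the triangle
        t_wet = np.nanpercentile(lst[in_bin], 2)      # cold (wet) edge
        ef[in_bin] = np.clip((t_dry - lst[in_bin]) / (t_dry - t_wet), 0.0, 1.0)
    return ef    # multiplied by available energy (or PET) this yields actual ET
```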

  19. Sensitivity of Drought Processes to Runoff Parameterizations in East Asia with the Community Land Model

    Science.gov (United States)

    Kim, J. B.; Um, M. J.; Kim, Y.

    2016-12-01

    Drought is one of the most powerful and extensive natural disasters and causes the highest annual average damage among all disaster types. In East Asia, where over one fifth of the world's population lives, drought has had significant impacts and is projected to continue to do so. Therefore, it is critical to reasonably simulate drought in the region, and this study focuses on the reproducibility of drought with the NCAR CLM. In this study, we examine the propagation of drought processes with different runoff parameterizations of CLM in East Asia. Two different schemes are used, TOPMODEL-based and VIC-based, which differ in their surface and subsurface runoff parameterizations. CLM with the different runoff schemes is driven with two atmospheric forcings, from CRU/NCEP and NCEP reanalysis data. Specifically, the propagation of drought from meteorological and agricultural to hydrologic drought is investigated with different drought indices, estimated not only from model-simulated results but also from observational data. The indices include the standardized precipitation evapotranspiration index (SPEI), standardized runoff index (SRI) and standardized soil moisture index (SSMI). Based on these indices, drought characteristics such as intensity, frequency and spatial extent are investigated. Finally, such drought assessments may reveal possible model deficiencies in East Asia. Acknowledgements: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2015R1C1A2A01054800) and the Korea Meteorological Administration R&D Program under Grant KMIPA 2015-6180.
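
    The standardized indices listed above (SPEI, SRI, SSMI) share one construction: for a given calendar month and accumulation period, the variable is transformed so that its climatological distribution maps onto a standard normal. A minimal nonparametric version of that transform is sketched below; the plotting position and the month-by-month application are illustrative choices, and parametric fits (e.g., a log-logistic distribution for SPEI) are common alternatives.

```python
import numpy as np
from scipy import stats

# Nonparametric standardized index (SSMI-like when applied to soil moisture):
# rank the values of one calendar month across all years and map the empirical
# probabilities to standard-normal quantiles.
def standardized_index(values):
    """values: 1-D array for a single calendar month across all years."""
    values = np.asarray(values, float)
    n = len(values)
    ranks = stats.rankdata(values)               # 1..n, ties averaged
    prob = (ranks - 0.44) / (n + 0.12)           # Gringorten plotting position
    return stats.norm.ppf(prob)                  # negative values = drier than normal

# e.g., 30 years of July soil moisture (illustrative random data)
print(standardized_index(np.random.default_rng(2).gamma(4.0, 0.05, 30)).round(2))
```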

  20. New parameterization of external and induced fields in geomagnetic field modeling, and a candidate model for IGRF 2005

    DEFF Research Database (Denmark)

    Olsen, Nils; Sabaka, T.J.; Lowes, F.

    2005-01-01

    When deriving spherical harmonic models of the Earth's magnetic field, low-degree external field contributions are traditionally considered by assuming that their expansion coefficient q₁⁰ varies linearly with the Dst index, while induced contributions are considered assuming a constant ratio ... q₁⁰ for each of the 67 months of Orsted and CHAMP data that have been used. We discuss the advantage of this new parameterization of external and induced fields for geomagnetic field modeling, and describe the derivation of candidate models for IGRF 2005.

  1. Parameterization and Uncertainty Analysis of SWAT model in Hydrological Simulation of Chaohe River Basin

    Science.gov (United States)

    Jie, M.; Zhang, J.; Guo, B. B.

    2017-12-01

    As a typical distributed hydrological model, the SWAT model also presents a challenge in calibrating parameters and analyzing their uncertainty. This paper chooses the Chaohe River Basin, China, as the study area. Through the establishment of the SWAT model and the loading of the DEM data of the Chaohe River Basin, the watershed is automatically divided into several sub-basins. Land use, soil and slope are analyzed on the basis of the sub-basins and the hydrological response units (HRUs) of the study area are calculated; after running the SWAT model, the simulated runoff values in the watershed are obtained. On this basis, weather data and the known daily runoff of three hydrological stations, combined with the SWAT-CUP automatic program and the manual adjustment method, are used for the multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. Through the sensitivity analysis, calibration and uncertainty study of SWAT, the results indicate that the parameterization of the hydrological characteristics of the Chaohe River is successful and feasible and can be used to simulate the Chaohe River Basin.
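
    The GLUE analysis mentioned above can be summarized as: sample parameter sets, score each simulation with a likelihood measure, discard "non-behavioral" sets below a threshold, and use the survivors to bound the predictions. The sketch below illustrates that loop with Nash-Sutcliffe efficiency as the likelihood; the uniform sampling, the threshold value and the unweighted percentile bounds are simplifications (full GLUE typically uses likelihood-weighted quantiles), and run_model stands in for a SWAT run.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a simulated hydrograph against observations."""
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(run_model, obs, bounds, n_samples=5000, threshold=0.5, seed=1):
    """GLUE-style sampling: behavioral parameter sets and 5-95% prediction bands."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    params = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    sims = np.array([run_model(p) for p in params])      # each run returns a hydrograph
    scores = np.array([nse(s, obs) for s in sims])
    behavioral = scores > threshold                      # non-behavioral sets discarded
    lower = np.percentile(sims[behavioral], 5, axis=0)   # unweighted bands for simplicity
    upper = np.percentile(sims[behavioral], 95, axis=0)
    return params[behavioral], scores[behavioral], (lower, upper)
```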

  2. Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models: COST Action ES0905 Final Report

    Directory of Open Access Journals (Sweden)

    Jun-Ichi Yano

    2014-12-01

    Full Text Available The research network “Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models” was organized with European funding (COST Action ES0905) for the period of 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. Our main cautions concern the current emphasis on massive observational data analyses and process studies. The closure and the entrainment–detrainment problems are identified as the two highest priorities for convection parameterization under the mass-flux formulation. The need for a drastic change in the current European research culture, as concerns policies and funding, is emphasized, so as not to further deplete the visions of the European researchers focusing on these basic issues.

  3. Development of a Mobile Dust Source Parameterization Using an Inverse Lagrangian Stochastic Modeling Technique

    Science.gov (United States)

    McAlpine, Jerrold D.

    In arid regions, mechanical disturbances along the desert floor can result in large fluxes of dust particles into the atmosphere. Rotorcraft operation near the surface may have the greatest potential for dust entrainment per vehicle. Because of this, there is a need for efficient tools to estimate the risk of air quality and visibility impacts in the neighborhood of rotorcraft operating near the desert surface. In this study, a set of parameterized models was developed to form a multi-component modeling system to simulate the entrainment and dispersion of dust from a rotorcraft wake. A simplified scheme utilizing momentum theory was applied to predict the shear stress at the ground under the rotorcraft. Stochastic dust emission algorithms were used to predict the PM10 emission rate from the wake. The distribution of dust emission from the wake was assigned at the walls of a box volume that encapsulates the wake. The distribution was determined using the results of an inverse Lagrangian stochastic particle dispersion modeling study, using a dataset from a full-scale experiment. All of the elements were put together into a model that simulates the dispersion of PM10 dust from a rotorcraft wake. Downwind concentrations of PM10 estimated using the multi-component modeling system compared well to a set of experimental measurements.

  4. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are the water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. The parameterization of the moisture parameters was studied in depth (i.e., on 2 h to monthly time scales) using GPSPWV, Td and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C⁻¹ to 0.106 °C⁻¹ (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
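
    The simplest of the empirical relations evaluated above follows from assuming that water vapor density decays exponentially with a scale height Hc, so the column integral is just the surface vapor density times Hc; with Hc near 2 km this yields a PWV-to-Es0 slope of roughly 1.6-2 mm per hPa, consistent with the slopes quoted in the abstract. The sketch below implements that back-of-the-envelope model; the constants, the example scale height and the example inputs are illustrative, not the calibrated coefficients of any of the eleven models.

```python
# PWV from surface vapor pressure assuming an exponential water-vapor profile
# with scale height Hc (illustrative back-of-the-envelope model).
def pwv_from_surface(es0_hpa, t_kelvin, hc_m=2000.0):
    """Precipitable water vapor in mm from surface vapor pressure (hPa),
    surface temperature (K) and an assumed water-vapor scale height (m)."""
    R_V = 461.5                                   # gas constant for water vapor, J kg-1 K-1
    rho_v0 = es0_hpa * 100.0 / (R_V * t_kelvin)   # surface vapor density, kg m-3
    return rho_v0 * hc_m                          # kg m-2, numerically equal to mm

# e.g., a dry high-altitude afternoon: Es0 ~ 3 hPa, T ~ 275 K  ->  ~4.7 mm
print(pwv_from_surface(3.0, 275.0))
```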

  5. Markov model-based polymer assembly from force field-parameterized building blocks.

    Science.gov (United States)

    Durmaz, Vedat

    2015-03-01

    Conventional by-hand construction and parameterization of a polymer model for the purpose of molecular simulations can quickly become very work-intensive and time-consuming. Using the example of polyglycerol, I present a polymer decomposition strategy yielding a set of five monomeric residues that are convenient for instantaneous assembly and subsequent force field simulation of a polyglycerol polymer model. Force field parameters have been developed in accordance with the classical Amber force field. Partial charges of each unit were fitted to the electrostatic potential using quantum-chemical methods and slightly modified in order to guarantee a neutral total polymer charge. In contrast to similarly constructed models of amino acid and nucleotide sequences, the glycerol building blocks may yield an arbitrary degree of branching depending on the underlying probabilistic model. The iterative development of the overall structure, as well as the ratio of linear to branching units, is controlled by a simple Markov model, which is presented together with a few algorithmic details. The resulting polymer is highly suitable for classical explicit-water molecular dynamics simulations at the atomistic level after a structural relaxation step. Moreover, the decomposition strategy presented here can easily be adapted to many other (co)polymers.
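
    The Markov-chain growth idea described above can be illustrated with a toy transition table that decides, at each step, whether the next glycerol unit is linear, branching or terminating. The residue labels and probabilities below are placeholders, and a full implementation would track open attachment points as a tree rather than the flat sequence used here.

```python
import random

# Toy Markov-chain growth sketch: the next building block is drawn from a
# transition table that sets the ratio of linear to branching units.
TRANSITIONS = {
    "start":  {"linear": 0.7, "branch": 0.3},
    "linear": {"linear": 0.7, "branch": 0.2, "terminal": 0.1},
    "branch": {"linear": 0.6, "branch": 0.2, "terminal": 0.2},
}

def grow_chain(max_units=50, seed=42):
    random.seed(seed)
    sequence, state = [], "start"
    while len(sequence) < max_units:
        options = TRANSITIONS[state]
        state = random.choices(list(options), weights=list(options.values()))[0]
        if state == "terminal":
            break
        # a real assembly would treat each branch unit as an extra growth point (a tree)
        sequence.append(state)
    return sequence

print(grow_chain(20))
```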

  6. Multiscale parameterization of LIDAR elevations for reducing complexity in hydraulic models of coastal urban areas

    Science.gov (United States)

    Cheung, Sweungwon; Slatton, K. Clint; Cho, Hyun-Chong; Dean, Robert G.

    2011-01-01

    Airborne light detection and ranging (LIDAR) technology now makes it possible to sample the Earth's surface with point spacings well below 1 m. It is, however, time consuming, costly, and technically challenging to directly use very high resolution LIDAR data for hydraulic modeling because of the computational requirements associated with solving fluid dynamics equations over complex boundary conditions in large data sets. For high relief terrain and urban areas, using coarse digital elevation models (DEMs) can cause significant degradation in hydraulic modeling, particularly when artificial obstructions, such as buildings, mask spatial correlations between terrain points. In this paper we present a strategy to reduce the computational complexity in the estimation of surface water discharge through a decomposition of the DEM data, wherein features have different characteristic spatial frequencies. Though the optimal DEM scale for a particular application will ultimately be decided by the user's tolerance for error, we present guidelines to choose a proper scale by balancing computer memory usage and accuracy. We also suggest a method to parameterize man-made structures, such as buildings in hydraulic modeling, to efficiently and accurately account for their effects on surface water discharge.

  7. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  8. Incorporating the subgrid-scale variability of clouds in the autoconversion parameterization in a large-scale model

    Science.gov (United States)

    Weber, Torsten; Quaas, Johannes

    2010-05-01

    Precipitation formation in warm clouds is mainly governed by the autoconversion rate, a highly nonlinear process. Large-scale models commonly calculate the autoconversion rate using the grid-cell mean of liquid cloud water, which introduces a strong low bias because clouds, and therefore liquid cloud water, are inhomogeneously distributed. The parameterized autoconversion is thus artificially tuned so that accumulated large-scale precipitation matches the observations. Here, we revise the parameterization of the autoconversion rate to incorporate the subgrid-scale variability of clouds, using the horizontal subgrid-scale distribution of the liquid cloud water mixing ratio derived from the subgrid-scale variability scheme for water vapor and cloud condensate. This scheme is employed in the ECHAM5 climate model in order to calculate the horizontal cloud fraction by means of a probability density function (PDF) of the total water mixing ratio. The revised parameterization now also ensures consistency between the calculation of the horizontal cloud fraction and the precipitation formation. An introduction to the improved parameterization and first results of the evaluation of the precipitation rate on a global scale will be presented. Specifically, precipitation and vertically integrated liquid cloud water estimated by the model are compared with observational data derived from ground-based measurements and satellite instruments.
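
    The low bias described above can be quantified with an enhancement factor: for an autoconversion rate proportional to q_c^β, the grid-mean rate exceeds the rate computed from the grid-mean cloud water by E = Γ(ν+β)/(Γ(ν)ν^β) when the subgrid cloud water follows a gamma distribution with shape parameter ν. The sketch below evaluates this factor; the gamma assumption and the Khairoutdinov-Kogan-type exponent are illustrative choices, not necessarily the PDF or rate used in the revised ECHAM5 scheme.

```python
from scipy.special import gamma as gamma_fn

# Subgrid enhancement factor E = E[q^beta] / (E[q])^beta for a gamma-distributed
# cloud water mixing ratio with shape parameter nu (nu = 1 / relative variance).
def enhancement_factor(beta, nu):
    return gamma_fn(nu + beta) / (gamma_fn(nu) * nu ** beta)

beta = 2.47                   # Khairoutdinov-Kogan-type autoconversion exponent (illustrative)
for nu in (1.0, 2.0, 4.0):    # more homogeneous clouds -> larger nu -> smaller correction
    print(nu, round(enhancement_factor(beta, nu), 2))
```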

  9. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guang J. [Univ. of California, San Diego, CA (United States)

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  10. Sensitivity of simulated convection-driven stratosphere-troposphere exchange in WRF-Chem to the choice of physical and chemical parameterization

    Science.gov (United States)

    Phoenix, Daniel B.; Homeyer, Cameron R.; Barth, Mary C.

    2017-08-01

    Tropopause-penetrating convection is capable of rapidly transporting air from the lower troposphere to the upper troposphere and lower stratosphere (UTLS), where it can have important impacts on chemistry, the radiative budget, and climate. However, obtaining in situ measurements of convection and convective transport is difficult and such observations are historically rare. Modeling studies, on the other hand, offer the advantage of providing output related to the physical, dynamical, and chemical characteristics of storms and their environments at fine spatial and temporal scales. Since these characteristics of simulated convection depend on the chosen model design, we examine the sensitivity of simulated convective transport to the choice of physical (bulk microphysics or BMP and planetary boundary layer or PBL) and chemical parameterizations in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem). In particular, we simulate multiple cases where in situ observations are available from the recent (2012) Deep Convective Clouds and Chemistry (DC3) experiment. Model output is evaluated using ground-based radar observations of each storm and in situ trace gas observations from two aircraft operated during the DC3 experiment. Model results show measurable sensitivity of the physical characteristics of a storm and the transport of water vapor and additional trace gases into the UTLS to the choice of BMP. The physical characteristics of the storm and transport of insoluble trace gases are largely insensitive to the choice of PBL scheme and chemical mechanism, though several soluble trace gases (e.g., SO2, CH2O, and HNO3) exhibit some measurable sensitivity.

  11. Parameterization of rain induced surface roughness and its validation study using a third generation wave model

    Science.gov (United States)

    Rajesh Kumar, R.; Prasad Kumar, B.; Bala Subrahamanyam, D.

    2009-09-01

    The effect of raindrops striking the water surface and their role in modifying the prevailing sea-surface roughness is investigated. The work presents a new theoretical formulation, developed using dimensional analysis, to study rain-induced stress on the sea surface. Rain parameters include drop size, rain intensity and rain duration. The influences of these rain parameters on young and mature waves were studied separately under varying wind speeds, rain intensities and rain durations. Contrary to the popular belief that rain only attenuates surface waves, this study also points out that rain duration under certain conditions can contribute to wave growth at high wind speeds. Strong winds in conjunction with high rain intensity enhance the horizontal stress component on the sea surface, leading to wave growth. Previous studies based on laboratory experiments and dimensional analysis do not account for rain duration when attempting to parameterize sea-surface roughness. This study highlights rain duration as an important parameter modifying sea-surface roughness. Qualitative as well as quantitative support for the developed formulation is established through critical validation against the reports of several researchers and satellite measurements for an extreme cyclonic event in the Indian Ocean. Based on skill assessment, it is suggested that the present formulation is superior to those of prior studies. Numerical experiments and validation performed by incorporating the formulation in the state-of-the-art WAM wave model show the importance of treating rain-induced surface roughness as an essential prerequisite for ocean wave modeling studies.

  12. Parameterizing deep water percolation improves subsurface temperature simulations by a multilayer firn model

    Science.gov (United States)

    Marchenko, Sergey; van Pelt, Ward J. J.; Claremar, Björn; Pohjola, Veijo; Pettersson, Rickard; Machguth, Horst; Reijmer, Carleen

    2017-03-01

    Deep preferential percolation of melt water in snow and firn brings water lower along the vertical profile than a laterally homogeneous wetting front. This widely recognized process is an important source of uncertainty in simulations of subsurface temperature, density and water content in seasonal snow and in firn packs on glaciers and ice sheets. However, observation and quantification of preferential flow is challenging and therefore it is not accounted for by most contemporary snow/firn models. Here we use temperature measurements in the accumulation zone of Lomonosovfonna, Svalbard, made in April 2012-2015 using multiple thermistor strings, to describe the process of water percolation in snow and firn. Effects of water flow through the snow and firn profile are further explored using a coupled surface energy balance - firn model forced by the output of the regional climate model WRF. In situ air temperature, radiation and surface height change measurements are used to constrain the surface energy and mass fluxes. To account for the effects of preferential water flow in snow and firn we test a set of depth-dependent functions allocating a certain fraction of the melt water available at the surface to each snow/firn layer. Experiments are performed for a range of characteristic percolation depths, and results indicate a reduction in the root mean square difference between modeled and measured temperature by up to a factor of two compared to the results from the default water infiltration scheme. This illustrates the significance of accounting for preferential water percolation when simulating subsurface conditions. The suggested approach to parameterization of the preferential water flow requires low additional computational cost and can be implemented in layered snow/firn models applied both at local and regional scales, for distributed domains with multiple mesh points.
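
    The specific depth-dependent allocation functions tested are not given in this record. As a minimal sketch of the approach, assuming a hypothetical exponential weighting controlled by a characteristic percolation depth, the available surface melt could be distributed over the subsurface layers as follows:

```python
import numpy as np

def allocate_melt(melt_m_we, layer_depths_m, z_star_m=4.0):
    """Distribute surface melt (m w.e.) over subsurface layers using an
    exponentially decaying weight exp(-z/z*), normalized to conserve mass.
    z_star_m is the characteristic percolation depth (illustrative value only)."""
    z = np.asarray(layer_depths_m, dtype=float)
    weights = np.exp(-z / z_star_m)
    weights /= weights.sum()          # normalize so the total melt is conserved
    return melt_m_we * weights

if __name__ == "__main__":
    depths = np.arange(0.25, 10.0, 0.5)    # layer mid-depths (m)
    water = allocate_melt(0.02, depths)     # 20 mm w.e. of surface melt
    print(water.round(5))
```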

  13. Model parameterization to simulate and compare the PAR absorption potential of two competing plant species.

    Science.gov (United States)

    Bendix, Jörg; Silva, Brenner; Roos, Kristin; Göttlicher, Dietrich Otto; Rollenbeck, Rütger; Nauss, Thomas; Beck, Erwin

    2010-05-01

    Mountain pastures dominated by the pasture grass Setaria sphacelata in the Andes of southern Ecuador are heavily infested by southern bracken (Pteridium arachnoideum), a major problem for pasture management. Field observations suggest that bracken might outcompete the grass due to its competitive strength with regard to the absorption of photosynthetically active radiation (PAR). To understand the PAR absorption potential of both species, the aims of the current paper are to (1) parameterize a radiation scheme of a two-big-leaf model by deriving structural (LAI, leaf angle parameter) and optical (leaf albedo, transmittance) plant traits for average individuals from field surveys, (2) initialize the properly parameterized radiation scheme with realistic global irradiation conditions of the Rio San Francisco Valley in the Andes of southern Ecuador, and (3) compare the PAR absorption capabilities of both species under typical local weather conditions. Field data show that bracken has a slightly higher average leaf area index (LAI) and more horizontally oriented leaves in comparison to Setaria. Spectrometer measurements reveal that bracken and Setaria are characterized by a similar average leaf absorptance. Simulations with the average diurnal course of incoming solar radiation (1998-2005) and the mean leaf-sun geometry reveal that PAR absorption is fairly equal for both species. However, the comparison of typical clear and overcast days shows that two parameters, (1) the ratio of incoming diffuse to direct irradiance and (2) the leaf-sun geometry, play a major role in PAR absorption in the two-big-leaf approach: under cloudy sky conditions (mainly diffuse irradiance), PAR absorption is slightly higher for Setaria, while under clear sky conditions (mainly direct irradiance) the average bracken individual is characterized by a higher PAR absorption potential (approximately 74 MJ m-2 year-1). The latter situation which occurs if the maximum daily
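
    As a rough sketch of why the diffuse/direct partitioning and the leaf-sun geometry matter for big-leaf PAR absorption (a generic Beer's-law canopy estimate with a Campbell-type extinction coefficient, not the radiation scheme parameterized in the study):

```python
import math

def canopy_par_absorption(par_direct, par_diffuse, lai, leaf_absorptance,
                          solar_zenith_deg, x_leaf_angle=1.0):
    """Crude big-leaf estimate of absorbed PAR (W m^-2).
    The direct-beam extinction coefficient follows the Campbell ellipsoidal
    leaf-angle form; diffuse light uses a fixed extinction of ~0.7.
    All coefficient values are illustrative defaults."""
    theta = math.radians(solar_zenith_deg)
    k_dir = (math.sqrt(x_leaf_angle**2 + math.tan(theta)**2) /
             (x_leaf_angle + 1.774 * (x_leaf_angle + 1.182)**-0.733))
    k_dif = 0.7
    abs_dir = par_direct * leaf_absorptance * (1.0 - math.exp(-k_dir * lai))
    abs_dif = par_diffuse * leaf_absorptance * (1.0 - math.exp(-k_dif * lai))
    return abs_dir + abs_dif

if __name__ == "__main__":
    # overcast (mostly diffuse) vs. clear (mostly direct), same total PAR
    print(canopy_par_absorption(50.0, 350.0, lai=3.5, leaf_absorptance=0.85,
                                solar_zenith_deg=30.0))
    print(canopy_par_absorption(350.0, 50.0, lai=3.5, leaf_absorptance=0.85,
                                solar_zenith_deg=30.0))
```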

  14. Evaluating Model Parameterizations of Submicron Aerosol Scattering and Absorption with in situ Data from ARCTAS 2008

    Science.gov (United States)

    Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.

    2016-01-01

    Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 percent, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction

  15. Evaluating model parameterizations of submicron aerosol scattering and absorption with in situ data from ARCTAS 2008

    Directory of Open Access Journals (Sweden)

    M. J. Alvarado

    2016-07-01

    Full Text Available Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10–23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass

  16. IMPROVED PARAMETERIZATION OF WATER CLOUD MODEL FOR HYBRID-POLARIZED BACKSCATTER SIMULATION USING INTERACTION FACTOR

    Directory of Open Access Journals (Sweden)

    S. Chauhan

    2017-07-01

    Full Text Available The prime aim of this study was to assess the potential of the semi-empirical water cloud model (WCM) in simulating hybrid-polarized SAR backscatter signatures (RH and RV) retrieved from RISAT-1 data and to integrate the results into a graphical user interface (GUI) to facilitate easy comprehension and interpretation. A predominantly agricultural wheat-growing area was selected in the Mathura and Bharatpur districts, located in the Indian states of Uttar Pradesh and Rajasthan respectively, to carry out the study. Datasets for three dates were acquired, covering the crucial growth stages of the wheat crop. In synchrony with the acquisitions, fieldwork was organized to measure crop/soil parameters. The RH and RV backscattering coefficient images were extracted from the SAR data for all three dates. The effect of four combinations of vegetation descriptors (V1 and V2), viz. LAI-LAI, LAI-plant water content (PWC), leaf water area index (LWAI)-LWAI, and LAI-interaction factor (IF), on the total RH and RV backscatter was analyzed. The results revealed that the WCM calibrated with LAI and IF as the two vegetation descriptors simulated the total RH and RV backscatter values with the highest R2 (0.90 and 0.85) and the lowest RMSE (1.18 and 1.25 dB, respectively) among the tested models. The theoretical considerations and interpretations have been discussed and examined in the paper. The novelty of this work emanates from the fact that it is a first step towards the modeling of hybrid-polarized backscatter data using an accurately parameterized semi-empirical approach.
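
    The water cloud model referred to here is conventionally written in the Attema-Ulaby two-term form, a vegetation contribution plus a canopy-attenuated soil contribution. A minimal sketch of that generic form is given below; the coefficients A, B, C, D are placeholders to be calibrated, not the values fitted in this study.

```python
import math

def wcm_backscatter(v1, v2, soil_moisture, theta_deg, A, B, C, D):
    """Generic Attema-Ulaby water cloud model in linear power units.
    v1, v2 : vegetation descriptors (e.g. LAI and an interaction factor)
    C + D*m: simple linear bare-soil backscatter term (linear units)
    A, B   : vegetation scattering / attenuation coefficients.
    All coefficient values must be calibrated against observations."""
    mu = math.cos(math.radians(theta_deg))
    tau2 = math.exp(-2.0 * B * v2 / mu)        # two-way canopy attenuation
    sigma_veg = A * v1 * mu * (1.0 - tau2)      # direct vegetation term
    sigma_soil = C + D * soil_moisture          # soil term, attenuated below
    return sigma_veg + tau2 * sigma_soil

if __name__ == "__main__":
    sigma = wcm_backscatter(v1=3.0, v2=1.2, soil_moisture=0.25, theta_deg=35.0,
                            A=0.05, B=0.3, C=0.01, D=0.1)
    print(f"simulated backscatter (linear units): {sigma:.4f}")
```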

  17. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Science.gov (United States)

    Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.

    2013-07-01

    . Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  18. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Directory of Open Access Journals (Sweden)

    C. R. Flechard

    2013-07-01

    -chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  19. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Directory of Open Access Journals (Sweden)

    Kabilar Gunalan

    Full Text Available Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.

  20. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    Directory of Open Access Journals (Sweden)

    Thang M. Luong

    2018-01-01

    Full Text Available A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10 s to 100 s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF). A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2 and a regional climate simulation is performed, by dynamically downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.

  1. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang

    2018-01-22

    A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10 s to 100 s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF). A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2 and a regional climate simulation is performed, by dynamically downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.

  2. Implementation of new sub-grid runoff parameterization within the Weather Research and Forecasting (WRF) modeling system

    Science.gov (United States)

    Khodamorad poor, M.; Irannejad, P.

    2012-04-01

    Runoff is an important component of the water cycle in land surface parameterization schemes, whose estimation is very difficult because of its dependence on rainfall, soil moisture, and topography, which vary temporally and spatially. In this study, two different methods of sub-grid parameterization of runoff are tested within the WRF numerical weather forecast model. The land surface scheme originally used in WRF is NOAH, in which runoff is parameterized based on the probability distributed function (PDF) of soil infiltration capacity. The river discharge calculated from WRF-NOAH simulated runoff and routed using the total runoff integrating pathways (TRIP) model for three sub-basins of the Karoon River in southwestern Iran (Soosan, Harmaleh and Farseat) is compared with observations for the winter of 2006. WRF-NOAH severely underestimates the discharge in the Karoon River basin, probably because of uncertainties in the runoff parameterization, which are in turn due to the unavailability of the soil infiltration data needed to estimate the shape and parameters of the PDF of the infiltration capacity. For this reason, we modified NOAH (NOAH-SIM) by substituting the infiltration-capacity-dependent runoff parameterization with a parameterization based on the PDF of the topographic index, following the philosophy used in the simplified TOPMODEL. As the topographic index is scale dependent, high-resolution (10 m) topographic indices are derived from a low-resolution (1000 m) digital elevation model using a downscaling method. Evaluation of the discharge simulated by the two land surface schemes (NOAH-SIM and NOAH) coupled in WRF against the observed discharge shows improved runoff simulation by NOAH-SIM in all three sub-basins. Compared to NOAH, NOAH-SIM simulated discharge has lower bias, smaller mean absolute error, higher efficiency coefficient, and a standard deviation closer to that observed. Coupling NOAH-SIM with WRF not only improves runoff simulations, but also
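
    The record points to the simplified TOPMODEL philosophy, in which saturation-excess runoff is diagnosed from the distribution of the topographic index. A minimal sketch of that idea (not the NOAH-SIM implementation) is:

```python
import numpy as np

def saturation_excess_runoff(precip_mm, mean_deficit_mm, topo_index, m_mm=30.0):
    """TOPMODEL-style diagnosis of saturation-excess runoff for one time step.
    topo_index : array of ln(a / tan(beta)) values within the grid cell
    m_mm       : scaling parameter of the exponential transmissivity profile
    Local storage deficit: D_i = D_mean + m * (lambda_mean - lambda_i);
    cells with D_i <= 0 are saturated and shed all rainfall as runoff."""
    lam = np.asarray(topo_index, dtype=float)
    local_deficit = mean_deficit_mm + m_mm * (lam.mean() - lam)
    saturated_fraction = np.mean(local_deficit <= 0.0)
    return precip_mm * saturated_fraction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lam = rng.gamma(shape=3.0, scale=2.0, size=10_000)  # synthetic index values
    print(saturation_excess_runoff(precip_mm=10.0, mean_deficit_mm=20.0,
                                   topo_index=lam))
```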

  3. Improving the temperature predictions of subsurface thermal models by using high-quality input data. Part 1: Uncertainty analysis of the thermal-conductivity parameterization

    DEFF Research Database (Denmark)

    Fuchs, Sven; Balling, Niels

    2016-01-01

    The subsurface temperature field and the geothermal conditions in sedimentary basins are frequently examined by using numerical thermal models. For those models, detailed knowledge of rock thermal properties are paramount for a reliable parameterization of layer properties and boundary conditions...

  4. New parameterization of external and induced fields in geomagnetic field modeling, and a candidate model for IGRF 2005

    DEFF Research Database (Denmark)

    Olsen, Nils; Sabaka, T.J.; Lowes, F.

    2005-01-01

    Q(1) of induced to external coefficients. A value of Q(1) = 0.27 was found from Magsat data and has been used by several authors when deriving recent field models from Orsted and CHAMP data. We describe a new approach that considers external and induced fields based on a separation of Dst = Est + Ist into external (Est) and induced (Ist) parts using a 1D model of mantle conductivity. The temporal behavior of q(1)(0) and of the corresponding induced coefficient are parameterized by Est and Ist, respectively. In addition, we account for baseline-instabilities of Dst by estimating a value of q(1...

  5. Parameterization of Cloud Optical Properties for a Mixture of Ice Particles for use in Atmospheric Models

    Science.gov (United States)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties that are pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approximately 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
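
    Parameterizations of this kind are commonly expressed as low-order fits in the effective particle size; the sketch below shows that generic functional form with placeholder coefficients, not the fitted band-by-band values of this study.

```python
def ice_cloud_optics(iwp_g_m2, d_eff_um, coeffs):
    """Generic band-averaged ice-cloud optics of the form commonly used in
    GCM shortwave codes:
      mass extinction coefficient  k   = a0 + a1/De
      co-single-scattering albedo 1-w  = b0 + b1*De + b2*De**2
      asymmetry factor             g   = c0 + c1*De + c2*De**2
    'coeffs' holds band-specific fit coefficients (placeholders here).
    Returns optical thickness, single-scattering albedo and asymmetry factor."""
    a0, a1, b0, b1, b2, c0, c1, c2 = coeffs
    k = a0 + a1 / d_eff_um                      # m^2 g^-1
    tau = k * iwp_g_m2
    co_albedo = b0 + b1 * d_eff_um + b2 * d_eff_um**2
    omega = 1.0 - co_albedo
    g = c0 + c1 * d_eff_um + c2 * d_eff_um**2
    return tau, omega, g

if __name__ == "__main__":
    placeholder = (3.33e-4, 2.52, 1e-5, 2e-5, -1e-8, 0.78, 1e-3, -5e-7)
    print(ice_cloud_optics(iwp_g_m2=20.0, d_eff_um=60.0, coeffs=placeholder))
```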

  6. On the parameterization of rigid base and basepair models of DNA from molecular dynamics simulations

    Czech Academy of Sciences Publication Activity Database

    Lankaš, Filip; Gonzalez, O.; Heffler, L. M.; Stoll, G.; Moakher, M.; Maddocks, J. H.

    2009-01-01

    Vol. 11, No. 45 (2009), pp. 10565-10588 ISSN 1463-9076 R&D Projects: GA MŠk LC512 Institutional research plan: CEZ:AV0Z40550506 Keywords: molecular dynamics * coarse-grained models * DNA mechanical properties Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 4.116, year: 2009

  7. Parameterizations for Cloud Overlapping and Shortwave Single-Scattering Properties for Use in General Circulation and Cloud Ensemble Models.

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J.; Ho, Chang-Hoi; Yan, Michael M.-H.; Lee, Kyu-Tae

    1998-02-01

    Parameterizations for cloud single-scattering properties and the scaling of optical thickness in a partial cloudiness condition have been developed for use in atmospheric models. Cloud optical properties are parameterized for four broad bands in the solar (or shortwave) spectrum; one in the ultraviolet and visible region and three in the infrared region. The extinction coefficient, single-scattering albedo, and asymmetry factor are parameterized separately for ice and water clouds. Based on high spectral-resolution calculations, the effective single-scattering coalbedo of a broad band is determined such that errors in the fluxes at the top of the atmosphere and at the surface are minimized. This parameterization introduces errors of a few percent in the absorption of shortwave radiation in the atmosphere and at the surface. Scaling of the optical thickness is based on the maximum-random cloud-overlapping approximation. The atmosphere is divided into three height groups separated approximately by the 400- and 700-mb levels. Clouds are assumed maximally overlapped within each height group and randomly overlapped among different groups. The scaling is applied only to the maximally overlapped cloud layers in individual height groups. The scaling as a function of the optical thickness, cloud amount, and the solar zenith angle is derived from detailed calculations and empirically adjusted to minimize errors in the fluxes at the top of the atmosphere and at the surface. Different scaling is used for direct and diffuse radiation. Except for a large solar zenith angle, the error in fluxes introduced by the scaling is only a few percent. In terms of absolute error, it is within a few watts per square meter.
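
    A minimal sketch of the maximum-random overlap assumption itself (maximum overlap within each height group, random overlap between groups) is given below; it returns only the total cloud fraction and does not reproduce the optical-thickness scaling described above. The group boundaries are illustrative stand-ins for the approximately 400- and 700-mb levels.

```python
def total_cloud_fraction(cloud_fracs, group_bounds=(0, 4, 9, None)):
    """Total cloud cover under the maximum-random overlap assumption.
    cloud_fracs  : list of layer cloud fractions, ordered top to bottom
    group_bounds : layer indices delimiting the height groups (illustrative).
    Within a group the cover is the maximum layer fraction; the groups are
    then combined randomly: C_total = 1 - prod(1 - C_group)."""
    clear = 1.0
    for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
        group = cloud_fracs[lo:hi] if hi is not None else cloud_fracs[lo:]
        if group:
            clear *= 1.0 - max(group)
    return 1.0 - clear

if __name__ == "__main__":
    fracs = [0.1, 0.3, 0.0, 0.0, 0.5, 0.2, 0.0, 0.0, 0.0, 0.4, 0.6]
    print(f"total cloud fraction: {total_cloud_fraction(fracs):.3f}")
```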

  8. Performance assessment and parameterization of the SWAP-WOFOST model for peat soil under agricultural use in northern Europe.

    Science.gov (United States)

    Bertram, Sascha; Bechtold, Michel; Hendriks, Rob; Piayda, Arndt; Regina, Kristiina; Myllys, Merja; Tiemeyer, Bärbel

    2017-04-01

    Peat soils form a major share of the soil suitable for agriculture in northern Europe. Successful agricultural production depends on hydrological and pedological conditions, local climate and agricultural management. Climate change impact assessment on food production and the development of mitigation and adaptation strategies require reliable yield forecasts under given emission scenarios. Coupled soil hydrology - crop growth models driven by regionalized future climate scenarios are a valuable and widely used tool for this purpose. Parameterization for local peat soil conditions and for the performance of crop varieties or grassland species, however, remains a major challenge. The subject of this study is to evaluate the performance and sensitivity of the SWAP-WOFOST coupled soil hydrology and plant growth model with respect to its application on peat soils under different regional conditions across northern Europe. Further, the parameterization of region-specific crop and grass species is discussed. First results of the model application and parameterization at deep peat sites in southern Finland are presented. The model performed very well in reproducing two years of observed daily groundwater level data at four hydrologically contrasting sites. Naturally dry and wet sites could be modelled with the same performance as sites with active water table management by regulated drains aimed at improving peat conservation. A simultaneous multi-site calibration scheme was used to estimate plant growth parameters of the local oat variety. Cross-site validation of the modelled yields against two years of observations demonstrated the robustness of the chosen parameter set and gave no indication of possible overparameterization. This study demonstrates the suitability of the coupled SWAP-WOFOST model for the prediction of crop yields and water table dynamics of peat soils in agricultural use under given climate conditions.

  9. Sensitivity of aerosol indirect forcing and autoconversion to cloud droplet parameterization: an assessment with the NASA Global Modeling Initiative.

    Science.gov (United States)

    Sotiropoulou, R. P.; Meshkhidze, N.; Nenes, A.

    2006-12-01

    The aerosol indirect forcing is one of the largest sources of uncertainty in assessments of anthropogenic climate change [IPCC, 2001]. Much of this uncertainty arises from the approach used for linking cloud droplet number concentration (CDNC) to precursor aerosol. Global Climate Models (GCM) use a wide range of cloud droplet activation mechanisms ranging from empirical [Boucher and Lohmann, 1995] to detailed physically- based formulations [e.g., Abdul-Razzak and Ghan, 2000; Fountoukis and Nenes, 2005]. The objective of this study is to assess the uncertainties in indirect forcing and autoconversion of cloud water to rain caused by the application of different cloud droplet parameterization mechanisms; this is an important step towards constraining the aerosol indirect effects (AIE). Here we estimate the uncertainty in indirect forcing and autoconversion rate using the NASA Global Model Initiative (GMI). The GMI allows easy interchange of meteorological fields, chemical mechanisms and the aerosol microphysical packages. Therefore, it is an ideal tool for assessing the effect of different parameters on aerosol indirect forcing. The aerosol module includes primary emissions, chemical production of sulfate in clear air and in-cloud aqueous phase, gravitational sedimentation, dry deposition, wet scavenging in and below clouds, and hygroscopic growth. Model inputs include SO2 (fossil fuel and natural), black carbon (BC), organic carbon (OC), mineral dust and sea salt. The meteorological data used in this work were taken from the NASA Data Assimilation Office (DAO) and two different GCMs: the NASA GEOS4 finite volume GCM (FVGCM) and the Goddard Institute for Space Studies version II' (GISS II') GCM. Simulations were carried out for "present day" and "preindustrial" emissions using different meteorological fields (i.e. DAO, FVGCM, GISS II'); cloud droplet number concentration is computed from the correlations of Boucher and Lohmann [1995], Abdul-Razzak and Ghan [2000

  10. Evaluation of snow and frozen soil parameterization in a cryosphere land surface modeling framework in the Tibetan Plateau

    Science.gov (United States)

    Zhou, J.

    2017-12-01

    Snow and frozen soil are important components in the Tibetan Plateau, and influence the water cycle and energy balances through snowpack accumulation and melt and soil freeze-thaw. In this study, a new cryosphere land surface model (LSM) with coupled snow and frozen soil parameterization was developed based on a hydrologically improved LSM (HydroSiB2). First, an energy-balance-based three-layer snow model was incorporated into HydroSiB2 (hereafter HydroSiB2-S) to provide an improved description of the internal processes of the snow pack. Second, a universal and simplified soil model was coupled with HydroSiB2-S to depict soil water freezing and thawing (hereafter HydroSiB2-SF). In order to avoid the instability caused by the uncertainty in estimating water phase changes, enthalpy was adopted as a prognostic variable instead of snow/soil temperature in the energy balance equation of the snow/frozen soil module. The newly developed models were then carefully evaluated at two typical sites of the Tibetan Plateau (TP) (one snow covered and the other snow free, both with underlying frozen soil). At the snow-covered site in northeastern TP (DY), HydroSiB2-SF demonstrated significant improvements over HydroSiB2-F (same as HydroSiB2-SF but using the original single-layer snow module of HydroSiB2), showing the importance of snow internal processes in three-layer snow parameterization. At the snow-free site in southwestern TP (Ngari), HydroSiB2-SF reasonably simulated soil water phase changes while HydroSiB2-S did not, indicating the crucial role of frozen soil parameterization in depicting the soil thermal and water dynamics. Finally, HydroSiB2-SF proved to be capable of simulating upward moisture fluxes toward the freezing front from the underlying soil layers in winter.

  11. Assessment of the weather research and forecasting model generalized parameterization schemes for advancement of precipitation forecasting in monsoon-driven river basins

    Science.gov (United States)

    Sikder, Safat; Hossain, Faisal

    2016-09-01

    Some of the world's largest and flood-prone river basins experience a seasonal flood regime driven by the monsoon weather system. Highly populated river basins with extensive rain-fed agricultural productivity such as the Ganges, Indus, Brahmaputra, Irrawaddy, and Mekong are examples of monsoon-driven river basins. It is therefore appropriate to investigate how precipitation forecasts from numerical models can advance flood forecasting in these basins. In this study, the Weather Research and Forecasting model was used to evaluate downscaling of coarse-resolution global precipitation forecasts from a numerical weather prediction model. Sensitivity studies were conducted using the TOPSIS analysis to identify the likely best set of microphysics and cumulus parameterization schemes, and spatial resolution from a total set of 15 combinations. This identified best set can pinpoint specific parameterizations needing further development to advance flood forecasting in monsoon-dominated regimes. It was found that the Betts-Miller-Janjic cumulus parameterization scheme with WRF Single-Moment 5-class, WRF Single-Moment 6-class, and Thompson microphysics schemes exhibited the most skill in the Ganges-Brahmaputra-Meghna basins. Finer spatial resolution (3 km) without cumulus parameterization schemes did not yield significant improvements. The short-listed set of the likely best microphysics-cumulus parameterization configurations was found to also hold true for the Indus basin. The lesson learned from this study is that a common set of model parameterization and spatial resolution exists for monsoon-driven seasonal flood regimes at least in South Asian river basins.

  12. Application of the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1999-01-01

    Different applications of the parameterization of all systems stabilized by a given controller, i.e. the dual Youla parameterization, are considered in this paper. It will be shown how the parameterization can be applied in connection with controller design, adaptive controllers, model validation...

  13. Model studies of the influence of O2 photodissociation parameterizations in the Schumann-Runge bands on ozone related photolysis in the upper atmosphere

    Directory of Open Access Journals (Sweden)

    Gijs A. A. Koppers

    Full Text Available A new parameterization for atmospheric transmission and O2 photodissociation in the Schumann-Runge band region has been developed and tested with a 1D radiative-photochemical model. The parameterization is based on the O2-column along the line of sight to the Sun and the local temperature. Line-by-line calculations have served as a benchmark for testing this method and several other, commonly used, parameterizations. The comparisons suggest that differences between the line-by-line calculations and currently accepted parameterizations can be reduced significantly by using the new method, particularly at large solar zenith angles. The production rate of O-atoms computed with this method shows less than 6% deviation compared to the line-by-line calculations at any altitude, all solar zenith angles and in all seasons. The largest errors are found toward the shorter wavelengths in the Schumann-Runge region at low altitudes. Transmittance is approximated to better than 4% at any altitude and/or solar zenith angle. The total O-production rate above 20 km is approximated to better than 2%. The new parameterization is easily implemented in existing photochemical models and in many cases it may simply replace the existing algorithm. The computational effort exceeds that of other parameterizations, but in view of the total computation time needed for the actual calculation of the parameterized Schumann-Runge bands this should not lead to significant performance degradation. The first 14 coefficients of the parameterization are included in this study. Both the complete sets of coefficients and a simple algorithm can be obtained by contacting the authors. A photochemical model study shows that the largest effect of the parameterization method is on odd hydrogen concentrations. Subsequent interaction with the odd oxygen family causes differences in the ozone concentrations between the different parameterizations of more than 10% at selected

  14. A model for the spatial distribution of snow water equivalent parameterized from the spatial variability of precipitation

    Directory of Open Access Journals (Sweden)

    T. Skaugen

    2016-09-01

    Full Text Available Snow is an important and complicated element in hydrological modelling. The traditional catchment hydrological model, with its many free calibration parameters, also in snow sub-models, is not a well-suited tool for predicting conditions for which it has not been calibrated. Such conditions include prediction in ungauged basins and assessing hydrological effects of climate change. In this study, a new model for the spatial distribution of snow water equivalent (SWE), parameterized solely from the observed spatial variability of precipitation, is compared with the current snow distribution model used in the operational flood forecasting models in Norway. The former model uses a dynamic gamma distribution and is called Snow Distribution_Gamma (SD_G), whereas the latter model has a fixed, calibrated coefficient of variation, which parameterizes a log-normal model for snow distribution, and is called Snow Distribution_Log-Normal (SD_LN). The two models are implemented in the parameter-parsimonious rainfall–runoff model Distance Distribution Dynamics (DDD), and their capability for predicting runoff, SWE and snow-covered area (SCA) is tested and compared for 71 Norwegian catchments. The calibration period is 1985–2000 and the validation period is 2000–2014. Results show that SD_G better simulates SCA when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically in that seasonal snow is melted out, and the building up of "snow towers", which gives spurious positive trends in SWE and is typical for SD_LN, is prevented. The precision of runoff simulations using SD_G is slightly inferior, with a reduction in the Nash–Sutcliffe and Kling–Gupta efficiency criteria of 0.01, but it is shown that the high precision in runoff prediction using SD_LN is accompanied by erroneous simulations of SWE.

  15. Inversion and uncertainty of highly parameterized models in a Bayesian framework by sampling the maximal conditional posterior distribution of parameters

    Science.gov (United States)

    Mara, Thierry A.; Fajraoui, Noura; Younes, Anis; Delay, Frederick

    2015-02-01

    We introduce the concept of the maximal conditional posterior distribution (MCPD) to assess the uncertainty of model parameters in a Bayesian framework. Although Markov chain Monte Carlo (MCMC) methods are particularly suited for this task, they become challenging with highly parameterized nonlinear models. The MCPD represents the conditional probability distribution function of a given parameter knowing that the other parameters maximize the conditional posterior density function. Unlike MCMC, which accepts or rejects solutions sampled in the parameter space, the MCPD is calculated through several optimization processes. Model inversion using the MCPD algorithm is particularly useful for highly parameterized problems because the calculations are independent. Consequently, they can be evaluated simultaneously on a multi-core computer. In the present work, the MCPD approach is applied to invert a 2D stochastic groundwater flow problem where the log-transmissivity field of the medium is inferred from scarce and noisy data. For this purpose, the stochastic field is expanded onto a set of orthogonal functions using a Karhunen-Loève (KL) transformation. Although the prior guess on the stochastic structure (covariance) of the transmissivity field is erroneous, the MCPD inference of the KL coefficients is able to extract relevant inverse solutions.
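
    A minimal sketch of the MCPD idea, scanning one parameter over a grid while the remaining parameters are re-optimized to maximize the posterior (here, by minimizing a toy negative log-posterior), is shown below; the objective function, priors and any parallelization are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta):
    """Placeholder negative log-posterior of a toy two-parameter model."""
    x, y = theta
    return 0.5 * ((x - 1.0) ** 2 / 0.5 + (y + 2.0) ** 2 / 2.0 + 0.8 * x * y)

def mcpd_profile(param_index, grid, theta0):
    """Maximal conditional posterior profile of one parameter: for each fixed
    value on 'grid', maximize the posterior over all other parameters.
    Each grid point is an independent optimization, so the loop is trivially
    parallelizable (e.g. with multiprocessing)."""
    free = [i for i in range(len(theta0)) if i != param_index]
    profile = []
    for value in grid:
        def objective(free_vals):
            theta = np.array(theta0, dtype=float)
            theta[param_index] = value
            theta[free] = free_vals
            return neg_log_posterior(theta)
        res = minimize(objective, np.array(theta0)[free], method="Nelder-Mead")
        profile.append(np.exp(-res.fun))   # unnormalized conditional posterior
    return np.array(profile)

if __name__ == "__main__":
    grid = np.linspace(-2.0, 4.0, 25)
    print(mcpd_profile(0, grid, theta0=[0.0, 0.0]).round(4))
```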

  16. Impact of a simple parameterization of convective gravity-wave drag in a stratosphere-troposphere general circulation model and its sensitivity to vertical resolution

    Directory of Open Access Journals (Sweden)

    C. Bossuet

    1998-02-01

    Full Text Available Systematic westerly biases in the southern hemisphere wintertime flow and easterly equatorial biases are experienced in the Météo-France climate model. These biases are found to be much reduced when a simple parameterization is introduced to take into account the vertical momentum transfer through the gravity waves excited by deep convection. These waves are quasi-stationary in the frame of reference moving with convection and they propagate vertically to higher levels in the atmosphere, where they may exert a significant deceleration of the mean flow at levels where dissipation occurs. Sixty-day experiments have been performed from a multiyear simulation with the standard 31 levels for a summer and a winter month, and with a T42 horizontal resolution. The impact of this parameterization on the integration of the model is found to be generally positive, with a significant deceleration in the westerly stratospheric jet and with a reduction of the easterly equatorial bias. The sensitivity of the Météo-France climate model to vertical resolution is also investigated by increasing the number of vertical levels, without moving the top of the model. The vertical resolution is increased up to 41 levels, using two kinds of level distribution. For the first, the increase in vertical resolution concerns especially the troposphere (with 22 levels in the troposphere), and the second treats the whole atmosphere in a homogeneous way (with 15 levels in the troposphere); the standard version of 31 levels has 10 levels in the troposphere. A comparison is made between the dynamical aspects of the simulations. The zonal wind and precipitation are presented and compared for each resolution. A positive impact is found with the finer tropospheric resolution on the precipitation in the mid-latitudes and on the westerly stratospheric jet, but the general impact on the model climate is weak; the physical parameterizations used appear to be mostly independent of the

  18. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available To address remote sensing images with low brightness and poor contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and the parameterized logarithmic image processing model is proposed in this paper to improve the visual effect and interpretability of remote sensing images. Firstly, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which can improve the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. Extensive experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, the method based on the stationary wavelet transform and the method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effect and objective quantitative evaluation indexes such as contrast and definition, and more effectively improves the contrast of remote sensing images and enhances edges and texture details.

  19. Direct spondylolisthesis identification and measurement in MR/CT using detectors trained by articulated parameterized spine model

    Science.gov (United States)

    Cai, Yunliang; Leung, Stephanie; Warrington, James; Pandey, Sachin; Shmuilovich, Olga; Li, Shuo

    2017-02-01

    The identification of spondylolysis and spondylolisthesis is important in spinal diagnosis, rehabilitation, and surgery planning. Accurate and automatic detection of the spinal portion affected by spondylolisthesis will significantly reduce the manual work of physicians and provide a more robust evaluation of the spine condition. Most existing automatic identification methods adopted an indirect approach which used vertebrae locations to measure the spondylolisthesis. However, these methods relied heavily on automatic vertebra detection, which often suffered from poor spatial accuracy and a lack of validated pathological training samples. In this study, we present a novel spondylolisthesis detection method which can directly locate the irregular spine portion and output the corresponding grading. The detection is done by a set of learning-based detectors which are discriminatively trained on synthesized spondylolisthesis image samples. To provide sufficient pathological training samples, we used a parameterized spine model to synthesize different types of spondylolysis images from real MR/CT scans. The parameterized model can automatically locate the vertebrae in spine images and estimate their pose orientations, and can inversely alter the vertebrae locations and poses by changing the corresponding parameters. Various training samples can then be generated from only a few spine MR/CT images. The preliminary results suggest great potential for fast and efficient spondylolisthesis identification and measurement in both MR and CT spine images.

  20. Identification of physical models

    DEFF Research Database (Denmark)

    Melgaard, Henrik

    1994-01-01

    The problem of identification of physical models is considered within the frame of stochastic differential equations. Methods for estimation of parameters of these continuous time models based on discrete time measurements are discussed. The important algorithms of a computer program for ML or MAP...... design of experiments, which is for instance the design of an input signal that is optimal according to a criterion based on the information provided by the experiment. Also model validation is discussed. An important verification of a physical model is to compare the physical characteristics...... of the model with the available prior knowledge. The methods for identification of physical models have been applied in two different case studies. One case is the identification of the thermal dynamics of building components. The work is related to a CEC research project called PASSYS (Passive Solar Components...

  1. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in

  2. Optimizing Weather Research and Forecasting model parameterizations for boundary-layer turbulence production and dissipation over the Southern Appalachians

    Science.gov (United States)

    Thaxton, C.; Sherman, J. P.; Krintz, I. A.; Scher, A.; Ross, D.; Schlesselman, D.

    2017-12-01

    Atmospheric aerosol and contaminant transport and mixing over complex terrain are influenced by a broad spectrum of turbulence production and dissipation mechanisms that are not, at present, considered in the Weather Research and Forecasting (WRF) model v3.9 numerical schemes, which are constrained to parameterize the dynamic effects of small-scale turbulent structures. Unresolved thermally-driven processes, such as slope and valley flows and associated recirculations, as well as orographically produced or enhanced mechanical turbulence structures, may manifest as systematic yet potentially predictable model biases in the diurnal evolution of measurables and diagnostic parameters such as planetary boundary layer (PBL) height. Herein, we present an assessment of the (non-LES) WRF PBL schemes - YSU, MYJ, MYNNx, and ACM2 - over a range of synoptic conditions in the warm months of 2013 through comparison to a subset of 76 radiosonde launches taken at various times throughout the day, as well as continuous ground weather station data and ground-based lidar-derived diagnostics. Preliminary results, many of which may be explained by known passive and active mechanisms in complex terrain, include an over-prediction of PBL heights for non-local PBL schemes; an enhanced surface-layer cold bias and under-prediction of PBL heights for local PBL schemes; and peak variance in potential temperature, specific humidity, and wind speed for all schemes at or near the entrainment zone. Suppressed amplitudes in the diurnal lidar-derived PBL height time series also suggest enhanced turbulence production during a range of nocturnal flow conditions. The aim of this investigation is to develop a recommended suite of coupled WRF PBL-surface layer parameterizations optimized to support modeling of aerosol load dynamics, aerosol-meteorology coupling, and operational forecasting in the Southern Appalachians, as well as to inform future WRF PBL scheme use and development.

  3. Improving irrigation and groundwater parameterizations in the Community Land Model (CLM) using in-situ observations and satellite data

    Science.gov (United States)

    Felfelani, F.; Pokhrel, Y. N.

    2017-12-01

    In this study, we use in-situ observations and satellite data of soil moisture and groundwater to improve the irrigation and groundwater parameterizations in version 4.5 of the Community Land Model (CLM). The irrigation application trigger, which is based on a soil moisture deficit mechanism, is enhanced by integrating soil moisture observations and data from the Soil Moisture Active Passive (SMAP) mission, which has been available since 2015. Further, we incorporate different irrigation application mechanisms based on schemes used in various other land surface models (LSMs) and carry out a sensitivity analysis using point simulations at two different irrigated sites in Mead, Nebraska, where data from the AmeriFlux observational network are available. We then conduct regional simulations over the entire High Plains region and evaluate model results with the available county-scale irrigation water use data. Finally, we present results of groundwater simulations obtained by implementing a simple pumping scheme based on our previous studies. Results from the implementation of the current irrigation parameterizations used in various LSMs show relatively large differences in the vertical soil moisture profile (e.g., 0.2 mm3/mm3) at the point scale, which are mostly reduced when averaged over relatively large regions (e.g., 0.04 mm3/mm3 in the High Plains region). It is found that the original irrigation module in CLM 4.5 tends to overestimate the soil moisture content compared to both point observations and SMAP, and the results from the improved scheme linked with the groundwater pumping scheme show better agreement with the observations.
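
    As a schematic of a soil-moisture-deficit irrigation trigger of the kind discussed here (generic logic with illustrative thresholds, not the actual CLM 4.5 code):

```python
def irrigation_demand(theta_rootzone, theta_wilt, theta_target,
                      trigger_fraction=0.5, max_rate_mm_day=50.0):
    """Simple soil-moisture-deficit irrigation trigger (schematic only).
    Irrigation starts when root-zone moisture drops below
    theta_wilt + trigger_fraction*(theta_target - theta_wilt); the demand is
    the deficit up to theta_target, capped at max_rate_mm_day.
    Moisture values are volumetric; the 1 m root-zone depth used to convert
    the deficit to a water depth is an illustrative assumption."""
    trigger_level = theta_wilt + trigger_fraction * (theta_target - theta_wilt)
    if theta_rootzone >= trigger_level:
        return 0.0
    deficit_mm = (theta_target - theta_rootzone) * 1000.0  # 1 m root zone
    return min(deficit_mm, max_rate_mm_day)

if __name__ == "__main__":
    print(irrigation_demand(0.14, theta_wilt=0.10, theta_target=0.30))
```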

  4. Improvement and implementation of a parameterization for shallow cumulus in the global climate model ECHAM5-HAM

    Science.gov (United States)

    Isotta, Francesco; Spichtinger, Peter; Lohmann, Ulrike; von Salzen, Knut

    2010-05-01

    Convection is a crucial component of weather and climate. Its parameterization in General Circulation Models (GCMs) is one of the largest sources of uncertainty. Convection redistributes moisture and heat, affects the radiation budget and transports tracers from the PBL to higher levels. Shallow convection is very common over the globe, in particular over the oceans in the trade wind regions. A recently developed shallow convection scheme by von Salzen and McFarlane (2002) is implemented in the ECHAM5-HAM GCM in place of the standard convection scheme by Tiedtke (1989). The scheme of von Salzen and McFarlane (2002) is a bulk parameterization for an ensemble of transient shallow cumuli. A life cycle is considered, as well as inhomogeneities in the horizontal distribution of in-cloud properties due to mixing. The shallow convection scheme is further developed to take the ice phase and precipitation in the form of rain and snow into account. The implemented double-moment microphysics scheme for cloud droplets and ice crystals is consistent with the stratiform scheme and with the other types of convective clouds. The ice phase makes it possible to alter the criterion used to distinguish between shallow convection and the other two types of convection, namely deep and mid-level, which are still calculated by the Tiedtke (1989) scheme. The launching layer of the test parcel in the shallow convection scheme is chosen as the one with the maximum moist static energy in the three lowest levels. The latter is modified to the "frozen moist static energy" to account for the ice phase. Moreover, tracers (e.g. aerosols) are transported in the updraft and scavenged in and below clouds. As a first test of the performance of the new scheme and its interaction with the rest of the model, the Barbados Oceanographic and Meteorological EXperiment (BOMEX) and the Rain In Cumulus over the Ocean experiment (RICO) cases are simulated with the single column model (SCM) and the results are compared with large eddy

  5. Using data from colloid transport experiments to parameterize filtration model parameters for favorable conditions

    Science.gov (United States)

    Kamai, Tamir; Nassar, Mohamed K.; Nelson, Kirk E.; Ginn, Timothy R.

    2017-04-01

    Colloid filtration in porous media spans many disciplines and includes scenarios such as in-situ bioremediation, colloid-facilitated transport, water treatment of suspended particles and pathogenic bacteria, and transport of natural and engineered nanoparticles in the environment. Transport and deposition of colloid particles in porous media are determined by a combination of complex processes and forces. Given the convoluted physical, chemical, and biological processes involved, and the complexity of porous media in natural settings, it should not come as a surprise that colloid filtration theory does not always sufficiently predict colloidal transport, and that there is still a pressing need for improved predictive capabilities. Here, instead of developing the macroscopic equation from pore-scale models, we parameterize the different terms in the macroscopic collection equation by fitting it to experimental data, optimizing the parameters in the different terms of the equation. In this way we combine a mechanistically based filtration equation with empirical evidence. The impact of different properties of colloids and porous media is studied by comparing experimental properties with different terms of the correlation equation. This comparison provides insight into the different processes that occur during colloid transport and retention in porous media under favorable conditions, and provides directions for future theoretical developments.
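    A minimal sketch of the fitting idea, assuming a single lumped deposition-rate coefficient and synthetic column data (the actual study fits several terms of the collection equation rather than one lumped rate):

```python
import numpy as np
from scipy.optimize import curve_fit

def breakthrough(L, k):
    """Classic clean-bed filtration form: C/C0 = exp(-k * L) for column length L."""
    return np.exp(-k * L)

# Illustrative column data: relative effluent concentration at several lengths (m)
L_obs = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
c_obs = np.array([0.78, 0.60, 0.47, 0.36, 0.22])

(k_fit,), cov = curve_fit(breakthrough, L_obs, c_obs, p0=[5.0])
# k_fit lumps together the collector efficiency, attachment efficiency, grain
# size, and porosity terms of the collection equation; splitting it into those
# individual terms would follow the same fitting pattern with more parameters.
print(k_fit, np.sqrt(cov[0, 0]))
```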

  6. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model.

    Science.gov (United States)

    Sun, Shoutian; Ramachandran, Bala Ramu; Wick, Collin D

    2018-02-21

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl's surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.

  7. Using ARM observations to evaluate cloud and convection parameterizations and cloud-convection-radiation interactions in the GFDL general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Donner, Leo J. [National Oceanic and Atmospheric Administration (NOAA), Washington, DC (United States); Lin, Yanluan [National Oceanic and Atmospheric Administration (NOAA), Washington, DC (United States); Cooke, Will [National Oceanic and Atmospheric Administration (NOAA), Washington, DC (United States)

    2015-01-11

    GFDL CM3 convective vertical velocities, parameterized because their scales are below those resolved in current climate models, have been compared with observations from DOE ARM. Single-column forcing has been used for this comparison for two time periods from TWP-ICE and three from MC3E. These are the first independent evaluations of the parameterized vertical velocities against observations. The results show that basic characteristics of the observations are captured by CM3 when forced with single-column observations and constrained to observed large-scale states. In many, but not all, cases, the parameterized vertical velocities exceed those observed, especially at higher levels in the clouds. Similar problems have been noted by others in cloud-resolving models. The results have important implications for cloud-aerosol interactions, which depend on vertical velocities, and likely pose constraints on entrainment by convection, which may be related to climate sensitivity.

  8. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    Science.gov (United States)

    Long, Andrew J.; Putnam, L.D.

    2009-01-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.
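    The convolution step itself can be sketched as follows; the bimodal age distribution, its parameters, and the placeholder input curve are assumptions for illustration, not the calibrated Madison-aquifer model:

```python
import numpy as np

def tracer_output(input_history, age_pdf, dt=1.0):
    """Convolve a tracer input history c_in(t) with an age distribution g(tau).

    output(t) = integral over tau of c_in(t - tau) * g(tau) dtau
    Both arrays are sampled at a uniform step dt (years); input_history is
    ordered from oldest to most recent.
    """
    full = np.convolve(input_history, age_pdf) * dt
    return full[:len(input_history)]

# Illustrative bimodal age distribution: quick flow (~0-2 yr) plus slow flow (~26-41 yr)
tau = np.arange(0, 60, 1.0)
quick = np.exp(-0.5 * ((tau - 1.0) / 0.7) ** 2)
slow = np.exp(-0.5 * ((tau - 33.0) / 4.0) ** 2)
g = 0.5 * quick + 0.5 * slow
g /= np.trapz(g, tau)                     # normalize to a proper pdf

cfc_input = np.linspace(0.0, 1.0, 60)     # placeholder atmospheric input curve
print(tracer_output(cfc_input, g)[-1])    # simulated concentration at sampling time
```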

  9. Models in physics teaching

    DEFF Research Database (Denmark)

    Kneubil, Fabiana Botelho

    2016-01-01

    In this work we show an approach based on models, for a usual subject in an introductory physics course, in order to foster discussions on the nature of physical knowledge. The introduction of elements of the nature of knowledge into physics lessons has been emphasised by many educators, and one uses… the case of metals to show the theoretical and phenomenological dimensions of physics. The discussion is made by means of four questions whose answers can be reached neither from theoretical elements nor from experimental measurements alone. Between these two dimensions it is necessary to carry out a series… of reasoning steps to deepen the comprehension of microscopic concepts, such as electrical resistivity, drift velocity and free electrons. When this approach is highlighted, beyond the physical content, aspects of its nature become explicit and may improve the structuring of knowledge for learners…

  10. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab

    2016-08-26

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference of the uncertain parameters is based on a Markov chain Monte Carlo (MCMC) scheme that utilizes a newly formulated test statistic taking into account the different components representing the structures of turbulent mixing on both daily and seasonal time scales in addition to the data quality, and filters for the effects of parameter perturbations over those as a result of changes in the wind. To avoid the prohibitive computational cost of integrating the MITgcm model at each MCMC iteration, a surrogate model for the test statistic using the PC method is built. Because of the noise in the model predictions, a basis-pursuit-denoising (BPDN) compressed sensing approach is employed to determine the PC coefficients of a representative surrogate model. The PC surrogate is then used to evaluate the test statistic in the MCMC step for sampling the posterior of the uncertain parameters. Results of the posteriors indicate good agreement with the default values for two parameters of the KPP model, namely the critical bulk and gradient Richardson numbers; while the posteriors of the remaining parameters were barely informative. © 2016 American Meteorological Society.
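    A stripped-down sketch of the workflow, assuming a one-dimensional parameter, a Legendre polynomial-chaos surrogate fitted by ordinary least squares (the paper uses a BPDN compressed-sensing solver), and a cheap stand-in for the expensive GCM:

```python
import numpy as np
from numpy.polynomial import legendre

# --- Build a polynomial-chaos (Legendre) surrogate of an expensive model ---
def expensive_model(xi):                 # xi in [-1, 1] (scaled KPP parameter)
    return np.sin(2.0 * xi) + 0.1 * xi ** 2   # stand-in for a full MITgcm run

train_xi = np.linspace(-1.0, 1.0, 20)
train_y = expensive_model(train_xi)
coeffs = legendre.legfit(train_xi, train_y, deg=6)   # PC coefficients
surrogate = lambda xi: legendre.legval(xi, coeffs)

# --- Metropolis sampling of the posterior, evaluating only the surrogate ---
obs, sigma = 0.5, 0.1
def log_post(xi):
    if not (-1.0 <= xi <= 1.0):
        return -np.inf                    # uniform prior on [-1, 1]
    return -0.5 * ((surrogate(xi) - obs) / sigma) ** 2

rng = np.random.default_rng(0)
chain, x = [], 0.0
for _ in range(5000):
    prop = x + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)
print(np.mean(chain[1000:]), np.std(chain[1000:]))
```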

  11. Effect of numerical dispersion as a source of structural noise in the calibration of a highly parameterized saltwater intrusion model

    Science.gov (United States)

    Langevin, Christian D.; Hughes, Joseph D.

    2010-01-01

    A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location, followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.

  12. Alternative Parameterization of the 3-PG Model for Loblolly Pine: A Regional Validation and Climate Change Assessment on Stand Productivity

    Science.gov (United States)

    Yang, J.; Gonzalez-Benecke, C. A.; Teskey, R. O.; Martin, T.; Jokela, E. J.

    2015-12-01

    Loblolly pine (Pinus taeda L.) is one of the fastest growing pine species. It has been planted on more than 10 million ha in the southeastern U.S. and has also been introduced into many countries. Using data from the literature and long-term productivity studies, we re-parameterized the 3-PG model for loblolly pine stands. We developed new functions for estimating NPP allocation dynamics, canopy cover and needlefall dynamics, effects of frost on production, density-independent and density-dependent tree mortality, biomass pools at variable starting ages, and the fertility rating. New functions to estimate merchantable volume partitioning were also included, allowing for economic analyses. The fertility rating was determined as a function of site index (mean height of dominant trees at age 25 years). We used the largest and most geographically extensive validation dataset ever assembled for this species (91 plots in 12 states in the U.S. and 10 plots in Uruguay). Comparison of modeled to measured data showed robust agreement across the natural range in the U.S., as well as in Uruguay, where the species is grown as an exotic. Using the new set of functions and parameters with downscaled projections from twenty different climate models, the model was applied to assess the impact of future climate change scenarios on stand productivity in the southeastern U.S.

  13. Parameterizing road construction in route-based road weather models: can ground-penetrating radar provide any answers?

    Science.gov (United States)

    Hammond, D. S.; Chapman, L.; Thornes, J. E.

    2011-05-01

    A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry.
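    A hedged sketch of one way such an inflexion-point detection could look (the smoothing window, assumed wave velocity, and zero-crossing rule are illustrative choices, not the authors' algorithm):

```python
import numpy as np

def inflexion_depths(trace, dt_ns, velocity_m_per_ns=0.1, smooth=5):
    """Locate inflexion points in a single GPR trace and convert them to depth.

    trace   : amplitude samples of one A-scan
    dt_ns   : sample interval (nanoseconds)
    velocity_m_per_ns : assumed wave speed in the pavement (illustrative value)
    smooth  : width of the moving-average window applied before differencing;
              real traces would typically need stronger smoothing or filtering.
    """
    kernel = np.ones(smooth) / smooth
    sm = np.convolve(trace, kernel, mode="same")
    d2 = np.diff(sm, n=2)                          # discrete second derivative
    idx = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0] + 1
    two_way_time = idx * dt_ns
    return 0.5 * two_way_time * velocity_m_per_ns  # one-way depth in metres

# Synthetic trace with two reflections (purely illustrative)
t = np.arange(0, 60, 0.1)                          # ns
synthetic = np.exp(-((t - 8) / 1.5) ** 2) - 0.6 * np.exp(-((t - 20) / 2.0) ** 2)
print(inflexion_depths(synthetic, dt_ns=0.1)[:5])
```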

  14. Parameterizing road construction in route-based road weather models: can ground-penetrating radar provide any answers?

    International Nuclear Information System (INIS)

    Hammond, D S; Chapman, L; Thornes, J E

    2011-01-01

    A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry. (technical design note)

  15. Modeling Optical Lithography Physics

    Science.gov (United States)

    Neureuther, Andrew R.; Rubinstein, Juliet; Chin, Eric; Wang, Lynn; Miller, Marshal; Clifford, Chris; Yamazoe, Kenji

    2010-06-01

    Key physical phenomena associated with resists, illumination, lenses and masks are used to show the progress in models and algorithms for modeling optical projection printing as well as current simulation challenges in managing process complexity for manufacturing. The amazing current capability and challenges for projection printing are discussed using the 22 nm device generation. A fundamental foundation for modeling resist exposure, partial coherent imaging and defect printability is given. The technology innovations of resolution enhancement and chemically amplified resist systems and their modeling challenges are overviewed. Automated chip-level applications in pattern pre-compensation and design-anticipation of residual process variations require new simulation approaches.

  16. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yanlian [Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wu, Xiaocui [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Ju, Weimin [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Jiangsu Center for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing China; Chen, Jing M. [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wang, Shaoqiang [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Wang, Huimin [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Yuan, Wenping [State Key Laboratory of Earth Surface Processes and Resource Ecology, Future Earth Research Institute, Beijing Normal University, Beijing China; Andrew Black, T. [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Jassal, Rachhpal [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Ibrom, Andreas [Department of Environmental Engineering, Technical University of Denmark (DTU), Kgs. Lyngby Denmark; Han, Shijie [Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang China; Yan, Junhua [South China Botanical Garden, Chinese Academy of Sciences, Guangzhou China; Margolis, Hank [Centre for Forest Studies, Faculty of Forestry, Geography and Geomatics, Laval University, Quebec City Quebec Canada; Roupsard, Olivier [CIRAD-Persyst, UMR Ecologie Fonctionnelle and Biogéochimie des Sols et Agroécosystèmes, SupAgro-CIRAD-INRA-IRD, Montpellier France; CATIE (Tropical Agricultural Centre for Research and Higher Education), Turrialba Costa Rica; Li, Yingnian [Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining China; Zhao, Fenghua [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Kiely, Gerard [Environmental Research Institute, Civil and Environmental Engineering Department, University College Cork, Cork Ireland; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa Alabama USA; Pavelka, Marian [Laboratory of Plants Ecological Physiology, Institute of Systems Biology and Ecology AS CR, Prague Czech Republic; Montagnani, Leonardo [Forest Services, Autonomous Province of Bolzano, Bolzano Italy; Faculty of Sciences and Technology, Free University of Bolzano, Bolzano Italy; Wohlfahrt, Georg [Institute for Ecology, University of Innsbruck, Innsbruck Austria; European Academy of Bolzano, Bolzano Italy; D' Odorico, Petra [Grassland Sciences Group, Institute of Agricultural Sciences, ETH Zurich Switzerland; Cook, David [Atmospheric and Climate Research Program, Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Arain, M. 
Altaf [McMaster Centre for Climate Change and School of Geography and Earth Sciences, McMaster University, Hamilton Ontario Canada; Bonal, Damien [INRA Nancy, UMR EEF, Champenoux France; Beringer, Jason [School of Earth and Environment, The University of Western Australia, Crawley Australia; Blanken, Peter D. [Department of Geography, University of Colorado Boulder, Boulder Colorado USA; Loubet, Benjamin [UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay, Thiverval-Grignon France; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Athens Georgia USA; Matteucci, Giorgio [Viea San Camillo Ed LellisViterbo, University of Tuscia, Viterbo Italy; Nagy, Zoltan [MTA-SZIE Plant Ecology Research Group, Szent Istvan University, Godollo Hungary; Olejnik, Janusz [Meteorology Department, Poznan University of Life Sciences, Poznan Poland; Department of Matter and Energy Fluxes, Global Change Research Center, Brno Czech Republic; Paw U, Kyaw Tha [Department of Land, Air and Water Resources, University of California, Davis California USA; Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, Cambridge USA; Varlagin, Andrej [A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow Russia

    2016-04-06

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at 6 FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems and it is more robust with regard to usual biases in input data than existing approaches which neglect the bi-modal within-canopy distribution of PAR.
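    The two-leaf idea can be summarized in a few lines: GPP is the sum of sunlit and shaded contributions, each with its own maximum light use efficiency. The sunlit/shaded split below uses a simple Beer's-law partition with illustrative coefficients and is not the TL-LUE implementation:

```python
import numpy as np

def two_leaf_gpp(par, lai, eps_sun, eps_shade, k=0.5, cos_sza=0.7):
    """Minimal two-leaf light-use-efficiency GPP estimate.

    par       : incident photosynthetically active radiation (MJ m-2 per time step)
    lai       : leaf area index
    eps_sun, eps_shade : maximum light use efficiencies (g C MJ-1)
    k         : extinction coefficient; cos_sza : cosine of solar zenith angle
    """
    kb = k / cos_sza                              # beam extinction coefficient
    lai_sun = (1.0 - np.exp(-kb * lai)) / kb      # sunlit LAI from Beer's law
    apar = par * (1.0 - np.exp(-k * lai))         # total absorbed PAR
    frac_sun = lai_sun / lai                      # crude split between leaf classes
    return eps_sun * apar * frac_sun + eps_shade * apar * (1.0 - frac_sun)

print(two_leaf_gpp(par=8.0, lai=4.0, eps_sun=0.6, eps_shade=1.8))  # g C m-2
```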

  17. CHEM2D-OPP: A new linearized gas-phase ozone photochemistry parameterization for high-altitude NWP and climate models

    Directory of Open Access Journals (Sweden)

    J. P. McCormack

    2006-01-01

    The new CHEM2D-Ozone Photochemistry Parameterization (CHEM2D-OPP) for high-altitude numerical weather prediction (NWP) systems and climate models specifies the net ozone photochemical tendency and its sensitivity to changes in ozone mixing ratio, temperature and overhead ozone column, based on calculations from the CHEM2D interactive middle atmospheric photochemical transport model. We evaluate CHEM2D-OPP performance using both short-term (6-day) and long-term (1-year) stratospheric ozone simulations with the prototype high-altitude NOGAPS-ALPHA forecast model. An inter-comparison of NOGAPS-ALPHA 6-day ozone hindcasts for 7 February 2005 with ozone photochemistry parameterizations currently used in operational NWP systems shows that CHEM2D-OPP yields the best overall agreement with both individual Aura Microwave Limb Sounder ozone profile measurements and independent hemispheric (10°–90° N) ozone analysis fields. A 1-year free-running NOGAPS-ALPHA simulation using CHEM2D-OPP produces a realistic seasonal cycle in zonal mean ozone throughout the stratosphere. We find that the combination of a model cold temperature bias at high latitudes in winter and a warm bias in the CHEM2D-OPP temperature climatology can degrade the performance of the linearized ozone photochemistry parameterization over seasonal time scales, despite the fact that the parameterized temperature dependence is weak in these regions.
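    The linearized form that such parameterizations follow can be written as a first-order expansion of the net production term; the sketch below uses placeholder reference states and sensitivities rather than CHEM2D-OPP coefficients:

```python
def ozone_tendency(o3, T, col,
                   pl0, o3_0, T0, col0,
                   d_dO3, d_dT, d_dcol):
    """First-order (linearized) net ozone photochemical tendency.

    (P - L) ~= (P - L)_0 + d(P-L)/dO3  * (O3  - O3_0)
                         + d(P-L)/dT   * (T   - T_0)
                         + d(P-L)/dcol * (col - col_0)

    The reference states and partial derivatives would come from the CHEM2D
    photochemical model's climatology; the numbers below are placeholders.
    """
    return (pl0
            + d_dO3 * (o3 - o3_0)
            + d_dT * (T - T0)
            + d_dcol * (col - col0))

# Placeholder values, chosen only to exercise the function
print(ozone_tendency(o3=5.2e-6, T=218.0, col=290.0,
                     pl0=1.0e-13, o3_0=5.0e-6, T0=220.0, col0=300.0,
                     d_dO3=-2.0e-7, d_dT=3.0e-15, d_dcol=-1.0e-16))
```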

  18. Parameterization and calibration of a hydrologic model for long-term simulations of a small un-gauged basin in the Malian Sahel

    Science.gov (United States)

    Warms, M.; Ramirez, J. A.; Kaptue, A.; Hanan, N. P.; Sigdel-Phuyal, M.; Giree, N.

    2013-12-01

    The Sahelian region of Africa is the geographic belt directly south of the Sahara, connecting the desert to the wetter Sudanian and Guinean savannas to the south. The region is semi-arid, receiving only 300-600 mm of precipitation on average annually, and experiencing severe dry seasons (7-9 months) with little to no rain. In parts of the Sahel, ephemeral lakes have been observed to expand and persist longer into the dry season, and a recent hydrologic regime shift has occurred in which some previously ephemeral lakes have become perennial. To test hypotheses of how this regime shift occurred, and whether this trend will continue, a coupled hydrological, ecological, and social processes model is being developed. In this paper we focus on the parameterization and calibration of the physical hydrologic model--a task that is difficult given the lack of high-resolution datasets for this region. While soils and land cover datasets exist for the entire globe, for the Sahel they are typically too coarse to adequately characterize variability for hydrologic modeling of the small watersheds associated with these lakes. In addition, climate forcing data at a daily scale are scarce. Lastly, streamflow and other gauged data typically used for calibration and validation of hydrologic models are unavailable. To address these issues, anecdotal data and in-situ observations were combined with remotely sensed data to capture as much spatial and temporal variability as possible in the watershed at the highest resolution, including 30-meter land cover data derived from Landsat imagery to infer soil information in the watershed. In order to produce long-term daily climate forcings, the coarse Climate Research Unit (CRU) and Tropical Rainfall Measuring Mission (TRMM) datasets were downscaled both spatially and temporally. Using Landsat imagery of lake sizes over time in conjunction with fractal descriptions of watershed topography and open water evaporation estimates

  19. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    Science.gov (United States)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  20. Techniques for Collection and Analysis of Pop-Plot Data for Use in Parameterization of Reactive Flow Models

    Science.gov (United States)

    Lee, Richard; Svingala, Forrest; Dorgan, Robert; Dattelbaum, Dana; Furnish, Michael; Sutherland, Gerrit

    2017-06-01

    Reactive flow models have been used to design explosive trains and predict explosive response to various mechanical insults. Parametrization of these models can be determined using short-duration shock data from thin flyers for ignition behavior and sustained pulse Pop-plot data for growth to detonation behavior. The latter was measured in an explosive using 4 experimental configurations with different data collection techniques. The first two used gas-gun driven 1-D shock waves and either embedded particle velocity gauges, or photon Doppler velocimetry at the end of different sample thicknesses. The second two used explosive donors to produce either a 1-D or quasi-1-D shock wave in wedge or cylindrical acceptors, respectively. Break out of the detonation wave in wedge samples was observed by streak camera, while embedded time of arrival gauges were used for cylindrical samples. Run-distances were compared between all 4 cases using a consistent method involving the intersection of two linear fits through data prior to and after transition to detonation. All methods were found to provide consistent data, indicating that one or a combination of these methods are suitable for parameterizing a reactive flow model.
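    The run-distance extraction described above (intersection of two linear fits through the pre- and post-transition data) can be sketched as follows, with invented example data:

```python
import numpy as np

def run_distance(x_pre, y_pre, x_post, y_post):
    """Run-to-detonation distance from the intersection of two linear fits.

    x_*, y_* : position and, e.g., shock arrival time (or wave speed) for
               points before and after the transition to detonation.
    """
    m1, b1 = np.polyfit(x_pre, y_pre, 1)
    m2, b2 = np.polyfit(x_post, y_post, 1)
    return (b2 - b1) / (m1 - m2)          # x at which the two lines cross

# Illustrative data: slow initial wave, faster detonation wave after ~6 mm
x_pre, y_pre = np.array([1, 2, 3, 4, 5.0]), np.array([0.30, 0.62, 0.95, 1.30, 1.66])
x_post, y_post = np.array([7, 8, 9, 10.0]), np.array([1.95, 2.07, 2.19, 2.31])
print(run_distance(x_pre, y_pre, x_post, y_post))  # approximate run distance (mm)
```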

  1. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    Science.gov (United States)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits provided the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only a single effective particle size representing all ice habits in a cloud layer, rather than the effective size of each individual ice habit.
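    The general shape of such a parameterization — band-wise optical depth, single-scattering albedo, and asymmetry factor expressed as simple functions of ice water path and effective particle size — can be sketched as below; the functional forms and coefficients are placeholders, not the published fit:

```python
def ice_cloud_optics(iwp_gm2, d_eff_um, band_coeffs):
    """Generic band-wise ice-cloud optical properties from effective size.

    Typical functional forms used by shortwave parameterizations:
      optical depth         tau   = IWP * (a0 + a1 / De)
      single-scatter albedo omega = 1 - (b0 + b1 * De)
      asymmetry factor      g     = c0 + c1 * De + c2 * De**2
    The coefficients below stand in for one spectral band and are not the
    published values.
    """
    a0, a1, b0, b1, c0, c1, c2 = band_coeffs
    tau = iwp_gm2 * (a0 + a1 / d_eff_um)
    omega = 1.0 - (b0 + b1 * d_eff_um)
    g = c0 + c1 * d_eff_um + c2 * d_eff_um ** 2
    return tau, omega, g

coeffs = (-3.0e-4, 2.5, 1.0e-5, 2.0e-6, 0.75, 1.0e-3, -2.0e-6)
print(ice_cloud_optics(50.0, 60.0, coeffs))  # IWP = 50 g m-2, De = 60 um
```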

  2. Remote Sensing Protocols for Parameterizing an Individual, Tree-Based, Forest Growth and Yield Model

    Science.gov (United States)

    2014-09-01

    Penelope Morgan. 2006. “Regression Modeling and Mapping of Coniferous Forest Basal Area and Tree Density from Discrete-Return LIDAR and…” …“Basal Area Relationships of Open-Grown Southern Pines for Modeling Competition and Growth.” Canadian Journal of Forest Research 22: 341–347. …Forest Growth and Yield Model. Construction Engineering Research Laboratory. Scott A. Tweddale, Patrick J. Guertin, and…

  3. An Integrative Wave Model for the Marginal Ice Zone Based on a Rheological Parameterization

    Science.gov (United States)

    2015-09-30

    people.clarkson.edu/~hhshen — LONG-TERM GOALS: To enhance wave forecasting models such as WAVEWATCH III (WW3) so that they can predict the marginal ice zone (MIZ)… …Antarctic marginal ice zone were used to evaluate the viscoelastic ice damping models. The 2012 data came from two buoys separated by over 100 km.

  4. Development of polarizable models for molecular mechanical calculations I: parameterization of atomic polarizability.

    Science.gov (United States)

    Wang, Junmei; Cieplak, Piotr; Li, Jie; Hou, Tingjun; Luo, Ray; Duan, Yong

    2011-03-31

    In this work, four types of polarizable models have been developed for calculating interactions between atomic charges and induced point dipoles. These include the Applequist, Thole linear, Thole exponential, and Thole Tinker-like models. The polarizability models have been optimized to reproduce the experimental static molecular polarizabilities obtained from the molecular refraction measurements on a set of 420 molecules reported by Bosque and Sales. We grouped the models into five sets depending on the interaction types, that is, whether the interactions of two atoms that form a bond, bond angle, or dihedral angle are turned off or scaled down. When 1-2 (bonded) and 1-3 (separated by two bonds) interactions are turned off, 1-4 (separated by three bonds) interactions are scaled down, or both, all models including the Applequist model achieve similar performance: the average percentage error (APE) ranges from 1.15 to 1.23%, and the average unsigned error (AUE) ranges from 0.143 to 0.158 Å³. When the short-range 1-2, 1-3, and full 1-4 terms are taken into account (set D models), the APE ranges from 1.30 to 1.58% for the three Thole models, whereas the Applequist model (DA) has a significantly larger APE (3.82%). The AUE ranges from 0.166 to 0.196 Å³ for the three Thole models, compared with 0.446 Å³ for the Applequist model. Further assessment using the 70-molecule van Duijnen and Swart data set clearly showed that the developed models are both accurate and highly transferable, and in fact have smaller errors than the models developed using this particular data set (set E models). The fact that the A, B, and C model sets are notably more accurate than both the D and E model sets strongly suggests that the inclusion of 1-2 and 1-3 interactions reduces transferability and accuracy.

  5. Parameterization of lattice-based tumor models from experimental data.

    OpenAIRE

    Jagiella , Nick

    2012-01-01

    In order to establish a predictive model for in-vivo tumor growth and therapy, a multiscale model has to be set up and calibrated individually, in a stepwise process, to a targeted cell type and different environments (in-vitro and in-vivo). As a proof of principle we will present the process chain of model construction and parametrization from different data sources for the avascular growth of the EMT6/Ro and the SK-MES-1 cell lines. In a first step, a multiscale and individual-based model has been...

  6. Beyond Standard Model Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bellantoni, L.

    2009-11-01

    There are many recent results from searches for fundamental new physics using the TeVatron, the SLAC b-factory and HERA. This talk quickly reviewed searches for pair-produced stop, for gauge-mediated SUSY breaking, for Higgs bosons in the MSSM and NMSSM models, for leptoquarks, and v-hadrons. There is a SUSY model which accommodates the recent astrophysical experimental results that suggest that dark matter annihilation is occurring in the center of our galaxy, and a relevant experimental result. Finally, model-independent searches at D0, CDF, and H1 are discussed.

  7. Development of a CFD Model Including Tree's Drag Parameterizations: Application to Pedestrian's Wind Comfort in an Urban Area

    Science.gov (United States)

    Kang, G.; Kim, J.

    2017-12-01

    This study investigated the effect of trees on wind comfort at pedestrian height in an urban area using a computational fluid dynamics (CFD) model. We implemented a tree drag parameterization scheme in the CFD model and validated the simulated results against wind-tunnel measurement data as well as LES data via several statistical methods. The CFD model underestimated (overestimated) the concentrations on the leeward (windward) walls inside the street canyon in the presence of trees, because the CFD model cannot resolve the latticed cage and therefore cannot reflect the concentration increases and decreases caused by the latticed cage in the simulations. However, the scalar pollutant dispersion simulated by the CFD model was quite similar to that in the wind-tunnel measurement in pattern and magnitude on the whole. The CFD model overall satisfied the statistical validation indices (root normalized mean square error, geometric mean variance, correlation coefficient, and FAC2) but failed to satisfy the fractional bias and geometric mean bias due to the underestimation on the leeward wall and overestimation on the windward wall, showing that its performance was comparable to that of the LES. We applied the CFD model to evaluate the effect of trees on pedestrians' wind comfort in an urban area. To investigate sensory levels for human activities, wind-comfort criteria based on Beaufort wind-force scales (BWSs) were used. In the tree-free scenario, BWS 4 and 5 (unpleasant conditions for sitting long and sitting short, respectively) appeared in the narrow spaces between buildings, on the upwind side of buildings, and in unobstructed areas. In the tree scenario, BWSs decreased by 1–3 grades inside the campus of Pukyong National University located in the target area, indicating that trees planted on the campus effectively improved pedestrians' wind comfort.

  8. Modelled climate sensitivity of the mass balance of Morteratschgletscher and its dependence on albedo parameterization

    NARCIS (Netherlands)

    Klok, E.J.; Oerlemans, J.

    2004-01-01

    This paper presents a study of the climate sensitivity of the mass balance of Morteratschgletscher in Switzerland, estimated from a two-dimensional mass balance model. Since the albedo scheme chosen is often the largest error source in mass balance models, we investigated the impact of using

  9. Parameterization of canopy resistance for modeling the energy partitioning of a paddy rice field

    NARCIS (Netherlands)

    Yan, H.; Zhang, C.; Hiroki, Oue

    2018-01-01

    Models for predicting hourly canopy resistance (rc) and latent heat flux (LET) based on the Penman–Monteith (PM) and bulk transfer methods are presented. The micrometeorological data and LET were observed during paddy rice-growing seasons in 2010 in Japan. One approach to model
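    A common starting point for relating canopy resistance to latent heat flux is the Penman–Monteith combination equation; the sketch below uses standard constants and illustrative paddy-field numbers and is not the authors' model:

```python
import math

def penman_monteith_le(rn, g, vpd_kpa, ta_c, ra, rc):
    """Latent heat flux (W m-2) from the Penman-Monteith equation.

    rn, g   : net radiation and soil heat flux (W m-2)
    vpd_kpa : vapour pressure deficit (kPa)
    ta_c    : air temperature (deg C)
    ra, rc  : aerodynamic and canopy (bulk surface) resistances (s m-1)
    """
    rho_cp = 1.2 * 1013.0                       # air density * specific heat (J m-3 K-1)
    gamma = 0.066                               # psychrometric constant (kPa K-1)
    es = 0.6108 * math.exp(17.27 * ta_c / (ta_c + 237.3))
    delta = 4098.0 * es / (ta_c + 237.3) ** 2   # slope of saturation curve (kPa K-1)
    num = delta * (rn - g) + rho_cp * vpd_kpa / ra
    den = delta + gamma * (1.0 + rc / ra)
    return num / den

# Midday paddy-field-like conditions (illustrative numbers)
print(penman_monteith_le(rn=450.0, g=50.0, vpd_kpa=1.5, ta_c=28.0, ra=45.0, rc=60.0))
```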

  10. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation

  11. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can

  12. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    Science.gov (United States)

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executer, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.

  13. Setup of a Parameterized FE Model for the Die Roll Prediction in Fine Blanking using Artificial Neural Networks

    Science.gov (United States)

    Stanke, J.; Trauth, D.; Feuerhack, A.; Klocke, F.

    2017-09-01

    Die roll is a morphological feature of fine blanked sheared edges. The die roll reduces the functional part of the sheared edge; to compensate for it, thicker sheet metal strips and secondary machining must be used. In order to avoid this, the influence of various fine blanking process parameters on the die roll has been studied experimentally and numerically, but there is still a lack of knowledge on the effects of some factors, and especially factor interactions, on the die roll. Recent advances in artificial intelligence motivate the hybrid use of the finite element method and artificial neural networks to account for these non-considered parameters. Therefore, a set of simulations using a validated finite element model of fine blanking is first used to train an artificial neural network, which is then trained further with thousands of experimental trials. The objective of this contribution is thus to develop an artificial neural network that reliably predicts the die roll. To that end, this contribution presents the setup of a fully parameterized 2D FE model that will be used for batch training of an artificial neural network. The FE model enables automatic variation of the edge radii of the blank punch and die plate, the counter and blank holder forces, the sheet metal thickness and part diameter, V-ring height and position, cutting velocity, as well as material parameters covered by the Hensel-Spittel model for 16MnCr5 (1.7131, AISI/SAE 5115). The FE model is validated using experimental trials. The result of this contribution is an FE model suitable for performing 9,623 simulations and for passing the simulated die roll width and height automatically to an artificial neural network.
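    A hedged sketch of the surrogate-modelling step — training a small feed-forward network on (process parameter, die roll) pairs — is shown below with invented stand-in data in place of the FE results; scikit-learn is assumed only for brevity and is not named in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in training data: in the study these rows would come from the batch of
# parameterized FE simulations (inputs such as punch/die edge radii, holder
# forces, sheet thickness, cutting velocity, ...).
X = rng.uniform([0.01, 0.01, 10, 50, 2.0], [0.20, 0.20, 100, 500, 8.0], size=(500, 5))
# Invented response surface standing in for the simulated die roll height (mm)
y = (0.3 * X[:, 0] + 0.2 * X[:, 1] + 1e-3 * X[:, 3] + 0.05 * X[:, 4]
     + 0.01 * rng.standard_normal(500))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))   # predicted die roll for the first three parameter sets
```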

  14. Comparison of Aerodynamic Resistance Parameterizations and Implications for Dry Deposition Modeling

    Science.gov (United States)

    Nitrogen deposition data used to support the secondary National Ambient Air Quality Standards and critical loads research derives from both measurements and modeling. Data sets with spatial coverage sufficient for regional scale deposition assessments are currently generated fro...
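    Although the record is truncated, the quantity being compared can be illustrated with the familiar neutral-stability log-profile form of aerodynamic resistance; stability corrections and the specific parameterizations compared in the study are omitted:

```python
import math

def aerodynamic_resistance(u, z_ref, z0, d=0.0, k=0.4):
    """Neutral-stability aerodynamic resistance (s m-1) from a log wind profile.

    u     : wind speed at reference height z_ref (m s-1)
    z0    : roughness length (m); d : displacement height (m)
    Monin-Obukhov stability corrections are omitted in this sketch.
    """
    return math.log((z_ref - d) / z0) ** 2 / (k ** 2 * u)

print(aerodynamic_resistance(u=3.0, z_ref=10.0, z0=0.1, d=0.7))
```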

  15. The trainable trajectory formation model TD-HMM parameterized for the LIPS 2008 challenge

    OpenAIRE

    Bailly , Gérard; Govokhina , Oxana; Breton , Gaspard; Elisei , Frédéric; Savariaux , Christophe

    2008-01-01

    We describe here the trainable trajectory formation model that will be used for the LIPS'2008 challenge organized at InterSpeech'2008. It predicts articulatory trajectories of a talking face from phonetic input. It basically uses HMM-based synthesis, but asynchrony between acoustic and gestural boundaries - taking into account, for example, non-audible anticipatory gestures - is handled by a phasing model that predicts the delays between the acoustic boundaries of allopho...

  16. Parameterizing Dose-Response Models to Estimate Relative Potency Functions Directly

    Science.gov (United States)

    Dinse, Gregg E.

    2012-01-01

    Many comparative analyses of toxicity assume that the potency of a test chemical relative to a reference chemical is constant, but employing such a restrictive assumption uncritically may generate misleading conclusions. Recent efforts to characterize non-constant relative potency rely on relative potency functions and estimate them secondarily after fitting dose-response models for the test and reference chemicals. We study an alternative approach of specifying a relative potency model a priori and estimating it directly using the dose-response data from both chemicals. We consider a power function in dose as a relative potency model and find that it keeps the two chemicals’ dose-response functions within the same family of models for families typically used in toxicology. When differences in the response limits for the test and reference chemicals are attributable to the chemicals themselves, the older two-stage approach is the more convenient. When differences in response limits are attributable to other features of the experimental protocol or when response limits do not differ, the direct approach is straightforward to apply with nonlinear regression methods and simplifies calculation of simultaneous confidence bands. We illustrate the proposed approach using Hill models with dose-response data from U.S. National Toxicology Program bioassays. Though not universally applicable, this method of estimating relative potency functions directly can be profitably applied to a broad family of dose-response models commonly used in toxicology. PMID:22700543

  17. Parameterized examination in econometrics

    Science.gov (United States)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.

  18. Comparing thixotropic and Herschel–Bulkley parameterizations for continuum models of avalanches and subaqueous debris flows

    Directory of Open Access Journals (Sweden)

    C.-H. Jeon

    2018-01-01

    Avalanches and subaqueous debris flows are two cases of a wide range of natural hazards that have previously been modeled with non-Newtonian fluid mechanics approximating the interplay of forces associated with gravity flows of granular and solid–liquid mixtures. The complex behaviors of such flows at unsteady flow initiation (i.e., destruction of structural jamming) and flow stalling (restructuralization) imply that the representative viscosity–stress relationships should include hysteresis: there is no reason to expect the timescale of microstructure destruction to be the same as the timescale of restructuralization. The non-Newtonian Herschel–Bulkley relationship that has previously been used in such models implies complete reversibility of the stress–strain relationship and thus cannot correctly represent unsteady phases. In contrast, a thixotropic non-Newtonian model allows representation of initial structural jamming and aging effects that provide hysteresis in the stress–strain relationship. In this study, a thixotropic model and a Herschel–Bulkley model are compared to each other and to prior laboratory experiments that are representative of an avalanche and a subaqueous debris flow. A numerical solver using a multi-material level-set method is applied to track multiple interfaces simultaneously in the simulations. The numerical results are validated with analytical solutions and available experimental data using parameters selected based on the experimental setup and without post hoc calibration. The thixotropic (time-dependent) fluid model shows reasonable agreement with all the experimental data. For most of the experimental conditions, the Herschel–Bulkley (time-independent) model results were similar to the thixotropic model, a critical exception being conditions with a high yield stress, where the Herschel–Bulkley model did not initiate flow. These results indicate that the thixotropic relationship is promising for
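    The two constitutive forms contrasted above can be sketched side by side: the time-independent Herschel–Bulkley stress and a simple thixotropic variant in which a structure parameter builds up and breaks down over time. The evolution law and its coefficients are illustrative assumptions, not the paper's model:

```python
import numpy as np

def herschel_bulkley(gamma_dot, tau_y, K, n):
    """Time-independent Herschel-Bulkley stress: tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * gamma_dot ** n

def thixotropic_stress(gamma_dot_series, dt, tau_y, K, n, a=0.5, b=1.0, lam0=1.0):
    """Toy thixotropic model: a structure parameter lam in [0, 1] evolves with a
    build-up rate a*(1-lam) and a shear-driven breakdown rate b*lam*gamma_dot,
    and the yield stress scales with lam."""
    lam, stresses = lam0, []
    for gd in gamma_dot_series:
        dlam = a * (1.0 - lam) - b * lam * gd
        lam = np.clip(lam + dlam * dt, 0.0, 1.0)
        stresses.append(lam * tau_y + K * gd ** n)
    return np.array(stresses)

gd = np.full(200, 5.0)                 # constant shear rate of 5 s-1
print(herschel_bulkley(5.0, tau_y=20.0, K=2.0, n=0.8))
print(thixotropic_stress(gd, dt=0.05, tau_y=20.0, K=2.0, n=0.8)[[0, -1]])
# The thixotropic stress decays from the jammed value toward a lower steady
# state, illustrating the time dependence the Herschel-Bulkley form lacks.
```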

  19. Asymmetric transfer efficiencies between fomites and fingers: Impact on model parameterization.

    Science.gov (United States)

    Greene, Christine; Ceron, Nancy Hernandez; Eisenberg, Marisa C; Koopman, James; Miller, Jesse D; Xi, Chuanwu; Eisenberg, Joseph N S

    2018-01-31

    Healthcare-associated infections (HAIs) affect millions of patients every year. Pathogen transmission via fomites and healthcare workers (HCWs) contributes to the persistence of HAIs in hospitals. A critical parameter needed to assess the risk of environmental transmission is the pathogen transfer efficiency between fomites and fingers. Recent studies have shown that pathogen transfer is not symmetric. In this study, we evaluated how the commonly used assumption of symmetry in transfer efficiency changes the dynamics of pathogen movement between patients and rooms and the exposures of uncolonized patients. We developed and analyzed a deterministic compartmental model of Acinetobacter baumannii describing the contact-mediated process among HCWs, patients, and the environment. We compared a system using measured asymmetrical transfer efficiencies to two symmetrical transfer efficiency systems. Symmetric models consistently overestimated contamination levels on fomites and underestimated contamination on patients and HCWs compared to the asymmetrical model. The magnitudes of these miscalculations can exceed 100%. Regardless of the model, relative percent reductions in contamination declined after hand hygiene compliance reached approximately 60% in the large fomite scenario and 70% in the small fomite scenario. This study demonstrates how healthcare facility-specific data can be used for decision-making processes. We show that the incorrect use of transfer efficiency data leads to biased effectiveness estimates for intervention strategies. More accurate exposure models are needed for more informed infection prevention strategies. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  20. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network, and data from ten other stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at unknown stations in parameterizing solar radiation. The results indicate a good agreement between the parameterized solar radiation values and actual measured values

  1. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    Science.gov (United States)

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  2. A model predictive control approach to design a parameterized adaptive cruise control

    NARCIS (Netherlands)

    Naus, G.J.L.; Ploeg, J.; Molengraft, M.J.G. van de; Heemels, W.P.M.H.; Steinbuch, M.

    2010-01-01

    The combination of different desirable characteristics and situation-dependent behavior causes the design of adaptive cruise control (ACC) systems to be time-consuming and tedious. This chapter presents a systematic approach for the design and tuning of an ACC, based on model predictive control.

  3. Stochastic Parameterization of Convective Area Fractions with a Multicloud Model Inferred from Observational Data

    NARCIS (Netherlands)

    J. Dorrestijn (Jesse); D.T. Crommelin (Daan); A.P. Siebesma (Pier); H.J.J. Jonker (Harm); C Jakob

    2015-01-01

    Observational data of rainfall from a rain radar in Darwin, Australia, are combined with data defining the large-scale dynamic and thermodynamic state of the atmosphere around Darwin to develop a multicloud model based on a stochastic method using conditional Markov chains. The authors

  4. Development of a PBL Parameterization Scheme for the Tropical Cyclone Model and an Improved Magnetospheric Model for Magic.

    Science.gov (United States)

    1981-03-25

    Only fragments of this scanned report are recoverable: "...for the growth of the tropical cyclone, and leads to a gradual shift of the storm center toward the warm ocean. 'Test of a Planetary Boundary Layer...' ...growth characteristics because gravity waves and model physics act to smooth them. Besides, random observational errors are not the major problem with..."

  5. Impact of Parameterized Lee Wave Drag on the Energy Budget of an Eddying Global Ocean Model

    Science.gov (United States)

    2013-08-26

    Only fragments of this report are recoverable: "...of Mexico and other regions in Fig. 2b of Arbic et al. (2010) relative to their Fig. 2a. In subsequent versions of HYCOM simulations with embedded... use of the leap-frog time-stepping scheme (Griffies et al., 2000), it is an unmeasured source of dissipation. An associated imbalance in surface..."

  6. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    Science.gov (United States)

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  7. On parameterization of heat conduction in coupled soil water and heat flow modelling

    Czech Academy of Sciences Publication Activity Database

    Votrubová, J.; Dohnal, M.; Vogel, T.; Tesař, Miroslav

    2012-01-01

    Roč. 7, č. 4 (2012), s. 125-137 ISSN 1801-5395 R&D Projects: GA ČR GA205/08/1174 Institutional research plan: CEZ:AV0Z20600510 Keywords : advective heat flux * dual-permeability model * soil heat transport * soil thermal conductivity * surface energy balance Subject RIV: DA - Hydrology ; Limnology Impact factor: 0.333, year: 2012

  8. Multiple Adaptations and Content-Adaptive FEC Using Parameterized RD Model for Embedded Wavelet Video

    Directory of Open Access Journals (Sweden)

    Yu Ya-Huei

    2007-01-01

    Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a preencoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of the content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.

  9. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    Science.gov (United States)

    Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.

    2015-01-01

    Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
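
    The distance-decay idea behind the spatial multipliers can be illustrated with a few lines of Python; the exponential form and the half-distance below are assumptions for demonstration, not the values used in LUCAS.

      import numpy as np

      def distance_decay_multiplier(distance_km, half_distance_km=5.0):
          """Relative transition likelihood that halves every `half_distance_km`
          away from existing development; the exponential form and the 5 km
          half-distance are illustrative assumptions, not LUCAS values."""
          return 0.5 ** (np.asarray(distance_km) / half_distance_km)

      # Example: a small raster of distances (km) to the nearest existing urban cell.
      dist = np.array([[0.0, 1.0, 4.0],
                       [2.0, 6.0, 12.0],
                       [8.0, 15.0, 30.0]])
      print(np.round(distance_decay_multiplier(dist), 3))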

  10. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    Directory of Open Access Journals (Sweden)

    Rachel R. Sleeter

    2015-06-01

    Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.

  11. Investigating Marine Boundary Layer Parameterizations by Combining Observations with Models via State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Delle Monahce, Luca [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Clifton, Andrew [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hacker, Joshua [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Kosovic, Branko [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Lee, Jared [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Vanderberghe, Francois [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Wu, Yonghui [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States); Hawkins, Sam [Vattenfall, Solna Municipality (Sweden); Nissen, Jesper [Vattenfall, Solna Municipality (Sweden)

    2015-06-30

    In this project we have improved numerical weather prediction analyses and forecasts of low-level winds in the marine boundary layer. This has been accomplished with the following tools: the National Center for Atmospheric Research (NCAR) Weather Research and Forecasting model, WRF, both in its single-column (SCM) and three-dimensional (3D) versions; the National Oceanic and Atmospheric Administration (NOAA) Wave Watch III (WWIII); SE algorithms from the Data Assimilation Research Testbed (DART, Anderson et al. 2009); and observations of key quantities of the lower MBL, including temperature and winds at multiple levels above the sea surface. The experiments with the WRF SCM / DART system have led to large improvements with respect to a standard WRF configuration, which is currently commonly used by the wind energy industry. The single-column model appears to be a tool particularly suitable for off-shore wind energy applications given its accuracy, the ability to quantify uncertainty, and the minimal computational resource requirements. In situations where the impact of an upwind wind park may be of interest in a downwind location, a 3D approach may be more suitable. We have demonstrated that with the WRF 3D / DART system the accuracy of wind predictions (and other meteorological parameters) can be improved over a 3D computational domain, and not only at specific locations. All the scripting systems developed in this project (i.e., to run WRF SCM / DART, WRF 3D / DART, and the coupling between WRF and WWIII) and the several modifications and upgrades made to the WRF SCM model will be shared with the broader community.

  12. Multiple-try differential evolution adaptive Metropolis for efficient solution of highly parameterized models

    Science.gov (United States)

    Eric, L.; Vrugt, J. A.

    2010-12-01

    Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies and 2 real-world examples involving from 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
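
    To give a concrete picture of the archive-based differential-evolution proposal that DREAM-type samplers build on, the Python sketch below applies z = x + gamma * (Z[a] - Z[b]) + e to a toy Gaussian target. The target density, jump factor and archive handling are simplified illustrations of the idea and do not reproduce the multiple-try machinery of MT-DREAM.

      import numpy as np

      rng = np.random.default_rng(4)
      d = 10
      log_post = lambda x: -0.5 * np.sum(x ** 2)           # toy target: standard Gaussian

      Z = rng.normal(size=(50, d))                          # archive of past states
      x = rng.normal(size=d)
      gamma = 2.38 / np.sqrt(2 * d)                         # classic DE jump factor
      for _ in range(2000):
          a, b = rng.choice(len(Z), size=2, replace=False)
          z = x + gamma * (Z[a] - Z[b]) + 1e-6 * rng.normal(size=d)   # DE proposal
          if np.log(rng.uniform()) < log_post(z) - log_post(x):       # Metropolis acceptance
              x = z
          Z = np.vstack([Z, x])                             # append current state to archive
      print(np.round(x[:3], 2), len(Z))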

  13. Third nearest neighbor parameterized tight binding model for graphene nano-ribbons

    Directory of Open Access Journals (Sweden)

    Van-Truong Tran

    2017-07-01

    The existing tight binding models can very well reproduce the ab initio band structure of a 2D graphene sheet. For graphene nano-ribbons (GNRs), the current sets of tight binding parameters can successfully describe the semi-conducting behavior of all armchair GNRs. However, they are still failing in reproducing accurately the slope of the bands that is directly associated with the group velocity and the effective mass of electrons. In this work, both density functional theory and tight binding calculations were performed and a new set of tight binding parameters up to the third nearest neighbors including overlap terms is introduced. The results obtained with this model offer excellent agreement with the predictions of the density functional theory in most cases of ribbon structures, even in the high-energy region. Moreover, this set can induce electron-hole asymmetry as manifested in results from density functional theory. Relevant outcomes are also achieved for armchair ribbons of various widths as well as for zigzag structures, thus opening a route for multi-scale atomistic simulation of large systems that cannot be considered using density functional theory.

  14. Third nearest neighbor parameterized tight binding model for graphene nano-ribbons

    Science.gov (United States)

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2017-07-01

    The existing tight binding models can very well reproduce the ab initio band structure of a 2D graphene sheet. For graphene nano-ribbons (GNRs), the current sets of tight binding parameters can successfully describe the semi-conducting behavior of all armchair GNRs. However, they are still failing in reproducing accurately the slope of the bands that is directly associated with the group velocity and the effective mass of electrons. In this work, both density functional theory and tight binding calculations were performed and a new set of tight binding parameters up to the third nearest neighbors including overlap terms is introduced. The results obtained with this model offer excellent agreement with the predictions of the density functional theory in most cases of ribbon structures, even in the high-energy region. Moreover, this set can induce electron-hole asymmetry as manifested in results from density functional theory. Relevant outcomes are also achieved for armchair ribbons of various widths as well as for zigzag structures, thus opening a route for multi-scale atomistic simulation of large systems that cannot be considered using density functional theory.

  15. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    Energy Technology Data Exchange (ETDEWEB)

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds that is present across a wide range of scales, ranging from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  16. Pyrolysis of reinforced polymer composites: Parameterizing a model for multiple compositions

    Science.gov (United States)

    Martin, Geraldine E.

    A single set of material properties was developed to describe the pyrolysis of fiberglass reinforced polyester composites at multiple composition ratios. Milligram-scale testing was performed on the unsaturated polyester (UP) resin using thermogravimetric analysis (TGA) coupled with differential scanning calorimetry (DSC) to establish and characterize an effective semi-global reaction mechanism of three consecutive first-order reactions. Radiation-driven gasification experiments were conducted on UP resin and the fiberglass composites at compositions ranging from 41 to 54 wt% resin at external heat fluxes from 30 to 70 kW m-2. The back surface temperature was recorded with an infrared camera and used as the target for inverse analysis to determine the thermal conductivity of the systematically isolated constituent species. Manual iterations were performed in a comprehensive pyrolysis model, ThermaKin. The complete set of properties was validated for the ability to reproduce the mass loss rate during gasification testing.
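
    As an illustration of what a semi-global mechanism of three consecutive first-order reactions looks like when integrated over a constant-heating-rate (TGA-like) ramp, the Python sketch below uses placeholder Arrhenius parameters and yields; the actual fitted values for the UP resin are not reproduced here.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Three consecutive first-order reactions, A -> B -> C -> residue, with
      # Arrhenius rate constants. All kinetic parameters below are assumed.
      A_pre = np.array([1e12, 1e13, 1e11])      # pre-exponential factors (1/s), assumed
      E_act = np.array([150e3, 180e3, 200e3])   # activation energies (J/mol), assumed
      nu = 0.8                                   # condensed-phase yield of each step, assumed
      R = 8.314
      T0, beta = 300.0, 10.0 / 60.0              # initial T (K) and heating rate (K/s, 10 K/min)

      def rhs(t, m):
          T = T0 + beta * t
          k1, k2, k3 = A_pre * np.exp(-E_act / (R * T))
          return [-k1 * m[0],
                  nu * k1 * m[0] - k2 * m[1],
                  nu * k2 * m[1] - k3 * m[2]]

      sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0.0, 0.0], method="BDF")
      print(np.round(sol.y[:, -1], 4))           # remaining normalized masses of A, B, C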

  17. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    Directory of Open Access Journals (Sweden)

    M. Astitha

    2012-11-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentration and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70–75% of the modelled monthly aerosol optical depth (AOD) at the Atlantic Ocean stations lie in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle, and suggests possible future improvements.

  18. Modelling deep-water formation in the north-west Mediterranean Sea with a new air-sea coupled model: sensitivity to turbulent flux parameterizations

    Science.gov (United States)

    Seyfried, Léo; Marsaleix, Patrick; Richard, Evelyne; Estournel, Claude

    2017-12-01

    In the north-western Mediterranean, the strong, dry, cold winds, the Tramontane and Mistral, produce intense heat and moisture exchange at the interface between the ocean and the atmosphere leading to the formation of deep dense waters, a process that occurs only in certain regions of the world. The purpose of this study is to demonstrate the ability of a new coupled ocean-atmosphere modelling system based on MESONH-SURFEX-SYMPHONIE to simulate a deep-water formation event in real conditions. The study focuses on summer 2012 to spring 2013, a favourable period that is well documented by previous studies and for which many observations are available. Model results are assessed through detailed comparisons with different observation data sets, including measurements from buoys, moorings and floats. The good overall agreement between observations and model results shows that the new coupled system satisfactorily simulates the formation of deep dense water and can be used with confidence to study ocean-atmosphere coupling in the north-western Mediterranean. In addition, to evaluate the uncertainty associated with the representation of turbulent fluxes in strong wind conditions, several simulations were carried out based on different parameterizations of the flux bulk formulas. The results point out that the choice of turbulent flux parameterization strongly influences the simulation of the deep-water convection and can modify the volume of the newly formed deep water by a factor of 2.
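
    As background on what the turbulent flux bulk formulas being compared look like, the Python sketch below evaluates the generic bulk aerodynamic expressions for sensible and latent heat flux with constant transfer coefficients. Real parameterizations, including those tested in the paper, make these coefficients depend on wind speed and stability, so the numbers here are purely illustrative.

      import numpy as np

      rho, cp, Lv = 1.2, 1004.0, 2.5e6          # air density, heat capacity, latent heat
      C_H = C_E = 1.2e-3                         # transfer coefficients (assumed constants)

      def bulk_fluxes(U10, T_sea, T_air, q_sea, q_air):
          H = rho * cp * C_H * U10 * (T_sea - T_air)      # sensible heat flux (W m-2)
          LE = rho * Lv * C_E * U10 * (q_sea - q_air)     # latent heat flux (W m-2)
          return H, LE

      # Strong Mistral case: 20 m/s wind, SST about 13 degC, air about 5 degC (illustrative).
      print(bulk_fluxes(20.0, 286.0, 278.0, 0.008, 0.004))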

  19. Parameterization of Nitrogen Limitation for a Dynamic Ecohydrological Model: a Case Study from the Luquillo Critical Zone Observatory

    Science.gov (United States)

    Bastola, S.; Bras, R. L.

    2017-12-01

    Feedbacks between vegetation and the soil nutrient cycle are important in ecosystems where nitrogen limits plant growth, and consequently influences the carbon balance in the plant-soil system. However, many biosphere models do not include such feedbacks, because interactions between carbon and the nitrogen cycle can be complex and remain poorly understood. In this study we coupled a nitrogen cycle model with an eco-hydrological model by using the concept of carbon cost economics. This concept accounts for different "costs" to the plant of acquiring nitrogen via different pathways. This study builds on tRIBS-VEGGIE, a spatially explicit hydrological model coupled with a model of photosynthesis, stomatal resistance, and energy balance, by combining it with a model of nitrogen recycling. Driven by climate and spatially explicit data of soils, vegetation and topography, the model (referred to as tRIBS-VEGGIE-CN) simulates the dynamics of carbon and nitrogen in the soil-plant system, the dynamics of vegetation, and different components of the hydrological cycle. tRIBS-VEGGIE-CN is applied in a humid tropical watershed at the Luquillo Critical Zone Observatory (LCZO). The region is characterized by high availability and cycling of nitrogen, high soil respiration rates, and large carbon stocks. We drive the model under contemporary CO2 and hydro-climatic forcing and compare results to a simulation under doubled CO2 and a range of future climate scenarios. The results with the parameterization of nitrogen limitation based on carbon cost economics show that the carbon cost of nitrogen acquisition is 14% of net primary productivity (NPP) and that the N uptake cost for different pathways varies over a large range depending on leaf nitrogen content, turnover rates of carbon in soil, and nitrogen cycling processes. Moreover, the N fertilization simulation experiment shows that the application of N fertilizer does not significantly change the simulated NPP. Furthermore, an

  20. Least square regression based integrated multi-parameteric demand modeling for short term load forecasting

    International Nuclear Information System (INIS)

    Halepoto, I.A.; Uqaili, M.A.

    2014-01-01

    Nowadays, due to the power crisis, electricity demand forecasting is deemed an important area for socioeconomic development, and proper anticipation of the load is considered an essential step towards efficient power system operation, scheduling and planning. In this paper, we present STLF (Short Term Load Forecasting) using multiple regression techniques (i.e. linear, multiple linear, quadratic and exponential) by considering an hour-by-hour load model based on a specific targeted-day approach with temperature as a variant parameter. The proposed work forecasts the future load demand and its correlation with linear and non-linear parameters (i.e. considering temperature in our case) through different regression approaches. The overall load forecasting error is 2.98%, which is very much acceptable. Among the proposed regression techniques, the quadratic regression technique performs better than the other techniques because it can optimally fit a broad range of functions and data sets. The work proposed in this paper paves a path to effectively forecast the specific-day load with multiple variance factors in a way that optimal accuracy can be maintained. (author)
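
    A minimal version of the quadratic (degree-2) regression variant can be written in a few lines of Python. The load and temperature data below are synthetic and the hour-by-hour structure is omitted, so this only illustrates the least-squares fit and a MAPE-style error measure, not the paper's dataset or results.

      import numpy as np

      rng = np.random.default_rng(1)
      temp = rng.uniform(20, 45, 200)                                   # temperature (degC)
      load = 900 + 12 * (temp - 30) ** 2 / 10 + rng.normal(0, 15, 200)  # toy load (MW)

      X = np.column_stack([np.ones_like(temp), temp, temp ** 2])        # design matrix [1, T, T^2]
      coef, *_ = np.linalg.lstsq(X, load, rcond=None)                   # ordinary least squares

      forecast = X @ coef
      mape = np.mean(np.abs((load - forecast) / load)) * 100
      print(np.round(coef, 3), f"MAPE = {mape:.2f}%")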

  1. Implementation of a Parameterized Interacting Multiple Model Filter on an FPGA for Satellite Communications

    Science.gov (United States)

    Hackett, Timothy M.; Bilen, Sven G.; Ferreira, Paulo Victor R.; Wyglinski, Alexander M.; Reinhart, Richard C.

    2016-01-01

    In a communications channel, the space environment between a spacecraft and an Earth ground station can potentially cause the loss of a data link or at least degrade its performance due to atmospheric effects, shadowing, multipath, or other impairments. In adaptive and coded modulation, the signal power level at the receiver can be used to choose a modulation-coding technique that maximizes throughput while meeting bit error rate (BER) and other performance requirements. It is the goal of this research to implement a generalized interacting multiple model (IMM) filter based on Kalman filters for improved received-power estimation on software-defined radio (SDR) technology for satellite communications applications. The IMM filter has been implemented in Verilog, consisting of a customizable bank of Kalman filters for choosing between performance and resource utilization. Each Kalman filter can be implemented using either solely a Schur complement module (for high area efficiency) or with Schur complement, matrix multiplication, and matrix addition modules (for high performance). These modules were simulated and synthesized for the Virtex II platform on the JPL Radio Experimenter Development System (EDS) at NASA Glenn Research Center. The results for simulation, synthesis, and hardware testing are presented.
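
    A software sketch of the IMM recursion (mixing, mode-matched Kalman updates, and mode-probability update) is given below in Python for a scalar received-power state with two assumed process-noise modes. The FPGA implementation described in the paper works on matrix Kalman filters in Verilog, so this is only a conceptual illustration with assumed noise values.

      import numpy as np

      F, H = 1.0, 1.0
      Q = np.array([1e-4, 1e-1])     # process noise of the two modes (assumed)
      R = 0.5                        # measurement noise variance (assumed)
      PI = np.array([[0.95, 0.05],   # mode transition probabilities (assumed)
                     [0.05, 0.95]])

      x = np.array([0.0, 0.0]); P = np.array([1.0, 1.0]); mu = np.array([0.5, 0.5])

      def imm_step(z, x, P, mu):
          # 1) mixing of the mode-conditioned estimates
          c = PI.T @ mu
          w = (PI * mu[:, None]) / c[None, :]          # w[i, j] = P(prev mode i | mode j now)
          x0 = w.T @ x
          P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j]) ** 2)) for j in range(2)])
          # 2) mode-matched Kalman filters
          like = np.empty(2)
          for j in range(2):
              xp, Pp = F * x0[j], F * P0[j] * F + Q[j]
              S = H * Pp * H + R
              K = Pp * H / S
              x[j], P[j] = xp + K * (z - H * xp), (1 - K * H) * Pp
              like[j] = np.exp(-0.5 * (z - H * xp) ** 2 / S) / np.sqrt(2 * np.pi * S)
          # 3) mode probability update and combined estimate
          mu = like * c; mu /= mu.sum()
          return x, P, mu, float(mu @ x)

      rng = np.random.default_rng(2)
      truth = np.concatenate([np.zeros(50), np.cumsum(rng.normal(0, 0.3, 50))])
      for z in truth + rng.normal(0, np.sqrt(R), 100):
          x, P, mu, est = imm_step(z, x, P, mu)
      print(np.round(mu, 3), round(est, 3))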

  2. Final Technical Report of ASR project entitled “ARM Observations for the Development and Evaluation of Models and Parameterizations of Cloudy Boundary Layers” (DE-SC0000825)

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Ping [Florida Intl Univ., Miami, FL (United States)

    2016-02-22

    This project aims to elucidate the processes governing boundary layer clouds and improve the treatment of cloud processes in Global Climate Models (GCMs). Specifically, we have made research efforts in the following areas: (1) developing a novel numerical approach of using multiple-scale Weather Research & Forecasting (WRF) model simulations for boundary layer cloud research; (2) addressing issues of PDF schemes for parameterizing sub-grid scale cloud radiative properties; (3) investigating the impact of mesoscale cloud organizations on the evolution of boundary layer clouds; (4) evaluating parameterizations of the cumulus-induced vertical transport; (5) a limited area model (LAM) intercomparison study of the TWP-ICE convective case; (6) investigating convective invigoration processes at shallow cumulus cold pool boundaries; and (7) investigating vertical transport processes in moist convection.

  3. Sensitivity analysis of a parameterization of the stomatal component of the DO3SE model for Quercus ilex to estimate ozone fluxes

    International Nuclear Information System (INIS)

    Alonso, Rocio; Elvira, Susana; Sanz, Maria J.; Gerosa, Giacomo; Emberson, Lisa D.; Bermejo, Victoria; Gimeno, Benjamin S.

    2008-01-01

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (gs) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured gs in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both fphen and fSWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak
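
    For readers unfamiliar with DO3SE-type modules, the Python sketch below shows the standard multiplicative form gs = gmax * fphen * flight * max(fmin, ftemp * fVPD * fSWP). The limiting-function shapes are the commonly used ones, but all parameter values are generic placeholders rather than the Quercus ilex parameterization evaluated in the paper.

      import numpy as np

      g_max, f_min = 200.0, 0.02        # mmol O3 m-2 s-1 and minimum factor (assumed)

      def f_temp(T, T_min=1.0, T_opt=23.0, T_max=39.0):
          bt = (T_max - T_opt) / (T_opt - T_min)
          f = ((T - T_min) / (T_opt - T_min)) * ((T_max - T) / (T_max - T_opt)) ** bt
          return np.clip(f, 0.0, 1.0)

      def f_light(PPFD, alpha=0.006):
          return 1.0 - np.exp(-alpha * PPFD)

      def f_vpd(VPD, vpd_max=1.0, vpd_min=3.2):
          return np.clip((vpd_min - VPD) / (vpd_min - vpd_max), f_min, 1.0)

      def g_s(T, PPFD, VPD, f_phen=1.0, f_swp=1.0):
          return g_max * f_phen * f_light(PPFD) * max(f_min, f_temp(T) * f_vpd(VPD) * f_swp)

      print(round(g_s(T=25.0, PPFD=1200.0, VPD=1.5), 1))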

  4. Modeling the regional impact of ship emissions on NOx and ozone levels over the Eastern Atlantic and Western Europe using ship plume parameterization

    Directory of Open Access Journals (Sweden)

    P. Pisoft

    2010-07-01

    In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this could result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over the remote ocean and 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, making a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that the large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed the ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modelled values, and it was found that after activating the parameterization the model accuracy increases.

  5. Parameterizing Plasmaspheric Hiss Wave Power by Plasmapause Location

    Science.gov (United States)

    Malaspina, D.; Jaynes, A. N.; Boule, C.; Bortnik, J.; Thaller, S. A.; Ergun, R.; Kletzing, C.; Wygant, J. R.

    2016-12-01

    Plasmaspheric hiss is a superposition of electromagnetic whistler-mode waves largely confined within the plasmasphere, the cold plasma torus surrounding Earth. Hiss plays an important role in radiation belt dynamics by pitch-angle scattering electrons over a wide range of electron energies (tens of keV to >1 MeV), which can result in their loss to the atmosphere. This interaction is often included in predictive models of radiation belt dynamics using statistical hiss wave power distributions derived from observations. However, the traditional approach to creating these distributions parameterizes hiss power by L-parameter (e.g. McIlwain L, dipole L, or L*) and a geomagnetic index (e.g. DST or AE). Such parameterization introduces spatial averaging of dissimilar wave power radial profiles, resulting in heavily smoothed wave power distributions. This work instead parameterizes hiss wave power distributions using plasmapause location and distance from the plasmapause. Using Van Allen Probes data and these new parameterizations, previously unreported and highly repeatable features of the hiss wave power distribution become apparent. These features include: (1) the highest-amplitude hiss wave power is concentrated over a narrower range of L than previous studies have indicated, and (2) the location of the peak in hiss wave power is determined by the plasmapause location, occurring at a consistent standoff distance Earthward of the plasmapause. Based on these features, parameterizing hiss using the plasmapause location and distance from the plasmapause may shed new light on hiss generation and propagation physics, as well as serve to improve the parameterization of hiss in predictive models of the radiation belts.

  6. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in land-vegetation-atmosphere interactions and ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect from losing streams to groundwater. Through the analysis of observed soil moisture data obtained from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that the regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at the regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  7. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    Science.gov (United States)

    Chen, Ying; Wolke, Ralf; Ran, Liang; Birmili, Wolfram; Spindler, Gerald; Schröder, Wolfram; Su, Hang; Cheng, Yafang; Tegen, Ina; Wiedensohler, Alfred

    2018-01-01

    The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10-25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3-]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3-] was significantly overestimated for this period by a factor of 5-19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. The NewN2O5 significantly reduces the overestimation of [NO3-] by ˜ 35 %. Particularly, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17-18 and 25 September 2013) when [NO3-] was dominated by local chemical formations. In our case, the suppression of organic coating was negligible over western and central Europe
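
    For orientation, the first-order loss rate implied by a heterogeneous uptake coefficient is commonly written k = 0.25 * gamma * c_mean * S, with c_mean the mean molecular speed. The Python sketch below evaluates this with a constant placeholder gamma, whereas the parameterization proposed in the paper makes gamma a function of T, RH and particle composition.

      import numpy as np

      R_gas, M_n2o5 = 8.314, 0.108        # J mol-1 K-1, kg mol-1

      def k_het(T, S, gamma=0.02):
          """T in K, S = aerosol surface area concentration in m2 per m3 of air;
          the constant gamma is only a placeholder for the parameterized value."""
          c_mean = np.sqrt(8.0 * R_gas * T / (np.pi * M_n2o5))   # mean molecular speed (m/s)
          return 0.25 * gamma * c_mean * S                        # first-order loss rate (1/s)

      # Example: 278 K and 200 um2 cm-3 of wet aerosol surface (= 2e-4 m2 m-3, assumed).
      k = k_het(278.0, 2.0e-4)
      print(f"k = {k:.2e} s-1, N2O5 lifetime ~ {1.0 / k / 60:.1f} min")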

  8. A linear CO chemistry parameterization in a chemistry-transport model: evaluation and application to data assimilation

    Directory of Open Access Journals (Sweden)

    M. Claeyman

    2010-07-01

    This paper presents an evaluation of a new linear parameterization valid for the troposphere and the stratosphere, based on a first-order approximation of the carbon monoxide (CO) continuity equation. This linear scheme (hereinafter noted LINCO) has been implemented in the 3-D Chemical Transport Model (CTM) MOCAGE (MOdèle de Chimie Atmospherique Grande Echelle). First, one and a half years of LINCO simulation have been compared to output obtained from a detailed chemical scheme. The mean differences between both schemes are about ±25 ppbv (parts per billion by volume) or 15% in the troposphere and ±10 ppbv or 100% in the stratosphere. Second, LINCO has been compared to diverse observations from satellite instruments covering the troposphere (Measurements Of Pollution In The Troposphere: MOPITT) and the stratosphere (Microwave Limb Sounder: MLS), and also from aircraft (Measurements of ozone and water vapour by Airbus in-service aircraft: MOZAIC programme) mostly flying in the upper troposphere and lower stratosphere (UTLS). In the troposphere, the LINCO seasonal variations as well as the vertical and horizontal distributions are quite close to MOPITT CO observations. However, a bias of ~−40 ppbv is observed at 700 hPa between LINCO and MOPITT. In the stratosphere, MLS and LINCO present similar large-scale patterns, except over the poles where the CO concentration is underestimated by the model. In the UTLS, LINCO presents small biases of less than 2% compared to independent MOZAIC profiles. Third, we assimilated MOPITT CO using a variational 3D-FGAT (First Guess at Appropriate Time) method in conjunction with MOCAGE for a long run of one and a half years. The data assimilation greatly improves the vertical CO distribution in the troposphere from 700 to 350 hPa compared to independent MOZAIC profiles. At 146 hPa, the assimilated CO distribution is also improved compared to MLS observations by reducing the bias up to a factor of 2 in the tropics.

  9. The predictive consequences of parameterization

    Science.gov (United States)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  10. A review of the theoretical basis for bulk mass flux convective parameterization

    Directory of Open Access Journals (Sweden)

    R. S. Plant

    2010-04-01

    Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud-work function, the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use a parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
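
    The representative entraining-detraining plume at the heart of a bulk scheme can be illustrated with a short vertical integration (Python): dM/dz = (eps - delta) * M and, for a plume property such as moist static energy, dh/dz = eps * (h_env - h). The entrainment and detrainment rates and the environmental profile below are illustrative assumptions, not values from the review.

      import numpy as np

      eps, delta = 1.0e-4, 0.7e-4            # fractional entrainment/detrainment (1/m), assumed
      dz, ztop = 100.0, 10000.0
      z = np.arange(0.0, ztop, dz)
      h_env = 340e3 - 2.0 * z                # environmental moist static energy (J/kg), toy profile

      M, h = 0.01, h_env[0] + 1.0e3          # cloud-base mass flux (kg m-2 s-1) and plume MSE
      for k in range(1, z.size):
          h = h + dz * eps * (h_env[k - 1] - h)      # entrainment dilutes the plume toward h_env
          M = M + dz * (eps - delta) * M             # net mass-flux change with height
      print(round(M, 4), round(h / 1e3, 1))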

  11. Parameterized isoprene and monoterpene emissions from the boreal forest floor: Implementation into a 1D chemistry-transport model and investigation of the influence on atmospheric chemistry

    Science.gov (United States)

    Mogensen, Ditte; Aaltonen, Hermanni; Aalto, Juho; Bäck, Jaana; Kieloaho, Antti-Jussi; Gierens, Rosa; Smolander, Sampo; Kulmala, Markku; Boy, Michael

    2015-04-01

    Volatile organic compounds (VOCs) are emitted from the biosphere and can act as precursor gases for aerosol particles that can affect the climate (e.g. Makkonen et al., ACP, 2012). VOC emissions from needles and leaves have gained the most attention; however, other parts of the ecosystem also have the ability to emit a vast amount of VOCs. This often-neglected source can be important, e.g., during periods when leaves are absent. Knowledge of both the sources and the drivers of forest floor VOC emissions is currently limited. The sources are thought to be mainly degradation of organic matter (Isidorov and Jdanova, Chemosphere, 2002), living roots (Asensio et al., Soil Biol. Biochem., 2008) and ground vegetation. The drivers are biotic (e.g. microbes) and abiotic (e.g. temperature and moisture). However, the relative importance of the sources and the drivers individually is currently poorly understood. Further, the relative importance of these factors is highly dependent on the tree species occupying the area of interest. The emissions of isoprene and monoterpenes were measured from the boreal forest floor at the SMEAR II station in Southern Finland (Hari and Kulmala, Boreal Env. Res., 2005) during the snow-free periods in 2010-2012. We used a dynamic method with 3 automated chambers analyzed by a Proton Transfer Reaction Mass Spectrometer (Aaltonen et al., Plant Soil, 2013). Using these data, we have developed empirical parameterizations for the emission of isoprene and monoterpenes from the forest floor. These parameterizations depend on abiotic factors; however, since they are based on field measurements, biotic features are implicitly captured. Further, we have used the 1D chemistry-transport model SOSAA (Boy et al., ACP, 2011) to test the seasonal relative importance of including these forest floor parameterizations, compared to the canopy crown emissions, for the atmospheric reactivity throughout the canopy.
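
    A sketch of the kind of abiotic-factor emission function that is typically fitted to such chamber data is shown below (Python). The exponential temperature response, the Gaussian moisture response and all coefficients are assumptions for illustration, not the SMEAR II parameterization itself.

      import numpy as np

      E0_mono, beta, T_s = 5.0, 0.09, 283.15     # base emission (ng m-2 s-1), 1/K, reference T (assumed)

      def forest_floor_emission(T_soil, soil_moisture, w_opt=0.3):
          f_w = np.exp(-((soil_moisture - w_opt) / 0.15) ** 2)   # Gaussian moisture response (assumed)
          return E0_mono * np.exp(beta * (T_soil - T_s)) * f_w   # exponential temperature response

      print(round(forest_floor_emission(T_soil=290.0, soil_moisture=0.25), 2))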

  12. Physics Parameterization for Seasonal Prediction

    Science.gov (United States)

    2013-09-30

    ...particularly the Madden-Julian Oscillation (MJO). We are continuing our participation in the YOTC/GEWEX project "Vertical Structure and Diabatic Processes of the MJO". Results are shown for: a) TRMM rainfall, and b) a NAVGEM 20-year run submitted for that project.

  13. Model description and evaluation of the mark-recapture survival model used to parameterize the 2012 status and threats analysis for the Florida manatee (Trichechus manatus latirostris)

    Science.gov (United States)

    Langtimm, Catherine A.; Kendall, William L.; Beck, Cathy A.; Kochman, Howard I.; Teague, Amy L.; Meigs-Friend, Gaia; Peñaloza, Claudia L.

    2016-11-30

    This report provides supporting details and evidence for the rationale, validity and efficacy of a new mark-recapture model, the Barker Robust Design, to estimate regional manatee survival rates used to parameterize several components of the 2012 version of the Manatee Core Biological Model (CBM) and Threats Analysis (TA).  The CBM and TA provide scientific analyses on population viability of the Florida manatee subspecies (Trichechus manatus latirostris) for U.S. Fish and Wildlife Service’s 5-year reviews of the status of the species as listed under the Endangered Species Act.  The model evaluation is presented in a standardized reporting framework, modified from the TRACE (TRAnsparent and Comprehensive model Evaluation) protocol first introduced for environmental threat analyses.  We identify this new protocol as TRACE-MANATEE SURVIVAL and this model evaluation specifically as TRACE-MANATEE SURVIVAL, Barker RD version 1. The longer-term objectives of the manatee standard reporting format are to (1) communicate to resource managers consistent evaluation information over sequential modeling efforts; (2) build understanding and expertise on the structure and function of the models; (3) document changes in model structures and applications in response to evolving management objectives, new biological and ecological knowledge, and new statistical advances; and (4) provide greater transparency for management and research review.

  14. Building Mental Models by Dissecting Physical Models

    Science.gov (United States)

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…

  15. NATO Advanced Study Institute on Advanced Physical Oceanographic Numerical Modelling

    CERN Document Server

    1986-01-01

    This book is a direct result of the NATO Advanced Study Institute held in Banyuls-sur-mer, France, June 1985. The Institute had the same title as this book. It was held at Laboratoire Arago. Eighty lecturers and students from almost all NATO countries attended. The purpose was to review the state of the art of physical oceanographic numerical modelling including the parameterization of physical processes. This book represents a cross-section of the lectures presented at the ASI. It covers elementary mathematical aspects through large scale practical aspects of ocean circulation calculations. It does not encompass every facet of the science of oceanographic modelling. We have, however, captured most of the essence of mesoscale and large-scale ocean modelling for blue water and shallow seas. There have been considerable advances in modelling coastal circulation which are not included. The methods section does not include important material on phase and group velocity errors, selection of grid structures, advanc...

  16. Physically-Derived Dynamical Cores in Atmospheric General Circulation Models

    Science.gov (United States)

    Rood, Richard B.; Lin, Shian-Kiann

    1999-01-01

    The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.
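
    To illustrate why flux-form (finite-volume) transport avoids the overshoots and undershoots mentioned above, the Python sketch below advects a square wave with a first-order upwind flux: it conserves mass and keeps the tracer within its initial bounds. It is only a low-order stand-in for, not an implementation of, the Lin and Rood scheme.

      import numpy as np

      nx, dx, u, dt = 100, 1.0, 1.0, 0.5          # CFL = u*dt/dx = 0.5
      q = np.where((np.arange(nx) > 40) & (np.arange(nx) < 60), 1.0, 0.0)  # square wave

      for _ in range(100):
          flux = u * q                                        # upwind flux (u > 0), periodic domain
          q = q - (dt / dx) * (flux - np.roll(flux, 1))       # conservative flux-divergence update
      print(round(q.sum(), 6), round(q.min(), 6), round(q.max(), 6))  # mass conserved, 0 <= q <= 1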

  17. Best convective parameterization scheme within RegCM4 to downscale CMIP5 multi-model data for the CORDEX-MENA/Arab domain

    Science.gov (United States)

    Almazroui, Mansour; Islam, Md. Nazrul; Al-Khalaf, A. K.; Saeed, Fahad

    2016-05-01

    A suitable convective parameterization scheme within Regional Climate Model version 4.3.4 (RegCM4), developed by the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, is investigated through 12 sensitivity runs for the period 2000-2010. RegCM4 is driven with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim 6-hourly boundary condition fields for the CORDEX-MENA/Arab domain. Besides the ERA-Interim lateral boundary condition data, Climatic Research Unit (CRU) data are also used to assess the performance of RegCM4. Different statistical measures are taken into consideration in assessing model performance for 11 sub-domains throughout the analysis domain, out of which 7 (4) sub-domains give drier (wetter) conditions for the area of interest. There is no common best option for the simulation of both rainfall and temperature (with lowest bias); however, one option each for temperature and rainfall has been found to be superior among the 12 options investigated in this study. These best options for the two variables vary from region to region as well. Overall, RegCM4 simulates large pressure and water vapor values along with lower wind speeds compared to the driving fields, which are the key sources of bias in simulating rainfall and temperature. Based on the climatic characteristics of most of the Arab countries located within the study domain, the drier sub-domains are given priority in the selection of a suitable convective scheme, albeit with a compromise for both rainfall and temperature simulations. The most suitable option, Grell over Land and Emanuel over Ocean in wet (GLEO wet), delivers a rainfall wet bias of 2.96 % and a temperature cold bias of 0.26 °C, compared to CRU data. An ensemble derived from all 12 runs provides unsatisfactory results for rainfall (28.92 %) and temperature (-0.54 °C) bias in the drier region because some options highly overestimate rainfall (reaching up to 200 %) and underestimate

  18. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    Science.gov (United States)

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  19. Inheritance versus parameterization

    DEFF Research Database (Denmark)

    Ernst, Erik

    2013-01-01

    This position paper argues that inheritance and parameterization differ in their fundamental structure, even though they may emulate each other in many ways. Based on this, we claim that certain mechanisms, e.g., final classes, are in conflict with the nature of inheritance, and hence causes...

  20. Parameterization of extended systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2006-01-01

    The YJBK parameterization (of all stabilizing controllers) is extended to handle systems with additional sensors and/or actuators. It is shown that the closed loop transfer function is still an affine function in the YJBK parameters in the nominal case. Further, some closed-loop stability results...

  1. Robustness and sensitivities of central U.S. summer convection in the super-parameterized CAM: Multi-model intercomparison with a new regional EOF index

    Science.gov (United States)

    Kooperman, Gabriel J.; Pritchard, Michael S.; Somerville, Richard C. J.

    2013-06-01

    Mesoscale convective systems (MCSs) can bring up to 60% of summer rainfall to the central United States but are not simulated by most global climate models. In this study, a new empirical orthogonal function based index is developed to isolate the MCS activity, similar to that developed by Wheeler and Hendon (2004) for the Madden-Julian Oscillation. The index is applied to compactly compare three conventional- and super-parameterized (SP) versions (3.0, 3.5, and 5.0) of the National Center for Atmospheric Research Community Atmosphere Model (CAM). Results show that nocturnal, eastward propagating convection is a robust effect of super-parameterization but is sensitive to its specific implementation. MCS composites based on the index show that in SP-CAM3.5, convective MCS anomalies are unrealistically large scale and concentrated, while surface precipitation is too weak. These aspects of the MCS signal are improved in the latest version (SP-CAM5.0), which uses high-order microphysics.
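
    A compact way to see what such an EOF-based index does is to project an anomaly field onto its two leading EOFs and combine the principal components into an amplitude and phase, in the spirit of the Wheeler-Hendon construction; the sketch below uses a synthetic propagating signal and is not the index definition from the paper.

```python
import numpy as np

def eof_index(anomalies, n_modes=2):
    """Leading EOFs of a (time x space) anomaly matrix via SVD, returning
    normalised principal components plus an amplitude/phase activity index."""
    u, s, _ = np.linalg.svd(anomalies, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]
    pcs = pcs / pcs.std(axis=0)                     # unit-variance PCs
    amplitude = np.sqrt((pcs ** 2).sum(axis=1))     # combined activity
    phase = np.arctan2(pcs[:, 1], pcs[:, 0])        # propagation phase
    return pcs, amplitude, phase

# Synthetic eastward-propagating anomaly field plus noise
t = np.arange(500)[:, None]
x = np.arange(60)[None, :]
field = np.cos(2 * np.pi * (x / 60.0 - t / 50.0)) + 0.3 * np.random.randn(500, 60)
pcs, amp, phase = eof_index(field - field.mean(axis=0))
print(pcs.shape, round(float(amp.mean()), 2))
```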

  2. Evaluating Aerosol/Cloud/Radiation Process Parameterizations with Single- Column Models and Second Aerosol Characterization Experiment (ACE-2) Cloudy Column Observations

    Energy Technology Data Exchange (ETDEWEB)

    Menon, Surabi; Brenguier, Jean-Louis; Boucher, Olivier; Davison, Paul; Del Genio, Anthony D.; Feichter, J; Ghan, Steven J.; Guibert, Sarah; Liu, Xiaohong; Lohmann, Ulrike; Pawlowska, Hanna; Penner, Joyce E.; Quaas, Johannes; Roberts, David L.; Schuller, Lothar; Snider, Jefferson

    2003-12-17

    The ACE-2 data set along with ECMWF reanalysis meteorological fields provided the basis for the single column model (SCM) simulations, which were performed as part of the PACE (Parameterization of the Aerosol Indirect Climatic Effect) project. Six different SCMs were used to simulate ACE-2 case studies of clean and polluted cloudy boundary layers, with the objective being to identify limitations of the aerosol/cloud/radiation interaction schemes within the range of uncertainty in in situ, reanalysis and satellite retrieved data that were used to constrain model results. The exercise proceeds in three steps. First, SCMs are configured with the same fine vertical resolution as the ACE-2 in situ data base to evaluate the numerical schemes for the prediction of aerosol activation, radiative transfer and precipitation formation. Second, the same test is performed at the coarser vertical resolution of GCMs to evaluate its impact on the performance of the parameterizations. Finally, SCMs are run for a 24 to 48 hr period to examine predictions of boundary layer clouds when initialized with large-scale meteorological fields.

  3. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    Directory of Open Access Journals (Sweden)

    Y. Chen

    2018-01-01

    The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO–MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10–25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3−]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3−] was significantly overestimated for this period by a factor of 5–19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. The NewN2O5 significantly reduces the overestimation of [NO3−] by ∼ 35 %. Particularly, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17–18 and 25 September 2013) when [NO3−] was dominated by local chemical formations. In our case, the suppression of organic coating
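
    The first-order loss rate that any such parameterization ultimately supplies follows the standard heterogeneous-uptake expression k = (1/4) γ c̄ S; the sketch below evaluates it for an assumed uptake coefficient γ (in NewN2O5 the coefficient itself depends on temperature, RH and particle composition, which is not reproduced here).

```python
import numpy as np

def n2o5_het_loss_rate(gamma, temp_k, surface_area):
    """First-order heterogeneous loss rate k = 0.25 * gamma * c_bar * S.

    gamma        : uptake coefficient (dimensionless); a placeholder here,
                   composition/RH/T dependent in a full parameterization
    temp_k       : temperature [K]
    surface_area : aerosol surface area concentration S [m2 m-3]
    """
    M_N2O5 = 108.0e-3                 # molar mass of N2O5 [kg mol-1]
    R = 8.314                         # gas constant [J mol-1 K-1]
    c_bar = np.sqrt(8.0 * R * temp_k / (np.pi * M_N2O5))   # mean speed [m s-1]
    return 0.25 * gamma * c_bar * surface_area             # [s-1]

# Example: gamma = 0.02 (assumed), T = 280 K, S = 200 um2 cm-3 = 2e-4 m2 m-3
k = n2o5_het_loss_rate(0.02, 280.0, 2.0e-4)
print(f"k = {k:.2e} s-1, N2O5 lifetime ~ {1.0 / k / 60.0:.0f} min")
```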

  4. Physical Modeling Modular Boxes: PHOXES

    DEFF Research Database (Denmark)

    Gelineck, Steven; Serafin, Stefania

    2010-01-01

    This paper presents the development of a set of musical instruments, which are based on known physical modeling sound synthesis techniques. The instruments are modular, meaning that they can be combined in various ways. This makes it possible to experiment with physical interaction and sonic...

  5. Physics Beyond the Standard Model

    CERN Document Server

    Ellis, John

    2009-01-01

    The Standard Model is in good shape, apart possibly from g_\mu - 2 and some niggling doubts about the electroweak data. Something like a Higgs boson is required to provide particle masses, but theorists are actively considering alternatives. The problems of flavour, unification and quantum gravity will require physics beyond the Standard Model, and astrophysics and cosmology also provide reasons to expect physics beyond the Standard Model, in particular to provide the dark matter and explain the origin of the matter in the Universe. Personally, I find supersymmetry to be the most attractive option for new physics at the TeV scale. The LHC should establish the origin of particle masses, has good prospects for discovering dark matter, and might also cast light on unification and even quantum gravity. Important roles may also be played by lower-energy experiments, astrophysics and cosmology in the searches for new physics beyond the Standard Model.

  6. Standard Model physics

    CERN Multimedia

    Altarelli, Guido

    1999-01-01

    Introduction: structure of gauge theories. The QED and QCD examples. Chiral theories. The electroweak theory. Spontaneous symmetry breaking. The Higgs mechanism. Gauge boson and fermion masses. Yukawa coupling. Charged current couplings. The Cabibbo-Kobayashi-Maskawa matrix and CP violation. Neutral current couplings. The Glashow-Iliopoulos-Maiani mechanism. Gauge boson and Higgs coupling. Radiative corrections and loops. Cancellation of the chiral anomaly. Limits on the Higgs comparison. Problems of the Standard Model. Outlook.

  7. Quasi standard model physics

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1986-01-01

    Possible small extensions of the standard model are considered, which are motivated by the strong CP problem and by the baryon asymmetry of the Universe. Phenomenological arguments are given which suggest that imposing a PQ symmetry to solve the strong CP problem is only tenable if the scale of the PQ breakdown is much above M_W. Furthermore, an attempt is made to connect the scale of the PQ breakdown to that of the breakdown of lepton number. It is argued that in these theories the same intermediate scale may be responsible for the baryon number of the Universe, provided the Kuzmin-Rubakov-Shaposhnikov (B+L) erasing mechanism is operative. (orig.)

  8. New representation of water activity based on a single solute specific constant to parameterize the hygroscopic growth of aerosols in atmospheric models

    Directory of Open Access Journals (Sweden)

    S. Metzger

    2012-06-01

    Water activity is a key factor in aerosol thermodynamics and hygroscopic growth. We introduce a new representation of water activity (aw), which is empirically related to the solute molality (μs) through a single solute specific constant, νi. Our approach is widely applicable, considers the Kelvin effect and covers ideal solutions at high relative humidity (RH), including cloud condensation nuclei (CCN) activation. It also encompasses concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). The constant νi can thus be used to parameterize the aerosol hygroscopic growth over a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. In contrast to other aw-representations, our νi factor corrects the solute molality both linearly and in exponent form x · ax. We present four representations of our basic aw-parameterization at different levels of complexity for different aw-ranges, e.g. up to 0.95, 0.98 or 1. νi is constant over the selected aw-range, and in its most comprehensive form, the parameterization describes the entire aw range (0–1). In this work we focus on single solute solutions. νi can be pre-determined with a root-finding method from our water activity representation using an aw−μs data pair, e.g. at solute saturation using RHD and solubility measurements. Our aw and supersaturation (Köhler theory) results compare well with the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. Envisaged applications include regional and global atmospheric chemistry and
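
    To show where a water-activity representation enters, the sketch below evaluates the classical Köhler equilibrium saturation ratio (water activity times the Kelvin curvature factor) for a droplet growing on a soluble particle, using an ideal Raoult-type water activity with an assumed dissociation parameter; it is a generic illustration, not the νi parameterization introduced in the paper.

```python
import numpy as np

def kohler_saturation_ratio(diameter, dry_diameter, nu=2.0, temp_k=293.15):
    """Equilibrium saturation ratio over a solution droplet:
    S = a_w * exp(4*sigma*M_w / (R*T*rho_w*D)). Solute constants are assumed."""
    sigma, M_w, rho_w, R = 0.072, 0.018, 1000.0, 8.314   # SI units
    rho_s, M_s = 1770.0, 0.132        # assumed solute: ammonium sulfate

    m_s = rho_s * np.pi / 6.0 * dry_diameter ** 3        # solute mass [kg]
    m_w = rho_w * np.pi / 6.0 * (diameter ** 3 - dry_diameter ** 3)
    molality = (m_s / M_s) / m_w                         # [mol kg-1]

    a_w = np.exp(-nu * molality * M_w)                   # ideal water activity
    kelvin = np.exp(4.0 * sigma * M_w / (R * temp_k * rho_w * diameter))
    return a_w * kelvin

# Example: 0.5 um droplet grown on a 0.1 um dry particle
print(round(float(kohler_saturation_ratio(0.5e-6, 0.1e-6)), 4))
```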

  9. The influence of Cloud Longwave Scattering together with a state-of-the-art Ice Longwave Optical Parameterization in Climate Model Simulations

    Science.gov (United States)

    Chen, Y. H.; Kuo, C. P.; Huang, X.; Yang, P.

    2017-12-01

    Clouds play an important role in the Earth's radiation budget, and thus realistic and comprehensive treatments of cloud optical properties and cloudy-sky radiative transfer are crucial for simulating weather and climate. However, most GCMs neglect LW scattering effects by clouds and tend to use inconsistent cloud SW and LW optical parameterizations. Recently, co-authors of this study have developed a new LW optical properties parameterization for ice clouds, which is based on ice cloud particle statistics from MODIS measurements and state-of-the-art scattering calculations. A two-stream multiple-scattering scheme has also been implemented into the RRTMG_LW, a longwave radiation scheme widely used by climate modeling centers. This study integrates both the new LW cloud-radiation scheme for ice clouds and the modified RRTMG_LW with scattering capability into the NCAR CESM to improve the cloud longwave radiation treatment. A number of single column model (SCM) simulations using observations from the ARM SGP site from July 18 to August 4, 1995 are carried out to assess the impact of the new LW cloud optical properties and the scattering-enabled radiation scheme on the simulated radiation budget and cloud radiative effect (CRE). The SCM simulation allows interaction between the cloud and radiation schemes and other parameterizations, but the large-scale forcing is prescribed or nudged. Compared to the results from the SCM of the standard CESM, the new ice cloud optical properties alone lead to an increase of the LW CRE by 26.85 W m-2 on average, as well as an increase of the downward LW flux at the surface by 6.48 W m-2. Enabling LW cloud scattering further increases the LW CRE by another 3.57 W m-2 and the downward LW flux at the surface by 0.2 W m-2. The change of LW CRE is mainly due to an increase of cloud top height, which enhances the LW CRE. A long-term simulation of CESM will be carried out to further understand the impact of such changes on simulated climates.
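
    The cloud radiative effect numbers quoted above follow the usual flux-difference definition, sketched below for the longwave case; the flux values are arbitrary placeholders, not results from the study.

```python
def lw_cre_toa(olr_clear, olr_allsky):
    """LW cloud radiative effect at the top of atmosphere:
    clouds reduce outgoing longwave, so CRE = clear-sky minus all-sky OLR."""
    return olr_clear - olr_allsky

def lw_cre_surface(dlw_allsky, dlw_clear):
    """LW cloud radiative effect at the surface:
    clouds enhance downwelling longwave, so CRE = all-sky minus clear-sky."""
    return dlw_allsky - dlw_clear

# Placeholder fluxes in W m-2
print(lw_cre_toa(265.0, 235.0))        # 30.0 W m-2
print(lw_cre_surface(380.0, 350.0))    # 30.0 W m-2
```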

  10. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    Directory of Open Access Journals (Sweden)

    E. Pakyuz-Charrier

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matters; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than

  11. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    Science.gov (United States)

    Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matters; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector
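
    A heavily simplified sketch of the MCUE workflow (not the authors' implementation) is given below: each orientation measurement is perturbed by drawing from a disturbance distribution, and every perturbed data set would drive one run of the implicit modelling engine. Plain Gaussian noise on dip and dip direction stands in for the spherical (e.g. von Mises-Fisher) distributions the paper recommends, and build_model is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_orientations(dip, dip_dir, sigma_deg=5.0, n_draws=1000):
    """Draw perturbed (dip, dip direction) pairs around each measurement.
    Gaussian noise is used for simplicity; MCUE favours genuinely spherical
    disturbance distributions sampled on pole vectors."""
    dips = rng.normal(dip, sigma_deg, size=(n_draws, dip.size))
    dirs = rng.normal(dip_dir, sigma_deg, size=(n_draws, dip_dir.size)) % 360.0
    return np.clip(dips, 0.0, 90.0), dirs

dip = np.array([30.0, 45.0, 60.0])          # measured dips [deg]
dip_dir = np.array([120.0, 135.0, 150.0])   # measured dip directions [deg]
dips, dirs = perturb_orientations(dip, dip_dir)

# Each row would feed one plausible model, e.g. (hypothetical call):
# models = [build_model(dips[i], dirs[i]) for i in range(len(dips))]
print(dips.shape, np.round(dips.mean(axis=0), 1))
```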

  12. The impact of changes in parameterizations of surface drag and vertical diffusion on the large-scale circulation in the Community Atmosphere Model (CAM5)

    Science.gov (United States)

    Lindvall, Jenny; Svensson, Gunilla; Caballero, Rodrigo

    2017-06-01

    Simulations with the Community Atmosphere Model version 5 (CAM5) are used to analyze the sensitivity of the large-scale circulation to changes in parameterizations of orographic surface drag and vertical diffusion. Many GCMs and NWP models use enhanced turbulent mixing in stable conditions to improve simulations, while CAM5 cuts off all turbulence at high stabilities and instead employs a strong orographic surface stress parameterization, known as turbulent mountain stress (TMS). TMS completely dominates the surface stress over land and reduces the near-surface wind speeds compared to simulations without TMS. It is found that TMS is generally beneficial for the large-scale circulation as it improves zonal wind speeds, Arctic sea level pressure and zonal anomalies of the 500-hPa stream function, compared to ERA-Interim. It also alleviates atmospheric blocking frequency biases in the Northern Hemisphere. Using a scheme that instead allows for a modest increase of turbulent diffusion at higher stabilities only in the planetary boundary layer (PBL) appears to in some aspects have a similar, although much smaller, beneficial effect as TMS. Enhanced mixing throughout the atmospheric column, however, degrades the CAM5 simulation. Evaluating the simulations in comparison with detailed measurements at two locations reveals that TMS is detrimental for the PBL at the flat grassland ARM Southern Great Plains site, giving too strong wind turning and too deep PBLs. At the Sodankylä forest site, the effect of TMS is smaller due to the larger local vegetation roughness. At both sites, all simulations substantially overestimate the boundary layer ageostrophic flow.

  13. Physical model of Nernst element

    International Nuclear Information System (INIS)

    Nakamura, Hiroaki; Ikeda, Kazuaki; Yamaguchi, Satarou

    1998-08-01

    Generation of electric power by the Nernst effect is a new application of semiconductors. A key point of this proposal is to find materials with a high thermomagnetic figure-of-merit, which are called Nernst elements. In order to find candidates for the Nernst element, a physical model to describe its transport phenomena is needed. As the first model, we began with a parabolic two-band model in classical statistics. According to this model, we selected InSb as a candidate Nernst element and measured its transport coefficients in magnetic fields up to 4 Tesla within a temperature range from 270 K to 330 K. In this region, we calculated the transport coefficients numerically with our physical model. For InSb, the experimental data are coincident with the theoretical values in strong magnetic fields. (author)

  14. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying...

  15. Physics beyond the Standard Model

    CERN Document Server

    Valle, José W F

    1991-01-01

    We discuss some of the signatures associated with extensions of the Standard Model related to the neutrino and electroweak symmetry breaking sectors, with and without supersymmetry. The topics include a basic discussion of the theory of neutrino mass and the corresponding extensions of the Standard Model that incorporate massive neutrinos; an overview of the present observational status of neutrino mass searches, with emphasis on solar neutrinos, as well as the cosmological data on the amplitude of primordial density fluctuations; the implications of neutrino mass in cosmological nucleosynthesis, non-accelerator, as well as in high energy particle collider experiments. Turning to the electroweak breaking sector, we discuss the physics potential for Higgs boson searches at LEP200, including Majoron extensions of the Standard Model, and the physics of invisibly decaying Higgs bosons. We discuss the minimal supersymmetric Standard Model phenomenology, as well as some of the laboratory signatures that would be as...

  16. Improvement of a snow albedo parameterization in the Snow-Atmosphere-Soil Transfer model: evaluation of impacts of aerosol on seasonal snow cover

    Science.gov (United States)

    Zhong, Efang; Li, Qian; Sun, Shufen; Chen, Wen; Chen, Shangfeng; Nath, Debashis

    2017-11-01

    The presence of light-absorbing aerosols (LAA) in snow profoundly influences the surface energy balance and water budget. However, most snow-process schemes in land-surface and climate models currently do not take this into consideration. To better represent the snow process and to evaluate the impacts of LAA on snow, this study presents an improved snow albedo parameterization in the Snow-Atmosphere-Soil Transfer (SAST) model, which includes the impacts of LAA on snow. Specifically, the Snow, Ice and Aerosol Radiation (SNICAR) model is incorporated into the SAST model with an LAA mass stratigraphy scheme. The new coupled model is validated against in-situ measurements at the Swamp Angel Study Plot (SASP), Colorado, USA. Results show that the snow albedo and snow depth are better reproduced than those in the original SAST, particularly during the period of snow ablation. Furthermore, the impacts of LAA on snow are estimated in the coupled model through case comparisons of the snowpack, with or without LAA. The LAA particles directly absorb extra solar radiation, which accelerates the growth rate of the snow grain size. Meanwhile, these larger snow particles favor more radiative absorption. The average total radiative forcing of the LAA at the SASP is 47.5 W m-2. This extra radiative absorption enhances the snowmelt rate. As a result, the peak runoff time and "snow all gone" day have shifted 18 and 19.5 days earlier, respectively, which could further impose substantial impacts on the hydrologic cycle and atmospheric processes.

  17. A physical model for dementia

    Science.gov (United States)

    Sotolongo-Costa, O.; Gaggero-Sager, L. M.; Becker, J. T.; Maestu, F.; Sotolongo-Grau, O.

    2017-04-01

    Aging-associated brain decline often results in some kind of dementia. Even though this is a complex brain disorder, a physical model can be used to describe its general behavior. A probabilistic model for the development of dementia is obtained and fitted to experimental data obtained from the Alzheimer's Disease Neuroimaging Initiative. It is explained how dementia appears as a consequence of aging and why it is irreversible.

  18. Accelerator physics and modeling: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Parsa, Z. [ed.]

    1991-12-31

    This report contains papers on the following topics: Physics of high brightness beams; radio frequency beam conditioner for fast-wave free-electron generators of coherent radiation; wake-field and space-charge effects on high brightness beams. Calculations and measured results for BNL-ATF; non-linear orbit theory and accelerator design; general problems of modeling for accelerators; development and application of dispersive soft ferrite models for time-domain simulation; and bunch lengthening in the SLC damping rings.

  19. Accelerator physics and modeling: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Parsa, Z. (ed.)

    1991-01-01

    This report contains papers on the following topics: Physics of high brightness beams; radio frequency beam conditioner for fast-wave free-electron generators of coherent radiation; wake-field and space-charge effects on high brightness beams. Calculations and measured results for BNL-ATF; non-linear orbit theory and accelerator design; general problems of modeling for accelerators; development and application of dispersive soft ferrite models for time-domain simulation; and bunch lengthening in the SLC damping rings.

  20. Wave Generation in Physical Models

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    The present book describes the most important aspects of wave generation techniques in physical models. Moreover, the book serves as technical documentation for the wave generation software AwaSys 6, cf. Aalborg University (2012). In addition to the two main authors also Tue Hald and Michael...

  1. Development of the physical model

    International Nuclear Information System (INIS)

    Liu Zunqi; Morsy, Samir

    2001-01-01

    Full text: The Physical Model was developed during Program 93+2 as a technical tool to aid enhanced information analysis and now is an integrated part of the Department's on-going State evaluation process. This paper will describe the concept of the Physical Model, including its objectives, overall structure and the development of indicators with designated strengths, followed by a brief description of using the Physical Model in implementing the enhanced information analysis. The work plan for expansion and update of the Physical Model is also presented at the end of the paper. The development of the Physical Model is an attempt to identify, describe and characterize every known process for carrying out each step necessary for the acquisition of weapons-usable material, i.e., all plausible acquisition paths for highly enriched uranium (HEU) and separated plutonium (Pu). The overall structure of the Physical Model has a multilevel arrangement. It includes at the top level all the main steps (technologies) that may be involved in the nuclear fuel cycle from the source material production up to the acquisition of weapons-usable material, and then beyond the civilian fuel cycle to the development of nuclear explosive devices (weaponization). Each step is logically interconnected with the preceding and/or succeeding steps by nuclear material flows. It contains at its lower levels every known process that is associated with the fuel cycle activities presented at the top level. For example, uranium enrichment is broken down into three branches at the second level, i.e., enrichment of UF6, UCl4 and U-metal, respectively; and then further broken down at the third level into nine processes: gaseous diffusion, gas centrifuge, aerodynamic, electromagnetic, molecular laser (MLIS), atomic vapor laser (AVLIS), chemical exchange, ion exchange and plasma. Narratives are presented at each level, beginning with a general process description then proceeding with detailed

  2. Improving representation of convective transport for scale-aware parameterization: 1. Convection and cloud properties simulated with spectral bin and bulk microphysics: CRM Model Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Jiwen [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Yi-Chin [Pacific Northwest National Laboratory, Richland Washington USA; Air Resources Board, Sacramento California USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton Virginia USA; North, Kirk [Department of Atmospheric and Oceanic Sciences, McGill University, Montréal Québec Canada; Collis, Scott [Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Dong, Xiquan [Department of Atmospheric Sciences, University of North Dakota, Grand Forks North Dakota USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego, La Jolla California USA; Chen, Qian [Key Laboratory for Aerosol-Cloud-Precipitation of China Meteorological Administration, Nanjing University of Information Science and Technology, Nanjing China; Kollias, Pavlos [Pacific Northwest National Laboratory, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Richland Washington USA

    2015-04-27

    The ultimate goal of this study is to improve the representation of convective transport by cumulus parameterization for mesoscale and climate models. As Part 1 of the study, we perform extensive evaluations of cloud-resolving simulations of a squall line and mesoscale convective complexes in midlatitude continental and tropical regions using the Weather Research and Forecasting model with spectral bin microphysics (SBM) and with two double-moment bulk microphysics schemes: a modified Morrison (MOR) and Milbrandt and Yau (MY2). Compared to observations, in general, SBM gives better simulations of precipitation and vertical velocity of convective cores than MOR and MY2 and therefore will be used for analysis of scale dependence of eddy transport in Part 2. The common features of the simulations for all convective systems are (1) the model tends to overestimate convection intensity in the middle and upper troposphere, but SBM can alleviate much of the overestimation and reproduce the observed convection intensity well; (2) the model greatly overestimates Ze in convective cores, especially for the weak updraft velocity; and (3) the model performs better for midlatitude convective systems than the tropical system. The modeled mass fluxes of the midlatitude systems are not sensitive to microphysics schemes but are very sensitive for the tropical case, indicating strong microphysics modification to convection. Cloud microphysical measurements of rain, snow, and graupel in convective cores will be critically important to further elucidate issues within cloud microphysics schemes.

  3. Coupling LMDZ physics in WRF model: Aqua-planet configuration tests

    Science.gov (United States)

    Fita, Lluís; Hourdin, Frédéric; Fairhead, Laurent; Drobinski, Phlippe

    2014-05-01

    Nowadays, advances in the climatological sciences pose different challenges for current global climate models (GCMs). One of them is related to resolution: in some exercises, GCMs are starting to be used at resolutions for which they were not designed, or, in anticipation of future uses, they have to be tested in order to know their limitations. With the mid-term perspective of future uses of the Laboratoire de Météorologie Dynamique Zoom (LMDZ) model in mind, a framework has been designed in order to use the physical parameterizations of the LMDZ model coupled to the dynamical core of the Weather Research and Forecasting (WRF) model. This framework will allow the analysis of different aspects such as: resolution thresholds of the LMDZ physics set, skill of the LMDZ physics in comparison with cloud-resolving simulations, and the impact of the fully compressible primitive-equation dynamics of WRF in global runs, among others. The design and implementation of the framework keeps almost all the original capabilities of both models. As a first step, results of an ensemble of 1-year low-resolution global aqua-planet runs performed with the original models using different physical configurations and with the new framework will be presented. These initial results show the correct performance of the new framework and the sensitivity of the global circulation to different dynamical atmospheric cores and physical parameterizations.

  4. Instream Physical Habitat Modelling Types

    DEFF Research Database (Denmark)

    Conallin, John; Boegh, Eva; Krogsgaard, Jørgen

    2010-01-01

    The introduction of the EU Water Framework Directive (WFD) is providing member state water resource managers with significant challenges in relation to meeting the deadline for 'Good Ecological Status' by 2015. Overall, instream physical habitat modelling approaches have advantages... and disadvantages as management tools for member states in relation to the requirements of the WFD, but due to their different model structures they are distinct in their data needs, transferability, user-friendliness and presentable outputs. Water resource managers need information on what approaches will best... management tools, but require large amounts of data and the model structure is complex. It is concluded that the use of habitat suitability indices (HSIs) and fuzzy rules in hydraulic-habitat modelling are the most ready model types to satisfy WFD demands. These models are well documented, transferable, user...

  5. Evaluation of the wind farm parameterization in the Weather Research and Forecasting model (version 3.8.1) with meteorological and turbine power data

    Science.gov (United States)

    Lee, Joseph C. Y.; Lundquist, Julie K.

    2017-11-01

    Forecasts of wind-power production are necessary to facilitate the integration of wind energy into power grids, and these forecasts should incorporate the impact of wind-turbine wakes. This paper focuses on a case study of four diurnal cycles with significant power production, and assesses the skill of the wind farm parameterization (WFP) distributed with the Weather Research and Forecasting (WRF) model version 3.8.1, as well as its sensitivity to model configuration. After validating the simulated ambient flow with observations, we quantify the value of the WFP as it accounts for wake impacts on power production of downwind turbines. We also illustrate with statistical significance that a vertical grid with approximately 12 m vertical resolution is necessary for reproducing the observed power production. Further, the WFP overestimates wake effects and hence underestimates downwind power production during high wind speed, highly stable, and low turbulence conditions. We also find the WFP performance is independent of the number of wind turbines per model grid cell and the upwind-downwind position of turbines. Rather, the ability of the WFP to predict power production is most dependent on the skill of the WRF model in simulating the ambient wind speed.

  6. Evaluation of the wind farm parameterization in the Weather Research and Forecasting model (version 3.8.1) with meteorological and turbine power data

    Directory of Open Access Journals (Sweden)

    J. C. Y. Lee

    2017-11-01

    Forecasts of wind-power production are necessary to facilitate the integration of wind energy into power grids, and these forecasts should incorporate the impact of wind-turbine wakes. This paper focuses on a case study of four diurnal cycles with significant power production, and assesses the skill of the wind farm parameterization (WFP) distributed with the Weather Research and Forecasting (WRF) model version 3.8.1, as well as its sensitivity to model configuration. After validating the simulated ambient flow with observations, we quantify the value of the WFP as it accounts for wake impacts on power production of downwind turbines. We also illustrate with statistical significance that a vertical grid with approximately 12 m vertical resolution is necessary for reproducing the observed power production. Further, the WFP overestimates wake effects and hence underestimates downwind power production during high wind speed, highly stable, and low turbulence conditions. We also find the WFP performance is independent of the number of wind turbines per model grid cell and the upwind–downwind position of turbines. Rather, the ability of the WFP to predict power production is most dependent on the skill of the WRF model in simulating the ambient wind speed.

  7. Physical models of cell motility

    CERN Document Server

    2016-01-01

    This book surveys the most recent advances in physics-inspired cell movement models. This synergetic, cross-disciplinary effort to increase the fidelity of computational algorithms will lead to a better understanding of the complex biomechanics of cell movement, and stimulate progress in research on related active matter systems, from suspensions of bacteria and synthetic swimmers to cell tissues and cytoskeleton.Cell motility and collective motion are among the most important themes in biology and statistical physics of out-of-equilibrium systems, and crucial for morphogenesis, wound healing, and immune response in eukaryotic organisms. It is also relevant for the development of effective treatment strategies for diseases such as cancer, and for the design of bioactive surfaces for cell sorting and manipulation. Substrate-based cell motility is, however, a very complex process as regulatory pathways and physical force generation mechanisms are intertwined. To understand the interplay between adhesion, force ...

  8. Regionalization of subsurface stormflow parameters of hydrologic models: Up-scaling from physically based numerical simulations at hillslope scale

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Melkamu; Ye, Sheng; Li, Hongyi; Huang, Maoyi; Leung, Lai-Yung R.; Fiori, Aldo; Sivapalan, Murugesu

    2014-07-19

    Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of a physical basis for common parameterizations precludes a priori estimation (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three-dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope and their heterogeneities, which were consistent with results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained on the basis of analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of a climatic aridity index. This suggests a possible codependence of climate, soils, vegetation and topographic properties, and suggests that subsurface flow parameterization needed for ungauged locations must
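
    For comparison with such physically derived parameterizations, the empirical route mentioned above (recession-curve analysis) fits a power law -dQ/dt = a Q^b to falling-limb streamflow, which is equivalent to a nonlinear storage-discharge relation; the sketch below uses synthetic data and illustrative coefficients only.

```python
import numpy as np

def fit_power_law_recession(q):
    """Fit -dQ/dt = a * Q**b to the recession limbs of a daily flow series
    by least squares in log-log space (Brutsaert-Nieber style analysis)."""
    dq = np.diff(q)
    recession = dq < 0                      # keep falling-limb steps only
    x = np.log(q[1:][recession])
    y = np.log(-dq[recession])
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b

# Synthetic exponential recession with mild noise (a linear reservoir)
t = np.arange(200)
q = 10.0 * np.exp(-t / 40.0) * (1.0 + 0.02 * np.random.randn(t.size))
a, b = fit_power_law_recession(q)
print(f"-dQ/dt ~ {a:.3f} * Q^{b:.2f}")      # b ~ 1 for a linear reservoir
```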

  9. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    Science.gov (United States)

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
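
    As a generic illustration of the simplest kind of screening such analyses build on (not the specific method used in the study), the sketch below perturbs each parameter of a toy scalar model one at a time and ranks the normalised responses; the model and parameter names are invented.

```python
def toy_outbreak_size(params):
    """Hypothetical stand-in for a complex transmission model:
    returns a scalar output (e.g. epidemic size) from a parameter dict."""
    return (params["biting_rate"] ** 2 * params["adult_survival"]
            * params["infectious_days"] / (1.0 + params["mortality"]))

def oat_sensitivity(model, base, rel_change=0.1):
    """One-at-a-time sensitivity: perturb each parameter by +/- 10 % and
    report the normalised change in model output, largest first."""
    y0 = model(base)
    sens = {}
    for name, value in base.items():
        hi = dict(base, **{name: value * (1.0 + rel_change)})
        lo = dict(base, **{name: value * (1.0 - rel_change)})
        sens[name] = (model(hi) - model(lo)) / (2.0 * rel_change * y0)
    return dict(sorted(sens.items(), key=lambda kv: -abs(kv[1])))

base = {"biting_rate": 0.5, "adult_survival": 0.9,
        "infectious_days": 5.0, "mortality": 0.1}
print(oat_sensitivity(toy_outbreak_size, base))
```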

  10. Development and Testing of a Life Cycle Model and a Parameterization of Thin Mid-level Stratiform Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, Steven K.

    2008-03-03

    We used a cloud-resolving model (a detailed computer model of cloud systems) to evaluate and improve the representation of clouds in global atmospheric models used for numerical weather prediction and climate modeling. We also used observations of the atmospheric state, including clouds, made at DOE's Atmospheric Radiation Measurement (ARM) Program's Climate Research Facility located in the Southern Great Plains (Kansas and Oklahoma) during Intensive Observation Periods to evaluate our detailed computer model as well as a single-column version of a global atmospheric model used for numerical weather prediction (the Global Forecast System of the NOAA National Centers for Environmental Prediction). This so-called Single-Column Modeling approach has proved to be a very effective method for testing the representation of clouds in global atmospheric models. The method relies on detailed observations of the atmospheric state, including clouds, in an atmospheric column comparable in size to a grid column used in a global atmospheric model. The required observations are made by a combination of in situ and remote sensing instruments. One of the greatest problems facing mankind at the present is climate change. Part of the problem is our limited ability to predict the regional patterns of climate change. In order to increase this ability, uncertainties in climate models must be reduced. One of the greatest of these uncertainties is the representation of clouds and cloud processes. This project, and ARM taken as a whole, has helped to improve the representation of clouds in global atmospheric models.

  11. Physics beyond the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Valle, J.W.F. [Valencia Univ. (Spain). Dept. de Fisica Teorica]. E-mail: valle@flamenco.uv.es

    1996-07-01

    We discuss some of the signatures associated with extensions of the Standard Model related to the neutrino and electroweak symmetry breaking sectors, with and without supersymmetry. The topics include a basic discussion of the theory of neutrino mass and the corresponding extensions of the Standard Model that incorporate massive neutrinos; an overview of the present observational status of neutrino mass searches, with emphasis on solar neutrinos, as well as cosmological data on the amplitude of primordial density fluctuations; the implications of neutrino mass in cosmological nucleosynthesis, non-accelerator, as well as in high energy particle collider experiments. Turning to the electroweak breaking sector, we discuss the physics potential for Higgs boson searches at LEP200, including Majorana extensions of the Standard Model, and the physics of invisibly decaying Higgs bosons. We discuss the minimal supersymmetric Standard Model phenomenology, as well as some of the laboratory signatures that would be associated to models with R parity violation, especially in Z and scalar boson decays. (author)

  12. Derivation, parameterization and validation of a creep deformation/rupture material constitutive model for SiC/SiC ceramic-matrix composites (CMCs)

    Directory of Open Access Journals (Sweden)

    Mica Grujicic

    2016-05-01

    The present work deals with the development of material constitutive models for creep-deformation and creep-rupture of SiC/SiC ceramic-matrix composites (CMCs) under general three-dimensional stress states. The models derived are aimed for use in finite element analyses of the performance, durability and reliability of CMC turbine blades used in gas-turbine engines. Towards that end, one set of available experimental data pertaining to the effect of stress magnitude and temperature on the time-dependent creep deformation and rupture, available in the open literature, is used to derive and parameterize material constitutive models for creep-deformation and creep-rupture. The two models derived are validated by using additional experimental data, also available in the open literature. To enable the use of the newly-developed CMC creep-deformation and creep-rupture models within a structural finite-element framework, the models are implemented in a user-material subroutine which can be readily linked with a finite-element program/solver. In this way, the performance and reliability of CMC components used in high-temperature high-stress applications, such as those encountered in gas-turbine engines, can be investigated computationally. Results of a preliminary finite-element analysis concerning the creep-deformation-induced contact between a gas-turbine engine blade and the shroud are presented and briefly discussed in the last portion of the paper. In this analysis, it is assumed that: (a) the blade is made of the SiC/SiC CMC; and (b) the creep-deformation behavior of the SiC/SiC CMC can be represented by the creep-deformation model developed in the present work.
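
    For orientation only, the sketch below evaluates a generic Norton power-law creep rate with Arrhenius temperature dependence and a crude strain-based rupture-time estimate; it is not the constitutive model derived in the paper, and all material constants are placeholders rather than fitted SiC/SiC values.

```python
import numpy as np

def norton_creep_rate(stress_mpa, temp_k, A=1.0e-10, n=4.0, Q=250.0e3):
    """Steady-state creep rate eps_dot = A * sigma^n * exp(-Q / (R*T)).
    A, n and Q are placeholder constants."""
    R = 8.314
    return A * stress_mpa ** n * np.exp(-Q / (R * temp_k))

def time_to_rupture(stress_mpa, temp_k, eps_rupture=0.01):
    """Crude rupture time: when accumulated creep strain reaches an assumed
    critical strain (a Monkman-Grant-like argument)."""
    return eps_rupture / norton_creep_rate(stress_mpa, temp_k)

for sigma in (100.0, 150.0, 200.0):                     # stress [MPa]
    hours = time_to_rupture(sigma, 1500.0) / 3600.0
    print(f"{sigma:.0f} MPa at 1500 K -> ~{hours:.0f} h to rupture")
```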

  13. The impact on UT/LS cirrus clouds in the CAM/CARMA model using a new interactive aerosol parameterization.

    Science.gov (United States)

    Maloney, C.; Toon, B.; Bardeen, C.

    2017-12-01

    Recent studies indicate that heterogeneous nucleation may play a large role in cirrus cloud formation in the UT/LS, a region previously thought to be primarily dominated by homogeneous nucleation. As a result, it is beneficial to ensure that general circulation models properly represent heterogeneous nucleation in ice cloud simulations. Our work strives towards addressing this issue in the NSF/DOE Community Earth System Model's atmospheric model, CAM. More specifically, we are addressing the role of heterogeneous nucleation in the coupled sectional microphysics cloud model, CARMA. Currently, our CAM/CARMA cirrus model only performs homogeneous ice nucleation while ignoring heterogeneous nucleation. In our work, we couple the CAM/CARMA cirrus model with the Modal Aerosol Model (MAM). By combining the aerosol model with CAM/CARMA we can both account for heterogeneous nucleation and directly link the sulfates used for homogeneous nucleation to computed fields instead of the current static field being utilized. Here we present our initial results and compare our findings to observations from the long-running CALIPSO and MODIS satellite missions.

  14. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    NARCIS (Netherlands)

    van Angelen, J.H.|info:eu-repo/dai/nl/325922470; Lenaerts, J.T.M.|info:eu-repo/dai/nl/314850163; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.|info:eu-repo/dai/nl/304831891; van den Broeke, M.R.|info:eu-repo/dai/nl/073765643; van Meijgaard, E.; Smeets, C.J.P.P.|info:eu-repo/dai/nl/191522236

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover,

  15. Strategies for control of sudden oak death in Humboldt County-informed guidance based on a parameterized epidemiological model

    Science.gov (United States)

    João A. N. Filipe; Richard C. Cobb; David M. Rizzo; Ross K. Meentemeyer; Christopher A.. Gilligan

    2010-01-01

    Landscape- to regional-scale models of plant epidemics are direly needed to predict large-scale impacts of disease and assess practicable options for control. While landscape heterogeneity is recognized as a major driver of disease dynamics, epidemiological models are rarely applied to realistic landscape conditions due to computational and data limitations. Here we...

  16. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
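
    The Gauss-Marquardt-Levenberg step at the heart of PEST/PEST++-style estimation is, in textbook form, the damped normal-equations update sketched below; this is a generic illustration on a synthetic linear problem, not PEST++ source code.

```python
import numpy as np

def gml_step(jac, residuals, weights, params, lam):
    """One Gauss-Marquardt-Levenberg update:
    dp = (J^T Q J + lambda*I)^-1 J^T Q r, with Q a diagonal weight matrix."""
    JtQ = jac.T @ np.diag(weights)
    lhs = JtQ @ jac + lam * np.eye(params.size)
    return params + np.linalg.solve(lhs, JtQ @ residuals)

# Synthetic problem: observations generated from a known linear model
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # Jacobian of a linear model
true_p = np.array([1.0, -2.0, 0.5])
obs = X @ true_p + 0.01 * rng.normal(size=20)

p = np.zeros(3)
for lam in (1.0, 0.1, 0.01):                 # crude damping schedule
    p = gml_step(X, obs - X @ p, np.ones(20), p, lam)
print(np.round(p, 3))                        # approaches true_p
```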

  17. A new Building Energy Model coupled with an Urban Canopy Parameterization for urban climate simulations—part II. Validation with one dimension off-line simulations

    Science.gov (United States)

    Salamanca, Francisco; Martilli, Alberto

    2010-01-01

    Recent studies show that the fluxes exchanged between buildings and the atmosphere play an important role in the urban climate. These fluxes are taken into account in mesoscale models considering new and more complex Urban Canopy Parameterizations (UCP). A standard methodology to test a UCP is to use one-dimensional (1D) off-line simulations. In this contribution, a UCP with and without a Building Energy Model (BEM) is run 1D off-line and the results are compared against the experimental data obtained in the BUBBLE measuring campaign over Basel (Switzerland) in 2002. The advantage of BEM is that it computes the evolution of the indoor building temperature as a function of energy production and consumption in the building, the radiation coming through the windows, and the fluxes of heat exchanged through the walls and roofs as well as the impact of the air conditioning system. This evaluation exercise is particularly significant since, for the period simulated, indoor temperatures were recorded. Different statistical parameters have been calculated over the entire simulated episode in order to compare the two versions of the UCP against measurements. In conclusion, with this work, we want to study the effect of BEM on the different turbulent fluxes and exploit the new possibilities that the UCP-BEM offers us, such as the impact of air conditioning systems and the evaluation of their energy consumption.
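
    A heavily simplified, single-node version of the energy balance a BEM solves is sketched below: indoor temperature evolves with envelope conduction, internal and solar gains, and an idealised air-conditioning term whose energy use is accumulated. All coefficients are illustrative assumptions, and this is not the formulation evaluated in the paper.

```python
import numpy as np

def simulate_indoor_temperature(t_out, q_internal, q_solar,
                                ua=200.0, c=5.0e6, t_set=24.0, ac_max=3000.0):
    """Lumped (single thermal mass) building model:
        C dT/dt = UA*(T_out - T_in) + Q_internal + Q_solar - Q_ac
    with UA [W/K], C [J/K] and Q terms [W]; cooling is capped at ac_max."""
    dt = 3600.0                                   # hourly steps [s]
    t_in = np.full(t_out.size, t_set)
    ac_energy = 0.0
    for k in range(1, t_out.size):
        load = ua * (t_out[k] - t_set) + q_internal[k] + q_solar[k]
        q_ac = np.clip(load, 0.0, ac_max)         # cooling needed to hold t_set
        dT = (ua * (t_out[k] - t_in[k - 1]) + q_internal[k]
              + q_solar[k] - q_ac) * dt / c
        t_in[k] = t_in[k - 1] + dT
        ac_energy += q_ac * dt
    return t_in, ac_energy / 3.6e6                # indoor T [C], cooling [kWh]

hours = np.arange(48)
t_out = 28.0 + 6.0 * np.sin(2.0 * np.pi * hours / 24.0)          # outdoor T [C]
q_int = np.full(48, 500.0)                                        # gains [W]
q_sol = np.clip(2000.0 * np.sin(2.0 * np.pi * hours / 24.0), 0.0, None)
t_in, kwh = simulate_indoor_temperature(t_out, q_int, q_sol)
print(f"final indoor T: {t_in[-1]:.1f} C, cooling energy: {kwh:.1f} kWh")
```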

  18. Parameterization of ice- and water clouds and their radiation-transport properties for large-scale atmospheric models

    International Nuclear Information System (INIS)

    Rockel, B.

    1988-01-01

    A model of cloud and radiation transport for large-scale atmospheric models is introduced, which besides the water phase also takes the ice phase into account. The cloud model can diagnostically determine the degree of cloud cover, liquid water and ice content from the state variables provided by the atmospheric model. It consists of four submodels for non-convective and convective cloudiness, boundary layer clouds and ice clouds. An existing radiation model was extended for the parametrization of the radiation transport in ice clouds. This model now allows the radiation transport to be calculated in water clouds as well as in ice clouds. Liquid and solid water phases can coexist according to a simple mixing assumption. The results of a sensitivity study show a strong reaction of the cloud cover degree to changes in the relative humidity. Compared with this, variations of temperature and vertical wind velocity are of minor importance. The model of radiation transport reacts most sensitively to variations of the cloud cover degree and ice content. Changes of these two factors by about 20% lead to changes in the average warming rates on the order of 0.1 K. (orig./KW) [de
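
    Relative-humidity-based cloud diagnostics of the kind described typically take a form like the widely used Sundqvist expression sketched below; the critical relative humidity is an assumed value, and this is not necessarily the exact scheme of the paper. It illustrates why cloud cover reacts so strongly to changes in relative humidity.

```python
import numpy as np

def diagnostic_cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-type diagnostic cloud fraction:
        C = 1 - sqrt((1 - RH) / (1 - RH_crit))   for RH > RH_crit, else 0."""
    rh = np.clip(rh, 0.0, 1.0)
    c = 1.0 - np.sqrt(np.clip((1.0 - rh) / (1.0 - rh_crit), 0.0, 1.0))
    return np.where(rh > rh_crit, c, 0.0)

for rh in (0.75, 0.85, 0.90, 0.95, 1.00):
    print(rh, float(np.round(diagnostic_cloud_fraction(rh), 2)))
# Small RH changes near saturation translate into large cloud-cover changes.
```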

  19. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    Science.gov (United States)

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  20. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    Science.gov (United States)

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
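
    The kinds of reductions TSPROC performs (flow volumes, seasonal statistics, simple hydrologic indices) can be mimicked for illustration with pandas, as in the sketch below on a synthetic daily flow series; this is not TSPROC's scripting syntax.

```python
import numpy as np
import pandas as pd

# Synthetic daily streamflow series [m3/s] over four years
idx = pd.date_range("2000-01-01", "2003-12-31", freq="D")
doy = idx.dayofyear.to_numpy()
flow = 5.0 + 4.0 * np.sin(2.0 * np.pi * (doy - 90) / 365.25) \
       + 0.5 * np.random.randn(idx.size)
q = pd.Series(np.clip(flow, 0.1, None), index=idx, name="flow")

annual_volume = (q * 86400).groupby(q.index.year).sum()     # m3 per year
seasonal_mean = q.groupby([q.index.year, q.index.quarter]).mean()
q7_min = q.rolling(7).mean().groupby(q.index.year).min()    # 7-day low flow

print(annual_volume.round(0))
print(seasonal_mean.round(2).head(8))
print(q7_min.round(2))
```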

  1. Sensitivity of Tropical Cyclones to Resolution, Convection Scheme and Ocean Flux Parameterization over Eastern Tropical Pacific and Tropical North Atlantic Oceans in RegCM4 Model

    Science.gov (United States)

    Fuentes-Franco, Ramon; Giorgi, Filippo; Coppola, Erika; Zimmermann, Klaus

    2016-04-01

    The sensitivity of simulated tropical cyclones (TCs) to resolution and convection scheme parameterization is investigated over the CORDEX Central America domain. The performance of the simulations, performed for a ten-year period (1989-1998) using ERA-Interim reanalysis as boundary and initial conditions, is assessed considering 50 km and 25 km resolution and the use of two different convection schemes: Emanuel (Em) and Kain-Fritsch (KF). Two ocean surface flux schemes are also compared: the Monin-Obukhov scheme and the one proposed by Zeng et al. (1998). By comparing with observations, for the whole period we assess the spatial representation of the TCs and their intensity. At the interannual scale we assess the representation of their variability, and at the daily scale we compare observed and simulated tracks in order to establish a measure of how similar the simulated tracks are to those observed. In general the simulations using the KF convection scheme show higher TC density, as well as longer-duration TCs (up to 15 days) with stronger winds (> 50 m s-1) than those using Em (< 40 m s-1). Similar results were found for simulations using 25 km with respect to 50 km resolution. All simulations show a better spatial representation of simulated TC density and its interannual variability over the Tropical North Atlantic Ocean (TNA) than over the Eastern Tropical Pacific Ocean (ETP). The 25 km resolution simulations show an overestimation of TC density compared to observations over the ETP off the coast of Mexico. The duration of the TCs in simulations using 25 km resolution is similar to the observations, while it is underestimated at 50 km resolution. The Monin-Obukhov ocean flux scheme overestimates the number of TCs, while the Zeng parameterization gives a number similar to observations in both oceans. At the daily scale, in general all simulations capture the density of cyclones during highly active TC seasons over the TNA; however, the tracks generally are not coincident with observations, except for highly

  2. Modelling winter organic aerosol at the European scale with CAMx: evaluation and source apportionment with a VBS parameterization based on novel wood burning smog chamber experiments

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2017-06-01

    We evaluated a modified VBS (volatility basis set) scheme to treat biomass-burning-like organic aerosol (BBOA) implemented in CAMx (Comprehensive Air Quality Model with extensions). The updated scheme was parameterized with novel wood combustion smog chamber experiments using a hybrid VBS framework which accounts for a mixture of wood burning organic aerosol precursors and their further functionalization and fragmentation in the atmosphere. The new scheme was evaluated for one of the winter EMEP intensive campaigns (February–March 2009) against aerosol mass spectrometer (AMS) measurements performed at 11 sites in Europe. We found a considerable improvement for the modelled organic aerosol (OA) mass compared to our previous model application with the mean fractional bias (MFB) reduced from −61 to −29 %. We performed model-based source apportionment studies and compared results against positive matrix factorization (PMF) analysis performed on OA AMS data. Both model and observations suggest that OA was mainly of secondary origin at almost all sites. Modelled secondary organic aerosol (SOA) contributions to total OA varied from 32 to 88 % (with an average contribution of 62 %) and absolute concentrations were generally under-predicted. Modelled primary hydrocarbon-like organic aerosol (HOA) and primary biomass-burning-like aerosol (BBPOA) fractions contributed to a lesser extent (HOA from 3 to 30 %, and BBPOA from 1 to 39 %) with average contributions of 13 and 25 %, respectively. Modelled BBPOA fractions were found to represent 12 to 64 % of the total residential-heating-related OA, with increasing contributions at stations located in the northern part of the domain. Source apportionment studies were performed to assess the contribution of residential and non-residential combustion precursors to the total SOA. Non-residential combustion and road transportation sector contributed about 30–40 % to SOA formation (with increasing

  3. Modelling winter organic aerosol at the European scale with CAMx: evaluation and source apportionment with a VBS parameterization based on novel wood burning smog chamber experiments

    Science.gov (United States)

    Ciarelli, Giancarlo; Aksoyoglu, Sebnem; El Haddad, Imad; Bruns, Emily A.; Crippa, Monica; Poulain, Laurent; Äijälä, Mikko; Carbone, Samara; Freney, Evelyn; O'Dowd, Colin; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    We evaluated a modified VBS (volatility basis set) scheme to treat biomass-burning-like organic aerosol (BBOA) implemented in CAMx (Comprehensive Air Quality Model with extensions). The updated scheme was parameterized with novel wood combustion smog chamber experiments using a hybrid VBS framework which accounts for a mixture of wood burning organic aerosol precursors and their further functionalization and fragmentation in the atmosphere. The new scheme was evaluated for one of the winter EMEP intensive campaigns (February-March 2009) against aerosol mass spectrometer (AMS) measurements performed at 11 sites in Europe. We found a considerable improvement for the modelled organic aerosol (OA) mass compared to our previous model application with the mean fractional bias (MFB) reduced from -61 to -29 %. We performed model-based source apportionment studies and compared results against positive matrix factorization (PMF) analysis performed on OA AMS data. Both model and observations suggest that OA was mainly of secondary origin at almost all sites. Modelled secondary organic aerosol (SOA) contributions to total OA varied from 32 to 88 % (with an average contribution of 62 %) and absolute concentrations were generally under-predicted. Modelled primary hydrocarbon-like organic aerosol (HOA) and primary biomass-burning-like aerosol (BBPOA) fractions contributed to a lesser extent (HOA from 3 to 30 %, and BBPOA from 1 to 39 %) with average contributions of 13 and 25 %, respectively. Modelled BBPOA fractions were found to represent 12 to 64 % of the total residential-heating-related OA, with increasing contributions at stations located in the northern part of the domain. Source apportionment studies were performed to assess the contribution of residential and non-residential combustion precursors to the total SOA. Non-residential combustion and road transportation sector contributed about 30-40 % to SOA formation (with increasing contributions at urban and near
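
    The mean fractional bias quoted above is a standard model-evaluation metric. A short sketch, assuming the usual definition MFB = (2/N) Σ (M − O)/(M + O) expressed in percent, with paired model (M) and observed (O) concentrations (the example values are illustrative only):

    ```python
    import numpy as np

    def mean_fractional_bias(model, obs):
        """Mean fractional bias in percent: MFB = 100 * (2/N) * sum((M-O)/(M+O))."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        return 100.0 * 2.0 * np.mean((model - obs) / (model + obs))

    # An under-prediction gives a negative MFB, as in the reported
    # improvement from -61 % to -29 %.
    print(mean_fractional_bias([1.0, 2.0], [2.0, 3.0]))  # about -53 %
    ```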

  4. On the Relationship between Observed NLDN Lightning Strikes and Modeled Convective Precipitation Rates: Parameterization of Lightning NOx Production in CMAQ

    Science.gov (United States)

    Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past dec...

  5. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
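
    A minimal sketch of the simple-random-sampling variant of this idea (the paper also develops spatially stratified schemes): the infectious pressure on each susceptible is approximated by summing a spatial kernel over a random subsample of infectives and rescaling by the inverse sampling fraction. The power-law kernel, its parameters, and the toy coordinates are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kernel(d, beta, alpha):
        # Power-law spatial kernel, a common ILM choice (illustrative).
        return beta * d ** (-alpha)

    def approx_infection_prob(suscept_xy, infect_xy, beta, alpha, sample_frac=0.1):
        """Approximate per-susceptible infection probabilities for one time step
        by summing the kernel over a simple random sample of infectives and
        rescaling by the inverse sampling fraction."""
        n_inf = len(infect_xy)
        m = max(1, int(sample_frac * n_inf))
        idx = rng.choice(n_inf, size=m, replace=False)
        d = np.linalg.norm(suscept_xy[:, None, :] - infect_xy[idx][None, :, :], axis=2)
        pressure = kernel(d, beta, alpha).sum(axis=1) * (n_inf / m)
        return 1.0 - np.exp(-pressure)

    # Toy example: 1000 susceptibles and 200 infectives on a unit square.
    S = rng.random((1000, 2))
    I = rng.random((200, 2))
    p = approx_infection_prob(S, I, beta=0.2, alpha=2.0, sample_frac=0.1)
    ```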

  6. Cabin Environment Physics Risk Model

    Science.gov (United States)

    Mattenberger, Christopher J.; Mathias, Donovan Leigh

    2014-01-01

    This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of crew-critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.

  7. Meta-analysis of field-saturated hydraulic conductivity recovery following wildland fire: Applications for hydrologic model parameterization and resilience assessment

    Science.gov (United States)

    Ebel, Brian A.; Martin, Deborah

    2017-01-01

    Hydrologic recovery after wildfire is critical for restoring the ecosystem services of protecting human lives and infrastructure from hazards and delivering a water supply of sufficient quality and quantity. Recovery of soil-hydraulic properties, such as field-saturated hydraulic conductivity (Kfs), is a key factor for assessing the duration of watershed-scale flash flood and debris flow risks after wildfire. Despite the crucial role of Kfs in parameterizing numerical hydrologic models to predict the magnitude of postwildfire run-off and erosion, existing quantitative relations to predict Kfs recovery with time since wildfire are lacking. Here, we conduct meta-analyses of 5 datasets from the literature that measure or estimate Kfs with time since wildfire over durations longer than 3 years. The meta-analyses focus on fitting 2 quantitative relations (linear and non-linear logistic) to explain trends in Kfs temporal recovery. The 2 relations adequately described temporal recovery except for 1 site where macropore flow dominated infiltration and Kfs recovery. This work also suggests that Kfs can have low hydrologic resistance (large postfire changes), and moderate to high hydrologic stability (recovery time relative to disturbance recurrence interval) and resilience (recovery of hydrologic function and provision of ecosystem services). Future Kfs relations could more explicitly incorporate processes such as soil-water repellency, ground cover and soil structure regeneration, macropore recovery, and vegetation regrowth.
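
    A short sketch of fitting the two kinds of recovery relations named above (linear and non-linear logistic) to Kfs observations versus time since fire. The data values, parameter names and initial guesses are hypothetical, not taken from the meta-analysis.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical field-saturated hydraulic conductivity (mm/h) vs years since fire.
    t = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
    kfs = np.array([4.0, 5.0, 8.0, 15.0, 22.0, 26.0, 28.0])

    def logistic(t, k_final, k0, r, t_mid):
        """Non-linear logistic recovery toward an asymptotic (recovered) Kfs."""
        return k0 + (k_final - k0) / (1.0 + np.exp(-r * (t - t_mid)))

    popt, _ = curve_fit(logistic, t, kfs, p0=[30.0, 4.0, 1.0, 2.0], maxfev=10000)
    lin = np.polyfit(t, kfs, 1)  # linear alternative: Kfs = a*t + b

    print("logistic parameters:", popt)
    print("linear slope/intercept:", lin)
    ```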

  8. Comparison of mean properties of simulated convection in a cloud-resolving model with those produced by cumulus parameterization

    Energy Technology Data Exchange (ETDEWEB)

    Dudhia, J.; Parsons, D.B. [National Center for Atmospheric Research, Boulder, CO (United States)]

    1996-04-01

    An Intensive Observation Period (IOP) of the Atmospheric Radiation Measurement (ARM) Program took place at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site from June 16-26, 1993. The National Center for Atmospheric Research (NCAR)/Penn State Mesoscale Model (MM5) has been used to simulate this period on a 60-km domain with 20- and 6.67-km nests centered on Lamont, Oklahoma. Simulations are being run with data assimilation by the nudging technique to incorporate upper-air and surface data from a variety of platforms. The model maintains dynamical consistency between the fields, while the data correct for model biases that may occur during long-term simulations and provide boundary conditions. For the work reported here, 3-hourly analyses from the National Oceanic and Atmospheric Administration (NOAA) Mesoscale Atmospheric Prediction System (MAPS) were used to drive the 60-km domain, while the inner domains were unforced. A continuous 10-day period was simulated.

  9. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    Directory of Open Access Journals (Sweden)

    J. H. van Angelen

    2012-10-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6 %) at the K-transect (west Greenland) for the period 2004–2009 is considerably reduced compared to the previous density-dependent albedo scheme (+22 %). To simulate realistic snow albedo values, a small concentration of black carbon is needed, which has the strongest impact on melt in the accumulation area. A background ice albedo field derived from MODIS imagery improves the agreement between the modeled and observed SMB gradient along the K-transect. The effect of enhanced meltwater retention and refreezing is a decrease of the albedo due to an increase in snow grain size. As a secondary effect of refreezing the snowpack is heated, enhancing melt and further lowering the albedo. Especially in a warmer climate this process is important, since it reduces the refreezing potential of the firn layer that covers the Greenland Ice Sheet.

  10. Sensitivity of a two-dimensional chemistry-transport model to changes in parameterizations of radiative processes

    International Nuclear Information System (INIS)

    Grant, K.E.; Ellingson, R.G.; Wuebbles, D.J.

    1988-08-01

    Radiative processes strongly affect equilibrium trace gas concentrations both directly, through photolysis reactions, and indirectly through temperature and transport processes. As part of our continuing radiative submodel development and validation, we have used the LLNL 2-D chemical-radiative-transport (CRT) model to investigate the net sensitivity of equilibrium ozone concentrations to several changes in radiative forcing. Doubling CO2 from 300 ppmv to 600 ppmv resulted in a temperature decrease of 5 K to 8 K in the middle stratosphere along with an 8% to 16% increase in ozone in the same region. Replacing our usual shortwave scattering algorithms with a simplified Rayleigh algorithm led to a 1% to 2% increase in ozone in the lower stratosphere. Finally, modifying our normal CO2 cooling rates by corrections derived from line-by-line calculations resulted in several regions of heating and cooling. We observed temperature changes on the order of 1 K to 1.5 K with corresponding changes of 0.5% to 1.5% in O3. Our results for doubled CO2 compare favorably with those by other authors. Results for our two perturbation scenarios stress the need for accurately modeling radiative processes while confirming the general validity of current 2-D CRT models. 15 refs., 5 figs.

  11. Enhancement of hydrological parameterization and its impact on atmospheric modeling: a WRF-Hydro case study in the upper Heihe river basin, China

    Science.gov (United States)

    Zhang, Zhenyu; Arnault, Joel; Wagner, Sven; Kunstmann, Harald

    2017-04-01

    The upper Heihe river basin (10,020 km2) is situated in the alpine region of northwestern China, where gauge coverage is poor. Water-related activity is essential for the human economy in this region, which requires detailed knowledge of the available water resources. However, the lack of hydro-meteorological data makes any water balance investigation challenging. The use of regional atmospheric models can compensate for this lack of data. The aim of this study is to investigate what improvement can be gained by enhancing the hydrological parameterization in atmospheric models. For this purpose, we employ the Weather Research and Forecasting model (WRF) and its coupled atmospheric-hydrological version (WRF-Hydro). In comparison to WRF, WRF-Hydro integrates horizontal terrestrial water transport at the land surface and subsurface. Atmospheric processes are downscaled from ECMWF operational analysis to 4 km resolution, and lateral terrestrial water flows are resolved on a sub-grid at 400 m. The study period is 2008-2009, during which observed discharge is available at three gauge stations. The joint terrestrial-atmospheric water budget is investigated in both WRF and WRF-Hydro. In WRF-Hydro, overland flow and re-infiltration increase the soil water storage, consequently increasing evapotranspiration and decreasing river runoff. This change in evapotranspiration influences moisture convergence in the atmosphere, and slightly changes precipitation patterns. Comparing model results with in-situ and gridded datasets (ITP-CAS forcing data, FLUXNET-MTE), WRF-Hydro shows improvements in the simulation of precipitation and evapotranspiration. The ability of WRF-Hydro to reproduce observed streamflow is also demonstrated.

  12. Using Leaf Chlorophyll to Parameterize Light-Use-Efficiency Within a Thermal-Based Carbon, Water and Energy Exchange Model

    Science.gov (United States)

    Houborg, Rasmus; Anderson, Martha C.; Daughtry, C. S. T.; Kustas, W. P.; Rodell, Matthew

    2010-01-01

    Chlorophylls absorb photosynthetically active radiation and thus function as vital pigments for photosynthesis, which makes leaf chlorophyll content (Cab) useful for monitoring vegetation productivity and an important indicator of the overall plant physiological condition. This study investigates the utility of integrating remotely sensed estimates of Cab into a thermal-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. The LUE model component computes canopy-scale carbon assimilation and transpiration fluxes and incorporates LUE modifications from a nominal (species-dependent) value (LUEn) in response to short-term variations in environmental conditions. However, LUEn may need adjustment on a daily timescale to accommodate changes in plant phenology, physiological condition and nutrient status. Day-to-day variations in LUEn were assessed for a heterogeneous corn crop field in Maryland, U.S.A. through model calibration with eddy covariance CO2 flux tower observations. The optimized daily LUEn values were then compared to estimates of Cab integrated from gridded maps of chlorophyll content weighted over the tower flux source area. The time-continuous maps of daily Cab over the study field were generated by fusing in-situ measurements with retrievals generated with an integrated radiative transfer modeling tool (accurate to within +/-10%) using at-sensor radiances in green, red and near-infrared wavelengths acquired with an aircraft imaging system. The resultant daily changes in Cab within the tower flux source area generally correlated well with corresponding changes in daily calibrated LUEn derived from the tower flux data, and hourly water, energy and carbon flux estimation accuracies from TSEB were significantly improved when using Cab for delineating spatio

  13. Excellence in Physics Education Award: Modeling Theory for Physics Instruction

    Science.gov (United States)

    Hestenes, David

    2014-03-01

    All humans create mental models to plan and guide their interactions with the physical world. Science has greatly refined and extended this ability by creating and validating formal scientific models of physical things and processes. Research in physics education has found that mental models created from everyday experience are largely incompatible with scientific models. This suggests that the fundamental problem in learning and understanding science is coordinating mental models with scientific models. Modeling Theory has drawn on resources of cognitive science to work out extensive implications of this suggestion and guide development of an approach to science pedagogy and curriculum design called Modeling Instruction. Modeling Instruction has been widely applied to high school physics and, more recently, to chemistry and biology, with noteworthy results.

  14. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with 3821-500-100-50-1 nodes in the successive layers. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metal abundance ([Fe/H]). The results show that the stacked-autoencoder deep neural network achieves a better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg (Teff/K), 0.1706 for lg (g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg (Teff/K), 0.0214 for lg (g/(cm·s-2)), and 0.0121 dex for [Fe/H].
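
    A minimal PyTorch sketch of a feed-forward regressor with the layer sizes quoted in the abstract (3821-500-100-50-1). The paper pre-trains these layers as stacked autoencoders before fine-tuning, which is omitted here; the activation choice, loss, optimizer and the random batch are assumptions for illustration.

    ```python
    import torch
    import torch.nn as nn

    class SpectrumRegressor(nn.Module):
        """Five-layer network mapping a 3821-point spectrum to one parameter."""
        def __init__(self, n_flux=3821):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_flux, 500), nn.Sigmoid(),
                nn.Linear(500, 100), nn.Sigmoid(),
                nn.Linear(100, 50), nn.Sigmoid(),
                nn.Linear(50, 1),  # one atmospheric parameter, e.g. Teff
            )

        def forward(self, x):
            return self.net(x)

    model = SpectrumRegressor()
    loss_fn = nn.L1Loss()  # reported accuracies are mean absolute errors
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Hypothetical batch: 64 normalized spectra of 3821 flux points each.
    x = torch.randn(64, 3821)
    y = torch.randn(64, 1)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    ```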

  15. Modelling evapotranspiration at three boreal forest stands using the CLASS: tests of parameterizations for canopy conductance and soil evaporation

    Science.gov (United States)

    Bartlett, Paul A.; McCaughey, J. Harry; Lafleur, Peter M.; Verseghy, Diana L.

    2003-03-01

    The performance of the Canadian Land Surface Scheme (CLASS) was evaluated in off-line runs, using data collected at three boreal forest stands located near Thompson, Manitoba: young jack pine, mature jack pine, and mature black spruce. The data were collected in the late spring through autumn of 1994 and 1996, as part of the Boreal Ecosystem-Atmosphere Study (BOREAS). The diurnal range in modelled soil heat flux was exaggerated at all sites. Soil evaporation was modelled poorly at the jack pine stands, with overestimation common and a step change to low evaporation as the soil dried. Replacing the soil evaporation algorithm, which was based on the estimation of a surface relative humidity value, with one based on soil moisture in the top soil layer reduced the overestimation and eliminated the step changes. Modelled water movement between soil layers was too slow at the jack pine stands. Modifying the soil hydraulic parameters to match an observed characteristic curve at the young jack pine stand produced a soil water suction that agreed more closely with measurements and improved drainage between soil layers. The latent heat flux was overestimated and the sensible heat flux underestimated at all three stands. New Jarvis-Stewart-type canopy conductance algorithms were developed from stomatal conductance measurements. At the jack pine stands, stomatal conductance scaled by leaf area index reproduced canopy conductance, but a reduction in the scaled stomatal conductance by one half was necessary at the black spruce stand, indicating a nonlinearity in the scaling of stomatal conductance for this ecosystem. The root-mean-squared error for daily average latent heat flux for the control run of the CLASS and for the best test run are 49 W m-2 and 14 W m-2 respectively at the young jack pine stand, 50 W m-2 and 15 W m-2 respectively at the old jack pine stand, and 48 W m-2 and 13 W m-2 respectively at the old black spruce stand.
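
    A sketch of a Jarvis-Stewart-type canopy conductance of the kind described above: a maximum stomatal conductance reduced by multiplicative environmental stress functions and scaled by leaf area index, with the optional factor of one half for the black spruce stand. The stress-function forms and constants are illustrative assumptions, not the fitted values from the study.

    ```python
    import numpy as np

    def canopy_conductance(lai, par, vpd, soil_frac, gs_max=5e-3, spruce_half=False):
        """Jarvis-Stewart-type canopy conductance (m/s): gs_max reduced by
        multiplicative stress functions and scaled by leaf area index."""
        f_par = par / (par + 100.0)            # light response, PAR in W/m2
        f_vpd = np.exp(-0.5 * vpd)             # vapour pressure deficit (kPa)
        f_soil = np.clip(soil_frac, 0.0, 1.0)  # available soil-moisture fraction
        gc = lai * gs_max * f_par * f_vpd * f_soil
        if spruce_half:
            gc *= 0.5  # abstract: halve scaled stomatal conductance for black spruce
        return gc

    print(canopy_conductance(lai=2.5, par=400.0, vpd=1.2, soil_frac=0.8))
    ```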

  16. Models and structures: mathematical physics

    International Nuclear Information System (INIS)

    2003-01-01

    This document gathers research activities along 5 main directions. 1) Quantum chaos and dynamical systems. Recent results concern the extension of the exact WKB method, which has led to a host of new results on the spectrum and wave functions. Progress has also been made in the description of the wave functions of chaotic quantum systems. Renormalization has been applied to the analysis of dynamical systems. 2) Combinatorial statistical physics. We see the emergence of new techniques applied to various combinatorial problems, from random walks to random lattices. 3) Integrability: from structures to applications. Techniques of conformal field theory and integrable model systems have been developed. Progress continues, in particular for open systems with boundary conditions, in connection with string and brane physics. Noticeable links between integrability, exact WKB quantization and 2-dimensional disordered systems have been highlighted. New correlations of eigenvalues and better connections to integrability have been formulated for random matrices. 4) Gravities and string theories. We have developed aspects of 2-dimensional string theory with a particular emphasis on its connection to matrix models as well as non-perturbative properties of M-theory. We have also followed an alternative path known as loop quantum gravity. 5) Quantum field theory. The results obtained lately concern its foundations, in flat or curved spaces, but also applications to second-order phase transitions in statistical systems.

  17. Models and structures: mathematical physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document gathers research activities along 5 main directions. 1) Quantum chaos and dynamical systems. Recent results concern the extension of the exact WKB method, which has led to a host of new results on the spectrum and wave functions. Progress has also been made in the description of the wave functions of chaotic quantum systems. Renormalization has been applied to the analysis of dynamical systems. 2) Combinatorial statistical physics. We see the emergence of new techniques applied to various combinatorial problems, from random walks to random lattices. 3) Integrability: from structures to applications. Techniques of conformal field theory and integrable model systems have been developed. Progress continues, in particular for open systems with boundary conditions, in connection with string and brane physics. Noticeable links between integrability, exact WKB quantization and 2-dimensional disordered systems have been highlighted. New correlations of eigenvalues and better connections to integrability have been formulated for random matrices. 4) Gravities and string theories. We have developed aspects of 2-dimensional string theory with a particular emphasis on its connection to matrix models as well as non-perturbative properties of M-theory. We have also followed an alternative path known as loop quantum gravity. 5) Quantum field theory. The results obtained lately concern its foundations, in flat or curved spaces, but also applications to second-order phase transitions in statistical systems.

  18. Sensitivity of tropical cyclones to resolution, convection scheme and ocean flux parameterization over Eastern Tropical Pacific and Tropical North Atlantic Oceans in the RegCM4 model

    Science.gov (United States)

    Fuentes-Franco, Ramón; Giorgi, Filippo; Coppola, Erika; Zimmermann, Klaus

    2017-07-01

    The sensitivity of simulated tropical cyclones (TCs) to resolution, convection scheme and ocean surface flux parameterization is investigated with a regional climate model (RegCM4) over the CORDEX Central America domain, including the Tropical North Atlantic (TNA) and Eastern Tropical Pacific (ETP) basins. Simulations for the TC seasons of the ten-year period (1989-1998) driven by ERA-Interim reanalysis fields are completed using 50 and 25 km grid spacing, two convection schemes (Emanuel, Em; and Kain-Fritsch, KF) and two ocean surface flux representations, a Monin-Obukhov scheme available in the BATS land surface package (Dickinson et al. 1993), and the scheme of Zeng et al. (J Clim 11(10):2628-2644, 1998). The model performance is assessed against observed TC characteristics for the simulation period. In general, different sensitivities are found over the two basins investigated. The simulations using the KF scheme show higher TC density, longer TC duration (up to 15 days) and stronger peak winds (>50 ms-1) than those using Em (<40 ms-1). All simulations show a better spatial representation of simulated TC density and interannual variability over the TNA than over the ETP. The 25 km resolution simulations show greater TC density, duration and intensity compared to the 50 km resolution ones, especially over the ETP basin, and generally more in line with observations. Simulated TCs show a strong sensitivity to ocean fluxes, especially over the TNA basin, with the Monin-Obukhov scheme leading to an overestimate of the TC number, and the Zeng scheme being closer to observations. All simulations capture the density of cyclones during active TC seasons over the TNA, however, without data assimilation, the tracks of individual events do not match closely the corresponding observed ones. Overall, the best model performance is obtained when using the KF and Zeng schemes at 25 km grid spacing.

  19. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    Science.gov (United States)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring networks, and particularly the rapid geodynamics, which clearly demonstrate some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool, FiSH (Pace et al., 2016), that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps.

  20. Improving representation of convective transport for scale-aware parameterization: 2. Analysis of cloud-resolving model simulations

    Science.gov (United States)

    Liu, Yi-Chin; Fan, Jiwen; Zhang, Guang J.; Xu, Kuan-Man; Ghan, Steven J.

    2015-04-01

    Following Part I, in which 3-D cloud-resolving model (CRM) simulations of a squall line and mesoscale convective complex in the midlatitude continental and the tropical regions are conducted and evaluated, we examine the scale dependence of eddy transport of water vapor, evaluate different eddy transport formulations, and improve the representation of convective transport across all scales by proposing a new formulation that more accurately represents the CRM-calculated eddy flux. CRM results show that there are strong grid-spacing dependencies of updraft and downdraft fractions regardless of altitudes, cloud life stage, and geographical location. As for the eddy transport of water vapor, updraft eddy flux is a major contributor to total eddy flux in the lower and middle troposphere. However, downdraft eddy transport can be as large as updraft eddy transport in the lower atmosphere especially at the mature stage of midlatitude continental convection. We show that the single-updraft approach significantly underestimates updraft eddy transport of water vapor because it fails to account for the large internal variability of updrafts, while a single downdraft represents the downdraft eddy transport of water vapor well. We find that using as few as three updrafts can account for the internal variability of updrafts well. Based on the evaluation with the CRM simulated data, we recommend a simplified eddy transport formulation that considers three updrafts and one downdraft. Such formulation is similar to the conventional one but much more accurately represents CRM-simulated eddy flux across all grid scales.
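
    A sketch of the kind of decomposition discussed above: the grid-mean eddy flux of water vapour over the CRM columns inside one GCM grid box (single level) is split exactly into contributions from a few updraft strength bins, a single downdraft, and the environment. The vertical-velocity thresholds and bin choices are illustrative assumptions, not the formulation recommended in the paper.

    ```python
    import numpy as np

    def eddy_flux_decomposition(w, q, n_updraft_bins=3, w_thresh=0.5):
        """Decompose <w'q'> over the CRM columns of one grid box into updraft
        bins, one downdraft, and the environment (contributions sum to total)."""
        w, q = np.asarray(w, float), np.asarray(q, float)
        wp, qp = w - w.mean(), q - q.mean()
        total = np.mean(wp * qp)

        def partial(mask):
            # Area-weighted contribution of a subdomain to the grid-mean flux.
            return mask.mean() * np.mean(wp[mask] * qp[mask]) if mask.any() else 0.0

        up = w > w_thresh
        down = w < -w_thresh
        env = ~(up | down)
        up_parts = []
        if up.any():
            # Split updrafts into equal-population strength bins.
            edges = np.quantile(w[up], np.linspace(0, 1, n_updraft_bins + 1))
            for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
                last = i == n_updraft_bins - 1
                sel = up & (w >= lo) & ((w <= hi) if last else (w < hi))
                up_parts.append(partial(sel))
        return {"total": total, "updrafts": up_parts,
                "downdraft": partial(down), "environment": partial(env)}
    ```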

  1. New and extended parameterization of the thermodynamic model AIOMFAC: calculation of activity coefficients for organic-inorganic mixtures containing carboxyl, hydroxyl, carbonyl, ether, ester, alkenyl, alkyl, and aromatic functional groups

    Science.gov (United States)

    Zuend, A.; Marcolli, C.; Booth, A. M.; Lienhard, D. M.; Soonsin, V.; Krieger, U. K.; Topping, D. O.; McFiggans, G.; Peter, T.; Seinfeld, J. H.

    2011-09-01

    We present a new and considerably extended parameterization of the thermodynamic activity coefficient model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) at room temperature. AIOMFAC combines a Pitzer-like electrolyte solution model with a UNIFAC-based group-contribution approach and explicitly accounts for interactions between organic functional groups and inorganic ions. Such interactions constitute the salt-effect, may cause liquid-liquid phase separation, and affect the gas-particle partitioning of aerosols. The previous AIOMFAC version was parameterized for alkyl and hydroxyl functional groups of alcohols and polyols. With the goal to describe a wide variety of organic compounds found in atmospheric aerosols, we extend here the parameterization of AIOMFAC to include the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkenyl, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon. Thermodynamic equilibrium data of organic-inorganic systems from the literature are critically assessed and complemented with new measurements to establish a comprehensive database. The database is used to determine simultaneously the AIOMFAC parameters describing interactions of organic functional groups with the ions H+, Li+, Na+, K+, NH4+, Mg2+, Ca2+, Cl-, Br-, NO3-, HSO4-, and SO42-. Detailed descriptions of different types of thermodynamic data, such as vapor-liquid, solid-liquid, and liquid-liquid equilibria, and their use for the model parameterization are provided. Issues regarding deficiencies of the database, types and uncertainties of experimental data, and limitations of the model, are discussed. The challenging parameter optimization problem is solved with a novel combination of powerful global minimization algorithms. A number of exemplary calculations for systems containing atmospherically relevant aerosol components are shown. Amongst others, we discuss aqueous mixtures of ammonium sulfate with

  2. New and extended parameterization of the thermodynamic model AIOMFAC: calculation of activity coefficients for organic-inorganic mixtures containing carboxyl, hydroxyl, carbonyl, ether, ester, alkenyl, alkyl, and aromatic functional groups

    Directory of Open Access Journals (Sweden)

    A. Zuend

    2011-09-01

    We present a new and considerably extended parameterization of the thermodynamic activity coefficient model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) at room temperature. AIOMFAC combines a Pitzer-like electrolyte solution model with a UNIFAC-based group-contribution approach and explicitly accounts for interactions between organic functional groups and inorganic ions. Such interactions constitute the salt-effect, may cause liquid-liquid phase separation, and affect the gas-particle partitioning of aerosols. The previous AIOMFAC version was parameterized for alkyl and hydroxyl functional groups of alcohols and polyols. With the goal to describe a wide variety of organic compounds found in atmospheric aerosols, we extend here the parameterization of AIOMFAC to include the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkenyl, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon. Thermodynamic equilibrium data of organic-inorganic systems from the literature are critically assessed and complemented with new measurements to establish a comprehensive database. The database is used to determine simultaneously the AIOMFAC parameters describing interactions of organic functional groups with the ions H+, Li+, Na+, K+, NH4+, Mg2+, Ca2+, Cl-, Br-, NO3-, HSO4-, and SO42-. Detailed descriptions of different types of thermodynamic data, such as vapor-liquid, solid-liquid, and liquid-liquid equilibria, and their use for the model parameterization are provided. Issues regarding deficiencies of the database, types and uncertainties of experimental data, and limitations of the model, are discussed. The challenging parameter optimization problem is solved with a novel combination of powerful global minimization

  3. Improving Representation of Convective Transport for Scale-Aware Parameterization, Part II: Analysis of Cloud-Resolving Model Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yi-Chin; Fan, Jiwen; Zhang, Guang J.; Xu, Kuan-Man; Ghan, Steven J.

    2015-04-27

    Following Part I, in which 3-D cloud-resolving model (CRM) simulations of a squall line and mesoscale convective complex in the mid-latitude continental and the tropical regions are conducted and evaluated, we examine the scale-dependence of eddy transport of water vapor, evaluate different eddy transport formulations, and improve the representation of convective transport across all scales by proposing a new formulation that more accurately represents the CRM-calculated eddy flux. CRM results show that there are strong grid-spacing dependencies of updraft and downdraft fractions regardless of altitudes, cloud life stage, and geographical location. As for the eddy transport of water vapor, updraft eddy flux is a major contributor to total eddy flux in the lower and middle troposphere. However, downdraft eddy transport can be as large as updraft eddy transport in the lower atmosphere especially at the mature stage of mid-latitude continental convection. We show that the single updraft approach significantly underestimates updraft eddy transport of water vapor because it fails to account for the large internal variability of updrafts, while a single downdraft represents the downdraft eddy transport of water vapor well. We find that using as few as 3 updrafts can account for the internal variability of updrafts well. Based on evaluation with the CRM simulated data, we recommend a simplified eddy transport formulation that considers three updrafts and one downdraft. Such formulation is similar to the conventional one but much more accurately represents CRM-simulated eddy flux across all grid scales.

  4. Why is the simulated climatology of tropical cyclones so sensitive to the choice of cumulus parameterization scheme in the WRF model?

    Science.gov (United States)

    Zhang, Chunxi; Wang, Yuqing

    2018-01-01

    The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by differences in the low-level circulation in a height and sorted-moisture space. By transporting moist static energy from dry to moist regions, the low-level circulation is important to convective self-aggregation, which is believed to be related to the genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions; in this case, the diabatic cooling can still drive the low-level circulation, but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of cumulus momentum transport (CMT) in a CP scheme can considerably suppress the genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and horizontal distribution of precipitation are trivial, indicating that the CMT modulates TCLV/TC activity in the model by mechanisms other than the horizontal transport of moist static energy.

  5. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    Science.gov (United States)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is capable of simulating the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation by properly representing the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme that considers number concentration along with mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. The numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.

  6. Ensemble assimilation of JASON/ENVISAT and JASON/AltiKA altimetric observations with stochastic parameterization of the model dynamical uncertainties

    Science.gov (United States)

    Brasseur, Pierre; Candille, Guillem; Bouttier, Pierre-Antoine; Brankart, Jean-Michel; Verron, Jacques

    2015-04-01

    The objective of this study is to explicitly simulate and quantify the uncertainty related to sea-level anomalies diagnosed from eddy-resolving ocean circulation models, in order to develop advanced methods suitable for assimilating along-track altimetric data into such models. This work is carried out jointly with the MyOcean and SANGOMA (Stochastic Assimilation for the Next Generation Ocean Model Applications) consortia, funded by the EU under the GMES umbrella over the 2012-2015 period. In this framework, a realistic circulation model of the North Atlantic ocean at 1/4° resolution (NATL025 configuration) has been adapted to include effects of unresolved scales on the dynamics. This is achieved by introducing stochastic perturbations of the equation of state to represent the associated model uncertainty. Assimilation experiments are designed using altimetric data from past and ongoing missions (Jason-2 and SARAL/AltiKa experiments, with CryoSat-2 for fully independent altimetric validation) to better control the Gulf Stream circulation, especially the frontal regions, which are predominantly affected by the unresolved dynamical scales. An ensemble based on such stochastic perturbations is then produced and evaluated, through probabilistic criteria (reliability and resolution), using the model equivalent of along-track altimetric observations. These three elements (stochastic parameterization, ensemble simulation and 4D observation operator) are used together to perform optimal 4D analysis of along-track altimetry over 10-day assimilation windows. In this presentation, the results show that the free ensemble, before starting the assimilation process, reproduces the climatological variability over the Gulf Stream area well: the system is then fairly reliable but not informative (null probabilistic resolution). Updating the free ensemble with altimetric data leads to a better reliability and to an improvement of the information (resolution

  7. The performance of different cumulus parameterization schemes in ...

    Indian Academy of Sciences (India)

    … The loss has been estimated at around USD 500 million to the economy. Keywords: modelling; acuity-fidelity; cumulus parameterization scheme; southern peninsular Malaysia; rainfall. J. Earth Syst. Sci. 121, No. 2, April 2012, pp. 317-327.

  8. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    Science.gov (United States)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  9. Aerosol water parameterization: a single parameter framework

    Science.gov (United States)

    Metzger, S.; Steil, B.; Abdelkader, M.; Klingmüller, K.; Xu, L.; Penner, J. E.; Fountoukis, C.; Nenes, A.; Lelieveld, J.

    2015-11-01

    We introduce a framework to efficiently parameterize the aerosol water uptake for mixtures of semi-volatile and non-volatile compounds, based on the coefficient νi. This solute-specific coefficient was introduced in Metzger et al. (2012) to accurately parameterize single-solution hygroscopic growth, considering the Kelvin effect and accounting for the water uptake of concentrated nanometer-sized particles up to dilute solutions, i.e., from the compound's relative humidity of deliquescence (RHD) up to supersaturation (Köhler theory). Here we extend the νi parameterization from single to mixed solutions. We evaluate our framework at various levels of complexity, by considering the full gas-liquid-solid partitioning for a comprehensive comparison with reference calculations using the E-AIM, EQUISOLV II and ISORROPIA II models as well as textbook examples. We apply our parameterization in EQSAM4clim, the EQuilibrium Simplified Aerosol Model V4 for climate simulations, implemented in a box model and in the global chemistry-climate model EMAC. Our results show: (i) that the νi approach allows the entire gas-liquid-solid partitioning and the mixed-solution water uptake to be solved analytically with sufficient accuracy, (ii) that, e.g., pure ammonium nitrate and mixed ammonium nitrate/ammonium sulfate mixtures can be solved with a simple method, and (iii) that the aerosol optical depth (AOD) simulations are in close agreement with remote sensing observations for the year 2005. A long-term evaluation of the EMAC results based on EQSAM4clim and ISORROPIA II will be presented separately.
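
    The νi formulation itself is defined in Metzger et al. (2012) and is not reproduced here. Purely as a generic illustration of a single-parameter hygroscopic-growth relation, the sketch below uses the kappa-Köhler form with the Kelvin term neglected; it is not the νi parameterization evaluated in the abstract, and the example kappa value is approximate.

    ```python
    import numpy as np

    def growth_factor(rh, kappa):
        """Diameter growth factor at water activity ~ rh (0-1) for a single
        hygroscopicity parameter kappa (kappa-Koehler relation, Kelvin term
        neglected): GF = (1 + kappa * rh / (1 - rh))**(1/3). Illustrative
        single-parameter relation only, not the nu_i framework."""
        rh = np.asarray(rh, float)
        return (1.0 + kappa * rh / (1.0 - rh)) ** (1.0 / 3.0)

    # Example: ammonium sulfate has kappa of roughly 0.5.
    print(growth_factor([0.5, 0.8, 0.9], kappa=0.5))
    ```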

  10. An Evaluation of Marine Boundary Layer Cloud Property Simulations in the Community Atmosphere Model Using Satellite Observations: Conventional Subgrid Parameterization versus CLUBB

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hua [Joint Center for Earth Systems Technology, University of Maryland, Baltimore County, Baltimore, Maryland]; Zhang, Zhibo [Joint Center for Earth Systems Technology, and Physics Department, University of Maryland, Baltimore County, Baltimore, Maryland]; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington]; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington]; Wang, Minghuai [Institute for Climate and Global Change Research, and School of Atmospheric Sciences, Nanjing University, Nanjing, China]

    2018-03-01

    This paper presents a two-step evaluation of the marine boundary layer (MBL) cloud properties from two Community Atmosphere Model (version 5.3, CAM5) simulations, one based on the CAM5 standard parameterization schemes (CAM5-Base), and the other on the Cloud Layers Unified By Binormals (CLUBB) scheme (CAM5-CLUBB). In the first step, we compare the cloud properties directly from model outputs between the two simulations. We find that the CAM5-CLUBB run produces more MBL clouds in the tropical and subtropical large-scale descending regions. Moreover, the stratocumulus (Sc) to cumulus (Cu) cloud regime transition is much smoother in CAM5-CLUBB than in CAM5-Base. In addition, in CAM5-Base we find some grid cells with very small low cloud fraction (<20%) to have very high in-cloud water content (mixing ratio up to 400 mg/kg). We find no such grid cells in the CAM5-CLUBB run. However, we also note that both simulations, especially CAM5-CLUBB, produce a significant amount of “empty” low cloud cells with significant cloud fraction (up to 70%) and near-zero in-cloud water content. In the second step, we use satellite observations from CERES, MODIS and CloudSat to evaluate the simulated MBL cloud properties by employing the COSP satellite simulators. We note that a feature of the COSP-MODIS simulator to mimic the minimum detection threshold of MODIS cloud masking removes much more low cloud from CAM5-CLUBB than it does from CAM5-Base. This leads to a surprising result: in the large-scale descending regions CAM5-CLUBB has a smaller COSP-MODIS cloud fraction and weaker shortwave cloud radiative forcing than CAM5-Base. A sensitivity study suggests that this is because CAM5-CLUBB suffers more from the above-mentioned “empty” cloud issue than CAM5-Base. The COSP-MODIS cloud droplet effective radius in CAM5-CLUBB shows a spatial increase from coastal Sc toward Cu, which is in qualitative agreement with MODIS observations. In contrast, COSP-MODIS cloud droplet

  11. Sensitivity of hurricane track to cumulus parameterization schemes in the WRF model for three intense tropical cyclones: impact of convective asymmetry

    Science.gov (United States)

    Shepherd, Tristan J.; Walsh, Kevin J.

    2017-08-01

    This study investigates the effect of the choice of convective parameterization (CP) scheme on the simulated tracks of three intense tropical cyclones (TCs), using the Weather Research and Forecasting (WRF) model. We focus on diagnosing the competing influences of large-scale steering flow, beta drift and convectively induced changes in track, as represented by four different CP schemes (Kain-Fritsch (KF), Betts-Miller-Janjic (BMJ), Grell-3D (G-3), and the Tiedtke (TD) scheme). The sensitivity of the results to initial conditions, model domain size and shallow convection is also tested. We employ a diagnostic technique by Chan et al. (J Atmos Sci 59:1317-1336, 2002) that separates the influence of the large-scale steering flow, beta drift and the modifications of the steering flow by the storm-scale convection. The combined effect of the steering flow and the beta drift causes TCs typically to move in the direction of the wavenumber-1 (WN-1) cyclonic potential vorticity tendency (PVT). In instances of asymmetrical TCs, the simulated TC motion does not necessarily match the motion expected from the WN-1 PVT due to changes in the convective pattern. In the present study, we test this concept in the WRF simulations and investigate whether, if the diagnosed motion from the WN-1 PVT and the TC motion do not match, this can be related to the emerging evolution of changes in convective structure. Several systematic results are found across the three cyclone cases. The sensitivity of TC track to initial conditions (the initialisation time and model domain size) is less than the sensitivity of TC track to changing the CP scheme. The simulated track is not overly sensitive to shallow convection in the KF, BMJ, and TD schemes, compared to the track difference between CP schemes. The G3 scheme, however, is highly sensitive to whether shallow convection is used. Furthermore, while agreement between the simulated TC track direction and the WN-1 diagnostic is usually good, there are

  12. Physics modeling support contract: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1987-09-30

    This document is the final report for the Physics Modeling Support contract between TRW, Inc. and the Lawrence Livermore National Laboratory for fiscal year 1987. It consists of following projects: TIBER physics modeling and systems code development; advanced blanket modeling task; time dependent modeling; and free electron maser for TIBER II.

  13. Physics modeling support contract: Final report

    International Nuclear Information System (INIS)

    1987-01-01

    This document is the final report for the Physics Modeling Support contract between TRW, Inc. and the Lawrence Livermore National Laboratory for fiscal year 1987. It consists of following projects: TIBER physics modeling and systems code development; advanced blanket modeling task; time dependent modeling; and free electron maser for TIBER II

  14. Capturing the complex behavior of hydraulic fracture stimulation through multi-physics modeling, field-based constraints, and model reduction

    Science.gov (United States)

    Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.

    2016-12-01

    Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.

  15. Parameterization of the Satellite-Based Model (METRIC) for the Estimation of Instantaneous Surface Energy Balance Components over a Drip-Irrigated Vineyard

    Directory of Open Access Journals (Sweden)

    Marcos Carrasco-Benavides

    2014-11-01

    A study was carried out to parameterize the METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) model for estimating instantaneous values of shortwave albedo (αi), net radiation (Rni), soil heat flux (Gi), and sensible (Hi) and latent (LEi) heat fluxes over a drip-irrigated Merlot vineyard (location: 35°25′ S; 71°32′ W; 125 m a.s.l.). The experiment was carried out in a plot of 4.25 ha, processing 15 Landsat images acquired from 2006 to 2009. An automatic weather station was placed inside the experimental plot to measure αi, Rni and Gi. On the same tower, an Eddy Covariance (EC) system was mounted to measure Hi and LEi. Specific sub-models to estimate Gi, leaf area index (LAI) and aerodynamic roughness length for momentum transfer (zom) were calibrated for the Merlot vineyard as an improvement to the original METRIC model. Results indicated that LAI, zom and Gi were estimated with the calibrated functions with errors of 4%, 2% and 17%, while the original functions gave errors of 58%, 81%, and 5%, respectively. At the time of satellite overpass, comparisons between measured and estimated values indicated that METRIC overestimated αi by 21% and Rni by 11%. Also, METRIC with the calibrated functions overestimated Hi and LEi with errors of 16% and 17%, respectively, while with the original functions it overestimated Hi and LEi with errors of 13% and 15%, respectively. Finally, LEi was estimated with a root mean square error (RMSE) between 43 and 60 W∙m−2 and a mean absolute error (MAE) between 35 and 48 W∙m−2 for the calibrated and original functions, respectively. These results suggest that biases observed for instantaneous pixel-by-pixel values of Rni, Gi and other intermediate components of the algorithm were presumably absorbed into the computation of sensible heat flux as a result of the internal self-calibration of METRIC.
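
    The central step of METRIC-type residual energy-balance models is to obtain the latent heat flux as the residual of the instantaneous surface energy balance, LEi = Rni − Gi − Hi. A minimal sketch of that closing step is shown below; the function name and example flux values are illustrative and are not taken from the study.

```python
def latent_heat_residual(rn_i, g_i, h_i):
    """Instantaneous latent heat flux (W m-2) obtained as the residual of the
    surface energy balance, LE_i = Rn_i - G_i - H_i."""
    return rn_i - g_i - h_i

# Example (illustrative values): Rn = 550, G = 60, H = 180 W m-2  ->  LE = 310 W m-2
le_i = latent_heat_residual(550.0, 60.0, 180.0)
```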

  16. A physically based model of global freshwater surface temperature

    Science.gov (United States)

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
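
    As a rough illustration of the kind of surface water energy balance used here, the sketch below advances the temperature of a well-mixed water layer with net shortwave and longwave radiation and sensible and latent heat fluxes over one daily time step. The flux values, layer depth, and the omission of advection and ice are simplifying assumptions for illustration, not the PCR-GLOBWB formulation.

```python
RHO_W = 1000.0   # water density, kg m-3
CP_W = 4186.0    # specific heat of water, J kg-1 K-1

def step_water_temperature(t_w, sw_net, lw_net, h_sens, h_lat, depth, dt=86400.0):
    """Advance water temperature (K) over one time step dt (s) given net shortwave
    and longwave radiation (W m-2, positive into the water) and sensible and latent
    heat losses (W m-2) for a well-mixed layer of the given depth (m)."""
    net_flux = sw_net + lw_net - h_sens - h_lat
    return t_w + net_flux * dt / (RHO_W * CP_W * depth)

# Example: a 2 m deep reach gaining 200 W m-2 shortwave and losing 70, 30 and 80 W m-2
# as net longwave, sensible and latent heat, respectively
t_new = step_water_temperature(288.15, 200.0, -70.0, 30.0, 80.0, depth=2.0)
```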

  17. Parameterized and resolved Southern Ocean eddy compensation

    Science.gov (United States)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.
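
    For context, one common way to formulate a thickness (eddy-induced) diffusivity "in terms of the local stratification" is to scale a reference diffusivity by the local buoyancy frequency squared relative to a reference value and to bound the result. The sketch below shows that scaling with illustrative numbers; it is not necessarily the exact closure used in the coarse-resolution model of this study.

```python
def thickness_diffusivity(n2_local, n2_ref, kappa_ref=1000.0,
                          kappa_min=100.0, kappa_max=4000.0):
    """Eddy-induced (thickness) diffusivity in m2 s-1, scaled by the local
    stratification N^2 relative to a reference value N^2_ref and clipped to a
    plausible range. All numbers are illustrative placeholders."""
    kappa = kappa_ref * max(n2_local, 0.0) / n2_ref
    return min(max(kappa, kappa_min), kappa_max)

# Example: local N^2 equal to 40% of the reference value
kappa_gm = thickness_diffusivity(n2_local=4.0e-6, n2_ref=1.0e-5)
```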

  18. Evaluating a Model of Youth Physical Activity

    Science.gov (United States)

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  19. Physical Accuracy of Q Models of Seismic Attenuation

    Science.gov (United States)

    Morozov, I. B.

    2016-12-01

    Accuracy of theoretical models is a required prerequisite for any type of seismic imaging and interpretation. Among all geophysical disciplines, the theory of seismic and tidal attenuation is the least developed, and most practical studies use viscoelastic models based on empirical Q factors. To simplify imaging and inversions, the Qs are often approximated as frequency-independent or following a power law with frequency. However, simplicity of inversion should not outweigh the problematic physical accuracy of such models. Typical images of spatially-variable crustal and mantle Qs are "apparent," analogously to pseudo-depth, apparent-resistivity images in electrical imaging. Problems with Q models can be seen from controversial general observations present in many studies; for example: 1) In global Q models, bulk attenuation is much lower than the shear one throughout the whole Earth. This is considered a fundamental relation for the Earth; nevertheless, it is also very peculiar physically and suggests a negative Q for the Lamé modulus. This relation is also not supported by most first-principle models of materials and laboratory studies. 2) The Q parameterization requires that the entire outer core of the Earth is assigned zero attenuation, despite its large volume, presence of viscosity and shear deformation in free oscillations. 3) In laboratory and surface-wave studies, the bulk and shear Qs can be different for different wave modes, different sample sizes and boundary conditions on the surface. Similarly, the Qs measured from body-S, Love, Lg, or ScS waves may not equal each other. 4) In seismic coda studies, the Q is often found to increase linearly (or even faster) with frequency. Such character of energy dissipation is controversial physically, but can be readily explained as an artifact of inaccurately-known geometrical spreading. To overcome the physical inaccuracies and apparent character of seismic attenuation models, mechanical theories of materials

  20. Dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2003-01-01

    A different aspect of using the parameterisation of all systems stabilised by a given controller, i.e. the dual Youla parameterisation, is considered. The relation between system change and the dual Youla parameter is derived in explicit form. A number of standard uncertain model descriptions are considered and the relation with the dual Youla parameter given. Some applications of the dual Youla parameterisation are considered in connection with the design of controllers and model/performance validation.

  1. A Framework to Evaluate Unified Parameterizations for Seasonal Prediction: An LES/SCM Parameterization Test-Bed

    Science.gov (United States)

    2013-09-30

    GOALS: The long term goals of this effort are (i) the development of a unified parameterization for the marine boundary layer; (ii) the ... Unfortunately most of these small-scale processes are extremely difficult to represent (parameterize) in global models such as the Navy's NAVGEM. The Marine ... horizontal boundaries are periodic and the top and bottom boundaries are impermeable with a 'sponge' region near the top boundary to minimize undesirable

  2. Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Madison, WI (United States); Gettelman, Andrew [Univ. Corporation for Atmospheric Research, Boulder, CO (United States); Morrison, Hugh [Univ. Corporation for Atmospheric Research, Boulder, CO (United States); Bacmeister, Julio [Univ. Corporation for Atmospheric Research, Boulder, CO (United States); Feingold, Graham [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Lee, Seoung-soo [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Williams, Christopher [Univ. of Colorado, Boulder, CO (United States)

    2016-09-14

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.

  3. A validated physical model of greenhouse climate.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the

  4. Analysis of different atmospheric physical parameterizations in COAWST modeling system for the Tropical Storm Nock-ten application

    DEFF Research Database (Denmark)

    Ren, Danqin; Du, Jianting; Hua, Feng

    2016-01-01

    ... of atmosphere, ocean wave and current features were compared with storm observations, ERA-Interim data, NOAA sea surface temperature data, AVISO current data and HYCOM data, respectively. It was found that the storm track and intensity are sensitive to the cumulus and radiation schemes in WRF, especially around the storm center area. As a result, using the Kain–Fritsch cumulus scheme, Goddard shortwave radiation scheme and RRTM longwave radiation scheme in WRF may lead to much larger wind intensity, significant wave height and current intensity, as well as lower SST and sea surface pressure. Thus ...

  5. Numerical modelling in material physics

    International Nuclear Information System (INIS)

    Proville, L.

    2004-12-01

    The author first briefly presents his past research activities: investigation of dislocation glide in a solid solution by molecular dynamics, modelling of metal film growth by phase field and kinetic Monte Carlo methods, a phase field model for surface self-organisation, a phase field model for the Al3Zr alloy, calculation of anharmonic phonons, and mobility of bipolarons in superconductors. Then, he reports in more detail on mesoscopic phase-field modelling and on some atomistic modelling (dislocation glide, Monte Carlo simulation of metal surface growth, modelling of the optical spectrum of an anharmonic network).

  6. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon

    2016-09-15

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it has the best resolution, obtained from S-wave reflections and converted waves, and little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data at the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  7. Impact of cloud microphysics and cumulus parameterization on ...

    Indian Academy of Sciences (India)

    2007-10-09

    Impact of cloud microphysics and cumulus parameterization on simulation of heavy rainfall event during 7–9 October 2007 over Bangladesh. M Mahbub Alam. Theoretical Division, SAARC Meteorological Research Centre (SMRC), Dhaka, Bangladesh; Department of Physics, Khulna University of ...

  8. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
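
    As an illustration of the idea, the sketch below diagnoses a rain fraction as the probability that subgrid cloud water exceeds a collection threshold under a prescribed PDF. A lognormal PDF is assumed here purely for illustration and need not match the distribution actually fitted to the DYCOMS-II and RICO data.

```python
import math

def rain_fraction(q_c_mean, rel_variance, q_c_crit):
    """Fraction of the grid box where cloud water q_c exceeds the threshold
    q_c_crit for droplet collection, assuming a lognormal subgrid PDF with the
    given grid-box mean and relative variance (variance / mean^2)."""
    if q_c_mean <= 0.0:
        return 0.0
    sigma2 = math.log(1.0 + rel_variance)      # lognormal shape parameter sigma^2
    mu = math.log(q_c_mean) - 0.5 * sigma2     # lognormal location parameter
    z = (math.log(q_c_crit) - mu) / math.sqrt(2.0 * sigma2)
    return 0.5 * math.erfc(z)                  # P(q_c > q_c_crit)

# Example: mean cloud water 0.3 g kg-1, relative variance 0.5, threshold 0.5 g kg-1
f_rain = rain_fraction(0.3, 0.5, 0.5)
```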

  9. The Physical Internet and Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Diane Poulin

    2012-06-01

    Building on the analogy of data packets within the Digital Internet, the Physical Internet is a concept that dramatically transforms how physical objects are designed, manufactured, and distributed. This approach is open, efficient, and sustainable beyond traditional proprietary logistical solutions, which are often plagued by inefficiencies. The Physical Internet redefines supply chain configurations, business models, and value-creation patterns. Firms are bound to be less dependent on operational scale and scope trade-offs because they will be in a position to offer novel hybrid products and services that would otherwise destroy value. Finally, logistical chains become flexible and reconfigurable in real time, thus becoming better in tune with firm strategic choices. This article focuses on the potential impact of the Physical Internet on business model innovation, both from the perspectives of Physical-Internet enabled and enabling business models.

  10. Slush Fund: Modeling the Multiphase Physics of Oceanic Ices

    Science.gov (United States)

    Buffo, J.; Schmidt, B. E.

    2016-12-01

    The prevalence of ice interacting with an ocean, both on Earth and throughout the solar system, and its crucial role as the mediator of exchange between the hydrosphere below and atmosphere above, have made quantifying the thermodynamic, chemical, and physical properties of the ice highly desirable. While direct observations of these quantities exist, their scarcity increases with the difficulty of obtainment; the basal surfaces of terrestrial ice shelves remain largely unexplored and the icy interiors of moons like Europa and Enceladus have never been directly observed. Our understanding of these entities thus relies on numerical simulation, and the efficacy of their incorporation into larger systems models is dependent on the accuracy of these initial simulations. One characteristic of seawater, likely shared by the oceans of icy moons, is that it is a solution. As such, when it is frozen a majority of the solute is rejected from the forming ice, concentrating in interstitial pockets and channels, producing a two-component reactive porous media known as a mushy layer. The multiphase nature of this layer affects the evolution and dynamics of the overlying ice mass. Additionally, ice can form in the water column and accrete onto the basal surface of these ice masses via buoyancy driven sedimentation as frazil or platelet ice. Numerical models hoping to accurately represent ice-ocean interactions should include the multiphase behavior of these two phenomena. While models of sea ice have begun to incorporate multiphase physics into their capabilities, no models of ice shelves/shells explicitly account for the two-phase behavior of the ice-ocean interface. Here we present a 1D multiphase model of floating oceanic ice that includes parameterizations of both density driven advection within the 'mushy layer' and buoyancy driven sedimentation. The model is validated against contemporary sea ice models and observational data. Environmental stresses such as supercooling and
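
    In mushy-layer treatments of this kind, a common local closure is to obtain the brine (liquid) fraction from the bulk salinity and the liquidus relation at the local temperature. The sketch below uses a linear liquidus purely for illustration and is not necessarily the specific closure used in this model.

```python
def liquid_fraction(bulk_salinity, temperature_c, liquidus_slope=-0.054):
    """Brine (liquid) fraction in a mushy layer from bulk salinity (g kg-1) and
    temperature (deg C), assuming local thermodynamic equilibrium and a linear
    liquidus T = liquidus_slope * S_brine (slope in deg C per g kg-1).
    Density differences between ice and brine are neglected in this sketch."""
    if temperature_c >= 0.0:
        return 1.0                               # above freezing: all liquid
    s_brine = temperature_c / liquidus_slope     # liquidus brine salinity at T
    return min(1.0, bulk_salinity / s_brine)     # phi = S_bulk / S_brine

# Example: sea ice with bulk salinity 10 g kg-1 at -5 deg C
phi = liquid_fraction(10.0, -5.0)
```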

  11. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  12. Bridging physics and biology teaching through modeling

    Science.gov (United States)

    Hoskinson, Anne-Marie; Couch, Brian A.; Zwickl, Benjamin M.; Hinko, Kathleen A.; Caballero, Marcos D.

    2014-05-01

    As the frontiers of biology become increasingly interdisciplinary, the physics education community has engaged in ongoing efforts to make physics classes more relevant to life science majors. These efforts are complicated by the many apparent differences between these fields, including the types of systems that each studies, the behavior of those systems, the kinds of measurements that each makes, and the role of mathematics in each field. Nonetheless, physics and biology are both sciences that rely on observations and measurements to construct models of the natural world. In this article, we propose that efforts to bridge the teaching of these two disciplines must emphasize shared scientific practices, particularly scientific modeling. We define modeling using language common to both disciplines and highlight how an understanding of the modeling process can help reconcile apparent differences between the teaching of physics and biology. We elaborate on how models can be used for explanatory, predictive, and functional purposes and present common models from each discipline demonstrating key modeling principles. By framing interdisciplinary teaching in the context of modeling, we aim to bridge physics and biology teaching and to equip students with modeling competencies applicable in any scientific discipline.

  13. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    Science.gov (United States)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  14. Simplified Models for LHC New Physics Searches

    CERN Document Server

    Alves, Daniele; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela; Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip; Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim; Thomas, Brooks; Thomas, Scott; Toro, Natalia; Volansky, Tomer; Wacker, Jay; Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a...

  15. Physics of the Quark Model

    Science.gov (United States)

    Young, Robert D.

    1973-01-01

    Discusses the charge independence, wavefunctions, magnetic moments, and high-energy scattering of hadrons on the basis of group theory and nonrelativistic quark model with mass spectrum calculated by first-order perturbation theory. The presentation is explainable to advanced undergraduate students. (CC)

  16. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    Science.gov (United States)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in the Weather Research and Forecasting (WRF) model uses a pure algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015-2017. We focus on selected cases when physical phenomena of significance for wind energy applications such as mountain waves, topographic wakes, and gap flows were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in zonal, meridional, and vertical direction, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. The presentation will highlight the advantages of the 3
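
    As a minimal illustration of why a three-dimensional closure differs from a column-only one, the sketch below evaluates a down-gradient turbulent momentum flux divergence that retains both horizontal and vertical gradients on a 2D (x, z) slice. It is a generic finite-difference illustration, not the Mellor-Yamada level-2 formulation implemented in WRF.

```python
import numpy as np

def momentum_flux_divergence_2d(u, k_h, k_v, dx, dz):
    """Down-gradient turbulent momentum flux divergence,
    d/dx(K_h du/dx) + d/dz(K_v du/dz), for a (z, x) slice of zonal wind u.
    A one-dimensional PBL scheme would retain only the vertical term."""
    du_dx = np.gradient(u, dx, axis=1)
    du_dz = np.gradient(u, dz, axis=0)
    return (np.gradient(k_h * du_dx, dx, axis=1)
            + np.gradient(k_v * du_dz, dz, axis=0))

# Example: idealized wind slice, 200 m deep and 500 m wide (dz = 10 m, dx = 50 m)
z = np.arange(0.0, 200.0, 10.0)[:, None]
x = np.arange(0.0, 500.0, 50.0)[None, :]
u = 10.0 * np.log1p(z / 10.0) * (1.0 + 0.1 * np.sin(2 * np.pi * x / 500.0))
tendency = momentum_flux_divergence_2d(u, k_h=50.0, k_v=5.0, dx=50.0, dz=10.0)
```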

  17. Physical model for recognition tunneling

    International Nuclear Information System (INIS)

    Krstić, Predrag; Ashcroft, Brian; Lindsay, Stuart

    2015-01-01

    Recognition tunneling (RT) identifies target molecules trapped between tunneling electrodes functionalized with recognition molecules that serve as specific chemical linkages between the metal electrodes and the trapped target molecule. Possible applications include single molecule DNA and protein sequencing. This paper addresses several fundamental aspects of RT by multiscale theory, applying both all-atom and coarse-grained DNA models: (1) we show that the magnitude of the observed currents is consistent with the results of non-equilibrium Green’s function calculations carried out on a solvated all-atom model. (2) Brownian fluctuations in hydrogen bond-lengths lead to current spikes that are similar to what is observed experimentally. (3) The frequency characteristics of these fluctuations can be used to identify the trapped molecules with a machine-learning algorithm, giving a theoretical underpinning to this new method of identifying single molecule signals. (paper)

  18. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    Directory of Open Access Journals (Sweden)

    M. P. Clark

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  19. Report: Physics Constrained Stochastic Statistical Models for Extended Range Environmental Prediction

    Science.gov (United States)

    2013-09-30

    ... picture of ENSO-driven autoregressive models for North Pacific SST variability, providing evidence that intermittent processes, such as variability of ... intermittent aspects (i) and (ii) are achieved by developing a simple stochastic parameterization for the unresolved details of synoptic-scale ... stochastic parameterization of synoptic scale activity to build a stochastic skeleton model for the MJO; this is the first low order model of the MJO which

  20. On the Computation Power of Name Parameterization in Higher-order Processes

    Directory of Open Access Journals (Sweden)

    Xian Xu

    2015-08-01

    Parameterization extends higher-order processes with the capability of abstraction (akin to that in the lambda-calculus), and is known to enhance expressiveness. This paper focuses on the parameterization of names, i.e. a construct that maps a name to a process, in the higher-order setting. We provide two results concerning its computational capacity. First, name parameterization yields a complete model, in the sense that it can express an elementary interactive model with built-in recursive functions. Second, we compare name parameterization with the well-known pi-calculus, and provide two encodings between them.

  1. Ontology modeling in physical asset integrity management

    CERN Document Server

    Yacout, Soumaya

    2015-01-01

    This book presents cutting-edge applications of, and up-to-date research on, ontology engineering techniques in the physical asset integrity domain. Through a survey of state-of-the-art theory and methods on ontology engineering, the authors emphasize essential topics including data integration modeling, knowledge representation, and semantic interpretation. The book also covers novel topics dealing with the advanced problems of physical asset integrity applications, such as the heterogeneity, data inconsistency, and interoperability issues that exist in design and utilization. With a distinctive focus on applications relevant in heavy industry, Ontology Modeling in Physical Asset Integrity Management is ideal for practicing industrial and mechanical engineers working in the field, as well as researchers and graduate students concerned with ontology engineering in physical systems life cycles. This book also: Introduces practicing engineers, research scientists, and graduate students to ontology engineering as a modeling techniqu...

  2. Optimizing the parameterization of deep mixing and internal seiches in one-dimensional hydrodynamic models: a case study with Simstrat v1.3

    Directory of Open Access Journals (Sweden)

    A. Gaudard

    2017-09-01

    This paper presents an improvement of a one-dimensional lake hydrodynamic model (Simstrat) to characterize the vertical thermal structure of deep lakes. Using physically based arguments, we refine the transfer of wind energy to basin-scale internal waves (BSIWs). We consider the properties of the basin, the characteristics of the wind time series and the stability of the water column to filter and thereby optimize the magnitude of wind energy transferred to BSIWs. We show that this filtering procedure can significantly improve the accuracy of modelled temperatures, especially in the deep water of lakes such as Lake Geneva, for which the root mean square error between observed and simulated temperatures was reduced by up to 40 %. The modification, tested on four different lakes, increases model accuracy and contributes to a significantly better reproduction of seasonal deep convective mixing, a fundamental parameter for biogeochemical processes such as oxygen depletion. It also improves modelling over long time series for the purpose of climate change studies.

  3. Optimizing the parameterization of deep mixing and internal seiches in one-dimensional hydrodynamic models: a case study with Simstrat v1.3

    Science.gov (United States)

    Gaudard, Adrien; Schwefel, Robert; Råman Vinnå, Love; Schmid, Martin; Wüest, Alfred; Bouffard, Damien

    2017-09-01

    This paper presents an improvement of a one-dimensional lake hydrodynamic model (Simstrat) to characterize the vertical thermal structure of deep lakes. Using physically based arguments, we refine the transfer of wind energy to basin-scale internal waves (BSIWs). We consider the properties of the basin, the characteristics of the wind time series and the stability of the water column to filter and thereby optimize the magnitude of wind energy transferred to BSIWs. We show that this filtering procedure can significantly improve the accuracy of modelled temperatures, especially in the deep water of lakes such as Lake Geneva, for which the root mean square error between observed and simulated temperatures was reduced by up to 40 %. The modification, tested on four different lakes, increases model accuracy and contributes to a significantly better reproduction of seasonal deep convective mixing, a fundamental parameter for biogeochemical processes such as oxygen depletion. It also improves modelling over long time series for the purpose of climate change studies.

  4. Parameterization of mixing by secondary circulation in estuaries

    Science.gov (United States)

    Basdurak, N. B.; Huguenard, K. D.; Valle-Levinson, A.; Li, M.; Chant, R. J.

    2017-07-01

    Eddy viscosity parameterizations that depend on a gradient Richardson number Ri have been most pertinent to the open ocean. Parameterizations applicable to stratified coastal regions typically require implementation of a numerical model. Two novel parameterizations of the vertical eddy viscosity, based on Ri, are proposed here for coastal waters. One turbulence closure considers temporal changes in stratification and bottom stress and is coined the "regular fit." The alternative approach, named the "lateral fit," incorporates variability of lateral flows that are prevalent in estuaries. The two turbulence parameterization schemes are tested using data from a Self-Contained Autonomous Microstructure Profiler (SCAMP) and an Acoustic Doppler Current Profiler (ADCP) collected in the James River Estuary. The "regular fit" compares favorably to SCAMP-derived vertical eddy viscosity values but only at relatively small values of gradient Ri. On the other hand, the "lateral fit" succeeds at describing the lateral variability of eddy viscosity over a wide range of Ri. The modifications proposed to Ri-dependent eddy viscosity parameterizations allow applicability to stratified coastal regions, particularly in wide estuaries, without requiring implementation of a numerical model.
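
    For context, a generic gradient-Richardson-number-dependent eddy viscosity of the Munk–Anderson type is sketched below. The exponent and coefficients are illustrative placeholders, not the "regular fit" or "lateral fit" coefficients derived from the SCAMP and ADCP data.

```python
def eddy_viscosity_ri(ri, nu_0=1.0e-3, alpha=10.0, n=0.5):
    """Generic Munk-Anderson-style eddy viscosity (m2 s-1) damped by the
    gradient Richardson number Ri; nu_0 is the neutral (Ri = 0) value and
    alpha, n control how quickly stratification suppresses mixing."""
    ri = max(ri, 0.0)                 # treat unstable/neutral columns as neutral
    return nu_0 / (1.0 + alpha * ri) ** n

# Example: weakly stratified flow with Ri = 0.2
a_z = eddy_viscosity_ri(0.2)
```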

  5. Human impact parameterization in global hydrological models improves estimates of monthly discharges and hydrological extremes: a multi-model validation study

    Science.gov (United States)

    Veldkamp, Ted; Ward, Philip; de Moel, Hans; Aerts, Jeroen; Muller Schmied, Hannes; Portmann, Felix; Zhao, Fang; Gerten, Dieter; Masaki, Yoshimitsu; Pokhrel, Yadu; Satoh, Yusuke; Gosling, Simon; Zaherpour, Jamal; Wada, Yoshihide

    2017-04-01

    Human impacts on freshwater resources and hydrological features form the core of present-day water-related hazards, such as flooding, droughts, water scarcity, and water quality issues. Driven by the societal and scientific need to correctly model such water-related hazards, a fair amount of resources has been invested over the past decades to represent human activities and their interactions with the hydrological cycle in global hydrological models (GHMs). Use of these GHMs - including the human dimension - is widespread, especially in water resources research. Evaluations or comparative assessments of the ability of such GHMs to represent real-world hydrological conditions are, unfortunately, often limited to (near-)natural river basins. Such studies are, therefore, not able to test the model representation of human activities and its associated impact on estimates of freshwater resources or assessments of hydrological extremes. Studies that did perform a validation exercise - including the human dimension and looking into managed catchments - either focused on only one hydrological model, and/or incorporated only a few data points (i.e. river basins) for validation. To date, a comprehensive comparative analysis that evaluates whether and where incorporating the human dimension actually improves the performance of different GHMs with respect to their representation of real-world hydrological conditions and extremes is missing. The absence of such a study significantly limits the potential benchmarking of GHMs and of their outcomes in hydrological hazard and risk assessments, potentially hampering the incorporation of GHMs and their modelling results into actual policy making and decision support for water resources management. To address this issue, we evaluate in this study the performance of five state-of-the-art GHMs that include anthropogenic activities in their modelling scheme, with respect to their representation of monthly discharges and hydrological

  6. A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals

    Science.gov (United States)

    VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.

    2014-01-01

    A parameterization is presented that provides the extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 μm, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.

  7. Optika : a GUI framework for parameterized applications.

    Energy Technology Data Exchange (ETDEWEB)

    Nusbaum, Kurtis L.

    2011-06-01

    In the field of scientific computing there are many specialized programs designed for specific applications in areas such as biology, chemistry, and physics. These applications are often very powerful and extraordinarily useful in their respective domains. However, some suffer from a common problem: a non-intuitive, poorly-designed user interface. The purpose of Optika is to address this problem and provide a simple, viable solution. Using only a list of parameters passed to it, Optika can dynamically generate a GUI. This allows the user to specify parameters values in a fashion that is much more intuitive than the traditional 'input decks' used by some parameterized scientific applications. By leveraging the power of Optika, these scientific applications will become more accessible and thus allow their designers to reach a much wider audience while requiring minimal extra development effort.

  8. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

    ... set model and express the performance of well known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
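
    To make the two parameters concrete, the sketch below computes Denning's working set size over a sliding window and counts the page faults incurred by LRU for a given cache size on the same request sequence. It is a purely illustrative helper, not the formal framework of the paper.

```python
def working_set_sizes(requests, window):
    """Denning's working set size at each step: the number of distinct pages
    referenced in the last `window` requests."""
    return [len(set(requests[max(0, i - window + 1): i + 1]))
            for i in range(len(requests))]

def lru_faults(requests, cache_size):
    """Number of page faults incurred by LRU with the given cache size."""
    cache = []                       # most recently used page kept at the end
    faults = 0
    for page in requests:
        if page in cache:
            cache.remove(page)
        else:
            faults += 1
            if len(cache) == cache_size:
                cache.pop(0)         # evict the least recently used page
        cache.append(page)
    return faults

# Example request sequence
reqs = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1]
print(lru_faults(reqs, 3), working_set_sizes(reqs, 4))
```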

  9. The performance of different cumulus parameterization schemes in ...

    Indian Academy of Sciences (India)

    The performance of four different cumulus parameterization schemes (CPS) in the Weather Research and Forecasting (WRF) model for simulating three heavy rainfall episodes over southern peninsular Malaysia during the winter monsoon of 2006/2007 was examined. The modelled rainfall was compared with the ...

  10. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in a study on developing a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The procedure of a PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  11. A new parameterization for waveform inversion in acoustic orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-26

    Orthorhombic anisotropic model inversion is extra challenging because of the multiparameter nature of the inversion problem. The high number of parameters required to describe the medium introduces considerable trade-offs and additional nonlinearity into a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations, starting with the often-used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns, but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters is the azimuthal independence of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

  12. Parameterized Metatheory for Continuous Markovian Logic

    Directory of Open Access Journals (Sweden)

    Kim G. Larsen

    2012-12-01

    This paper shows that a classic metalogical framework, including all Boolean operators, can be used to support the development of a metric behavioural theory for Markov processes. Previously, only intuitionistic frameworks or frameworks without negation and logical implication have been developed to fulfill this task. The focus of this paper is on continuous Markovian logic (CML), a logic that characterizes stochastic bisimulation of Markov processes with an arbitrary measurable state space and continuous-time transitions. For a parameter epsilon>0 interpreted as observational error, we introduce an epsilon-parameterized metatheory for CML: we define the concepts of epsilon-satisfiability and epsilon-provability related by a sound and complete axiomatization and prove a series of "parameterized" metatheorems including decidability, weak completeness and finite model property. We also prove results regarding the relations between metalogical concepts defined for different parameters. Using this framework, we can characterize both the stochastic bisimulation relation and various observational preorders based on behavioural pseudometrics. The main contribution of this paper is proving that all these analyses can actually be done using a unified complete Boolean framework. This extends the state of the art in this field, since the related works only propose intuitionistic contexts that limit, for instance, the use of the Boolean logical implication.

  13. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching and requires a wide range of scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be simulated; only the most visually important phenomena, relating to the natural elements and the ship's behaviour, are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...

  14. Fast Physics Testbed for the FASTER Project

    Energy Technology Data Exchange (ETDEWEB)

    Lin, W.; Liu, Y.; Hogan, R.; Neggers, R.; Jensen, M.; Fridlind, A.; Lin, Y.; Wolf, A.

    2010-03-15

    This poster describes the Fast Physics Testbed for the new FAst-physics System Testbed and Research (FASTER) project. The overall objective is to provide a convenient and comprehensive platform for fast turn-around model evaluation against ARM observations and to facilitate development of parameterizations for cloud-related fast processes represented in global climate models. The testbed features three major components: a single column model (SCM) testbed, an NWP-Testbed, and high-resolution modeling (HRM). The web-based SCM-Testbed features multiple SCMs from major climate modeling centers and aims to maximize the potential of the SCM approach to enhance and accelerate the evaluation and improvement of fast physics parameterizations through continuous evaluation of existing and evolving models against historical as well as new/improved ARM and other complementary measurements. The NWP-Testbed aims to capitalize on the large pool of operational numerical weather prediction products. Continuous evaluations of NWP forecasts against observations at ARM sites are carried out to systematically identify the biases and skills of physical parameterizations under all weather conditions. The high-resolution modeling (HRM) activities aim to simulate the fast processes at high resolution to aid in the understanding of the fast processes and their parameterizations. A four-tier HRM framework is established to augment the SCM- and NWP-Testbeds towards eventual improvement of the parameterizations.

  15. Simplified Models for LHC New Physics Searches

    International Nuclear Information System (INIS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  16. Simplified Models for LHC New Physics Searches

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Daniele; /SLAC; Arkani-Hamed, Nima; /Princeton, Inst. Advanced Study; Arora, Sanjay; /Rutgers U., Piscataway; Bai, Yang; /SLAC; Baumgart, Matthew; /Johns Hopkins U.; Berger, Joshua; /Cornell U., Phys. Dept.; Buckley, Matthew; /Fermilab; Butler, Bart; /SLAC; Chang, Spencer; /Oregon U. /UC, Davis; Cheng, Hsin-Chia; /UC, Davis; Cheung, Clifford; /UC, Berkeley; Chivukula, R.Sekhar; /Michigan State U.; Cho, Won Sang; /Tokyo U.; Cotta, Randy; /SLAC; D' Alfonso, Mariarosaria; /UC, Santa Barbara; El Hedri, Sonia; /SLAC; Essig, Rouven, (ed.); /SLAC; Evans, Jared A.; /UC, Davis; Fitzpatrick, Liam; /Boston U.; Fox, Patrick; /Fermilab; Franceschini, Roberto; /LPHE, Lausanne /Pittsburgh U. /Argonne /Northwestern U. /Rutgers U., Piscataway /Rutgers U., Piscataway /Carleton U. /CERN /UC, Davis /Wisconsin U., Madison /SLAC /SLAC /SLAC /Rutgers U., Piscataway /Syracuse U. /SLAC /SLAC /Boston U. /Rutgers U., Piscataway /Seoul Natl. U. /Tohoku U. /UC, Santa Barbara /Korea Inst. Advanced Study, Seoul /Harvard U., Phys. Dept. /Michigan U. /Wisconsin U., Madison /Princeton U. /UC, Santa Barbara /Wisconsin U., Madison /Michigan U. /UC, Davis /SUNY, Stony Brook /TRIUMF; /more authors..

    2012-06-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼ 50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  17. Prediction of heavy rainfall over Chennai Metropolitan City, Tamil Nadu, India: Impact of microphysical parameterization schemes

    Science.gov (United States)

    Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.

    2018-04-01

    This study investigates the real-time prediction of a heavy rainfall event that occurred over the Chennai Metropolitan City, Tamil Nadu, India on 01 December 2015 using the Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical parameterization schemes (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) on the prediction of the heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. Model forecasts were carried out using nested domains, and the impact of horizontal grid resolution was assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed that strong upper-level divergence and high moisture content at lower levels were favorable for the occurrence of the heavy rainfall event over the northeast coast of Tamil Nadu. The study showed that the forecast rainfall was more sensitive to the microphysics and PBL schemes than to the LSM schemes. The model provided a better forecast of the heavy rainfall event using the combination of the Goddard microphysics, YSU PBL and Noah LSM schemes, mostly attributable to the timely initiation and development of the convective system. The forecasts at different horizontal resolutions using cumulus parameterization indicated that the rainfall was not well represented at 9 km and 6 km. The forecast at 3 km horizontal resolution provided a better prediction in terms of timely initiation and development of the event. The study highlights that forecasts of heavy rainfall events using a high-resolution mesoscale model with suitable physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.

  18. Impact of PBL and convection parameterization schemes for prediction of severe land-falling Bay of Bengal cyclones using WRF-ARW model

    Science.gov (United States)

    Singh, K. S.; Bhaskaran, Prasad K.

    2017-12-01

    This study evaluates the performance of the Advanced Research Weather Research and Forecasting (WRF-ARW) model for prediction of land-falling Bay of Bengal (BoB) tropical cyclones (TCs). Model integration was performed using two-way interactive double nested domains at 27 and 9 km resolutions. The present study comprises two major components. Firstly, the study explores the impact of five different planetary boundary layer (PBL) and six cumulus convection (CC) schemes on seven land-falling BoB TCs. A total of 85 numerical simulations were studied in detail, and the results indicate that the model simulated both track and intensity better using the combination of the Yonsei University (YSU) PBL and the old simplified Arakawa-Schubert CC scheme. Secondly, the study investigated model performance, using the best combination of model physics, on real-time forecasts of four BoB cyclones (Phailin, Helen, Lehar, and Madi) that made landfall during 2013, based on another 15 numerical simulations. The predicted mean track error during 2013 was about 71 km, 114 km, 133 km, 148 km, and 130 km from day-1 to day-5, respectively. The Root Mean Square Error (RMSE) for Minimum Central Pressure (MCP) was about 6 hPa, and that for Maximum Surface Wind (MSW) was about 4.5 m s⁻¹, over the entire simulation period. In addition, the study reveals that the predicted track errors for the 2013 cyclones improved by 43%, 44%, and 52% from day-1 to day-3, respectively, compared to cyclones simulated during the period 2006-2011. The improvements can be attributed to the relatively better-quality initial data, reflected in a smaller initial mean position error (about 48 km) during 2013. Overall, the study indicates that the track and intensity forecasts for the 2013 cyclones using the combinations identified in the first part of this study performed relatively better than other NWP (Numerical Weather Prediction) models, and thereby finds
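
    As a rough illustration of how the verification statistics quoted above (mean track error and RMSE of MCP and MSW) are computed, the sketch below evaluates them for a hypothetical forecast/best-track pair; all numbers are made up, and the haversine great-circle distance is used for the track error.

        import numpy as np

        def great_circle_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
            # Haversine distance (km) between points given in degrees
            p1, p2 = np.radians(lat1), np.radians(lat2)
            dphi = p2 - p1
            dlam = np.radians(np.asarray(lon2) - np.asarray(lon1))
            a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
            return 2.0 * r_earth * np.arcsin(np.sqrt(a))

        rmse = lambda a, b: float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

        # Hypothetical forecast vs. best-track samples for one cyclone
        fc_lat, fc_lon = [13.2, 14.1, 15.3], [85.0, 83.9, 82.5]
        bt_lat, bt_lon = [13.0, 14.4, 15.9], [85.3, 84.1, 82.2]
        fc_mcp, bt_mcp = [990.0, 982.0, 975.0], [992.0, 978.0, 968.0]   # hPa
        fc_msw, bt_msw = [25.0, 32.0, 38.0], [23.0, 35.0, 42.0]         # m/s

        track_err = great_circle_km(fc_lat, fc_lon, bt_lat, bt_lon)
        print("mean track error [km]:", float(np.mean(track_err)))
        print("RMSE MCP [hPa]:", rmse(fc_mcp, bt_mcp))
        print("RMSE MSW [m/s]:", rmse(fc_msw, bt_msw))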

  19. PHYSICAL EDUCATION - PHYSICAL CULTURE. TWO MODELS, TWO DIDACTIC

    Directory of Open Access Journals (Sweden)

    Manuel Vizuete Carrizosa

    2014-11-01

    The survival of these conflicting positions, with their interests and different views on education, over a long period of time yielded as a consequence two teaching approaches and two different educational models, in which the objectives and content of education differ, and with them the forms and methods of teaching. Defining the cultural and educational approach, in every time and place, is now a pressing need and a challenge for teacher-training processes, which are responsible for shaping an advanced physical education adjusted to the time and place, to the interests and needs of citizens and to the democratic values of modern society.

  20. PHYSICAL EDUCATION - PHYSICAL CULTURE. TWO MODELS, TWO DIDACTIC

    Directory of Open Access Journals (Sweden)

    Manuel Vizuete Carrizosa

    2014-10-01

    Full Text Available Physical Education is currently facing a number of problems rooted in the identity crisis prompted by the spread of the professional group, the confrontation of ideas within the scientific community and the competing interests of different political and social spheres, to which physical education has failed, or been unable, to react in time. The political and ideological confrontation that characterized the twentieth century gave us two forms, each with a consistent ideological position, in which the body as a subject of education was understood from two different standpoints: one set from the left and communism, and another from Western democratic societies. The survival of these conflicting positions, with their interests and different views on education, over a long period of time yielded as a consequence two teaching approaches and two different educational models, in which the objectives and content of education differ, and with them the forms and methods of teaching. Defining the cultural and educational approach, in every time and place, is now a pressing need and a challenge for teacher-training processes, which are responsible for shaping an advanced physical education adjusted to the time and place, to the interests and needs of citizens and to the democratic values of modern society.

  1. Neutrosophic Parameterized Soft Relations and Their Applications

    Directory of Open Access Journals (Sweden)

    Irfan Deli

    2014-06-01

    Full Text Available The aim of this paper is to introduce the concept of a relation on neutrosophic parameterized soft sets (NP-soft sets) theory. We study some related properties and also put forward some propositions on neutrosophic parameterized soft relations with proofs and examples. The notions of symmetric, transitive, reflexive, and equivalence neutrosophic parameterized soft set relations are then established. Finally, a decision-making method on NP-soft sets is presented.

  2. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  3. Plasma simulation studies using multilevel physics models

    Energy Technology Data Exchange (ETDEWEB)

    Park, W.; Belova, E.V.; Fu, G.Y. [and others

    2000-01-19

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future.

  4. Tuning controllers using the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2000-01-01

    This paper describes the application of the Youla parameterization of all stabilizing controllers and the dual Youla parameterization of all systems stabilized by a given controller in connection with tuning of controllers. In the uncertain case, it is shown that the use of the Youla parameterization...

  5. Topos models for physics and topos theory

    Energy Technology Data Exchange (ETDEWEB)

    Wolters, Sander, E-mail: s.wolters@math.ru.nl [Radboud Universiteit Nijmegen, Institute for Mathematics, Astrophysics, and Particle Physics (Netherlands)

    2014-08-15

    What is the role of topos theory in the topos models for quantum theory as used by Isham, Butterfield, Döring, Heunen, Landsman, Spitters, and others? In other words, what is the interplay between physical motivation for the models and the mathematical framework used in these models? Concretely, we show that the presheaf topos model of Butterfield, Isham, and Döring resembles classical physics when viewed from the internal language of the presheaf topos, similar to the copresheaf topos model of Heunen, Landsman, and Spitters. Both the presheaf and copresheaf models provide a “quantum logic” in the form of a complete Heyting algebra. Although these algebras are natural from a topos theoretic stance, we seek a physical interpretation for the logical operations. Finally, we investigate dynamics. In particular, we describe how an automorphism on the operator algebra induces a homeomorphism (or isomorphism of locales) on the associated state spaces of the topos models, and how elementary propositions and truth values transform under the action of this homeomorphism. Also with dynamics the focus is on the internal perspective of the topos.

  6. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    Science.gov (United States)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′), and velocity-impedance-II (α″, β″ and I_S′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density

  7. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  8. Why supersymmetry? Physics beyond the standard model

    Indian Academy of Sciences (India)

    The Naturalness Principle as a requirement that the heavy mass scales decouple from the physics of light mass scales is reviewed. In quantum field theories containing elementary scalar fields, such as the Standard Model of electroweak interactions containing the Higgs particle, mass of the scalar field is not a natural ...

  9. Protein Folding: Search for Basic Physical Models

    Directory of Open Access Journals (Sweden)

    Ivan Y. Torshin

    2003-01-01

    Full Text Available How a unique three-dimensional structure is rapidly formed from the linear sequence of a polypeptide is one of the important questions in contemporary science. Apart from the biological context of in vivo protein folding (which has been studied only for a few proteins), the roles of the fundamental physical forces in in vitro folding remain largely unstudied. Despite a degree of success in using descriptions based on statistical and/or thermodynamic approaches, few of the current models explicitly include more basic physical forces (such as electrostatics and van der Waals forces). Moreover, the present-day models rarely take into account that protein folding is, essentially, a rapid process that produces a highly specific architecture. This review considers several physical models that may provide more direct links between sequence and tertiary structure in terms of the physical forces. In particular, elaboration of such simple models is likely to produce extremely effective computational techniques with value for modern genomics.

  10. A physical model of the intrathoracic stomach

    NARCIS (Netherlands)

    Bemelman, W. A.; Verburg, J.; Brummelkamp, W. H.; Klopper, P. J.

    1988-01-01

    To determine whether duodenogastric reflux into the thoracic stomach could be caused by the transmission of negative intrapleural pressure fluctuations into the gastric lumen, a physical model is described and an equation derived: Pm + Pa - Pmb - (Sv·Pmb·Vmb/Pm) = Ppl - Sv·Vmb, where Pm is

  11. Why supersymmetry? Physics beyond the standard model

    Indian Academy of Sciences (India)

    2016-08-23

    The Naturalness Principle as a requirement that the heavy mass scales decouple from the physics of light mass scales is reviewed. In quantum field theories containing elementary scalar fields, such as the Standard Model of electroweak interactions containing the Higgs particle, mass of the ...

  12. Continuum Modeling in the Physical Sciences

    NARCIS (Netherlands)

    Groesen, van E.; Molenaar, J.

    2007-01-01

    Mathematical modeling—the ability to apply mathematical concepts and techniques to real-life systems—has expanded considerably over the last decades, making it impossible to cover all of its aspects in one course or textbook. Continuum Modeling in the Physical Sciences provides an extensive

  13. Parameterization of solar flare dose

    International Nuclear Information System (INIS)

    Lamarche, A.H.; Poston, J.W.

    1996-01-01

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP)

  14. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. The use of the multi-satellite simulator to improve the representation of precipitation processes will also be discussed.

  15. Distance parameterization for efficient seismic history matching with the ensemble Kalman Filter

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Arts, R.

    2012-01-01

    The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel-times is that it requires simulation of

  16. On parameterized deformations and unsupervised learning

    DEFF Research Database (Denmark)

    Hansen, Michael Sass

    on an unrestricted linear parameter space, where all derivatives are defined, is introduced. Furthermore, it is shown that the L2-norm on the parameter space induces a reasonable metric in the actual space of modelled diffeomorphisms. A new parametrization of 3D deformation fields, using potentials and Helmholtz ... smoothing or averaging cost, of selecting warp parameterizations at a specific kernel resolution, has been analyzed. A refinement measure has been derived, which is shown to be efficient for guiding the local mesh layout. With the combination of the refinement measure and the local flexibility ... of the multivariate B-splines, the warp field is automatically refined in areas where it results in the minimization of the registration cost function...

  17. Global evaluation of particulate organic carbon flux parameterizations and implications for atmospheric pCO2

    Science.gov (United States)

    Gloege, Lucas; McKinley, Galen A.; Mouw, Colleen B.; Ciochetto, Audrey B.

    2017-07-01

    The shunt of photosynthetically derived particulate organic carbon (POC) from the euphotic zone and its deep remineralization comprise the basic mechanism of the "biological carbon pump." POC raining through the "twilight zone" (euphotic depth to 1 km) and "midnight zone" (1 km to 4 km) is remineralized back to inorganic form through respiration. Accurately modeling POC flux is critical for understanding the "biological pump" and its impacts on air-sea CO2 exchange and, ultimately, long-term ocean carbon sequestration. Yet commonly used parameterizations have not been tested quantitatively against global data sets using identical modeling frameworks. Here we use a single one-dimensional physical-biogeochemical modeling framework to assess three common POC flux parameterizations in capturing POC flux observations from moored sediment traps and thorium-234 depletion. The exponential decay, Martin curve, and ballast model are compared to data from 11 biogeochemical provinces distributed across the globe. In each province, the model captures satellite-based estimates of surface primary production within uncertainties. Goodness of fit is measured by how well the simulation captures the observations, quantified by bias and the root-mean-square error and displayed using "target diagrams." Comparisons are presented separately for the twilight zone and midnight zone. We find that the ballast hypothesis shows no improvement over a globally or regionally parameterized Martin curve. For all provinces taken together, the Martin exponent b that best fits the data lies in the range [0.70, 0.98]; this finding reduces by at least a factor of 3 previous estimates of the potential impact on atmospheric pCO2 of uncertainty in POC export, to a more modest range [-16 ppm, +12 ppm].
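
    For orientation, the two simplest of the flux parameterizations compared above can be written in a few lines. The sketch below evaluates a Martin power-law curve and an exponential-decay profile below an assumed export flux at 100 m; the export value, length scale and default exponent are illustrative assumptions, not values fitted in the study.

        import numpy as np

        def martin_flux(z, f_z0, z0=100.0, b=0.86):
            # Martin curve: F(z) = F(z0) * (z / z0) ** (-b)
            return f_z0 * (np.asarray(z, dtype=float) / z0) ** (-b)

        def exponential_flux(z, f_z0, z0=100.0, length_scale=300.0):
            # Exponential decay with a fixed remineralization length scale (m)
            return f_z0 * np.exp(-(np.asarray(z, dtype=float) - z0) / length_scale)

        depths = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])   # m
        f_export = 10.0   # illustrative POC export at 100 m (mmol C m-2 d-1)

        for z, fm, fe in zip(depths, martin_flux(depths, f_export),
                             exponential_flux(depths, f_export)):
            print(f"z = {z:6.0f} m   Martin: {fm:5.2f}   exponential: {fe:5.2f}")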

  18. Photonic Crystals Physics and Practical Modeling

    CERN Document Server

    Sukhoivanov, Igor A

    2009-01-01

    The great interest in photonic crystals and their applications in the past decade requires a thorough training of students and professionals who can practically apply the knowledge of physics of photonic crystals together with skills of independent calculation of basic characteristics of photonic crystals and modelling of various photonic crystal elements for application in all-optical communication systems. This book combines basic backgrounds in fiber and integrated optics with detailed analysis of mathematical models for 1D, 2D and 3D photonic crystals and microstructured fibers, as well as with descriptions of real algorithms and codes for practical realization of the models.

  19. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (Danish abstract not available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring...

  20. Simulation of Mixed-Phase Convective Clouds: A Comparison of Spectral and Parameterized Microphysics

    Science.gov (United States)

    Seifert, A.; Khain, A.; Pokrovsky, A.

    2002-12-01

    The simulation of clouds and precipitation is one of the most complex problems in atmospheric modeling. The microphysics of clouds has to deal with a large variety of hydrometeor types and a multitude of complicated physical processes like nucleation, condensation, freezing, melting, collection and breakup of particles. Due to the lack of reliable in-situ observations, many of the processes are still not well understood. Nevertheless a cloud resolving model (CRM) has to include these processes in some way. All CRMs can be separated into two groups, according to the microphysical representation used. Cloud models of the first kind utilize the so-called bulk parameterization of cloud microphysics. This concept was introduced by Kessler (1969) and has been improved and extended in the field of mesoscale modeling. The state-of-the-art bulk schemes include several particle types like cloud droplets, raindrops, ice crystals, snow and graupel which are represented by mass contents and for some of them also by the number concentrations. Within a bulk microphysical model all relevant processes have to be parameterized in terms of these model variables. CRMs of the second kind are based on the spectral formulation of cloud microphysics. For each particle type taken into account the size distribution function is represented by a number of discrete size bins with its corresponding budget equation. To achieve satisfactory numerical results at least 30 bins are necessary for each particle type. This approach has the clear advantage of being a more general representation of the relevant physical processes and the different physical properties of particles of different sizes. A spectral model is able to include detailed descriptions of collisional and condensational growth and activation/nucleation of particles. But this approach suffers from the large computational effort necessary, especially in three-dimensional models. We present a comparison between a cloud model with
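
    The difference between the two model classes described above can be made concrete with a toy raindrop size distribution: a spectral (bin) scheme carries the distribution explicitly on discrete size bins, whereas a bulk scheme carries only low-order moments such as number concentration and mass content. The sketch below builds both representations from an assumed Marshall-Palmer-type exponential distribution; all values are illustrative.

        import numpy as np

        RHO_W = 1000.0           # kg m-3, density of liquid water
        N0, LAM = 8.0e6, 2.0e3   # assumed exponential distribution n(D) = N0 * exp(-LAM * D)

        # Spectral (bin) representation: the distribution on 40 discrete size bins
        edges = np.linspace(0.1e-3, 6.0e-3, 41)            # bin edges (m)
        d = 0.5 * (edges[:-1] + edges[1:])                 # bin-center diameters
        number_per_bin = N0 * np.exp(-LAM * d) * np.diff(edges)          # m-3 per bin
        mass_per_bin = number_per_bin * (np.pi / 6.0) * RHO_W * d ** 3   # kg m-3 per bin

        # Bulk (two-moment) representation: only the integrated moments are carried
        number_conc = number_per_bin.sum()    # total number concentration (m-3)
        mass_content = mass_per_bin.sum()     # total rain water content (kg m-3)

        print(f"{d.size} bins -> N = {number_conc:.0f} m^-3, "
              f"rain water content = {mass_content * 1e3:.2f} g m^-3")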

  1. Climate impacts of parameterized Nordic Sea overflows

    Science.gov (United States)

    Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.

    2010-11-01

    A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North

  2. The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2003-11-21

    To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.

  3. B physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Hewett, J.A.L.

    1997-12-01

    The ability of present and future experiments to test the Standard Model in the B meson sector is described. The authors examine the loop effects of new interactions in flavor changing neutral current B decays and in Z → bb̄, concentrating on supersymmetry and the left-right symmetric model as specific examples of new physics scenarios. The procedure for performing a global fit to the Wilson coefficients which describe b → s transitions is outlined, and the results of such a fit from Monte Carlo generated data are compared to the predictions of the two sample new physics scenarios. A fit to the Zbb̄ couplings from present data is also given

  4. LHC Higgs physics beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Spannowsky, M.

    2007-09-22

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that, for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV) no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is just motivated by the experimental agreement of results from flavor physics with Standard Model predictions, but not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  5. LHC Higgs physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Spannowsky, M.

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that, for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV) no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is just motivated by the experimental agreement of results from flavor physics with Standard Model predictions, but not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  6. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

    Full Text Available Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
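
    As a minimal, self-contained illustration of the kind of approximate inference discussed above, the sketch below fits a pairwise (Ising-type) model to binary data with the naive mean-field inversion, in which couplings are read off from the inverse of the connected correlation matrix. The data here are random placeholders rather than simulated cortical activity, and the mean-field formulas are only one of the approximate methods the paper compares.

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder binary data: T time bins of N units, coded as spins in {-1, +1}
        N, T = 10, 20000
        spins = np.where(rng.random((T, N)) < 0.45, 1.0, -1.0)

        m = spins.mean(axis=0)      # magnetizations <s_i>
        C = np.cov(spins.T)         # connected correlation matrix C_ij

        # Naive mean-field inversion: J_ij = -(C^-1)_ij for i != j, then fields from
        # the mean-field equations m_i = tanh(h_i + sum_j J_ij m_j)
        J = -np.linalg.inv(C)
        np.fill_diagonal(J, 0.0)
        h = np.arctanh(m) - J @ m

        print("inferred fields:", np.round(h, 3))
        print("largest coupling |J_ij|:", float(np.abs(J).max()))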

  7. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  8. Development of a land surface model with coupled snow and frozen soil physics

    Science.gov (United States)

    Wang, Lei; Zhou, Jing; Qi, Jia; Sun, Litao; Yang, Kun; Tian, Lide; Lin, Yanluan; Liu, Wenbin; Shrestha, Maheswor; Xue, Yongkang; Koike, Toshio; Ma, Yaoming; Li, Xiuping; Chen, Yingying; Chen, Deliang; Piao, Shilong; Lu, Hui

    2017-06-01

    Snow and frozen soil are important factors that influence terrestrial water and energy balances through snowpack accumulation and melt and soil freeze-thaw. In this study, a new land surface model (LSM) with coupled snow and frozen soil physics was developed based on a hydrologically improved LSM (HydroSiB2). First, an energy-balance-based three-layer snow model was incorporated into HydroSiB2 (hereafter HydroSiB2-S) to provide an improved description of the internal processes of the snow pack. Second, a universal and simplified soil model was coupled with HydroSiB2-S to depict soil water freezing and thawing (hereafter HydroSiB2-SF). In order to avoid the instability caused by the uncertainty in estimating water phase changes, enthalpy was adopted as a prognostic variable instead of snow/soil temperature in the energy balance equation of the snow/frozen soil module. The newly developed models were then carefully evaluated at two typical sites of the Tibetan Plateau (TP) (one snow covered and the other snow free, both with underlying frozen soil). At the snow-covered site in northeastern TP (DY), HydroSiB2-SF demonstrated significant improvements over HydroSiB2-F (same as HydroSiB2-SF but using the original single-layer snow module of HydroSiB2), showing the importance of snow internal processes in three-layer snow parameterization. At the snow-free site in southwestern TP (Ngari), HydroSiB2-SF reasonably simulated soil water phase changes while HydroSiB2-S did not, indicating the crucial role of frozen soil parameterization in depicting the soil thermal and water dynamics. Finally, HydroSiB2-SF proved to be capable of simulating upward moisture fluxes toward the freezing front from the underlying soil layers in winter.

  9. Generomak: Fusion physics, engineering and costing model

    International Nuclear Information System (INIS)

    Delene, J.G.; Krakowski, R.A.; Sheffield, J.; Dory, R.A.

    1988-06-01

    A generic fusion physics, engineering and economics model (Generomak) was developed as a means of performing consistent analysis of the economic viability of alternative magnetic fusion reactors. The original Generomak model developed at Oak Ridge by Sheffield was expanded for the analyses of the Senior Committee on Environmental Safety and Economics of Magnetic Fusion Energy (ESECOM). This report describes the Generomak code as used by ESECOM. The input data used for each of the ten ESECOM fusion plants and the Generomak code output for each case is given. 14 refs., 3 figs., 17 tabs

  10. Elastic FWI for VTI media: A synthetic parameterization study

    KAUST Repository

    Kamath, Nishant

    2016-09-06

    A major challenge for multiparameter full-waveform inversion (FWI) is the inherent trade-offs (or cross-talk) between model parameters. Here, we perform FWI of multicomponent data generated for a synthetic VTI (transversely isotropic with a vertical symmetry axis) model based on a geologic section of the Valhall field. A horizontal displacement source, which excites intensive shear waves in the conventional offset range, helps provide more accurate updates to the SV-wave vertical velocity. We test three model parameterizations, which exhibit different radiation patterns and, therefore, create different parameter trade-offs. The results show that the choice of parameterization for FWI depends on the availability of long-offset data, the quality of the initial model for the anisotropy coefficients, and the parameter that needs to be resolved with the highest accuracy.

  11. Physical vs. Mathematical Models in Rock Mechanics

    Science.gov (United States)

    Morozov, I. B.; Deng, W.

    2013-12-01

    One of the less noted challenges in understanding the mechanical behavior of rocks at both in situ and lab conditions is the character of theoretical approaches being used. Currently, the emphasis is placed on spatial averaging theories (homogenization and numerical models of microstructure), empirical models for temporal behavior (material memory, compliance functions and complex moduli), and mathematical transforms (Laplace and Fourier) used to infer the Q-factors and 'relaxation mechanisms'. In geophysical applications, we have to rely on such approaches for very broad spatial and temporal scales which are not available in experiments. However, the above models often make insufficient use of physics and utilize, for example, the simplified 'correspondence principle' instead of the laws of viscosity and friction. As a result, the commonly used time- and frequency-dependent (visco)elastic moduli represent apparent properties related to the measurement procedures and not necessarily to material properties. Predictions made from such models may therefore be inaccurate or incorrect when extrapolated beyond the lab scales. To overcome the above challenge, we need to utilize the methods of micro- and macroscopic mechanics and thermodynamics known in theoretical physics. This description is rigorous and accurate, uses only partial differential equations, and allows straightforward numerical implementations. One important observation from the physical approach is that the analysis should always be done for the specific geometry and parameters of the experiment. Here, we illustrate these methods on axial deformations of a cylindrical rock sample in the lab. A uniform, isotropic elastic rock with a thermoelastic effect is considered in four types of experiments: 1) axial extension with free transverse boundary, 2) pure axial extension with constrained transverse boundary, 3) pure bulk expansion, and 4) axial loading harmonically varying with time. In each of these cases, an

  12. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments

    Directory of Open Access Journals (Sweden)

    H. Roux

    2011-09-01

    Full Text Available A spatially distributed hydrological model, dedicated to flood simulation, is developed on the basis of physical process representation (infiltration, overland flow, channel routing). Estimation of model parameters requires data concerning topography, soil properties, vegetation and land use. Four parameters are calibrated for the entire catchment using one flood event. Model sensitivity to individual parameters is assessed using Monte-Carlo simulations. Results of this sensitivity analysis, with a criterion based on the Nash efficiency coefficient and the error of peak time and runoff, are used to calibrate the model. This procedure is tested on the Gardon d'Anduze catchment, located in the Mediterranean zone of southern France. A first validation is conducted using three flood events with different hydrometeorological characteristics. This sensitivity analysis along with validation tests illustrates the predictive capability of the model and points out possible improvements to the model's structure and parameterization for flash flood forecasting, especially in ungauged basins. Concerning the model structure, results show that water transfer through the subsurface zone also contributes to the hydrograph response to an extreme event, especially during the recession period. Maps of soil saturation emphasize the impact of rainfall and soil properties variability on these dynamics. Adding a subsurface flow component in the simulation also greatly impacts the spatial distribution of soil saturation and shows the importance of the drainage network. Measures of such distributed variables would help discriminate between different possible model structures.
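
    A minimal sketch of the Monte-Carlo calibration step described above: random parameter sets are drawn, each run is scored against the observed hydrograph with the Nash efficiency (Nash-Sutcliffe) coefficient, and the best-scoring set is kept. The flood model here is a hypothetical single-parameter linear reservoir standing in for the distributed model, and all forcing data are synthetic.

        import numpy as np

        def nash_sutcliffe(sim, obs):
            # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def toy_flood_model(k, rainfall):
            # Hypothetical stand-in for the distributed model: one linear reservoir
            store, q = 0.0, []
            for p in rainfall:
                store += p
                out = store / max(k, 1e-6)
                store -= out
                q.append(out)
            return np.array(q)

        rng = np.random.default_rng(42)
        rain = rng.gamma(2.0, 2.0, size=48)                              # synthetic hyetograph
        obs_q = toy_flood_model(6.0, rain) + rng.normal(0.0, 0.1, 48)    # synthetic "observations"

        best_k, best_nse = None, -np.inf
        for k in rng.uniform(1.0, 20.0, size=2000):                      # Monte-Carlo sampling
            nse = nash_sutcliffe(toy_flood_model(k, rain), obs_q)
            if nse > best_nse:
                best_k, best_nse = k, nse

        print(f"best k = {best_k:.2f}, Nash efficiency = {best_nse:.3f}")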

  13. Modellus: Learning Physics with Mathematical Modelling

    Science.gov (United States)

    Teodoro, Vitor

    Computers are now a major tool in research and development in almost all scientific and technological fields. Despite recent developments, this is far from true for learning environments in schools and most undergraduate studies. This thesis proposes a framework for designing curricula where computers, and computer modelling in particular, are a major tool for learning. The framework, based on research on learning science and mathematics and on computer user interfaces, assumes that: 1) learning is an active process of creating meaning from representations; 2) learning takes place in a community of practice where students learn both from their own effort and from external guidance; 3) learning is a process of becoming familiar with concepts, with links between concepts, and with representations; 4) direct manipulation user interfaces allow students to explore concrete-abstract objects such as those of physics and can be used by students with minimal computer knowledge. Physics is the science of constructing models and explanations about the physical world, and mathematical models are an important type of model that many students find difficult. These difficulties can be rooted in the fact that most students do not have an environment where they can explore functions, differential equations and iterations as primary objects that model physical phenomena, as objects-to-think-with, reifying the formal objects of physics. The framework proposes that students should be introduced to modelling at a very early stage of learning physics and mathematics, two scientific areas that must be taught in a very closely related way, as they were developed since Galileo and Newton until the beginning of our century, before the rise of overspecialisation in science. At an early stage, functions are the main type of objects used to model real phenomena, such as motions. At a later stage, rates of change and equations with rates of change play an important role. This type of equations

  14. An efficient geometric parameterization technique for the continuation power flow

    Energy Technology Data Exchange (ETDEWEB)

    Garbelini, Enio; Alves, Dilson A.; Neto, Alfredo B.; Righeto, Edson [Department of Electrical Engineering, Electrical Engineering Faculty, Paulista State University (CISA/UNESP), C.P. 31, CEP 15378-000 Ilha Solteira, SP (Brazil); da Silva, Luiz C.P.; Castro, Carlos A. [School of Electrical and Computer Engineering State University of Campinas, UNICAMP C.P. 6101, CEP 13081-970 Campinas, SP (Brazil)

    2007-01-15

    Continuation methods have been shown to be efficient tools for solving ill-conditioned cases, with close-to-singular Jacobian matrices, such as the maximum loading point of power systems. Some parameterization techniques have been proposed to avoid matrix singularity and successfully solve those cases. This paper presents a new geometric parameterization scheme that allows the complete tracing of the P-V curves without ill-conditioning problems. The proposed technique combines robustness with simplicity and is easy to understand. The Jacobian matrix singularity is avoided by the addition of a line equation, which passes through a point in the plane determined by the total real power losses and the loading factor. These two parameters have clear physical meaning. The application of this new technique to the IEEE systems (14, 30, 57, 118 and 300 buses) shows that the best characteristics of the conventional Newton's method are not only preserved but also improved. (author)
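
    The core idea above, augmenting the power-flow equations with one extra line equation so that the Jacobian of the enlarged system remains nonsingular at the nose of the curve, can be demonstrated on a toy fold problem. The sketch below traces x² + λ - 1 = 0 (turning point at λ = 1) by rotating a line through the origin of the (λ, x) plane; this is a deliberately simplified stand-in for the paper's line in the (total real power losses, loading factor) plane, not the IEEE-system implementation.

        import numpy as np

        def trace_fold_curve(n_steps=9):
            # Toy "power flow" equation f(lam, x) = x^2 + lam - 1 = 0, traced by sweeping
            # the angle of a line through the origin of the (lam, x) plane. The line
            # equation is the extra relation that keeps the augmented Jacobian
            # nonsingular at the turning point (lam = 1, x = 0).
            thetas = np.linspace(np.pi / 2, -np.pi / 2, n_steps)
            lam, x = 0.0, 1.0                       # known solution for theta = 90 deg
            points = []
            for theta in thetas:
                for _ in range(50):                 # Newton iterations on the 2x2 system
                    f = np.array([x ** 2 + lam - 1.0,
                                  x * np.cos(theta) - lam * np.sin(theta)])
                    if np.abs(f).max() < 1e-12:
                        break
                    jac = np.array([[1.0, 2.0 * x],
                                    [-np.sin(theta), np.cos(theta)]])
                    lam, x = np.array([lam, x]) + np.linalg.solve(jac, -f)
                points.append((lam, x))
            return points

        for lam, x in trace_fold_curve():
            print(f"lambda = {lam:7.4f}   x = {x:7.4f}")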

  15. Physics Beyond the Standard Model: Supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Nojiri, M.M.; /KEK, Tsukuba /Tsukuba, Graduate U. Adv. Studies /Tokyo U.; Plehn, T.; /Edinburgh U.; Polesello, G.; /INFN, Pavia; Alexander, John M.; /Edinburgh U.; Allanach, B.C.; /Cambridge U.; Barr, Alan J.; /Oxford U.; Benakli, K.; /Paris U., VI-VII; Boudjema, F.; /Annecy, LAPTH; Freitas, A.; /Zurich U.; Gwenlan, C.; /University Coll. London; Jager, S.; /CERN /LPSC, Grenoble

    2008-02-01

    This collection of studies on new physics at the LHC constitutes the report of the supersymmetry working group at the Workshop 'Physics at TeV Colliders', Les Houches, France, 2007. They cover the wide spectrum of phenomenology in the LHC era, from alternative models and signatures to the extraction of relevant observables, the study of the MSSM parameter space and finally to the interplay of LHC observations with additional data expected on a similar time scale. The special feature of this collection is that while not each of the studies is explicitly performed together by theoretical and experimental LHC physicists, all of them were inspired by and discussed in this particular environment.

  16. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    Science.gov (United States)

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
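
    A schematic of the "uninformed" simulated-annealing optimization mentioned above: candidate parameter sets are perturbed at random, improvements are always accepted, and worse sets are occasionally accepted with a temperature-dependent probability so the search can escape local optima. The objective function below is a hypothetical placeholder for the misfit between InVEST-predicted and observed bee abundance, and all settings are illustrative.

        import math
        import random

        def misfit(params):
            # Hypothetical misfit surface (to minimize); minimum near (0.3, 0.7)
            x, y = params
            return (x - 0.3) ** 2 + (y - 0.7) ** 2 + 0.05 * math.sin(20 * x) * math.cos(20 * y)

        def simulated_annealing(start, n_iter=5000, t0=1.0, cooling=0.999, step=0.05, seed=1):
            rng = random.Random(seed)
            current, current_cost = list(start), misfit(start)
            best, best_cost = list(current), current_cost
            temp = t0
            for _ in range(n_iter):
                cand = [min(1.0, max(0.0, p + rng.gauss(0.0, step))) for p in current]
                cost = misfit(cand)
                # Metropolis acceptance: always take improvements, sometimes take worse moves
                if cost < current_cost or rng.random() < math.exp((current_cost - cost) / temp):
                    current, current_cost = cand, cost
                    if cost < best_cost:
                        best, best_cost = list(cand), cost
                temp *= cooling
            return best, best_cost

        params, cost = simulated_annealing([0.5, 0.5])
        print("best parameters:", [round(p, 3) for p in params], "misfit:", round(cost, 5))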

  17. Physical model for membrane protrusions during spreading

    International Nuclear Information System (INIS)

    Chamaraux, F; Ali, O; Fourcade, B; Keller, S; Bruckert, F

    2008-01-01

    During cell spreading onto a substrate, the kinetics of the contact area is an observable quantity. This paper is concerned with a physical approach to modeling this process in the case of ameboid motility where the membrane detaches itself from the underlying cytoskeleton at the leading edge. The physical model we propose is based on previous reports which highlight that membrane tension regulates cell spreading. Using a phenomenological feedback loop to mimic stress-dependent biochemistry, we show that the actin polymerization rate can be coupled to the stress which builds up at the margin of the contact area between the cell and the substrate. In the limit of small variation of membrane tension, we show that the actin polymerization rate can be written in a closed form. Our analysis defines characteristic lengths which depend on elastic properties of the membrane–cytoskeleton complex, such as the membrane–cytoskeleton interaction, and on molecular parameters, the rate of actin polymerization. We discuss our model in the case of axi-symmetric and non-axi-symmetric spreading and we compute the characteristic time scales as a function of fundamental elastic constants such as the strength of membrane–cytoskeleton adherence

  18. Parameterization of the 3-PG model for Pinus elliottii stands using alternative methods to estimate fertility rating, biomass partitioning and canopy closure

    Science.gov (United States)

    Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc

    2014-01-01

    The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...

  19. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    DEFF Research Database (Denmark)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin

    2015-01-01

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed...... to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using...... data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (ε_msh) was 2.63 to 4.59 times that of sunlit leaves...

  20. A new 2D climate model with chemistry and self-consistent eddy parameterization. The impact of airplane NOx on the chemistry of the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Gepraegs, R.; Schmitz, G.; Peters, D. [Institut fuer Atmosphaerenphysik, Kuehlungsborn (Germany)

    1997-12-31

    A 2D version of the ECHAM T21 climate model has been developed. The new model includes an efficient spectral transport scheme with implicit diffusion. Furthermore, the photodissociation and chemistry of the NCAR 2D model have been incorporated. A self-consistent parametrization scheme is used for the eddy heat and momentum fluxes in the troposphere. It is based on the heat flux parametrization of Branscome and a mixing-length formulation for quasi-geostrophic vorticity. Above 150 hPa the mixing coefficient K_yy is prescribed. Some of the model results are discussed, especially concerning the impact of aircraft NOx emissions on the model chemistry. (author) 6 refs.

  1. A method of dopant electron energy spectrum parameterization for calculation of single-electron nanodevices

    Science.gov (United States)

    Shorokhov, V. V.

    2017-05-01

    Solitary dopants in semiconductors and dielectrics that possess stable electron structures and interesting physical properties may be used as building blocks of quantum computers and sensor systems that operate based on new physical principles. This study proposes a phenomenological method of parameterization for the single-particle energy spectrum of dopant valence electrons in crystalline semiconductors and dielectrics that takes electron-electron interactions into account. Electron-electron interactions are treated within the framework of the outer electron shell model. The proposed method is applied to construct a procedure for determining the effective dopant outer-shell capacity and a method for calculating the tunneling current in a single-electron device with one or several active dopant charge centers.

  2. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    Science.gov (United States)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2017-04-01

    In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10
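
    As a rough illustration of the kind of scheme described above, the sketch below adds zero-mean, temporally correlated (AR(1)) noise to an ocean temperature tendency field; the grid size, noise amplitude sigma and decorrelation time tau are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

def ar1_noise_update(noise, dt, tau, sigma, rng):
    """Advance a zero-mean AR(1) (red) noise field by one time step.
    tau is the decorrelation time (s); sigma the target standard deviation."""
    phi = np.exp(-dt / tau)                      # lag-1 autocorrelation
    innovation = np.sqrt(1.0 - phi ** 2) * sigma * rng.standard_normal(noise.shape)
    return phi * noise + innovation              # variance-preserving update

rng = np.random.default_rng(0)
dt, tau, sigma = 3600.0, 5.0 * 86400.0, 1.0e-7   # hypothetical values (s, s, K/s)
noise = np.zeros((32, 64))                       # coarse ocean grid (lat x lon)
deterministic_tendency = np.zeros((32, 64))      # placeholder temperature tendency (K/s)

for _ in range(240):                             # ten days of hourly steps
    noise = ar1_noise_update(noise, dt, tau, sigma, rng)
    total_tendency = deterministic_tendency + noise   # perturbed tendency fed to the model
```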

  3. A physically based framework for modeling the organic fractionation of sea spray aerosol from bubble film Langmuir equilibria

    Directory of Open Access Journals (Sweden)

    S. M. Burrows

    2014-12-01

    Full Text Available The presence of a large fraction of organic matter in primary sea spray aerosol (SSA can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely sensed chlorophyll a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC, a polysaccharide-like mixture associated primarily with semilabile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecules. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol. Predicted relationships between chlorophyll a and organic fraction are similar to existing empirical
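
    A minimal sketch of the competitive Langmuir adsorption idea underlying the framework: each organic class occupies a fraction of the bubble-film surface set by its adsorption coefficient and bulk concentration. The class names, coefficients and concentrations below are hypothetical illustrations, not the values used in the paper.

```python
# Hypothetical macromolecule classes with illustrative Langmuir coefficients
# alpha_i and bulk concentrations C_i (arbitrary but consistent units).
alpha = {"lipid-like": 50.0, "polysaccharide-like": 5.0, "protein-like": 15.0}
conc = {"lipid-like": 0.002, "polysaccharide-like": 0.02, "protein-like": 0.005}

def competitive_langmuir_coverage(alpha, conc):
    """Fractional coverage of each class on the bubble film:
    theta_i = alpha_i * C_i / (1 + sum_j alpha_j * C_j)."""
    denom = 1.0 + sum(alpha[k] * conc[k] for k in alpha)
    return {k: alpha[k] * conc[k] / denom for k in alpha}

coverage = competitive_langmuir_coverage(alpha, conc)
print(coverage)                                   # film enriched in the high-affinity class
print("uncovered fraction:", 1.0 - sum(coverage.values()))
```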

  4. Countering SQL Injection Attacks with Parameterized Queries (Menangkal Serangan SQL Injection Dengan Parameterized Query)

    Directory of Open Access Journals (Sweden)

    Yulianingsih Yulianingsih

    2016-06-01

    Full Text Available As the growth of information services accelerates, so does the security vulnerability of an information source. This paper presents an experimental study of database attacks carried out by means of SQL injection. The attacks were directed at the authentication page, since this page is the first point of access and should therefore have adequate defenses. Experiments were then carried out on the Parameterized Query method to obtain a solution to this problem. Keywords: information services, attacks, experiment, SQL Injection, Parameterized Query.
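
    To make the countermeasure concrete, the sketch below contrasts string-concatenated SQL with a parameterized query using Python's built-in sqlite3 driver; the table, credentials and injection payload are hypothetical illustrations, not taken from the study.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

def login_unsafe(conn, username, password_hash):
    # String concatenation: attacker-controlled input becomes part of the SQL text.
    query = ("SELECT COUNT(*) FROM users WHERE username = '" + username +
             "' AND password_hash = '" + password_hash + "'")
    return conn.execute(query).fetchone()[0] > 0

def login_parameterized(conn, username, password_hash):
    # Placeholders: the driver passes the values as data, never as SQL text.
    query = "SELECT COUNT(*) FROM users WHERE username = ? AND password_hash = ?"
    return conn.execute(query, (username, password_hash)).fetchone()[0] > 0

payload = "' OR '1'='1"
print(login_unsafe(conn, payload, payload))        # True  -> injection bypasses the check
print(login_parameterized(conn, payload, payload)) # False -> injection is blocked
```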

  5. Parameterization and measurements of helical magnetic fields

    International Nuclear Information System (INIS)

    Fischer, W.; Okamura, M.

    1997-01-01

    Magnetic fields with helical symmetry can be parameterized using multipole coefficients (a_n, b_n). We present a parameterization that gives the familiar multipole coefficients (a_n, b_n) for straight magnets when the helical wavelength tends to infinity. To measure helical fields, all methods used for straight magnets can be employed. We show how to convert the results of those measurements to obtain the desired helical multipole coefficients (a_n, b_n).

  6. Parameterized String Matching Algorithms with Application to ...

    African Journals Online (AJOL)

    In the parameterized string matching problem, a given pattern P is said to match with a sub-string t of the text T, if there exist a bijection from the symbols of P to the symbols of t. Salmela and Tarhio solve the parameterized string matching problem in sub-linear time by applying the concept of q-gram in the Horspool algorithm ...
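
    A minimal sketch of the matching condition described above (not the Salmela and Tarhio q-gram algorithm itself): a pattern p-matches a window of the text when a one-to-one symbol mapping exists in both directions.

```python
def p_match(pattern, window):
    """True if a one-to-one mapping (bijection) exists from the symbols of
    `pattern` onto the symbols of `window` of the same length."""
    if len(pattern) != len(window):
        return False
    fwd, bwd = {}, {}
    for p, t in zip(pattern, window):
        if fwd.setdefault(p, t) != t or bwd.setdefault(t, p) != p:
            return False
    return True

def p_match_positions(pattern, text):
    """Naive O(n*m) scan reporting every position where the pattern p-matches."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1) if p_match(pattern, text[i:i + m])]

# 'aba' p-matches 'xyx' (a->x, b->y) and 'zvz' (a->z, b->v).
print(p_match_positions("aba", "xyxzvz"))   # [0, 3]
```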

  7. Physics and Dynamics Coupling Across Scales in the Next Generation CESM. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacmeister, Julio T. [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States)

    2015-06-12

    This project examines physics/dynamics coupling, that is, exchange of meteorological profiles and tendencies between an atmospheric model’s dynamical core and its various physics parameterizations. Most model physics parameterizations seek to represent processes that occur on scales smaller than the smallest scale resolved by the dynamical core. As a consequence a key conceptual aspect of parameterizations is an assumption about the subgrid variability of quantities such as temperature, humidity or vertical wind. Most existing parameterizations of processes such as turbulence, convection, cloud, and gravity wave drag make relatively ad hoc assumptions about this variability and are forced to introduce empirical parameters, i.e., “tuning knobs” to obtain realistic simulations. These knobs make systematic dependences on model grid size difficult to quantify.

  8. Parameterization for subgrid-scale motion of ice-shelf calving fronts

    Directory of Open Access Journals (Sweden)

    T. Albrecht

    2011-01-01

    Full Text Available A parameterization for the motion of ice-shelf fronts on a Cartesian grid in finite-difference land-ice models is presented. The scheme prevents artificial thinning of the ice shelf at its edge, which occurs due to the finite resolution of the model. The intuitive numerical implementation diminishes numerical dispersion at the ice front and enables the application of physical boundary conditions to improve the calculation of stress and velocity fields throughout the ice-sheet–shelf system. Numerical properties of this subgrid modification are assessed in the Potsdam Parallel Ice Sheet Model (PISM-PIK) for different geometries in one and two horizontal dimensions and are verified against an analytical solution in a flow-line setup.

  9. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    Czech Academy of Sciences Publication Activity Database

    Zhou, Y.; Wu, X.; Weiming, J.; Chen, J.; Wang, S.; Wang, H.; Wenping, Y.; Black, T. A.; Jassal, R.; Ibrom, A.; Han, S.; Yan, J.; Margolis, H.; Roupsard, O.; Li, Y.; Zhao, F.; Kiely, G.; Starr, G.; Pavelka, Marian; Montagnani, L.; Wohlfahrt, G.; D'Odorico, P.; Cook, D.; Altaf Arain, M.; Bonal, D.; Beringer, J.; Blanken, P. D.; Loubet, B.; Leclerc, M. Y.; Matteucci, G.; Nagy, Z.; Olejnik, Janusz; U., K. T. P.; Varlagin, A.

    2016-01-01

    Roč. 36, č. 7 (2016), s. 2743-2760 ISSN 2169-8953 Institutional support: RVO:67179843 Keywords: global parametrization * predicting model * FLUXNET Subject RIV: EH - Ecology, Behaviour Impact factor: 3.395, year: 2016

  10. Les Houches Summer School on Theoretical Physics: Session 84: Particle Physics Beyond the Standard Model

    CERN Document Server

    Lavignac, Stephan; Dalibard, Jean

    2006-01-01

    The Standard Model of elementary particles and interactions is one of the best-tested theories in physics. This book presents a collection of lectures given in August 2005 at the Les Houches Summer School on Particle Physics beyond the Standard Model. It provides a pedagogical introduction to various aspects of particle physics beyond the Standard Model.

  11. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed-phase clouds. Mixed-phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their life cycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their life cycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the "small mode". Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice

  12. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory]; Pierrard, Viviane [BELGIUM]; Goldstein, Jerry [SWRI]; André, Nicolas [ESTEC/ESA]; Kotova, Galina A [SRI, RUSSIA]; Lemaire, Joseph F [BELGIUM]; Liemohn, Mike W [U OF MICHIGAN]; Matsui, H [UNIV OF NEW HAMPSHIRE]

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  13. Relativistic nuclear physics with the spectator model

    International Nuclear Information System (INIS)

    Gross, F.

    1988-01-01

    The spectator model, a general approach to the relativistic treatment of nuclear physics problems in which spectators to nuclear interactions are put on their mass shell, will be defined and described. The approach grows out of the relativistic treatment of two- and three-body systems in which one particle is off-shell, and recent numerical results for the NN interaction will be presented. Two meson-exchange models, one with only 4 mesons (π, σ, ρ, ω) but with a 25% admixture of γ_5 coupling for the pion, and a second with 6 mesons (π, σ, ρ, ω, δ, and η) but a pure γ_5 γ^μ pion coupling, are shown to give very good quantitative fits to NN scattering phase shifts below 400 MeV, and also a good description of the p-40Ca elastic scattering observables. 19 refs., 6 figs., 1 tab

  14. Physical and Chemical Environmental Abstraction Model

    International Nuclear Information System (INIS)

    Nowak, E.

    2000-01-01

    As directed by a written development plan (CRWMS M&O 1999a), Task 1, an overall conceptualization of the physical and chemical environment (P/CE) in the emplacement drift is documented in this Analysis/Model Report (AMR). Included are the physical components of the engineered barrier system (EBS). The intended use of this descriptive conceptualization is to assist the Performance Assessment Department (PAD) in modeling the physical and chemical environment within a repository drift. It is also intended to assist PAD in providing a more integrated and complete in-drift geochemical model abstraction and to answer the key technical issues raised in the U.S. Nuclear Regulatory Commission (NRC) Issue Resolution Status Report (IRSR) for the Evolution of the Near-Field Environment (NFE) Revision 2 (NRC 1999). EBS-related features, events, and processes (FEPs) have been assembled and discussed in "EBS FEPs/Degradation Modes Abstraction" (CRWMS M&O 2000a). Reference AMRs listed in Section 6 address FEPs that have not been screened out. This conceptualization does not directly address those FEPs. Additional tasks described in the written development plan are recommended for future work in Section 7.3. To achieve the stated purpose, the scope of this document includes: (1) the role of in-drift physical and chemical environments in the Total System Performance Assessment (TSPA) (Section 6.1); (2) the configuration of engineered components (features) and critical locations in drifts (Sections 6.2.1 and 6.3, portions taken from EBS Radionuclide Transport Abstraction (CRWMS M&O 2000b)); (3) overview and critical locations of processes that can affect P/CE (Section 6.3); (4) couplings and relationships among features and processes in the drifts (Section 6.4); and (5) identities and uses of parameters transmitted to TSPA by some of the reference AMRs (Section 6.5). This AMR originally considered a design with backfill, and is now being updated (REV 00 ICN1) to address

  15. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require a new theory of propulsion, specifically one that does not require mass ejection, yet without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004, so named because the Chameleon field is hidden within known physics; it represents a scalar field within and about an object, even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, so that thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass. The interactions between these systems cause a differential coupling to the local gravity environment of the Earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust nearly equivalent to the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements, even though embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  16. Data-driven RBE parameterization for helium ion beams

    CERN Document Server

    Mairani, A; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T

    2016-01-01

    Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $(\alpha/\beta)_{\mathrm{ph}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\mathrm{RBE}_{\alpha} = \alpha_{\mathrm{He}}/\alpha_{\mathrm{ph}}$ and $R_{\beta} = \beta_{\mathrm{He}}/\beta_{\mathrm{ph}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\mathrm{RBE}_{10}$) are compared with the experimental ones. Pearson's correlati...
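
    To illustrate how such a parameterization is used, the sketch below computes an LQ-based RBE from the two ratios defined above and $(\alpha/\beta)_{\mathrm{ph}}$; the numerical ratios fed in at the end are hypothetical placeholders rather than the paper's LET-dependent fits.

```python
import math

def rbe_lq(d_he, rbe_alpha, r_beta, ab_ph):
    """RBE of a helium dose d_he (Gy) in the linear-quadratic framework.
    rbe_alpha = alpha_He/alpha_ph, r_beta = beta_He/beta_ph, and ab_ph is
    (alpha/beta)_ph (Gy) of the reference photon radiation. The iso-effective
    photon dose solves alpha_ph*d + beta_ph*d**2 = alpha_He*d_he + beta_He*d_he**2.
    """
    d_ph = 0.5 * (-ab_ph + math.sqrt(ab_ph ** 2
                                     + 4.0 * rbe_alpha * ab_ph * d_he
                                     + 4.0 * r_beta * d_he ** 2))
    return d_ph / d_he

# Hypothetical ratios for a mid-range LET; the paper's LET-dependent expressions
# for RBE_alpha and R_beta would be substituted here.
print(round(rbe_lq(d_he=2.0, rbe_alpha=1.4, r_beta=1.0, ab_ph=2.0), 3))   # ~1.13
```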

  17. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier--Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to their standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An

  18. Lightning NOx Production in CMAQ: Part II - Parameterization Based on Relationship between Observed NLDN Lightning Strikes and Modeled Convective Precipitation Rates

    Science.gov (United States)

    Lightning-produced nitrogen oxides (NOx = NO + NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOx during the past dec...

  19. Landscape-scale parameterization of a tree-level forest growth model: a k-nearest neighbor imputation approach incorporating LiDAR data

    Science.gov (United States)

    Michael J. Falkowski; Andrew T. Hudak; Nicholas L. Crookston; Paul E. Gessler; Edward H. Uebler; Alistair M. S. Smith

    2010-01-01

    Sustainable forest management requires timely, detailed forest inventory data across large areas, which is difficult to obtain via traditional forest inventory techniques. This study evaluated k-nearest neighbor imputation models incorporating LiDAR data to predict tree-level inventory data (individual tree height, diameter at breast height, and...

  20. Parameterization of a numerical 2-D debris flow model with entrainment: a case study of the Faucon catchment, Southern French Alps

    Directory of Open Access Journals (Sweden)

    H. Y. Hussin

    2012-10-01

    Full Text Available The occurrence of debris flows has been recorded for more than a century in the European Alps, accounting for the risk to settlements and other human infrastructure that have led to death, building damage and traffic disruptions. One of the difficulties in the quantitative hazard assessment of debris flows is estimating the run-out behavior, which includes the run-out distance and the related hazard intensities such as the height and velocity of a debris flow. In addition, as observed in the French Alps, the entrainment of material during the run-out can increase the volume by 10–50 times with respect to the initially mobilized mass triggered at the source area. The entrainment process is evidently an important factor that can further determine the magnitude and intensity of debris flows. Research on numerical modeling of debris flow entrainment is still ongoing and involves some difficulties. This is partly due to our lack of knowledge of the actual process of the uptake and incorporation of material and to the effect of entrainment on the final behavior of a debris flow. Therefore, it is important to model the effects of this key erosional process on the formation of run-outs and related intensities. In this study we analyzed a debris flow with high entrainment rates that occurred in 2003 at the Faucon catchment in the Barcelonnette Basin (Southern French Alps). The historic event was back-analyzed using the Voellmy rheology and an entrainment model embedded in the RAMMS 2-D numerical modeling software. A sensitivity analysis of the rheological and entrainment parameters was carried out, and the effects of modeling with entrainment on the debris flow run-out, height and velocity were assessed.

  1. Fuzzy modelling of Atlantic salmon physical habitat

    Science.gov (United States)

    St-Hilaire, André; Mocq, Julien; Cunjak, Richard

    2015-04-01

    Fish habitat models typically attempt to quantify the amount of available river habitat for a given fish species under various flow and hydraulic conditions. To achieve this, information on the preferred range of values of key physical habitat variables (e.g. water level, velocity, substrate diameter) for the targeted fish species needs to be modelled. In this context, we developed several sets of habitat suitability indices for three Atlantic salmon life stages (young-of-the-year (YOY), parr, spawning adults) with the help of fuzzy logic modeling. Using the knowledge of twenty-seven experts from both sides of the Atlantic Ocean, we defined fuzzy sets of four variables (depth, substrate size, velocity and Habitat Suitability Index, or HSI) and associated fuzzy rules. When applied to the Romaine River (Canada), median curves of standardized Weighted Usable Area (WUA) were calculated and a confidence interval was obtained by bootstrap resampling. Despite the large range of WUA covered by the expert WUA curves, confidence intervals were relatively narrow: an average width of 0.095 (on a scale of 0 to 1) for spawning habitat, 0.155 for parr rearing habitat and 0.160 for YOY rearing habitat. When considering an environmental flow value corresponding to 90% of the maximum reached by the WUA curve, results seem acceptable for the Romaine River. Generally, this proposed fuzzy logic method seems suitable to model habitat availability for the three life stages, while also providing an estimate of uncertainty in salmon preferences.
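
    A minimal sketch of the fuzzy-rule idea (triangular membership functions combined with min for AND, then defuzzified to an HSI value); the breakpoints and rules below are hypothetical illustrations, not the expert-derived sets of the study.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def hsi_parr(depth_m, velocity_ms):
    """Toy fuzzy habitat suitability index for a single life stage."""
    depth = {"shallow": tri(depth_m, 0.0, 0.15, 0.4),
             "medium": tri(depth_m, 0.2, 0.5, 0.9),
             "deep": tri(depth_m, 0.7, 1.5, 3.0)}
    vel = {"slow": tri(velocity_ms, 0.0, 0.1, 0.4),
           "moderate": tri(velocity_ms, 0.2, 0.5, 0.9),
           "fast": tri(velocity_ms, 0.7, 1.2, 2.5)}
    # Each rule: AND = min of memberships; the conclusion is a crisp HSI level.
    rules = [(min(depth["medium"], vel["moderate"]), 1.0),
             (min(depth["shallow"], vel["moderate"]), 0.6),
             (min(depth["medium"], vel["fast"]), 0.4),
             (min(depth["deep"], vel["slow"]), 0.2)]
    num = sum(w * h for w, h in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0             # weighted-average defuzzification

print(round(hsi_parr(0.45, 0.55), 2))            # near-optimal depth/velocity -> HSI ~ 1.0
```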

  2. Intensity-dependent parameterization of elevation effects in precipitation analysis

    Directory of Open Access Journals (Sweden)

    T. Haiden

    2009-03-01

    Full Text Available Elevation effects in long-term (monthly to inter-annual precipitation data have been widely studied and are taken into account in the regionalization of point-like precipitation amounts by using methods like external drift kriging and cokriging. On the daily or hourly time scale, precipitation-elevation gradients are more variable, and difficult to parameterize. For example, application of the annual relative precipitation-elevation gradient to each 12-h sub-period reproduces the annual total, but at the cost of a large root-mean-square error. If the precipitation-elevation gradient is parameterized as a function of precipitation rate, the error can be substantially reduced. It is shown that the form of the parameterization suggested by the observations conforms to what one would expect based on the physics of the orographic precipitation process (the seeder-feeder mechanism. At low precipitation rates, orographic precipitation is "conversion-limited", thus increasing roughly linearly with precipitation rate. At higher rates, orographic precipitation becomes "condensation-limited" thus leading to an additive rather than multiplicative orographic precipitation enhancement. Also it is found that for large elevation differences it becomes increasingly important to take into account those events where the mountain station receives precipitation but the valley station remains dry.
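
    The sketch below illustrates one way such an intensity-dependent enhancement can be written: the added orographic amount grows roughly linearly with the valley rate at low intensities and saturates toward a constant at high intensities. The functional form and the parameter values are hypothetical illustrations, not the parameterization fitted in the paper.

```python
def orographic_enhancement(p_valley, dz, grad_rel=5e-4, p_sat=2.0):
    """Orographic precipitation (mm/h) added to the valley rate p_valley (mm/h)
    for an elevation difference dz (m). At low rates the added amount grows
    roughly linearly with p_valley (conversion-limited); at high rates it
    levels off toward a constant (condensation-limited)."""
    return grad_rel * dz * p_sat * p_valley / (p_sat + p_valley)

for p in (0.1, 0.5, 2.0, 10.0):
    print(p, round(p + orographic_enhancement(p, dz=800.0), 3))
```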

  3. Testing longwave radiation parameterizations under clear and overcast skies at Storglaciären, Sweden

    OpenAIRE

    Sedlar, J.; Hock, R.

    2009-01-01

    Energy balance based glacier melt models require accurate estimates of incoming longwave radiation but direct measurements are often not available. Multi-year near-surface meteorological data from Storglaciären, Northern Sweden, were used to evaluate commonly used longwave radiation parameterizations in a glacier environment under clear-sky and all-sky conditions. Parameterizations depending solely on air temperature performed worse than those which include water vapor pressure. All models te...

  4. Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.

    Science.gov (United States)

    Baker, Richard

    A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…

  5. Development of a Parameterization for Mesoscale Hydrological Modeling and Application to Landscape and Climate Change in the Interior Alaska Boreal Forest Ecosystem

    Science.gov (United States)

    Endalamaw, Abraham Melesse

    The Interior Alaska boreal forest ecosystem is one of the largest ecosystems on Earth and lies between the warmer southerly temperate and colder Arctic regions. The ecosystem is underlain by discontinuous permafrost. The presence or absence of permafrost primarily controls water pathways and ecosystem composition. As a result, the region hosts two distinct ecotypes that transition over a very short spatial scale, often on the order of meters. Accurate mesoscale hydrological modeling of the region is critical as the region is experiencing unprecedented ecological and hydrological changes that have regional and global implications. However, accurate representation of the landscape heterogeneity and mesoscale hydrological processes has remained a major challenge. This study addressed this challenge by developing a simple landscape model from the hill-slope studies and in situ measurements of the past several decades. The new approach improves the mesoscale prediction of several hydrological processes, including streamflow and evapotranspiration (ET). The impact of climate-induced landscape change under a changing climate is also investigated. In the projected climate scenario, Interior Alaska is expected to undergo a major landscape shift, including a transition from a coniferous-dominated to a deciduous-dominated ecosystem and from discontinuous permafrost to either a sporadic or isolated permafrost region. This major landscape shift is predicted to have a large and complex impact on the predicted runoff, evapotranspiration, and moisture deficit (precipitation minus evapotranspiration). Overall, a large increase in runoff, evapotranspiration, and moisture deficit is predicted under the future climate. Most hydrological climate change impact studies do not include the projected change in landscape in the model. In this study, we found that ignoring the projected ecosystem change could lead to an inaccurate conclusion. Hence climate induced vegetation and

  6. Influence of Superparameterization and a Higher-Order Turbulence Closure on Rainfall Bias Over Amazonia in Community Atmosphere Model Version 5: How Parameterization Changes Rainfall

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Fu, Rong [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles CA USA; Shaikh, Muhammad J. [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Ghan, Steven [Pacific Northwest National Laboratory, Richland WA USA; Wang, Minghuai [Institute for Climate and Global Change Research and School of Atmospheric Sciences, Nanjing University, Nanjing China; Collaborative Innovation Center of Climate Change, Nanjing China; Leung, L. Ruby [Pacific Northwest National Laboratory, Richland WA USA; Dickinson, Robert E. [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Marengo, Jose [Centro Nacional de Monitoramento e Alertas aos Desastres Naturais, São Jose dos Campos Brazil

    2017-09-21

    We evaluate the Community Atmosphere Model Version 5 (CAM5) with a higher-order turbulence closure scheme, named Cloud Layers Unified By Binormals (CLUBB), and a Multiscale Modeling Framework (MMF) with two different microphysics configurations to investigate their influences on rainfall simulations over Southern Amazonia. The two different microphysics configurations in MMF are the one-moment cloud microphysics without aerosol treatment (SAM1MOM) and the two-moment cloud microphysics coupled with aerosol treatment (SAM2MOM). Results show that both MMF-SAM2MOM and CLUBB effectively reduce the low biases of rainfall, mainly during the wet season. CLUBB reduces low biases of humidity in the lower troposphere with further reduced shallow clouds. The latter enables more surface solar flux, leading to stronger convection and more rainfall. MMF, especially MMF-SAM2MOM, destabilizes the atmosphere with more moisture and higher atmospheric temperatures in the atmospheric boundary layer, allowing the growth of more extreme convection and further generating more deep convection. MMF-SAM2MOM significantly increases rainfall in the afternoon, but it does not reduce the early bias of the diurnal rainfall peak; CLUBB, on the other hand, delays the afternoon peak time and produces more precipitation in the early morning, due to a more realistic gradual transition between shallow and deep convection. MMF appears to be able to realistically capture the observed increase of relative humidity prior to deep convection, especially with its two-moment configuration. In contrast, in CAM5 and CAM5 with CLUBB, the occurrence of deep convection appears to be a result of stronger heating rather than higher relative humidity.

  7. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Willems, Patrick

    2007-01-01

    storm conceptualized to a synthetic rainfall hyetograph by a Gaussian shape with the parameters: rain storm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used on the basis of the failure probability estimation, together with a hydrodynamic......, and alternative methods based on random sampling (Monte Carlo Direct Sampling and Importance Sampling). It is concluded that without crucial influence on the modelling accuracy, the First Order Reliability Method is very applicable as an alternative to traditional long-term simulations of urban drainage systems....

  8. Architectural and growth traits differ in effects on performance of clonal plants: an analysis using a field-parameterized simulation model

    Czech Academy of Sciences Publication Activity Database

    Wildová, Radka; Gough, L.; Herben, Tomáš; Hershock, Ch.; Goldberg, D. E.

    2007-01-01

    Roč. 116, č. 5 (2007), s. 836-852 ISSN 0030-1299 R&D Projects: GA ČR(CZ) GA206/02/0953; GA ČR(CZ) GA206/02/0578 Grant - others:NSF(US) DEB99-74296; NSF(US) DEB99-74284 Institutional research plan: CEZ:AV0Z60050516 Keywords : individual-based model * performance * plant architecture * competitive response * resource allocation Subject RIV: EF - Botanics Impact factor: 3.136, year: 2007

  9. Analysing the Competency of Mathematical Modelling in Physics

    OpenAIRE

    Redish, Edward F.

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grow as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unso...

  10. Sensitivity of quantitative precipitation forecasts to boundary layer parameterization: a flash flood case study in the Western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. Zampieri

    2005-01-01

    Full Text Available The 'Montserrat-2000' severe flash flood event which occurred over Catalonia on 9 and 10 June 2000 is analyzed. Strong precipitation was generated by a mesoscale convective system associated with the development of a cyclone. The location of heavy precipitation depends on the position of the cyclone, which, in turn, is found to be very sensitive to various model characteristics and initial conditions. Numerical simulations of this case study using the hydrostatic BOLAM and the non-hydrostatic MOLOCH models are performed in order to test the effects of different formulations of the boundary layer parameterization: a modified version of the Louis (order 1) model and a custom version of the E-ℓ (order 1.5) model. Both of them require a diagnostic formulation of the mixing length, but the use of the turbulent kinetic energy equation in the E-ℓ model makes it possible to represent turbulence history and non-locality effects and to formulate a more physically based mixing length. The impact of the two schemes is different in the two models. The hydrostatic model, run at 1/5 degree resolution, is less sensitive, but the quantitative precipitation forecast is in any case unsatisfactory in terms of localization and amount. Conversely, the non-hydrostatic model, run at 1/50 degree resolution, is capable of realistically simulating the timing, position and amount of precipitation, with apparently superior results obtained with the E-ℓ parameterization.
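
    As a rough illustration of what a 1.5-order (E-ℓ) closure computes, the sketch below evaluates an eddy diffusivity from turbulent kinetic energy and a diagnostic mixing length; the closure constant, the Blackadar-type length scale and the profiles are hypothetical illustrations, not the formulations used in BOLAM or MOLOCH.

```python
import numpy as np

def eddy_diffusivity_el(tke, mixing_length, c_k=0.4):
    """1.5-order (E-l) closure: eddy diffusivity K = c_k * l * sqrt(e), with
    turbulent kinetic energy e (m^2 s^-2) and mixing length l (m)."""
    return c_k * mixing_length * np.sqrt(tke)

z = np.linspace(10.0, 1000.0, 10)                # heights (m)
tke = 0.5 * np.exp(-z / 500.0)                   # decaying TKE profile (m^2 s^-2)
l = 0.4 * z / (1.0 + 0.4 * z / 100.0)            # Blackadar-type mixing length (m)
print(np.round(eddy_diffusivity_el(tke, l), 2))  # K profile (m^2 s^-1)
```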

  11. Outstanding questions: physics beyond the Standard Model

    CERN Document Server

    Ellis, John

    2012-01-01

    The Standard Model of particle physics agrees very well with experiment, but many important questions remain unanswered, among them are the following. What is the origin of particle masses and are they due to a Higgs boson? How does one understand the number of species of matter particles and how do they mix? What is the origin of the difference between matter and antimatter, and is it related to the origin of the matter in the Universe? What is the nature of the astrophysical dark matter? How does one unify the fundamental interactions? How does one quantize gravity? In this article, I introduce these questions and discuss how they may be addressed by experiments at the Large Hadron Collider, with particular attention to the search for the Higgs boson and supersymmetry.

  12. Physical Model of Cellular Symmetry Breaking

    Science.gov (United States)

    van der Gucht, Jasper; Sykes, Cécile

    2009-01-01

    Cells can polarize in response to external signals, such as chemical gradients, cell–cell contacts, and electromagnetic fields. However, cells can also polarize in the absence of an external cue. For example, a motile cell, which initially has a more or less round shape, can lose its symmetry spontaneously even in a homogeneous environment and start moving in random directions. One of the principal determinants of cell polarity is the cortical actin network that underlies the plasma membrane. Tension in this network generated by myosin motors can be relaxed by rupture of the shell, leading to polarization. In this article, we discuss how simplified model systems can help us to understand the physics that underlie the mechanics of symmetry breaking. PMID:20066077

  13. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  14. Comprehensive assessment of parameterization methods for estimating clear-sky surface downward longwave radiation

    Science.gov (United States)

    Guo, Yamin; Cheng, Jie; Liang, Shunlin

    2018-02-01

    Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed FLUXNET sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R² of -0.11 W/m², 20.35 W/m², and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and BMA all failed over land with the tropical climate type, with high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
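
    For orientation, the sketch below evaluates two classic clear-sky formulas of the type compared in the study: the Brutsaert (1975) emissivity and a Brunt-type emissivity, each multiplied by the blackbody flux. The Brunt coefficients shown are commonly quoted textbook values, not necessarily those calibrated in the paper.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def sdlr_brutsaert(t_air_k, e_hpa):
    """Clear-sky SDLR (W m^-2), Brutsaert (1975): eps = 1.24 * (e/T)^(1/7),
    with vapour pressure e in hPa and air temperature T in K."""
    eps = 1.24 * (e_hpa / t_air_k) ** (1.0 / 7.0)
    return eps * SIGMA * t_air_k ** 4

def sdlr_brunt(t_air_k, e_hpa, a=0.52, b=0.065):
    """Clear-sky SDLR with a Brunt-type emissivity eps = a + b*sqrt(e)."""
    eps = a + b * math.sqrt(e_hpa)
    return eps * SIGMA * t_air_k ** 4

print(round(sdlr_brutsaert(288.15, 12.0), 1))    # ~308 W m^-2
print(round(sdlr_brunt(288.15, 12.0), 1))        # ~291 W m^-2
```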

  15. On the application of principal component analysis to the calculation of the bulk integral optical properties for radiation parameterizations in climate models.

    Science.gov (United States)

    Baran, Anthony J; Newman, Stuart M

    2017-03-01

    Rigorous electromagnetic computations required for the calculation of high-resolution monochromatic bulk integral optical properties of irregular atmospheric particles are onerous in memory and in time requirements. Here, it is shown that from a set of 145 monochromatic bulk integral ice optical properties, it is possible to reduce the set to eight hinge wavelengths by using the method of principal component analysis (PCA) regression. From the eight hinge wavelengths, the full set can be reconstructed to within root mean square errors of ≪1%. To obtain optimal reconstruction, the training set must cover as wide a range of parameter space as possible. Rigorous electromagnetic methods can now be routinely applied to represent accurately the integral optical properties of atmospheric particles in climate models.
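
    The sketch below illustrates the reconstruction idea on synthetic data: principal components are fitted to a training set of spectra, the PC scores are regressed on the values at a small number of hinge wavelengths, and a full spectrum is then rebuilt from its hinge values alone. The data, the number of components and the hinge positions are hypothetical placeholders, not those derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a training set: n_samples smooth "spectra" on 145 wavelengths.
n_samples, n_wav, n_pc = 200, 145, 6
basis = np.cumsum(rng.standard_normal((8, n_wav)), axis=1)
train = rng.standard_normal((n_samples, 8)) @ basis

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:n_pc]                                    # leading principal components
scores = (train - mean) @ pcs.T                    # PC scores of the training spectra

hinge = np.linspace(0, n_wav - 1, 8, dtype=int)    # 8 "hinge" wavelength indices
X = np.column_stack([np.ones(n_samples), train[:, hinge]])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)  # regress scores on hinge values

# Reconstruct a new spectrum from its 8 hinge-wavelength values only.
new_full = rng.standard_normal(8) @ basis
pred_scores = np.concatenate(([1.0], new_full[hinge])) @ coef
recon = mean + pred_scores @ pcs
rel_rmse = np.sqrt(np.mean((recon - new_full) ** 2)) / np.ptp(new_full) * 100
print(f"reconstruction RMSE: {rel_rmse:.2f}% of range")
```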

  16. Under-canopy turbulence and root water uptake of a Tibetan meadow ecosystem modeled by Noah-MP

    NARCIS (Netherlands)

    Zheng, Donghai; van der Velde, Rogier; Su, Zhongbo; Wen, Jun; Booij, Martijn J.; Hoekstra, Arjen Ysbert; Wang, Xin

    2015-01-01

    The Noah-MP land surface model adopts a multiparameterization framework to accommodate various alternative parameterizations for more than 10 physical processes. In this paper, the parameterizations implemented in Noah-MP associated with under-canopy turbulence and root water uptake are enhanced

  17. Mathematical models of physics problems (physics research and technology)

    CERN Document Server

    Anchordoqui, Luis Alfredo

    2013-01-01

    This textbook is intended to provide a foundation for a one-semester introductory course on the advanced mathematical methods that form the cornerstones of the hard sciences and engineering. The work is suitable for first year graduate or advanced undergraduate students in the fields of Physics, Astronomy and Engineering. This text therefore employs a condensed narrative sufficient to prepare graduate and advanced undergraduate students for the level of mathematics expected in more advanced graduate physics courses, without too much exposition on related but non-essential material. In contrast to the two semesters traditionally devoted to mathematical methods for physicists, the material in this book has been quite distilled, making it a suitable guide for a one-semester course. The assumption is that the student, once versed in the fundamentals, can master more esoteric aspects of these topics on his or her own if and when the need arises during the course of conducting research. The book focuses on two cor...

  18. Vector and axial nucleon form factors: A duality constrained parameterization

    International Nuclear Information System (INIS)

    Bodek, A.; Avvakumov, S.; Bradford, R.; Budd, H.

    2008-01-01

    We present new parameterizations of vector and axial nucleon form factors. We maintain an excellent description of the form factors at low momentum transfers, where the spatial structure of the nucleon is important, and use the Nachtmann scaling variable ξ to relate elastic and inelastic form factors and impose quark-hadron duality constraints at high momentum transfers, where the quark structure dominates. We use the new vector form factors to re-extract updated values of the axial form factor from neutrino experiments on deuterium. We obtain an updated world-average value from ν_μ d and pion electroproduction experiments of M_A = 1.014 ± 0.014 GeV/c². Our parameterizations are useful in modeling neutrino interactions at low energies (e.g. for neutrino oscillation experiments). The predictions for high momentum transfers can be tested in the next generation of electron and neutrino scattering experiments. (orig.)
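
    For context, the sketch below evaluates the conventional dipole forms that such parameterizations refine: the vector dipole with M² = 0.71 GeV² and an axial dipole using the M_A value quoted above. These are the standard baseline expressions, not the duality-constrained fits of the paper.

```python
def dipole_vector(q2_gev2, m2=0.71):
    """Vector dipole G_D(Q^2) = (1 + Q^2/M^2)^-2 with the conventional M^2 = 0.71 GeV^2."""
    return (1.0 + q2_gev2 / m2) ** -2

def dipole_axial(q2_gev2, g_a=1.267, m_a=1.014):
    """Axial form factor in the dipole approximation,
    F_A(Q^2) = g_A / (1 + Q^2/M_A^2)^2, with M_A in GeV/c^2 as quoted above."""
    return g_a / (1.0 + q2_gev2 / m_a ** 2) ** 2

for q2 in (0.0, 0.5, 1.0, 2.0):                  # Q^2 in GeV^2
    print(q2, round(dipole_vector(q2), 4), round(dipole_axial(q2), 4))
```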

  19. Investigating the Sensitivity of Nucleation Parameterization on Ice Growth

    Science.gov (United States)

    Gaudet, L.; Sulia, K. J.

    2017-12-01

    The accurate prediction of precipitation from lake-effect snow events associated with the Great Lakes region depends on the parameterization of thermodynamic and microphysical processes, including the formation and subsequent growth of frozen hydrometeors. More specifically, the formation of ice hydrometeors has been represented through varying forms of ice nucleation parameterizations considering the different nucleation modes (e.g., deposition, condensation-freezing, homogeneous). These parameterizations have been developed from in-situ measurements and laboratory observations. A suite of nucleation parameterizations, consisting of those published in Meyers et al. (1992) and DeMott et al. (2010) as well as varying ice nuclei data sources, is coupled with the Adaptive Habit Model (AHM, Harrington et al. 2013), a microphysics module where ice crystal aspect ratio and density are predicted and evolve in time. Simulations are run with the AHM, which is implemented in the Weather Research and Forecasting (WRF) model, to investigate the effect of ice nucleation parameterization on the non-spherical growth and evolution of ice crystals and the subsequent effects on liquid-ice cloud-phase partitioning. Specific lake-effect storms that were observed during the Ontario Winter Lake-Effect Systems (OWLeS) field campaign (Kristovich et al. 2017) are examined to elucidate this potential microphysical effect. Analysis of these modeled events is aided by dual-polarization radar data from the WSR-88D in Montague, New York (KTYX). This enables a comparison of the modeled and observed polarimetric and microphysical profiles of the lake-effect clouds, which involves investigating signatures of reflectivity, specific differential phase, correlation coefficient, and differential reflectivity. Microphysical features of lake-effect bands, such as ice, snow, and liquid mixing ratios, ice crystal aspect ratio, and ice density are analyzed to understand signatures in the aforementioned modeled
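
    As a rough illustration of two of the schemes named above, the sketch below evaluates the Meyers et al. (1992) deposition/condensation-freezing fit and the DeMott et al. (2010) temperature- and aerosol-dependent fit; the coefficient values shown are the commonly cited published fits and should be verified against the original papers before use.

```python
import math

def meyers_1992_n_id(si_percent):
    """Meyers et al. (1992) deposition/condensation-freezing nuclei (per litre)
    as a function of ice supersaturation S_i in percent: N = exp(a + b * S_i)."""
    return math.exp(-0.639 + 0.1296 * si_percent)

def demott_2010_n_in(t_k, n_aer_gt05):
    """DeMott et al. (2010) ice nuclei concentration (per standard litre) from
    temperature T (K) and the number concentration of aerosol particles larger
    than 0.5 micron, n_aer_gt05 (per standard cm^3)."""
    a, b, c, d = 5.94e-5, 3.33, 0.0264, 0.0033
    dt = 273.16 - t_k
    return a * dt ** b * n_aer_gt05 ** (c * dt + d)

print(round(meyers_1992_n_id(5.0), 2))            # per litre at 5% ice supersaturation
print(round(demott_2010_n_in(258.15, 50.0), 2))   # per std litre at -15 C, 50 cm^-3
```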

  20. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 1: Model components for sources parameterization

    Directory of Open Access Journals (Sweden)

    R. Azzaro

    2017-11-01

    Full Text Available The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring and particularly the rapid geodynamics that clearly demonstrate some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency–magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude–size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool – FiSH (Pace et al., 2016) – that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area, the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be

  1. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    Full Text Available The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed through a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed. The experimental results are gathered and are available to be analysed. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to meet than those for a failure model, but the obtained results provide only limited information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of large- or medium-scale physical models, as well as their instrumentation, offers great advantages, but involves a large amount of financial, logistic and time resources.

  2. A Conceptual Model of Observed Physical Literacy

    Science.gov (United States)

    Dudley, Dean A.

    2015-01-01

    Physical literacy is a concept that is gaining greater acceptance around the world, with the United Nations Educational, Scientific and Cultural Organization (2013) recognizing it as one of several central tenets in a quality physical education framework. However, previous attempts to understand progression in physical literacy learning have been…

  3. Estimating Crop Albedo in the Application of a Physical Model Based on the Law of Energy Conservation and Spectral Invariants

    Directory of Open Access Journals (Sweden)

    Jingjing Peng

    2015-11-01

    Full Text Available Albedo characterizes the radiometric interface between land surfaces, especially vegetation, and the atmosphere. It is a critical input to many models, such as crop growth models, hydrological models and climate models. Given the extensive attention paid to crop monitoring, a physical albedo model for crops is developed based on the law of energy conservation and spectral invariants, derived from an earlier forest albedo model. The model inputs have been efficiently and physically parameterized, including the dependence of albedo on the solar zenith/azimuth angle, the fraction of diffuse skylight in the incident radiation, the canopy structure, the leaf reflectance/transmittance and the soil reflectance characteristics. Both the anisotropy of soil reflectance and the clumping effect of crop leaves at the canopy scale are considered, which contributes to the improved accuracy of the model. Comparison between model results and Monte Carlo simulations indicates that the modeled canopy albedo is highly accurate, with an RMSE < 0.005. Validation against ground measurements has also demonstrated the reliability of the model and its ability to capture the interaction mechanism between radiation and the canopy-soil system.
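
    To give a flavour of how a spectral-invariants description can be turned into an albedo estimate, the sketch below combines a canopy interceptance, a recollision probability and an upward escape fraction with a Lambertian soil reflectance. It is a heavily simplified single-band approximation intended only to illustrate the kind of parameterization involved; the symbols (omega_leaf, i0, p, f_up, r_soil) and all numeric values are assumptions for illustration and do not reproduce the published crop albedo model.

```python
def canopy_soil_albedo(omega_leaf, i0, p, f_up=0.5, r_soil=0.2):
    """Very simplified single-band albedo of a canopy-soil system.
      omega_leaf : leaf single-scattering albedo (reflectance + transmittance)
      i0         : canopy interceptance of incoming radiation
      p          : photon recollision probability (spectral invariant)
      f_up       : fraction of escaping photons leaving the canopy upward
      r_soil     : Lambertian soil reflectance
    Returns an approximate total albedo; illustrative only."""
    # Fraction of incident photons intercepted and eventually scattered out of the canopy
    scattered = i0 * omega_leaf * (1.0 - p) / (1.0 - p * omega_leaf)
    r_canopy = scattered * f_up                         # canopy-only reflectance
    t_canopy = (1.0 - i0) + scattered * (1.0 - f_up)    # fraction reaching the soil
    # Geometric series for repeated reflections between soil and canopy underside
    return r_canopy + t_canopy ** 2 * r_soil / (1.0 - r_soil * r_canopy)

# Hypothetical broadband values for a developing crop canopy
blacksky = canopy_soil_albedo(omega_leaf=0.45, i0=0.7, p=0.55)   # direct-beam geometry
whitesky = canopy_soil_albedo(omega_leaf=0.45, i0=0.8, p=0.55)   # diffuse illumination
f_diffuse = 0.2  # assumed fraction of diffuse skylight
print(f"blue-sky albedo ~ {(1 - f_diffuse) * blacksky + f_diffuse * whitesky:.3f}")
```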

  4. Physical controls and predictability of stream hyporheic flow evaluated with a multiscale model

    Science.gov (United States)

    Stonedahl, Susa H.; Harvey, Judson W.; Detty, Joel; Aubeneau, Antoine; Packman, Aaron I.

    2012-01-01

    Improved predictions of hyporheic exchange based on easily measured physical variables are needed to improve assessment of solute transport and reaction processes in watersheds. Here we compare physically based model predictions for an Indiana stream with stream tracer results interpreted using the Transient Storage Model (TSM). We parameterized the physically based Multiscale Model (MSM) of stream-groundwater interactions with measured stream planform and discharge, stream velocity, streambed hydraulic conductivity and porosity, and topography of the streambed at distinct spatial scales (i.e., ripple, bar, and reach scales). We predicted hyporheic exchange fluxes and hyporheic residence times using the MSM. A Continuous Time Random Walk (CTRW) model was used to convert the MSM output into predictions of in-stream solute transport, which we compared with field observations and TSM parameters obtained by fitting solute transport data. MSM simulations indicated that surface-subsurface exchange through smaller topographic features such as ripples was much faster than exchange through larger topographic features such as bars. However, hyporheic exchange varied nonlinearly with groundwater discharge owing to interactions between flows induced at different topographic scales. MSM simulations showed that groundwater discharge significantly decreased both the volume of water entering the subsurface and the time it spent in the subsurface. The MSM also characterized longer timescales of exchange than were observed by the tracer-injection approach. The tracer data, and corresponding TSM fits, were limited by tracer measurement sensitivity and uncertainty in estimates of background tracer concentrations. Our results indicate that rates and patterns of hyporheic exchange are strongly influenced by a continuum of surface-subsurface hydrologic interactions over a wide range of spatial and temporal scales rather than by discrete processes.
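
    As a reference point for the TSM interpretation mentioned above, the sketch below integrates the standard one-dimensional transient storage equations (advection, dispersion, first-order exchange with a well-mixed storage zone) with a simple explicit upwind scheme. The reach geometry, velocity, dispersion and exchange parameters are hypothetical, and the scheme is only a minimal illustration, not the OTIS-style fitting, MSM, or CTRW codes used in the study.

```python
import numpy as np

def tsm_step(c, cs, u, d, alpha, a_ratio, dx, dt):
    """One explicit time step of the 1-D Transient Storage Model:
       dC/dt  = -u dC/dx + d d2C/dx2 + alpha (Cs - C)
       dCs/dt = alpha (A/As) (C - Cs)
    Upwind advection, central-difference dispersion."""
    adv = -u * (c - np.roll(c, 1)) / dx
    disp = d * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
    exch = alpha * (cs - c)
    c_new = c + dt * (adv + disp + exch)
    cs_new = cs + dt * alpha * a_ratio * (c - cs)
    c_new[0] = 0.0                       # upstream boundary after the pulse has passed
    return c_new, cs_new

# Hypothetical 2 km reach: 5 m cells, 5 s steps, modest dispersion and storage exchange
nx, dx, dt = 400, 5.0, 5.0
u, d, alpha, a_ratio = 0.3, 0.5, 2.0e-4, 2.0     # a_ratio = A / As
c, cs = np.zeros(nx), np.zeros(nx)
c[0:4] = 1.0                                     # instantaneous tracer pulse near the inlet
for _ in range(1200):                            # roughly 100 minutes of transport
    c, cs = tsm_step(c, cs, u, d, alpha, a_ratio, dx, dt)
print(f"peak concentration {c.max():.3f} at x = {c.argmax() * dx:.0f} m")
```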

  5. A Structural Equation Model of Conceptual Change in Physics