WorldWideScience

Sample records for subgrid averaged models

  1. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
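    As a rough sketch of the decomposition described above (standard mixed-model notation assumed here, not taken from the paper itself), the subgrid-scale stress is split as

        \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j = L^m_{ij} + C_{ij} + R_{ij},
        \qquad L^m_{ij} = \overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i \bar{\bar{u}}_j,
        \qquad C_{ij} + R_{ij} \approx -2\, C_g \Delta^2 |\bar{S}| \bar{S}_{ij},

    where the modified Leonard term L^m_{ij} is computed directly from the resolved field (scale similarity) and the single global coefficient C_g is set by requiring that the volume-averaged subgrid-scale dissipation balance the viscous dissipation, rather than by local averaging or clipping.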

  2. A Lagrangian dynamic subgrid-scale model of turbulence

    Science.gov (United States)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
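    A minimal 1-D sketch of the pathline averaging described above, using the abstract's first-order Euler backtracking and linear interpolation; the relaxation weighting and the periodic domain are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        def lagrangian_average(I_prev, source, u, dt, dx, T):
            """Advance one pathline-averaged quantity by one time step:
            I_new(x) = eps * source(x) + (1 - eps) * I_prev(x - u*dt),
            i.e., exponential relaxation toward 'source' along fluid
            trajectories with timescale T (hypothetical 1-D periodic grid)."""
            n = I_prev.size
            x = np.arange(n) * dx
            x_dep = (x - u * dt) % (n * dx)                     # departure points (first-order Euler)
            I_up = np.interp(x_dep, x, I_prev, period=n * dx)   # linear interpolation in space
            eps = (dt / T) / (1.0 + dt / T)                     # relaxation weight from timescale T
            return eps * source + (1.0 - eps) * I_up

    In the Lagrangian dynamic model, two such averages (of the Germano-identity tensor products) are maintained along pathlines and the model coefficient is taken as their ratio; per the abstract, the averaging timescale is chosen so that the resulting model is purely dissipative.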

  3. Analysis and modeling of subgrid scalar mixing using numerical data

    Science.gov (United States)

    Girimaji, Sharath S.; Zhou, Ye

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze, and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.

  4. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
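    For reference, the dynamic procedure determines the Smagorinsky coefficient from the resolved field via the Germano identity (standard notation; per the abstract, GUST evaluates the coefficient at each nodal location rather than averaging):

        L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j,
        \qquad M_{ij} = 2\bar{\Delta}^2 \left( \widehat{|\bar{S}|\bar{S}_{ij}} - (\hat{\Delta}/\bar{\Delta})^2\, |\hat{\bar{S}}|\, \hat{\bar{S}}_{ij} \right),
        \qquad C_s^2 = \frac{L_{ij} M_{ij}}{M_{ij} M_{ij}},

    where the hat denotes a coarser test filter and the deviatoric part of L_{ij} is implied; this is Lilly's least-squares form, in which averaging of numerator and denominator (over homogeneous directions or pathlines) is the usual stabilization that per-node dynamic evaluation omits.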

  5. High-resolution subgrid models: background, grid generation, and implementation

    Science.gov (United States)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
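    A toy sketch of the core bookkeeping: how a coarse cell's wet volume and wet/partially wet/dry state follow from high-resolution bathymetry samples. The names and the uniform-free-surface-per-cell assumption are ours; Casulli's method embeds this nonlinear volume relation in an implicit free-surface solver:

        import numpy as np

        def cell_volume_and_state(eta, z_b, cell_area):
            """Wet volume of one coarse cell from subgrid bathymetry samples.

            eta       : free-surface elevation in the cell (assumed uniform)
            z_b       : array of high-resolution bed elevations inside the cell
            cell_area : horizontal area of the coarse cell

            Each subgrid pixel contributes depth max(eta - z_b, 0), so the cell
            may be wet, partially wet, or dry -- no drying threshold is needed.
            """
            depth = np.maximum(eta - z_b, 0.0)
            volume = depth.mean() * cell_area        # mass bookkeeping at subgrid accuracy
            wet_fraction = np.count_nonzero(depth) / z_b.size
            state = ("dry" if wet_fraction == 0.0
                     else "wet" if wet_fraction == 1.0
                     else "partially wet")
            return volume, wet_fraction, state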

  6. Enhancing the representation of subgrid land surface characteristics in land surface models

    Directory of Open Access Journals (Sweden)

    Y. Ke

    2013-09-01

    Land surface heterogeneity has long been recognized as important to represent in land surface models. In most existing land surface models, the spatial variability of surface cover is represented as a subgrid composition of multiple surface cover types, although subgrid topography also exerts major controls on surface processes. In this study, we developed a new subgrid classification method (SGC) that accounts for variability of both topography and vegetation cover. Each model grid cell was represented with a variable number of elevation classes, and each elevation class was further described by a variable number of vegetation types, optimized for each model grid given a predetermined total number of land response units (LRUs). The subgrid structure of the Community Land Model (CLM) was used to illustrate the newly developed method in this study. Although the new method increases the computational burden of model simulation compared to the CLM subgrid vegetation representation, it greatly reduces the variation of elevation within each subgrid class and explains at least 80% of the total subgrid variability in plant functional types (PFTs). The new method was also evaluated against two other subgrid methods (SGC1 and SGC2) that assigned fixed numbers of elevation and vegetation classes for each model grid (SGC1: M elevation bands–N PFTs method; SGC2: N PFTs–M elevation bands method). Implemented at five model resolutions (0.1°, 0.25°, 0.5°, 1.0° and 2.0°) with three maximum-allowed total numbers of LRUs (i.e., NLRU of 24, 18 and 12) over North America (NA), the new method yielded a more computationally efficient subgrid representation compared to SGC1 and SGC2, particularly at coarser model resolutions and moderate computational intensity (NLRU = 18). It also explained the most PFT and elevation variability, which is more homogeneously distributed spatially. The SGC method will be implemented in CLM over the NA continent to assess its impacts on

  7. On the TFNS Subgrid Models for Liquid-Fueled Turbulent Combustion

    Science.gov (United States)

    Liu, Nan-Suey; Wey, Thomas

    2014-01-01

    This paper describes the time-filtered Navier-Stokes (TFNS) approach, which is capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber, and two different subgrid models used to emulate the major processes occurring in the turbulence-chemistry interaction. These two subgrid models are termed the LEM-like model and the EUPDF-like (Eulerian probability density function) model, respectively. Two-phase turbulent combustion in a single-element lean-direct-injection (LDI) combustor is calculated by employing the TFNS/LEM-like approach as well as the TFNS/EUPDF-like approach. Results obtained from the TFNS approach employing these two different subgrid models are compared with each other, along with the experimental data, followed by a more detailed comparison between the results of an updated calculation using the TFNS/LEM-like model and the experimental data.

  8. On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models

    Science.gov (United States)

    Jan, A.; Painter, S. L.; Coon, E. T.

    2017-12-01

    Increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling on finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and the associated implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model parameterized by small-scale spatial heterogeneities for use in hyper-resolution models, with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which act as a benchmark) and hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to efficiently move forward to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the
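    For orientation, a common diffusion-wave formulation for overland flow, whose flux and storage terms such a subgrid model would modify, is (standard Manning-type closure; the paper's specific modifications are not reproduced here):

        \frac{\partial h}{\partial t} + \nabla \cdot \mathbf{q} = s,
        \qquad \mathbf{q} = -\frac{h^{5/3}}{n\,\sqrt{|\nabla (h + z)|}}\, \nabla (h + z),

    where h is the ponded depth, z the bed elevation, n the Manning coefficient, and s a source/sink term; microtopography enters by replacing h with effective, subgrid-averaged storage and conveyance depths.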

  9. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitations of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performance. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias in the estimation of precipitation, runoff, evapotranspiration and soil moisture at different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of representing the subgrid variability well. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of their similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent across the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in ungauged basins.
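    The runoff-partitioning idea behind a distributed storage capacity can be sketched as follows. This is a minimal illustration assuming an ordered-filling rule and the names shown; the paper's implementation ties the beta parameters to the topographic index:

        import numpy as np
        from scipy.stats import beta as beta_dist

        def runoff_beta_capacity(P, y0, Wmax, a, b, n=10_000):
            """Saturation-excess runoff when point storage capacities follow
            Wmax * Beta(a, b) within the grid cell.

            Each point stores min(capacity, y), where y is a common tension-water
            level; rainfall P raises y, and points whose capacity is already
            filled shed the excess as runoff (illustrative filling rule)."""
            q = (np.arange(n) + 0.5) / n
            c = Wmax * beta_dist.ppf(q, a, b)      # capacity quantiles over the cell
            W0 = np.minimum(c, y0).mean()          # areal-mean storage before rain
            W1 = np.minimum(c, y0 + P).mean()      # areal-mean storage after rain
            return P - (W1 - W0)                   # saturation-excess runoff

        # e.g. runoff_beta_capacity(P=30.0, y0=60.0, Wmax=120.0, a=2.0, b=3.0)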

  10. Structure-Preserving Variational Multiscale Modeling of Turbulent Incompressible Flow with Subgrid Vortices

    Science.gov (United States)

    Evans, John; Coley, Christopher; Aronson, Ryan; Nelson, Corey

    2017-11-01

    In this talk, a large eddy simulation methodology for turbulent incompressible flow will be presented which combines the best features of divergence-conforming discretizations and the residual-based variational multiscale approach to large eddy simulation. In this method, the resolved motion is represented using a divergence-conforming discretization, that is, a discretization that preserves the incompressibility constraint in a pointwise manner, and the unresolved fluid motion is explicitly modeled by subgrid vortices that lie within individual grid cells. The evolution of the subgrid vortices is governed by dynamical model equations driven by the residual of the resolved motion. Consequently, the subgrid vortices appropriately vanish for laminar flow and fully resolved turbulent flow. As the resolved velocity field and subgrid vortices are both divergence-free, the methodology conserves mass in a pointwise sense and admits discrete balance laws for energy, enstrophy, and helicity. Numerical results demonstrate the methodology yields improved results versus state-of-the-art eddy viscosity models in the context of transitional, wall-bounded, and rotational flow when a divergence-conforming B-spline discretization is utilized to represent the resolved motion.

  11. Unsteady Flame Embedding (UFE) Subgrid Model for Turbulent Premixed Combustion Simulations

    KAUST Repository

    El-Asrag, Hossam

    2010-01-04

    We present a formulation for an unsteady subgrid model for premixed combustion in the flamelet regime. Since chemistry occurs at the unresolvable scales, it is necessary to introduce a subgrid model that accounts for the multi-scale nature of the problem using the information available on the resolved scales. Most current models are based on the laminar flamelet concept and often neglect unsteady effects. The proposed model's primary objective is to encompass many of the unsteady features and history effects of flame/turbulence interactions. In addition, it provides a dynamic and accurate approach for computing the subgrid flame propagation velocity. The unsteady flame embedding approach (UFE) treats the flame as an ensemble of locally one-dimensional flames. A set of elemental one-dimensional flames is used to describe the turbulent flame structure at the subgrid level. The stretched flame calculations are performed on the stagnation line of a strained flame using the unsteady filtered strain rate computed from the resolved grid. The flame iso-surface is tracked using an accurate high-order level set formulation to propagate the flame interface at the coarse resolution with minimum numerical diffusion. In this paper the solver and the model components are introduced and used to investigate two unsteady flames with different Lewis numbers in the thin reaction zone regime. The results show that the UFE model captures the unsteady flame-turbulence interactions and the flame propagation speed reasonably well. A higher propagation speed is observed for the lower-than-unity Lewis number flame because of the impact of differential diffusion.

  12. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, Ye

    1991-01-01

    This work addresses advancing the knowledge and understanding of turbulence theory. Specific problems include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion, which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  13. Dynamic subgrid scale model used in a deep bundle turbulence prediction using the large eddy simulation method

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    1996-01-01

    Turbulence is one of the most commonly occurring phenomena of engineering interest in the field of fluid mechanics. Since most flows are turbulent, there is a significant payoff for improved predictive models of turbulence. One area of concern is the turbulent buffeting forces experienced by the tubes in steam generators of nuclear power plants. Although the Navier-Stokes equations are able to describe turbulent flow fields, the large number of scales of turbulence limits practical flow field calculations with current computing power. The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (Smagorinsky, 1963) (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.

  14. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
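    The combination described above, a dissipative eddy-viscosity term plus a nondissipative transport term, commonly takes the following schematic form (a template with assumed notation, not the paper's exact model):

        \tau_{ij}^{\mathrm{mod}} = -2\,\nu_e \bar{S}_{ij} + \mu_e \left( \bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj} \right),

    where \bar{S} and \bar{\Omega} are the resolved strain-rate and rotation-rate tensors. The commutator term is traceless and produces no net subgrid dissipation, since (\bar{S}\bar{\Omega} - \bar{\Omega}\bar{S})_{ij}\bar{S}_{ij} = 0 by the cyclic property of the trace, which is what allows it to model energy transport without altering the dissipation budget.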

  15. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004), in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.

  16. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    Science.gov (United States)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solver and grid, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
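    A 1-D sketch of the elliptic-differential-filter idea described above. The FFT solve is standard; the energy-matching rule below is a stand-in for the paper's dynamic consistency constraint, and all names are ours:

        import numpy as np
        from scipy.optimize import brentq

        def differential_filter(u, dx, delta):
            """Apply the elliptic differential filter, solving
            (1 - delta^2 d^2/dx^2) u_f = u on a 1-D periodic grid via FFT
            (the paper works in 3-D and on general grids)."""
            k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
            return np.real(np.fft.ifft(np.fft.fft(u) / (1.0 + (delta * k) ** 2)))

        def subgrid_velocity(u, dx, k_sgs_target):
            """Estimate the subgrid velocity as u - u_f, choosing the filter
            width delta so the estimate's kinetic energy matches a target SGS
            energy. Assumes k_sgs_target is below the resolved kinetic energy
            so that a root exists."""
            residual = lambda d: (0.5 * np.mean((u - differential_filter(u, dx, d)) ** 2)
                                  - k_sgs_target)
            delta = brentq(residual, 1e-6 * dx, 1e3 * dx)   # bracketing root find
            return u - differential_filter(u, dx, delta), delta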

  17. Subgrid Modeling of AGN-driven Turbulence in Galaxy Clusters

    Science.gov (United States)

    Scannapieco, Evan; Brüggen, Marcus

    2008-10-01

    Hot, underdense bubbles powered by active galactic nuclei (AGNs) are likely to play a key role in halting catastrophic cooling in the centers of cool-core galaxy clusters. We present three-dimensional simulations that capture the evolution of such bubbles, using an adaptive mesh hydrodynamic code, FLASH3, to which we have added a subgrid model of turbulence and mixing. While pure hydro simulations indicate that AGN bubbles are disrupted into resolution-dependent pockets of underdense gas, proper modeling of subgrid turbulence indicates that this is a poor approximation to a turbulent cascade that continues far beyond the resolution limit. Instead, Rayleigh-Taylor instabilities act to effectively mix the heated region with its surroundings, while at the same time preserving it as a coherent structure, consistent with observations. Thus, bubbles are transformed into hot clouds of mixed material as they move outward in the hydrostatic intracluster medium (ICM), much as large airbursts lead to a distinctive "mushroom cloud" structure as they rise in the hydrostatic atmosphere of Earth. Properly capturing the evolution of such clouds has important implications for many ICM properties. In particular, it significantly changes the impact of AGNs on the distribution of entropy and metals in cool-core clusters such as Perseus.

  18. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  19. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

    A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but the fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from the increasing resolutions of operational forecast models without invoking a more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy makes it possible to include any subgrid-scale variability in the mass-flux parameterization under SCA, including stratiform clouds as well as cold pools in the boundary layer.

    An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of the given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. This formulation, as a result, ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, and leads to a reduction of the grid-size dependence of their performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation, as a result, furthermore includes a self-contained description of subgrid-scale momentum transport.
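    Schematically (notation assumed from the description above, not taken from the paper), the segmentally-constant approximation writes each grid-box average as an area-weighted sum over subcomponents, each evolving with its own advecting velocity:

        \bar{\varphi} = \sum_j \sigma_j \varphi_j, \qquad \sum_j \sigma_j = 1,
        \qquad \frac{\partial (\sigma_j \varphi_j)}{\partial t} + \nabla \cdot (\sigma_j \mathbf{u}_j \varphi_j) = \sigma_j F_j,

    where \sigma_j is the fractional area of subcomponent j (no longer assumed small), \mathbf{u}_j its large-scale subcomponent velocity, and F_j its sources; the multi-component-flow analogy is that each subcomponent is transported by \mathbf{u}_j rather than by the grid-box mean velocity.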

    The main purpose of the present paper

  20. Large eddy simulation of new subgrid scale model for three-dimensional bundle flows

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    2004-01-01

    Fluid-flow-induced vibrations within heat exchangers, which have led to increased inefficiencies and power plant shutdowns, are of great concern due to tube fretting-wear or fatigue failures. Historically, experimental analysis encountered scaling-law and measurement-accuracy problems, at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation, Large Eddy Simulation has been successfully used to simulate turbulent flow by the numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular array flow field was considered, and numerical simulations were performed in two- and three-dimensional fields and compared to experimental findings. Results show good agreement between the numerical findings and the experimental data, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)

  1. Application of a Steady Meandering River with Piers Using a Lattice Boltzmann Sub-Grid Model in Curvilinear Coordinate Grid

    Directory of Open Access Journals (Sweden)

    Liping Chen

    2018-05-01

    A sub-grid multiple relaxation time (MRT) lattice Boltzmann model in curvilinear coordinates is applied to simulate an artificial meandering river. The method is based on the D2Q9 model, and the standard Smagorinsky sub-grid scale (SGS) model is introduced to simulate meandering flows. The interpolation-supplemented lattice Boltzmann method (ISLBM) and the non-equilibrium extrapolation method are used for second-order accuracy and boundary conditions. The proposed model was validated against a meandering channel with a 180° bend and applied to a steady curved river with piers. Excellent agreement between the simulated results and previous computational and experimental data was found, showing that the MRT-LBM (MRT lattice Boltzmann method) coupled with a Smagorinsky sub-grid scale (SGS) model in a curvilinear coordinate grid is capable of simulating practical meandering flows.
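    The standard way the Smagorinsky SGS model enters an LBM solver is through the relaxation time; a minimal sketch in lattice units (BGK form for brevity; in the paper's MRT operator the same viscosity correction enters the shear-related relaxation rates):

        def smagorinsky_tau(tau0, strain_rate_mag, C_s=0.17, delta=1.0):
            """Effective relaxation time with a Smagorinsky eddy viscosity.

            Lattice units are assumed (dx = dt = 1, c_s^2 = 1/3), so the total
            viscosity satisfies nu_total = c_s^2 * (tau_eff - 0.5).
            strain_rate_mag is |S|, obtainable in an LBM code from the
            non-equilibrium momentum flux."""
            cs2 = 1.0 / 3.0
            nu_turb = (C_s * delta) ** 2 * strain_rate_mag   # Smagorinsky eddy viscosity
            return tau0 + nu_turb / cs2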

  2. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence

    International Nuclear Information System (INIS)

    Chumakov, Sergei

    2008-01-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by a mean gradient. The DNS data are used to study the properties of the subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux and the resolved and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than with resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and examine the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. The effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.

  3. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence.

    Science.gov (United States)

    Chumakov, Sergei G

    2008-09-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by a mean gradient. The DNS data are used to study the properties of the subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux and the resolved and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than with resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and examine the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. The effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.

  4. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    International Nuclear Information System (INIS)

    O'Kane, T J; Frederiksen, J S

    2008-01-01

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  5. Subgrid-scale turbulence in shock-boundary layer flows

    Science.gov (United States)

    Jammalamadaka, Avinash; Jaberi, Farhad

    2015-04-01

    Data generated by direct numerical simulation (DNS) for a Mach 2.75 zero-pressure gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and various terms in the compressible filtered Navier-Stokes equations. The numerical method used for DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of SGS stresses and their components, namely the Leonard, Cross and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets for the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of SGS terms in the filtered total energy equation indicates that while each term in this equation is significant by itself, the net contribution of all of them is relatively small. This observation is consistent with our a posteriori analysis.
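    For reference, the decomposition examined above is the classical one (written here with incompressible filtering for brevity; the paper works with the compressible filtered equations). With u_i = \bar{u}_i + u_i',

        \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j = L_{ij} + C_{ij} + R_{ij},
        \qquad L_{ij} = \overline{\bar{u}_i \bar{u}_j} - \bar{u}_i \bar{u}_j,
        \qquad C_{ij} = \overline{\bar{u}_i u_j'} + \overline{u_i' \bar{u}_j},
        \qquad R_{ij} = \overline{u_i' u_j'},

    i.e., the Leonard term involves only resolved quantities, the cross term mixes resolved and subgrid fields, and the Reynolds term is purely subgrid.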

  6. Effect of grid resolution and subgrid assumptions on the model prediction of a reactive buoyant plume under convective conditions

    International Nuclear Information System (INIS)

    Chock, D.P.; Winkler, S.L.; Pu Sun

    2002-01-01

    We have introduced a new and elaborate approach to understand the impact of grid resolution and the subgrid chemistry assumption on the grid-model prediction of species concentrations for a system with highly non-homogeneous chemistry - a reactive buoyant plume immediately downwind of the stack in a convective boundary layer. The Parcel-Grid approach was used to describe both the air parcel turbulent transport and the chemistry. This approach allows an identical transport process for all simulations. It also allows a description of subgrid chemistry. The ambient and plume parcel transport follows the description of Luhar and Britter (Atmos. Environ. 23 (1989) 1911; 26A (1992) 1283). The chemistry follows that of the Carbon-Bond mechanism. Three different grid sizes were considered: fine, medium and coarse, together with three different subgrid chemistry assumptions: micro-scale or individual parcel, tagged-parcel (plume and ambient parcels treated separately), and untagged-parcel (plume and ambient parcels treated indiscriminately). Reducing the subgrid information is not necessarily similar to increasing the model grid size. In our example, increasing the grid size leads to a reduction in the suppression of ozone in the presence of a high-NOx stack plume, and a reduction in the effectiveness of the NOx-inhibition effect. On the other hand, reducing the subgrid information (by using the untagged-parcel assumption) leads to an increase in ozone reduction and an enhancement of the NOx-inhibition effect insofar as the ozone extremum is concerned. (author)

  7. A new subgrid characteristic length for turbulence simulations on anisotropic grids

    Science.gov (United States)

    Trias, F. X.; Gorobets, A.; Silvis, M. H.; Verstappen, R. W. C. P.; Oliva, A.

    2017-11-01

    Direct numerical simulations of the incompressible Navier-Stokes equations are not feasible yet for most practical turbulent flows. Therefore, dynamically less complex mathematical formulations are necessary for coarse-grained simulations. In this regard, eddy-viscosity models for Large-Eddy Simulation (LES) are probably the most popular example thereof. This type of model requires the calculation of a subgrid characteristic length, which is usually associated with the local grid size. For isotropic grids, this is equal to the mesh step. However, for anisotropic or unstructured grids, such as the pancake-like meshes that are often used to resolve near-wall turbulence or shear layers, a consensus on defining the subgrid characteristic length has not been reached yet, despite the fact that it can strongly affect the performance of LES models. In this context, a new definition of the subgrid characteristic length is presented in this work. This flow-dependent length scale is based on the turbulent, or subgrid-scale, stress tensor and its representations on different grids. Its simplicity and mathematical properties suggest that it is a robust definition that minimizes the effects of mesh anisotropies on simulation results. The performance of the proposed subgrid characteristic length is successfully tested for decaying isotropic turbulence and a turbulent channel flow using artificially refined grids. Finally, a simple extension of the method for unstructured meshes is proposed and tested for a turbulent flow around a square cylinder. Comparisons with existing subgrid characteristic length scales show that the proposed definition is much more robust with respect to mesh anisotropies and has a great potential to be used in complex geometries where highly skewed (unstructured) meshes are present.
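    For context, the grid-only definitions that such a flow-dependent length is typically compared against can be computed as follows (standard definitions from the LES literature; the paper's new stress-based length itself is not reproduced here):

        import numpy as np

        def delta_vol(dx, dy, dz):
            """Deardorff cube-root-of-volume length scale."""
            return (dx * dy * dz) ** (1.0 / 3.0)

        def delta_max(dx, dy, dz):
            """Largest-edge length scale."""
            return max(dx, dy, dz)

        def delta_scotti(dx, dy, dz):
            """Scotti et al. (1993) anisotropy correction to the Deardorff
            scale, with aspect ratios a1 = d1/d3, a2 = d2/d3 (d1 <= d2 <= d3)."""
            d1, d2, d3 = sorted((dx, dy, dz))
            a1, a2 = d1 / d3, d2 / d3
            f = np.cosh(np.sqrt((4.0 / 27.0) * (np.log(a1) ** 2
                                                - np.log(a1) * np.log(a2)
                                                + np.log(a2) ** 2)))
            return f * delta_vol(dx, dy, dz)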

  8. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    Energy Technology Data Exchange (ETDEWEB)

    Buschman, Francis X., E-mail: Francis.Buschman@unnpp.gov; Aumiller, David L.

    2017-02-15

    Highlights: • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas. • Identified a pressure effect on the impact of noncondensables to suppress condensation heat transfer not captured in existing data or correlations. • Pure steam data is used to develop a new correlation for condensation heat transfer on subcooled liquid jets. • Noncondensable data used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables. • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet specific closure relationships. - Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. This data extends the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  9. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    International Nuclear Information System (INIS)

    Buschman, Francis X.; Aumiller, David L.

    2017-01-01

    Highlights: • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas. • Identified a pressure effect on the impact of noncondensables to suppress condensation heat transfer not captured in existing data or correlations. • Pure steam data is used to develop a new correlation for condensation heat transfer on subcooled liquid jets. • Noncondensable data used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables. • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet specific closure relationships. - Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. This data extends the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  10. Birefringent dispersive FDTD subgridding scheme

    OpenAIRE

    De Deckere, B; Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2016-01-01

    A novel 2D finite difference time domain (FDTD) subgridding method is proposed, only subject to the Courant limit of the coarse grid. By making μ or ε inside the subgrid dispersive, unconditional stability is induced at the cost of a sparse, implicit set of update equations. By only adding dispersion along preferential directions, it is possible to dramatically reduce the rank of the matrix equation that needs to be solved.

  11. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    Science.gov (United States)

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and van Wachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) ("OpenFOAM," http://www.openfoam.org/) solver, and the results are in line with those obtained with WenoHemo.
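    For reference, the three eddy-viscosity closures under evaluation have the closed forms below (standard notation from the cited papers):

        \nu_t^{\mathrm{Smag}} = (C_s \Delta)^2 |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},

        \nu_t^{\mathrm{Vreman}} = c\,\sqrt{\frac{B_\beta}{\alpha_{ij}\alpha_{ij}}}, \qquad \alpha_{ij} = \frac{\partial \bar{u}_j}{\partial x_i}, \quad \beta_{ij} = \Delta_m^2\, \alpha_{mi}\alpha_{mj},
        \quad B_\beta = \beta_{11}\beta_{22} - \beta_{12}^2 + \beta_{11}\beta_{33} - \beta_{13}^2 + \beta_{22}\beta_{33} - \beta_{23}^2,

        \nu_t^{\sigma} = (C_\sigma \Delta)^2\, \frac{\sigma_3(\sigma_1 - \sigma_2)(\sigma_2 - \sigma_3)}{\sigma_1^2},

    where \sigma_1 \ge \sigma_2 \ge \sigma_3 are the singular values of the resolved velocity gradient tensor. Unlike the Smagorinsky form, the Vreman and Sigma viscosities are constructed to vanish in simple laminar shear.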

  12. Resolving terrestrial ecosystem processes along a subgrid topographic gradient for an earth-system model

    Science.gov (United States)

    Subin, Z M; Milly, Paul C.D.; Sulman, B N; Malyshev, Sergey; Shevliakova, E

    2014-01-01

    Soil moisture is a crucial control on surface water and energy fluxes, vegetation, and soil carbon cycling. Earth-system models (ESMs) generally represent an areal-average soil-moisture state in gridcells at scales of 50–200 km and as a result are not able to capture the nonlinear effects of topographically-controlled subgrid heterogeneity in soil moisture, in particular where wetlands are present. We addressed this deficiency by building a subgrid representation of hillslope-scale topographic gradients, TiHy (Tiled-hillslope Hydrology), into the Geophysical Fluid Dynamics Laboratory (GFDL) land model (LM3). LM3-TiHy models one or more representative hillslope geometries for each gridcell by discretizing them into land model tiles hydrologically coupled along an upland-to-lowland gradient. Each tile has its own surface fluxes, vegetation, and vertically-resolved state variables for soil physics and biogeochemistry. LM3-TiHy simulates a gradient in soil moisture and water-table depth between uplands and lowlands in each gridcell. Three hillslope hydrological regimes appear in non-permafrost regions in the model: wet and poorly-drained, wet and well-drained, and dry; with large, small, and zero wetland area predicted, respectively. Compared to the untiled LM3 in stand-alone experiments, LM3-TiHy simulates similar surface energy and water fluxes in the gridcell-mean. However, in marginally wet regions around the globe, LM3-TiHy simulates shallow groundwater in lowlands, leading to higher evapotranspiration, lower surface temperature, and higher leaf area compared to uplands in the same gridcells. Moreover, more than four-fold larger soil carbon concentrations are simulated globally in lowlands as compared with uplands. We compared water-table depths to those simulated by a recent global model-observational synthesis, and we compared wetland and inundated areas diagnosed from the model to observational datasets. The comparisons demonstrate that LM3-TiHy has the

  13. Subgrid models for mass and thermal diffusion in turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, David H [Los Alamos National Laboratory]; Lim, Hyunkyung [STONY BROOK UNIV]; Li, Xiao-Lin [STONY BROOK UNIV]; Glimm, James G [STONY BROOK UNIV]

    2008-01-01

    We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, and 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of front tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without

  14. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, cloud and precipitation formation are still subgrid-scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid-scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. However, precipitation of rainwater raises additional concerns relative to the overlap of the cloud and rain fractions. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
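    The rain-fraction diagnosis described above can be sketched as follows. The gamma PDF, parameter names, and threshold rule here are illustrative assumptions; the paper prescribes its own PDF of cloud water variability, informed by the DYCOMS-II and RICO observations:

        from scipy.stats import gamma

        def rain_fraction(qc_mean, nu, qc_crit):
            """Precipitating fraction of a grid box, assuming subgrid cloud
            water qc follows a gamma PDF with mean qc_mean and shape nu, and
            that droplet collection is active where qc exceeds qc_crit."""
            scale = qc_mean / nu                         # gamma scale so mean = nu * scale
            return gamma.sf(qc_crit, a=nu, scale=scale)  # P(qc > qc_crit)

        # e.g. rain_fraction(qc_mean=0.3e-3, nu=2.0, qc_crit=0.5e-3)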

  15. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    Science.gov (United States)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

    The development of first-principle-based constitutive equations for Eulerian-Eulerian CFD modeling of annular flow is a major priority for extending the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. Although many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  16. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also the applied filtering process alone can be a seed of this numerical instability. An investigation of the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.
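    To make the object of discussion concrete: for the Gaussian filter, a Taylor expansion of the filtered product gives the tensor-diffusivity (gradient) approximation (standard results; the paper's channel-flow derivation itself is not reproduced here):

        G(x) = \sqrt{\frac{6}{\pi \Delta^2}}\; e^{-6 x^2/\Delta^2},
        \qquad \overline{f g} \approx \bar{f}\,\bar{g} + \frac{\Delta^2}{12}\, \frac{\partial \bar{f}}{\partial x_k}\, \frac{\partial \bar{g}}{\partial x_k},

    so the subgrid stress inherits terms of the form (\Delta^2/12)\, \partial_k \bar{u}_i\, \partial_k \bar{u}_j, whose cross-derivative contributions to the filtered advection term are the type of term whose stability is analyzed above.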

  17. Sensitivity of regional meteorology and atmospheric composition during the DISCOVER-AQ period to subgrid-scale cloud-radiation interactions

    Science.gov (United States)

    Huang, X.; Allen, D. J.; Herwehe, J. A.; Alapaty, K. V.; Loughner, C.; Pickering, K. E.

    2014-12-01

    Subgrid-scale cloudiness directly influences global and regional atmospheric radiation budgets by attenuating shortwave radiation, leading to suppressed convection, decreased surface precipitation, and changes in other meteorological parameters. We use the latest version of WRF (v3.6, Apr 2014), which incorporates the Kain-Fritsch (KF) convective parameterization to provide subgrid-scale cloud fraction and condensate feedback to the Rapid Radiative Transfer Model for GCMs (RRTMG) shortwave and longwave radiation schemes. We apply the KF scheme to simulate the DISCOVER-AQ Maryland field campaign (July 2011) and compare the simulated meteorological parameters against a control run that does not include subgrid cloudiness. Furthermore, we examine the chemical impact of subgrid cloudiness using a regional chemical transport model (CMAQ). Several meteorological parameters influenced by subgrid cumulus clouds are very important to air quality modeling, including changes in surface temperature that impact biogenic emission rates; changes in PBL depth that affect pollutant concentrations; and changes in surface humidity levels that impact peroxide-related reactions. Additionally, subgrid cumulus clouds directly impact air pollutant concentrations by modulating photochemistry and vertical mixing. Finally, we compare with DISCOVER-AQ flight observation data and evaluate how well this off-line CMAQ simulation driven by WRF with the KF scheme simulates the effects of regional convection on atmospheric composition.

  18. An improved anisotropy-resolving subgrid-scale model for flows in laminar–turbulent transition region

    International Nuclear Information System (INIS)

    Inagaki, Masahide; Abe, Ken-ichi

    2017-01-01

    Highlights: • An anisotropy-resolving subgrid-scale model, covering a wide range of grid resolutions, is improved. • The new model enhances its applicability to flows in the laminar-turbulent transition region. • A mixed-timescale subgrid-scale model is used as the eddy viscosity model. • The proposed model successfully predicts the channel flows at transitional Reynolds numbers. • The influence of the definition of the grid-filter width is also investigated. - Abstract: Some types of mixed subgrid-scale (SGS) models combining an isotropic eddy-viscosity model and a scale-similarity model can be used to effectively improve the accuracy of large eddy simulation (LES) in predicting wall turbulence. Abe (2013) has recently proposed a stabilized mixed model that maintains its computational stability through a unique procedure that prevents the energy transfer between the grid-scale (GS) and SGS components induced by the scale-similarity term. At the same time, since this model can successfully predict the anisotropy of the SGS stress, the predictive performance, particularly at coarse grid resolutions, is remarkably improved in comparison with other mixed models. However, since the stabilized anisotropy-resolving SGS model includes a transport equation for the SGS turbulence energy, k_SGS, containing a production term proportional to the square root of k_SGS, its applicability to flows with both laminar and turbulent regions is limited. This is because such a production term causes k_SGS to self-reproduce. Consequently, the laminar–turbulent transition region predicted by this model depends on the inflow or initial condition of k_SGS. To resolve these issues, in the present study, the mixed-timescale (MTS) SGS model proposed by Inagaki et al. (2005) is introduced into the stabilized mixed model as the isotropic eddy-viscosity part and the production term in the k_SGS transport equation. In the MTS model, the SGS turbulence energy, k_es, estimated by

  19. Smaller global and regional carbon emissions from gross land use change when considering sub-grid secondary land cohorts in a global dynamic vegetation model

    Science.gov (United States)

    Yue, Chao; Ciais, Philippe; Li, Wei

    2018-02-01

    Several modelling studies reported elevated carbon emissions from historical land use change (ELUC) by including bidirectional transitions on the sub-grid scale (termed gross land use change), dominated by shifting cultivation and other land turnover processes. However, most dynamic global vegetation models (DGVMs) that have implemented gross land use change either do not account for sub-grid secondary lands, or often have only a single secondary land tile over a model grid cell and thus cannot account for various rotation lengths in shifting cultivation and associated secondary forest age dynamics. Therefore, it remains uncertain how realistic the past ELUC estimations are and how estimated ELUC will differ between the two modelling approaches with and without multiple sub-grid secondary land cohorts - in particular secondary forest cohorts. Here we investigated historical ELUC over 1501-2005 by including sub-grid forest age dynamics in a DGVM. We run two simulations, one with no secondary forests (Sageless) and the other with sub-grid secondary forests of six age classes whose demography is driven by historical land use change (Sage). Estimated global ELUC for 1501-2005 is 176 Pg C in Sage compared to 197 Pg C in Sageless. The lower ELUC values in Sage arise mainly from shifting cultivation in the tropics under an assumed constant rotation length of 15 years, being 27 Pg C in Sage in contrast to 46 Pg C in Sageless. Estimated cumulative ELUC values from wood harvest in the Sage simulation (31 Pg C) are, however, slightly higher than in Sageless (27 Pg C) when the model is forced by reconstructed harvested areas, because the secondary forests targeted in Sage for harvest priority are insufficient to meet the prescribed harvest area, leading to wood harvest being dominated by old primary forests. An alternative approach to quantify wood harvest ELUC, i.e. always harvesting the close-to-mature forests in both Sageless and Sage, yields similar values of 33 Pg C by both

  20. Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations

    KAUST Repository

    Iliev, Oleg P.

    2010-01-01

    We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flow in highly porous, heterogeneous media with complex topology of the heterogeneities. We make use of the recently proposed discontinuous Galerkin FEM for Stokes equations by Wang and Ye in [12] and the concept of subgrid approximation developed for Darcy's equations by Arbogast in [4]. In order to reduce the error along the coarse-grid interfaces we have added an alternating Schwarz iteration using patches around the coarse-grid boundaries. We have implemented the subgrid method using the Deal.II FEM library, [7], and we present the computational results for a number of model problems.
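
    For orientation, Brinkman's equations in their standard form read (the notation here is an assumption: velocity u, pressure p, viscosity μ, permeability K):

        -\mu\,\Delta\mathbf{u} + \frac{\mu}{K}\,\mathbf{u} + \nabla p = \mathbf{f},
        \qquad \nabla\cdot\mathbf{u} = 0

    Piece-wise constant K encodes the heterogeneities: large K recovers Stokes-like flow in the open regions, while small K makes the Darcy drag term dominate in the porous matrix.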

  1. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs), even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
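
    A toy version of the subcolumn idea (the distribution shape, activation law, and parameter values are illustrative assumptions, not the ECHAM5.5-HAM2 formulation): draw a vertical velocity for each stochastic subcolumn, activate droplets only in updrafts, and average over subcolumns.

        import numpy as np

        rng = np.random.default_rng(0)

        def subcolumn_cdnc(w_mean, w_std, n_cols=100, n0=500.0, k=0.5):
            """Sample vertical velocity w for each stochastic subcolumn and
            compute CDNC [cm^-3] with an assumed activation law N = n0 * w^k
            in updrafts (w > 0); downdrafts activate no droplets."""
            w = rng.normal(w_mean, w_std, n_cols)   # per-subcolumn velocity
            cdnc = np.where(w > 0.0, n0 * np.abs(w) ** k, 0.0)
            return cdnc.mean()                      # grid-box mean CDNC

        # Grid-box mean w = 0.1 m/s with 0.5 m/s subgrid variability
        print(subcolumn_cdnc(0.1, 0.5))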

  2. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    Science.gov (United States)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
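
    The quoted scaling is easy to make concrete (a back-of-the-envelope sketch using the Reynolds numbers cited in the abstract):

        # DNS needs N ~ Re^(9/4) spatial grid points.
        for name, re in [("planetary boundary layer", 1e8), ("solar interior", 1e14)]:
            n = re ** (9.0 / 4.0)
            print(f"{name}: Re = {re:.0e} -> N ~ {n:.1e} grid points")
        # -> ~1e18 and ~3e31 points, far beyond any computer; hence LES+SGS.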

  3. An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling

    Directory of Open Access Journals (Sweden)

    Y. Qian

    2010-07-01

    Full Text Available One fundamental property and limitation of grid-based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV) of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km².

    Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed compared to polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases
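
    The SGV diagnostic itself is straightforward to compute (a sketch assuming SGV is measured as the standard deviation of fine-grid values inside each coarse cell; the 25:1 grid ratio follows the 3 km and 75 km spacings in the abstract):

        import numpy as np

        def subgrid_variability(fine_field, block=25):
            """Aggregate a 2-D field on a fine grid (e.g. 3 km) into coarse cells
            of block x block fine cells (25 x 3 km = 75 km) and return the
            coarse-cell means and standard deviations (the SGV)."""
            ny, nx = fine_field.shape
            f = fine_field[: ny - ny % block, : nx - nx % block]
            blocks = f.reshape(f.shape[0] // block, block, f.shape[1] // block, block)
            return blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))

        # Synthetic tracer field, 300 x 300 cells at 3 km spacing
        field = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=(300, 300))
        mean, sgv = subgrid_variability(field)
        print(mean.shape, sgv.shape)   # -> (12, 12) coarse cells each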

  4. Study of subgrid-scale velocity models for reacting and nonreacting flows

    Science.gov (United States)

    Langella, I.; Doan, N. A. K.; Swaminathan, N.; Pope, S. B.

    2018-05-01

    A study is conducted to identify advantages and limitations of existing large-eddy simulation (LES) closures for the subgrid-scale (SGS) kinetic energy using a database of direct numerical simulations (DNS). The analysis is conducted for both reacting and nonreacting flows, different turbulence conditions, and various filter sizes. A model based on dissipation and diffusion of momentum (the LD-D model) is proposed in this paper, informed by the observed behavior of four existing models. Our model shows the best overall agreement with DNS statistics. Two main investigations are conducted for both reacting and nonreacting flows: (i) an investigation of the robustness of the model constants, showing that commonly used constants lead to a severe underestimation of the SGS kinetic energy and highlighting their dependence on Reynolds number and filter size; and (ii) an investigation of the statistical behavior of the SGS closures, which suggests that the dissipation of momentum is the key parameter to be considered in such closures and that the dilatation effect is important and must be captured correctly in reacting flows. Additional properties of SGS kinetic energy modeling are identified and discussed.
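
    For concreteness, one widely used algebraic closure of this family is the Yoshizawa-type expression sketched below (a generic illustration; the LD-D model proposed in the paper is not reproduced here, and C_I is exactly the kind of constant whose commonly used value the study finds to underestimate the SGS kinetic energy):

        import numpy as np

        def ksgs_yoshizawa(grad_u, delta, c_i=0.1):
            """Algebraic SGS kinetic energy k_sgs = C_I * (delta * |S|)^2, with
            |S| = sqrt(2 S_ij S_ij) the resolved strain-rate magnitude.
            grad_u: 3x3 velocity-gradient tensor at a point; delta: filter width."""
            s = 0.5 * (grad_u + grad_u.T)
            s_mag = np.sqrt(2.0 * np.sum(s * s))
            return c_i * (delta * s_mag) ** 2

        # Pure shear du/dy = 1 s^-1 with a 1 cm filter width
        print(ksgs_yoshizawa(np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]]),
                             delta=0.01))      # -> 1e-5 m^2/s^2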

  5. Subgrid models for mass and thermal diffusion in turbulent mixing

    International Nuclear Information System (INIS)

    Lim, H; Yu, Y; Glimm, J; Li, X-L; Sharp, D H

    2010-01-01

    We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.

  6. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling procedure, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on general guidance of the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.

  7. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Science.gov (United States)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies across spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes these problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF
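
    To make the WRC/HCC vocabulary concrete, here is a minimal sketch of the Mualem-van Genuchten functions referred to above (standard forms; the parameter values are illustrative, not taken from the data set):

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            """Water retention curve theta(h): volumetric water content as a
            function of suction head h [cm] (van Genuchten, m = 1 - 1/n)."""
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
            return theta_r + (theta_s - theta_r) * se

        def mualem_k(h, ks, alpha, n, l=0.5):
            """Unsaturated hydraulic conductivity K(h) from the Mualem model."""
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
            return ks * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # Illustrative loam-like parameters, h = 100 cm suction
        print(van_genuchten_theta(100.0, 0.05, 0.43, 0.036, 1.56))
        print(mualem_k(100.0, 25.0, 0.036, 1.56))   # Ks in cm/day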

  8. Optimal 25-Point Finite-Difference Subgridding Techniques for the 2D Helmholtz Equation

    Directory of Open Access Journals (Sweden)

    Tingting Wu

    2016-01-01

    Full Text Available We present an optimal 25-point finite-difference subgridding scheme for solving the 2D Helmholtz equation with perfectly matched layer (PML. This scheme is second order in accuracy and pointwise consistent with the equation. Subgrids are used to discretize the computational domain, including the interior domain and the PML. For the transitional node in the interior domain, the finite difference equation is formulated with ghost nodes, and its weight parameters are chosen by a refined choice strategy based on minimizing the numerical dispersion. Numerical experiments are given to illustrate that the newly proposed schemes can produce highly accurate seismic modeling results with enhanced efficiency.

  9. A moving subgrid model for simulation of reflood heat transfer

    International Nuclear Information System (INIS)

    Frepoli, Cesare; Mahaffy, John H.; Hochreiter, Lawrence E.

    2003-01-01

    In the quench front and froth region the thermal-hydraulic parameters experience a sharp axial variation. The heat transfer regime changes from single-phase liquid, to nucleate boiling, to transition boiling and finally to film boiling over a small axial distance. One of the major limitations of all the current best-estimate codes is that a relatively coarse mesh is used to solve the complex fluid flow and heat transfer problem in the proximity of the quench front during reflood. The use of a fine axial mesh for the entire core becomes prohibitive because of the large computational costs involved. Moreover, as the mesh size decreases, the standard numerical methods based on a semi-implicit scheme tend to become unstable. A subgrid model was developed to resolve the complex thermal-hydraulic problem in the quench front and froth region. This model is a Fine Hydraulic Moving Grid (FHMG) that overlies a coarse Eulerian mesh in the proximity of the quench front and froth region. The fine mesh follows the quench front as it advances in the core while the rods cool and quench. The FHMG software package was developed and implemented into the COBRA-TF computer code. This paper presents the model and discusses preliminary results obtained with the COBRA-TF/FHMG computer code.

  10. A regional scale model for ozone in the United States with subgrid representation of urban and power plant plumes

    International Nuclear Information System (INIS)

    Sillman, S.; Logan, J.A.; Wofsy, S.C.

    1990-01-01

    A new approach to modeling regional air chemistry is presented for application to industrialized regions such as the continental US. Rural chemistry and transport are simulated using a coarse grid, while chemistry and transport in urban and power plant plumes are represented by detailed subgrid models. Emissions from urban and power plant sources are processed in generalized plumes where chemistry and dilution proceed for 8-12 hours before mixing with air in a large resolution element. A realistic fraction of pollutants reacts under high-NO_x conditions, and NO_x is removed significantly before dispersal. Results from this model are compared with results from grid models that do not distinguish plumes and with observational data defining regional ozone distributions. Grid models with coarse resolution are found to artificially disperse NO_x over rural areas, therefore overestimating rural levels of both NO_x and O_3. Regional net ozone production is too high in coarse grid models, because production of O_3 is more efficient per molecule of NO_x in the low-concentration regime of rural areas than in heavily polluted plumes from major emission sources. Ozone levels simulated by this model are shown to agree with observations in urban plumes and in rural regions. The model reproduces accurately the average regional and peak ozone concentrations observed during a 4-day ozone episode. Computational costs for the model are reduced 25- to 100-fold as compared to fine-mesh models.

  11. Simulations of mixing in Inertial Confinement Fusion with front tracking and sub-grid scale models

    Science.gov (United States)

    Rana, Verinder; Lim, Hyunkyung; Melvin, Jeremy; Cheng, Baolian; Glimm, James; Sharp, David

    2015-11-01

    We present two related results. The first discusses the Richtmyer-Meshkov (RMI) and Rayleigh-Taylor (RTI) instabilities and their evolution in Inertial Confinement Fusion simulations. We show the evolution of the RMI into the late-time RTI under transport effects and front tracking. The sub-grid scale models help capture the interaction of turbulence with diffusive processes. The second assesses the effects of concentration on the physics model and examines the mixing properties in the low Reynolds number hot spot. We discuss the effect of concentration on the Schmidt number. The simulation results are produced using the University of Chicago code FLASH and Stony Brook University's front tracking algorithm.

  12. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    Science.gov (United States)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  13. Rotating Turbulent Flow Simulation with LES and Vreman Subgrid-Scale Models in Complex Geometries

    Directory of Open Access Journals (Sweden)

    Tao Guo

    2014-07-01

    Full Text Available The large eddy simulation (LES) method based on the Vreman subgrid-scale model and the SIMPLEC algorithm was applied to accurately capture the flow in the Francis turbine passage under a small guide vane opening condition. The proposed methodology captures the flow structure well and overcomes the excessive dissipation of standard eddy-viscosity models. Distributions of pressure, velocity, and vorticity, as well as some special flow structures in the guide vane near-wall zones and the blade passage, were obtained. The results show that the tangential velocity component of the fluid dominates under the small opening condition. This situation aggravates the impact between the wake vortices shed from the guide vanes. The critical influence of the spiral vortex in the blade passage and of the nonuniform flow around the guide vanes, combined with the transmission of stress waves, on the balance of the unit has been confirmed.
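
    For reference, the Vreman eddy viscosity at a point can be sketched as follows (the model form is the standard one of Vreman (2004); the input tensor and the constant used here are illustrative):

        import numpy as np

        def vreman_nu_t(grad_u, delta, c=0.025):
            """Vreman subgrid eddy viscosity nu_t = c * sqrt(B_beta / (a_ij a_ij)),
            where a_ij = du_j/dx_i, beta = delta^2 * a^T a, and B_beta is the sum
            of the 2x2 principal minors of beta. Vanishes in pure laminar shear."""
            a = grad_u.T                      # a_ij = d u_j / d x_i
            b = delta ** 2 * (a.T @ a)        # beta_ij = delta^2 * a_mi a_mj
            bb = (b[0, 0] * b[1, 1] - b[0, 1] ** 2
                  + b[0, 0] * b[2, 2] - b[0, 2] ** 2
                  + b[1, 1] * b[2, 2] - b[1, 2] ** 2)
            aa = np.sum(a * a)
            return 0.0 if aa < 1e-30 else c * np.sqrt(max(bb, 0.0) / aa)

        # Pure shear du/dy = 1: B_beta = 0, so nu_t = 0 as intended
        print(vreman_nu_t(np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]]), 0.01))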

  14. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column

    International Nuclear Information System (INIS)

    Magdeleine, S.

    2009-11-01

    This work is part of a long-term project that aims at using two-phase Direct Numerical Simulation (DNS) to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be subdivided into two parts. Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two-phase flows, called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code Trio_U, a set of various cases is used to validate this model. Then, specific tests are made in order to optimize the model for our particular bubbly flows. We thus showed the capacity of the ISS model to produce an inexpensive yet pertinent solution. Secondly, we use the ISS model to perform simulations of bubbly flows in a column. Results of these simulations are averaged to obtain quantities that appear in the mass, momentum and interfacial area density balances. We thus proceeded to an a priori test of a complete one-dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the hypothesis of a single pressure, which is often made in averaged models such as CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. By contrast, without a polydisperse model, the drag is over-predicted and the uncorrelated A_i flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  15. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires cumulus schemes to adapt to higher resolutions than those they were originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
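
    The temporal-averaging ingredient can be sketched very simply (an illustrative exponential running mean; the actual window and state handling of the ZM implementation are not specified here):

        def smoothed_cape_tendency(prev_avg, new_tendency, dt, tau=3600.0):
            """Exponential running mean of the large-scale CAPE tendency:
            relaxes the instantaneous tendency toward a tau-long history so the
            closure responds to slowly varying forcing, not grid-scale noise.
            prev_avg, new_tendency in J/kg/s; dt, tau in s."""
            w = dt / (dt + tau)
            return (1.0 - w) * prev_avg + w * new_tendency

        avg = 0.0
        for tend in [0.02, 0.05, -0.01, 0.03]:   # successive model steps, dt = 600 s
            avg = smoothed_cape_tendency(avg, tend, dt=600.0)
        print(avg)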

  16. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
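
    The kind of a priori test that spatial box-filtering enables can be sketched in a few lines (a one-dimensional illustration with synthetic data; the database's own access calls are not reproduced here):

        import numpy as np

        def box_filter(f, width):
            """Top-hat (box) filter of odd width via a moving average (periodic)."""
            kernel = np.ones(width) / width
            return np.convolve(np.tile(f, 3), kernel, mode="same")[f.size:2 * f.size]

        def subgrid_stress(u, v, width=9):
            """tau = bar(u v) - bar(u) bar(v): the stress an SGS model must supply."""
            return box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)

        u = np.random.default_rng(2).standard_normal(1024)
        print(subgrid_stress(u, u).mean())   # mean SGS normal stress of a sample signal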

  17. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LES to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  18. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved-scale simulations with the volume fractions (cf, af mix). In unresolved-scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i − u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration' mixing with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved-scale simulations and for unresolved-scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.

  19. Quadratic inner element subgrid scale discretisation of the Boltzmann transport equation

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.; Eaton, M.D.; Warner, P.

    2012-01-01

    This paper explores the application of the inner element subgrid scale method to the Boltzmann transport equation using quadratic basis functions. Previously, only linear basis functions for both the coarse scale and the fine scale were considered. This paper, therefore, analyses the advantages of using different coarse and subgrid basis functions for increasing the accuracy of the subgrid scale method. The transport of neutral particle radiation may be described by the Boltzmann transport equation (BTE) which, due to its seven-dimensional phase space, is computationally expensive to resolve. Multi-scale methods offer an approach to efficiently resolve the spatial dimensions of the BTE by separating the solution into its coarse and fine scales and formulating a solution whereby only the computationally efficient coarse scales need to be solved. In previous work an inner element subgrid scale method was developed that applied linear continuous and discontinuous finite element methods to represent the solution's coarse and fine scale components. This approach was shown to generate efficient and stable solutions, and so this article continues its development by formulating higher order quadratic finite element expansions over the continuous and discontinuous scales. Here it is shown that a solution's convergence can be improved significantly using higher order basis functions. Furthermore, by using linear finite elements to represent coarse scales in combination with quadratic fine scales, convergence can also be improved with only a modest increase in computational expense.

  20. Autonomous Operation of Hybrid Microgrid With AC and DC Subgrids

    DEFF Research Database (Denmark)

    Chiang Loh, Poh; Li, Ding; Kang Chai, Yi

    2013-01-01

This paper investigates power-sharing issues of an autonomous hybrid microgrid. Unlike existing microgrids which are purely ac, the hybrid microgrid studied here comprises dc and ac subgrids interconnected by power electronic interfaces. The main challenge is to manage power flows among all sources distributed throughout the two types of subgrids, which is certainly tougher than previous efforts developed for only an ac or dc microgrid. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources, and interlinking converters. Suitable control and normalization schemes are now developed for controlling them, with the overall hybrid microgrid performance already verified in simulation and experiment.

  1. Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1

    Science.gov (United States)

    Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong

    2017-05-01

    Turbulent drag caused by subgrid-scale orography (form drag) has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography, compared to the simulation without subgrid orographic effects. The TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed given by the BBW04 scheme agrees better with the ERA-Interim reanalysis and, as a direct method, is more sensitive to complex orography. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.

  2. Final Report. Evaluating the Climate Sensitivity of Dissipative Subgrid-Scale Mixing Processes and Variable Resolution in NCAR's Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-12-14

    The goals of this project were to (1) assess and quantify the sensitivity and scale-dependency of unresolved subgrid-scale mixing processes in NCAR’s Community Earth System Model (CESM), and (2) to improve the accuracy and skill of forthcoming CESM configurations on modern cubed-sphere and variable-resolution computational grids. The research thereby contributed to the description and quantification of uncertainties in CESM’s dynamical cores and their physics-dynamics interactions.

  3. An Extended Eddy-Diffusivity Mass-Flux Scheme for Unified Representation of Subgrid-Scale Turbulence and Convection

    Science.gov (United States)

    Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.

    2018-03-01

    Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterizations schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent up and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of up and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
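
    The generic EDMF flux decomposition that the scheme extends can be written as (standard form; the extended scheme's prognostic closures are given in the paper itself):

        \overline{w'\phi'} \;=\; -K\,\frac{\partial \bar{\phi}}{\partial z}
        \;+\; \sum_i a_i\,\big(w_i - \bar{w}\big)\big(\phi_i - \bar{\phi}\big)

    where K is the eddy diffusivity of the quasi-isotropic environment and a_i, w_i, and φ_i are the area fraction, vertical velocity, and scalar value of plume i.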

  4. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    Science.gov (United States)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved field on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model using spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with spherical harmonic truncation at L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparison among these simulations, and the role of small-scale fields in the large-scale dynamics through the SGS terms in the LES.

  5. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Generated by landscape discontinuities (e.g., sea breezes), mesoscale circulation processes are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u'_i u'_i⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, ⟨·⟩ is the grid-scale horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes.

  6. Sub-grid scale combustion models for large eddy simulation of unsteady premixed flame propagation around obstacles.

    Science.gov (United States)

    Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro

    2010-08-15

    In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data by Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for a premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce the experiment qualitatively in terms of the stages of flame acceleration and deceleration around each obstacle, and the shape of the propagating flame. Without adjusting any constants or parameters, the sgs model by Charlette et al. also provides satisfactory quantitative predictions for flame speed and pressure peak. Conversely, the sgs combustion models other than that of Charlette et al. give correct predictions only after an ad hoc tuning of constants and parameters.

  7. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations of ET imply greater latent heat fluxes and a potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties and also neglect lateral transfer of water across heterogeneous landscapes at the global scale. We use a Budyko framework to express ET as a function of P and PET and to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at the global scale shows that the effects of sub-grid heterogeneity are most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM, and quantify whether, and by how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at the global scale.
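
    The sign of the bias follows from the concavity of the Budyko curve (Jensen's inequality); a sketch with an assumed Fu-type one-parameter Budyko form and synthetic sub-grid forcing in which P and PET are anti-correlated:

        import numpy as np

        def budyko_et(p, pet, w=2.6):
            """Fu/Budyko-type curve: ET as a function of precipitation P and
            potential evapotranspiration PET (w is a shape parameter)."""
            return p * (1.0 + pet / p - (1.0 + (pet / p) ** w) ** (1.0 / w))

        rng = np.random.default_rng(3)
        p = rng.uniform(400.0, 1600.0, 1000)              # sub-grid P [mm/yr]
        pet = 2000.0 - 0.8 * p + rng.normal(0, 50, 1000)  # PET anti-correlated with P

        print(budyko_et(p.mean(), pet.mean()))   # ET from grid-mean forcing (larger)
        print(budyko_et(p, pet).mean())          # mean of sub-grid ET (smaller)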

  8. Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids, interconnected by power electronic interfaces. The main challenge here is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which certainly is tougher than previous efforts developed for only either an ac or dc microgrid. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented to show the overall performance of the hybrid microgrid.

  9. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
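
    A minimal numeric sketch of the averaging model at the heart of the procedure (the scale values and weights below are invented for illustration):

        def averaging_response(scale_values, weights, s0=5.0, w0=1.0):
            """Information-integration averaging model: the response is the
            weighted average of the attribute scale values s_i with weights w_i,
            including an initial-state term (s0, w0)."""
            num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
            den = w0 + sum(weights)
            return num / den

        # Two attributes judged on a 0-10 scale with unequal importance
        print(averaging_response([8.0, 4.0], [2.0, 1.0]))   # -> (5 + 16 + 4) / 4 = 6.25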

  10. Mesh-Sequenced Realizations for Evaluation of Subgrid-Scale Models for Turbulent Combustion (Short Term Innovative Research Program)

    Science.gov (United States)

    2018-02-15

    conservation equations. The closure problem hinges on the evaluation of the filtered chemical production rates. In MRA/MSR, simultaneous large-eddy ... simultaneous, constrained large-eddy simulations at three different mesh levels as a means of connecting reactive scalar information at different ... functions of a locally normalized subgrid Damköhler number (a measure of the distribution of inverse chemical time scales in the neighborhood of a

  11. The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Harry V. Wang

    2014-03-01

    Full Text Available Hurricane Sandy inflicted heavy damage in New York City and on the New Jersey coast as the second costliest storm in U.S. history. A large-scale, unstructured-grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element (SELFE) model, was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model's open boundary, 1500 km away from the coast, and by the wind and pressure fields from the atmospheric model Regional Atmospheric Modeling System (RAMS) provided by Weatherflow Inc. The comparisons of the modeled storm tide with the NOAA gauge stations from Montauk, NY, and Long Island Sound, encompassing New York Harbor and Atlantic City, NJ, to Duck, NC, were in good agreement, with an overall root mean square error and relative error on the order of 15–20 cm and 5%–7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model that incorporates LIDAR data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with USGS' Hurricane Sandy Mapper database in terms of timing, local inundation area, and depth of the flooding water. Street-level inundation with water bypassing the city buildings was simulated, and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.

  12. From Detailed Description of Chemical Reacting Carbon Particles to Subgrid Models for CFD

    Directory of Open Access Journals (Sweden)

    Schulze S.

    2013-04-01

    Full Text Available This work is devoted to the development and validation of a sub-model for the partial oxidation of a spherical char particle moving in an air/steam atmosphere. The particle diameter is 2 mm. The coal particle is represented by moisture- and ash-free nonporous carbon, while the coal rank is implemented using semi-global reaction rate expressions taken from the literature. The submodel includes six gaseous chemical species (O2, CO2, CO, H2O, H2, N2). Three heterogeneous reactions are employed, along with two homogeneous semi-global reactions, namely carbon monoxide oxidation and the water-gas shift reaction. The distinguishing feature of the subgrid model is that it takes into account the influence of homogeneous reactions on integral characteristics such as carbon combustion rates and particle temperature. The sub-model was validated by comparing its results with those of a comprehensive CFD-based model resolving the bulk flow and the boundary layer around the particle. In this model, the Navier-Stokes equations coupled with the energy and species conservation equations were used to solve the problem by means of the pseudo-steady state approach. At the surface of the particle, the balance of mass, energy and species concentration was applied, including the effect of the Stefan flow and heat loss due to radiation at the surface of the particle. Good agreement was achieved between the sub-model and the CFD-based model. Additionally, the CFD-based model was verified against experimental data published in the literature (Makino et al. (2003) Combust. Flame 132, 743-753). Good agreement was achieved between numerically predicted and experimentally obtained data for input conditions corresponding to the kinetically controlled regime. The maximal discrepancy (10%) between the experiments and the numerical results was observed in the diffusion-controlled regime. Finally, we discuss the influence of the Reynolds number, the ambient O2 mass fraction and the ambient

  13. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    Science.gov (United States)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term, which accounts for interactions between resolved scales and unresolved scales, and the Reynolds term, which accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models are thus desirable: models that better conform to the physics of the true SGS stress while remaining stable. We here investigate, in decaying homogeneous isotropic turbulence, using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term, both in terms of dissipation spectra and in terms of the probability density function (pdf) of dissipation in physical space, positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also provides backscatter. It also has an eddy-viscosity multiscale model part that
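    A scale-similarity cross-term model of the kind described is commonly built by test-filtering the resolved field. The sketch below is our own illustration of the generic construction L_ij = bar(u_i u_j) - bar(u_i) bar(u_j), using a simple periodic top-hat test filter; it is not the authors' implementation.

```python
import numpy as np

def box_filter(f, width=3):
    """Periodic top-hat test filter of the given width, applied along each axis."""
    kernel = np.ones(width) / width
    for axis in range(f.ndim):
        f = np.apply_along_axis(
            lambda x: np.convolve(np.r_[x, x[:width - 1]], kernel, mode="valid"),
            axis, f)
    return f

def scale_similarity_term(u_i, u_j):
    """Bardina-type term: bar(u_i*u_j) - bar(u_i)*bar(u_j) on the resolved field."""
    return box_filter(u_i * u_j) - box_filter(u_i) * box_filter(u_j)
```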

  14. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
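    The paper's central point, that post-model-selection estimators behave irregularly, is easy to reproduce in the simple linear regression setting it simulates. Here is a minimal sketch (our construction, not the paper's code) with AIC-based selection between a slope model and an intercept-only model:

```python
import numpy as np

rng = np.random.default_rng(1)

def aic(rss, n, k):
    """AIC of a Gaussian linear model: n*log(rss/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

def post_selection_slope(n=50, slope=0.3):
    """Select between y~1+x and y~1 by AIC; return the resulting slope estimate."""
    x = rng.normal(size=n)
    y = 1.0 + slope * x + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    rss_full = np.sum((y - X @ b) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    return b[1] if aic(rss_full, n, 2) < aic(rss_null, n, 1) else 0.0

estimates = np.array([post_selection_slope() for _ in range(2000)])
# The sampling distribution is a mixture of a point mass at 0 and a normal law:
print(np.mean(estimates == 0.0), estimates.mean())
```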

  15. Hybrid Large Eddy Simulation / Reynolds Averaged Navier-Stokes Modeling in Directed Energy Applications

    Science.gov (United States)

    Zilberter, Ilya Alexandrovich

    In this work, a hybrid Large Eddy Simulation / Reynolds-Averaged Navier-Stokes (LES/RANS) turbulence model is applied to simulate two flows relevant to directed energy applications. The flow solver blends the Menter Baseline turbulence closure near solid boundaries with a Lenormand-type subgrid model in the free stream, with a blending function that employs the ratio of estimated inner and outer turbulent length scales. A Mach 2.2 mixing nozzle/diffuser system representative of a gas laser is simulated under a range of exit pressures to assess the ability of the model to predict the dynamics of the shock train. The simulation captures the location of the shock train responsible for pressure recovery but under-predicts the rate of pressure increase. Predicted turbulence production at the wall is found to be highly sensitive to the behavior of the RANS turbulence model. A Mach 2.3, high-Reynolds-number, three-dimensional cavity flow is also simulated in order to compute the wavefront aberrations of an optical beam passing through the cavity. The cavity geometry is modeled using an immersed boundary method, and an auxiliary flat-plate simulation is performed to replicate the effects of the wind-tunnel boundary layer on the computed optical path difference. Pressure spectra extracted on the cavity walls agree with empirical predictions based on Rossiter's formula. Proper orthogonal modes of the wavefront aberrations in a beam originating from the cavity center agree well with experimental data despite uncertainty about inflow turbulence levels and boundary-layer thicknesses over the wind-tunnel window. Dynamic mode decomposition of a planar wavefront spanning the cavity reveals that wavefront distortions are driven by shear-layer oscillations at the Rossiter frequencies; these disturbances create eddy shocklets that propagate into the free stream, creating additional optical wavefront distortion.
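    The Rossiter formula referenced for the cavity pressure spectra has the standard empirical form f_m = (U/L)(m - alpha)/(M + 1/kappa). A small sketch with the usual empirical constants (alpha ~ 0.25, kappa ~ 0.57; the flow values below are illustrative, not the dissertation's actual geometry):

```python
def rossiter_frequency(m, U, L, M, alpha=0.25, kappa=0.57):
    """Empirical Rossiter mode-m frequency for a cavity of length L."""
    return (U / L) * (m - alpha) / (M + 1.0 / kappa)

for m in (1, 2, 3):  # first three Rossiter modes
    print(m, rossiter_frequency(m, U=550.0, L=0.15, M=2.3))
```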

  16. Structure and modeling of turbulence

    International Nuclear Information System (INIS)

    Novikov, E.A.

    1995-01-01

    The "vortex strings" scale l_s ∼ L·Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES)

  17. Cloud-In-Cell modeling of shocked particle-laden flows at a "SPARSE" cost

    Science.gov (United States)

    Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.

    2017-11-01

    A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling, where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a "Cloud-In-Cell" (CIC) approach which amalgamates groups of physical particles into computational "macro-particles". CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.
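    The "Stokes drag corrected for high Reynolds and Mach numbers" mentioned above is often realized with a Schiller-Naumann-type factor; Mach-number corrections are additional and omitted here. A hedged sketch (our illustration, not the exact correlation used in SPARSE):

```python
def stokes_drag_correction(Re):
    """Schiller-Naumann finite-Re correction factor f, so that
    F_drag = 3*pi*mu*d*(u_gas - u_p)*f; commonly used for Re up to ~800."""
    return 1.0 + 0.15 * Re ** 0.687
```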

  18. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    Science.gov (United States)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate a microfluidic system, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows poses a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be up to 100s of μm). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using a "slip" wall velocity and an augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this
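    The EDL thickness quoted (roughly 10 nm up to microns) is set by the Debye length. Here is a self-contained sketch for a symmetric z:z electrolyte; this is the standard textbook formula, not code from the report:

```python
import numpy as np

def debye_length(c_mol_per_L, T=298.15, eps_r=78.5, z=1):
    """Debye length sqrt(eps*kB*T / (2*n*(z*e)^2)) for a z:z electrolyte."""
    kB, e = 1.380649e-23, 1.602176634e-19
    eps0, NA = 8.8541878128e-12, 6.02214076e23
    n = c_mol_per_L * 1e3 * NA  # ion number density in ions/m^3
    return np.sqrt(eps_r * eps0 * kB * T / (2.0 * n * (z * e) ** 2))

print(debye_length(1e-3))  # ~9.6e-9 m for a 1 mM 1:1 electrolyte at 25 C
```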

  19. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions under the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
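    The core of the ERF method is averaging the driving fields before the single RCM run rather than averaging RCM outputs afterwards. A minimal sketch of that step, assuming the GCM fields have already been regridded to a common grid (the function and variable names are ours):

```python
import numpy as np

def ensemble_reconstructed_forcings(gcm_fields):
    """Average the initial/boundary-condition fields of several GCMs into a
    single driving dataset for one RCM run (the ERF idea)."""
    return np.mean(np.stack(gcm_fields), axis=0)

# Usage: ibc = ensemble_reconstructed_forcings([gcm1_t2m, gcm2_t2m, gcm3_t2m])
```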

  20. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    Science.gov (United States)

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy and reducing model run-times, and to show how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
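    The mixed upscaling idea, block averages for cell storage plus face averages so that narrow channels still connect coarse cells, can be sketched as follows (our illustration of the concept, not the authors' code):

```python
import numpy as np

def upscale_elevation(z_fine, block):
    """Cell-block mean elevation and east-face average elevation per coarse cell.
    z_fine: 2-D high-resolution elevation grid; block: integer coarsening factor."""
    ny, nx = z_fine.shape[0] // block, z_fine.shape[1] // block
    z = z_fine[:ny * block, :nx * block].reshape(ny, block, nx, block)
    cell_block_mean = z.mean(axis=(1, 3))
    east_face_mean = z[:, :, :, -1].mean(axis=1)  # fine cells along each east edge
    return cell_block_mean, east_face_mean
```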

  1. Air quality impact of two power plants using a sub-grid

    International Nuclear Information System (INIS)

    Drevet, Jerome; Musson-Genon, Luc

    2012-01-01

    Modeling point-source emissions of air pollutants with regional Eulerian models is likely to lead to errors because a 3D Eulerian model is not able to correctly reproduce the evolution of a plume near its source. To overcome these difficulties, we applied a Gaussian puff model - embedded within a 3D Eulerian model - for an impact assessment of the EDF fossil-fuel-fired power plants of Porcheville and Vitry, Ile-de-France. We simulated an entire year of atmospheric processes for an area covering the Paris region with the Polyphemus platform, with which we conducted various scenarios with or without the Gaussian puff model, referred to as Plume-in-grid, to independently handle the major point-source emissions in Ile-de-France. Our study focuses on four chemical compounds (NO, NO2, SO2 and O3). The use of a Gaussian model is important, particularly for primary compounds with low reactivity such as SO2, especially as industrial stacks are the major source of its emissions. SO2 concentrations simulated using Plume-in-grid are closer to the concentrations measured by the stations of the air quality agencies (Associations Agreees de Surveillance de la Qualite de l'Air, AASQA), although they remain largely overestimated. The use of a Gaussian model increases the concentrations near the source and lowers background levels of the various chemical species (except O3). The simulated concentrations may vary by over 30% depending on whether we consider the Gaussian model for primary compounds such as SO2 and NO, and by around 2% for secondary compounds such as NO2 and O3. Regarding the impact of the fossil-fuel-fired power plants, simulated concentrations are increased by about 1 μg/m3 for SO2 annual averages close to the Porcheville stack and are lowered by about 0.5 μg/m3 far from the sources, highlighting the less diffusive character of the Gaussian model by comparison with the Eulerian model. The integration of a sub-grid Gaussian model offers the advantage of

  2. Permafrost sub-grid heterogeneity of soil properties key for 3-D soil processes and future climate projections

    Directory of Open Access Journals (Sweden)

    Christian Beer

    2016-08-01

    Full Text Available There are massive carbon stocks stored in permafrost-affected soils due to the 3-D soil movement process called cryoturbation. For a reliable projection of the past, recent and future Arctic carbon balance, and hence climate, a reliable concept for representing cryoturbation in a land surface model (LSM) is required. The basis of the underlying transport processes is pedon-scale heterogeneity of soil hydrological and thermal properties as well as insulating layers, such as snow and vegetation. Today we still lack a concept of how to reliably represent pedon-scale properties and processes in an LSM. One possibility could be a statistical approach. This perspective paper demonstrates the importance of sub-grid heterogeneity in permafrost soils as a prerequisite to implementing any lateral transport parametrization. Representing such heterogeneity at the sub-pixel size of an LSM is the next logical step of model advancement. In a theoretical experiment, heterogeneity of thermal and hydrological soil properties alone leads to a remarkable initial sub-grid range of subsoil temperature of 2 °C, and of active-layer thickness of 150 cm, in East Siberia. These results show the way forward in representing combined lateral and vertical transport of water and soil in LSMs.

  3. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    Science.gov (United States)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  4. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  5. On the influence of cloud fraction diurnal cycle and sub-grid cloud optical thickness variability on all-sky direct aerosol radiative forcing

    International Nuclear Information System (INIS)

    Min, Min; Zhang, Zhibo

    2014-01-01

    The objective of this study is to understand how the cloud fraction diurnal cycle and sub-grid cloud optical thickness variability influence the all-sky direct aerosol radiative forcing (DARF). We focus on the southeast Atlantic region, where transported smoke is often observed above low-level water clouds during burning seasons. We use CALIOP observations to derive the optical properties of aerosols. We developed two diurnal cloud fraction variation models. One is based on sinusoidal fitting of MODIS observations from the Terra and Aqua satellites. The other is based on high-temporal-frequency diurnal cloud fraction observations from SEVIRI on board a geostationary satellite. Both models indicate a strong cloud fraction diurnal cycle over the southeast Atlantic region. Sensitivity studies indicate that using a constant cloud fraction corresponding to the Aqua local equatorial crossing time (1:30 PM) generally leads to an underestimated (less positive) diurnal mean DARF even if solar diurnal variation is considered. Using the cloud fraction corresponding to the Terra local equatorial crossing time (10:30 AM) generally leads to overestimation. The biases are typically around 10–20%, but can exceed 50%. The influence of sub-grid cloud optical thickness variability on DARF is studied utilizing the cloud optical thickness histogram available in the MODIS Level-3 daily data. Similar to previous studies, we found that the above-cloud smoke in the southeast Atlantic region has a strong warming effect at the top of the atmosphere. However, because of the plane-parallel albedo bias, the warming effect of above-cloud smoke could be significantly overestimated if the grid mean, instead of the full histogram, of cloud optical thickness is used in the computation. This bias generally increases with increasing above-cloud aerosol optical thickness and sub-grid cloud optical thickness inhomogeneity. Our results suggest that the cloud diurnal cycle and sub-grid cloud variability are important factors.
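    The sinusoidal diurnal-cycle fit can be written as CF(t) = a + b cos(wt) + c sin(wt) with w = 2*pi/24 per hour. A least-squares sketch follows (our illustration; with only the two Terra/Aqua overpass times the fit must be further constrained, which is presumably why the SEVIRI-based model is also used):

```python
import numpy as np

def fit_diurnal_cloud_fraction(t_hours, cf):
    """Least-squares fit of cf(t) = a + b*cos(w t) + c*sin(w t), w = 2*pi/24."""
    w = 2.0 * np.pi / 24.0
    t = np.asarray(t_hours, dtype=float)
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(cf, dtype=float), rcond=None)
    return coef  # evaluate a + b*cos(w t) + c*sin(w t) at any local time
```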

  6. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial-range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial-range properties for any fixed LES resolution.

  7. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulence model based on an asymptotic development of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities, rather than velocity correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interactions at sub-grid scales allows their evolution to be modelled with a linear inhomogeneous equation, where the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of the sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle-in-cell (PIC) method was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations allows a description of mean velocity profiles in agreement with experimental results and theoretical results based on the symmetries of the Navier-Stokes equations. Possible applications and improvements of the model are discussed in the conclusion. (author)

  8. Comparison of GCM subgrid fluxes calculated using BATS and SiB schemes with a coupled land-atmosphere high-resolution model

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Jinmei; Arritt, R.W. [Iowa State Univ., Ames, IA (United States)

    1996-12-31

    The importance of land-atmosphere interactions and the biosphere in climate change studies has long been recognized, and several land-atmosphere interaction schemes have been developed. Among these, the Simple Biosphere scheme (SiB) of Sellers et al. and the Biosphere Atmosphere Transfer Scheme (BATS) of Dickinson et al. are two of the most widely known. The effects of GCM subgrid-scale inhomogeneities of surface properties in general circulation models have also received increasing attention in recent years. However, due to the complexity of land surface processes and the difficulty of prescribing the large number of parameters that determine atmospheric and soil interactions with vegetation, many previous studies and results seem to be contradictory. A GCM grid element typically represents an area of 10^4-10^6 km². Within such an area, there exist variations of soil type, soil wetness, vegetation type, vegetation density and topography, as well as urban areas and water bodies. In this paper, we incorporate both the BATS and SiB2 land surface process schemes into a nonhydrostatic, compressible version of the AMBLE model (Atmospheric Model -- Boundary-Layer Emphasis), and compare the surface heat fluxes and mesoscale circulations calculated using the two schemes. 8 refs., 5 figs.

  9. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and the complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  10. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.; Lee, Bok Jik; Im, Hong G.; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, Philip H.

    2017-01-01

    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets, and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of the progress variable can be relevant.

  12. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection

    Science.gov (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2017-10-01

    At the crossroads between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and spanwise open-ended distance π are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing well the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy-viscosity, ν_t, and eddy-diffusivity, κ_t, are calculated a priori and reveal positive prevalent values that maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold
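    For orientation, the baseline eddy-viscosity closure of which QR, WALE, and S3PQR are refinements computes ν_t from the resolved strain rate. A pointwise sketch in the standard Smagorinsky form (the constant is a typical textbook choice, not a value from the paper):

```python
import numpy as np

def smagorinsky_eddy_viscosity(grad_u, delta, cs=0.17):
    """nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij), at one point.
    grad_u: 3x3 resolved velocity-gradient tensor; delta: filter width."""
    S = 0.5 * (grad_u + grad_u.T)  # resolved strain-rate tensor
    return (cs * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))
```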

  13. Alpha-modeling strategy for LES of turbulent mixing

    NARCIS (Netherlands)

    Geurts, Bernard J.; Holm, Darryl D.; Drikakis, D.; Geurts, B.J.

    2002-01-01

    The α-modeling strategy is followed to derive a new subgrid parameterization of the turbulent stress tensor in large-eddy simulation (LES). The LES-α modeling yields an explicitly filtered subgrid parameterization which contains the filtered nonlinear gradient model as well as a model which

  14. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the familywise error rate.

  15. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps advance knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.

  16. Impact of Subgrid Scale Models and Heat Loss on Large Eddy Simulations of a Premixed Jet Burner Using Flamelet-Generated Manifolds

    Science.gov (United States)

    Hernandez Perez, Francisco E.; Im, Hong G.; Lee, Bok Jik; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, L. Philip H.

    2017-11-01

    Large eddy simulations (LES) of a turbulent premixed jet flame in a confined chamber are performed employing the flamelet-generated manifold (FGM) method for tabulation of chemical kinetics and thermochemical properties, as well as the OpenFOAM framework for computational fluid dynamics. The burner has been experimentally studied by Lammel et al. (2011) and features an off-center nozzle, feeding a preheated lean methane-air mixture with an equivalence ratio of 0.71 and mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the FGM tabulation via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed filtered density functions. The impact of heat loss inclusion as well as SGS modeling for both the SGS stresses and SGS variance of progress variable on the numerical results is investigated. Comparisons of the LES results against measurements show a significant improvement in the prediction of temperature when heat losses are incorporated into FGM. While further enhancements in the LES results are accomplished by using SGS models based on transported quantities and/or dynamically computed coefficients as compared to the Smagorinsky model, heat loss inclusion is more relevant. This research was sponsored by King Abdullah University of Science and Technology (KAUST) and made use of computational resources at KAUST Supercomputing Laboratory.

  17. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  18. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results obtained using the NS2 simulator.
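    The iterative calculation described can be illustrated with a weighted fair-share loop: flows whose input rate is below their weighted share keep it, and the leftover capacity is redistributed among the remaining flows. This is a generic sketch of such an iteration, not the authors' exact model:

```python
def average_bandwidth(capacity, weights, input_rates):
    """Iterative weighted fair-share allocation on one WFQ-scheduled link."""
    alloc, active, cap = {}, set(range(len(weights))), float(capacity)
    while active:
        wsum = sum(weights[i] for i in active)
        done = {i for i in active if input_rates[i] <= cap * weights[i] / wsum}
        if not done:  # every remaining flow is bottlenecked: split by weight
            for i in active:
                alloc[i] = cap * weights[i] / wsum
            break
        for i in done:  # underloaded flows keep their input rate
            alloc[i] = input_rates[i]
            cap -= input_rates[i]
        active -= done
    return alloc

print(average_bandwidth(100.0, [1, 1, 2], [10.0, 50.0, 80.0]))
# -> {0: 10.0, 1: 30.0, 2: 60.0}
```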

  19. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
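    The DMA recursion underlying the method updates model probabilities with a forgetting factor and each model's one-step predictive likelihood; dynamic Occam's window then keeps only models whose probability stays within a cutoff of the best. A compact sketch of one update step (standard DMA algebra; the cutoff handling is our simplification):

```python
import numpy as np

def dma_step(pi_prev, pred_lik, alpha=0.99, window=1e-4):
    """One Dynamic Model Averaging update.
    pi_prev: model probabilities at t-1; pred_lik: predictive likelihoods of y_t;
    alpha: forgetting factor; window: Occam's-window cutoff relative to the best."""
    pi = pi_prev ** alpha          # forgetting (flattening) step
    pi /= pi.sum()
    pi *= pred_lik                 # Bayesian update with the new observation
    pi /= pi.sum()
    keep = pi >= window * pi.max() # dynamic Occam's window
    pi = np.where(keep, pi, 0.0)
    return pi / pi.sum()
```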

  20. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields on the input spectra is assessed by

  1. The model evaluation of subsonic aircraft effect on the ozone and radiative forcing

    Energy Technology Data Exchange (ETDEWEB)

    Rozanov, E.; Zubov, V.; Egorova, T.; Ozolin, Y. [Main Geophysical Observatory, St.Petersburg (Russian Federation)

    1997-12-31

    A two-dimensional transient zonally averaged model was used to evaluate the effect of subsonic aircraft exhausts on ozone, trace gases and radiation in the troposphere and lower stratosphere. The mesoscale transformation of gas composition was included on the basis of box model simulations. It was found that the transformation of the exhausted gases at sub-grid scales can influence the results of the modelling. The radiative forcing caused by changes in gases, sulfate aerosol, soot and contrails was estimated to be as large as 0.12-0.15 W/m² (0.08 W/m² globally and annually averaged). (author) 10 refs.

  3. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  4. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    -basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  6. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three-dimensional power distribution as that generated by a time-average model. However, it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time-dependent calculations, simulations of the power pulse following a hypothetical large loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed partly to the fact that voiding is not complete, and partly to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  7. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and neurofuzzy (NF) techniques to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
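    BIC-based model weights of the kind BAIMA uses follow the standard approximation to posterior model probabilities. A minimal sketch (generic formula, not the authors' code):

```python
import numpy as np

def bic_weights(bic_values):
    """Approximate posterior model probabilities:
    w_k proportional to exp(-(BIC_k - BIC_min)/2)."""
    d = np.asarray(bic_values, dtype=float)
    w = np.exp(-0.5 * (d - d.min()))
    return w / w.sum()

def bma_estimate(estimates, bic_values):
    """Model-averaged estimate (weighted mean of the individual AI outputs)."""
    w = bic_weights(bic_values)
    return np.dot(w, np.asarray(estimates, dtype=float))
```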

  8. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  9. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. 35.9°N, Long. 89.9°W), U.S.A. The model has a coefficient of determination of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for a seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has a 0.91 coefficient of determination and a 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  10. Effect of LES models on the entrainment of a passive scalar in a turbulent planar jet

    Science.gov (United States)

    Chambel Lopes, Diogo; da Silva, Carlos; Reis, Ricardo; Raman, Venkat

    2011-11-01

    Direct and large-eddy simulations (DNS/LES) of turbulent planar jets are used to study the role of subgrid-scale models in the integral characteristics of the passive scalar mixing in a jet. Specifically the effect of subgrid-scale models in the jet spreading rate and centreline passive scalar decay rates are assessed and compared. The modelling of the subgrid-scale fluxes is particularly challenging in the turbulent/nonturbulent (T/NT) region that divides the two regions in the jet flow: the outer region where the flow is irrotational and the inner region where the flow is turbulent. It has been shown that important Reynolds stresses exist near the T/NT interface and that these stresses determine in part the mixing and combustion rates in jets. The subgrid scales of motion near the T/NT interface are far from equilibrium and contain an important fraction of the total kinetic energy. Model constants used in several subgrid-scale models such as the Smagorinsky and the gradient models need to be corrected near the jet edge. The procedure used to obtain the dynamic Smagorinsky constant is not able to cope with the intermittent nature of this region.

  11. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
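    The O(ε²) equivalence follows from a two-term Taylor argument: differentiability lets the outcome be expanded in the heterogeneity level ε, and symmetry kills the linear term. Sketched in LaTeX, with F denoting the model outcome (our notation, not the paper's):

```latex
\[
  F(\varepsilon) \;=\; F(0) + F'(0)\,\varepsilon
      + \tfrac12 F''(0)\,\varepsilon^{2} + o(\varepsilon^{2}),
  \qquad
  F(\varepsilon) = F(-\varepsilon) \;\Longrightarrow\; F'(0) = 0,
\]
\[
  \therefore\quad F(\varepsilon) - F(0) \;=\; O(\varepsilon^{2}),
\]
where $F(0)$ is the outcome of the corresponding homogeneous model.
```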

  13. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  14. Physical modelling of interactions between interfaces and turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Toutant, A

    2006-12-15

    The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists of using and improving the DNS method in order to develop the Interfaces and Sub-grid Scales (ISS) concept. ISS is a two-phase equivalent to the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a space filter, we have exhibited correlations, or sub-grid terms, that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with the maximum of the norm of the advection tensor, we have found that the sub-grid terms related to interfacial forces and viscous effects are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS data sets, we demonstrate that the scale-similarity hypothesis, reinterpreted near discontinuities, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work. Indeed, in this step, interfaces are smooth and interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising (author)
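
    The a priori testing mentioned above can be illustrated on a single-phase toy problem: compute the exact filtered subgrid term of a synthetic 2-D field with a box filter, then correlate it against a scale-similarity estimate built by re-filtering the filtered field at a coarser scale. The field, filter widths and single-phase setting are assumptions for demonstration, not the thesis's two-phase formulation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        rng = np.random.default_rng(2)
        u = uniform_filter(rng.standard_normal((256, 256)), 3)  # synthetic smooth "DNS" field

        def bar(f, width):
            return uniform_filter(f, size=width, mode="wrap")   # box filter

        width = 8
        tau_exact = bar(u * u, width) - bar(u, width) ** 2      # exact subgrid term

        ubar = bar(u, width)
        tau_model = bar(ubar * ubar, 2 * width) - bar(ubar, 2 * width) ** 2  # similarity estimate

        corr = np.corrcoef(tau_exact.ravel(), tau_model.ravel())[0, 1]
        print(f"a priori correlation between exact and modelled term: {corr:.2f}")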

  15. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used in 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and the viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
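
    A minimal sketch of the global-equilibrium idea under strong simplifications: in 1-D, with a Smagorinsky-like closure standing in for the Vreman base model, a single coefficient that is constant in space is set at each instant by balancing the volume-averaged subgrid dissipation against the volume-averaged viscous dissipation. All fields, scales and the closure itself are illustrative, not the paper's formulation.

        import numpy as np

        nu, delta = 1e-3, 2 * np.pi / 128
        x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        u = np.sin(x) + 0.3 * np.sin(7 * x)          # stand-in resolved velocity

        dudx = np.gradient(u, x)                     # resolved rate of strain (1-D)
        eps_visc = np.mean(nu * dudx ** 2)           # <viscous dissipation>
        eps_sgs_per_C = np.mean(delta ** 2 * np.abs(dudx) ** 3)  # SGS dissipation per unit C

        # one coefficient, constant in space, adjusted in "time" by the global balance
        C_global = eps_visc / eps_sgs_per_C
        nu_sgs = C_global * delta ** 2 * np.abs(dudx)  # resulting eddy-viscosity field
        print(f"C(t) = {C_global:.4e}, max nu_sgs = {nu_sgs.max():.2e}")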

  17. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
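
    A compact sketch of the averaging step: enumerate candidate submodels, approximate each posterior model probability with a BIC weight (a common large-sample shortcut; the article's own computation may differ), and average the submodel predictions with those weights. Data are synthetic.

        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        n = 200
        X = rng.standard_normal((n, 3))
        y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.standard_normal(n)  # X[:,2] is pure noise

        def fit_ols(cols):
            A = np.column_stack([np.ones(n), X[:, list(cols)]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            bic = n * np.log(rss / n) + A.shape[1] * np.log(n)
            return beta, A, bic

        models = [c for r in range(1, 4) for c in itertools.combinations(range(3), r)]
        fits = [fit_ols(c) for c in models]
        bics = np.array([f[2] for f in fits])
        pmp = np.exp(-0.5 * (bics - bics.min()))
        pmp /= pmp.sum()                     # approximate posterior model probabilities

        y_bma = sum(w * (A @ beta) for w, (beta, A, _) in zip(pmp, fits))
        for c, w in zip(models, pmp):
            print(f"model {c}: PMP = {w:.3f}")
        print("BMA prediction (first 3):", np.round(y_bma[:3], 2))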

  18. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    Directory of Open Access Journals (Sweden)

    J. Brandt

    2002-01-01

    A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases, such as the Chernobyl accident. The model is a combination of a Lagrangian model, which includes the near-source dispersion, and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the total deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using different relatively simple and comprehensive parameterizations for dry and wet deposition. The performance of different combinations of two wet deposition parameterizations and three dry deposition parameterizations has been evaluated against measurements, using different statistical tests. The best model performance is obtained when the total deposition is parameterized by combining a simple method for dry deposition with a subgrid-scale averaging scheme for wet deposition based on relative humidities. The same major conclusion is obtained for all three radioactive isotopes and for two different deposition measurement databases. Large differences are seen in the results obtained by using the two different parameterizations of wet deposition based on precipitation rates and relative humidities, respectively. The parameterization based on subgrid-scale averaging is, in all cases, performing better than the parameterization based on precipitation rates. This indicates that the in-cloud scavenging process is more important than the below-cloud scavenging process for the submicron particles and that the precipitation rates are

  19. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    Science.gov (United States)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  20. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
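
    The moving-average construction itself is easy to demonstrate: convolve a discretized Gaussian white-noise (Lévy) basis with a kernel. The power-law kernel below is an illustrative stand-in; the paper's one-parameter power kernel and its Hausdorff-dimension result are not reproduced here.

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(4)
        n = 256
        noise = rng.standard_normal((n, n))            # discretized Gaussian basis

        yy, xx = np.mgrid[-32:33, -32:33]
        r = np.hypot(xx, yy)
        kernel = (1.0 + r) ** -1.5                     # illustrative power-law kernel
        kernel /= np.sqrt(np.sum(kernel ** 2))         # unit marginal variance after smoothing

        field = fftconvolve(noise, kernel, mode="same")  # Gaussian random field
        print(f"field mean = {field.mean():.3f}, std = {field.std():.3f}")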

  1. Computation of transitional flow past a circular cylinder using multiblock lattice Boltzmann method with a dynamic subgrid scale model

    International Nuclear Information System (INIS)

    Premnath, Kannan N; Pattison, Martin J; Banerjee, Sanjoy

    2013-01-01

    The lattice Boltzmann method (LBM) is a kinetic based numerical scheme for the simulation of fluid flow. While the approach has attracted considerable attention during the last two decades, there is a need for systematic investigation of its applicability to complex canonical turbulent flow problems of engineering interest, where the nature of the numerical properties of the underlying scheme plays an important role in their accurate solution. In this paper, we discuss and evaluate an LBM based on a multiblock approach for efficient large eddy simulation of three-dimensional external flow past a circular cylinder in the transitional regime characterized by the presence of multiple scales. For enhanced numerical stability at higher Reynolds numbers, a multiple relaxation time formulation is considered. The effect of subgrid scales is represented by means of a Smagorinsky eddy-viscosity model, where the model coefficient is computed locally by means of a dynamic procedure, providing better representation of flow physics with reduced empiricism. Simulations are performed for a Reynolds number of 3900 based on the free stream velocity and cylinder diameter, for which prior data are available for comparison. The presence of a laminar boundary layer that separates into a pair of shear layers evolving into turbulent wakes poses a particular challenge for numerical methods under this condition. The relatively low numerical dissipation introduced by the inherently parallel and second-order accurate LBM is an important computational asset in this regard. Computations using five different grid levels, where the various blocks are suitably aligned to resolve multiscale flow features, show that the structure of the recirculation region is well reproduced and the statistics of the mean flow and turbulent fluctuations are in satisfactory agreement with prior data. (paper)

  3. Comparison of four large-eddy simulation research codes and effects of model coefficient and inflow turbulence in actuator-line-based wind turbine modeling

    DEFF Research Database (Denmark)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre

    2018-01-01

    ...are shown to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform inflow, whereas with turbulent inflow the coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, careful implementation and extensive cross-verification among codes can result in highly reproducible predictions. Moreover, the characteristics of the inflow turbulence appear...

  4. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple...

  5. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  6. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.

    2013-12-01

    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km²) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States or STATSGO2 (1:250 000) and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, the 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the time period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from both simulations showed marked differences.

  7. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.

  8. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  9. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  10. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940 < n_s < 1.000, where n_s is specified at a pivot scale of 0.015 Mpc⁻¹. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.

  11. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants and using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has a negative impact on the CA.
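
    A small sketch of the Jackknife Model Averaging idea (in the leave-one-out spirit the record's approach follows): compute leave-one-out residuals for each candidate least-squares model via the hat-matrix identity, then choose nonnegative weights summing to one that minimize the cross-validation error of the weighted prediction. Candidate models and data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        n = 150
        X = rng.standard_normal((n, 4))
        y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.standard_normal(n)

        def loo_residuals(cols):
            A = np.column_stack([np.ones(n), X[:, cols]])
            H = A @ np.linalg.solve(A.T @ A, A.T)       # hat matrix
            e = y - H @ y
            return e / (1.0 - np.diag(H))               # leave-one-out residuals

        candidates = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]  # nested models
        E = np.column_stack([loo_residuals(c) for c in candidates])

        m = len(candidates)
        res = minimize(lambda w: np.mean((E @ w) ** 2),
                       np.full(m, 1.0 / m),
                       bounds=[(0, 1)] * m,
                       constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
        print("JMA weights:", np.round(res.x, 3))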

  12. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially...

  13. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms conventional nodal methods in terms of accuracy. To evaluate the feasibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  14. A collisional-radiative average atom model for hot plasmas

    International Nuclear Information System (INIS)

    Rozsnyai, B.F.

    1996-01-01

    A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in the condition of local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iteration until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab

  15. Testing the skill of numerical hydraulic modeling to simulate spatiotemporal flooding patterns in the Logone floodplain, Cameroon

    Science.gov (United States)

    Fernández, Alfonso; Najafi, Mohammad Reza; Durand, Michael; Mark, Bryan G.; Moritz, Mark; Jung, Hahn Chul; Neal, Jeffrey; Shastry, Apoorva; Laborde, Sarah; Phang, Sui Chian; Hamilton, Ian M.; Xiao, Ningchuan

    2016-08-01

    Recent innovations in hydraulic modeling have enabled global simulation of rivers, including simulation of their coupled wetlands and floodplains. Accurate simulations of floodplains using these approaches may imply tremendous advances in global hydrologic studies and in biogeochemical cycling. One such innovation is to explicitly treat sub-grid channels within two-dimensional models, given only remotely sensed data in areas with limited data availability. However, predicting inundated area in floodplains using a sub-grid model has not been rigorously validated. In this study, we applied the LISFLOOD-FP hydraulic model using a sub-grid channel parameterization to simulate inundation dynamics on the Logone River floodplain, in northern Cameroon, from 2001 to 2007. Our goal was to determine whether floodplain dynamics could be simulated with sufficient accuracy to understand human and natural contributions to current and future inundation patterns. Model inputs in this data-sparse region include in situ river discharge, satellite-derived rainfall, and the Shuttle Radar Topography Mission (SRTM) floodplain elevation. We found that the model accurately simulated total floodplain inundation, with a Pearson correlation coefficient greater than 0.9, and RMSE less than 700 km², compared to peak inundation greater than 6000 km². Predicted discharge downstream of the floodplain matched measurements (Nash-Sutcliffe efficiency of 0.81), and indicated that net flow from the channel to the floodplain was modeled accurately. However, the spatial pattern of inundation was not well simulated, apparently due to uncertainties in SRTM elevations. We evaluated model results at 250, 500 and 1000-m spatial resolutions, and found that results are insensitive to spatial resolution. We also compared the model output against results from a run of LISFLOOD-FP in which the sub-grid channel parameterization was disabled, finding that the sub-grid parameterization simulated more realistic inundation dynamics.

  16. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  17. Evidence on Features of a DSGE Business Cycle Model from Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2012-01-01

    The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, and a structural break of unknown date...

  18. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single sites' wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data show no power production.

  19. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  20. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  1. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average (KARMA) models, a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
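
    The median-plus-link mechanism can be illustrated with a reduced, AR-only simulation (the simplification and all parameter values are mine): the conditional median follows an AR(1) recursion on the logit scale, and each observation is drawn from a Kumaraswamy distribution whose second shape parameter is solved so that its median matches the linked value.

        import numpy as np

        rng = np.random.default_rng(6)

        def kumaraswamy_given_median(median, a):
            # pick b so the Kumaraswamy(a, b) median equals `median`, then invert the CDF
            b = np.log(0.5) / np.log1p(-median ** a)
            u = rng.uniform()
            return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

        def logit(p):
            return np.log(p / (1.0 - p))

        def inv_logit(z):
            return 1.0 / (1.0 + np.exp(-z))

        T, a = 300, 2.0
        alpha, phi = -0.1, 0.8                    # intercept and AR(1) coefficient
        y = np.empty(T)
        y[0] = 0.5
        for t in range(1, T):
            eta = alpha + phi * logit(y[t - 1])   # dynamic structure with logit link
            y[t] = kumaraswamy_given_median(inv_logit(eta), a)
        print("first values:", np.round(y[:5], 3))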

  2. Evaluation of WRF Simulations With Different Selections of Subgrid Orographic Drag Over the Tibetan Plateau

    Science.gov (United States)

    Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.

    2017-09-01

    Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated with observations and ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with the individual schemes only. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels due to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau are the result of changes in the low-level wind components associated with the geostrophic balance. The enhanced drag directly leads to weakened westerlies but also enhances the ageostrophic flow, in this case reducing (enhancing) the northerlies (southerlies), which bring more warm air across the Himalayan ranges from South Asia (and less cold air from the north) into the interior Tibetan Plateau.

  3. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
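
    A schematic of the bootstrap model-averaging ingredient that BOOT builds on (the double BOOT extension and the epidemiological model details are not reproduced): refit the model search on bootstrap resamples, let each resample contribute the PM coefficient from its AIC-best confounder set, and average. Data are synthetic stand-ins for a PM-mortality series.

        import itertools
        import numpy as np

        rng = np.random.default_rng(7)
        n = 400
        pm = rng.gamma(4.0, 5.0, n)                 # "PM" exposure
        conf = rng.standard_normal((n, 2))          # candidate confounders
        y = 0.02 * pm + 0.5 * conf[:, 0] + rng.standard_normal(n)

        def aic_fit(idx, cols):
            A = np.column_stack([np.ones(len(idx)), pm[idx]] +
                                [conf[idx, c] for c in cols])
            beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            rss = np.sum((y[idx] - A @ beta) ** 2)
            aic = len(idx) * np.log(rss / len(idx)) + 2 * A.shape[1]
            return beta[1], aic                     # PM coefficient and AIC

        subsets = [c for r in range(3) for c in itertools.combinations(range(2), r)]
        est = []
        for _ in range(200):                        # bootstrap resamples
            idx = rng.integers(0, n, n)
            fits = [aic_fit(idx, cols) for cols in subsets]
            est.append(min(fits, key=lambda f: f[1])[0])  # AIC-best model's estimate
        print(f"averaged PM effect: {np.mean(est):.4f} +/- {np.std(est):.4f}")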

  4. Online Prediction under Model Uncertainty Via Dynamic Model Averaging: Application to a Cold Rolling Mill

    National Research Council Canada - National Science Library

    Raftery, Adrian E; Karny, Miroslav; Andrysek, Josef; Ettler, Pavel

    2007-01-01

    We consider the problem of online prediction when it is uncertain what the best prediction model is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the 'correct' model to vary over time.
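
    A minimal DMA-flavoured sketch: model probabilities are propagated with a forgetting factor, a flattened stand-in for the Markov chain over models, and updated by each model's predictive likelihood. Fixed per-model coefficients replace the paper's state-space parameter evolution to keep the example short; data and parameters are synthetic.

        import numpy as np

        rng = np.random.default_rng(8)
        T = 500
        x = rng.standard_normal(T)
        beta_true = np.where(np.arange(T) < T // 2, 1.0, -1.0)  # regime change halfway
        y = beta_true * x + 0.5 * rng.standard_normal(T)

        betas = np.array([1.0, -1.0])       # two fixed candidate models
        alpha, sigma = 0.95, 0.5            # forgetting factor, noise scale
        w = np.array([0.5, 0.5])
        for t in range(T):
            w = w ** alpha                  # forgetting (flattens the probabilities)
            w /= w.sum()
            lik = np.exp(-0.5 * ((y[t] - betas * x[t]) / sigma) ** 2)
            w = w * lik                     # Bayes update with predictive likelihood
            w /= w.sum()
        print("final model probabilities:", np.round(w, 3))  # should favour the second model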

  5. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, which are the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC) and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
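
    The Granger-Ramanathan weights are simple to compute: regress the observed series on the member simulations by least squares (variant A unconstrained, variant B adding an intercept; the constrained variant C is omitted here). The hydrographs below are synthetic.

        import numpy as np

        rng = np.random.default_rng(9)
        T, m = 1000, 4
        truth = np.sin(np.linspace(0, 20, T)) + 2.0
        members = (truth[:, None] + 0.3 * rng.standard_normal((T, m))
                   + np.array([0.2, -0.1, 0.05, 0.3]))     # biased ensemble members

        w_a, *_ = np.linalg.lstsq(members, truth, rcond=None)      # GRA variant A
        Xb = np.column_stack([np.ones(T), members])
        w_b, *_ = np.linalg.lstsq(Xb, truth, rcond=None)           # variant B (with intercept)

        avg = members @ w_a
        nse = 1 - np.sum((truth - avg) ** 2) / np.sum((truth - truth.mean()) ** 2)
        print("variant A weights:", np.round(w_a, 3), f" NSE = {nse:.3f}")
        print("variant B intercept and weights:", np.round(w_b, 3))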

  6. A satellite simulator for TRMM PR applied to climate model simulations

    Science.gov (United States)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions being chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid scale convective precipitation variability deduced from TRMM PR observations is carried out.

  7. Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)

    2017-04-15

    It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds and that winds are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In such approaches, the surface drag generated by the subgrid-scale orography is represented by introducing a sink term in the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill for low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. An increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.

  8. Wind Farm parametrization in the mesoscale model WRF

    DEFF Research Database (Denmark)

    Volker, Patrick; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    Wind farms are not resolved on the mesoscale model grid, but are parametrized as another sub-grid scale process. In order to appropriately capture the wind farm wake recovery and its direction, two properties are important, among others: the total energy extracted by the wind farm and its velocity deficit distribution. In the considered parametrization the individual turbines produce a thrust dependent on the background velocity, and the extracted force is proportional to the turbine area interfacing a grid cell. For the sub-grid scale velocity deficit, the entrainment from the free atmospheric flow into the wake region, which is responsible for the expansion, is taken into account; the sub-grid scale wake expansion is achieved by adding turbulence kinetic energy (proportional to the extracted power) to the flow. The validity of both wind farm parametrizations has been verified against observational data. Furthermore, since the model horizontal distance is several times...

  9. A NEW COMBINED LOCAL AND NON-LOCAL PBL MODEL FOR METEOROLOGY AND AIR QUALITY MODELING

    Science.gov (United States)

    A new version of the Asymmetric Convective Model (ACM) has been developed to describe sub-grid vertical turbulent transport in both meteorology models and air quality models. The new version (ACM2) combines the non-local convective mixing of the original ACM with local eddy diffusion.

  10. Synergies Between Grace and Regional Atmospheric Modeling Efforts

    Science.gov (United States)

    Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.

    2014-12-01

    In the meteorological community, efforts converge towards the implementation of high-resolution ... precipitation, evapotranspiration and runoff data, confirming that the model performs favorably in representing observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model. Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter/longer than 1 month, and spatial scales below/above ERA resolution. We find differences between the regional and global models to be more pronounced at high frequencies, with magnitudes at sub-grid and larger scales corresponding to 1-3 hPa (1-3 cm EWH), relevant for the assessment of post-GRACE concepts.

  11. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit algebraic stress model.
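
    The autocorrelation check described above can be sketched generically: estimate the one-sided autocorrelation of a (here synthetic) signal and integrate it to its first zero crossing to obtain an integral time scale, which can then be compared against the averaging window used for the statistics.

        import numpy as np

        rng = np.random.default_rng(10)
        dt, n = 1e-3, 8192
        sig = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")

        f = sig - sig.mean()
        acf = np.correlate(f, f, mode="full")[n - 1:]   # one-sided autocovariance
        acf /= acf[0]                                   # normalise to autocorrelation

        zero = np.argmax(acf < 0)                       # index of first zero crossing
        T_int = acf[:zero].sum() * dt                   # rectangle-rule integral time scale
        print(f"integral time scale ~ {T_int:.4f} s")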

  12. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The actual load data provided are deseasonalized, and an AutoRegressive Moving Average (ARMA) model is fitted to the data off-line using the Akaike Corrected Information Criterion (AICC). The developed model fits the data successfully. Difficulties occur when the provided data include noise or errors, and also when on-line/adaptive modeling is required. In both cases, and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models in the presence of noise is performed. The results indicate that the proposed method, which is based on multi-model partitioning theory, successfully tackles the problem studied. For validation purposes, the results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC), and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in studies concerning electricity consumption and electricity price forecasts. (author)
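
    A minimal sketch of this kind of off-line order selection, assuming statsmodels (version 0.12 or later, whose ARIMA results expose an `aicc` attribute) and a synthetic series standing in for the Greek load data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_arma_by_aicc(y, max_p=3, max_q=3):
    """Fit ARMA(p, q) candidates to a deseasonalized series and keep
    the one minimizing the corrected Akaike criterion (AICc)."""
    best = (np.inf, None, None)   # (aicc, order, results)
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            if p == 0 and q == 0:
                continue
            fit = ARIMA(y, order=(p, 0, q)).fit()
            if fit.aicc < best[0]:
                best = (fit.aicc, (p, q), fit)
    return best

# synthetic ARMA(2,1) series standing in for the (unavailable) load data
rng = np.random.default_rng(1)
e = rng.standard_normal(600)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 0.7 * y[t - 1] - 0.2 * y[t - 2] + e[t] + 0.4 * e[t - 1]

aicc, order, fit = fit_arma_by_aicc(y)
print(f"selected ARMA{order} with AICc {aicc:.1f}")
```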

  13. Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs

    Directory of Open Access Journals (Sweden)

    H. S. Wheater

    1999-01-01

    Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important with respect to internal model dynamics (e.g. surface feedback effects on atmospheric fluxes, continental runoff production) and to simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required, in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment-scale application. The performance of grid-based hydrological models is evaluated for two large (10⁴ km²) UK catchments. Simple schemes, using sub-grid averages of individual land uses at 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scales. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through application of distribution functions for rainfall and soil moisture at 100 km scale. The results depend on climate, but show interdependence of the representations of sub-grid rainfall and soil moisture distribution. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the scale dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages, are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.

  14. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • A finite Fourier-series model is introduced for evaluating the monthly movement of annual average solar insolation. • A forecast method is presented for predicting this movement, based on the Fourier-series model extended in the least-squares sense. • The movement is shown to be well described by a small number of harmonics, approximately a 6-term Fourier series. • The movement is predicted best with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science, and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation, and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be applied directly for prediction. Combined with the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimal Fourier coefficients in the least-squares sense from previous monthly movements. The proposed method is applied to experiments and yields satisfactory results for different cities (states). It is indicated that the monthly movement of annual average solar insolation is well described by a small number of harmonics, approximately a 6-term Fourier series. The extended Fourier forecasting model predicts the monthly movement of annual average solar insolation best with fewer than 6 Fourier terms.
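
    A minimal sketch of the least-squares Fourier fit and its use for extrapolation (NumPy only; the monthly grid, 12-sample period, and synthetic insolation record are illustrative assumptions):

```python
import numpy as np

def fourier_design(t, period, n_harm):
    """Design matrix [1, cos(k*w*t), sin(k*w*t)] for a truncated Fourier series."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def fit_and_forecast(t, y, t_new, period=12.0, n_harm=6):
    """Least-squares Fourier coefficients on past months, then extrapolation."""
    coef, *_ = np.linalg.lstsq(fourier_design(t, period, n_harm), y, rcond=None)
    return fourier_design(t_new, period, n_harm) @ coef

# synthetic monthly insolation record (kWh/m^2/day, illustrative)
rng = np.random.default_rng(2)
t = np.arange(120)                                   # ten years of months
y = 4.5 + 1.8 * np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(120)
y_next = fit_and_forecast(t, y, np.arange(120, 132))  # forecast the next year
```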

  15. Passive heat transfer in a turbulent channel flow simulation using large eddy simulation based on the lattice Boltzmann method framework

    Energy Technology Data Exchange (ETDEWEB)

    Wu Hong [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, Beihang University, Beijing 100191 (China); Wang Jiao, E-mail: wangjiao@sjp.buaa.edu.cn [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, Beihang University, Beijing 100191 (China); Tao Zhi [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, Beihang University, Beijing 100191 (China)

    2011-12-15

    Highlights: • A double MRT-LBM is used to study heat transfer in turbulent channel flow. • The turbulent Prandtl number is modeled by a dynamic subgrid-scale model. • Temperature gradients are calculated from the non-equilibrium temperature distribution moments. - Abstract: In this paper, a large eddy simulation based on the lattice Boltzmann framework is carried out to simulate the heat transfer in a turbulent channel flow, in which the temperature can be regarded as a passive scalar. A double multiple relaxation time (DMRT) thermal lattice Boltzmann model is employed. While applying DMRT, a multiple relaxation time D3Q19 model is used to simulate the flow field, and a multiple relaxation time D3Q7 model is used to simulate the temperature field. The dynamic subgrid stress model, in which the turbulent eddy viscosity and the turbulent Prandtl number are dynamically computed, is integrated to describe the subgrid effect. Not only the strain rate but also the temperature gradient is calculated locally from the non-equilibrium moments. The Reynolds number based on the shear velocity and channel half height is 180. The molecular Prandtl numbers are set to 0.025 and 0.71. Statistical quantities, such as the average velocity, average temperature, Reynolds stress, root mean square (RMS) velocity fluctuations, RMS temperature, and turbulent heat flux, are obtained and compared with the available data. The results demonstrate the reliability of DMRT-LES in studying turbulence.

  16. A Parameterization for Land-Atmosphere-Cloud Exchange (PLACE): Documentation and Testing of a Detailed Process Model of the Partly Cloudy Boundary Layer over Heterogeneous Land.

    Science.gov (United States)

    Wetzel, Peter J.; Boone, Aaron

    1995-07-01

    This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock. Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these cases, while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases. Sensitivity tests were also run using anemometer-level forcing provided by the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to subgrid variability of certain parameters is demonstrated, particularly to parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity of certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models.

  17. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated using a statistical model, separately for systems with equal and with different numbers of protons and neutrons, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the strong influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  18. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions.

  19. Puff-on-cell model for computing pollutant transport and diffusion

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1975-01-01

    Most finite-difference methods of modeling pollutant dispersion have been shown to introduce numerical pseudodiffusion, which can be much larger than the true diffusion in the fluid flow and can even generate negative values in the predicted pollutant concentrations. Two attempts to minimize the effect of pseudodiffusion are discussed, with emphasis on the particle-in-cell (PIC) method of Sklarew. This paper describes a method that replaces Sklarew's numerous particles in a grid volume with a single Gaussian puff parameterizing the subgrid-scale concentration, thus avoiding the computation of moments required in the model of Egan and Mahoney.

  20. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimates (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory), and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, the survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimates is a method allowing a full and controlled overview of all econometric models which can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.
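
    The flavor of BACE can be sketched in a few lines. The snippet below is a simplified stand-in, not the authors' implementation: it enumerates OLS models over small regressor subsets and weights them by exp(-BIC/2), which approximates the posterior model weights used in BACE; the synthetic indicators are illustrative.

```python
import numpy as np
from itertools import combinations

def bace_average(y, X, max_vars=3):
    """Enumerate OLS models over regressor subsets and average their
    coefficients with exp(-BIC/2) weights."""
    n, m = X.shape
    entries = []
    for k in range(1, max_vars + 1):
        for idx in combinations(range(m), k):
            Z = np.column_stack([np.ones(n), X[:, idx]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
            entries.append((bic, idx, beta))
    bics = np.array([e[0] for e in entries])
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    avg_coef = np.zeros(m)      # model-averaged slope per regressor
    incl_prob = np.zeros(m)     # posterior inclusion probabilities
    for weight, (_, idx, beta) in zip(w, entries):
        for j, col in enumerate(idx):
            avg_coef[col] += weight * beta[j + 1]
            incl_prob[col] += weight
    return avg_coef, incl_prob

# usage on synthetic survey-based indicators
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 6))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.standard_normal(200)
coef, prob = bace_average(y, X)
```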

  1. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  2. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of autoregressive (AR) models to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and then a method for its solution is given. Numerical results illustrate the application of the proposed method. (author)
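
    One standard route to such a system, sketched below under the usual stationarity assumptions (the paper derives its own solution method): for an ARMA(p, q) process the autocovariances satisfy r(k) = Σᵢ aᵢ r(k−i) for k > q, which yields a linear system for the AR coefficients.

```python
import numpy as np

def sample_autocov(x, max_lag):
    """Biased sample autocovariances r(0..max_lag)."""
    x = x - x.mean()
    n = x.size
    return np.array([x[: n - k] @ x[k:] / n for k in range(max_lag + 1)])

def generalized_yule_walker(x, p, q):
    """AR coefficients of an ARMA(p, q) process from the generalized
    Yule-Walker relations r(k) = sum_i a_i r(k - i), k = q+1 .. q+p."""
    r = sample_autocov(x, p + q)
    R = np.array([[r[abs(q + 1 + m - i)] for i in range(1, p + 1)]
                  for m in range(p)])
    rhs = r[q + 1 : q + p + 1]
    return np.linalg.solve(R, rhs)

# usage on a synthetic ARMA(2,1) record standing in for reactor noise data
rng = np.random.default_rng(4)
e = rng.standard_normal(100000)
x = np.zeros(100000)
for t in range(2, x.size):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + e[t] + 0.5 * e[t - 1]
print(generalized_yule_walker(x, p=2, q=1))   # should approach [1.2, -0.4]
```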

  3. Bayesian model averaging using particle filtering and Gaussian mixture modeling : Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive

  4. Atmospheric Boundary Layer Modeling for Combined Meteorology and Air Quality Systems

    Science.gov (United States)

    Atmospheric Eulerian grid models for mesoscale and larger applications require sub-grid models for turbulent vertical exchange processes, particularly within the Planetary Boundary Layer (PBL). In combined meteorology and air quality modeling systems consistent PBL modeling of wi...

  5. Estimation and Forecasting in Vector Autoregressive Moving Average Models for Rich Datasets

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Kapetanios, George

    We address the issue of modelling and forecasting macroeconomic variables using rich datasets, by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (...

  6. Free-free opacity in dense plasmas with an average atom model

    International Nuclear Information System (INIS)

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; Kilcrease, David Parker; Starrett, Charles Edward

    2017-01-01

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity via the Kramers-Kronig dispersion relations. Comparison with other methods for dense deuterium shows excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.
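
    The Kramers-Kronig step can be illustrated numerically. The sketch below is only schematic: the sign convention, frequency grid, and the crude principal-value treatment (skipping the singular grid point) are assumptions, and a production code would use a more careful quadrature.

```python
import numpy as np

def kk_imaginary(omega, sigma1):
    """Imaginary part of the optical conductivity from its real part via the
    Kramers-Kronig relation
        sigma2(w) = -(2 w / pi) PV int_0^inf sigma1(w') / (w'^2 - w^2) dw'."""
    sigma2 = np.empty_like(sigma1)
    for i, w in enumerate(omega):
        denom = omega**2 - w**2
        denom[i] = np.inf                 # skip the singular point (crude PV)
        sigma2[i] = -(2.0 * w / np.pi) * np.trapz(sigma1 / denom, omega)
    return sigma2

# usage on a Drude conductivity, whose KK partner is known analytically
omega = np.linspace(1e-3, 50.0, 4000)
tau, s0 = 1.0, 1.0
sigma1 = s0 / (1.0 + (omega * tau) ** 2)   # Drude real part
sigma2 = kk_imaginary(omega, sigma1)       # approx. s0*w*tau/(1+(w*tau)^2)
```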

  7. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  8. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have thus far been investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to a shoreline change problem...

  9. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and modal testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed, and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs

  10. A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center

    Science.gov (United States)

    Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.

    2013-01-01

    This report on a Virginia Institute of Marine Science project demonstrates that sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features, for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. Rainfall over land accumulates in the ditches and channels resolved by the model sub-grid; this capability was tested by simulating the run-off induced by heavy precipitation. Possessing capabilities for both storm surge and run-off simulations, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent agreement of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and the spatial patterns of real storm wrack line measurements with the model results for Hurricanes Isabel (2003) and Irene (2011) and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.

  11. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance matrices.

  12. Collaborative Research: Lagrangian Modeling of Dispersion in the Planetary Boundary Layer

    National Research Council Canada - National Science Library

    Weil, Jeffrey

    2003-01-01

    ...), using Lagrangian "particle" models coupled with large-eddy simulation (LES) fields. A one-particle model for the mean concentration field was enhanced by a theoretically improved treatment of the LES subgrid-scale (SGS) velocities...

  13. Model averaging in the analysis of leukemia mortality among Japanese A-bomb survivors

    International Nuclear Information System (INIS)

    Richardson, David B.; Cole, Stephen R.

    2012-01-01

    Epidemiological studies often include numerous covariates, with a variety of possible approaches to control for confounding of the association of primary interest, as well as a variety of possible models for the exposure-response association of interest. Walsh and Kaiser (Radiat Environ Biophys 50:21-35, 2011) advocate a weighted averaging of the models, where the weights are a function of overall model goodness of fit and degrees of freedom. They apply this method to analyses of radiation-leukemia mortality associations among Japanese A-bomb survivors. We caution against such an approach, noting that the proposed model averaging approach prioritizes the inclusion of covariates that are strong predictors of the outcome but may be irrelevant as confounders of the association of interest, and penalizes adjustment for covariates that are confounders of the association of interest but may contribute little to overall model goodness of fit. We offer a simple illustration of how this approach can lead to biased results. The proposed model averaging approach may also be suboptimal as a way to handle competing model forms for an exposure-response association of interest, given adjustment for the same set of confounders; alternative approaches, such as hierarchical regression, may provide a more useful way to stabilize risk estimates in this setting. (orig.)

  14. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  15. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding and high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high-resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high-resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high-resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with

  16. Statistics of the Navier–Stokes-alpha-beta regularization model for fluid turbulence

    International Nuclear Information System (INIS)

    Hinz, Denis F; Kim, Tae-Yeon; Fried, Eliot

    2014-01-01

    We explore one-point and two-point statistics of the Navier–Stokes-αβ regularization model at moderate Reynolds number (Re ≈ 200) in homogeneous isotropic turbulence. The results are compared to the limit cases of the Navier–Stokes-α model and the Navier–Stokes-αβ model without subgrid-scale stress, as well as with high-resolution direct numerical simulation. After reviewing spectra of different energy norms of the Navier–Stokes-αβ model, the Navier–Stokes-α model, and the Navier–Stokes-αβ model without subgrid-scale stress, we present probability density functions and normalized probability density functions of the filtered and unfiltered velocity increments, along with longitudinal velocity structure functions of the regularization models and direct numerical simulation results. We highlight differences in the statistical properties of the unfiltered and filtered velocity fields entering the governing equations of the Navier–Stokes-α and Navier–Stokes-αβ models and discuss the usability of both velocity fields for realistic flow predictions. The influence of the modified viscous term in the Navier–Stokes-αβ model is studied through comparison to the case where the underlying subgrid-scale stress tensor is neglected. Whereas the filtered velocity field is found to have physically more viable probability density functions and structure functions for the approximation of direct numerical simulation results, the unfiltered velocity field is found to have flatness factors close to direct numerical simulation results. (paper)

  17. LOW-MASS GALAXY FORMATION IN COSMOLOGICAL ADAPTIVE MESH REFINEMENT SIMULATIONS: THE EFFECTS OF VARYING THE SUB-GRID PHYSICS PARAMETERS

    International Nuclear Information System (INIS)

    Colín, Pedro; Vazquez-Semadeni, Enrique; Avila-Reese, Vladimir; Valenzuela, Octavio; Ceverino, Daniel

    2010-01-01

    We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (∼7 × 10¹⁰ h⁻¹ M_sun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, n_SF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ∼ 10²¹ cm⁻², or ∼8 M_sun pc⁻². In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution. Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities n_SF produce larger stellar effective radii R_e, less peaked circular velocity curves V_c(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, R_e increases (by a factor of ∼2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection, driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows-which are easier to produce in low

  18. A novel Generalized State-Space Averaging (GSSA) model for advanced aircraft electric power systems

    International Nuclear Information System (INIS)

    Ebrahimi, Hadi; El-Kishky, Hassan

    2015-01-01

    Highlights: • A study model is developed for aircraft electric power systems. • A novel GSSA model is developed for the interconnected power grid. • The system's dynamics are characterized under various conditions. • The averaged results are compared and verified against the actual model. • The obtained measured values are validated against available aircraft standards. - Abstract: The growing complexity of Advanced Aircraft Electric Power Systems (AAEPS) has made conventional state-space averaging models inadequate for system analysis and characterization. This paper presents a novel Generalized State-Space Averaging (GSSA) model for the system analysis, control and characterization of AAEPS. The primary objective of this paper is to introduce a mathematically elegant and computationally simple model to capture the AAEPS behavior at the critical nodes of the electric grid, and to reduce some or all of the drawbacks (complexity, cost, simulation time, etc.) associated with sensor-based monitoring and the computer-aided design software simulations popularly used for AAEPS characterization. It is shown in this paper that the GSSA approach overcomes the limitations of the conventional state-space averaging method, which fails to predict the behavior of AC signals in a circuit analysis. Unlike the conventional averaging method, the GSSA model presented in this paper includes both DC and AC components, capturing the key dynamic and steady-state characteristics of aircraft electric systems. The developed model is then examined for system visualization and accuracy of computation under different loading scenarios. Through several case studies, the applicability and effectiveness of the GSSA method is verified by comparison to an actual real-time simulation model obtained from the Powersim 9 (PSIM9) software environment. The simulation results represent voltage, current and load power at the major nodes of the AAEPS. It has been demonstrated that

  19. Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

    Science.gov (United States)

    Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

    2012-01-01

    Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved predictions. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amount of flight and experimental data available presents an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data

  20. Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics

    2013-07-01

    Large-eddy simulation of spray combustion is developing rapidly, but its combustion models are seldom validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was performed using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The statistically averaged temperature obtained by LES is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.

  1. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. Semi-analytical wave functions and the corresponding energy eigenvalues, containing only a numerical factor, are obtained by fitting the potential function of the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models; the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  2. Effect of LES models on the entrainment characteristics in a turbulent planar jet

    Science.gov (United States)

    Chambel Lopes, Diogo; da Silva, Carlos; Raman, Venkat

    2012-11-01

    The effects of subgrid-scale (SGS) models on the jet spreading rate and the centreline passive scalar decay rate are assessed and compared. The modelling of the subgrid-scale fluxes is particularly challenging in the turbulent/non-turbulent (T/NT) region that divides the two regions of the jet flow: the outer region, where the flow is irrotational, and the inner region, where the flow is turbulent. It has been shown that important Reynolds stresses exist near the T/NT interface and that these stresses determine in part the mixing and combustion rates in jets. In this work, direct and large-eddy simulations (DNS/LES) of turbulent planar jets are used to study the role of subgrid-scale models in the integral characteristics of passive scalar mixing in a jet. The LES results show that different SGS models lead to different spreading rates for the velocity and scalar fields, and that the scalar quantities are more affected than the velocity, e.g. SGS models affect the centreline mean scalar decay more strongly than the centreline mean velocity decay. The results suggest the need for a minimum resolution close to the Taylor micro-scale in order to recover the correct integral quantities, which can be explained by recent results on the dynamics of the T/NT interface.

  3. Analysis of subgrid scale mixing using a hybrid LES-Monte-Carlo PDF method

    International Nuclear Information System (INIS)

    Olbricht, C.; Hahn, F.; Sadiki, A.; Janicka, J.

    2007-01-01

    This contribution introduces a hybrid LES-Monte-Carlo method for a coupled solution of the flow and the multi-dimensional scalar joint pdf in two complex mixing devices. For this purpose, an Eulerian Monte-Carlo method is used. First, a complex mixing device (jet-in-crossflow, JIC) is presented, for which the stochastic convergence and the coherency between the scalar field solution obtained via finite-volume methods and that from the stochastic solution of the pdf are evaluated for the hybrid method. Results are compared to experimental data. Second, an extensive investigation of the micromixing on the basis of assumed-shape and transported SGS-pdfs is carried out in a configuration of practical relevance: a mixing chamber with two opposite rows of jets penetrating a crossflow (multi-jet-in-crossflow, MJIC). Some numerical results are compared to available experimental data and to RANS-based results. It turns out that the hybrid LES-Monte-Carlo method enables a detailed analysis of the mixing at the subgrid level

  4. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure.

  5. A new nonlinear turbulence model based on Partially-Averaged Navier-Stokes Equations

    International Nuclear Information System (INIS)

    Liu, J T; Wu, Y L; Cai, C; Liu, S H; Wang, L Q

    2013-01-01

    The Partially-Averaged Navier-Stokes (PANS) model is recognized as a bridging method between Reynolds-averaged Navier-Stokes (RANS) and direct numerical simulation (DNS). The PANS model is intended for any filter width, from RANS to DNS, and shares some similarities with the currently popular URANS (unsteady RANS) method. In this paper, a new PANS model is proposed, based on the RNG k-ε turbulence model. The standard and RNG k-ε turbulence models are both isotropic models, as are the corresponding PANS models. The shear stress in those PANS models is solved by a linear equation, but the linear hypothesis is not accurate in the simulation of complex flows, such as stall phenomena. Here, the shear stress is obtained by the nonlinear method proposed by Ehrhard, yielding a nonlinear PANS model. The pressure coefficient on the suction side of a NACA0015 hydrofoil was predicted, and the result agrees well with the experimental data, which proves that the nonlinear PANS model can capture high-pressure-gradient flows. A low-specific-speed centrifugal pump was used to verify the capability of the nonlinear PANS model. The comparison between the simulation results for the centrifugal pump and Particle Image Velocimetry (PIV) results proves that the nonlinear PANS model can be used in the prediction of complex flow fields.

  6. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  7. Achieving scale-independent land-surface flux estimates - Application of the Multiscale Parameter Regionalization (MPR) to the Noah-MP land-surface model across the contiguous USA

    Science.gov (United States)

    Thober, S.; Mizukami, N.; Samaniego, L. E.; Attinger, S.; Clark, M. P.; Cuntz, M.

    2016-12-01

    Land-surface models use a variety of process representations to calculate terrestrial energy, water and biogeochemical fluxes. These process descriptions are usually derived from point measurements but are scaled to much larger resolutions in applications that range from about 1 km in catchment hydrology to 100 km in climate modelling. Both hydrologic and climate models are nowadays run at different spatial resolutions using the exact same land-surface representations. A fundamental criterion for the physical consistency of land-surface simulations across scales is that a flux estimated over a given area is independent of the spatial model resolution (the flux-matching criterion). The Noah-MP land-surface model considers only one soil and land cover type per model grid cell, without any representation of subgrid variability, implying weak flux-matching. A fractional approach simulates subgrid variability, but it is computationally more demanding than using effective parameters and is applied only to land cover in current land-surface schemes. A promising approach to derive scale-independent parameters is the Multiscale Parameter Regionalization (MPR) technique, which consists of two steps: first, it applies transfer functions directly to high-resolution data (such as 100 m soil maps) to derive high-resolution model parameter fields, acknowledging the full subgrid variability; second, it upscales these high-resolution parameter fields to the model resolution using appropriate upscaling operators. MPR has been shown to substantially improve the scalability of hydrologic models. Here, we apply the MPR technique to the Noah-MP land-surface model for a large sample of basins distributed across the contiguous USA. Specifically, we evaluate the flux-matching criterion for several hydrologic fluxes, such as evapotranspiration and total runoff, at scales ranging from 3 km to 48 km. We also investigate a p-norm scaling operator that goes beyond the current
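
    A minimal sketch of the two MPR steps (the pedotransfer function, its coefficients, and the synthetic texture maps are illustrative assumptions, not the calibrated transfer functions of the study):

```python
import numpy as np

def transfer_function(sand, clay, a=0.45, b=-0.12, c=-0.08):
    """Hypothetical pedotransfer function: porosity from 100 m soil texture
    fractions (coefficients a, b, c are illustrative, not calibrated)."""
    return a + b * sand + c * clay

def upscale_pnorm(field, block, p=1.0):
    """p-norm upscaling operator: p=1 arithmetic mean, p=-1 harmonic mean,
    small |p| approaching the geometric mean."""
    n0, n1 = (s // block for s in field.shape)
    blocks = field[: n0 * block, : n1 * block].reshape(n0, block, n1, block)
    return np.mean(blocks ** p, axis=(1, 3)) ** (1.0 / p)

# step 1: parameters at the 100 m data resolution (synthetic texture maps)
rng = np.random.default_rng(5)
sand = rng.uniform(0.1, 0.8, size=(480, 480))
clay = rng.uniform(0.05, 0.4, size=(480, 480))
porosity_hi = transfer_function(sand, clay)

# step 2: upscale to two model resolutions; flux matching asks that simulated
# fluxes, not just parameters, stay consistent between the two grids
porosity_3km = upscale_pnorm(porosity_hi, block=30)     # 30 x 100 m = 3 km
porosity_48km = upscale_pnorm(porosity_hi, block=480)   # 480 x 100 m = 48 km
```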

  8. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
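
    A minimal sketch of the averaging idea (the cycle shape and the Peukert-style capacity law used to turn the average current into a duration are illustrative assumptions, not taken from the report):

```python
import numpy as np

# hypothetical one-second current profile (amperes) loosely shaped like an
# SAE J227a schedule: accelerate, cruise, coast, regenerative braking
cycle = np.concatenate([
    np.linspace(0.0, 120.0, 20),   # acceleration ramp
    np.full(40, 60.0),             # cruise
    np.zeros(15),                  # coast
    np.full(10, -30.0),            # regeneration (negative = charging)
])

# time-averaged net current; regeneration credited one-to-one, as in the model
i_avg = cycle.mean()

# Peukert-style capacity at this average rate (constants are illustrative)
C_rated, i_rated, k = 180.0, 18.0, 1.25       # Ah, rated current A, exponent
C_usable = C_rated * (i_rated / i_avg) ** (k - 1.0)
hours = C_usable / i_avg
print(f"average current {i_avg:.1f} A -> predicted duration {hours:.2f} h")
```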

  9. A Tidally Averaged Sediment-Transport Model for San Francisco Bay, California

    Science.gov (United States)

    Lionberger, Megan A.; Schoellhamer, David H.

    2009-01-01

    A tidally averaged sediment-transport model of San Francisco Bay was incorporated into a tidally averaged salinity box model previously developed and calibrated using salinity, a conservative tracer (Uncles and Peterson, 1995; Knowles, 1996). The Bay is represented in the model by 50 segments composed of two layers: one representing the channel (>5-meter depth) and the other the shallows (0- to 5-meter depth). Calculations are made using a daily time step, and simulations can be made on the decadal time scale. The sediment-transport model includes an erosion-deposition algorithm, a bed-sediment algorithm, and sediment boundary conditions. Erosion and deposition of bed sediments are calculated explicitly, and suspended sediment is transported by implicitly solving the advection-dispersion equation. The bed-sediment model simulates the increase in bed strength with depth, owing to consolidation of fine sediments that make up San Francisco Bay mud. The model is calibrated to either net sedimentation calculated from bathymetric-change data or measured suspended-sediment concentration. Specified boundary conditions are the tributary fluxes of suspended sediment and the suspended-sediment concentration in the Pacific Ocean. Results of model calibration and validation show that the model simulates well the trends in suspended-sediment concentration associated with tidal fluctuations, residual velocity, and wind stress, although the spring-neap tidal variability in suspended-sediment concentration was consistently underestimated. Model validation also showed poor simulation of seasonal sediment pulses from the Sacramento-San Joaquin River Delta at Point San Pablo, because the pulses enter the Bay over only a few days and the fate of the pulses is determined by intra-tidal deposition and resuspension that are not included in this tidally averaged model. The model was calibrated to net-basin sedimentation to calculate budgets of sediment and sediment-associated contaminants.

  10. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    Science.gov (United States)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.

  11. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    Science.gov (United States)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
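
    A minimal sketch of the delta-function quadrature closure (not the FDS implementation; the two-environment cell, IEM-style relaxation, and rate law are illustrative assumptions):

```python
import numpy as np

def mean_source(weights, states, rate):
    """Mean chemical source term closed by a delta-function quadrature:
    PDF(phi) = sum_i w_i * delta(phi - phi_i)."""
    return float(np.dot(weights, rate(states)))

def mix(weights, states, dt, tau_mix):
    """IEM-style mixing: each delta location relaxes toward the cell mean
    (a stand-in for the mixing-model component of the batch reactor)."""
    mean = np.dot(weights, states)
    return mean + np.exp(-dt / tau_mix) * (states - mean)

# two-environment cell: 30% unmixed fuel pocket, 70% nearly burned-out gas
w = np.array([0.3, 0.7])
Y = np.array([1.0, 0.05])              # fuel mass fraction per environment
rate = lambda Y: -50.0 * Y**2          # hypothetical consumption rate law
for _ in range(5):                     # one reactor step: mix, then react
    Y = mix(w, Y, dt=1e-3, tau_mix=5e-3)
    Y = Y + 1e-3 * rate(Y)
    print(mean_source(w, Y, rate))
```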

  12. Evaluation of LES models for flow over bluff body from engineering ...

    Indian Academy of Sciences (India)

    one-equation model for subgrid kinetic energy is the best choice. ... He also contemplated using the Spalart & Allmaras (1992) one-equation RANS model for this ... characteristics of the turbulent flow near the wake of a square cylinder. J. Fluid Mech ...

  13. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  14. Evidence on a Real Business Cycle Model with Neutral and Investment-Specific Technology Shocks using Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2010-01-01

    The empirical support for a real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure. This procedure makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is

  15. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the Zhengzhou area and to apply it to weather derivative pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to fit a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city; the slow convergence of the HDD call price is observed over 100,000 simulations. The methods of this research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
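
    The pricing recipe lends itself to a short sketch: a seasonal mean, an Euler-discretized Ornstein-Uhlenbeck step, and a Monte Carlo HDD payoff. The seasonal function, OU parameters, strike, and tick size below are invented for illustration, and discounting is omitted:

      import numpy as np

      rng = np.random.default_rng(0)
      kappa, sigma = 0.25, 2.0                # mean-reversion speed, volatility
      n_days, n_paths = 90, 100_000           # contract length (days), MC paths
      base, strike, tick = 18.0, 600.0, 1.0   # HDD base temp (deg C), strike, payoff/HDD

      def seasonal_mean(day):
          return 14.0 + 10.0 * np.sin(2.0 * np.pi * (day - 100.0) / 365.25)

      T = np.full(n_paths, seasonal_mean(0))
      hdd = np.zeros(n_paths)
      for day in range(n_days):
          # Euler step of the OU process dT = kappa*(theta - T) dt + sigma dW, dt = 1 day
          T += kappa * (seasonal_mean(day) - T) + sigma * rng.standard_normal(n_paths)
          hdd += np.maximum(base - T, 0.0)

      price = tick * np.maximum(hdd - strike, 0.0).mean()
      print(price)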

  16. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.; Samtaney, Ravi

    2017-01-01

    layer employs stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed

  17. Application of the Periodic Average System Model in Dam Deformation Analysis

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2015-01-01

    Full Text Available Dams are among the most important hydraulic engineering facilities used for water supply, flood control, and hydroelectric power. Monitoring of dams is crucial because deformation may occur. How to obtain deformation information and then judge safety conditions is the key and difficult problem in the dam deformation monitoring field. This paper proposes the periodic average system model and introduces the concept of "settlement activity" for the dam deformation problem. Long-term deformation monitoring data were collected at a pumped-storage power station; the model, combined with settlement activity, is used for single-point deformation analysis, and the whole settlement activity profile is then drawn by clustering analysis. Considering the cumulative settlement value of every point, the dam deformation trend is analyzed in an intuitive way. A combined single-point and multi-point analysis mode is thus realized. The results show that the key deformation information of the dam can be easily grasped by applying the periodic average system model together with the distribution diagram of settlement activity. Above all, the ideas of this research provide an effective method for dam deformation analysis.

  18. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise, but it decreases the number of image data available for analysis. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. Following the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for the hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  19. Time line cell tracking for the approximation of lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCSs is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCSs on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamics examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations.

  20. Modelling the atmospheric dispersion of foot-and-mouth disease virus for emergency preparedness

    DEFF Research Database (Denmark)

    Sørensen, J.H.; Jensen, C.O.; Mikkelsen, T.

    2001-01-01

    A model system for simulating airborne spread of foot-and-mouth disease (FMD) is described. The system includes a virus production model and the local- and mesoscale atmospheric dispersion model RIMPUFF linked to the LINCOM local-scale flow model. LINCOM is used to calculate the sub-grid scale flow...

  1. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but it is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...

  2. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in wind farms, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition of ...

  3. Adding complex terrain and stable atmospheric condition capability to the OpenFOAM-based flow solver of the simulator for on/offshore wind farm applications (SOWFA)

    Directory of Open Access Journals (Sweden)

    Churchfield Matthew J.

    2014-01-01

    Full Text Available The National Renewable Energy Laboratory's Simulator for On/Offshore Wind Farm Applications contains an OpenFOAM-based flow solver for performing large-eddy simulation of flow through wind plants. The solver computes the atmospheric boundary layer flow and models turbines with actuator lines. Until recently, the solver was limited to flows over flat terrain and could only use the standard Smagorinsky subgrid-scale model. In this work, we present our improvements to the flow solver that enable us to (1) use any OpenFOAM-standard subgrid-scale model and (2) simulate flow over complex terrain. We used the flow solver to compute a stably stratified atmospheric boundary layer using both the standard and the Lagrangian-averaged scale-independent dynamic Smagorinsky models. Surprisingly, the results using the standard Smagorinsky model compare well to other researchers' results for the same case, although it is often said that the standard Smagorinsky model is too dissipative for accurate stable-stratification calculations. The scale-independent dynamic subgrid-scale model produced poor results, probably due to spikes in the model constant, with values as high as 4.6. We applied a simple bounding of the model constant to remove these spikes, which caused the model to produce results much more in line with other researchers' results. We also computed flow over simple hilly terrain and performed some basic qualitative analysis to verify the proper operation of the terrain-local surface stress model we employed.
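
    The bounding fix mentioned above amounts to clipping the dynamically computed coefficient field before the eddy viscosity is formed. A sketch, with an upper bound chosen by us since the abstract does not state the exact value used:

      import numpy as np

      def bounded_eddy_viscosity(cs2_dyn, delta, strain_mag, cs2_max=0.09):
          """Form nu_sgs = (Cs*delta)^2 * |S| after clipping the dynamic coefficient.

          cs2_dyn    : Cs^2 field from the dynamic procedure (may spike or go negative)
          delta      : filter width
          strain_mag : resolved strain-rate magnitude |S|
          cs2_max    : hypothetical upper bound, here (0.3)^2
          """
          cs2 = np.clip(cs2_dyn, 0.0, cs2_max)   # removes spikes such as Cs^2 = 4.6
          return cs2 * delta**2 * strain_mag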

  4. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains while considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox proportional hazard model (Cox-PHM), as they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties in failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure

  5. A new model for friction under shock conditions

    Directory of Open Access Journals (Sweden)

    Dambakizi F.

    2011-01-01

    Full Text Available This article is aimed at the development of a new model for friction under shock conditions. Thanks to a subgrid model and a specific Coulomb friction law, it takes into account the interface temperature and deformation, as well as the influence of asperities when the contact pressure is relatively low (≤ 3 GPa).

  6. ISS modeling strategy for the numerical simulation of turbulent sub-channel liquid-vapor flows

    International Nuclear Information System (INIS)

    Olivier Lebaigue; Benoit Mathieu; Didier Jamet

    2005-01-01

    Full text of publication follows: The general objective is to perform numerical simulation of the liquid-vapor turbulent two-phase flows that occur in sub-channels of a nuclear plant assembly under nominal or incidental situations. Additional features concern nucleate boiling at the surface of fuel rods and the sliding of vapor bubbles on this surface with possible dynamic contact lines. The Interfaces and Sub-grid Scales (ISS) modeling strategy for numerical simulations is one of the possible two-phase equivalents of the single-phase LES concept. It consists in solving the two-phase flow features at the scales that are resolved by the grid of the numerical method, and taking into account the unresolved scales with sub-grid models. Interfaces are tracked in a DNS-like approach, while specific features of the behavior of interfaces, such as contact-line physics, coalescence and fragmentation, and the smallest scales of turbulence within each phase, have an unresolved-scale part that is modeled. The problem of modeling the smallest scales of turbulence is rather simple, even if the classical situation is altered by the presence of the interfaces. In a typical sub-channel situation (e.g., a 15 MPa, 3.5 m·s⁻¹ water flow in a PWR sub-channel), the Kolmogorov scale is ca. 1 μm whereas typical bubble sizes are supposed to be close to 150 μm. Therefore, the use of a simple sub-grid model between, e.g., 1 and 20 μm allows a drastic reduction of the number of nodes in the space discretization, while it remains possible to validate by comparison to true DNS results. Other sub-grid models have been considered to recover physical phenomena that cannot be captured with a realistic discretization: they rely on physical scales from molecular size to 1 μm. In these cases, the use of sub-grid models is no longer a matter of CPU-time and memory savings only, but also a cornerstone for recovering physical behavior. From this point of view at least we are no longer performing true

  7. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained by the maximum-likelihood method together with the standard errors of the residuals. The adequacy of the selected model is assessed through correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and through normality diagnostics (kernel and normal density curves of the histogram and the Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years with the help of the selected model.
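
    For readers who want to reproduce the model class, the selected SARIMA (1, 0, 0) × (0, 1, 1)₁₂ fit can be sketched with statsmodels; the synthetic series below stands in for the 1981-2015 station data, which are not reproduced here:

      import numpy as np
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(1)
      months = np.arange(420)                   # 35 years of monthly data
      temps = 25.0 + 8.0 * np.sin(2.0 * np.pi * months / 12.0) \
              + rng.normal(0.0, 1.0, months.size)

      y = np.log(temps)                         # log transform, as in the paper
      model = SARIMAX(y, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
      res = model.fit(disp=False)
      forecast = np.exp(res.get_forecast(steps=36).predicted_mean)   # next 3 years
      print(res.bic, forecast[:12])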

  8. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Diks, Cees G H [NON LANL]; Clark, Martyn P [NON LANL]

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
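
    The EM side of the comparison is short enough to sketch. Following the mixture-of-normals form of BMA (here with member-specific variances, one common variant), the E-step computes responsibilities and the M-step updates weights and variances; DREAM would instead sample these parameters with MCMC:

      import numpy as np
      from scipy.stats import norm

      def bma_em(F, y, n_iter=200):
          """EM estimates of BMA weights/variances. F: (T, K) forecasts; y: (T,) obs."""
          T, K = F.shape
          w = np.full(K, 1.0 / K)
          s2 = np.full(K, np.var(y - F.mean(axis=1)))
          for _ in range(n_iter):
              z = w * norm.pdf(y[:, None], loc=F, scale=np.sqrt(s2))   # E-step
              z /= z.sum(axis=1, keepdims=True)
              w = z.mean(axis=0)                                       # M-step
              s2 = (z * (y[:, None] - F) ** 2).sum(axis=0) / z.sum(axis=0)
          return w, s2

      rng = np.random.default_rng(2)
      y = rng.normal(0.0, 1.0, 300)                                 # synthetic observations
      F = y[:, None] + rng.normal(0.0, [0.5, 1.0, 2.0], (300, 3))   # three ensemble members
      print(bma_em(F, y))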

  9. Zonally averaged chemical-dynamical model of the lower thermosphere

    International Nuclear Information System (INIS)

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N₂, O₂, and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O₂ and N₂ variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of the OI (5577 Å) green line emission intensity are calculated using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model

  10. Modeling Water Exchange and Contaminant Transport through a Baltic Coastal Region

    International Nuclear Information System (INIS)

    Engqvist, Anders; Doeoes, Kristofer; Andrejev, Oleg

    2006-01-01

    The water exchange of the Baltic coastal zone is characterized by its seasonally varying regimes. In the safety assessment of a potential repository for spent nuclear fuel, it is important to assess the consequences of a hypothetical leak of radionuclides through the seabed into a waterborne transport phase. In particular, estimates of the associated residence times in the near-shore coastal zone are of interest. There are several methods to quantify such measures, of which three are presented here. Using the coastal location of Forsmark (Sweden) as an example, methods based on passive tracers, particle trajectories, and the average age distribution of exogenous water parcels are compared for a representative one-year cycle. Tracer-based methods can simulate diffusivity more realistically than the other methods. Trajectory-based methods can handle Lagrangian dispersion processes due to advection but neglect diffusion on the sub-grid scale. The method based on the concept of average age (AvA) of exogenous water can include all such sources simultaneously: not only boundary water bodies but also various (fresh)water discharges. Due to the inclusion of sub-grid diffusion, this method gives a smoother measure of the water renewal. It is shown that backward-in-time trajectories and AvA times are essentially equipollent methods, yielding correlated results within the limits set by the diffusivity

  11. Application of a New Hybrid RANS/LES Modeling Paradigm to Compressible Flow

    Science.gov (United States)

    Oliver, Todd; Pederson, Clark; Haering, Sigfried; Moser, Robert

    2017-11-01

    It is well-known that traditional hybrid RANS/LES modeling approaches suffer from a number of deficiencies. These deficiencies often stem from overly simplistic blending strategies based on scalar measures of turbulence length scale and grid resolution, and from the use of isotropic subgrid models in LES regions. A recently developed hybrid modeling approach has shown promise in overcoming these deficiencies in incompressible flows [Haering, 2015]. In this approach, RANS/LES blending is accomplished using a hybridization parameter that is governed by an additional model transport equation and is driven to achieve equilibrium between the resolved and unresolved turbulence for the given grid. Further, the model uses a tensor eddy viscosity that is formulated to represent the effects of anisotropic grid resolution on subgrid quantities. In this work, this modeling approach is extended to compressible flows and implemented in the compressible flow solver SU2 (http://su2.stanford.edu/). We discuss both modeling and implementation challenges and show preliminary results for compressible flow test cases with smooth-wall separation.

  12. Implementation, Comparison and Application of an Average Simulation Model of a Wind Turbine Driven Doubly Fed Induction Generator

    Directory of Open Access Journals (Sweden)

    Lidula N. Widanagama Arachchige

    2017-10-01

    Full Text Available Wind turbine driven doubly-fed induction generators (DFIGs) are widely used in the wind power industry. With the increasing penetration of wind farms, analysis of their effect on power systems has become a critical requirement. This paper presents the modeling of wind turbine driven DFIGs using conventional vector controls in a detailed model of a DFIG that represents the power electronics (PE) converters with device-level models, and proposes an average model that eliminates the PE converters. The PSCAD/EMTDC™ (4.6) electromagnetic transient simulation software is used to develop the detailed and the proposed average models of a DFIG. The comparison of the two models reveals that the designed average DFIG model is adequate for simulating and analyzing most transient conditions.

  13. Introducing Subgrid-scale Cloud Feedbacks to Radiation for Regional Meteorological and Climate Modeling

    Science.gov (United States)

    Convection systems and associated cloudiness directly influence regional and local radiation budgets, and dynamics and thermodynamics through feedbacks. However, most subgrid-scale convective parameterizations in regional weather and climate models do not consider cumulus cloud ...

  14. Reynolds-Averaged Navier-Stokes Modeling of Turbulent Free Shear Layers

    Science.gov (United States)

    Schilling, Oleg

    2017-11-01

    Turbulent mixing of gases in free shear layers is simulated using a weighted essentially nonoscillatory implementation of ɛ- and L-based Reynolds-averaged Navier-Stokes models. Specifically, the air/air shear layer with velocity ratio 0.6 studied experimentally by Bell and Mehta (1990) is modeled. The detailed predictions of turbulent kinetic energy dissipation rate and lengthscale models are compared to one another, and to the experimental data. The role of analytical, self-similar solutions for model calibration and physical insights is also discussed. It is shown that turbulent lengthscale-based models are unable to predict both the growth parameter (spreading rate) and turbulent kinetic energy normalized by the square of the velocity difference of the streams. The terms in the K, ɛ, and L equation budgets are compared between the models, and it is shown that the production and destruction mechanisms are substantially different in the ɛ and L equations. Application of the turbulence models to the Brown and Roshko (1974) experiments with streams having various velocity and density ratios is also briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  15. Advanced subgrid modeling for Multiphase CFD in CASL VERA tools

    International Nuclear Information System (INIS)

    Baglietto, Emilio; Gilman, Lindsey; Sugrue, Rosie

    2014-01-01

    This work introduces advanced modeling capabilities that are being developed to improve the accuracy and extend the applicability of multiphase CFD. Specifics of the advanced and hardened boiling closure model are described in this work. The development has been driven by new physical understanding derived from the innovative experimental techniques available at MIT. A new experiment-based mechanistic approach to heat partitioning is proposed. The model introduces a new description of bubble evaporation, sliding, and interaction on the heated surface to accurately capture the evaporation occurring at the heated surface, while also tracking the local surface conditions. The model is being assembled to cover an extended application area, up to the critical heat flux (CHF). The accurate description of bubble interaction, effective microlayer, and dry surface area are considered to be the enabling quantities for innovative CHF-capture methodologies. Further, improved mechanistic force-balance models for bubble departure and lift-off diameter predictions are implemented in the model. Studies demonstrate the influence of the newly implemented partitioning components. Finally, the development work towards a more consistent and integrated hydrodynamic closure is presented. The main objective here is to develop a set of robust momentum closure relations which focus on the specific application to PWR conditions but will facilitate application to other geometries, void fractions, and flow regimes. The innovative approach considers local flow conditions on a cell-by-cell basis to ensure robustness. Closure relations of interest initially include drag, lift, and turbulence dispersion, with near-wall corrections applied for both drag and lift. (author)

  16. QUANTIFYING SUBGRID POLLUTANT VARIABILITY IN EULERIAN AIR QUALITY MODELS

    Science.gov (United States)

    In order to properly assess human risk due to exposure to hazardous air pollutants or air toxics, detailed information is needed on the location and magnitude of ambient air toxic concentrations. Regional scale Eulerian air quality models are typically limited to relatively coar...

  17. Estimation of Model's Marginal Likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), so this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
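
    The simpler estimators in the comparison can be written directly from log-likelihood samples, while TIE integrates the expected log-likelihood along a power-posterior temperature ladder. A sketch, stabilized in log space (variable names are ours):

      import numpy as np

      def ame(loglik_prior):
          """Arithmetic mean estimator: log p(y) from likelihoods at prior draws."""
          m = loglik_prior.max()
          return m + np.log(np.exp(loglik_prior - m).mean())

      def hme(loglik_post):
          """Harmonic mean estimator: log p(y) from likelihoods at posterior draws."""
          m = loglik_post.max()
          return m - np.log(np.exp(m - loglik_post).mean())

      def tie(betas, mean_loglik_per_beta):
          """Thermodynamic integration: log p(y) = integral over beta in [0, 1] of
          E[log L | y, beta], here by the trapezoid rule."""
          return np.trapz(mean_loglik_per_beta, betas)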

  18. Community Land Model Version 3.0 (CLM3.0) Developer's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, FM

    2004-12-21

    This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit

  19. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    Science.gov (United States)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.

  20. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. Here it is employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome reveals that the hybrid technique is superior to the individual techniques in the mean-square-error sense.

  1. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification, and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene ingredient tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model puts forward a parsimonious solution of noteworthy practical importance.
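
    The pre-processing ingredient is an ordinary moving-average filter applied to the input series before the GP stage; a sketch with an arbitrary window length:

      import numpy as np

      def moving_average(flow, window=3):
          """Smooth a streamflow series to damp the lagged-prediction effect."""
          kernel = np.ones(window) / window
          return np.convolve(flow, kernel, mode="valid")

      q = np.array([5.0, 7.0, 6.0, 9.0, 12.0, 10.0])
      print(moving_average(q))   # [ 6.    7.33  9.   10.33]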

  2. Benefits of Dominance over Additive Models for the Estimation of Average Effects in the Presence of Dominance

    Directory of Open Access Journals (Sweden)

    Pascal Duenk

    2017-10-01

    Full Text Available In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed. Our objective was to investigate the accuracy of an estimated average effect (α̂) in the presence of dominance, using either a single-locus A-model or AD-model. Estimation was based on a finite sample from a large population in Hardy-Weinberg equilibrium (HWE), and the root mean squared error of α̂ was calculated for several broad-sense heritabilities, sample sizes, and sizes of the dominance effect. Results show that with the A-model, both sampling deviations of genotype frequencies from HWE frequencies and sampling deviations of allele frequencies contributed to the error. With the AD-model, only sampling deviations of allele frequencies contributed to the error, provided that all three genotype classes were sampled. In the presence of dominance, the root mean squared error of α̂ with the AD-model was always smaller than with the A-model, even when the heritability was less than one. Remarkably, in the absence of dominance, there was no disadvantage of fitting dominance. In conclusion, the AD-model yields more accurate estimates of average effects from a finite sample, because it is more robust against sampling deviations from HWE frequencies than the A-model. Genetic models that include dominance, therefore, yield higher accuracies of estimated average effects than purely additive models when dominance is present.
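
    The A-model versus AD-model comparison can be reproduced in miniature with least squares, using the standard relation α = a + d(1 - 2p) for the average effect; the simulated effect sizes and sample size below are arbitrary:

      import numpy as np

      rng = np.random.default_rng(3)
      p, a, d, n = 0.3, 1.0, 0.6, 200       # allele frequency; additive/dominance effects
      g = rng.binomial(2, p, n)             # genotypes (allele counts) sampled under HWE
      y = a * g + d * (g == 1) + rng.normal(0.0, 1.0, n)

      # A-model: regress the phenotype on the allele count only
      Xa = np.column_stack([np.ones(n), g])
      alpha_A = np.linalg.lstsq(Xa, y, rcond=None)[0][1]

      # AD-model: add a heterozygote indicator, then alpha = a + d*(1 - 2p)
      Xad = np.column_stack([np.ones(n), g, g == 1])
      _, a_hat, d_hat = np.linalg.lstsq(Xad, y, rcond=None)[0]
      alpha_AD = a_hat + d_hat * (1.0 - g.mean())   # g.mean() equals 2*p_hat

      print(alpha_A, alpha_AD)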

  3. Average intensity and spreading of partially coherent model beams propagating in a turbulent biological tissue

    International Nuclear Information System (INIS)

    Wu, Yuqian; Zhang, Yixin; Wang, Qiu; Hu, Zhengda

    2016-01-01

    For Gaussian beams with three different partially coherent models, including Gaussian Schell-model (GSM), Laguerre-Gaussian Schell-model (LGSM), and Bessel-Gaussian Schell-model (BGSM) beams propagating through a turbulent biological tissue, the expression for the spatial coherence radius of a spherical wave propagating in the turbulent tissue, and the average intensity and beam spreading for GSM, LGSM, and BGSM beams, are derived based on the fractal model of the power spectrum of refractive-index variations in biological tissue. Effects of the partially coherent model and the parameters of biological turbulence on such beams are studied in numerical simulations. Our results reveal that the spreading of GSM beams is smaller than that of LGSM and BGSM beams under the same conditions, and that a beam with a larger source coherence width has smaller beam spreading than one with a smaller coherence width. The results are useful for any applications involving light beam propagation through tissues, especially cases where the average intensity and spreading properties of the light must be taken into account to evaluate system performance, and for investigations of the structure of biological tissue. - Highlights: • The spatial coherence radius of a spherical wave propagating in a turbulent biological tissue is developed. • Expressions for the average intensity and beam spreading of GSM, LGSM, and BGSM beams in a turbulent biological tissue are derived. • The contrast between the three partially coherent model beams is shown in numerical simulations. • The results are useful for any applications involving light beam propagation through tissues.

  4. Correction of Excessive Precipitation over Steep Mountains in a General Circulation Model (GCM)

    Science.gov (United States)

    Chao, Winston C.

    2012-01-01

    Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and regional climate models, even at resolutions as high as 19 km. The affected regions include the Andes, the Himalayas, the Sierra Madre, New Guinea, and others. The problem also shows up in some data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer due to the vertical circulations forced by the daytime subgrid-scale upslope winds, which are in turn forced by the heated boundary layer on the slopes. These upslope winds are associated with large subgrid-scale topographic variance, which is found over steep mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow, combined with the high moisture content of the boundary layer, results in excessive moisture transport toward mountaintops, which in turn gives rise to excessive precipitation over the affected regions. We have parameterized the effects of subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in the layers higher up when the topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-5 GCM have shown that the EPSM problem is largely solved.

  5. Assimilation of time-averaged observations in a quasi-geostrophic atmospheric jet model

    Energy Technology Data Exchange (ETDEWEB)

    Huntley, Helga S. [University of Washington, Department of Applied Mathematics, Seattle, WA (United States); University of Delaware, School of Marine Science and Policy, Newark, DE (United States); Hakim, Gregory J. [University of Washington, Department of Atmospheric Sciences, Seattle, WA (United States)

    2010-11-15

    The problem of reconstructing past climates from a sparse network of noisy time-averaged observations is considered with a novel ensemble Kalman filter approach. Results for a sparse network of 100 idealized observations for a quasi-geostrophic model of a jet interacting with a mountain reveal that, for a wide range of observation averaging times, analysis errors are reduced by about 50% relative to the control case without assimilation. Results are robust to changes to observational error, the number of observations, and an imperfect model. Specifically, analysis errors are reduced relative to the control case for observations having errors up to three times the climatological variance for a fixed 100-station network, and for networks consisting of ten or more stations when observational errors are fixed at one-third the climatological variance. In the limit of small numbers of observations, station location becomes critically important, motivating an optimally determined network. A network of fifteen optimally determined observations reduces analysis errors by 30% relative to the control, as compared to 50% for a randomly chosen network of 100 observations. (orig.)

  6. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia, and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  7. MODELLING OF TURBULENT WAKE FOR TWO WIND TURBINES

    Directory of Open Access Journals (Sweden)

    Arina S. Kryuchkova

    2018-01-01

    Full Text Available The construction of several large wind farms (in the Ulyanovsk region, the Republic of Adygea, the Kaliningrad region, and the north of the Russian Federation) is planned on the territory of the Russian Federation in 2018-2020, so tasks connected with the design of new wind farms are currently important. One possible direction in the design is connected with mathematical modeling. The large-eddy (eddy-resolving) simulation method, developed within computational fluid dynamics, allows the unsteady structure of the flow to be reproduced in detail and various integrated characteristics to be defined for wind turbines. The mathematical model includes the continuity and momentum equations for incompressible viscous flow. The large-scale vortex structures were calculated by integrating the filtered equations. The calculation was carried out using the Lagrangian dynamic Smagorinsky model to define the turbulent subgrid viscosity. A parallelepiped-shaped numerical domain and 3 different unstructured meshes (with 2, 4, and 8 million cells) were used for the numerical simulation. The geometrical parameters of the wind turbine were set according to open sources for the BlindTest 2-4 project. All physical values were defined at the centers of the computational cells. The items in the equations were approximated with second-order accuracy in time and space. The equations for velocity-pressure coupling were solved by means of the iterative PIMPLE algorithm. The total number of physical values calculated at each time step was 18, so the resources of a high-performance computer were required. As a result of the flow simulation in the wake of two three-bladed wind turbines, the average and instantaneous values of velocity, pressure, subgrid kinetic energy, turbulent viscosity, and components of the stress tensor were calculated. The received results, qualitatively matching the known results of experiment and numerical simulation, testify

  8. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    Full Text Available This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model, and shallow water equations with a fully coupled Exner equation (a bed load flux model. Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC. A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.

  9. Properties of bright solitons in averaged and unaveraged models for SDG fibres

    Science.gov (United States)

    Kumar, Ajit; Kumar, Atul

    1996-04-01

    Using the slowly varying envelope approximation and averaging over the fibre cross-section the evolution equation for optical pulses in semiconductor-doped glass (SDG) fibres is derived from the nonlinear wave equation. Bright soliton solutions of this equation are obtained numerically and their properties are studied and compared with those of the bright solitons in the unaveraged model.

  10. Averaging of the Equations of the Standard Cosmological Model over Rapid Oscillations

    Science.gov (United States)

    Ignat'ev, Yu. G.; Samigullina, A. R.

    2017-11-01

    An averaging of the equations of the standard cosmological model (SCM) is carried out. It is shown that the main contribution to the macroscopic energy density of the scalar field comes from its microscopic oscillations with the Compton period. The effective macroscopic equation of state of the oscillations of the scalar field corresponds to the nonrelativistic limit.

  11. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails: one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  13. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    Science.gov (United States)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models and observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant, on average, only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  14. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  15. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  16. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Full Text Available Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions that minimize vehicle delay time. To improve people's trip efficiency, this article instead aims to minimize the delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, as well as the corresponding functions. Moreover, it converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs this time as the objective function, and proposes a signal timing optimization model for intersections to obtain real-time signal parameters, including cycle length and green time. The research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and the queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
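
    The person-delay objective follows directly from the cumulative curves: the area between the arrival and departure curves gives vehicle delay per class, and passenger loads convert it to person delay. The arrival rates, green timing, and loads below are invented:

      import numpy as np

      t = np.linspace(0.0, 120.0, 121)          # one signal cycle, 1 s resolution

      def class_delay(arrival_rate, green_start, sat_flow):
          """Vehicle delay (veh*s) and vehicle count from cumulative curves."""
          arrivals = arrival_rate * t
          departures = np.minimum(np.clip(sat_flow * (t - green_start), 0.0, None),
                                  arrivals)
          return np.trapz(arrivals - departures, t), arrivals[-1]

      d_car, n_car = class_delay(0.30, 40.0, 0.8)    # cars
      d_bus, n_bus = class_delay(0.02, 40.0, 0.1)    # buses
      load_car, load_bus = 1.5, 30.0                 # average passengers per vehicle

      total_person_delay = load_car * d_car + load_bus * d_bus
      avg_delay_per_person = total_person_delay / (load_car * n_car + load_bus * n_bus)
      print(avg_delay_per_person)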

  17. CFD Wake Modelling with a BEM Wind Turbine Sub-Model

    Directory of Open Access Journals (Sweden)

    Anders Hallanger

    2013-01-01

    Full Text Available Modelling of wind farms using computational fluid dynamics (CFD), resolving the flow field around each wind turbine's blades on a moving computational grid, is still too costly and time consuming in terms of computational capacity and effort. One strategy is to use sub-models for the wind turbines, and sub-grid models for turbulence production and dissipation, to model the turbulent viscosity accurately enough to handle the interaction of wakes in wind farms. A wind turbine sub-model, based on blade momentum theory (see Hansen (2008)), has been implemented in an in-house CFD code (see Hallanger et al. (2002)). The tangential and normal reaction forces from the wind turbine blades are distributed on the control volumes (CVs) at the wind turbine rotor location as sources in the conservation equations of momentum. The classical k-epsilon turbulence model of Launder and Spalding (1972) is implemented with a sub-grid turbulence (SGT) model (see Sha and Launder (1979) and Sand and Salvesen (1994)). Steady-state CFD simulations were compared with flow and turbulence measurements in the wake of a model-scale wind turbine (see Krogstad and Eriksen (2011)). The simulated results compared best with experiments when stalling (boundary layer separation) on the wind turbine blades did not occur. The SGT model did improve the turbulence level in the wake but seems to smear the wake flow structure. It should be noted that the simulations are carried out steady state, not including the flow oscillations caused by vortex shedding from tower and blades that were present in the experiments. Further improvement of the simulated velocity defect and turbulence level seems to rely on better parameter estimation for the SGT model, improvements to the SGT model, and possibly transient instead of steady-state simulations.

  18. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    Heating systems are conventionally operated under open-loop control because practical methods for estimating the average air temperature in the built environment have been lacking. An inferential sensor model, based on adaptive neuro-fuzzy inference system (ANFIS) modeling, is developed for estimating the average air temperature in multi-zone space heating systems. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive-network-based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems in terms of energy efficiency and thermal comfort. The average air temperature estimates produced by the developed model agree closely with the experimental results.
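
    A minimal sketch of the Sugeno-type structure and the least-squares half of the hybrid algorithm, on synthetic data: Gaussian memberships are held fixed here for brevity (a full ANFIS would also tune them by back-propagation), and the linear consequent parameters are identified in one least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(200, 1))            # e.g. a zone heat-input signal
y = 18.0 + 6.0 * np.sin(2.0 * x[:, 0]) + 0.1 * rng.standard_normal(200)

centers, width = np.array([0.2, 0.5, 0.8]), 0.2     # three fixed fuzzy sets
w = np.exp(-0.5 * ((x - centers) / width) ** 2)     # rule firing strengths
wn = w / w.sum(axis=1, keepdims=True)               # normalized strengths

# consequent of rule i: y_i = p_i * x + q_i ; stack all rules into one LS problem
A = np.hstack([wn * x, wn])                         # (200, 6) design matrix
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ theta
print(f"training RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.3f}")
```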

  19. Average spectral power changes at the hippocampal electroencephalogram in schizophrenia model induced by ketamine.

    Science.gov (United States)

    Sampaio, Luis Rafael L; Borges, Lucas T N; Silva, Joyse M F; de Andrade, Francisca Roselin O; Barbosa, Talita M; Oliveira, Tatiana Q; Macedo, Danielle; Lima, Ricardo F; Dantas, Leonardo P; Patrocinio, Manoel Cláudio A; do Vale, Otoni C; Vasconcelos, Silvânia M M

    2018-02-01

    The use of ketamine (Ket) as a pharmacological model of schizophrenia is an important tool for understanding the main mechanisms of glutamatergically regulated neural oscillations. The aim of the current study was therefore to evaluate Ket-induced changes in average spectral power using hippocampal quantitative electroencephalography (QEEG). To this end, male Wistar rats underwent stereotactic surgery for implantation of an electrode in the right hippocampus. After three days, the animals were divided into four groups that were treated for 10 consecutive days with Ket (10, 50, or 100 mg/kg). Brain waves were recorded on the 1st day (acute treatment) or the 10th day (repeated treatment). Compared with controls, administration of Ket (10, 50, or 100 mg/kg) induced changes in the hippocampal average spectral power of delta, theta, alpha, and low- or high-gamma waves after acute or repeated treatment. Based on these alterations in the average spectral power of hippocampal waves, our findings may provide a basis for the use of hippocampal QEEG in animal models of schizophrenia.

  20. The dynamics of multimodal integration: The averaging diffusion model.

    Science.gov (United States)

    Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L

    2017-12-01

    We combine extant theories of evidence accumulation and multimodal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process in which noisy samples of evidence are accumulated until a decision is made; however, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment, but are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of the evidence samples, and use it as a basis for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
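
    The core modeling idea, the decision variable as a running mean rather than a running sum, can be shown in a few lines; drift and noise values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, drift, noise = 500, 0.02, 1.0
samples = drift + noise * rng.standard_normal(n_steps)

sum_dv = np.cumsum(samples)                    # classic summing accumulator
mean_dv = sum_dv / np.arange(1, n_steps + 1)   # averaging accumulator

# The sum drifts without bound; the mean converges toward the true drift,
# so its late-time value estimates the evidence strength directly.
print(f"final sum DV:  {sum_dv[-1]:8.2f}")
print(f"final mean DV: {mean_dv[-1]:8.4f}  (true drift = {drift})")
```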

  1. Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling

    Science.gov (United States)

    Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom

    2018-03-01

    Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling an ice sheet model (ISM) to an AOGCM is complicated by essential differences in the spatial and temporal scales of the cryospheric, atmospheric, and oceanic components. To overcome this difficulty, we apply two different approaches for incorporating ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished by resampling, interpolating, and assigning annually averaged AOGCM-generated surface air temperature and precipitation fields to the AISM grid points. Surface melting, which takes place mainly on the margins of the Antarctic Peninsula and on the ice shelves fringing the continent, is currently ignored. The AISM returns anomalies of surface topography to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and surface air temperature to force the GrISM, and freshwater run-off to force the thermohaline circulation in the ocean component of the AOGCM. Because the coupling procedure for the GrIS is more complex than that for the AIS, the paper focuses mostly on Greenland.

  2. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
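
    The multiplicative output layer can be sketched as follows: the network's scalar outputs g_n weight a basis of tensors built from the normalized strain- and rotation-rate tensors (the first four of Pope's ten basis tensors are used here), so the predicted anisotropy is Galilean invariant by construction. The g_n values below are stand-ins for trained network outputs.

```python
import numpy as np

def anisotropy(S, R, g):
    """b = sum_n g[n] * T_n for the first four of Pope's basis tensors."""
    I = np.eye(3)
    T = [
        S,
        S @ R - R @ S,
        S @ S - np.trace(S @ S) / 3.0 * I,
        R @ R - np.trace(R @ R) / 3.0 * I,
    ]
    return sum(gn * Tn for gn, Tn in zip(g, T))

S = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])   # strain rate
R = np.array([[0.0, 0.5, 0.0], [-0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])  # rotation rate
g = [-0.09, 0.02, 0.01, -0.01]   # would be the network outputs g_n(invariants)
b = anisotropy(S, R, g)
print(b)  # built only from S and R, hence Galilean invariant
```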

  3. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    Science.gov (United States)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. A Two-Factor Autoregressive Moving Average Model Based on Fuzzy Fluctuation Logical Relationships

    Directory of Open Access Journals (Sweden)

    Shuang Guan

    2017-10-01

    Many existing autoregressive moving average (ARMA) forecast models are based on one main factor. In this paper, we propose a new two-factor first-order ARMA forecast model based on fuzzy fluctuation logical relationships of both a main factor and a secondary factor of a historical training time series. First, we generate a fluctuation time series (FTS) for each of the two factors by calculating the difference between each data point and the previous day's value, and compute the mean of the absolute fluctuations of each FTS. We then construct a fuzzy fluctuation time series (FFTS) according to the defined linguistic sets. The next step is to establish fuzzy fluctuation logical relation groups (FFLRGs) for a two-factor first-order autoregressive AR(1) model and forecast the training data with it. We then build FFLRGs for a two-factor first-order autoregressive moving average ARMA(1,m) model and forecast the test data with it. To illustrate the performance of the model, we use the real Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and Dow Jones datasets, with the Dow Jones as a secondary factor for forecasting TAIEX. The experimental results indicate that the proposed two-factor fluctuation ARMA method outperforms the one-factor method on real historical data. The secondary factor may have some effect on the main factor and thereby impact the forecasting results. Using fuzzified fluctuations rather than fuzzified raw data avoids the influence of extreme values in the historical data, which degrade forecasting performance. To verify the accuracy and effectiveness of the model, we also employ it to forecast the Shanghai Stock Exchange Composite Index (SHSECI) from 2001 to 2015 and the international gold price from 2000 to 2010.
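
    The preprocessing steps read, in outline, like this hypothetical sketch: difference the series, take the mean absolute fluctuation, and fuzzify each fluctuation into linguistic labels (a three-label scheme is used here for brevity; the paper's linguistic sets may differ).

```python
import numpy as np

prices = np.array([100.0, 101.5, 101.2, 103.0, 102.1, 102.2, 104.5])
flux = np.diff(prices)                 # fluctuation time series (FTS)
m = np.abs(flux).mean()                # mean absolute fluctuation

def fuzzify(f, m):
    """Map a fluctuation to a linguistic label relative to m."""
    if f <= -m / 2: return "down"
    if f >= m / 2:  return "up"
    return "flat"

ffts = [fuzzify(f, m) for f in flux]   # fuzzy fluctuation time series (FFTS)
print(list(zip(np.round(flux, 2), ffts)))
```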

  5. An axially averaged-radial transport model of tokamak edge plasmas

    International Nuclear Information System (INIS)

    Prinja, A.K.; Conn, R.W.

    1984-01-01

    A two-zone, axially averaged-radial transport model for edge plasmas is described that incorporates parallel electron and ion conduction, localized recycling, parallel electron pressure gradient effects, and sheath losses. Results for high recycling show that the radial electron temperature profile is determined by parallel electron conduction over short radial distances (~3 cm). At larger radius, where T_e has fallen appreciably, convective transport becomes equally important. The downstream density and ion temperature profiles are very flat over the region where electron conduction dominates. This is seen to result from a sharply decaying velocity profile that follows the radial electron temperature. A one-dimensional analytical recycling model shows that at high neutral pumping rates, the plasma density at the plate, n_ia, scales linearly with the unperturbed background density, n_io. When ionization dominates, n_ia/n_io ∝ exp(n_io), while in the intermediate regime n_ia/n_io ∝ exp(c n_io) for some constant c. Such behavior is qualitatively in accord with experimental observations.

  6. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking.

  7. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam has been available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, the support reactions, deflections at the supports, and magnitudes of the distributed loads on the beam can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and magnitudes of distributed loads.

  8. A rod-airfoil experiment as a benchmark for broadband noise modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, M.C. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Universite Claude Bernard/Lyon I, Villeurbanne Cedex (France); Boudet, J.; Michard, M. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Casalino, D. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Fluorem SAS, Ecully Cedex (France)

    2005-07-01

    A low Mach number rod-airfoil experiment is shown to be a good benchmark for numerical and theoretical broadband noise modeling. The benchmarking approach is applied to a sound computation from a 2D unsteady Reynolds-averaged Navier-Stokes (U-RANS) flow field, where 3D effects are partially compensated for by a spanwise statistical model, and to a 3D large eddy simulation. The experiment was conducted in the large anechoic wind tunnel of the Ecole Centrale de Lyon. Measurements taken included particle image velocimetry (PIV) around the airfoil, single hot wire, wall pressure coherence, and far-field pressure. These measurements highlight the strong 3D effects responsible for spectral broadening around the rod vortex shedding frequency in the subcritical regime, and the dominance of the noise generated around the airfoil leading edge. The benchmarking approach is illustrated by two examples: the validation of a stochastic noise generation model applied to a 2D U-RANS computation, and the assessment of a 3D LES computation using a new subgrid scale (SGS) model coupled to an advanced-time Ffowcs Williams and Hawkings sound computation.

  9. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite matrices (Tanvi Jain):
    • Diffusion tensor imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan.
    • Elasticity: 6 × 6 pd matrices model stress tensors.
    • Machine learning: n × n pd matrices occur as kernel matrices.
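
    One standard averaging operation on positive definite matrices is the geometric mean A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, which, unlike the arithmetic mean, respects the geometry of the pd cone; a small self-contained sketch (not taken from the slides):

```python
import numpy as np

def spd_sqrt(M, inverse=False):
    """Matrix square root (or inverse square root) of an SPD matrix."""
    w, V = np.linalg.eigh(M)
    p = -0.5 if inverse else 0.5
    return (V * w**p) @ V.T

def geometric_mean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As, Ais = spd_sqrt(A), spd_sqrt(A, inverse=True)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])
G = geometric_mean(A, B)
print(G)
# sanity check: det(A # B)^2 = det(A) * det(B)
print(np.linalg.det(G) ** 2, np.linalg.det(A) * np.linalg.det(B))
```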

  10. GRAVITATIONALLY UNSTABLE FLAMES: RAYLEIGH-TAYLOR STRETCHING VERSUS TURBULENT WRINKLING

    International Nuclear Information System (INIS)

    Hicks, E. P.; Rosner, R.

    2013-01-01

    In this paper, we provide support for the Rayleigh-Taylor (RT)-based subgrid model used in full-star simulations of deflagrations in Type Ia supernova explosions. We use the results of a parameter study of two-dimensional direct numerical simulations of an RT-unstable model flame to distinguish between the two main types of subgrid models (RT-dominated or turbulence-dominated) in the flamelet regime. First, we give scalings for the turbulent flame speed, the Reynolds number, the viscous scale, and the size of the burning region as the non-dimensional gravity (G) is varied. The flame speed is well predicted by an RT-based flame speed model. Next, the above scalings are used to calculate the Karlovitz number (Ka) and to discuss appropriate combustion regimes. No transition to thin reaction zones is seen at Ka = 1, although such a transition is expected by turbulence-dominated subgrid models. Finally, we confirm a basic physical premise of the RT subgrid model, namely, that the flame is fractal, and thus self-similar. By modeling the turbulent flame speed, we demonstrate that it is affected more by large-scale RT stretching than by small-scale turbulent wrinkling. In this way, the RT instability controls the flame directly from the large scales. Overall, these results support the RT subgrid model.

  11. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ … Exchanging the … innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  12. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
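
    On synthetic data, the two threshold estimates described in the report might look like the following sketch (scipy's genextreme is a maximum-domain family, so the GEV is fitted to the negated daily minima; the orders and cutoffs are illustrative, not the report's):

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(3)
daily_min = 50.0 + stats.genextreme.rvs(c=0.1, loc=5.0, scale=2.0,
                                        size=365, random_state=rng)
daily_avg = 80.0 + 5.0 * np.sin(2 * np.pi * np.arange(365) / 7) \
            + rng.standard_normal(365)

# lower threshold: e.g. the 1st percentile of the fitted minimum distribution
c, loc, scale = stats.genextreme.fit(-daily_min)
lower = -stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)

# upper threshold: seasonal ARIMA forecast of the average plus a band
model = sm.tsa.SARIMAX(daily_avg, order=(1, 0, 1),
                       seasonal_order=(1, 0, 1, 7)).fit(disp=False)
fc = model.get_forecast(steps=7)
upper = fc.predicted_mean + 2.0 * fc.se_mean

print(f"lower threshold ~ {lower:.1f} ms")
print("upper thresholds:", np.round(upper, 1))
```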

  13. Forecasting Rice Productivity and Production of Odisha, India, Using Autoregressive Integrated Moving Average Models

    Directory of Open Access Journals (Sweden)

    Rahul Tripathi

    2014-01-01

    Forecasts of the rice area, production, and productivity of Odisha were made from historical data for 1950-51 to 2008-09 using univariate autoregressive integrated moving average (ARIMA) models and compared with forecasts for the all-India data. The autoregressive (p) and moving average (q) orders were identified from the significant spikes in the plots of the partial autocorrelation function (PACF) and autocorrelation function (ACF) of the different time series. An ARIMA(2,1,0) model was found suitable for all-India rice productivity and production, whereas ARIMA(1,1,1) was the best fit for forecasting rice productivity and production in Odisha. Predictions were made for the immediate next three years, that is, 2007-08, 2008-09, and 2009-10, using the best-fitted ARIMA models chosen by the minimum values of the selection criteria, the Akaike information criterion (AIC) and Schwarz-Bayesian information criterion (SBC). The performance of the models was validated by comparing the percentage deviation from actual values and the mean absolute percent error (MAPE), which was found to be 0.61% and 2.99% for the area under rice in Odisha and India, respectively. Similarly, for the prediction of rice production and productivity in Odisha and India, the MAPE was less than 6%.

  14. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    Science.gov (United States)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows where sparse measurement data are available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from two test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.

  15. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    Science.gov (United States)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effects of the collision model, subgrid-scale model, and grid resolution in large eddy simulation (LES) of wall-bounded turbulent flows with the lattice Boltzmann method (LBM) are investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ … ; otherwise, LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  16. Flow and transport simulation of Madeira River using three depth-averaged two-equation turbulence closure models

    Directory of Open Access Journals (Sweden)

    Li-ren Yu

    2012-03-01

    This paper describes a numerical simulation of the Amazon water system, aiming to develop a quasi-three-dimensional numerical tool for refined modeling of turbulent flow and passive mass transport in natural waters. Three depth-averaged two-equation turbulence closure models, k˜-ε˜, k˜-w˜, and k˜-ω˜, were used to close the non-simplified quasi-three-dimensional hydrodynamic governing equations. The discretized equations were solved with an advanced multi-grid iterative method using non-orthogonal, body-fitted coarse and fine grids with a collocated variable arrangement. In addition to the steady flow computation, the processes of contaminant inflow and plume development at the beginning of discharge, caused by a side discharge from a tributary, were also numerically investigated. The three depth-averaged two-equation closure models are all suitable for modeling strongly mixing turbulence. The newly established turbulence models, such as the k˜-ω˜ model, with a higher order of magnitude of the turbulence parameter, offer the possibility of improving computational precision.

  17. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

    We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl and N families in the lower stratosphere, compared to a model including only gas-phase photochemical reactions.


  19. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    Science.gov (United States)

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models, each assuming that treatment acts on a specific transition. Investigators can then choose among the alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization remains, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for by model averaging, and we develop a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma, using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using the structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attached to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and the expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models.
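
    The EVPI computation itself is a generic Monte Carlo identity, EVPI = E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)]; under model averaging the parameter draws would be pooled across structural models in proportion to their weights. The net-benefit draws below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
# net-benefit draws for two treatments (columns); under model averaging
# these would mix draws from each structural model by its posterior weight
nb = np.column_stack([
    rng.normal(10_000.0, 3_000.0, n),    # treatment A
    rng.normal(10_500.0, 4_000.0, n),    # treatment B
])

ev_current = nb.mean(axis=0).max()    # choose the best treatment on expectation
ev_perfect = nb.max(axis=1).mean()    # choose the best treatment per realization
print(f"EVPI = {ev_perfect - ev_current:.0f} (same units as net benefit)")
```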

  20. Benefits of dominance over additive models for the estimation of average effects in the presence of dominance

    NARCIS (Netherlands)

    Duenk, Pascal; Calus, Mario P.L.; Wientjes, Yvonne C.J.; Bijma, Piter

    2017-01-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and …

  1. Efficient implicit LES method for the simulation of turbulent cavitating flows

    International Nuclear Information System (INIS)

    Egerer, Christian P.; Schmidt, Steffen J.; Hickel, Stefan; Adams, Nikolaus A.

    2016-01-01

    We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  2. Fleet average NOx emission performance of 2004 model year light-duty vehicles, light-duty trucks and medium-duty passenger vehicles

    International Nuclear Information System (INIS)

    2006-05-01

    The On-Road Vehicle and Engine Emission Regulations came into effect on January 1, 2004. The regulations introduced more stringent national emission standards for on-road vehicles and engines, and also required companies to submit reports containing information about their fleets. This report presents a summary of the regulatory requirements relating to fleet average emissions of nitrogen oxides (NOx) for light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles under the new regulations. The effectiveness of the Canadian fleet average NOx emission program at achieving its environmental performance objectives is also evaluated. A summary of the fleet average NOx emission performance of individual companies is presented, as well as the overall Canadian fleet average for the 2004 model year, based on data submitted by companies in their end-of-model-year reports. A total of 21 companies submitted reports covering 2004 model year vehicles in 10 test groups, comprising 1,350,719 vehicles manufactured or imported for sale in Canada. The average NOx value for the entire Canadian LDV/LDT fleet was 0.2016463 grams per mile, and for the entire Canadian HLDT/MDPV fleet it was 0.321976 grams per mile. Both values are consistent with the environmental performance objectives of the regulations for the 2004 model year.

  3. Average and dispersion of the luminosity-redshift relation in the concordance model

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Dayan, I. [DESY Hamburg (Germany). Theory Group; Gasperini, M. [Bari Univ. (Italy). Dipt. di Fisica; Istituto Nazionale di Fisica Nucleare, Bari (Italy); Marozzi, G. [College de France, 75 - Paris (France); Geneve Univ. (Switzerland). Dept. de Physique Theorique and CAP; Nugier, F. [Ecole Normale Superieure CNRS, Paris (France). Laboratoire de Physique Theorique; Veneziano, G. [College de France, 75 - Paris (France); CERN, Geneva (Switzerland). Physics Dept.; New York Univ., NY (United States). Dept. of Physics

    2013-03-15

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order 10⁻³-10⁻⁵, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account the appropriate corrections arising in the non-linear regime, we predict an irreducible scatter of the data approaching the 10% level which, for limited statistics, will necessarily limit the attainable precision. The predicted dispersion appears to be in good agreement with current observational estimates of the distance-modulus variance due to Doppler and lensing effects (at low and high redshifts, respectively), and represents a challenge for future precision measurements.

  4. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, all of which are calculated using statistical techniques. This paper presents a comparative analysis of market volatility for two major Bombay Stock Exchange (BSE) indices, the banking and IT sectors, using an average decline model. The average decline process in volatility is applied after very high and low stock returns. The results of this study show a significant decline in…

  5. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    Science.gov (United States)

    Shahab, Azin

    In a doubly-fed induction generator (DFIG) wind energy conversion system, the rotor of a wound-rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter, which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with sinusoidal pulse-width modulation (SPWM) and optimal pulse-width modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. Because computer simulation using the detailed models tends to be computationally intensive, time consuming, and sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model.

  6. Fractal Characteristics Analysis of Blackouts in Interconnected Power Grid

    DEFF Research Database (Denmark)

    Wang, Feng; Li, Lijuan; Li, Canbing

    2018-01-01

    Power failure models are a key to understanding the mechanism of large-scale blackouts. In this letter, the similarity of blackouts in interconnected power grids (IPGs) and their sub-grids is discovered through fractal characteristics analysis, in order to simplify the failure models of the IPG. The distribution characteristics of blackouts in various sub-grids are demonstrated based on the Kolmogorov-Smirnov (KS) test. The fractal dimensions (FDs) of the IPG and its sub-grids are then obtained using the KS test and maximum likelihood estimation (MLE). The blackout data from China were used…

  7. A Diagnostic PDF Cloud Scheme to Improve Subtropical Low Clouds in NCAR Community Atmosphere Model (CAM5)

    Science.gov (United States)

    Qin, Yi; Lin, Yanluan; Xu, Shiming; Ma, Hsi-Yen; Xie, Shaocheng

    2018-02-01

    Low clouds strongly impact the radiation budget of the climate system, but their simulation in most GCMs has remained a challenge, especially over subtropical stratocumulus regions. Assuming a Gaussian distribution for the subgrid-scale total water and liquid water potential temperature, a new statistical cloud scheme is proposed and tested in the NCAR Community Atmosphere Model version 5 (CAM5). The subgrid-scale variance is diagnosed from the turbulent and shallow convective processes in CAM5. The approach maintains consistency between cloud fraction and cloud condensate and thus alleviates the adjustment needed in the default relative humidity-based cloud fraction scheme. Short-term forecast simulations indicate that low cloud fraction and liquid water content, including their diurnal cycle, are improved by properly accounting for subgrid-scale variance over the southeastern Pacific Ocean region. Compared with the default cloud scheme, the new approach reproduces the mean climate reasonably well, with improved shortwave cloud forcing (SWCF) due to more realistic low cloud fraction and liquid water path over regions with predominantly low clouds. Meanwhile, the SWCF bias over tropical land regions is also alleviated. Furthermore, the simulated marine boundary layer clouds with the new approach extend further offshore and agree better with observations. The new approach attains top-of-atmosphere (TOA) radiation balance, with a slightly alleviated double-ITCZ problem, in preliminary coupled simulations. This study implies that close coupling of cloud processes with other subgrid-scale physical processes is a promising approach to improving cloud simulations.
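
    Under the Gaussian assumption the scheme's cloud fraction and condensate follow in closed form (cf. Sommeria and Deardorff 1977); a sketch with illustrative numbers, not the CAM5 implementation:

```python
import numpy as np
from scipy.stats import norm

def gaussian_cloud(qt_mean, qs, sigma):
    """Cloud fraction and grid-mean condensate for Gaussian subgrid qt."""
    Q1 = (qt_mean - qs) / sigma          # normalized saturation deficit
    cf = norm.cdf(Q1)                    # P(qt > qs)
    qc = sigma * (Q1 * norm.cdf(Q1) + norm.pdf(Q1))  # E[max(qt - qs, 0)]
    return cf, qc

# consistent by construction: the same PDF yields both cf and qc
cf, qc = gaussian_cloud(qt_mean=8.0e-3, qs=8.5e-3, sigma=0.5e-3)  # kg/kg
print(f"cloud fraction = {cf:.2f}, mean condensate = {qc*1e3:.3f} g/kg")
```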

  8. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time…

  9. Adaptive and self-averaging Thouless-Anderson-Palmer mean-field theory for probabilistic modeling

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    We develop a generalization of the Thouless-Anderson-Palmer (TAP) mean-field approach of disorder physics, which makes the method applicable to the computation of approximate averages in probabilistic models for real data. In contrast to the conventional TAP approach, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete set of couplings. We show the significance of the approach in two ways: our approach reproduces replica symmetric results for a wide class of toy models (assuming a nonglassy phase) with given disorder distributions in the thermodynamic limit. On the other hand, simulations on a real data model demonstrate that the method achieves more accurate predictions than conventional TAP approaches.

  10. Applying Hillslope Hydrology to Bridge between Ecosystem and Grid-Scale Processes within an Earth System Model

    Science.gov (United States)

    Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.

    2013-12-01

    Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. We examine sensitivities to

  11. Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur

    2017-12-01

    Biomass-fired cook stoves have, for many years, been the main cooking appliance for rural people in developing countries. Several studies have been carried out to find efficient stoves. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and gas flow behavior during operation. The numerical model is validated against experimental results. Computations use the non-premixed combustion model, with the Reynolds-averaged Navier-Stokes (RANS) equations and the κ-ε model governing the turbulent flow within the computational domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.

  12. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    Science.gov (United States)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
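
    The first step of the scheme, building an MSM from discretized trajectories, reduces to counting lagged transitions and row-normalizing; a toy sketch (the subsequent machine-learning refinement against experimental data is not shown):

```python
import numpy as np

def msm_from_trajectory(states, n_states, lag=1):
    """Count transitions at a fixed lag and row-normalize into probabilities."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-lag], states[lag:]):
        counts[i, j] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)   # row-stochastic T

traj = np.array([0, 0, 1, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 0])  # toy state labels
T = msm_from_trajectory(traj, n_states=3, lag=1)
print(np.round(T, 2))

# equilibrium populations: left eigenvector of T with eigenvalue 1
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print(np.round(pi / pi.sum(), 3))
```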

  13. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  14. Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting

    Science.gov (United States)

    Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.

    2018-04-01

    Unpredictable changes in rainfall can affect human activities such as agriculture, aviation, and shipping, which depend on weather forecasts. We therefore need forecasting tools with high accuracy for predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 to 2016, using data from 77 rainfall stations. Rainfall at a station is related not only to its own previous values but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model and to determine whether there are spatial-effect correlations between stations. The GSTAR model is an extension of the space-time model that combines time-related effects, effects of other locations (stations) in the time series, and the location itself. The GSTAR model is also compared to the ARIMA model, which ignores the spatial covariates entirely. The forecasted values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model, with higher accuracy, i.e., a smaller root mean square error (RMSE).
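
    The combination step can be illustrated with a toy sketch: average the two member forecasts, or fit stacking weights by least squares, and compare RMSEs (the forecast vectors below are synthetic placeholders for ARIMA and GSTAR outputs):

```python
import numpy as np

rng = np.random.default_rng(5)
truth = rng.gamma(2.0, 50.0, size=60)            # observed rainfall
f_arima = truth + rng.normal(0, 20, 60)          # stand-in ARIMA forecasts
f_gstar = truth + rng.normal(0, 25, 60)          # stand-in GSTAR forecasts

rmse = lambda f: np.sqrt(np.mean((truth - f) ** 2))

avg = 0.5 * (f_arima + f_gstar)                  # ensemble averaging

# stacking: least-squares weights on the two member forecasts
X = np.column_stack([f_arima, f_gstar])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
stacked = X @ w

for name, f in [("ARIMA", f_arima), ("GSTAR", f_gstar),
                ("average", avg), ("stacked", stacked)]:
    print(f"{name:8s} RMSE = {rmse(f):6.2f}")
```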

  15. Tracer water transport and subgrid precipitation variation within atmospheric general circulation models

    Science.gov (United States)

    Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.

    1988-03-01

    A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.


  17. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    Science.gov (United States)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  18. Large eddy simulation of n-heptane spray combustion in partially premixed combustion regime with linear eddy model

    International Nuclear Information System (INIS)

    Xiao, Gang; Jia, Ming; Wang, Tianyou

    2016-01-01

    Spray combustion of n-heptane in a constant-volume vessel under engine-relevant conditions was investigated using the linear eddy model in the framework of large eddy simulation. In this numerical approach, turbulent mixing is traced by an innovative stochastic approach instead of the conventional gradient diffusion model. Chemical reaction rates are calculated taking into account the sub-grid scale spatial fluctuations of reactive scalars. Turbulence-chemistry interactions are represented by separate treatment of the underlying processes, including turbulent stirring, chemical reaction, and molecular diffusion. The model was validated against experimental data on ignition delay times, chemiluminescence images, and soot images from Sandia National Laboratories. Numerical results showed that the ignition process changes from the temperature-controlled regime to the mixing-controlled regime as the initial ambient temperature increases from 800 K to 1000 K. The premixed flame and the diffusion flame coexist, while the gross heat release rate is dominated by the premixed flame. The temperature fluctuation is mainly observed around the spray jet due to the cooling effect of fuel vaporization. The fluctuations are smoothed out more significantly by the high-temperature flame than by the low-temperature flame. The mean temperature would be overpredicted if the sub-grid temperature fluctuation were neglected. - Highlights: • Turbulent mixing is traced by a stochastic method instead of a gradient diffusion model. • Sub-grid scale fluctuations of reactive scalars are captured. • The ignition process varies from temperature-controlled to mixing-controlled regimes. • Temperature fluctuations can be smoothed out by the high-temperature flame. • The heat release rate is dominated by the premixed flame.

  19. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  20. Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control over a Hump Model

    Science.gov (United States)

    Rumsey, Christopher L.

    2006-01-01

    The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.

  1. Large eddy simulation of turbulent premixed combustion flows over backward facing step

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nam Seob [Yuhan University, Bucheon (Korea, Republic of)]; Ko, Sang Cheol [Jeju National University, Jeju (Korea, Republic of)]

    2011-03-15

    Large eddy simulation (LES) of turbulent premixed combustion flows over a backward-facing step has been performed using a dynamic sub-grid G-equation flamelet model. A flamelet model for the premixed flame is combined with a dynamic sub-grid combustion model for the filtered propagation of the flame speed. The objective of this study is to investigate the validity of the dynamic sub-grid G-equation model in a complex turbulent premixed combustion flow. For the purpose of validating the LES combustion model, the LES of the isothermal and reacting shear layers formed at a backward-facing step is carried out. The calculated results are compared with the experimental results, and good agreement is obtained.
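
    For reference, the filtered level-set equation that G-equation flamelet models of this kind propagate is usually written as follows (generic notation, not necessarily the authors'):

        \frac{\partial \tilde{G}}{\partial t} + \tilde{u}_j \frac{\partial \tilde{G}}{\partial x_j} = \tilde{s}_T \, \lvert \nabla \tilde{G} \rvert ,

    where the flame front is the level set \tilde{G} = G_0 and the turbulent flame speed \tilde{s}_T is supplied by the dynamic sub-grid model.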

  2. Large eddy simulation of turbulent premixed combustion flows over backward facing step

    International Nuclear Information System (INIS)

    Park, Nam Seob; Ko, Sang Cheol

    2011-01-01

    Large eddy simulation (LES) of turbulent premixed combustion flows over a backward-facing step has been performed using a dynamic sub-grid G-equation flamelet model. A flamelet model for the premixed flame is combined with a dynamic sub-grid combustion model for the filtered propagation of the flame speed. The objective of this study is to investigate the validity of the dynamic sub-grid G-equation model in a complex turbulent premixed combustion flow. For the purpose of validating the LES combustion model, the LES of the isothermal and reacting shear layers formed at a backward-facing step is carried out. The calculated results are compared with the experimental results, and good agreement is obtained.

  3. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    International Nuclear Information System (INIS)

    Wang Jianqing; Fujiwara, Osamu; Kodera, Sachiko; Watanabe, Soichi

    2006-01-01

    Due to the difficulty of measuring the specific absorption rate (SAR) in an actual human body under electromagnetic radio-frequency (RF) exposure, various compliance assessment procedures use the incident electric field or power density as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly from numerical calculations for highly simplified human models decades ago, and its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have highly accurate human models and numerical code. In this study, we made a detailed error analysis of the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify to what extent the whole-body average SAR may reach using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit by nearly 30% for the child model in both the resonance frequency band and the 2 GHz band.
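
    Once an FDTD run has produced the internal fields, the whole-body average SAR is a mass-weighted sum over tissue voxels. A minimal sketch is below; the array names and the use of RMS field magnitudes are illustrative assumptions, not the authors' code.

        import numpy as np

        def whole_body_average_sar(e_rms, sigma, rho, voxel_volume):
            # e_rms        : RMS electric-field magnitude per voxel (V/m)
            # sigma        : tissue conductivity per voxel (S/m)
            # rho          : tissue mass density per voxel (kg/m^3), 0 in air
            # voxel_volume : volume of one voxel (m^3)
            tissue = rho > 0
            local_sar = sigma[tissue] * e_rms[tissue] ** 2 / rho[tissue]  # W/kg
            mass = rho[tissue] * voxel_volume                             # kg
            return np.sum(local_sar * mass) / np.sum(mass)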

  4. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jianqing [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan)]; Fujiwara, Osamu [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan)]; Kodera, Sachiko [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan)]; Watanabe, Soichi [National Institute of Information and Communications Technology, Nukui-kitamachi, Koganei, Tokyo 184-8795 (Japan)]

    2006-09-07

    Due to the difficulty of measuring the specific absorption rate (SAR) in an actual human body under electromagnetic radio-frequency (RF) exposure, various compliance assessment procedures use the incident electric field or power density as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly from numerical calculations for highly simplified human models decades ago, and its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have highly accurate human models and numerical code. In this study, we made a detailed error analysis of the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify to what extent the whole-body average SAR may reach using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit by nearly 30% for the child model in both the resonance frequency band and the 2 GHz band.

  5. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using 168Er data. 19 figures, 2 tables.

  6. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  7. Comparative study of dense plasma state equations obtained from different models of average-atom

    International Nuclear Information System (INIS)

    Fromy, Patrice

    1991-01-01

    This research thesis addresses the influence of temperature and density effects on quantities such as pressure, energy and ionisation, and on the energy levels of a body described according to the approximation of an electrically neutral isolated atomic sphere. Starting from the general formalism of density functional theory, with some approximations, the author derives the Thomas-Fermi, Thomas-Fermi-Dirac and Thomas-Fermi-Dirac-Weizsaecker models, and an approximate quantum average-atom model. For each of these models, the author presents an explicit method of resolution, as well as the determination of the different quantities considered in this study. For each quantity studied, the author highlights the effects of temperature and density, as well as the variations between the different models [fr]

  8. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled...... with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show...

  9. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
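
    The population dynamics described above (growing tip and mesenchyme populations, symmetric branching at a cells-per-tip threshold, cessation when the mesenchyme is depleted) can be sketched as a simple simulation; all rates, thresholds and initial values below are placeholder assumptions, not the paper's fitted parameters.

        def simulate_branching(r_tip=0.05, r_mes=-0.01, cells_per_tip=100.0,
                               mes_min=500.0, dt=0.1, t_end=300.0):
            # T: epithelial tip cells, M: mesenchymal cells, tips: branch count.
            # Symmetric branching doubles the tip count whenever the number of
            # tip cells per tip reaches the threshold; branching stops once the
            # mesenchyme falls below its critical size.
            T, M, tips = 100.0, 5000.0, 1
            t = 0.0
            while t < t_end:
                T += dt * r_tip * T   # tip-cell proliferation
                M += dt * r_mes * M   # net mesenchyme growth minus exit
                if M > mes_min and T / tips >= cells_per_tip:
                    tips *= 2         # symmetric branching event
                t += dt
            return tips

        print(simulate_branching())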

  10. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.; Combes, A.N.; Short, K.M.; Lefevre, J.; Hamilton, N.A.; Smyth, I.M.; Little, M.H.; Byrne, H.M.

    2015-01-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.

  11. Boundary Conditions and SGS Models for LES of Wall-Bounded Separated Flows: An Application to Engine-Like Geometries

    Directory of Open Access Journals (Sweden)

    Piscaglia F.

    2013-11-01

    The implementation and the combination of advanced boundary conditions and subgrid scale models for Large Eddy Simulation are presented. The goal is to perform reliable cold flow LES simulations in complex geometries, such as in the cylinders of internal combustion engines. The implementation of an inlet boundary condition for synthetic turbulence generation and of two subgrid scale models, the local dynamic Smagorinsky model and the Wall-Adapting Local Eddy-viscosity (WALE) SGS model, is described. The WALE model is based on the square of the velocity gradient tensor; it accounts for the effects of both the strain and the rotation rate of the smallest resolved turbulent fluctuations, and it recovers the proper y^3 near-wall scaling for the eddy viscosity without requiring a dynamic procedure; hence, it is expected to be a very reliable model for ICE simulation. Model validation has been performed separately on two steady-state flow benches: a backward-facing step geometry and a simple IC engine geometry with one fixed central valve. A discussion on the completeness of the LES simulation (i.e. LES simulation quality) is given.
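
    For reference, the WALE eddy viscosity mentioned above is usually written as (the standard form from the literature, in common notation):

        \nu_t = (C_w \Delta)^2 \,
                \frac{(S^d_{ij} S^d_{ij})^{3/2}}
                     {(\bar{S}_{ij} \bar{S}_{ij})^{5/2} + (S^d_{ij} S^d_{ij})^{5/4}} ,
        \qquad
        S^d_{ij} = \tfrac{1}{2}\,(\bar{g}^2_{ij} + \bar{g}^2_{ji}) - \tfrac{1}{3}\,\delta_{ij}\,\bar{g}^2_{kk} ,

    where \bar{g}_{ij} = \partial \bar{u}_i / \partial x_j and \bar{g}^2_{ij} = \bar{g}_{ik} \bar{g}_{kj}; built this way, \nu_t vanishes as y^3 at a wall, which is the near-wall scaling noted in the abstract.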

  12. Analysis of rainfall and discharge in the SWAT model using the moving average method in the Ciliwung Hulu watershed

    Directory of Open Access Journals (Sweden)

    Defri Satiya Zuma

    2017-09-01

    A watershed can be regarded as a hydrological system that transforms rainwater, as an input, into outputs such as flow and sediment. The transformation of inputs into outputs has specific forms and properties, and involves many processes, including processes occurring on the land surface, in river basins, and in the soil and aquifer. This study aimed to apply the SWAT model in the Ciliwung Hulu Watershed and to assess the effect of 3-day, 5-day, 7-day and 10-day average rainfall on the hydrological characteristics of the watershed. The correlation coefficient (r) between rainfall and discharge was positive, indicating a unidirectional relationship between rainfall and discharge in the upstream, midstream and downstream parts of the watershed. The upper-limit ratio of discharge had a downward trend from upstream to downstream, while the lower-limit ratio of discharge had an upward trend from upstream to downstream. This shows that the discharge peak in the Ciliwung Hulu Watershed had a downward trend from upstream to downstream, while the baseflow had an upward trend. The upstream part of the Ciliwung Hulu Watershed had the highest ratios of peak discharge and baseflow, so it requires soil and water conservation and civil engineering measures. The discussion concluded that the SWAT model could be applied well in the Ciliwung Hulu Watershed, and that the average rainfall with the greatest effect on the hydrological characteristics was the 10-day average; for the 10-day average rainfall, all components contributed maximally to river discharge.
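
    The moving-average filter applied to the rainfall series is a plain rolling mean; a sketch of the comparison is below (the file and column names are hypothetical, used only for illustration).

        import pandas as pd

        # Daily rainfall and discharge series indexed by date (hypothetical files).
        rainfall = pd.read_csv("ciliwung_hulu_rainfall.csv",
                               index_col=0, parse_dates=True)["rain_mm"]
        discharge = pd.read_csv("ciliwung_hulu_discharge.csv",
                                index_col=0, parse_dates=True)["q_m3s"]

        # 3-, 5-, 7- and 10-day moving averages, as compared in the study.
        for w in [3, 5, 7, 10]:
            smoothed = rainfall.rolling(window=w, min_periods=w).mean()
            # Correlation of each smoothed series with observed discharge.
            print(w, smoothed.corr(discharge))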

  13. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

    NARCIS (Netherlands)

    Maher, G.D.; Hulshoff, S.J.

    2014-01-01

    The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

  14. Bayesian Averaging over Many Dynamic Model Structures with Evidence on the Great Ratios and Liquidity Trap Risk

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2008-01-01

    A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is

  15. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.

  16. GEM-AQ, an on-line global multiscale chemical weather modelling system: model description and evaluation of gas phase chemistry processes

    Directory of Open Access Journals (Sweden)

    J. W. Kaminski

    2008-06-01

    Tropospheric chemistry and air quality processes were implemented on-line in the Global Environmental Multiscale weather prediction model. The integrated model, GEM-AQ, was developed as a platform to investigate chemical weather at scales from global to urban. The current chemical mechanism comprises 50 gas-phase species, 116 chemical and 19 photolysis reactions, and is complemented by a sectional aerosol module with 5 aerosol types. All tracers are advected using the semi-Lagrangian scheme native to GEM. The vertical transport includes parameterized subgrid-scale turbulence and large scale deep convection. Dry deposition is included as a flux boundary condition in the vertical diffusion equation. Wet deposition of gas-phase species is treated in a simplified way, and only below-cloud scavenging is considered. The emissions used include yearly-averaged anthropogenic and monthly-averaged biogenic, ocean, soil, and biomass burning emission fluxes, as well as NOx from lightning. In order to evaluate the ability to simulate seasonal variations and regional distributions of trace gases such as ozone, nitrogen dioxide and carbon monoxide, the model was run for a period of five years (2001–2005) on a global uniform 1.5°×1.5° horizontal resolution domain with 28 hybrid levels extending up to 10 hPa. Model results were compared with observations from satellites, aircraft measurement campaigns and balloon sondes. We find that GEM-AQ is able to capture the spatial details of the chemical fields in the middle and lower troposphere. The modelled ozone consistently shows good agreement with observations, except over tropical oceans. The comparison of carbon monoxide and nitrogen dioxide with satellite measurements emphasizes the need for more accurate, year-specific emission fluxes for biomass burning and anthropogenic sources. Other species also compare well with available observations.

  17. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Wohling, Thomas [NON LANL]

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits predictive capabilities similar to those of the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
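
    The BMA post-processing step amounts to weighting each calibrated model's predictive density by its posterior model probability. A minimal sketch with Gaussian member densities is below; in practice the weights and variances would come from an EM fit to calibration data, and all numbers shown are illustrative.

        import numpy as np
        from scipy.stats import norm

        def bma_predictive_pdf(y, member_means, weights, sigmas):
            # BMA forecast density: a mixture of the members' predictive
            # densities weighted by posterior model probabilities
            # (Gaussian members assumed here for simplicity).
            y = np.atleast_1d(y).astype(float)
            pdf = np.zeros_like(y)
            for mu, w, s in zip(member_means, weights, sigmas):
                pdf += w * norm.pdf(y, loc=mu, scale=s)
            return pdf

        # Example: three soil-hydraulic-model predictions of pressure head (cm).
        means   = [-120.0, -135.0, -128.0]
        weights = [0.5, 0.2, 0.3]   # posterior model probabilities (sum to 1)
        sigmas  = [8.0, 10.0, 9.0]
        print(bma_predictive_pdf(-125.0, means, weights, sigmas))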

  18. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  19. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    Directory of Open Access Journals (Sweden)

    Kirti AREKAR

    2017-12-01

    Stock market volatility depends on three major features: overall volatility, volatility fluctuations, and volatility attention, all of which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility is applied after very high and low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods before and after particularly high stock returns.

  20. New insight of Arctic cloud parameterization from regional climate model simulations, satellite-based, and drifting station data

    Science.gov (United States)

    Klaus, D.; Dethloff, K.; Dorn, W.; Rinke, A.; Wu, D. L.

    2016-05-01

    Cloud observations from the CloudSat and CALIPSO satellites helped to explain the reduced total cloud cover (Ctot) in the atmospheric regional climate model HIRHAM5 with modified cloud physics. Arctic climate conditions are found to be better reproduced with (1) a more efficient Bergeron-Findeisen process and (2) a more generalized subgrid-scale variability of total water content. As a result, the annual cycle of Ctot is improved over sea ice, associated with an almost 14% smaller area average than in the control simulation. The modified cloud scheme reduces the Ctot bias with respect to the satellite observations. Except for autumn, the cloud reduction over sea ice improves low-level temperature profiles compared to drifting station data. The HIRHAM5 sensitivity study highlights the need for improving accuracy of low-level (<700 m) cloud observations, as these clouds exert a strong impact on the near-surface climate.

  1. Radiative forcing and climate metrics for ozone precursor emissions: the impact of multi-model averaging

    Directory of Open Access Journals (Sweden)

    C. R. MacIntosh

    2015-04-01

    Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values for these quantities are the residual of the sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3

  2. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain $\bar{B}$ obey $\bar{\Omega}^{\bar{B}}_{m} + \bar{\Omega}^{\bar{B}}_{R} + \bar{\Omega}^{\bar{B}}_{\Lambda} + \bar{\Omega}^{\bar{B}}_{Q} = 1$, where $\bar{\Omega}^{\bar{B}}_{m}$, $\bar{\Omega}^{\bar{B}}_{R}$ and $\bar{\Omega}^{\bar{B}}_{\Lambda}$ correspond to the standard Friedmannian parameters, while $\bar{\Omega}^{\bar{B}}_{Q}$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  3. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...

  4. Dynamic average modeling of a bidirectional solid state transformer for feasibility studies and real-time implementation

    OpenAIRE

    Martínez Velasco, Juan Antonio; Alepuz Menéndez, Salvador; Gonzalez Molina, Francisco; Martín Arnedo, Jacinto

    2014-01-01

    Detailed switching models of power electronics devices often lead to long computing times, limiting the size of the system to be simulated. This drawback is especially important when the goal is to implement the model in a real-time simulation platform. An alternative is to use dynamic average models (DAM) for analyzing the dynamic behavior of power electronic devices. This paper presents the development of a DAM for a bidirectional solid-state transformer and its implementation in a real-tim...

  5. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    Science.gov (United States)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

    The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additions of up to 6% of the fuel mass. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase, and presents the dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the mixture composition and operating modes. The article also shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  6. Statistical aspects of autoregressive-moving average models in the assessment of radon mitigation

    International Nuclear Information System (INIS)

    Dunn, J.E.; Henschel, D.B.

    1989-01-01

    Radon values, as reflected by hourly scintillation counts, seem dominated by major, pseudo-periodic, random fluctuations. This methodological paper reports a moderate degree of success in modeling these data using relatively simple autoregressive-moving average models to assess the effectiveness of radon mitigation techniques in existing housing. While accounting for the natural correlation of successive observations, familiar summary statistics such as steady state estimates, standard errors, confidence limits, and tests of hypothesis are produced. The Box-Jenkins approach is used throughout. In particular, intervention analysis provides an objective means of assessing the effectiveness of an active mitigation measure, such as a fan off/on cycle. Occasionally, failure to declare a significant intervention has suggested a means of remedial action in the data collection procedure
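
    Intervention analysis of the kind described, a step change in mean with ARMA errors, can be sketched with statsmodels as below; the synthetic series, the step time and the ARMA order are illustrative assumptions, not the paper's data or model.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.arima_process import arma_generate_sample

        rng = np.random.default_rng(1)

        # Synthetic hourly radon counts: ARMA noise around a baseline, with a
        # downward step when a mitigation fan switches on at hour 300.
        n, fan_on = 600, 300
        noise = arma_generate_sample(ar=[1, -0.7], ma=[1.0], nsample=n,
                                     scale=5.0, distrvs=rng.standard_normal)
        counts = 100.0 + noise
        counts[fan_on:] -= 30.0

        # Step intervention regressor: 0 before the fan starts, 1 afterwards.
        step = np.zeros(n)
        step[fan_on:] = 1.0

        # ARMA(1,1) errors plus the step term; the exog coefficient estimates
        # the fan effect while accounting for the autocorrelation of counts.
        fit = ARIMA(counts, exog=step, order=(1, 0, 1)).fit()
        print(fit.summary())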

  7. A comparison of boundary-layer heights inferred from wind-profiler backscatter profiles with diagnostic calculations using regional model forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Baltink, H.K.; Holtslag, A.A.M. [Royal Netherlands Meteorological Inst., KNMI, De Bilt (Netherlands)

    1997-10-01

    From October 1994 through January 1997 the Tropospheric Energy Budget Experiment (TEBEX) was executed by KNMI. The main objectives are to study boundary layer processes and cloud variability on the sub-grid scale of present Global Climate Models and to improve the related sub-grid parametrizations. A suite of instruments was deployed to measure a large number of variables. Measurements to characterize ABL processes were focussed around the 200 m high meteorological observation tower of the KNMI in Cabauw. In the framework of TEBEX a 1290 MHz wind-profiler/RASS was installed in July 1994 at 300 m from the tower. Data collected during TEBEX are used to assess the performance of a Regional Atmospheric Climate Model (RACMO). This climate model also runs in an operational forecast mode once a day. The diagnostic ABL height (h_model) is calculated from the RACMO forecast output. A modified Richardson number method, extended with an excess parcel temperature, is applied for all stability conditions. We present the preliminary results of a comparison of h_model from forecasts with the measured h_TS derived from profiler and sodar data for July 1995. (au)
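
    A bulk-Richardson-number diagnostic with an excess parcel temperature can be sketched as follows; the critical value of 0.25 and the 0.5 K excess are typical choices from the literature, not necessarily RACMO's settings.

        import numpy as np

        G = 9.81  # gravitational acceleration, m s^-2

        def abl_height(z, theta_v, u, v, ri_crit=0.25, theta_excess=0.5):
            # Boundary-layer height: the lowest level where the bulk Richardson
            # number, computed from the surface upward with an excess parcel
            # temperature added at the lowest level, exceeds ri_crit.
            theta_s = theta_v[0] + theta_excess      # perturbed surface parcel
            wind2 = np.maximum(u**2 + v**2, 1e-6)    # avoid division by zero
            ri_b = G * (theta_v - theta_s) * (z - z[0]) / (theta_s * wind2)
            above = np.nonzero(ri_b > ri_crit)[0]
            if above.size == 0:
                return z[-1]
            k = above[0]   # linear interpolation between levels k-1 and k
            f = (ri_crit - ri_b[k - 1]) / (ri_b[k] - ri_b[k - 1])
            return z[k - 1] + f * (z[k] - z[k - 1])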

  8. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    Science.gov (United States)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
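
    The time-averaged MSD tensor generalizes the scalar MSD by retaining the displacement components; a direct (unoptimized) sketch for a single trajectory is below, applied here to a synthetic Brownian walk rather than to any of the four polymer models.

        import numpy as np

        def time_averaged_msd_tensor(traj, lag):
            # traj : (T, d) array of positions (e.g., a polymer's center of mass)
            # lag  : lag time in frames
            # Returns the (d, d) tensor <dx_i dx_j> averaged along the
            # trajectory; its trace is the ordinary time-averaged MSD, and its
            # eigenvectors give the orientation of the diffusivity.
            disp = traj[lag:] - traj[:-lag]          # (T - lag, d) displacements
            return np.einsum('ti,tj->ij', disp, disp) / disp.shape[0]

        # Example: isotropic 3-D Brownian trajectory.
        rng = np.random.default_rng(2)
        traj = np.cumsum(rng.standard_normal((10000, 3)), axis=0)
        print(time_averaged_msd_tensor(traj, lag=10))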

  9. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  10. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  11. A nonlinear structural subgrid-scale closure for compressible MHD. I. Derivation and energy dissipation properties

    Energy Technology Data Exchange (ETDEWEB)

    Vlaykov, Dimitar G., E-mail: Dimitar.Vlaykov@ds.mpg.de [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Dynamik und Selbstorganisation, Am Faßberg 17, D-37077 Göttingen (Germany); Grete, Philipp [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Schmidt, Wolfram [Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, D-21029 Hamburg (Germany); Schleicher, Dominik R. G. [Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160-C (Chile)

    2016-06-15

    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying them, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LESs), the resulting limited resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator [W. K. Yeo (CUP, 1993)] and require no assumptions about the nature of the flow or magnetic field. Thus, the scope of their applicability ranges from the sub- to the hyper-sonic and -Alfvénic regimes. The closures support spectral energy cascades both up and down-scale, as well as direct transfer between kinetic and magnetic resolved and unresolved energy budgets. They implicitly take into account the local geometry, and in particular, the anisotropy of the flow. Their properties are a priori validated in Paper II [P. Grete et al., Phys. Plasmas 23, 062317 (2016)] against alternative closures available in the literature with respect to a wide range of simulation data of homogeneous and isotropic turbulence.

  12. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  13. Dual-component model of respiratory motion based on the periodic autoregressive moving average (periodic ARMA) method

    International Nuclear Information System (INIS)

    McCall, K C; Jeraj, R

    2007-01-01

    A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results have shown that the accuracy of the periodic ARMA model is more strongly dependent on the variations in cycle length than the amplitude of the respiration cycles

  14. Boundary layer structure over areas of heterogeneous heat fluxes

    International Nuclear Information System (INIS)

    Doran, J.C.; Barnes, F.J.; Coulter, R.L.; Crawford, T.L.

    1993-01-01

    In general circulation models (GCMs), some properties of a grid element are necessarily considered homogeneous. That is, for each grid volume there is associated a particular combination of boundary layer depth, vertical profiles of wind and temperature, surface fluxes of sensible and latent heat, etc. In reality, all of these quantities may exhibit significant spatial variations across the grid area, and the larger the area the greater the likely variations. In balancing the benefits of higher resolution against increased computational time and expense, it is useful to consider what the consequences of such subgrid-scale variability may be. Moreover, in interpreting the results of a simulation, one must be able to define an appropriate average value over a grid. There are two aspects of this latter problem: (1) in observations, how does one take a set of discrete or volume-averaged measurements and relate these to properties of the entire domain, and (2) in computations, how can subgrid-scale features be accounted for in the model parameterizations? To address these and related issues, two field campaigns were carried out near Boardman, Oregon, in June 1991 and 1992. These campaigns were designed to measure the surface fluxes of latent and sensible heat over adjacent areas with strongly contrasting surface types and to measure the response of the boundary layer to those fluxes. This paper discusses some initial findings from those campaigns.

  15. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  16. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  17. Large deviations of a long-time average in the Ehrenfest urn model

    Science.gov (United States)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For long observation times a Donsker–Varadhan large deviation principle holds, with a rate function that depends on a and on additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results in the large-N limit. The WKB method also uncovers the (very simple) time history of the system which dominates the contribution of different time histories to the probability distribution.
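
    The quantity studied can be explored numerically by simulating the non-interacting continuous-time EUM and recording the time-averaged occupancy of one urn; a brief sketch is below, with arbitrary parameter values.

        import numpy as np

        def average_occupancy(K=3, N=60, T=200.0, seed=0):
            # Non-interacting continuous-time EUM: each ball independently
            # jumps to a uniformly chosen other urn at unit rate. Returns the
            # time average over [0, T] of the ball count in urn 0.
            rng = np.random.default_rng(seed)
            urn = rng.integers(0, K, size=N)   # initial urn of each ball
            t, integral = 0.0, 0.0
            while t < T:
                dt = min(rng.exponential(1.0 / N), T - t)  # next jump time
                integral += np.count_nonzero(urn == 0) * dt
                t += dt
                if t < T:
                    ball = rng.integers(N)
                    # New urn drawn uniformly from the K-1 other urns.
                    urn[ball] = (urn[ball] + rng.integers(1, K)) % K
            return integral / T

        print(average_occupancy())   # fluctuates around N/K = 20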

  18. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
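
    An average-of-delta statistic as described can be computed by differencing each result against the same patient's previous result and smoothing the deltas in reporting order; a minimal sketch is below (the column names are hypothetical).

        import pandas as pd

        def average_of_delta(results, window=10):
            # results : DataFrame with columns 'patient_id', 'time', 'value'
            # window  : number of sequential deltas averaged (the abstract
            #           suggests roughly 5 to 20 as the useful range)
            results = results.sort_values('time')
            # Delta = current result minus the same patient's previous result.
            delta = results.groupby('patient_id')['value'].diff().dropna()
            # Average consecutive deltas; a persistent shift away from zero
            # flags a change in assay performance (bias).
            return delta.rolling(window=window).mean()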

  19. Improved Large-Eddy Simulation Using a Stochastic Backscatter Model: Application to the Neutral Atmospheric Boundary Layer and Urban Street Canyon Flow

    Science.gov (United States)

    O'Neill, J. J.; Cai, X.; Kinnersley, R.

    2015-12-01

    Large-eddy simulation (LES) provides a powerful tool for developing our understanding of atmospheric boundary layer (ABL) dynamics, which in turn can be used to improve the parameterisations of simpler operational models. However, LES modelling is not without its own limitations - most notably, the need to parameterise the effects of all subgrid-scale (SGS) turbulence. Here, we employ a stochastic backscatter SGS model, which explicitly handles the effects of both forward and reverse energy transfer to/from the subgrid scales, to simulate the neutrally stratified ABL as well as flow within an idealised urban street canyon. In both cases, a clear improvement in LES output statistics is observed when compared with the performance of a SGS model that handles forward energy transfer only. In the neutral ABL case, the near-surface velocity profile is brought significantly closer towards its expected logarithmic form. In the street canyon case, the strength of the primary vortex that forms within the canyon is more accurately reproduced when compared to wind tunnel measurements. Our results indicate that grid-scale backscatter plays an important role in both these modelled situations.

  20. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2014-02-01

    Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v1.2) coupled to the Canadian Land Surface Scheme (CLASS v3.6). In the composite (single-tile) approach, the vegetation attributes of the different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile, and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same

  1. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    Science.gov (United States)

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods which quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of our interest, subjects typically begin follow-up untreated; time-until-treatment, and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, the average effect of treatment among the treated, under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pre-treatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.

  2. Simulations and measurements of adiabatic annular flows in triangular, tight lattice nuclear fuel bundle model

    Energy Technology Data Exchange (ETDEWEB)

    Saxena, Abhishek, E-mail: asaxena@lke.mavt.ethz.ch [ETH Zurich, Laboratory for Nuclear Energy Systems, Department of Mechanical and Process Engineering, Sonneggstrasse 3, 8092 Zürich (Switzerland); Zboray, Robert [Laboratory for Thermal-hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institute, 5232 Villigen PSI (Switzerland); Prasser, Horst-Michael [ETH Zurich, Laboratory for Nuclear Energy Systems, Department of Mechanical and Process Engineering, Sonneggstrasse 3, 8092 Zürich (Switzerland); Laboratory for Thermal-hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institute, 5232 Villigen PSI (Switzerland)

    2016-04-01

    High conversion light water reactors (HCLWR) having triangular, tight-lattice fuel bundles could enable improved fuel utilization compared to present-day LWRs. However, efficient cooling of a tight-lattice bundle still has to be proven. A major concern is the avoidance of high-quality boiling crisis (film dry-out) by the use of efficient functional spacers. For this reason, we have carried out experiments on adiabatic, air-water annular two-phase flows in a tight-lattice, triangular fuel bundle model using generic spacers. A high-spatial-resolution, non-intrusive measurement technology, cold neutron tomography, has been utilized to resolve the distribution of the liquid film thickness on the virtual fuel pin surfaces. Unsteady CFD simulations have also been performed to replicate and compare with the experiments using the commercial code STAR-CCM+. Large eddies have been resolved on the grid level to capture the dominant unsteady flow features expected to drive the liquid film thickness distribution downstream of a spacer, while the subgrid scales have been modeled using the Wall-Adapting Local Eddy-viscosity (WALE) subgrid model. A Volume of Fluid (VOF) method, which directly tracks the interface and does away with closure relationship models for interfacial exchange terms, has also been employed. The present paper shows a first comparison of the measurements with the simulation results.
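
    For reference, the standard WALE eddy-viscosity formula of Nicoud and Ducros, which such simulations evaluate in every cell, can be sketched in a few lines of Python; the model constant and the velocity-gradient example below are conventional illustrative choices, not the STAR-CCM+ implementation used in the paper.

        import numpy as np

        def wale_viscosity(g, delta, C_w=0.325):
            """WALE subgrid viscosity from the resolved velocity gradient
            tensor g[i, j] = du_i/dx_j at a single cell of width delta."""
            S = 0.5 * (g + g.T)                    # resolved strain-rate tensor
            g2 = g @ g
            Sd = 0.5 * (g2 + g2.T) - np.trace(g2) / 3.0 * np.eye(3)
            SS = np.sum(S * S)
            SdSd = np.sum(Sd * Sd)
            denom = SS**2.5 + SdSd**1.25 + 1e-30   # tiny offset avoids 0/0
            return (C_w * delta) ** 2 * SdSd**1.5 / denom

        # Example: combined shear and strain on a 1 mm cell. (For pure shear
        # the tensor Sd vanishes and WALE correctly returns zero viscosity.)
        g = np.array([[0.0, 100.0, 0.0],
                      [20.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
        print("nu_sgs =", wale_viscosity(g, delta=1e-3), "m^2/s")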

  3. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of the different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.
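
    The combination step can be sketched as follows, assuming hypothetical candidate models and using predictive likelihoods as a stand-in for posterior model weights (an operational system would estimate the BMA weights from cross-validation forecasts, e.g., by EM):

        import numpy as np

        # Cross-validation predictive log-likelihoods of three hypothetical
        # candidate models, each using a different lagged climate index.
        loglik = np.array([-120.4, -121.0, -125.2])

        # BMA weights with equal model priors: proportional to the
        # (approximated) marginal likelihood of each candidate model.
        w = np.exp(loglik - loglik.max())
        w /= w.sum()

        # New-season streamflow forecasts (arbitrary units) from each model.
        forecasts = np.array([310.0, 295.0, 340.0])
        print("weights:", np.round(w, 3))
        print("BMA combined forecast:", float(w @ forecasts))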

  4. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  5. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
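
    The Hargreaves step can be made concrete with a short sketch; the coefficient 0.0023 and the toy inputs are the conventional textbook choices, and the study's exact implementation may differ.

        import numpy as np

        def hargreaves_et0(ra, tmax, tmin):
            """Reference evapotranspiration (mm/day) by the standard
            Hargreaves equation; ra is extraterrestrial radiation expressed
            as equivalent evaporation (mm/day)."""
            tmean = 0.5 * (tmax + tmin)
            return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

        # Toy Andean grid cell: strong radiation, large diurnal temperature range.
        et0 = hargreaves_et0(ra=16.5, tmax=18.0, tmin=4.0)
        print("ET0 = %.2f mm/day" % et0)
        # The monthly climatic water balance is then precipitation minus
        # the atmospheric evaporative demand accumulated over the month.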

  6. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic-field model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic-field model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Moreover, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can use updated gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
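
    For context, the classical b-dot detumbling law on which the paper builds commands a magnetic dipole opposing the measured rate of change of the body-frame field; the gain and saturation values in this sketch are placeholders, and the paper's averaging technique itself is not reproduced here.

        import numpy as np

        def bdot_dipole(b_now, b_prev, dt, k=1e4, m_max=0.2):
            """Classical b-dot law: command a dipole (A*m^2) opposing the
            finite-difference rate of change of the magnetic field, clipped
            at the torquer saturation limit m_max."""
            b_dot = (b_now - b_prev) / dt
            return np.clip(-k * b_dot, -m_max, m_max)

        # Two consecutive body-frame magnetometer samples (tesla), 0.1 s apart.
        b0 = np.array([2.0e-5, -1.1e-5, 3.4e-5])
        b1 = np.array([2.1e-5, -1.0e-5, 3.3e-5])
        print("commanded dipole:", bdot_dipole(b1, b0, dt=0.1))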

  7. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    Science.gov (United States)

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇·v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  8. Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    A. Gressent

    2016-05-01

    Full Text Available For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH (non-hydrostatic mesoscale atmospheric model). In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization implies an NOx and O3 decrease on a large scale over the region characterized by a strong lightning activity (up to 25 and 8 %, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 and 2 % for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value according to the known uncertainties in the parameter estimates is at a maximum over continental tropical regions, with ΔNOx [−33.1, +29.7] ppt and ΔO3 [−1.56, +2.16] ppb in January, and ΔNOx [−14.3, +21] ppt and ΔO3 [−1.18, +1.93] ppb in July, mainly depending on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry leading to better NOx and O3 distributions on the large scale and (ii) to focus on other improvements to reduce remaining uncertainties from processes

  9. Influence of the orographic roughness of glacier valleys across the Transantarctic Mountains in an atmospheric regional model

    Energy Technology Data Exchange (ETDEWEB)

    Jourdain, Nicolas C.; Gallee, Hubert [Laboratoire de Glaciologie et Geophysique de l' Environnement, Saint Martin d' Heres (France)

    2011-03-15

    Glacier valleys across the Transantarctic Mountains are not properly taken into account in climate models because of their coarse resolution. Nonetheless, glacier valleys control katabatic winds in this region, and the latter are thought to affect the climate of the Ross Sea sector, from freshwater formation to snow mass balance. The purpose of this paper is to investigate the role of the production of turbulent kinetic energy by the subgrid-scale orography in the Transantarctic Mountains using a 20-km atmospheric regional model. A classical orographic roughness length parametrization is modified to produce either smooth or rough valleys. A one-year simulation shows that katabatic winds in the Transantarctic Mountains are strongly improved using smooth valleys rather than rough valleys. Pressure and temperature fields are affected by the representation of the orographic roughness, specifically in the Transantarctic Mountains and over the Ross Ice Shelf. A smooth representation of escarpment regions shows better agreement with automatic weather station observations than a rough representation. This work stresses the need to improve the representation of subgrid-scale orography to simulate realistic katabatic flows. This paper also provides a way of improving surface winds in an atmospheric model without increasing its resolution. (orig.)

  10. Effects of Resolution on the Simulation of Boundary-layer Clouds and the Partition of Kinetic Energy to Subgrid Scales

    Directory of Open Access Journals (Sweden)

    Anning Cheng

    2010-02-01

    Full Text Available Seven boundary-layer cloud cases are simulated with UCLA-LES (the University of California, Los Angeles large-eddy simulation model) with different horizontal and vertical gridspacings to investigate how the results depend on gridspacing. Some variables are more sensitive to horizontal gridspacing, while others are more sensitive to vertical gridspacing, and still others are sensitive to both horizontal and vertical gridspacings with similar or opposite trends. For cloud-related variables having the opposite dependence on horizontal and vertical gridspacings, changing the gridspacing proportionally in both directions gives the appearance of convergence. In this study, we mainly discuss the impact of subgrid-scale (SGS) kinetic energy (KE) on the simulations with coarsening of horizontal and vertical gridspacings. A running-mean operator is used to separate the KE of the high-resolution benchmark simulations into that of resolved scales of coarse-resolution simulations and that of SGSs. The diagnosed SGS KE is compared with that parameterized by the Smagorinsky-Lilly SGS scheme at various gridspacings. It is found that the parameterized SGS KE for the coarse-resolution simulations is usually underestimated while the resolved KE is unrealistically large, compared to benchmark simulations. However, the sum of resolved and SGS KEs is about the same for simulations with various gridspacings. The partitioning of SGS and resolved heat and moisture transports is consistent with that of SGS and resolved KE, which means that the parameterized transports are underestimated but resolved-scale transports are overestimated. On the whole, energy shifts to large scales as the horizontal gridspacing becomes coarse; hence the size of clouds and the resolved circulation increase, and the clouds become more stratiform-like with an increase in cloud fraction, cloud liquid-water path and surface precipitation; when coarse vertical gridspacing is used, cloud sizes do not
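
    The running-mean scale separation used as a diagnostic above can be sketched on a toy one-dimensional signal; the synthetic velocity field, the filter width, and the use of SciPy's uniform filter are assumptions for illustration (a running mean is not a projection, so a small resolved-SGS cross term remains).

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        rng = np.random.default_rng(0)
        u = np.cumsum(rng.standard_normal(4096))      # toy 1-D velocity signal
        u -= u.mean()

        n = 32                                        # coarse/fine grid ratio
        u_res = uniform_filter1d(u, size=n, mode="wrap")   # resolved scales
        u_sgs = u - u_res                                  # subgrid residual

        ke_res = 0.5 * np.mean(u_res ** 2)
        ke_sgs = 0.5 * np.mean(u_sgs ** 2)
        ke_tot = 0.5 * np.mean(u ** 2)
        print("resolved KE:", ke_res, " SGS KE:", ke_sgs)
        print("sum:", ke_res + ke_sgs, " total:", ke_tot)  # cross term is small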

  11. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
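
    A generic one-dimensional illustration of the ADM idea, assuming a Gaussian filter and Van Cittert iterations (the paper applies the reconstruction to filtered two-fluid-model fields, not to the toy signal below):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def adm_reconstruct(u_bar, sigma, n_iter=5):
            """Approximate deconvolution by Van Cittert iteration:
            u_{k+1} = u_k + (u_bar - G*u_k), with G a Gaussian filter."""
            u = u_bar.copy()
            for _ in range(n_iter):
                u = u + (u_bar - gaussian_filter1d(u, sigma, mode="wrap"))
            return u

        x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        u_true = np.sin(x) + 0.5 * np.sin(16 * x)     # resolved + small scales
        u_bar = gaussian_filter1d(u_true, sigma=3.0, mode="wrap")
        u_adm = adm_reconstruct(u_bar, sigma=3.0)
        print("filtered error:   ", np.abs(u_bar - u_true).max())
        print("deconvolved error:", np.abs(u_adm - u_true).max())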

  12. IMPACT OF BARYONIC PHYSICS ON INTRINSIC ALIGNMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Tenneti, Ananth; Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Feng, Yu, E-mail: vat@andrew.cmu.edu [Berkeley Center for Cosmological Physics, Department of Physics, University of California Berkeley, Berkeley, CA 94720 (United States)

    2017-01-10

    We explore the effects of specific assumptions in the subgrid models of star formation and stellar and active galactic nucleus feedback on intrinsic alignments of galaxies in cosmological simulations of the “MassiveBlack-II” family. Using smaller-volume simulations, we explore the parameter space of the subgrid star formation and feedback model and find remarkable robustness of the observable statistical measures to the details of subgrid physics. The one observational probe most sensitive to modeling details is the distribution of misalignment angles. We hypothesize that the amount of angular momentum carried away by the galactic wind is the primary physical quantity that controls the orientation of the stellar distribution. Our results are also consistent with a similar study by the EAGLE simulation team.

  13. On the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  14. Distribution of average, marginal, and participation tax rates among Czech taxpayers: results from a TAXBEN model

    Czech Academy of Sciences Publication Activity Database

    Dušek, Libor; Kalíšková, Klára; Münich, Daniel

    2013-01-01

    Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA TA ČR(CZ) TD010033 Institutional support: RVO:67985998 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf

  15. Distribution of average, marginal, and participation tax rates among Czech taxpayers: results from a TAXBEN model

    Czech Academy of Sciences Publication Activity Database

    Dušek, Libor; Kalíšková, Klára; Münich, Daniel

    2013-01-01

    Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA MŠk(CZ) SVV 267801/2013 Institutional support: PRVOUK-P23 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf

  16. Filtered Mass Density Function for Subgrid Scale Modeling of Turbulent Diffusion Flames

    National Research Council Canada - National Science Library

    Givi, Peyman

    2002-01-01

    .... These equations were solved with a new Lagrangian Monte Carlo scheme. The model predictions were compared with results obtained via conventional LES closures and with direct numerical simulation (DNS...

  17. Modelling of an intermediate pressure microwave oxygen discharge reactor: from stationary two-dimensional to time-dependent global (volume-averaged) plasma models

    International Nuclear Information System (INIS)

    Kemaneci, Efe; Graef, Wouter; Rahimi, Sara; Van Dijk, Jan; Kroesen, Gerrit; Carbone, Emile; Jimenez-Diaz, Manuel

    2015-01-01

    A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and it is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). We observe agreement of the global model data with several published measurements of microwave-induced oxygen plasmas in both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated and corresponding simulation data based on two distinct models shows agreement on the common parameters. The role of the square wave modulated power input is also investigated within the time-resolved description. (paper)

  18. Land surface modelling in hydrology and meteorology – lessons learned from the Baltic Basin

    Directory of Open Access Journals (Sweden)

    L. P. Graham

    2000-01-01

    Full Text Available By both tradition and purpose, the land parameterization schemes of hydrological and meteorological models differ greatly. Meteorologists are concerned primarily with solving the energy balance, whereas hydrologists are most interested in the water balance. Meteorological climate models typically have a multi-layered soil parameterisation that solves temperature fluxes numerically with diffusive equations. The same approach is carried over to a similar treatment of water transport. Hydrological models are not usually so interested in soil temperatures, but must provide a reasonable representation of soil moisture to get runoff right. To treat the heterogeneity of the soil, many hydrological models use only one layer with a statistical representation of soil variability. Such a hydrological model can be used on large scales while taking subgrid variability into account. Hydrological models also include lateral transport of water – an imperative if river discharge is to be estimated. The concept of a complexity chain for coupled modelling systems is introduced, together with considerations for mixing model components. Under BALTEX (Baltic Sea Experiment) and SWECLIM (Swedish Regional Climate Modelling Programme), a large-scale hydrological model of runoff in the Baltic Basin is used to review atmospheric climate model simulations. This incorporates both the runoff record and hydrological modelling experience into atmospheric model development. Results from two models are shown. A conclusion is that the key to improved models may be less complexity. Perhaps the meteorological models should keep their multi-layered approach for modelling soil temperature, but add a simpler, yet physically consistent, hydrological approach for modelling snow processes and water transport in the soil. Keywords: land surface modelling; hydrological modelling; atmospheric climate models; subgrid variability; Baltic Basin

  19. Representing anthropogenic gross land use change, wood harvest, and forest age dynamics in a global vegetation model ORCHIDEE-MICT v8.4.2

    Science.gov (United States)

    Yue, Chao; Ciais, Philippe; Luyssaert, Sebastiaan; Li, Wei; McGrath, Matthew J.; Chang, Jinfeng; Peng, Shushi

    2018-01-01

    Land use change (LUC) is among the main anthropogenic disturbances in the global carbon cycle. Here we present the model developments in a global dynamic vegetation model, ORCHIDEE-MICT v8.4.2, for a more realistic representation of LUC processes. First, we included gross land use change (primarily shifting cultivation) and forest wood harvest in addition to net land use change. Second, we included sub-grid even-aged land cohorts to represent secondary forests and to keep track of the transient stage of agricultural lands since LUC. The combination of these two features allows the simulation of shifting cultivation with a rotation length involving mainly secondary forests instead of primary ones. Furthermore, a set of decision rules regarding the land cohorts to be targeted in different LUC processes has been implemented. An idealized site-scale simulation has been performed for miombo woodlands in southern Africa, assuming an annual land turnover rate of 5 % of grid cell area between forest and cropland. The result shows that the model can correctly represent forest recovery and cohort aging arising from agricultural abandonment. Such a land turnover process, even though without a net change in land cover, yields carbon emissions largely due to the imbalance between the fast release from forest clearing and the slow uptake from agricultural abandonment. The simulation with sub-grid land cohorts gives lower emissions than without, mainly because the cleared secondary forests have a lower biomass carbon stock than the mature forests that are otherwise cleared when sub-grid land cohorts are not considered. Over the region of southern Africa, the model is able to account for changes in different forest cohort areas along with the historical changes in different LUC activities, including regrowth of old forests when LUC area decreases. Our developments provide possibilities to account for continental or global forest demographic change resulting from past anthropogenic and

  20. Unsteady Flame Embedding

    KAUST Repository

    El-Asrag, Hossam A.

    2011-01-01

    Direct simulation of all the length and time scales relevant to practical combustion processes is computationally prohibitive. When combustion processes are driven by reaction and transport phenomena occurring at the unresolved scales of a numerical simulation, one must introduce a dynamic subgrid model that accounts for the multiscale nature of the problem using information available on a resolvable grid. Here, we discuss a model that captures unsteady flow-flame interactions, including extinction, re-ignition, and history effects, via embedded simulations at the subgrid level. The model efficiently accounts for subgrid flame structure and incorporates detailed chemistry and transport, allowing more accurate prediction of the stretch effect and the heat release. In this chapter we first review the work done in the past thirty years to develop the flame embedding concept. Next we present a formulation of the same concept that is compatible with Large Eddy Simulation in the flamelet regimes. The unsteady flame embedding approach (UFE) treats the flame as an ensemble of locally one-dimensional flames, similar to the flamelet approach. However, a set of elemental one-dimensional flames is used to describe the turbulent flame structure directly at the subgrid level. The calculations employ a one-dimensional unsteady flame model that incorporates unsteady strain rate, curvature, and mixture boundary conditions imposed by the resolved scales. The model is used for closure of the subgrid terms in the context of large eddy simulation. Direct numerical simulation (DNS) data from a flame-vortex interaction problem are used for comparison. © Springer Science+Business Media B.V. 2011.

  1. Large Eddy Simulation and Reynolds-Averaged Navier-Stokes modeling of flow in a realistic pharyngeal airway model: an investigation of obstructive sleep apnea.

    Science.gov (United States)

    Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim

    2008-07-19

    Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have recently been used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, so the flow dynamics are not captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations that occur in the human airway is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of the retropalatal pharynx. Two flow-modeling strategies are employed: steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.

  2. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  3. Non-Markovian closure models for large eddy simulations using the Mori-Zwanzig formalism

    Science.gov (United States)

    Parish, Eric J.; Duraisamy, Karthik

    2017-01-01

    This work uses the Mori-Zwanzig (M-Z) formalism, a concept originating from nonequilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represent nonlocal behavior via a finite memory approximation [Stinis, arXiv:1211.4285 (2012)], the length of which is determined using a heuristic that is related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the M-Z-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse graining is moderate. Large eddy simulations (LESs) of homogeneous isotropic turbulence and the Taylor-Green Vortex show that the M-Z-based models are able to provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Last, LESs of fully developed channel flow demonstrate the applicability of M-Z-based models to nondecaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse graining, highlighting the potential of M-Z-based techniques to define LES closures.

  4. Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield

    Science.gov (United States)

    Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.

    2012-01-01

    The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data were collected from a mechanically back-pressured, constant-area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator, as evidenced by a variation of six duct heights in shock-train length depending on the turbulence model employed. Generally speaking, the turbulence models that did not include an explicit stress limiter more closely

  5. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  6. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Energy Technology Data Exchange (ETDEWEB)

    Soltanzadeh, I. [Tehran Univ. (Iran, Islamic Republic of). Inst. of Geophysics; Azadi, M.; Vakili, G.A. [Atmospheric Science and Meteorological Research Center (ASMERC), Teheran (Iran, Islamic Republic of)

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40-day training sample of forecasts and relative verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast. (orig.)

  7. Using Bayesian Model Averaging (BMA to calibrate probabilistic surface temperature forecasts over Iran

    Directory of Open Access Journals (Sweden)

    I. Soltanzadeh

    2011-07-01

    Full Text Available Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40-day training sample of forecasts and relative verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.

  8. Wind-Farm Forecasting Using the HARMONIE Weather Forecast Model and Bayes Model Averaging for Bias Removal.

    Science.gov (United States)

    O'Brien, Enda; McKinstry, Alastair; Ralph, Adam

    2015-04-01

    Building on previous work presented at EGU 2013 (http://www.sciencedirect.com/science/article/pii/S1876610213016068), more results are now available from a different wind farm in complex terrain in southwest Ireland. The basic approach is to interpolate wind-speed forecasts from an operational weather forecast model (i.e., HARMONIE in the case of Ireland) to the precise location of each wind turbine, and then use Bayes Model Averaging (BMA; with statistical information collected from a prior training period of, e.g., 25 days) to remove systematic biases. Bias-corrected wind-speed forecasts (and associated power-generation forecasts) are then provided twice daily (at 5am and 5pm) out to 30 hours, with each forecast validation fed back to BMA for future learning. 30-hr forecasts from the operational Met Éireann HARMONIE model at 2.5 km resolution have been validated against turbine SCADA observations since January 2014. An extra high-resolution (0.5 km grid-spacing) HARMONIE configuration has been run since November 2014 as an extra member of the forecast "ensemble". A new version of HARMONIE with extra filters designed to stabilize high-resolution configurations has been run since January 2015. Measures of forecast skill and forecast errors will be provided, and the contributions made by the various physical and computational enhancements to HARMONIE will be quantified.

  9. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. The definition is also extended to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, beyond the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  10. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study.

    Science.gov (United States)

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang

    2016-08-16

    To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010. 8 districts in a large metropolitan area in Northern China. 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. Published by the BMJ Publishing Group Limited.

  11. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  12. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  13. The effect of adiabatic and conducting wall boundary conditions on LES of a thermal mixing tee

    International Nuclear Information System (INIS)

    Howard, Richard J.A.; Pasutto, Thomas

    2009-01-01

    In this paper preliminary LES simulations are carried out of the FATHERINO mixing-tee experiment. In this experiment, 80 °C hot water enters a lateral steel pipe, which has a diameter of D = 0.054 m, at a speed of 1.04 m/s and meets 5 °C cold water which enters a perpendicular steel pipe branch, also of diameter D = 0.054 m, but at a lower speed of 0.26 m/s. The modelling of the steel pipe walls is tested by comparing adiabatic and 1-D conducting wall boundary conditions. The numerical grid used contains approximately 440,000 hexahedral elements. The near-wall refinement is not sufficient to resolve the near-wall boundary layer (y+ ≈ 32) and a standard logarithmic boundary condition is used. A method known as the synthetic eddy method is used to generate the turbulent flow at the pipe inlets. Three different LES models are used (Smagorinsky, dynamic Smagorinsky and WALE) to resolve the subgrid turbulent motion beyond the wall grid. An additional test is carried out where no subgrid model is used, with only the wall modelling being applied. The results show that the WALE model generates much less resolved turbulence than the other cases, and this model shows virtually no difference between the two methods of wall thermal modelling. The dynamic Smagorinsky model shows that, downstream of the mixing tee, the lower wall remains at a lower temperature for longer when the adiabatic boundary condition is applied. The Smagorinsky model is found to produce the highest level of resolved temperature fluctuation. For this model the 1-D thermal modelling approach increases the unsteadiness of both the velocity and temperature fields at the onset of the mixing and in the middle of the pipe downstream of the T junction. However, near the lower wall the 1-D thermal modelling approach tends to reduce the unsteadiness. The case with no subgrid modelling shows higher levels of turbulence kinetic energy but lower levels of temperature fluctuation than the cases with

  14. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Embedding complex hydrology in the climate system - towards fully coupled climate-hydrology models

    DEFF Research Database (Denmark)

    Butts, M.; Rasmussen, S.H.; Ridler, M.

    2013-01-01

    Motivated by the need to develop better tools to understand the impact of future management and climate change on water resources, we present a set of studies with the overall aim of developing a fully dynamic coupling between a comprehensive hydrological model, MIKE SHE, and a regional climate...... distributed parameters using satellite remote sensing. Secondly, field data are used to investigate the effects of model resolution and parameter scales for use in a coupled model. Finally, the development of the fully coupled climate-hydrology model is described and some of the challenges associated...... with coupling models for hydrological processes on sub-grid scales of the regional climate model are presented....

  16. Urban runoff (URO) process for MODFLOW 2005: simulation of sub-grid scale urban hydrologic processes in Broward County, FL

    Science.gov (United States)

    Decker, Jeremy D.; Hughes, J.D.

    2013-01-01

    Climate change and sea-level rise could cause substantial changes in urban runoff and flooding in low-lying coastal landscapes. A major challenge for local government officials and decision makers is to translate the potential global effects of climate change into actionable and cost-effective adaptation and mitigation strategies at county and municipal scales. A MODFLOW process is used to represent sub-grid scale hydrology in urban settings to help address these issues. Coupled interception, surface water, depression, and unsaturated zone storage are represented. A two-dimensional diffusive wave approximation is used to represent overland flow. Three different options for representing infiltration and recharge are presented. Additional features include structure, barrier, and culvert flow between adjacent cells, specified stage boundaries, critical flow boundaries, source/sink surface-water terms, and bi-directional runoff to the MODFLOW Surface-Water Routing process. Some abilities of the Urban RunOff (URO) process are demonstrated with a synthetic problem using four land uses and varying cell coverages. Precipitation from a hypothetical storm was applied, and cell-by-cell surface-water depth, groundwater level, infiltration rate, and groundwater recharge rate are shown. Results indicate the URO process has the ability to produce time-varying, water-content-dependent infiltration and leakage, and successfully interacts with MODFLOW.

  17. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in moravian-silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    Full Text Available The goal of the article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian Region, based on air pollution modelling in accordance with the Czech reference methodology SYMOS'97.

  18. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
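
    The weighting step can be illustrated with a small sketch of Akaike weights; the AIC values below are hypothetical, and the averaging of per-site profiles is indicated only schematically.

        import numpy as np

        def akaike_weights(aic):
            """Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
            where delta_i = AIC_i - min(AIC)."""
            delta = np.asarray(aic) - np.min(aic)
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Hypothetical AIC values for clustering models with 1, 2 and 3 clusters.
        aic = [412.7, 408.1, 409.9]
        w = akaike_weights(aic)
        print("model weights:", np.round(w, 3))
        # A model-averaged per-site clustering profile is then the weighted
        # sum of each model's profile: profile = sum_i w[i] * profile_i.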

  19. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We found that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear-layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features where the ARMA technique is efficient in discriminating different stability branches of the system.

  20. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  1. Investigation of multi-dimensional computational models for calculating pollutant transport

    International Nuclear Information System (INIS)

    Pepper, D.W.; Cooper, R.E.; Baker, A.J.

    1980-01-01

    A performance study of five numerical solution algorithms for multi-dimensional advection-diffusion prediction on mesoscale grids was made. Test problems include transport of point and distributed sources, and a simulation of a continuous source. In all cases, analytical solutions are available to assess relative accuracy. The particle-in-cell and second-moment algorithms, both of which employ sub-grid resolution coupled with Lagrangian advection, exhibit superior accuracy in modeling a point source release. For modeling of a distributed source, algorithms based upon the pseudospectral and finite element interpolation concepts, exhibit improved accuracy on practical discretizations

  2. Determining Time-Varying Drivers of Spot Oil Price in a Dynamic Model Averaging Framework

    Directory of Open Access Journals (Sweden)

    Krzysztof Drachal

    2018-05-01

    Full Text Available This article presents results from modelling spot oil prices by Dynamic Model Averaging (DMA). First, based on a literature review and the availability of data, the following oil price drivers were selected: stock price indices, a stock price volatility index, exchange rates, global economic activity, interest rates, supply and demand indicators, and inventory levels. Next, they were included as explanatory variables in various DMA models with different initial parameters. Monthly data between January 1986 and December 2015 have been analyzed. Several variations of DMA models were constructed, because DMA requires the initial setting of certain parameters. Interestingly, DMA turned out to be robust to setting different values for these parameters. It also turned out that the quality of prediction is highest for the model with drivers solely connected with stock market behavior. Drivers connected with macroeconomic fundamental indicators were not found to be as important. This observation can serve as an argument favoring the hypothesis of the increasing financialization of the oil market, at least in the short term. The predictions from other, slightly different modelling variations based on the DMA methodology proved to be broadly consistent with each other. Many of the constructed models outperformed alternative forecasting methods. It was also found that normalization of the initial data, although not necessary for DMA from the theoretical point of view, significantly improves the quality of prediction.
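
    The core weight-update recursion of DMA, with the forgetting factor that makes the weights dynamic, can be sketched as follows; the toy predictive densities are placeholders, and the per-model time-varying-parameter regressions of the full method are omitted.

        import numpy as np

        def dma_weights(pred_lik, alpha=0.99):
            """Raftery-style DMA recursion: the prediction step flattens the
            weights by raising them to the forgetting factor alpha, and the
            update step multiplies by each model's one-step predictive
            likelihood. pred_lik has shape (T, K) for K candidate models."""
            T, K = pred_lik.shape
            w = np.full(K, 1.0 / K)
            path = np.empty((T, K))
            for t in range(T):
                w = w ** alpha
                w /= w.sum()              # prediction (forgetting) step
                w = w * pred_lik[t]
                w /= w.sum()              # Bayesian update step
                path[t] = w
            return path

        rng = np.random.default_rng(1)
        lik = rng.uniform(0.1, 1.0, size=(100, 3))    # toy predictive densities
        print("final weights:", np.round(dma_weights(lik)[-1], 3))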

  3. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. First, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model makes it possible to treat the different converter topologies in a common framework, so all analysis and design tasks for the DC motor can be carried out with a single averaged model that is valid over the whole switching period. Large-signal variations such as motor speed and current, the steady-state behaviour, and the large-signal and small-signal transfer functions are easily obtained from the averaged circuit model.
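
    A minimal sketch of the averaging idea for a generic two-topology switched system (the matrices, duty cycle and input are illustrative, not the paper's converter-motor circuit): the switched dynamics are replaced by x' = (d A1 + (1-d) A2) x + (d B1 + (1-d) B2) u.

```python
# Sketch: replace a switched two-topology system by its duty-cycle-weighted
# average and integrate the resulting time-invariant model to steady state.
import numpy as np

A1 = np.array([[-1.0, -2.0], [2.0, -3.0]])   # dynamics, switch ON
A2 = np.array([[-1.0,  0.0], [0.0, -3.0]])   # dynamics, switch OFF
B1, B2 = np.array([1.0, 0.0]), np.array([0.0, 0.0])
d = 0.4                                      # duty cycle

A = d * A1 + (1.0 - d) * A2                  # averaged, time-invariant model
B = d * B1 + (1.0 - d) * B2
x, u, dt = np.zeros(2), 12.0, 1e-3
for _ in range(20000):                       # explicit Euler to steady state
    x = x + dt * (A @ x + B * u)
print("steady state:", x)
print("analytic check:", -np.linalg.solve(A, B * u))
```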

  4. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of the filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests conducted analyse the ability of the method to capture the chemical filtered flame structure and the front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests show that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
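
    Of the three methods, Van Cittert iteration is the simplest to sketch: f_{k+1} = f_k + beta (g - G f_k), where G is the filter. The illustration below applies it to a synthetic flame-like profile with a Gaussian stand-in for the LES filter; all parameters are illustrative.

```python
# Sketch of Van Cittert iterative deconvolution with a Gaussian filter G.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def van_cittert(g, sigma, beta=1.0, iters=5):
    f = g.copy()
    for _ in range(iters):
        f = f + beta * (g - gaussian_filter1d(f, sigma))   # correct by residual
    return f

x = np.linspace(-1.0, 1.0, 400)
phi = 0.5 * (1.0 + np.tanh(x / 0.05))      # sharp flame-like profile
g = gaussian_filter1d(phi, sigma=10)       # filtered (resolved) field
print("max error, filtered:   ", np.abs(g - phi).max())
print("max error, deconvolved:", np.abs(van_cittert(g, 10) - phi).max())
```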

  5. Forecast of sea surface temperature off the Peruvian coast using an autoregressive integrated moving average model

    Directory of Open Access Journals (Sweden)

    Carlos Quispe

    2013-04-01

    El Niño connects climate, ecosystems and socio-economic activities globally. Since 1980 attempts have been made to predict this event, but so far the statistical and dynamical models remain insufficient. The objective of the present work was therefore to explore, using an autoregressive integrated moving average model, the effect of El Niño on the sea surface temperature (SST) off the Peruvian coast. The work involved five stages: identification, estimation, diagnostic checking, forecasting and validation. Simple and partial autocorrelation functions (ACF and PACF) were used to identify and reformulate the orders of the model parameters, and the Akaike information criterion (AIC) and Schwarz criterion (SC) were used to select the best models during diagnostic checking. Among the main results, an ARIMA(12,0,11) model was proposed, which simulated monthly conditions in agreement with those observed off the Peruvian coast: cold conditions at the end of 2004, and neutral conditions at the beginning of 2005.

  6. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    International Nuclear Information System (INIS)

    Diamant, A; Ybarra, N; Seuntjens, J; El Naqa, I

    2016-01-01

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions among an individual patient’s characteristics and to generate a robust model capable of predicting that patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in other literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible

  8. Bayesian data assimilation for stochastic multiscale models of transport in porous media.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M. (Massachusetts Institute of Technology, Cambridge, MA); van Bloemen Waanders, Bart Gustaaf (Sandia National Laboratories, Albuquerque NM); Parno, Matthew (Massachusetts Institute of Technology, Cambridge, MA); Ray, Jaideep; Lefantzi, Sophia; Salazar, Luke (Sandia National Laboratories, Albuquerque NM); McKenna, Sean Andrew (Sandia National Laboratories, Albuquerque NM); Klise, Katherine A. (Sandia National Laboratories, Albuquerque NM)

    2011-10-01

    We investigate Bayesian techniques that can be used to reconstruct field variables from partial observations. In particular, we target fields that exhibit spatial structures with a large spectrum of lengthscales. Contemporary methods typically describe the field on a grid and estimate structures which can be resolved by it. In contrast, we address the reconstruction of grid-resolved structures as well as estimation of statistical summaries of subgrid structures, which are smaller than the grid resolution. We perform this in two different ways: (a) via a physical (phenomenological), parameterized subgrid model that summarizes the impact of the unresolved scales at the coarse level and (b) via multiscale finite elements, where specially designed prolongation and restriction operators establish the interscale link between the same problem defined on a coarse and fine mesh. The estimation problem is posed as a Bayesian inverse problem. Dimensionality reduction is performed by projecting the field to be inferred on a suitable orthogonal basis set, viz. the Karhunen-Loève expansion of a multi-Gaussian field. We first demonstrate our techniques on the reconstruction of a binary medium consisting of a matrix with embedded inclusions, which are too small to be grid-resolved. The reconstruction is performed using an adaptive Markov chain Monte Carlo method. We find that the posterior distributions of the inferred parameters are approximately Gaussian. We exploit this finding to reconstruct a permeability field with long, but narrow embedded fractures (which are too fine to be grid-resolved) using scalable ensemble Kalman filters; this also allows us to address larger grids. Ensemble Kalman filtering is then used to estimate the values of hydraulic conductivity and specific yield in a model of the High Plains Aquifer in Kansas. Strong conditioning of the spatial structure of the parameters and the non-linear aspects of the water table aquifer create difficulty for the ensemble Kalman
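
    A minimal sketch of the dimensionality-reduction step, a truncated Karhunen-Loève expansion built from the eigendecomposition of an assumed 1-D exponential covariance (grid, length scale and mode count are illustrative):

```python
# Sketch: truncated Karhunen-Loeve expansion of a multi-Gaussian field.
import numpy as np

n, L = 100, 0.2
xs = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / L)   # exponential covariance
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]                       # eigenvalues, descending

m = 10                                               # retained KL modes
xi = np.random.default_rng(1).standard_normal(m)     # coefficients to be inferred
field = V[:, :m] @ (np.sqrt(lam[:m]) * xi)           # one KL realization
print("variance captured by", m, "modes:", lam[:m].sum() / lam.sum())
```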

  9. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum; the latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or the source spectrum, but not both, thereby neglecting part of the complete spectrum. Our derived formula reduces to these two prior methods in the cases of high- and low-energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high-energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
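
    The combined-spectrum formula itself is not reproduced in this record; for orientation, the frequency-average and dose-average electronic stopping powers over an electron spectrum $\varphi(E)$ have the familiar forms

$$
\bar{S}_F \;=\; \frac{\int \varphi(E)\,S(E)\,\mathrm{d}E}{\int \varphi(E)\,\mathrm{d}E},
\qquad
\bar{S}_D \;=\; \frac{\int \varphi(E)\,S(E)^{2}\,\mathrm{d}E}{\int \varphi(E)\,S(E)\,\mathrm{d}E},
$$

    where $S(E)$ is the electronic stopping power; the paper's contribution is to build $\varphi(E)$ as a linear combination of the fluence spectrum and the source spectrum.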

  10. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for perishable products. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying r...
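
    A minimal sketch of the SAA step itself, with an illustrative newsvendor-style cost standing in for the paper's two-stage supply chain objective: the expectation E[f(x, ξ)] is replaced by its mean over N sampled scenarios, and that average is minimized.

```python
# Sketch of sample average approximation (SAA) on a toy stochastic cost.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=3.0, sigma=0.4, size=1000)   # N sampled scenarios
c_over, c_under = 1.0, 4.0                               # illustrative unit costs

def saa_cost(x):
    """Sample average of overage + underage cost at order quantity x."""
    return np.mean(c_over * np.maximum(x - demand, 0.0)
                   + c_under * np.maximum(demand - x, 0.0))

res = minimize_scalar(saa_cost, bounds=(0.0, 100.0), method="bounded")
print("SAA-optimal quantity:", res.x)
```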

  11. Application of Depth-Averaged Velocity Profile for Estimation of Longitudinal Dispersion in Rivers

    Directory of Open Access Journals (Sweden)

    Mohammad Givehchi

    2010-01-01

    River bed profiles and depth-averaged velocities are used as basic data in empirical and analytical equations for estimating the longitudinal dispersion coefficient, which has always been a topic of great interest for researchers. The simple model proposed by Maghrebi is capable of predicting the normalized isovel contours in the cross section of rivers and channels as well as the depth-averaged velocity profiles. The required data in Maghrebi's model are bed profile, shear stress, and roughness distributions. Comparison of depth-averaged velocities and longitudinal dispersion coefficients observed in field data with those predicted by Maghrebi's model revealed that the model has acceptable accuracy in predicting depth-averaged velocity.

  12. Hydrodynamic Modeling to Assess the Impact of Man-Made Fishing Canals on Floodplain Dynamics: A Case Study in the Logone Floodplain

    Science.gov (United States)

    Shastry, A. R.; Durand, M. T.; Fernandez, A.; Phang, S. C.; Hamilton, I.; Laborde, S.; Mark, B. G.; Moritz, M.; Neal, J. C.

    2017-12-01

    The Logone floodplain in northern Cameroon, also known as Yaayre, is an excellent example of a coupled human-natural system because of strong couplings between social, ecological and hydrologic systems. Overbank flow from the Logone River inundates the floodplain (~8000 km2) annually, and the flood is essential for fish populations and the fishers that depend on them for their livelihood. However, a recent trend of construction of fishing canals threatens to change flood dynamics, such as duration and timing of onset, and may reduce fish productivity. Fishers dig canals during the dry season; the canals are used to catch fish by collecting and channeling water during the flood recession. By connecting the floodplain to the river, these fishing canals act as an extension of the river drainage network. The goal of this study is to characterize the relationship between the observed exponential increase in the number of fishing canals and flood dynamics. We modelled the Logone floodplain with a two-dimensional hydrodynamic model with sub-grid parameterizations of channels using LISFLOOD-FP. We use a simplified version of the hydraulic system at a grid-cell size of 1 km, upscaled using a new high-accuracy map of global terrain elevations from the Shuttle Radar Topography Mission (SRTM). Using data from a field survey performed in 2014, 1120 fishing canals were collated and parameterized as 111 sub-grid channels, and the fishnet structure was represented as a combination of weirs and mesh screens; 49 mapped floodplain depressions were also represented as sub-grid channels. In situ discharge observations available at Katoa between 2001 and 2007 were used as input for the model. Preliminary results show that the presence of canals led to a 24% quicker recession of water in the natural depressions, indicating that increasing canal numbers lead to quicker flood recession. We also investigate the effect of an increasing number of fishing canals on flood recession by simulating varying numbers of

  13. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices in deregulated power markets. A successful electricity price prediction tool can help both power producers and consumers plan their bidding strategies. Noting that the support vector regression (SVR) model with the ε-insensitive loss function admits residuals within the boundary values of the ε-tube, we propose a hybrid model, called SVRARIMA, that combines SVR and auto-regressive integrated moving average (ARIMA) models to take advantage of their unique strengths in nonlinear and linear modeling, respectively. A nonlinear analysis of the time series indicates that nonlinear modeling is appropriate, so SVR is applied to capture the nonlinear patterns, while ARIMA models, which have been successfully applied to such problems, estimate the residual regression. The experimental results demonstrate that the proposed model outperforms existing neural-network approaches, traditional ARIMA models and other hybrid models in terms of root mean square error and mean absolute percentage error.

  14. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
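
    The paper averages over a flexible class of time-series models; the generic combination step can be sketched with the textbook BIC approximation to posterior model probabilities (all numbers hypothetical):

```python
# Sketch: BIC-based approximation to BMA weights and the averaged estimate.
import numpy as np

bic = np.array([1012.3, 1009.8, 1015.1])   # hypothetical per-model BICs
rr = np.array([1.08, 1.12, 1.03])          # hypothetical per-model relative risks

w = np.exp(-0.5 * (bic - bic.min()))       # relative evidence of each model
w /= w.sum()                               # approximate P(model | data)
print("weights:", w)
print("BMA relative risk:", w @ rr)
```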

  15. Development of realistic high-resolution whole-body voxel models of Japanese adult males and females of average height and weight, and application of models to radio-frequency electromagnetic-field dosimetry

    International Nuclear Information System (INIS)

    Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio

    2004-01-01

    With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetries of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side; the models are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world and both are the first Asian voxel models (representing average Japanese) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we will also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method

  16. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  17. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in land-vegetation-atmosphere interactions and ecosystem dynamics. In regional-scale climate applications, land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect of losing streams recharging the groundwater. Through the analysis of observed soil moisture data from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (the Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at the regional scale. The CLM with this new parameterization is used to evaluate the impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  18. Numerical calculation of the dispersion of heat and material in rivers by means of a depth-averaged model

    International Nuclear Information System (INIS)

    Pavlovic, R.N.

    1981-01-01

    Nowadays, our rivers are polluted to an ever-increasing degree by industrial and domestic discharges of waste heat and sewage. An important task of environmental protection is to predict the consequences of such pollution in order to be able to plan and perform protective measures. A reliable mathematical model is very helpful for the solution of this problem. In the present paper a depth-averaged model is developed, consisting of a two-dimensional elliptic model component for the direct near-field of a discharge and a separate two-dimensional parabolic model for the calculation of longer river distances further downstream. This model is exhaustively tested by application to a number of laboratory flows and real discharges to rivers. (orig./RW) [de]

  19. Correction of Excessive Precipitation Over Steep and High Mountains in a General Circulation Model

    Science.gov (United States)

    Chao, Winston C.

    2012-01-01

    Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and mesoscale models; it impairs simulation and data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer by the vertical circulations forced by daytime upslope winds, which are driven by the heated boundary layer on subgrid-scale slopes. These upslope winds are associated with the large subgrid-scale topographic variation found over steep and high mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow, combined with the high moisture content in the boundary layer, results in excessive moisture transport toward mountaintops, which in turn gives rise to EPSM. Other possible causes of EPSM that we have investigated include 1) a poorly designed horizontal moisture flux in terrain-following coordinates, 2) the condition for cumulus convection being too easily satisfied at mountaintops, 3) the presence of conditional instability of the computational kind, and 4) the absence of blocked-flow drag. These are all minor or inconsequential. We have parameterized the ventilation effects of the subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in layers higher up when the topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-5 GCM have shown that this largely solves the EPSM problem.

  20. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of the atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  1. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    Science.gov (United States)

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  2. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  3. Scalar energy fluctuations in Large-Eddy Simulation of turbulent flames: Statistical budgets and mesh quality criterion

    Energy Technology Data Exchange (ETDEWEB)

    Vervisch, Luc; Domingo, Pascale; Lodato, Guido [CORIA - CNRS and INSA de Rouen, Technopole du Madrillet, BP 8, 76801 Saint-Etienne-du-Rouvray (France); Veynante, Denis [EM2C - CNRS and Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry (France)

    2010-04-15

    Large-Eddy Simulation (LES) provides space-filtered quantities to compare with measurements, which usually have been obtained using a different filtering operation; hence, numerical and experimental results can be examined side-by-side in a statistical sense only. Instantaneous, space-filtered and statistically time-averaged signals feature different characteristic length-scales, which can be combined in dimensionless ratios. From two canonical manufactured turbulent solutions, a turbulent flame and a passive scalar turbulent mixing layer, the critical values of these ratios under which measured and computed variances (resolved plus sub-grid scale) can be compared without resorting to additional residual terms are first determined. It is shown that actual Direct Numerical Simulation can hardly accommodate a sufficiently large range of length-scales to perform statistical studies of LES filtered reactive scalar-fields energy budget based on sub-grid scale variances; an estimation of the minimum Reynolds number allowing for such DNS studies is given. From these developments, a reliability mesh criterion emerges for scalar LES and scaling for scalar sub-grid scale energy is discussed. (author)

  4. Analysis of the K-epsilon turbulence model

    International Nuclear Information System (INIS)

    Mohammadi, B.; Pironneau, O.

    1993-12-01

    This book is aimed at applied mathematicians interested in numerical simulation of turbulent flows. The book is centered around the k - ε model but it also deals with other models such as subgrid scale models, one equation models and Reynolds Stress models. The reader is expected to have some knowledge of numerical methods for fluids and, if possible, some understanding of fluid mechanics, the partial differential equations used and their variational formulations. This book presents the k - ε method for turbulence in a language familiar to applied mathematicians, stripped bare of all the technicalities of turbulence theory. The model is justified from a mathematical standpoint rather than from a physical one. The numerical algorithms are investigated and some theoretical and numerical results presented. This book should prove an invaluable tool for those studying a subject that is still controversial but very useful for industrial applications. (authors). 71 figs., 200 refs

  5. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  6. Point-by-point model description of average prompt neutron data as a function of total kinetic energy of fission fragments

    International Nuclear Information System (INIS)

    Tudora, A.

    2013-01-01

    The experimental data on average prompt neutron multiplicity as a function of the total kinetic energy of fragments, <ν>(TKE), exhibit, especially in the case of 252Cf(SF), different slopes dTKE/dν and different behaviours at low TKE values. The Point-by-Point (PbP) model can describe these different behaviours. The higher slope dTKE/dν and the flattening of <ν> at low TKE exhibited by some experimental data sets are very well reproduced when the PbP multi-parametric matrix ν(A,TKE) is averaged over a double distribution Y(A,TKE). The lower slope and the almost linear behaviour over the entire TKE range exhibited by other data sets are well described when the same matrix ν(A,TKE) is averaged over a single distribution Y(A). In the case of the average prompt neutron energy in the SCM as a function of TKE, different slopes are likewise obtained by averaging the same PbP matrix ε(A,TKE) over Y(A,TKE) and over Y(A). The results are exemplified for three fissioning systems benefiting from experimental data as a function of TKE: 252Cf(SF), 235U(n_th,f) and 239Pu(n_th,f). In the case of 234U(n,f), it was possible for the first time to calculate <ν>(TKE) and <ε>(TKE) at many incident energies by averaging the PbP multi-parametric matrices over the experimental Y(A,TKE) distributions recently measured at IRMM for 14 incident energies in the range 0.3-5 MeV. The results revealed that the slope dTKE/dν does not vary with incident energy and that the flattening of <ν> at low TKE values is more pronounced at low incident energies. The average model-parameter dependences on TKE resulting from the PbP treatment allow the use of the most probable fragmentation approach, which has the great advantage of providing results at many TKE values in a very short computing time compared to PbP and Monte Carlo treatments. (author)

  7. Characterization of Cloud Water-Content Distribution

    Science.gov (United States)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
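
    A minimal sketch of the estimation step, a maximum likelihood fit of a candidate PDF to water-content samples; the lognormal shape and the synthetic data are assumptions for illustration, with CloudSat retrievals taking the place of the samples in practice.

```python
# Sketch: maximum likelihood fit of a lognormal PDF to water-content samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lwc = rng.lognormal(mean=-2.0, sigma=0.7, size=5000)   # synthetic samples, g/m^3

shape, loc, scale = stats.lognorm.fit(lwc, floc=0)     # MLE with location pinned
print("fitted sigma:", shape, " fitted median:", scale)
```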

  8. Parametric Study of Flow Control Over a Hump Model Using an Unsteady Reynolds- Averaged Navier-Stokes Code

    Science.gov (United States)

    Rumsey, Christopher L.; Greenblatt, David

    2007-01-01

    This is an expanded version of a limited-length paper that appeared at the 5th International Symposium on Turbulence and Shear Flow Phenomena by the same authors. A computational study was performed for steady and oscillatory flow control over a hump model with flow separation to assess how well the steady and unsteady Reynolds-averaged Navier-Stokes equations predict trends due to Reynolds number, control magnitude, and control frequency. As demonstrated in earlier studies, the hump model case is useful because it clearly demonstrates a failing in all known turbulence models: they under-predict the turbulent shear stress in the separated region and consequently reattachment occurs too far downstream. In spite of this known failing, three different turbulence models were employed to determine if trends can be captured even though absolute levels are not. Overall the three turbulence models showed very similar trends as experiment for steady suction, but only agreed qualitatively with some of the trends for oscillatory control.

  9. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    Science.gov (United States)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection-diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scale. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainty giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high-velocity region (mobile zone), while convective effects are neglected in a low-velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection-diffusion equation. Among the available upscaling methodologies, we employ the volume averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell.
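
    For orientation, the simpler mobile-immobile member of this family of upscaled models reads, in its usual one-dimensional form,

$$
\frac{\partial c_m}{\partial t} + v\,\frac{\partial c_m}{\partial x}
= D\,\frac{\partial^{2} c_m}{\partial x^{2}} - \omega\,(c_m - c_{im}),
\qquad
\frac{\partial c_{im}}{\partial t} = \omega\,(c_m - c_{im}),
$$

    with $\omega$ a first-order mass-transfer coefficient; the mobile-mobile model derived here retains advective and dispersive terms in both regions.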

  10. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although applications of Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel. Finally, two approaches for converting the statistical average response output by an SEA analysis into a more useful envelope of response spectra, appropriate for specifying design and test vibration levels for a new vehicle, were evaluated and compared.

  11. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    Science.gov (United States)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM-SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improvement in agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
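
    The actual blending functions are not given in this record; a sketch of the general mechanism, an eddy viscosity interpolated by a monotonic function of the ratio of the turbulence dissipation length scale to the grid size (functional form and constants illustrative):

```python
# Sketch: monotonic LES/RANS blending of eddy viscosity by length-scale ratio.
import numpy as np

def blend(l_turb, delta, c=1.0, p=2.0):
    """~1 where l_turb << delta (RANS), decaying to ~0 where l_turb >> delta (LES)."""
    return np.exp(-((l_turb / (c * delta)) ** p))

def nu_t(nu_rans, nu_les, l_turb, delta):
    f = blend(l_turb, delta)
    return f * nu_rans + (1.0 - f) * nu_les

# near-wall cell (mostly RANS) vs. outer cell (mostly LES):
print(nu_t(1e-3, 1e-4, l_turb=np.array([1e-4, 1e-2]), delta=1e-3))
```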

  12. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
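
    A minimal sketch of the class of rules in question: go long when the price is above its n-day moving average and short otherwise (window length and price path illustrative).

```python
# Sketch of a simple moving average trading rule.
import numpy as np

def ma_rule(prices, n=50):
    prices = np.asarray(prices, dtype=float)
    ma = np.convolve(prices, np.ones(n) / n, mode="valid")   # n-day average
    return np.where(prices[n - 1:] > ma, 1, -1)              # +1 long, -1 short

rng = np.random.default_rng(0)
p = 100.0 * np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(500)))
print("fraction of days long:", (ma_rule(p) == 1).mean())
```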

  13. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
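
    A minimal sketch of the SSA pre-processing step: embed the series in a Hankel trajectory matrix, take an SVD, and reconstruct a smoothed series from the leading components by anti-diagonal averaging (window length and rank illustrative; the paper's clustering extension is not shown).

```python
# Sketch of basic singular spectrum analysis (SSA) reconstruction.
import numpy as np

def ssa_reconstruct(x, L=12, r=3):
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    out, cnt = np.zeros(N), np.zeros(N)
    for j in range(K):                                    # anti-diagonal averaging
        out[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1.0
    return out / cnt

rng = np.random.default_rng(0)
t = np.arange(240)
flow = 10.0 + 3.0 * np.sin(2.0 * np.pi * t / 12.0) + rng.standard_normal(240)
print("residual std after SSA:", (flow - ssa_reconstruct(flow)).std())
```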

  14. Single nuclear transfer strengths and sum rules in the interacting boson-fermion model and in the spectral averaging theory

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1991-01-01

    In the interacting boson-fermion model of collective nuclei, in the symmetry limits of the model appropriate for vibrational, rotational and γ-unstable nuclei, the selection rules for one-particle transfer, the model predictions for the allowed strengths, and the comparison of theory with experiment are briefly reviewed. In the spectral-averaging theory, with the specific example of orbit occupancies, the smoothed forms (linear, or better, a ratio of Gaussians) as determined by central limit theorems, how they provide a good criterion for selecting effective interactions, and the convolution structure of occupancy densities in huge spaces are described. The complementary information provided by nuclear models and statistical laws is brought out. (author). 63 refs., 5 figs.

  15. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Background: Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results: We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p

  16. Assessment of water exchange between a discharge region and the open sea: A comparison of different methodological concepts

    Science.gov (United States)

    Döös, Kristofer; Engqvist, Anders

    2007-09-01

    Two different methods of estimating the water exchange through the Baltic coastal region of Laxemar have been used, consisting of particle trajectories and passive tracers. Water is traced from and to a small discharge region near the coast. The discharge material in this region is treated as zero-dimensional particles or tracers with neutral buoyancy. The real discharge material could be a leakage of radio-nuclides through the sea floor from an underground repository of nuclear waste. Water exchange rates between the discharge region and the model domain are estimated using both forward and backward trajectories as well as passive tracers. The Lagrangian trajectories can account for the time evolution of the water exchange while the tracers give one average age per model grid box. Water exchange times such as residence time, age and transient times have been calculated with trajectories but only the average age (AvA) for tracers. The trajectory calculations provide a more detailed time evolution than the tracers. On the other hand the tracers are integrated "on-line" simultaneously in the sea circulation model with the same time step while the Lagrangian trajectories are integrated "off-line" from the stored model velocities with its inherent temporal resolution, presently 1 h. The sub-grid turbulence is parameterised as the Laplacian diffusion for the passive tracers and with an extra stochastic velocity for trajectories. The importance of the parameterised sub-grid turbulence for the trajectories is estimated to give an extra diffusion of the same order as the Laplacian diffusion by comparing the Lagrangian dispersions with and without parameterisation. The results of the different methods are similar but depend on the chosen diffusivity coefficient with a slightly higher correlation between trajectories and tracers when integrated with a lower diffusivity coefficient.

  17. Tracers vs. trajectories in a coastal region

    Science.gov (United States)

    Engqvist, A.; Döös, K.

    2008-12-01

    Two different methods of estimating the water exchange through a Baltic coastal region have been used, consisting of particle trajectories and passive tracers. Water is traced from and to a small discharge region near the coast. The discharge material in this region is treated as zero dimensional particles or tracers with neutral buoyancy. The real discharge material could be a leakage of radio-nuclides through the sea floor from an underground repository of nuclear waste. Water exchange rates between the discharge region and the model domain are estimated using both forward and backward trajectories as well as passive tracers. The Lagrangian trajectories can account for the time evolution of the water exchange while the tracers give one average age per model grid box. Water exchange times such as residence time, age and transient times have been calculated with trajectories but only the average age (AvA) for tracers. The trajectory calculations provide a more detailed time evolution than the tracers. On the other hand the tracers are integrated "on-line" simultaneously in the sea circulation model with the same time step while the Lagrangian trajectories are integrated "off-line" from the stored model velocities with its inherent temporal resolution, presently one hour. The sub-grid turbulence is parameterised as a Laplacian diffusion for the passive tracers and with an extra stochastic velocity for trajectories. The importance of the parameterised sub-grid turbulence for the trajectories is estimated to give an extra diffusion of the same order as the Laplacian diffusion by comparing the Lagrangian dispersions with and without parameterisation. The results of the different methods are similar but depend on the chosen diffusivity coefficient with a slightly higher correlation between trajectories and tracers when integrated with a lower diffusivity coefficient.

  18. Development of Parallel Code for the Alaska Tsunami Forecast Model

    Science.gov (United States)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion; that is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between the domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest-resolution Digital Elevation Models (DEMs) used by ATFM are 1/3 arc-second. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results, with the long-term aim of tsunami forecasts from source to high-resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast-model database with new DEMs, and will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  19. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    Population aging requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted to the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios: fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.

  20. Reduced fractal model for quantitative analysis of averaged micromotions in mesoscale: Characterization of blow-like signals

    International Nuclear Information System (INIS)

    Nigmatullin, Raoul R.; Toboev, Vyacheslav A.; Lino, Paolo; Maione, Guido

    2015-01-01

    Highlights: • A new approach describes fractal-branched systems with long-range fluctuations. • A reduced fractal model is proposed. • The approach is used to characterize blow-like signals. • The approach is tested on data from different fields. -- Abstract: It has been shown that many micromotions in the mesoscale region are averaged in accordance with their self-similar (geometrical/dynamical) structure. This distinctive feature helps to reduce a wide set of different micromotions describing relaxation/exchange processes to an averaged collective motion, expressed mathematically in a rather general form. This reduction opens new perspectives in the description of different blow-like signals (BLS) in many complex systems. The main characteristic of these signals is a finite duration, also when the generalized reduced function is used for their quantitative fitting. As an example, we quantitatively describe available signals generated by people with bronchial asthma, songs of queen bees, and car engine valves operating in the idling regime. We develop a special treatment procedure based on the eigen-coordinates (ECs) method that allows us to justify the generalized reduced fractal model (RFM) for the description of BLS that can propagate in different complex systems. The obtained describing function is based on the self-similar properties of the different considered micromotions. This kind of cooperative model is proposed here for the first time. In spite of the fact that the nature of the dynamic processes that take place in fractal structures on a mesoscale level is not well understood, the parameters of the RFM fitting function can be used for the construction of calibration curves affected by various external/random factors. Then, the calculated set of fitting parameters of these calibration curves can characterize the BLS of different complex systems affected by those factors. Though the method to construct and analyze the calibration curves goes beyond the scope

  1. Modelling of turbulence and combustion for simulation of gas explosions in complex geometries

    Energy Technology Data Exchange (ETDEWEB)

    Arntzen, Bjoern Johan

    1998-12-31

    This thesis analyses and presents new models for turbulent reactive flows for CFD (Computational Fluid Dynamics) simulation of gas explosions in complex geometries like offshore modules. The course of a gas explosion in a complex geometry is largely determined by the development of turbulence and the accompanying increased combustion rate. To be able to model the process it is necessary to use a CFD code as a starting point, provided with a suitable turbulence and combustion model. The modelling and calculations are done in a three-dimensional finite volume CFD code, where complex geometries are represented by a porosity concept, which gives porosity on the grid cell faces, depending on what is inside the cell. The turbulent flow field is modelled with a k-ε turbulence model. Subgrid models are used for production of turbulence from geometry not fully resolved on the grid. Results from laser Doppler anemometry measurements around obstructions in steady and transient flows have been analysed and the turbulence models have been improved to handle transient, subgrid and reactive flows. The combustion is modelled with a burning velocity model and a flame model which incorporates the burning velocity into the code. Two different flame models have been developed: SIF (Simple Interface Flame model), which treats the flame as an interface between reactants and products, and the β-model where the reaction zone is resolved with about three grid cells. The flame normally starts with a quasi-laminar burning velocity, due to flame instabilities, modelled as a function of flame radius and laminar burning velocity. As the flow field becomes turbulent, the flame uses a turbulent burning velocity model based on experimental data and dependent on turbulence parameters and laminar burning velocity. The laminar burning velocity is modelled as a function of gas mixture, equivalence ratio, pressure and temperature in the reactant. Simulations agree well with experiments.

  2. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
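
    A toy illustration of the averaging step, assuming simple Gaussian likelihoods over a training window; the actual e-Bay approach additionally makes the posterior probability a function of discharge magnitude and timing, which is omitted here, and all data below are synthetic.

    ```python
    import numpy as np

    def posterior_weights(sims, obs, sigma=1.0):
        """Posterior probability that each precipitation-product/model
        combination is 'correct', from Gaussian likelihoods of simulated
        vs. observed discharge over a training window."""
        loglik = -0.5 * np.sum((sims - obs) ** 2, axis=1) / sigma**2
        w = np.exp(loglik - loglik.max())      # subtract max for numerical stability
        return w / w.sum()

    # sims: one row per (precipitation product x hydrological model) combination
    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 50.0, size=365)                      # observed discharge
    sims = obs + rng.normal(0, [[5], [15], [30]], (3, 365))   # 3 ensemble members
    w = posterior_weights(sims, obs, sigma=10.0)
    expected = w @ sims       # law of total probability: expected discharge series
    ```

    With likelihoods this peaked the weights collapse onto the best member; the dynamic scheme in the paper avoids that by letting the weights vary with flow magnitude and timing.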

  3. Category structure determines the relative attractiveness of global versus local averages.

    Science.gov (United States)

    Vogel, Tobias; Carr, Evan W; Davis, Tyler; Winkielman, Piotr

    2018-02-01

    Stimuli that capture the central tendency of presented exemplars are often preferred, a phenomenon also known as the classic beauty-in-averageness effect. However, recent studies have shown that this effect can reverse under certain conditions. We propose that a key variable for such ugliness-in-averageness effects is the category structure of the presented exemplars. When exemplars cluster into multiple subcategories, the global average should no longer reflect the underlying stimulus distributions, and will thereby become unattractive. In contrast, the subcategory averages (i.e., local averages) should better reflect the stimulus distributions, and become more attractive. In 3 studies, we presented participants with dot patterns belonging to 2 different subcategories. Importantly, across studies, we also manipulated the distinctiveness of the subcategories. We found that participants preferred the local averages over the global average when they first learned to classify the patterns into 2 different subcategories in a contrastive categorization paradigm (Experiment 1). Moreover, participants still preferred local averages when first classifying patterns into a single category (Experiment 2) or when not classifying patterns at all during incidental learning (Experiment 3), as long as the subcategories were sufficiently distinct. Finally, as a proof-of-concept, we mapped our empirical results onto predictions generated by a well-known computational model of category learning (the Generalized Context Model, GCM). Overall, our findings emphasize the key role of categorization for understanding the nature of preferences, including any effects that emerge from stimulus averaging. (PsycINFO Database Record (c) 2018 APA, all rights reserved)

  4. Accounting for disagreements on average cone loss rates in retinitis pigmentosa with a new kinetic model: Its relevance for clinical trials.

    Science.gov (United States)

    Baumgartner, W A; Baumgartner, A M

    2016-04-01

    Since 1985, at least nine studies of the average rate of cone loss in retinitis pigmentosa (RP) populations have yielded conflicting average rate constant values (-k), differing by 90-160%. This is surprising since, except for the first two investigations, the Harvard and Johns Hopkins protocols used in these studies were identical with respect to: use of the same exponential decline model, calculation of the average -k from individual patient k values, monitoring patients over similarly large time frames, and excluding data exhibiting floor and ceiling effects. A detailed analysis of Harvard's and Hopkins' protocols and data revealed two subtle differences: (i) Hopkins' use of the half-life t0.5 (or t(1/e)) for expressing patient cone-loss rates rather than k as used by Harvard; (ii) Harvard obtaining substantially more +k from improving fields, due to dormant-cone recovery effects, and more "small -k" values than Hopkins ("small -k" is defined as less than -0.040 year^-1), e.g., 16% +k and 31% small -k vs. Hopkins' 3% and 6%, respectively. Since t0.5 = 0.693/k, it follows that when k = 0, or is very small, t0.5 (or t(1/e)) is respectively infinity or a very large number. This unfortunate mathematical property (which also prevents construction of t0.5 (t(1/e)) histograms spanning -k to +k) caused Hopkins to delete all "small -k" and all +k values due to "strong leverage". Naturally, this contributed to Hopkins' larger average -k. Difference (ii) led us to re-evaluate the Harvard/Hopkins exponential unchanging -k model. In its place we propose a model of increasing biochemical stresses from dying rods on cones during RP progression: increasing oxidative stresses and trophic factor deficiencies (e.g., RdCVF), and RPE malfunction. Our kinetic analysis showed rod loss to follow exponential kinetics with unchanging -k due to constant genetic stresses, thereby providing a theoretical basis for Clarke et al.'s empirical observation of such kinetics with eleven animal models of RP.
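
    A small numpy illustration of the shared kinetic machinery, under the exponential-decline model V(t) = V0·exp(-kt) that both protocols use: k is recovered by log-linear regression, and the half-life transform t0.5 = 0.693/k shows the divergence as k approaches zero that drove the Hopkins exclusions. The data here are synthetic.

    ```python
    import numpy as np

    def decline_rate(years, field_measure):
        """Fit V(t) = V0 * exp(-k*t) by regressing log(V) on t; returns k (year^-1)."""
        slope, _ = np.polyfit(years, np.log(field_measure), 1)
        return -slope

    years = np.arange(0.0, 12.0, 2.0)
    v = 100.0 * np.exp(-0.08 * years)     # synthetic patient record with k = 0.08
    k = decline_rate(years, v)
    print(k, np.log(2) / k)               # k ~ 0.08/yr, half-life ~ 8.7 yr
    print(np.log(2) / 1e-4)               # near-zero k: half-life ~ 6931 yr
    ```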

  5. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposes a method for determination of the PPV from measurements with a kV-meter that measures the average Ū or the average peak Ū_P voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
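
    A sketch of how such a conversion might be applied in practice, following the scheme PPV = (calibration coefficient) × (ripple-dependent conversion factor) × (kV-meter reading); the linear ripple dependence and all numerical coefficients below are placeholders for illustration, not the published regression values.

    ```python
    def ppv_from_reading(reading_kv, ripple, calib=1.0, coeffs=(1.02, -0.05)):
        """PPV = N * k(ripple) * U_reading. Both the linear ripple dependence
        and the coefficient values are illustrative placeholders, not the
        published regression fit."""
        a, b = coeffs
        k_ppv = a + b * ripple            # conversion factor at this voltage ripple
        return calib * k_ppv * reading_kv

    # e.g. an average-peak reading of 81.0 kV at 10% voltage ripple:
    print(ppv_from_reading(81.0, ripple=0.10))
    ```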

  6. SACALCCYL, Calculates the average solid angle subtended by a volume; SACALC2B, Calculates the average solid angle for source-detector geometries

    International Nuclear Information System (INIS)

    Whitcher, Ralph

    2007-01-01

    1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including where the source and detector planes are not parallel. SACALCCYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial, nor parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALCCYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
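
    The Monte Carlo idea is compact enough to sketch. Assuming a coaxial circular source and a circular detector window, the average solid angle is the source-averaged integral of cosθ/r² over the detector area; the estimator below pairs random source and detector points and checks against the closed-form on-axis point-source result. This is a generic illustration, not the SACALC2B implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_disk(radius, n):
        """Uniform points on a disk (sqrt sampling of the radius)."""
        r = radius * np.sqrt(rng.random(n))
        phi = 2.0 * np.pi * rng.random(n)
        return r * np.cos(phi), r * np.sin(phi)

    def avg_solid_angle(src_r, det_r, sep, n=200_000):
        """Average solid angle of a coaxial circular detector (radius det_r,
        distance sep) seen from a uniform circular source (radius src_r):
        Omega = integral over detector of cos(theta)/r^2 dA, cos(theta) = sep/r."""
        xs, ys = sample_disk(src_r, n)        # source points at z = 0
        xd, yd = sample_disk(det_r, n)        # detector points at z = sep
        r2 = (xd - xs) ** 2 + (yd - ys) ** 2 + sep ** 2
        return np.pi * det_r ** 2 * np.mean(sep / r2 ** 1.5)

    print(avg_solid_angle(1e-9, 2.0, 5.0))                  # ~ point source on axis
    print(2.0 * np.pi * (1.0 - 5.0 / np.hypot(5.0, 2.0)))   # closed-form check
    ```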

  7. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    Science.gov (United States)

    da Fonseca, J. D.; del-Castillo-Negrete, D.; Sokolov, I. M.; Caldas, I. L.

    2016-08-01

    A statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift-waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K0, becomes K0 J0(ρ̂), where J0 is the zeroth-order Bessel function and ρ̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for ρ̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K0 J0(ρ̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), Pc, and the probability of trapping in the main drift-wave resonance, Pt. It is shown that Pc provides an upper bound for the escape rate, and that Pt provides a good estimate of the particle trapping rate. The analytical results are compared with direct numerical Monte-Carlo simulations of particle transport.
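
    A minimal implementation of the map itself; the statistical study then samples ρ̂ from its Maxwellian pdf and iterates many such orbits. Variable names and parameter values here are illustrative.

    ```python
    import numpy as np
    from scipy.special import j0

    def gsm_orbit(x0, y0, k0, rho, n_steps):
        """Iterate the gyro-averaged standard map: the standard-map kick K0
        is attenuated by the FLR factor J0(rho)."""
        k_eff = k0 * j0(rho)          # effective perturbation amplitude K0*J0(rho)
        x, y = x0, y0
        orbit = np.empty((n_steps, 2))
        for n in range(n_steps):
            y = y + k_eff * np.sin(x)
            x = (x + y) % (2.0 * np.pi)
            orbit[n] = x, y
        return orbit

    orbit = gsm_orbit(x0=1.0, y0=0.5, k0=1.5, rho=2.0, n_steps=1000)
    ```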

  8. Site-specific dissociation dynamics of H2/D2 on Ag(111) and Co(0001) and the validity of the site-averaging model

    International Nuclear Information System (INIS)

    Hu, Xixi; Jiang, Bin; Xie, Daiqian; Guo, Hua

    2015-01-01

    Dissociative chemisorption of polyatomic molecules on metal surfaces involves high-dimensional dynamics, of which quantum mechanical treatments are computationally challenging. A promising reduced-dimensional approach approximates the full-dimensional dynamics by a weighted average of fixed-site results. To examine the performance of this site-averaging model, we investigate two distinct reactions, namely, hydrogen dissociation on Co(0001) and Ag(111), using accurate first principles potential energy surfaces (PESs). The former has a very low barrier of ∼0.05 eV while the latter is highly activated with a barrier of ∼1.15 eV. These two systems allow the investigation of not only site-specific dynamical behaviors but also the validity of the site-averaging model. It is found that the reactivity is not only controlled by the barrier height but also by the topography of the PES. Moreover, the agreement between the site-averaged and full-dimensional results is much better on Ag(111), though quantitative in neither system. Further quasi-classical trajectory calculations showed that the deviations can be attributed to dynamical steering effects, which are present in both reactions at all energies

  9. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ∼ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, ⟨x²⟩ ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, ⟨v²⟩ ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
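
    The time-averaged MSD on the left-hand side is straightforward to compute from a single trajectory; a sketch with an ordinary Brownian check case, for which ν = 1, β = 0 and the lag dependence is linear:

    ```python
    import numpy as np

    def time_averaged_msd(x, lags):
        """Single-trajectory time-averaged MSD over the given lag times."""
        return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=100_000))          # ordinary Brownian trajectory
    print(time_averaged_msd(x, lags=[1, 10, 100]))   # ~ [1, 10, 100]: linear in the lag
    ```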

  10. Predicting dissolution patterns in variable aperture fractures: 1. Development and evaluation of an enhanced depth-averaged computational model

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, R L; Rajaram, H

    2006-04-21

    Water-rock interactions within variable-aperture fractures can lead to dissolution of fracture surfaces and local alteration of fracture apertures, potentially transforming the transport properties of the fracture over time. Because fractures often provide dominant pathways for subsurface flow and transport, developing models that effectively quantify the role of dissolution on changing transport properties over a range of scales is critical to understanding potential impacts of natural and anthropogenic processes. Dissolution of fracture surfaces is controlled by surface-reaction kinetics and transport of reactants and products to and from the fracture surfaces. We present the development and evaluation of a depth-averaged model of fracture flow and reactive transport that explicitly calculates local dissolution-induced alterations in fracture apertures. The model incorporates an effective mass transfer relationship that implicitly represents the transition from reaction-limited dissolution to transport-limited dissolution. We evaluate the model through direct comparison to previously reported physical experiments in transparent analog fractures fabricated by mating an inert, transparent rough surface with a smooth single crystal of potassium dihydrogen phosphate (KDP), which allowed direct measurement of fracture aperture during dissolution experiments using well-established light transmission techniques [Detwiler et al., 2003]. Comparison of experiments and simulations at different flow rates demonstrates the relative impact of the dimensionless Peclet and Damkohler numbers on fracture dissolution and the ability of the computational model to simulate dissolution. Despite some discrepancies in the small-scale details of dissolution patterns, the simulations predict the evolution of large-scale features quite well for the different experimental conditions. This suggests that our depth-averaged approach to simulating fracture dissolution provides a useful approach for

  11. The natural emergence of the correlation between H2 and star formation rate surface densities in galaxy simulations

    Science.gov (United States)

    Lupi, Alessandro; Bovino, Stefano; Capelo, Pedro R.; Volonteri, Marta; Silk, Joseph

    2018-03-01

    In this study, we present a suite of high-resolution numerical simulations of an isolated galaxy to test a sub-grid framework to consistently follow the formation and dissociation of H2 with non-equilibrium chemistry. The latter is solved via the package KROME, coupled to the mesh-less hydrodynamic code GIZMO. We include the effect of star formation (SF), modelled with a physically motivated prescription independent of H2, supernova feedback and mass-losses from low-mass stars, extragalactic and local stellar radiation, and dust and H2 shielding, to investigate the emergence of the observed correlation between H2 and SF rate surface densities. We present two different sub-grid models and compare them with on-the-fly radiative transfer (RT) calculations, to assess the main differences and limits of the different approaches. We also discuss a sub-grid clumping factor model to enhance the H2 formation, consistent with our SF prescription, which is crucial, at the achieved resolution, to reproduce the correlation with H2. We find that both sub-grid models perform very well relative to the RT simulation, giving comparable results, with moderate differences, but at much lower computational cost. We also find that, while the Kennicutt-Schmidt relation for the total gas is not strongly affected by the different ingredients included in the simulations, the H2-based counterpart is much more sensitive, because of the crucial role played by the dissociating radiative flux and the gas shielding.

  12. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. First, the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 for i ≠ j. Second, the diagonal elements of S are not all identical but s_ij = 0 for i ≠ j, i.e. S = diag(s_11, s_22, …, s_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
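
    For the diagonal-S case the autocovariance can be written down directly: with X_t = Σ_i θ_i w_{t-i} (θ_0 = 1) and independent disturbances of variance σ²_s, Cov(X_t, X_{t+h}) = Σ_i θ_i θ_{i+h} σ²_{t-i}. A small numerical sketch, with made-up coefficients and variances:

    ```python
    import numpy as np

    def ma_autocov(theta, sigma2, t, h):
        """Cov(X_t, X_{t+h}) for X_t = sum_i theta[i]*w_{t-i} (theta[0] = 1)
        with independent disturbances of time-varying variance sigma2[s]."""
        q = len(theta) - 1
        if h > q:
            return 0.0
        return sum(theta[i] * theta[i + h] * sigma2[t - i] for i in range(q - h + 1))

    theta = [1.0, 0.6, 0.3]                    # MA(2) coefficients
    sigma2 = np.linspace(1.0, 2.0, 50)         # heteroskedastic disturbance variances
    g0t = ma_autocov(theta, sigma2, 10, 0)
    g0t1 = ma_autocov(theta, sigma2, 11, 0)
    print(ma_autocov(theta, sigma2, 10, 1) / np.sqrt(g0t * g0t1))   # ACF at lag 1
    ```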

  13. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  14. The Value and Feasibility of Farming Differently Than the Local Average

    OpenAIRE

    Morris, Cooper; Dhuyvetter, Kevin; Yeager, Elizabeth A; Regier, Greg

    2018-01-01

    The purpose of this research is to quantify the value of being different than the local average and the feasibility of distinguishing particular parts of an operation from the local average. Kansas crop farms are broken down by their farm characteristics, production practices, and management performances. An ordinary least squares regression model is used to quantify the value of having different-than-average characteristics, practices, and management performances. The degree farms have distingui...

  15. Multiscale correlations in highly resolved Large Eddy Simulations

    Science.gov (United States)

    Biferale, Luca; Buzzicotti, Michele; Linkmann, Moritz

    2017-11-01

    Understanding multiscale turbulent statistics is one of the key challenges for many modern applied and fundamental problems in fluid dynamics. One of the main obstacles is the existence of anomalously strong non-Gaussian fluctuations, which become more and more important with increasing Reynolds number. In order to assess the performance of LES models in reproducing these extreme events with reasonable accuracy, it is helpful to further understand the statistical properties of the coupling between the resolved and the subgrid scales. We present analytical and numerical results focusing on the multiscale correlations between the subgrid stress and the resolved velocity field, obtained both from LES and filtered DNS data. Furthermore, a comparison is carried out between LES and DNS results concerning the scaling behaviour of higher-order structure functions, using either Smagorinsky or self-similar Fourier sub-grid models. ERC AdG Grant No 339032 NewTURB.

  16. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  17. Forecasting Construction Tender Price Index in Ghana using Autoregressive Integrated Moving Average with Exogenous Variables Model

    Directory of Open Access Journals (Sweden)

    Ernest Kissi

    2018-03-01

    Prices of construction resources keep on fluctuating due to the unstable economic situations that have been experienced over the years. Clients' knowledge of their financial commitments toward their intended project remains the basis for their final decision. The use of a construction tender price index provides a realistic estimate at the early stage of the project. The tender price index (TPI) is influenced by various economic factors, hence several statistical techniques have been employed in forecasting it. Some of these include regression, time series, and vector error correction, among others. However, in recent times the integrated modelling approach is gaining popularity due to its ability to give powerful predictive accuracy. Thus, in line with this assumption, the aim of this study is to apply an autoregressive integrated moving average with exogenous variables (ARIMAX) model in modelling TPI. The results showed that the ARIMAX model has a better predictive ability than the single approach. The study further confirms the earlier position of previous research on the need to use the integrated model technique in forecasting TPI. This model will assist practitioners to forecast the future values of the tender price index. Although the study focuses on the Ghanaian economy, the findings can be broadly applicable to other developing countries which share similar economic characteristics.
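
    A hedged sketch of an ARIMAX fit using the SARIMAX class from statsmodels, which accepts exogenous regressors; the series, the two macroeconomic drivers, and the (1,1,1) order are synthetic placeholders rather than the paper's Ghanaian data and model selection.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    idx = pd.period_range("2000Q1", periods=60, freq="Q")
    rng = np.random.default_rng(0)
    tpi = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, 60)), index=idx)
    exog = pd.DataFrame({"interest_rate": rng.normal(5.0, 1.0, 60),
                         "inflation": rng.normal(2.0, 0.5, 60)}, index=idx)

    res = SARIMAX(tpi, exog=exog, order=(1, 1, 1)).fit(disp=False)
    future_exog = exog.iloc[-4:].to_numpy()   # stand-in for genuinely forecast drivers
    print(res.forecast(steps=4, exog=future_exog))
    ```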

  18. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  19. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of size of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer terms modelling and the LI recognition are validated on analytical and experimental tests. A square base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr

  20. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  1. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    Science.gov (United States)

    Firl, G. J.; Randall, D. A.

    2013-12-01

    The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been

  2. Analysis of experimental data: The average shape of extreme wave forces on monopile foundations and the NewForce model

    DEFF Research Database (Denmark)

    Schløer, Signe; Bredmose, Henrik; Ghadirian, Amin

    2017-01-01

    Experiments with a stiff pile subjected to extreme wave forces typical of offshore wind farm storm conditions are considered. The exceedance probability curves of the nondimensional force peaks and crest heights are analysed. The average force time histories, normalised with their peak values, are compared across the sea states. It is found that the force shapes show a clear similarity when grouped after the values of the normalised peak force F/(ρghR²) and normalised depth h/(gT_p²) and presented in a normalised time scale t/T_a. For the largest force events, slamming can be seen as a distinct 'hat' in the force shape. The predictions of the NewForce model are then compared to the average shapes. For more nonlinear wave shapes, higher-order terms have to be considered in order for the NewForce model to be able to predict the expected shapes.

  3. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold. It is shown that the two common approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
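
    The barycenter approach mentioned here is easy to state in code: average the quaternions after resolving the q versus -q sign ambiguity (both encode the same rotation), then renormalise. This is the approximation that the Riemannian-metric treatment corrects; the sample quaternions below are arbitrary.

    ```python
    import numpy as np

    def quaternion_barycenter(quats):
        """Barycenter estimate of the mean rotation: sign-align unit quaternions,
        take the arithmetic mean, and project back onto the unit sphere."""
        q = np.asarray(quats, dtype=float)
        q = q * np.where(q @ q[0] < 0.0, -1.0, 1.0)[:, None]   # fix q ~ -q ambiguity
        mean = q.mean(axis=0)
        return mean / np.linalg.norm(mean)

    quats = np.array([[0.99, 0.10, 0.00, 0.0],
                      [0.98, 0.15, 0.05, 0.0],
                      [0.99, 0.05, 0.10, 0.0]])
    quats /= np.linalg.norm(quats, axis=1, keepdims=True)
    print(quaternion_barycenter(quats))
    ```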

  4. Direct numerical simulation of turbulent velocity-, pressure- and temperature-fields in channel flows

    International Nuclear Information System (INIS)

    Goetzbach, G.

    1977-10-01

    For the simulation of non-stationary, three-dimensional, turbulent flow and temperature fields in channel flows with constant properties, a method is presented which is based on a finite difference scheme of the complete conservation equations for mass, momentum and enthalpy. The fluxes of momentum and heat within the grid cells are described by sub-grid scale models. The sub-grid scale model for momentum introduced here is for the first time applicable to small Reynolds numbers, rather coarse grids, and channels with space-dependent roughness distributions. (orig.) [de]
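
    The classic sub-grid momentum closure in this family is the Smagorinsky eddy viscosity, nu_t = (Cs*Δ)² |S̄| with |S̄| = sqrt(2 S̄_ij S̄_ij). The 2D sketch below shows that standard closure, not the specific low-Reynolds and roughness extensions of this thesis, and Cs = 0.17 is just a common textbook value.

    ```python
    import numpy as np

    def smagorinsky_viscosity(u, v, dx, cs=0.17):
        """Smagorinsky eddy viscosity nu_t = (Cs*dx)^2 * |S| on a uniform 2D grid,
        with |S| = sqrt(2 S_ij S_ij) built from the resolved strain rate."""
        dudy, dudx = np.gradient(u, dx)       # axis 0 = y, axis 1 = x
        dvdy, dvdx = np.gradient(v, dx)
        s12 = 0.5 * (dudy + dvdx)
        s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + 2.0 * s12**2))
        return (cs * dx) ** 2 * s_mag

    x = np.linspace(0.0, 2.0 * np.pi, 64)
    X, Y = np.meshgrid(x, x)
    u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)   # divergence-free test field
    nu_t = smagorinsky_viscosity(u, v, dx=x[1] - x[0])
    ```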

  5. Integrating Artificial Neural Networks into the VIC Model for Rainfall-Runoff Modeling

    Directory of Open Access Journals (Sweden)

    Changqing Meng

    2016-09-01

    A hybrid rainfall-runoff model was developed in this study by integrating the variable infiltration capacity (VIC) model with artificial neural networks (ANNs). In the proposed model, the prediction interval of the ANN replaces the separate, individual simulation (i.e., single simulation). The spatial heterogeneity of horizontal resolution, subgrid-scale features, and their influence on the streamflow can be assessed according to the VIC model. In the routing module, instead of a simple linear superposition of the streamflow generated from each subbasin, ANNs facilitate nonlinear mappings of the streamflow produced from each subbasin into the total streamflow at the basin outlet. A total of three subbasins were delineated and calibrated independently via the VIC model; daily runoff errors were simulated for each subbasin, then corrected by an ANN bias-correction model. The initial streamflow and corrected runoff from the simulation for individual subbasins serve as inputs to the ANN routing model. The feasibility of this proposed method was confirmed by the performance of its application to a case study on rainfall-runoff prediction in the Jinshajiang River Basin, the headwater area of the Yangtze River.

  6. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31-47. In this paper I would like to present a result which confirms, at least partially, ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...

  7. Large-eddy simulations of turbulence

    National Research Council Canada - National Science Library

    Lesieur, Marcel; Métais, O; Comte, P

    2005-01-01

    While physical-space models are generally more readily applied, spectral models give insight into the requirements and limitations in subgrid-scale modeling and backscattering. A third special feature ...

  8. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  9. Adaptive radiotherapy with an average anatomy model: Evaluation and quantification of residual deformations in head and neck cancer patients

    International Nuclear Information System (INIS)

    Kranen, Simon van; Mencarelli, Angelo; Beek, Suzanne van; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2013-01-01

    Background and purpose: To develop and validate an adaptive intervention strategy for radiotherapy of head-and-neck cancer that accounts for systematic deformations by modifying the planning CT (pCT) to the average misalignments in daily cone beam CT (CBCT) measured with deformable registration (DR). Methods and materials: Daily CBCT scans (808 scans) for 25 patients were retrospectively registered to the pCT with B-spline DR. The average deformation vector field was used to deform the pCT for adaptive intervention. Two strategies were simulated: a single intervention after 10 fractions, and weekly intervention with the average deformation vector field from the previous week. The model was geometrically validated with the residual misalignment of anatomical landmarks both on bony anatomy (BA; automatically generated) and soft tissue (ST; manually identified). Results: Systematic deformations were 2.5/3.4 mm vector length (BA/ST). Single intervention reduced deformations to 1.5/2.7 mm (BA/ST). Weekly intervention resulted in 1.0/2.2 mm (BA/ST) and accounted better for progressive changes. 15 patients had average systematic deformations >2 mm (BA): reductions were 1.1/1.9 mm (single/weekly BA). ST improvements were underestimated due to observer and registration variability. Conclusions: Adaptive intervention with a pCT modified to the average anatomy during treatment successfully reduces systematic deformations. The improved accuracy could possibly be exploited in margin reduction and/or dose escalation.

  10. Average male and female virtual dummy model (BioRID and EvaRID) simulations with two seat concepts in the Euro NCAP low severity rear impact test configuration.

    Science.gov (United States)

    Linder, Astrid; Holmqvist, Kristian; Svensson, Mats Y

    2018-05-01

    Soft tissue neck injuries, also referred to as whiplash injuries, which can lead to long-term suffering, account for more than 60% of the cost of all injuries leading to permanent medical impairment for the insurance companies, with respect to injuries sustained in vehicle crashes. These injuries are sustained in all impact directions; however, they are most common in rear impacts. Injury statistics have since the mid-1960s consistently shown that females are subject to a higher risk of sustaining this type of injury than males, on average twice the risk of injury. Furthermore, some recently developed anti-whiplash systems have been revealed to provide less protection for females than males. The protection of both males and females should be addressed equally when designing and evaluating vehicle safety systems, to ensure maximum safety for everyone. This is currently not the case. The norm for crash test dummies representing humans in crash test laboratories is an average male. The female part of the population is not represented in tests performed by consumer information organisations such as NCAP or in regulatory tests, due to the absence of a physical dummy representing an average female. Recently, the world's first virtual model of an average female crash test dummy was developed. In this study, simulations were run with both this model and an average male dummy model, seated in a simplified model of a vehicle seat. The results of the simulations were compared to earlier published results from simulations run in the same test set-up with a vehicle concept seat. The three crash pulse severities of the Euro NCAP low severity rear impact test were applied. The motion of the neck, head and upper torso was analysed, in addition to the accelerations and the Neck Injury Criterion (NIC). Furthermore, the response of the virtual models was compared to the response of volunteers, and the response of the average male model to that of a physical dummy.

  11. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.

  12. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

    We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T⟨Dur⟩. The observed average "aligned T⟨Dur⟩" profile for 32 bright bursts with intermediate durations (16-40 s) has a sharp rise (within the first 20% of T⟨Dur⟩) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝ T^(-1.4)) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νF_ν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νF_ν distribution given by ∼680 - 600(T/T⟨Dur⟩) keV. Since both the temporal profile and the peak energy are linear functions, on average the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain.
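
    The "aligned" averaging is essentially a rescale-and-interpolate operation; a sketch with synthetic fast-rise, linear-decay pulses (the light curves and normalisation choices are illustrative, not the BATSE pipeline):

    ```python
    import numpy as np

    def aligned_average(bursts, n_bins=100):
        """Average burst profile after stretching each light curve onto the
        common normalised time axis t/T<Dur> in [0, 1]."""
        grid = np.linspace(0.0, 1.0, n_bins)
        stacked = []
        for t, counts in bursts:                 # time (s), background-subtracted counts
            tau = (t - t[0]) / (t[-1] - t[0])    # rescale each duration to unity
            stacked.append(np.interp(grid, tau, counts / counts.max()))
        return grid, np.mean(stacked, axis=0)

    bursts = []
    for dur in (16.0, 24.0, 40.0):               # synthetic fast-rise, linear-decay pulses
        t = np.linspace(0.0, dur, 256)
        rise, decay = t / (0.2 * dur), 1.0 - (t - 0.2 * dur) / (0.8 * dur)
        bursts.append((t, np.clip(np.minimum(rise, decay), 0.0, None) + 1e-3))
    grid, mean_profile = aligned_average(bursts)
    ```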

  13. New features to the night sky radiance model illumina: Hyperspectral support, improved obstacles and cloud reflection

    Science.gov (United States)

    Aubé, M.; Simoneau, A.

    2018-05-01

    Illumina is one of the most physically detailed artificial night sky radiance models to date. It has been in continuous development since 2005 [1]. In 2016-17, many improvements were made to the Illumina code, including an overhead cloud scheme, an improved blocking scheme for subgrid obstacles (trees and buildings), and most importantly, a full hyperspectral modeling approach. Code optimization resulted in a significant reduction in execution time, enabling users to run the model on standard personal computers for some applications. After describing the new schemes introduced in the model, we give some examples of applications for a peri-urban and a rural site, both located inside the International Dark Sky reserve of Mont-Mégantic (QC, Canada).

  14. Global (volume-averaged) model of inductively coupled chlorine plasma : influence of Cl wall recombination and external heating on continuous and pulse-modulated plasmas

    NARCIS (Netherlands)

    Kemaneci, E.H.; Carbone, E.A.D.; Booth, J.P.; Graef, W.A.A.D.; Dijk, van J.; Kroesen, G.M.W.

    An inductively coupled radio-frequency plasma in chlorine is investigated via a global (volume-averaged) model, both in continuous and square wave modulated power input modes. After the power is switched off (in a pulsed mode) an ion–ion plasma appears. In order to model this phenomenon, a novel

  15. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
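
    The overestimation by the daily-average-input approach follows from Jensen's inequality: photosynthesis is a concave (saturating) function of light, so applying the response to the mean input exceeds averaging the instantaneous rates. A small demonstration with a rectangular-hyperbola light response; the parameter values and the sinusoidal light course are arbitrary.

    ```python
    import numpy as np

    def photosynthesis(par, p_max=20.0, k=400.0):
        """Saturating (rectangular-hyperbola) light response."""
        return p_max * par / (k + par)

    hours = np.linspace(0.0, 24.0, 49)                       # half-hourly steps
    par = np.clip(1500.0 * np.sin(np.pi * (hours - 6.0) / 12.0), 0.0, None)
    gpp_resolved = photosynthesis(par).mean()    # IDM/SDM-like: resolve the diurnal cycle
    gpp_daily_in = photosynthesis(par.mean())    # SADM-like: daily-average input
    print(gpp_daily_in / gpp_resolved)           # > 1, i.e. systematic overestimation
    ```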

  16. Sediment transport modelling in a distributed physically based hydrological catchment model

    Directory of Open Access Journals (Sweden)

    M. Konz

    2011-09-01

    Bedload sediment transport and erosion processes in channels are important components of water-induced natural hazards in alpine environments. A raster-based distributed hydrological model, TOPKAPI, has been further developed to support continuous simulations of river bed erosion and deposition processes. The hydrological model simulates all relevant components of the water cycle and non-linear reservoir methods are applied for water fluxes in the soil, on the ground surface and in the channel. The sediment transport simulations are performed on a sub-grid level, which allows for a better discretization of the channel geometry, whereas water fluxes are calculated on the grid level in order to be CPU efficient. Several transport equations as well as the effects of an armour layer on the transport threshold discharge are considered. Flow resistance due to macro roughness is also considered. The advantage of this approach is the integrated simulation of the entire basin runoff response combined with hillslope-channel coupled erosion and transport simulation. The comparison with the modelling tool SETRAC demonstrates the reliability of the modelling concept. The devised technique is very fast and of comparable accuracy to the more specialised sediment transport model SETRAC.
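
    As an example of the kind of transport equation such a model can use, here is a Meyer-Peter and Mueller-type relation with a threshold Shields stress, where an armour layer can be represented by raising the threshold. This is a generic formula choice for illustration, not necessarily the one used in the TOPKAPI extension.

    ```python
    def mpm_bedload(shields, shields_crit=0.047):
        """Meyer-Peter & Mueller dimensionless bedload transport rate:
        Phi = 8 * (theta - theta_c)^1.5, zero below the threshold.
        An armour layer is represented by raising theta_c."""
        excess = max(shields - shields_crit, 0.0)
        return 8.0 * excess ** 1.5

    print(mpm_bedload(0.08))           # mobile bed
    print(mpm_bedload(0.08, 0.10))     # armoured bed: below threshold, no transport
    ```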

  17. Large-Eddy Simulation of Flow and Pollutant Transport in Urban Street Canyons with Ground Heating

    OpenAIRE

    Li, Xian-Xiang; Britter, Rex E.; Koh, Tieh Yong; Norford, Leslie Keith; Liu, Chun-Ho; Entekhabi, Dara; Leung, Dennis Y. C.

    2009-01-01

    Our study employed large-eddy simulation (LES) based on a one-equation subgrid-scale model to investigate the flow field and pollutant dispersion characteristics inside urban street canyons. Unstable thermal stratification was produced by heating the ground of the street canyon. Using the Boussinesq approximation, thermal buoyancy forces were taken into account in both the Navier-Stokes equations and the transport equation for subgrid-scale turbulent kinetic energy (TKE). The LESs were validated ...

  18. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. There are calculations of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" ratio, as well as the average productivity, degree, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that the growth of the "height/radius" ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing the tank's average productivity and reducing its filling time. Growth of the H/R ratio of a tank of volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and a minimum filling time are reached for the tank of volume 6×10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4×10⁻² m.

  19. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. First, the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 for i ≠ j. Second, the diagonal elements of S are not all identical but s_ij = 0 for i ≠ j, i.e. S = diag(s_11, s_22, …

  20. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
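
    The stated result is easy to verify numerically: for weighting functions f and g, the difference between the two weighted averages of x equals Cov_g(x, f/g) / E_g[f/g], where the moments are taken under the g-weighting. A quick numpy check with arbitrary synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)                # the variable being averaged
    f = rng.uniform(0.5, 1.5, size=1000)     # first weighting function
    g = rng.uniform(0.5, 1.5, size=1000)     # second weighting function

    diff = np.average(x, weights=f) - np.average(x, weights=g)

    r = f / g                                # ratio of the weighting functions
    cov_g = (np.average(x * r, weights=g)
             - np.average(x, weights=g) * np.average(r, weights=g))
    print(np.isclose(diff, cov_g / np.average(r, weights=g)))   # True
    ```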

  1. Self-averaging correlation functions in the mean field theory of spin glasses

    International Nuclear Information System (INIS)

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity and we compute it.

  2. A computational study on oblique shock wave-turbulent boundary layer interaction

    Science.gov (United States)

    Joy, Md. Saddam Hossain; Rahman, Saeedur; Hasan, A. B. M. Toufique; Ali, M.; Mitsutake, Y.; Matsuo, S.; Setoguchi, T.

    2016-07-01

    A numerical computation of an oblique shock wave incident on a turbulent boundary layer was performed for a free stream flow of air at M∞ = 2.0 and Re₁ = 10.5×10⁶ m⁻¹. The oblique shock wave was generated by an 8° wedge. A Reynolds-averaged Navier-Stokes (RANS) simulation with the k-ω SST turbulence model was first utilized for the two-dimensional (2D) steady case. The results were compared with the experiment at the same flow conditions. Further, to capture the unsteadiness, a 2D large eddy simulation (LES) with the sub-grid scale model WMLES was performed, which showed the unsteady effects. The frequency of the shock oscillation was computed and was found to be comparable with that of the experimental measurement.

  3. Comparison of a vertically-averaged and a vertically-resolved model for hyporheic flow beneath a pool-riffle bedform

    Science.gov (United States)

    Ibrahim, Ahmad; Steffler, Peter; She, Yuntong

    2018-02-01

    The interaction between surface water and groundwater through the hyporheic zone is recognized to be important as it impacts water quantity and quality in both flow systems. Three-dimensional (3D) modeling is the most complete representation of a real-world hyporheic zone. However, 3D modeling requires extreme computational power and effort, and its sophistication is often significantly compromised by the inability to obtain the required input data accurately. Simplifications are therefore often needed. The objective of this study was to assess the accuracy of the vertically-averaged approximation compared to a more complete vertically-resolved model of the hyporheic zone. The groundwater flow was modeled by either a simple one-dimensional (1D) Dupuit approach or a two-dimensional (2D) horizontal/vertical model in boundary-fitted coordinates, with the latter considered as a reference model. Both groundwater models were coupled with a 1D surface water model via the surface water depth. Applying the two models to an idealized pool-riffle sequence showed that the 1D Dupuit approximation gave results comparable to the reference model in determining the characteristics of the hyporheic zone when the stratum thickness is not very large compared to the surface water depth. Conditions were determined under which the 1D model can provide reliable estimates of the seepage discharge, the upwelling/downwelling discharges and locations, the hyporheic flow, and the residence time.

  4. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
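
    As a concrete illustration (a sketch with assumed parameters, not the paper's data pipeline), the time-averaged MSD of a single price trajectory can be computed as a sliding-window average over the series:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged mean squared displacement of series x at a given lag:
    averages (x[t+lag] - x[t])**2 over all admissible start times t."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

# Geometric Brownian motion as a stand-in for a price series
rng = np.random.default_rng(0)
log_ret = 0.0002 + 0.01 * rng.standard_normal(10_000)
price = 100 * np.exp(np.cumsum(log_ret))

# For GBM, the time-averaged MSD of the log-price grows roughly linearly
print([time_averaged_msd(np.log(price), k) for k in (1, 10, 100, 1000)])
```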

  5. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    Science.gov (United States)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

    Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers wide, often covering areas that include combinations of land, ocean and coastal zones and areas of significant topographic, land cover, and population density variations. To improve understanding of the scales of atmospheric CO2 variability and the representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase over typical global simulations of atmospheric composition, allowing new insight into small-scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half-degree resolution that have been down-scaled to 10 km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small-scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement

  6. Large Eddy Simulation of the ventilated wave boundary layer

    DEFF Research Database (Denmark)

    Lohmann, Iris P.; Fredsøe, Jørgen; Sumer, B. Mutlu

    2006-01-01

    A Large Eddy Simulation (LES) of (1) a fully developed turbulent wave boundary layer and (2) case 1 subject to ventilation (i.e., suction and injection varying alternately in phase) has been performed, using the Smagorinsky subgrid-scale model to express the subgrid viscosity. Injection was found to slow down the flow in the full vertical extent of the boundary layer, destabilize the flow and decrease the mean bed shear stress significantly; whereas suction generally speeds up the flow in the full vertical extent of the boundary layer, stabilizes the flow and increases the mean bed shear stress...

  7. Modeling jet and outflow feedback during star cluster formation

    Energy Technology Data Exchange (ETDEWEB)

    Federrath, Christoph [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, VIC 3800 (Australia); Schrön, Martin [Department of Computational Hydrosystems, Helmholtz Centre for Environmental Research-UFZ, Permoserstr. 15, D-04318 Leipzig (Germany); Banerjee, Robi [Hamburger Sternwarte, Gojenbergsweg 112, D-21029 Hamburg (Germany); Klessen, Ralf S., E-mail: christoph.federrath@monash.edu [Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik, Albert-Ueberle-Strasse 2, D-69120 Heidelberg (Germany)

    2014-08-01

    Powerful jets and outflows are launched from the protostellar disks around newborn stars. These outflows carry enough mass and momentum to transform the structure of their parent molecular cloud and to potentially control star formation itself. Despite their importance, we have not been able to fully quantify the impact of jets and outflows during the formation of a star cluster. The main problem lies in limited computing power. We would have to resolve the magnetic jet-launching mechanism close to the protostar and at the same time follow the evolution of a parsec-size cloud for a million years. Current computer power and codes fall orders of magnitude short of achieving this. In order to overcome this problem, we implement a subgrid-scale (SGS) model for launching jets and outflows, which demonstrably converges and reproduces the mass, linear and angular momentum transfer, and the speed of real jets, with ∼1000 times lower resolution than would be required without the SGS model. We apply the new SGS model to turbulent, magnetized star cluster formation and show that jets and outflows (1) eject about one-fourth of their parent molecular clump in high-speed jets, quickly reaching distances of more than a parsec, (2) reduce the star formation rate by about a factor of two, and (3) lead to the formation of ∼1.5 times as many stars compared to the no-outflow case. Most importantly, we find that jets and outflows reduce the average star mass by a factor of ∼3 and may thus be essential for understanding the characteristic mass of the stellar initial mass function.

  8. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

    Functional near infrared spectroscopy (fNIRS) is a technique used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. fNIRS measurements, however, are sensitive to artifacts generated by head motion, which makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter to estimate the motionless signal from the motion-corrupted signal. Results are compared to the previously reported autoregressive (AR) model based approach and show that ARMA models outperform AR models; we attribute this to the richer structure (more terms) of ARMA relative to AR. We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
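
    A minimal sketch of the general recipe (toy data and assumed model orders, not the paper's fNIRS pipeline; statsmodels' state-space machinery stands in for a hand-rolled Kalman filter):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Synthetic "clean" signal: an ARMA(2,1) process (statsmodels' lag-polynomial
# convention: coefficients of 1 - 1.5L + 0.7L^2 and 1 + 0.4L respectively)
ar = np.array([1.0, -1.5, 0.7])
ma = np.array([1.0, 0.4])
clean = sm.tsa.arma_generate_sample(ar, ma, nsample=n, scale=0.1)

# Motion artifact modeled here crudely as additive white noise
observed = clean + 0.3 * rng.standard_normal(n)

# Fit an ARMA(2,1) model with an additive measurement-error term; the Kalman
# smoother then separates the ARMA state (signal) from the observation noise.
model = sm.tsa.SARIMAX(observed, order=(2, 0, 1), measurement_error=True)
res = model.fit(disp=False)
denoised = res.smoothed_state[0]   # first state component ≈ signal estimate
print(np.corrcoef(clean, denoised)[0, 1])
```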

  9. Valuing structure, model uncertainty and model averaging in vector autoregressive processes

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2004-01-01

    textabstractEconomic policy decisions are often informed by empirical analysis based on accurate econometric modeling. However, a decision-maker is usually only interested in good estimates of outcomes, while an analyst must also be interested in estimating the model. Accurate inference on

  10. Spectral non-equilibrium property in homogeneous isotropic turbulence and its implication in subgrid-scale modeling

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Le [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Zhu, Ying [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Liu, Yangwei, E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Lu, Lipeng [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)

    2015-10-09

    The non-equilibrium property in turbulence is a non-negligible problem in large-eddy simulation but has not yet been systematically considered. The generalization from equilibrium turbulence to non-equilibrium turbulence requires a clear recognition of the non-equilibrium property. As a preliminary step toward this recognition, the present letter defines a typical non-equilibrium process, namely the spectral non-equilibrium process, in homogeneous isotropic turbulence. It is then theoretically investigated by employing the skewness of the grid-scale velocity gradient, which permits the decomposition of the resolved velocity field into an equilibrium part and a time-reversed part. Based on this decomposition, an improved Smagorinsky model is proposed to correct the non-equilibrium behavior of the traditional Smagorinsky model. The present study is expected to shed light on future studies of more generalized non-equilibrium turbulent flows. - Highlights: • A spectral non-equilibrium process in isotropic turbulence is defined theoretically. • A decomposition method is proposed to divide a non-equilibrium turbulence field. • An improved Smagorinsky model is proposed to correct the non-equilibrium behavior.
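
    For context, the traditional Smagorinsky closure that the proposed model corrects computes the subgrid eddy viscosity from the resolved strain rate (a standard formula, quoted here for reference):

$$
\nu_t = (C_s \Delta)^2 \lvert \bar{S} \rvert, \qquad \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
$$

    with $C_s$ the Smagorinsky constant and $\Delta$ the filter width; non-equilibrium corrections of this kind typically act on the effective value of $C_s$.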

  11. A grid-independent EMMS/bubbling drag model for bubbling and turbulent fluidization

    DEFF Research Database (Denmark)

    Luo, Hao; Lu, Bona; Zhang, Jingyuan

    2017-01-01

    The EMMS/bubbling drag model takes the effects of meso-scale structures (i.e. bubbles) into modeling of the drag coefficient and thus improves coarse-grid simulation of bubbling and turbulent fluidized beds. However, its dependence on grid size has not been fully investigated. In this article, we adopt a two-step scheme to extend the EMMS/bubbling model to the sub-grid level. Thus the heterogeneity index, HD, which accounts for the hydrodynamic disparity between homogeneous and heterogeneous fluidization, can be correlated as a function of both local voidage and slip velocity. Simulations over a periodic domain show the new drag model is less sensitive to grid size because of the additional dependence on local slip velocity. When applying the new drag model to simulations of realistic bubbling and turbulent fluidized beds, we find grid-independent results are easier to obtain for high...

  12. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Science.gov (United States)

    2010-10-01

    49 CFR § 525.11 (Exemptions from Average Fuel Economy Standards): Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  13. The performance of FLake in the Met Office Unified Model

    Directory of Open Access Journals (Sweden)

    Gabriel Gerard Rooney

    2013-12-01

    We present results from the coupling of FLake to the Met Office Unified Model (MetUM). The coupling and initialisation are first described, and the results of testing the coupled model in local and global model configurations are presented. These show that FLake has a small statistical impact on screen temperature, but has the potential to modify the weather in the vicinity of areas of significant inland water. Examination of FLake lake ice has revealed that the behaviour of lakes in the coupled model is unrealistic in some areas of significant sub-grid orography. Tests of various modifications to ameliorate this behaviour are presented. The results indicate which of the possible model changes best improve the annual cycle of lake ice. As FLake has been developed and tuned entirely outside the Unified Model system, these results can be interpreted as a useful objective measure of the performance of the Unified Model in terms of its near-surface characteristics.

  14. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for perishable products. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; we then use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution while avoiding the introduction of new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
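
    To make the SAA idea concrete, here is a minimal sketch on an assumed toy problem (a newsvendor-style cost, not the paper's supply-chain model): the expectation in the objective is replaced by an average over sampled scenarios, and the resulting deterministic problem is solved directly.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
c, q = 1.0, 3.0                           # ordering cost, shortage penalty
demand = rng.lognormal(3.0, 0.5, 5000)    # N sampled demand scenarios

def saa_objective(x):
    # Sample-average approximation of E[c*x + q*max(demand - x, 0)]
    return c * x + q * np.mean(np.maximum(demand - x, 0.0))

res = minimize_scalar(saa_objective, bounds=(0, demand.max()), method="bounded")
print(f"SAA-optimal order quantity: {res.x:.1f}")
```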

  15. The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.

    Science.gov (United States)

    Niu, Yuanling; Wang, Yue; Zhou, Da

    2015-12-07

    The phenotypic equilibrium, i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology recently. In the previous literature, theoretical models were used to predict experimental observations of the phenotypic equilibrium, which were often explained through different concepts of stability of the models. Here we present a stochastic multi-phenotype branching model that integrates the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and thus harmonizes the different kinds of average-level stability proposed in these models; and (ii) path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; the average-level stability follows from it by averaging over stochastic samples.

  16. Bondi or not Bondi: the impact of resolution on accretion and drag force modelling for Supermassive Black Holes

    Science.gov (United States)

    Beckmann, R. S.; Slyz, A.; Devriendt, J.

    2018-04-01

    Whilst in galaxy-size simulations, supermassive black holes (SMBH) are entirely handled by sub-grid algorithms, computational power now allows the accretion radius of such objects to be resolved in smaller scale simulations. In this paper, we investigate the impact of resolution on two commonly used SMBH sub-grid algorithms; the Bondi-Hoyle-Lyttleton (BHL) formula for accretion onto a point mass, and the related estimate of the drag force exerted onto a point mass by a gaseous medium. We find that when the accretion region around the black hole scales with resolution, and the BHL formula is evaluated using local mass-averaged quantities, the accretion algorithm smoothly transitions from the analytic BHL formula (at low resolution) to a supply limited accretion (SLA) scheme (at high resolution). However, when a similar procedure is employed to estimate the drag force it can lead to significant errors in its magnitude, and/or apply this force in the wrong direction in highly resolved simulations. At high Mach numbers and for small accretors, we also find evidence of the advective-acoustic instability operating in the adiabatic case, and of an instability developing around the wake's stagnation point in the quasi-isothermal case. Moreover, at very high resolution, and Mach numbers above M_∞ ≥ 3, the flow behind the accretion bow shock becomes entirely dominated by these instabilities. As a result, accretion rates onto the black hole drop by about an order of magnitude in the adiabatic case, compared to the analytic BHL formula.
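
    For reference, the Bondi-Hoyle-Lyttleton accretion rate discussed above is commonly written as (standard form, quoted here for context):

$$
\dot{M}_{\mathrm{BHL}} = \frac{4\pi G^2 M_{\mathrm{BH}}^2\, \rho_\infty}{\left(c_{s,\infty}^2 + v_\infty^2\right)^{3/2}},
$$

    where $\rho_\infty$, $c_{s,\infty}$ and $v_\infty$ are the density, sound speed and relative velocity of the gas far from the black hole; sub-grid implementations evaluate these from local mass-averaged quantities, as described above.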

  17. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism by which humans estimate numerical averages. Participants were presented with 4, 8 or 16 two-digit numbers, serially and rapidly (2 numerals/second), and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set size on accuracy. Moreover, we observed a marked decrease in RT as set size increases and an RT-accuracy tradeoff in the 4-number, but not in the 16-number, condition. These results indicate that, in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/second) we find that, while performance remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population coding underlying intuitive averaging and working-memory-mediated symbolic procedures underlying analytical averaging, with flexible allocation between the two.

  18. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we consider only the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires an accurate average energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs

  19. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure

  20. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  1. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
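
    As a flavor of the rules in question, here is a minimal sketch of one common moving-average rule (assumed form and synthetic data; the paper embeds such rules in a heterogeneous-agent market model): go long when the price is above its L-day moving average, otherwise stay flat.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic price path (geometric random walk)
price = pd.Series(100 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(1000))))

L = 50                                    # moving-average window length
ma = price.rolling(L).mean()
signal = (price > ma).astype(int)         # 1 = long, 0 = flat
strategy_returns = signal.shift(1) * price.pct_change()
print(f"Cumulative return: {(1 + strategy_returns.fillna(0)).prod() - 1:.1%}")
```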

  2. Site-specific dissociation dynamics of H{sub 2}/D{sub 2} on Ag(111) and Co(0001) and the validity of the site-averaging model

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Xixi [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Institute of Theoretical and Computational Chemistry, Key Laboratory of Mesoscopic Chemistry, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing 210093 (China); Jiang, Bin [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Xie, Daiqian, E-mail: dqxie@nju.edu.cn, E-mail: hguo@unm.edu [Institute of Theoretical and Computational Chemistry, Key Laboratory of Mesoscopic Chemistry, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing 210093 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Guo, Hua, E-mail: dqxie@nju.edu.cn, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2015-09-21

    Dissociative chemisorption of polyatomic molecules on metal surfaces involves high-dimensional dynamics, of which quantum mechanical treatments are computationally challenging. A promising reduced-dimensional approach approximates the full-dimensional dynamics by a weighted average of fixed-site results. To examine the performance of this site-averaging model, we investigate two distinct reactions, namely, hydrogen dissociation on Co(0001) and Ag(111), using accurate first principles potential energy surfaces (PESs). The former has a very low barrier of ∼0.05 eV while the latter is highly activated with a barrier of ∼1.15 eV. These two systems allow the investigation of not only site-specific dynamical behaviors but also the validity of the site-averaging model. It is found that the reactivity is not only controlled by the barrier height but also by the topography of the PES. Moreover, the agreement between the site-averaged and full-dimensional results is much better on Ag(111), though quantitative in neither system. Further quasi-classical trajectory calculations showed that the deviations can be attributed to dynamical steering effects, which are present in both reactions at all energies.

  3. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  4. Towards Improving the Efficiency of Bayesian Model Averaging Analysis for Flow in Porous Media via the Probabilistic Collocation Method

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-04-01

    The characterization of flow in subsurface porous media is associated with high uncertainty. To better quantify the uncertainty of groundwater systems, it is necessary to consider model uncertainty. Multi-model uncertainty analysis can be performed in the Bayesian model averaging (BMA) framework. However, BMA analysis via the Monte Carlo method is time consuming because it requires many forward model evaluations. A computationally efficient BMA analysis framework is proposed that uses the probabilistic collocation method to construct a response surface model, where the log hydraulic conductivity field and hydraulic head are expanded into polynomials through Karhunen-Loeve and polynomial chaos methods. A synthetic test is designed to validate the proposed response surface analysis method. The results show that the posterior model weights and the key statistics in the BMA framework can be accurately estimated. The relative errors of the mean and total variance in the BMA analysis results are approximately 0.013% and 1.18%, respectively, while the proposed method can be 16 times more computationally efficient than the traditional BMA method.
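
    A small sketch of the BMA combination step itself (illustrative numbers; in practice the weights come from posterior model evidence): the BMA mean is the weight-averaged model mean, and the BMA variance adds within-model and between-model contributions.

```python
import numpy as np

# Posterior model weights and per-model predictive means/variances (invented)
w = np.array([0.5, 0.3, 0.2])        # must sum to 1
mu = np.array([1.0, 1.4, 0.8])       # each model's predictive mean
var = np.array([0.04, 0.09, 0.05])   # each model's predictive variance

bma_mean = np.sum(w * mu)
# Total variance = expected within-model variance + between-model spread
bma_var = np.sum(w * var) + np.sum(w * (mu - bma_mean) ** 2)
print(bma_mean, bma_var)
```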

  5. Comparison of mass transport using average and transient rainfall boundary conditions

    International Nuclear Information System (INIS)

    Duguid, J.O.; Reeves, M.

    1976-01-01

    A general two-dimensional model for simulation of saturated-unsaturated transport of radionuclides in ground water has been developed and is currently being tested. The model is being applied to study the transport of radionuclides from a waste-disposal site where field investigations are currently under way to obtain the necessary model parameters. A comparison of the amount of tritium transported is made using both average and transient rainfall boundary conditions. The simulations indicate that there is no substantial difference in the transport for the two conditions tested. However, the values of dispersivity used in the unsaturated zone caused more transport above the water table than has been observed under actual conditions. This deficiency should be corrected and further comparisons should be made before average rainfall boundary conditions are used for long-term transport simulations

  6. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. The cross-sectional average length of life (CAL), an aggregate measure of period mortality seen as less sensitive to period changes, has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  7. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Science.gov (United States)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is "volume-averaging", which refers to the fact that lidars do not sample at a single, distinct point but along the entire beam length. This can be detrimental, especially in regions with large velocity gradients such as the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Volume-averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
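
    A minimal sketch of the weighting idea (a Lorentzian weighting function is assumed, as is common for continuous-wave lidars; all numbers invented): the "measured" velocity is the beam-weighted average of the line-of-sight velocity, which smears sharp gradients such as a wake edge.

```python
import numpy as np

def cw_lidar_sample(u_los, focus, zr, half_span=50.0, n=2001):
    """Beam-weighted average of the line-of-sight velocity u_los around the
    focus point, using a Lorentzian weighting of half-width zr (metres)."""
    s = np.linspace(-half_span, half_span, n)   # positions along the beam
    w = (zr / np.pi) / (s**2 + zr**2)           # Lorentzian beam weighting
    w /= w.sum()                                # discrete normalisation
    return np.sum(w * u_los(focus + s))

# Example: a sharp velocity gradient (e.g. a wake edge) is smeared out
u = lambda x: np.where(x < 0.0, 8.0, 4.0)       # step change at x = 0
print(cw_lidar_sample(u, focus=5.0, zr=10.0))   # lies between 4 and 8
```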

  8. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow and transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.

  9. Large Eddy Simulation of Supersonic Boundary Layer Transition over a Flat-Plate Based on the Spatial Mode

    Directory of Open Access Journals (Sweden)

    Suozhu Wang

    2014-02-01

    The large eddy simulation (LES) of spatially evolving supersonic boundary layer transition over a flat plate with freestream Mach number 4.5 is performed in the present work. The Favre-filtered Navier-Stokes equations are used to simulate the large scales, while a dynamic mixed subgrid-scale (SGS) model is used for the subgrid stress. The convective terms are discretized with a fifth-order upwind compact difference scheme, while a sixth-order symmetric compact difference scheme is employed for the diffusive terms. The basic mean flow is obtained from the similarity solution of the compressible laminar boundary layer. In order to ensure the transition from the initial laminar flow to fully developed turbulence, a pair of oblique first-mode perturbations is imposed on the inflow boundary. The whole process of the spatial transition is obtained from the simulation. Through space-time averaging, the variations of typical statistical quantities are analyzed. It is found that the distributions of turbulent Mach number, root-mean-square (rms) fluctuation quantities, and Reynolds stresses along the wall-normal direction at different streamwise locations exhibit self-similarity in the fully developed turbulent region. Finally, the onset and development of large-scale coherent structures through the transition process are depicted.

  10. Averaged head phantoms from magnetic resonance images of Korean children and young adults

    Science.gov (United States)

    Han, Miran; Lee, Ae-Kyoung; Choi, Hyung-Do; Jung, Yong Wook; Park, Jin Seo

    2018-02-01

    Increased use of mobile phones raises concerns about the health risks of electromagnetic radiation. Phantom heads are routinely used for radiofrequency dosimetry simulations, and the purpose of this study was to construct averaged phantom heads for children and young adults. Using magnetic resonance images (MRI), sectioned cadaver images, and a hybrid approach, we initially built template phantoms representing 6-, 9-, 12-, 15-year-old children and young adults. Our subsequent approach revised the template phantoms using 29 averaged items that were identified by averaging the MRI data from 500 children and young adults. In females, the brain size and cranium thickness peaked in the early teens and then decreased. This is contrary to what was observed in males, where brain size and cranium thicknesses either plateaued or grew continuously. The overall shape of brains was spherical in children and became ellipsoidal by adulthood. In this study, we devised a method to build averaged phantom heads by constructing surface and voxel models. The surface model could be used for phantom manipulation, whereas the voxel model could be used for compliance test of specific absorption rate (SAR) for users of mobile phones or other electronic devices.

  11. An intercomparison of regional climate simulations for Europe

    DEFF Research Database (Denmark)

    Déqué, M.; Rowell, D. P.; Lüthi, D.

    2007-01-01

    Ten regional climate models (RCM) have been integrated with the standard forcings of the PRUDENCE experiment: IPCC-SRES A2 radiative forcing and Hadley Centre boundary conditions. The response over Europe, calculated as the difference between the 2071-2100 and the 1961-1990 means, can be viewed as an average over a finite number of years (30). Model uncertainty is due to the fact that the models use different techniques to discretize the equations and to represent sub-grid effects. Radiative uncertainty is due to the fact that IPCC-SRES A2 is merely one hypothesis. Some RCMs have been run with another scenario of greenhouse gas concentration (IPCC-SRES B2). Boundary uncertainty is due to the fact that the regional models have been run under the constraint of the same global model. Some RCMs have been run with other boundary forcings. The contribution of the different sources varies according...

  12. Numerical Simulations of Two-Phase Reacting Flow in a Single-Element Lean Direct Injection (LDI) Combustor Using NCC

    Science.gov (United States)

    Liu, Nan-Suey; Shih, Tsan-Hsing; Wey, C. Thomas

    2011-01-01

    A series of numerical simulations of Jet-A spray reacting flow in a single-element lean direct injection (LDI) combustor have been conducted by using the National Combustion Code (NCC). The simulations have been carried out using the time filtered Navier-Stokes (TFNS) approach ranging from the steady Reynolds-averaged Navier-Stokes (RANS), unsteady RANS (URANS), to the dynamic flow structure simulation (DFS). The sub-grid model employed for turbulent mixing and combustion includes the well-mixed model, the linear eddy mixing (LEM) model, and the filtered mass density function (FDF/PDF) model. The starting condition of the injected liquid spray is specified via empirical droplet size correlation, and a five-species single-step global reduced mechanism is employed for fuel chemistry. All the calculations use the same grid whose resolution is of the RANS type. Comparisons of results from various models are presented.

  13. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a priori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging mixing ratios obtained from logarithmic retrievals.
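
    The core pitfall can be demonstrated in a few lines (synthetic lognormal abundances, not retrieval data): averaging in log space yields the geometric mean, which systematically undershoots the arithmetic mean of the true abundances, and the gap grows with variability.

```python
import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.1, 0.5, 1.0):             # increasing natural variability
    x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
    linear_avg = x.mean()                  # average of abundances
    log_avg = np.exp(np.log(x).mean())     # exp of averaged log-abundances
    print(f"sigma={sigma}: linear={linear_avg:.3f}, "
          f"logarithmic={log_avg:.3f}, bias={log_avg / linear_avg - 1:+.1%}")
```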

  14. On numerical considerations for modeling reactive astrophysical shocks

    International Nuclear Information System (INIS)

    Papatheodore, Thomas L.; Messer, O. E. Bronson

    2014-01-01

    Simulating detonations in astrophysical environments is often complicated by numerical approximations to shock structure. A common prescription to ensure correct detonation speeds and associated quantities is to prohibit burning inside the numerically broadened shock. We have performed a series of simulations to verify the efficacy of this approximation and to understand how resolution and dimensionality might affect its use. Our results show that in one dimension, prohibiting burning in the shock is important wherever the carbon burning length is not resolved, in keeping with the results of Fryxell et al. In two dimensions, we find that the prohibition of shock burning effectively inhibits the development of cellular structure for all but the most highly resolved cases. We discuss the possible impacts this outcome may have on sub-grid models and detonation propagation in models of Type Ia supernovae, including potential impacts on observables.

  15. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among grains at the macroscopic scale are compared to collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
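
    The distinction can be written compactly in discrete form (an illustrative sketch, not the authors' notation). With $n_r$ grains in realization $r$ and values $f_{r,i}$, averaging the per-realization means with equal weight differs from weighting each realization by its particle count:

$$
\frac{1}{R}\sum_{r=1}^{R}\left(\frac{1}{n_r}\sum_{i=1}^{n_r} f_{r,i}\right) \;\neq\; \frac{\sum_{r=1}^{R}\sum_{i=1}^{n_r} f_{r,i}}{\sum_{r=1}^{R} n_r},
$$

    with equality when $n_r$ is the same in every realization, which is precisely the single-realization case described above.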

  16. Source of non-arrhenius average relaxation time in glass-forming liquids

    DEFF Research Database (Denmark)

    Dyre, Jeppe

    1998-01-01

    then discuss a recently proposed model according to which the activation energy of the average relaxation time is determined by the work done in shoving aside the surrounding liquid to create space needed for a "flow event". In this model, which is based on the fact that intermolecular interactions...

  17. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    Energy Technology Data Exchange (ETDEWEB)

    Bernardi, G. [SKA SA, 3rd Floor, The Park, Park Road, Pinelands, 7405 (South Africa); McQuinn, M. [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Greenhill, L. J., E-mail: gbernardi@ska.ac.za [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.

  18. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    International Nuclear Information System (INIS)

    Bashahu, M.

    2003-01-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. Sixteen years of data on the global (H) and diffuse (H_d) radiation, together with data on the bright sunshine hours (N), the fractional cloud cover (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K_d and N/N_d, Ne/8 or K_t. Even though they present adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)

  19. Time Series Forecasting Using Radial Basis Function (RBF) and Auto Regressive Integrated Moving Average (ARIMA) Models

    Directory of Open Access Journals (Sweden)

    DT Wiyanti

    2013-07-01

    One of the most widely developed forecasting approaches today is time-series analysis, a quantitative approach in which past data serve as the basis for forecasting the future. Various studies have proposed methods for solving time-series problems, among them statistics, neural networks, wavelets, and fuzzy systems. These methods have different strengths and weaknesses, but real-world problems are often too complex for any single method to handle well. This article discusses the combination of two methods: Auto Regressive Integrated Moving Average (ARIMA) and Radial Basis Function (RBF) networks. The motivation for combining the two is the assumption that a single method cannot totally identify all the characteristics of a time series. The article presents forecasts of the Indonesian Wholesale Price Index (IHPB) and commodity inflation data, both spanning 2006 to part of 2012 and each comprising six variables. The forecasts of the combined ARIMA-RBF method are compared with those of ARIMA and RBF individually. The analysis shows that the combined ARIMA-RBF model is more accurate than either method alone, as seen in the visual plots, MAPE, and RMSE for all variables in the two test data sets.
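
    A hedged sketch of one common way to build such a hybrid (assumed construction and toy data; the paper's exact combination may differ): ARIMA captures the linear structure, and an RBF-kernel regressor models the remaining nonlinearity in the ARIMA residuals.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.svm import SVR   # RBF-kernel regressor as a stand-in

rng = np.random.default_rng(7)
y = np.sin(np.arange(300) / 10) + 0.1 * rng.standard_normal(300)

# Linear component: ARIMA fit on the raw series
arima = sm.tsa.ARIMA(y, order=(2, 0, 1)).fit()
resid = arima.resid

# Nonlinear component: RBF model of residuals from p lagged residuals
p = 5
X = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
t = resid[p:]
rbf = SVR(kernel="rbf").fit(X, t)

# Hybrid in-sample fit = ARIMA fit + RBF-predicted residual
hybrid_fit = arima.fittedvalues[p:] + rbf.predict(X)
```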

  20. A Capital Mistake? The Neglected Effect of Immigration on Average Wages

    OpenAIRE

    Declan Trott

    2011-01-01

    Much recent literature on the wage effects of immigration assumes that the return to capital, and therefore the average wage, is unaffected in the long run. If immigration is modelled as a continuous flow rather than a one-off shock, this result does not necessarily hold. A simple calibration with pre-crisis US immigration rates gives a reduction in average wages of 5%, larger than most estimates of its effect on relative wages.

  1. The effect of the behavior of an average consumer on the public debt dynamics

    Science.gov (United States)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and taxpayers in that country affects it. Starting from a model of average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.

  2. Turbulent precipitation of uranium oxalate in a vortex reactor - experimental study and modelling

    International Nuclear Information System (INIS)

    Sommer de Gelicourt, Y.

    2004-03-01

    Industrial oxalic precipitation processed in an un-baffled magnetically stirred tank, the Vortex Reactor, has been studied with uranium simulating plutonium. Modelling precipitation requires a mixing model for the continuous liquid phase and the solution of a population balance for the dispersed solid phase. Since chemical reactions are influenced by the degree of mixing at the molecular scale, which the commercial CFD code does not resolve, a sub-grid-scale model has been introduced: the finite-mode probability density functions, coupled with a model for the liquid energy spectrum. The evolution of the dispersed phase has been resolved by the quadrature method of moments, used here for the first time with experimental nucleation and growth kinetics and an aggregation kernel based on the local shear rate. The promising abilities of this local approach, which involves no fitting constant, are strengthened by the similarity between experimental results and simulations. (author)

  3. Procedure for the direct numerical simulation of turbulent flows in plane channels and annuli and its application in the development of turbulence models

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, U

    1973-10-01

    Thesis. Submitted to Technische Hochschule, Karlsruhe (West Germany). A numerical difference scheme is described to simulate three-dimensional, time-dependent, turbulent flows of incompressible fluids at high Reynolds numbers in a plane channel and in concentric annuli. Starting from the results of Deardorff, the Navier-Stokes equations, averaged over grid volumes, are integrated. For description of the subgrid-scale motion a novel model has been developed which takes into account strongly inhomogeneous turbulence and grid volumes of unequal side lengths. The premises used in the model are described and discussed. Stability criteria are established for this method and for similar difference schemes. For computation of the pressure field the appropriate Poisson equation is solved accurately, except for rounding errors, by Fast Fourier Transform. The procedure, implemented in the TURBIT-1 program, is used to simulate turbulent flows in a plane channel and in an annulus with a 5:1 ratio of radii. For both types of flow, different cases are realized with a maximum number of grid volumes of 65536. Even for rather small numbers of grid volumes the numerical results are in good agreement with experimental values. In particular, the velocity profile and the mean velocity fluctuations are computed with significantly better accuracy than in earlier direct simulations. The energy-length-scale model and the pressure-strain correlation are used as examples to show that the method may be used successfully to evaluate the parameters of turbulence models. Earlier results are reviewed and proposals for future research are made. (auth)

  4. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
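
    The barycenter approach the record discusses, together with the renormalization step it entails, looks roughly like the following sketch (a minimal illustration, not the paper's Riemannian estimator; sign alignment is needed because q and -q encode the same rotation):

      import numpy as np

      def average_rotations(quats):
          """Barycenter-style rotation average: align antipodal quaternion
          representatives, take the Euclidean mean, and renormalize back
          onto the unit sphere (the 'correction' step the paper analyzes)."""
          q = np.asarray(quats, dtype=float)
          signs = np.sign(q @ q[0])           # align q and -q representatives
          signs[signs == 0] = 1.0
          mean = (q * signs[:, None]).mean(axis=0)
          return mean / np.linalg.norm(mean)  # renormalize to a unit quaternion

      # Two rotations about z with half-angles 0.1 and 0.3:
      # the average has half-angle close to 0.2.
      a = np.array([np.cos(0.1), 0.0, 0.0, np.sin(0.1)])
      b = np.array([np.cos(0.3), 0.0, 0.0, np.sin(0.3)])
      print(average_rotations([a, b]))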

  5. Study on characteristics of the aperture-averaging factor of atmospheric scintillation in terrestrial optical wireless communication

    Science.gov (United States)

    Shen, Hong; Liu, Wen-xing; Zhou, Xue-yun; Zhou, Li-ling; Yu, Long-Kun

    2018-02-01

    In order to thoroughly understand the characteristics of the aperture-averaging effect of atmospheric scintillation in terrestrial optical wireless communication, and to provide references for the engineering design and performance evaluation of optical systems employed in the atmosphere, we have theoretically derived a general analytic expression for the aperture-averaging factor of atmospheric scintillation, and numerically investigated the characteristics of the aperture-averaging factor under different propagation conditions. The limitations of the currently used approximate formula for the aperture-averaging factor are discussed, and the results show that this formula is not applicable for small receiving apertures on non-uniform turbulence links. Numerical calculations show that the aperture-averaging factor of atmospheric scintillation follows an exponential decay model for small receiving apertures on non-uniform turbulence links, and the general expression of this model is given. The model has guiding significance for evaluating the aperture-averaging effect in terrestrial optical wireless communication.
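
    The record withholds the derived general expression, but the shape of the effect can be conveyed with one widely quoted weak-turbulence plane-wave approximation (often attributed to Andrews); the formula below is background material rather than the paper's result, and the link parameters are made up:

      import numpy as np

      def aperture_averaging_factor(d, wavelength, path_length):
          """Plane-wave weak-turbulence approximation:
          A = [1 + 1.062 * k * D**2 / (4 * L)]**(-7/6), k = 2*pi/wavelength.
          A -> 1 for a point receiver and decays as the aperture D grows."""
          k = 2 * np.pi / wavelength
          return (1.0 + 1.062 * k * d ** 2 / (4.0 * path_length)) ** (-7.0 / 6.0)

      # Hypothetical 1550 nm link over 2 km: suppression vs. aperture diameter.
      for d in (0.01, 0.05, 0.20):
          print(d, aperture_averaging_factor(d, 1550e-9, 2000.0))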

  6. Development of a hybrid 3-D hydrological model to simulate hillslopes and the regional unconfined aquifer system in Earth system models

    Science.gov (United States)

    Hazenberg, P.; Broxton, P. D.; Brunke, M.; Gochis, D.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.

    2015-12-01

    The terrestrial hydrological system, including surface and subsurface water, is an essential component of the Earth's climate system. Over the past few decades, land surface modelers have built one-dimensional (1D) models resolving the vertical flow of water through the soil column for use in Earth system models (ESMs). These models generally have a relatively coarse grid size (~25-100 km) and only account for subgrid lateral hydrological variations using simple parameterization schemes. At the same time, hydrologists have developed detailed high-resolution (~0.1-10 km grid size) three-dimensional (3D) models and have shown the importance of accounting for the vertical and lateral redistribution of surface and subsurface water for soil moisture, the surface energy balance and ecosystem dynamics on these smaller scales. However, computational constraints have limited the implementation of such high-resolution models for continental and global scale applications. The current work presents a hybrid 3D hydrological approach (h3D), in which the 1D vertical soil column model (available in many ESMs) is coupled with a high-resolution lateral flow model (h2D) to simulate subsurface flow and overland flow. h2D accounts for both local-scale hillslope and regional-scale unconfined aquifer responses (i.e., riparian zones and wetlands). This approach was shown to give results comparable to those obtained with an explicit 3D Richards model for the subsurface, while improving runtime efficiency considerably. The h3D approach is implemented for the Delaware river basin, where the Noah-MP land surface model (LSM) is used to calculate vertical energy and water exchanges with the atmosphere on a 10 km grid. Noah-MP was coupled within the WRF-Hydro infrastructure with the lateral 1 km resolution h2D model, for which the average depth-to-bedrock, hillslope width function and soil parameters were estimated from digital datasets. The ability of this h3D approach to simulate …
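
    The lateral part of such a hybrid scheme can be caricatured with a linearized groundwater step on a regular grid; this is a toy stand-in for h2D, not its implementation (parameter values, periodic boundaries and the explicit update are all simplifying assumptions):

      import numpy as np

      def lateral_flow_step(head, recharge, transmissivity=0.01, dx=1000.0,
                            dt=3600.0, storativity=0.1):
          """One explicit step of a linearized lateral groundwater model:
          S * dh/dt = T * laplacian(h) + R. np.roll makes the boundaries
          periodic, which is acceptable for a sketch."""
          lap = (np.roll(head, 1, 0) + np.roll(head, -1, 0) +
                 np.roll(head, 1, 1) + np.roll(head, -1, 1) - 4.0 * head) / dx ** 2
          return head + dt * (transmissivity * lap + recharge) / storativity

      h = np.zeros((50, 50))
      r = np.zeros_like(h)
      r[25, 25] = 1e-6                 # recharge from the 1D column [m/s]
      for _ in range(24):              # one day of hourly steps
          h = lateral_flow_step(h, r)
      print(h.max())                   # water-table mound at the recharge cell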

  7. Calibrating the simple biosphere model for Amazonian tropical forest using field and remote sensing data. I - Average calibration with field data

    Science.gov (United States)

    Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.

    1989-01-01

    Using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest, the calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within general circulation models (GCMs), representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain the field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.

  8. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    Science.gov (United States)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
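
    The decomposition used here is easy to state: aging by mixing is diagnosed as AoA minus RCTT, and the mixing efficiency measures the relative increase of AoA by mixing. A minimal numeric sketch (the numbers are invented, and the paper's formal definition of mixing efficiency, via mixing and residual-circulation mass fluxes, is more involved than this simple ratio):

      # Aging by mixing = AoA - RCTT; a simple relative measure of it.
      def mixing_diagnostics(aoa, rctt):
          aging_by_mixing = aoa - rctt
          relative_increase = aging_by_mixing / rctt
          return aging_by_mixing, relative_increase

      print(mixing_diagnostics(aoa=4.5, rctt=3.0))   # -> (1.5, 0.5)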

  9. Local and average structure of Mn- and La-substituted BiFeO3

    Science.gov (United States)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  10. Average current is better than peak current as therapeutic dosage for biphasic waveforms in a ventricular fibrillation pig model of cardiac arrest.

    Science.gov (United States)

    Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin

    2014-10-01

    Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the roles of average and peak current in determining shock outcome are still undetermined. The aim of this study was to investigate the relationship between average current, peak current and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain an average success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90 Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%) was achieved by the waveform that delivered a higher average current (14.9±2.1 A vs. 13.5±1.7 A) while peak current was unchanged. In this porcine model of VF, average current was a more adequate parameter than peak current for describing the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
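
    For reference, the averaged kinetic-energy Lagrangian that such derivations arrive at is commonly written in the LAE-α literature as follows (α is the coarse-graining length and Ω the fluid domain; this standard form is quoted for context, not taken from the record):

      \ell(u) = \frac{1}{2} \int_{\Omega} \left( |u|^{2}
                + \alpha^{2}\, |\nabla u|^{2} \right) \mathrm{d}x,
      \qquad \nabla \cdot u = 0.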

  12. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in has been determined. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods, and we also discuss various types of other models for which our methods apply.
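
    The sandwiching idea can be illustrated with a toy monotone chain: two copies driven by common random numbers, started at the top and bottom of the state space, bound any intermediate trajectory pathwise, so their ergodic averages of a monotone function bracket the estimate (a deliberately simple sketch, not the authors' construction):

      import numpy as np

      rng = np.random.default_rng(1)
      N, STEPS = 20, 100_000

      def step(x, u, p=0.45):
          """Monotone update of a reflected random walk on {0, ..., N}:
          reusing the same uniform u for two copies preserves their order."""
          return min(x + 1, N) if u < p else max(x - 1, 0)

      upper, lower = N, 0        # dominating processes start at the extremes
      up_sum = low_sum = 0.0
      for _ in range(STEPS):
          u = rng.random()
          upper, lower = step(upper, u), step(lower, u)
          up_sum += upper        # f(x) = x is monotone, so these ergodic
          low_sum += lower       # averages bracket the stationary mean
      print(low_sum / STEPS, "<= E[f(X)] <=", up_sum / STEPS)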

  13. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second-order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.
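
    As background, the pre-averaged increments underlying such estimators are typically built from the noisy observations Y as follows (standard notation from the pre-averaging literature, quoted for context; g is a weight function such as g(x) = min(x, 1-x) and k_n a window length of order sqrt(n)):

      \bar{Y}_i = \sum_{j=1}^{k_n - 1} g\!\left(\frac{j}{k_n}\right)
                  \left( Y_{i+j} - Y_{i+j-1} \right),
      \qquad k_n \sim \theta \sqrt{n}.

    Summing the squared pre-averaged increments with a suitable normalization, plus a bias correction for the noise, yields the quadratic-variation estimator whose Edgeworth expansion is studied.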

  14. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on suitable discrete schemes and solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and of flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve the resulting system of linear equations is very costly in both memory and computation. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and of its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of the iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction of computer memory and an improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling intervals in the discretization.
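
    The plumbing of such a preconditioned iterative solve is easy to demonstrate with SciPy; the sketch below swaps the paper's multigrid preconditioner for an incomplete-LU one and uses a real-valued Poisson-type test matrix instead of the frequency-domain wave operator, so it is a structural illustration only:

      import numpy as np
      from scipy.sparse import diags, eye, kron
      from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

      # Small 2D test system standing in for the average-derivative matrix.
      n = 64
      lap1d = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      a = (kron(lap1d, eye(n)) + kron(eye(n), lap1d)).tocsc()
      b = np.ones(n * n)

      # ILU factorization used as the preconditioner M (multigrid stand-in).
      ilu = spilu(a, drop_tol=1e-4)
      precond = LinearOperator(a.shape, matvec=ilu.solve)

      x, info = bicgstab(a, b, M=precond, maxiter=500)
      print("converged" if info == 0 else f"info={info}",
            "residual:", np.linalg.norm(a @ x - b))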

  15. Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models

    Science.gov (United States)

    Xu, L.

    1994-01-01

    A simple two-layer variable infiltration capacity (VIC-2L) land surface model suitable for incorporation in general circulation models (GCMs) is described. The model consists of a two-layer characterization of the soil within a GCM grid cell, and uses an aerodynamic representation of latent and sensible heat fluxes at the land surface. The effects of GCM spatial subgrid variability of soil moisture and a hydrologically realistic runoff mechanism are represented in the soil layers. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas, to estimate and validate the hydrological parameters. Surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer and fall of 1987 in central Kansas, and from the Anglo-Brazilian Amazonian Climate Observation Study (ABRACOS) in Brazil, were used to validate the model-simulated surface energy fluxes and surface temperature.
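
    The signature ingredient of VIC-type models is the variable infiltration capacity curve, which turns subgrid heterogeneity into saturation-excess runoff. A minimal sketch of that curve and the runoff it implies (the parameter values are invented and the full VIC-2L water balance has more terms):

      def vic_runoff(precip, i0, i_max, b):
          """Saturation-excess runoff from the variable-infiltration-capacity
          curve A(i) = 1 - (1 - i/i_max)**b, where A is the fraction of the
          grid cell whose local capacity is below i, and i0 is the level
          already filled before the storm. Infiltration is the exact
          integral of the unsaturated fraction (1 - A) from i0 to i0+precip."""
          i_top = min(i0 + precip, i_max)
          infiltration = (i_max / (b + 1.0)) * (
              (1.0 - i0 / i_max) ** (b + 1.0)
              - (1.0 - i_top / i_max) ** (b + 1.0)
          )
          return precip - infiltration

      # More subgrid heterogeneity (larger b) => larger saturated area => more runoff.
      for b in (0.1, 0.5, 2.0):
          print(b, round(vic_runoff(precip=20.0, i0=30.0, i_max=100.0, b=b), 2))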

  16. The consequences of time averaging for measuring temporal species turnover in the fossil record

    Science.gov (United States)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time-averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with reduced species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause the community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as the evolution of niches and dispersal, range-limit dynamics, taxon cycles, and …
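
    The first sentence's point, that simple simulations quantify how pooling lowers apparent turnover, can be reproduced with a toy stationary community model; everything below (immigration toward a fixed pool, window widths, the Bray-Curtis metric) is an illustrative assumption rather than the authors' simulation design:

      import numpy as np

      rng = np.random.default_rng(0)
      s, t, j, m = 20, 4000, 400, 0.05      # species, steps, individuals, immigration
      pool = np.full(s, 1.0 / s)            # fixed metacommunity composition

      comp = rng.multinomial(j, pool)
      series = np.empty((t, s))
      for k in range(t):
          p = (1 - m) * comp / j + m * pool # drift + immigration (mean-reverting)
          comp = rng.multinomial(j, p)
          series[k] = comp / j

      def mean_turnover(samples):
          """Mean Bray-Curtis dissimilarity between successive samples."""
          return (np.abs(np.diff(samples, axis=0)).sum(axis=1) / 2.0).mean()

      # Pooling compositions within windows mimics time averaging: turnover drops.
      for w in (1, 10, 100):
          pooled = series[: t // w * w].reshape(-1, w, s).mean(axis=1)
          print(f"window={w:4d}  successive-sample turnover={mean_turnover(pooled):.3f}")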

  17. Towards LES Models of Jets and Plumes

    Science.gov (United States)

    Webb, A. T.; Mansour, N. N.

    2000-01-01

    As pointed out by Rodi, standard integral solutions for jets and plumes developed for discharge into an infinite, quiescent ambient are difficult to extend to complex situations, particularly in the presence of boundaries such as the sea floor or the ocean surface. In such cases the assumption of similarity breaks down and it is impossible to find a suitable entrainment coefficient. The models are also incapable of describing any but the most slowly varying unsteady motions. There is therefore a need for full time-dependent modeling of the flow field, for which there are three main approaches: (1) Reynolds-averaged numerical simulation (RANS), (2) large eddy simulation (LES), and (3) direct numerical simulation (DNS). Rodi applied RANS modeling to both jets and plumes with considerable success, the test being a match with experimental data for time-averaged velocity and temperature profiles as well as turbulent kinetic energy and rms axial turbulent velocity fluctuations. This model still relies on empirical constants, some eleven in the case of the buoyant jet, and so would not be applicable to a partly laminar plume, may have limited use in the presence of boundaries, and would also be unsuitable if one is after details of the unsteady component of the flow (the turbulent eddies). At the other end of the scale, DNS modeling includes all motions down to the viscous scales. Boersma et al. have built such a model for the non-buoyant case, which also compares well with measured data for mean and turbulent velocity components. The model demonstrates its versatility by application to a laminar flow case. As its name implies, DNS directly models the Navier-Stokes equations without recourse to subgrid modeling, so for flows with a broad spectrum of motions (high Re) the cost can be prohibitive, the number of required grid points scaling with Re^(9/4) and the number of time steps with Re^(3/4). The middle road is provided by LES, whereby the Navier-Stokes equations are formally …
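
    The quoted scalings make the cost argument concrete; a short check (the proportionality constants are omitted, so only the relative magnitudes matter):

      # DNS cost scaling quoted above: grid points ~ Re^(9/4),
      # time steps ~ Re^(3/4), so total work grows roughly like Re^3.
      for re in (1e3, 1e4, 1e5):
          points, steps = re ** 2.25, re ** 0.75
          print(f"Re={re:8.0f}  points~{points:9.2e}  "
                f"steps~{steps:8.2e}  work~{points * steps:9.2e}")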

  18. Effects of average degree of network on an order–disorder transition in opinion dynamics

    International Nuclear Information System (INIS)

    Cun-Fang, Feng; Jian-Yue, Guan; Ying-Hai, Wang; Zhi-Xi, Wu

    2010-01-01

    We have investigated the influence of the average degree ⟨k⟩ of a network on the location of the order–disorder transition in opinion dynamics. For this purpose, a variant of the majority rule (VMR) model is applied to Watts–Strogatz (WS) small-world networks and Barabási–Albert (BA) scale-free networks, which may describe some non-trivial properties of social systems. Using Monte Carlo simulations, we find that the order–disorder transition point of the VMR model is greatly affected by the average degree ⟨k⟩ of the networks; a larger value of ⟨k⟩ results in a more ordered state of the system. Comparing WS networks with BA networks, we find that WS networks have better orderliness than BA networks when the average degree ⟨k⟩ is small. With the increase of ⟨k⟩, BA networks reach a more ordered state. By implementing finite-size scaling analysis, we also obtain the critical exponents β/ν, γ/ν and 1/ν for several values of the average degree ⟨k⟩. Our results may be helpful for understanding structural effects on the order–disorder phase transition in the context of the majority rule model. (general)
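
    The simulation setup can be sketched with networkx; the update below is a plain deterministic majority rule rather than the paper's specific VMR variant, and the sizes, degrees and sweep counts are arbitrary:

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(0)

      def majority_rule_sweep(g, spins):
          """One asynchronous sweep: each node adopts the majority opinion
          of its neighbors (ties broken at random)."""
          for node in rng.permutation(g.number_of_nodes()):
              nbrs = list(g.neighbors(node))
              if not nbrs:
                  continue
              total = sum(spins[v] for v in nbrs)
              spins[node] = np.sign(total) if total != 0 else rng.choice((-1, 1))
          return spins

      def order_parameter(g, sweeps=50):
          spins = rng.choice((-1, 1), size=g.number_of_nodes())
          for _ in range(sweeps):
              spins = majority_rule_sweep(g, spins)
          return abs(spins.mean())

      n = 1000
      for k in (4, 8, 16):
          ws = nx.watts_strogatz_graph(n, k, 0.1, seed=1)
          ba = nx.barabasi_albert_graph(n, k // 2, seed=1)   # mean degree ~ k
          print(f"<k>={k:2d}  WS order={order_parameter(ws):.2f}  "
                f"BA order={order_parameter(ba):.2f}")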

  19. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations carried out to study the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000 … the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among the various models, the implicit …
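
    For context on the subgrid-scale models being compared, the static Smagorinsky eddy viscosity, whose coefficient the dynamic variant computes on the fly instead of fixing, reduces to a few lines (2D strain rate for brevity; the coefficient value is a conventional choice, not the paper's):

      import numpy as np

      def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
          """Eddy viscosity of the static Smagorinsky SGS model in 2D:
          nu_t = (Cs * delta)**2 * |S|, with |S| = sqrt(2 * S_ij * S_ij)."""
          s11, s22 = dudx, dvdy
          s12 = 0.5 * (dudy + dvdx)
          s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
          return (cs * delta) ** 2 * s_mag

      # Pure shear du/dy = 1 with filter width 0.01: nu_t = (0.17 * 0.01)**2.
      print(smagorinsky_nu_t(0.0, 1.0, 0.0, 0.0, delta=0.01))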